Sciency Words: The Gartner Hype Cycle

Hello, friends!  Welcome back to Sciency Words, a special series here on Planet Pailly where we talk about the definitions and etymologies of science or science-related terms.  Today, we’re talking about:

THE GARTNER HYPE CYCLE

In my last blog post, I shared my thoughts about A.I. generated art.  It’s a new technology.  There’s a lot of hype about this new technology right now, and my suspicion is that A.I. art is getting a little more hype than it really deserves.  I feel that way, in part, because of something called the Gartner hype cycle.

Definition of the Gartner hype cycle: The Gartner hype cycle is a curvy line on a graph that purportedly models how the hype for a newly introduced technology changes over time.  First, the hype will go up—way up.  Then the hype will plummet down.  In the final phases of the cycle, hype will go slightly up again, before leveling off.

Etymology of the Gartner hype cycle: The idea that new technologies experience a “hype cycle” was first introduced in 1995 by tech analyst Jackie Fenn.  She worked for a tech consulting firm called Gartner Inc., which continues to use hype cycle charts in presentations about new and emerging technologies.

As Gartner Inc. describes it on their website, the Gartner hype cycle has five distinct phases:

Innovation Trigger: A new technology is introduced.  Hype starts to grow (and grow and grow).

Peak of Inflated Expectations: The hype surrounding this new technology gets blown way out of proportion.  Media reports make it sound like almost all the world’s problems could be solved by this new technology.  Investors on Wall Street start screaming “Buy! Buy! Buy!”

Trough of Disillusionment: The hype bubble bursts.  It becomes clear that this new technology cannot solve all the world’s problems, and those Wall Street people start screaming “Sell! Sell! Sell!” 

Slope of Enlightenment: While the new technology can’t solve all of the world’s problems, it turns out that it can solve some problems.  Interest and investment in the new technology starts to build again, based on more realistic expectations.

Plateau of Productivity: The new technology becomes normalized after finding its proper niche in society.
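
Just for fun, here’s a rough sketch of what that curvy line might look like in Python.  To be clear, this is my own toy illustration, not Gartner’s actual model; the shape and all the numbers are completely made up.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy model of the Gartner hype cycle: an early spike (the Peak of
# Inflated Expectations) plus a slow S-curve (the Slope of Enlightenment
# leading to the Plateau of Productivity).  All numbers are invented
# purely for illustration.
t = np.linspace(0, 10, 500)                  # time since the Innovation Trigger
spike = np.exp(-((t - 1.5) ** 2) / 0.5)      # the hype bubble inflating, then bursting
plateau = 0.45 / (1 + np.exp(-2 * (t - 5)))  # slow climb toward realistic expectations
hype = spike + plateau                       # the famous curvy line

plt.plot(t, hype)
plt.xlabel("Time")
plt.ylabel("Expectations (hype)")
plt.title("A toy Gartner hype cycle curve")
plt.show()
```

However you choose to draw it, the important features are all there: a peak that towers above the final plateau, with a deep trough in between.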

There are at least three major criticisms of this concept.  First, the word “cycle” is misleading.  It implies that this process is cyclical when it clearly isn’t.  Second, this concept is not good science.  How do you measure something like hype, scientifically speaking?  And third, the Gartner cycle would have you believe that every new technology will eventually find its niche.  There’s no guarantee of that.  Sometimes a new technology simply fails.  It falls into that “trough of disillusionment” and never comes back.

Despite those valid criticisms, I do think the Gartner cycle can be a helpful first approximation of what might (might!) happen with a newly introduced technology.  The cycle may not be good science.  It may not make exact predictions, and it can’t guarantee anything.  But the general idea that the hype for a new technology will go way up, then go way down, and then settle somewhere in the middle… that does seem to happen, more often than not.  There’s enough truthiness to the Gartner cycle that it’s influenced my own thinking about A.I. art, as well as my thinking on topics like cryptocurrency, commercial space flight, self-driving cars, and a bunch of other things.

And the Gartner cycle is something I’m starting to think about in my Sci-Fi writing as well.  What might happen when we invent antigravity technology?  Faster-than-light travel?  Time machines?  Would those technologies experience something like the Gartner hype cycle?  Maybe.  Or maybe not.

Again, there are no guarantees with this one. In my mind, the Gartner cycle is a useful first approximation of what might happen. Nothing more.

WANT TO LEARN MORE?

I first heard about the Gartner cycle in a video by Wendover Productions, which uses drone delivery services as an example of the Gartner hype cycle in action.  Click here to watch.

I, For One, Welcome Our New A.I. Overlords

Hello, friends!

I was originally planning to post this last week, but then I got nervous.  I’m about to say something controversial.  I’m going to take a stance on an issue, fully aware of the fact that some of you will disagree with me—some of you may vigorously disagree.  That makes me a little bit nervous, but what makes me even more nervous is that I’m not 100% sure I agree with myself.  Even after taking an extra week to think things over, I still have doubts about what I’m about to say.  Okay, here goes: I am not super worried about A.I. generated art.

There’s a long history of people freaking out over new technologies.  I grew up in the 90’s.  I only vaguely remember the first time I heard about the Internet.  What I remember is that the Internet sounded scary.  No one seemed to fully understand what this Internet thing was or how it worked, but whenever adults started talking about it, they all had strong opinions.  Strongly negative opinions, it seemed.  Things were going to be different because of the Internet.  Things were going to change.  And if there’s one constant in life, it’s that change scares people.

And I’m not immune to that fear.  A few weeks ago, I saw a news article about a newly discovered exoplanet.  Photographing exoplanets is still, in most cases, beyond our current technology, so NASA sometimes publishes “artist conceptions” of what newly discovered exoplanets might look like.  Not this time, though.  This time, they used an A.I. generated image.  The image was beautiful.  I’m sure it was scientifically accurate, too.  And it left me feeling like I’d just been punched in the gut.  Drawing or painting what exoplanets might look like?  That’s a job, and it’s a job that will probably go away soon, because it’s a job an A.I. can easily do.

So I am worried about what A.I. means for creative folks like me.  However, I’ve also seen a little too much hyperbolic fear-mongering about A.I. generated art, writing, and music.  Despite what a lot of people are saying right now, I am not worried about A.I. replacing human artists entirely.  For the purposes of this blog post, I’d like to draw a distinction between creating art and producing content.  In theory, I could have an A.I. write blog posts for me, and I could have an A.I. generate silly cartoons for my blog, too.  Would you, dear reader, find those blog posts interesting and informative?  Would those A.I. generated cartoons make you smile?  Maybe.  But I suspect the novelty of that would only last so long.

I write and draw because I have things I want to say, and I don’t know any other way to say them.  Art exists to express feelings and ideas that would otherwise be inexpressible.  At its core, art is a form of communication.  An A.I. can produce content.  It may even produce informative or entertaining content.  But if I filled this blog with A.I. generated blog posts and A.I. generated cartoons, all the thoughts and feelings I wanted to express would remain unexpressed, and the kind of human-to-human connection that art facilitates would not occur.

So for that reason, I’m not super worried about A.I. generated art.  I am a little worried, because things are going to change, and certain niches in the art world (like artist conceptions of exoplanets) may disappear.  But this isn’t the end of art any more than the Internet was the end of… whatever grownups in the 90’s thought the Internet would be the end of.  There will always be a need and a desire for human-made art, because art is fundamentally a form of self-expression.  It’s a form of communication.  A.I. can produce content, but it can never replace the human-to-human connection of genuinely human-made art.

WANT TO LEARN MORE?

My deepest concerns about A.I. art have little to do with the technology itself and more to do with the law.  Fortunately, the YouTube channel Legal Eagle just did an episode about what the law has to say about A.I. generated art.  Click here to watch!

And YouTuber Tom Scott recently did an episode about sigmoid curves and what they have to do with artificial intelligence.  Click here for that.

Sciency Words: Artificial Intelligence

Sciency Words: (proper noun) a special series here on Planet Pailly focusing on the definitions and etymologies of science or science-related terms.  Today’s Sciency Word is:

ARTIFICIAL INTELLIGENCE

In 1955, American computer scientists John McCarthy and Marvin Minsky sent out an extraordinary proposal:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire.  The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.

McCarthy and Minsky go on to write that machines can be made to learn, solve problems for themselves, and “improve themselves.”  They also claim that “significant advancement” can be made toward these goals if a group of experts were to “work on it together for a summer.”

Ah, such optimism!

That 1955 proposal is the first documented usage of the term “artificial intelligence.”  Apparently McCarthy initially wanted to use the term “automata studies,” but even among scientists and engineers, “automata studies” didn’t sound sexy enough.  So McCarthy coined the term “artificial intelligence” and ran with that instead.

According to this article from the Science History Institute: “The name implied that human consciousness could be defined and replicated in a computer program […].”  Whether or not that’s true—whether or not computers really can reproduce human-style consciousness—is a topic of ongoing debate.  Regardless, McCarthy’s new term got the attention he wanted, and the 1956 conference at Dartmouth was a success.

However, it turns out it would take more than “a summer” to trigger the robot apocalypse.  Still, the 1956 Dartmouth Conference started something important, and today, we are living with the consequences!

Oh No! It’s the Internet!

Here in the U.S., we’re about to celebrate my favorite holiday: Thanksgiving.  It’s a holiday all about good food and spending time with good friends, and… that’s basically it.  And that’s why I love it.  No need to agonize over finding just the right gift, or anything like that.  Just relax and enjoy being human.

This year, I am most thankful for the Internet.  Now you might be thinking: how could anyone be thankful for the Internet?  There’s so much online harassment going on.  Political disinformation campaigns are plentiful.  People are being cheated and scammed, and faceless corporations are collecting personal data on each and every one of us.

Yes, the Internet can be a scary place.  Without a doubt, some bad things have happened to me online, and I know far worse things have happened to other people.  But as a wise woman once told me: nothing good in life comes without risk or without sacrifice.  And at least in my personal experience, the good stuff on the Internet far outweighs the bad.

The Internet has fed my passion for writing and art.  It’s fed my passion for science and space exploration.  It’s given me access to so many resources, and I’ve read so much original research (unfiltered by the popular press) thanks to the Internet.  I’ve learned so much, and I’ve been exposed to perspectives and worldviews that I, as someone living in one specific region of the United States, never would have encountered otherwise.  And the Internet has left me with an awareness that, despite all this knowledge I’ve gained, I still have so much more to learn.

And most importantly of all, I’ve made new friends here on the Internet.  I may not have met you in person, but I love you all the same!  I know some people would take a dim view of me for claiming my online friends count as “real” friends, but it’s true.  I really do consider many of you to be good friends.  For that, I am very thankful.

Okay wait… do I really want to share that in a blog post…?

Do You Miss 2D Television?

A few years back, the TV station I work for upgraded to HD.  It cost a lot of money and was a lot of hard work, and everyone was more than happy when it was finished.  At the time, I joked that our next upgrade would be holograms.  Turns out I was right.

According to recent reports, researchers at MIT have developed a new holographic television.  With current technology, the light shining from your TV screen looks the same in every direction.  As I understand it, a holographic television alters light’s wavelength at different angles, creating a 3D image without the aid of 3D glasses.

A pixel on a regular television looks the same no matter how you see it.

A pixel on a holographic television appears to be a different color depending on your point of view, creating the illusion of a 3D image.
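
If it helps, here’s that difference expressed as a toy Python sketch.  Again, this is just my own illustration of the concept; the function names and numbers are mine, not anything from the actual MIT research.

```python
# A toy illustration of the difference (my own invention, not MIT's design).
# A regular pixel ignores your viewing angle; a "holographic" pixel's
# apparent color depends on it, so each eye sees a slightly different image.

def regular_pixel(angle_degrees: float) -> tuple[int, int, int]:
    """A conventional pixel: the same RGB color from every viewing angle."""
    return (200, 50, 50)

def holographic_pixel(angle_degrees: float) -> tuple[int, int, int]:
    """A toy view-dependent pixel: apparent color shifts with viewing angle."""
    shift = int(angle_degrees * 5)  # a small change in angle shifts the color a little
    return (200 - shift, 50, 50 + shift)

# Your two eyes view the screen from slightly different angles:
print(regular_pixel(-3), regular_pixel(3))          # identical -> flat image
print(holographic_pixel(-3), holographic_pixel(3))  # different -> illusion of depth
```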

I have a bad feeling that when this product comes out, all my favorite movies will suddenly look as old-fashioned as black-and-white silent films.  I already feel like a crotchety grandpa shouting, “In my day, televisions showed us two-dimensional pictures, and we liked it!”

On the bright side, the computer chip that makes MIT’s holographic TV work costs about $10.  That’s right: $10.  So at least when these new televisions hit the market, we can expect them to be affordable.

So what do you think?  Are you going to buy a holographic TV?

P.S.: I am not a scientist or engineer.  I’ve done my best to explain how the holographic TV works based on what I’ve read so far, but if you know more about how they work, please tell us in the comments below!