The strange, marvelous year of generative AI

10 quirks, consequences, and questions from an amazing and unsettling year of dizzying AI advances.


Has it only been a year? Just one year since ChatGPT smacked us with all of the slings and arrows of outrageous, Terminator-grade, singularity-trumpeting, sci-fi fortune?

Just one year since we started believing that artificial intelligence really might free us from toil, and deliver us to a life of lounging like George and Jane Jetson?

Just one year since we started worrying about the AIs coming for our jobs, the jobs of our spouses, the jobs of our children, and the jobs of everyone except the CEO of OpenAI? Oh wait. Even that job wasn’t safe!

In the spirit of year-end review, let us now pause and reflect on what this year of insane AI acceleration has left in its wake. Here are 10 takeaways.

AI naysayers look stupid

One major casualty of the rise of ChatGPT is the snooty attitude of hard-core, logic-loving professors who insisted AI would never arrive. One of my professors used to sneer at the term AI. He liked to pooh-pooh talk of machines actually thinking. He liked to say that it would be decades or even centuries before the smart machines from Star Trek really showed up. Sometimes he would say AI never stood a chance.

He’s lucky he retired before ChatGPT came along. He’s lucky because these new generative AI bots make it much more difficult for the killjoy logicians to keep saying that computers can’t do more than stitch together NAND gates.

Worrying about electricity bills

A more sympathetic victim may be the planet, given the stockpiles of hydrocarbons that we’ll need to burn to keep the GPUs and TPUs fed with electricity. AIs may end carbon-based life not out of malice or righteous indignation, but out of a relentless need to burn every hydrocarbon to keep running.

A real challenge for the AI world is finding a way to unlock all of the grand opportunities without running up a yuge electricity bill. There is some hope that new chips, better algorithms, and more judicious use of layering in the networks will save a few supertankers filled with oil. Will that be enough?

A rush on AI hardware

A big challenge for a new AI project is lining up enough computational power to start learning. The demand is so high that the GPU manufacturers like Nvidia can’t keep up. The cloud providers that have GPU instances are able to rent them out at top dollar.

Will this keep up? While free markets have a way of fixing scarcity, the relentless growth and the big dreams of Silicon Valley can scale up even faster than the market can deliver. And then there are the geopolitical issues.

Doomers and Boomers square off

The list of intractable problems like the politics of the Middle East just got longer, with the addition of the debate over what AI will do to humanity. On one side are the Doomers, who see AI destroying jobs, social ties, and maybe even all of humanity. On the other side are the Boomers, who see a cornucopia of wonderful gifts being delivered to us as we lounge on our virtual Lanai.

Who has the more accurate vision of the future? The pundits and prognosticators will be chewing on this topic for months and perhaps years to come. If the answer were obvious, we would know it already. I would joke that we should just ask an AI, but the companies have already lawyered them into caution. Straight answers on hot-button topics can be hard to come by.

The dangers of AI hallucinations

Is that AI thinking? Or just running some big statistical mechanism that chooses the next token with a roll of some virtual dice? We know that the algorithm is just some stats, but is that enough to qualify as deep thought? What are the odds?

There are a number of metaphors that help explain just what the dominant algorithms are doing. Some like to call them “stochastic parrots.” Others like to think of them as a version of the statistical compression algorithms like Huffman coding. We’re still working to find the best way to explain the mixture of genius and hallucination that comes out of these functions.
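The "roll of some virtual dice" is easy to sketch: the model assigns a probability to each candidate token, and the sampler picks one in proportion to those odds. Here is a minimal, hypothetical Python sketch; the tiny vocabulary and the probabilities are invented for illustration, not taken from any real model.

```python
import random

def sample_next_token(probs, rng=random):
    """Pick the next token by weighted random choice.

    probs: dict mapping candidate tokens to probabilities
    (toy numbers here; a real LLM scores tens of thousands
    of tokens on every step).
    """
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    # random.choices rolls the "virtual dice" in proportion
    # to each token's weight.
    return rng.choices(tokens, weights=weights, k=1)[0]

# Invented distribution: "cat" is likely, "banana" is the
# occasional surprise -- or, less charitably, the hallucination.
probs = {"cat": 0.6, "dog": 0.3, "banana": 0.1}
token = sample_next_token(probs)
```

Run it a few times and you get "cat" most often, but sometimes "banana" -- which is one crude way to picture both the fluency and the occasional nonsense.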

The dangers of AI accuracy

AIs tend to behave like children. Sometimes they make things up and that’s bad enough, but the real danger is when they start speaking the unfiltered truth. Some love the idea of truth-telling AIs and imagine that they will bring more knowledge and understanding to the world. Others know that Jack Nicholson’s character in “A Few Good Men” was right about humanity when he said, “You can’t handle the truth.”

The lawyers at the AI companies must be freaking out at having to defend all of this truth-telling. I asked Google’s Bard an anodyne question about a living, breathing person—in other words, the kind of subject who could sue for libel. Bard told me in a rather snippy tone, “I am a large language model, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions, but my knowledge about this person is limited.” The lawyers keep trying to nail shut that barn door, but will the horses leap out the window?

Creatives call their lawyers

When a human says, “I learned everything I know about the topic from Bob,” it seems like a humble gesture and a kind acknowledgement. When an AI says something similar, Bob starts to wonder if he can sue for compensation.

As a writer, I’m torn. I was somewhat proud that one of my books (Disappearing Cryptography) made its way into the Books3 corpus that trained some of the smartest AIs out there. They’re like my very own offspring now. That’s cute.

But these AIs are also destroying the marketplace for my book and many others. To make matters worse, they’re doing it at a massive scale. Why shouldn’t the authors be compensated? Fair use stops being fair when it destroys the marketplace.

Will the internet remain open?

It’s one thing to pirate old books that were written under an antiquated economic model. The real question is whether anyone will bother writing a book, a magazine article, or a blog post again. Why bother if the AIs will just come along and absorb the knowledge with Borg-like efficiency?

Copyright had problems, but it nurtured a functioning marketplace of ideas that supported publishers, writers, and universities. Now all of those old business models are being washed away like a sand castle when the tide comes in. At least when the internet and search engines came along, people waved their hands and talked about advertising support or patronage. No one seems to have any clue how AIs will support new knowledge synthesis by humans.

A virtuous or vicious circle?

The first generation of AIs learned from human-created information. After those generative AI models slipped out of the labs, AI-generated content has started leaching into the internet and into the training corpus of the next generation. Some imagine that this will lead to marvelous leaps of insight. I tend to think of the feedback that comes from putting a microphone too close to the amplifier.

How high will AI fly?

Some pundits see AI as overhyped, like Pets.com before the crash. Others see it like Amazon in the early days. The early days of any discovery are always filled with speculation and AI is no different. Some say that Microsoft will outstrip Apple based on its deep investment in the technology. Others see nothing but disappointment awaiting the big dreams. Time for another box of popcorn. See you next year.

Copyright © 2023 IDG Communications, Inc.