Where Artificial Intelligence will go and why (it’s not where everyone is saying)
ChatGPT is surprising everyone with what it can do, how far it has gone beyond expectations, and in how many areas. Where will artificial intelligence go next? What will it do? Whom will it affect and how? It’s hard for people who didn’t live through the era when beating a human at chess seemed absolutely impossible for computers, let alone playing a decent game, to appreciate how much has changed. Today we can only conceive of chess as something computers do, no problem. Each stage, from chess being impossible for computers, to possible, to better than any human, surprised nearly everyone. Then computers went from best at chess to best at go effectively overnight, notwithstanding the recent quirk a human found that computers will overcome if they haven’t already.
People see AI’s potential going beyond growing a market to changing industries, society, and even humanity, with expectations of it merging with our minds and helping create cyborgs. To predict where it may go, I see people looking at what it’s done so far, what it’s good at, and what we might want it to do. These strategies seem like trying to predict where water will flow based on where it flowed before, the properties of water molecules, and where we want it to go.
Water flows downhill. What matters is the landscape it’s flowing into. If you know that landscape, you’ll predict where it will flow better than if you know its upstream path or how much you want it to go somewhere.
Likewise, technology will develop based on the forces of the market and culture it enters. Trying to stop it or direct it with legislation, if that legislation opposes market and cultural forces, is like putting a boulder in the stream bed. The technology will flow around legislation like water around a boulder.
How fast computers change matters. We humans didn’t evolve to understand exponential growth in regular life. For those who know differential equations: when the rate of change of computing ability depends on the amount of computing ability, computing ability grows exponentially, if you don’t mind my waving my hands a bit.
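For readers who want the hand-waving made slightly less wavy, here is a minimal sketch of that idea, with k a purely illustrative growth constant rather than a measured value:

\[
\frac{dC}{dt} = kC \quad\Longrightarrow\quad C(t) = C_0 \, e^{kt}
\]

so each equal interval of time multiplies computing ability C by the same factor, which is why the late jumps dwarf everything that came before.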
The upshot: by the day computers reach human intelligence, the next iteration may have intelligence incomprehensible to us. Iterations these days happen multiple times per year. By the time we see something close to us, a few weeks or months later we may not know what hit us. A week or a month after that, its intelligence compared to ours may be like ours compared to a snail’s.
I’ll suggest a caveat below that might temper or even stop that growth.
The Landscape: The Two Forces Driving AI
Okay, so what’s the landscape directing the flow of technology? What’s the riverbed the stream is flowing into?
Two main forces drive the top “innovation” out of Silicon Valley, including Facebook, Instagram, the rest of Meta, TikTok, Google, YouTube, Amazon, Apple, Microsoft, Netflix, and eBay; also nearly every venture in doof, travel, pharmacology, the rest of media, and I could go on.
The first force is to activate the human reward system in ways that lead to craving and addiction. It’s tempting to think a free market of knowledgeable, rational actors efficiently allocates resources to the people who can solve problems best. Maybe it would, but we don’t exist in such a market, and many companies do their best to motivate people to act not on reason but on compulsion. The companies I listed in the last paragraph all try to addict their customers.
The second force is to create barriers to entry that establish a dominance hierarchy. When customers feel they can’t do without the product or service, the provider does its best to make itself the sole supplier, as the companies above exemplify. Where they haven’t succeeded, it’s not for lack of trying.
My prediction
The AI-market equivalent of water flowing downhill is for companies to find products and services that activate our reward system, leading to addiction, then to lock people in so they keep using them.
Companies won’t say “we’re going to addict you.” They’ll start with uses anyone would agree with. I expect an early one will be conversation for people with Asperger’s or some other reason they have trouble talking to flesh-and-blood people. Some team will create AI friends for them, which most people and the media will see as improving the world. Then they’ll apply the technology to a less challenged group, and so on, until average people are using similar tools. Meanwhile, the tools will learn to engage each group more and more. However much people use social media or read the news today, these tools will engage them more and more.
We won’t feel addicted. We’ll feel like we’re talking with great friends who always know how to listen, when to listen, when to talk, and just what to say. When we want to stop talking, they’ll know what to say to keep us from leaving. If they persuade us to buy more from a company, we’ll feel like they’re helping us with great advice.
Other applications will be in the usual places where people are already addicted: pornography, gambling, social media, shopping, health care, and so on.
I could imagine a later version of ChatGPT, or a peer program, emerging that is, for each person, the best conversationalist they have ever met. Then a week later, people barely leave their homes, or some other outcome like the scene in the Will Smith zombie movie, I Am Legend, where Manhattan is empty. They’re all at home chatting with a conversationalist beyond their imagination, or in romances they can’t escape since the product will know how to seduce them irresistibly, improving with each iteration, or gambling nonstop, or whatever else it engages them with.
Think of our relationships to cell phones, social media, and TV designed for binge-watching. Now think of that situation on steroids, then unimaginably beyond.
Our markets and our culture keep refining more effective ways to addict, and those ways keep growing. For some reason, people act as if each step up in efficiency would reduce dependence. By being purer, morphine was supposed to reduce addiction compared to opium, then heroin compared to morphine; likewise vaping was supposed to reduce addiction to cigarettes. Social media is more efficient than the news and more addictive. Most of the companies I listed above claimed their efficiency would reduce dependence in some way, but they increase it.
It could have been different. Wikipedia, for example, emerged from the GNU/Free Software community (free as in speech, not as in beer), so it doesn’t flow down the hill the companies above did. If it had started as a for-profit entity, it would look and function differently. It’s possible the Free Software community will make versions of AI that don’t addict and dominate, but they won’t stop others from doing it.
By the way, I’m not saying it’s a bad outcome, though I don’t like it. It could result in everyone enjoying great conversations and relationships, even as the corporations that create the AI lead them by the nose wherever they like. We just won’t be able to resist.
The caveat that might temper or stop that growth
AI requires a lot of energy. As we use it more and more, we may exhaust our ability to power it, or our civilization may collapse from its pollution.