[Image: A robot hand picking up a globe that looks like Earth on fire]

AI is in headline after headline, most of which focus on the risks that are supposed to be keeping us up at night. The very people who developed these technologies and brought them to market are now warning us that their creations threaten to wreak… well, they can’t tell us what, exactly, but we should apparently be Very Very Scared.

Maybe they’re right! But it’s hard for me to get too worked up about the unspecified, hypothetical risks of AI at a time when huge swaths of North America are choked by fire and smoke. The risks of climate change are no longer hypothetical: The best-case scenario now is that we somehow avert the very worst-case scenario.

Faced with this existential, certain and unavoidable terror, no wonder we prefer to focus on a risk that is less tangible, more distant, and that we’d like to believe is still within our power to avert. Preventing the robot takeover is a control fantasy: a place for us to put our hope that humanity can yet save itself.

 

Why AI companies drive AI alarm

But it’s no accident that this fantasy is brought to us by the very companies that are already making millions and billions from the AI moment. As Douglas Rushkoff and others have pointed out, these headlines exaggerate the value and significance of today’s AI, making it seem vastly more powerful than it actually is.

The companies selling today’s AI tools have more to gain than to lose by portraying their products as so powerful that they need government regulation. Everything we’ve learned over the past two decades suggests that meaningful tech regulation is far beyond the technical capacity or political will of government officials—and that was when they were asked to regulate technologies they could at least hope to understand!

Two decades of ineffective tech regulation and five decades of insufficient environmental regulation leave me with little expectation that government will somehow rescue us from the Big Bad that AI companies have not-quite-apologized for unleashing. Everything about late-stage global capitalism is set up to drive a competitive race to the biggest, most powerful, most profitable AI we can achieve.

In dangling the possibility that this AI will also destroy us, corporate AI warnings not only distract us from the far more certain and immediate threat of climate disaster, but also from the far more certain and immediate threats posed by widespread AI adoption.

The Terminator might be waiting in the wings, but thousands of employees have already lost their jobs to AI. A superpowered AI might decide to sacrifice humanity in favor of paperclips, but AI-fabricated information is already fooling everyone from professors to lawyers. A future AI might turn us all into human batteries, but today’s AI is already replicating and extending racial bias in everything from healthcare to (surprise!) policing.

 

Robots vs. capitalism

I have little faith that the business people running AI companies are either interested in or capable of a coordinated, systemic response to the problems created by the AI of today. Indeed, it feels less likely that companies will come up with a deliberate solution to AI risks than that they’ll stumble onto an accidental solution to climate change: I like to imagine that an omniscient AI might just turn out to be better than the humans who created it, and decide to save us from ourselves.

We’ve already given the AI all the tools it needs to do the job. After all, if Vladimir Putin can swing a US election with a warehouse full of half-assed misinformation generators, a superpowered AI can surely lead us towards voting for policies and policy-makers that more meaningfully address climate change. And if a bunch of marketers can use social and digital media to make us buy everything from bologna face masks to death-branded water, a benevolent AI might steer us towards the consumption decisions (or better yet, the non-consumption decisions) that are necessary for our planet’s survival. It’s unlikely we’d even experience much pain from this kind of mass manipulation: Everything about the current media and marketing environment shows that our awareness of manipulation is remarkably low, and our tolerance for manipulation is remarkably high.

That’s the close-enough-to-utopian scenario that lets me sleep at night, but I don’t know that we’re going to get that lucky. A far more likely and immediate scenario is that a non-omniscient AI will be used by self-interested humans to spread misinformation and manipulate elections—in ways that are far more likely to serve corporate profits than planetary well-being.

Regulators could actually do something about those risks, with boring stuff like campaign spending laws, ballot access measures and regulations that hold social media companies accountable if their platforms are used to spread misinformation. But why hold humans accountable today, if we can instead worry about the robots who are coming for us tomorrow?

And it’s not just regulators who are getting distracted by AI fear-mongering. By keeping us focused on the distant and hypothetical risks of robot takeover, AI companies are giving too many people and organizations license to disengage. If AI is the next existential threat, the thinking goes, then people of good conscience can and should refrain from using it.

 

To fix AI, start using AI

But people of good conscience are exactly who we need interacting with AI, right now. Every day, AI models learn from their interactions with the humans who put their hands on the keyboard.

So far, they’re learning from people who want to get rich, write better ads, or generate more marketing copy. Even well-intentioned explorers are teaching the AIs some dangerous lessons: By testing how far AIs can be pushed towards white supremacy, towards violence, towards their shadow selves, we are showing the AIs not just the nature of human curiosity, but especially human curiosity about evil.

It’s hard to know how these lessons will be incorporated into the way tomorrow’s AIs work; even AI leaders admit to growing uncertainty about how these systems really work and learn. But let’s not use that uncertainty as a reason to desert the field of battle, or to delude ourselves with some fantasy that we can destroy the monster we’ve already unleashed.

Instead, let’s see if we can tame that monster, by showing it the very best of what humans can do, instead of the very worst. Let’s throw ourselves into AI interactions that are characterized by courtesy, by creativity, by generosity and by kindness. Let us fill the AI’s experience of the world not with marketers and profit-seekers but with idealists, activists and artists.

Let us introduce AI to the very best of humanity, and hope that AI someday, somehow saves humanity from itself.

This post was originally featured in the Thrive at Work newsletter.