No, the AI Sky Isn’t Falling

Pedro Domingos
4 min read · Apr 20, 2023


As if AI alarmism weren’t already running far ahead of reality, a group of AI researchers and tech personalities, including Elon Musk, Steve Wozniak, and deep-learning pioneer Yoshua Bengio, has now written an open letter fanning the flames. The letter calls for a six-month moratorium on training powerful AI systems because of the supposedly imminent danger they pose. MIT professor Max Tegmark, one of the letter’s organizers, says that the ongoing competition to improve AI is “more of a suicide race”. The only problem is that the alleged risks are unrealistic, the assumed state of the art in AI laughably off-base, and the proposed measures quite harmful.

Let’s calm down for a moment and look at what AI really is, where it might be headed, and what (if anything) to do about it.

The least of the letter writers’ fears is that AI will “flood our information channels with propaganda and untruth”. But those channels are already flooded, courtesy of our fellow humans, and slowing down AI development will first and foremost hinder our ability to automatically detect and stop misinformation, which is the only scalable option. The fear of AI-generated falsehoods also rests on the pernicious assumption that humans are dumb and will naively keep taking AI-produced untruths at face value. But anyone who has played with ChatGPT already knows better, and this will improve rapidly as people gain more experience interacting with AI.

The letter writers also fear that AI will “automate away all the jobs”, as if this were remotely realistic in the foreseeable future. This ignores the experience of the last 200 years, in which automation has systematically created more jobs than it has destroyed. For most occupations, AI will automate some tasks but not others, making workers more productive and the work less routine. Lowering the cost of what AI can do will increase the demand for its complements and leave more money in consumers’ pockets, which they can then spend on other things. AI will create many entirely new occupations, as previous waves of automation have (e.g., app developer). All of these effects increase the demand for labor rather than lowering it. The AI revolution is already well underway, yet the unemployment rate is the lowest in memory. We need more AI, not less, to improve productivity and grow the economy.

The jobs AIpocalypse is just the beginning, however. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” asks the letter. “Should we risk loss of control of our civilization?” This falls into the basic fallacy of confusing AI with human intelligence. (AI researchers, of all people, should know better.) AIs have no desire or ability to take control of our civilization. They are just very complex optimization systems, capable only of trying to reach the goals we set for them in computer code. Achieving those goals may require enormous, even exponentially growing, amounts of computation, but checking the results is easy. (Curing cancer is hard. Checking whether a treatment worked isn’t.) Most AI systems look nothing like humans and have no desires, emotions, consciousness, or will. They just do useful things, and the better they do them, the better. Why hinder their development? The best way to make them safer is to make them more intelligent.
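To make the optimization point concrete, here is a minimal sketch in Python; the function names and the toy objective are illustrative assumptions, not any real system’s code. The optimizer can only pursue whatever objective a human has written down, and while the search for a good answer may be expensive, checking a proposed answer is a single cheap evaluation.

```python
import random

# A minimal, illustrative sketch (hypothetical names, toy objective):
# an "AI" here is just an optimizer for a goal a human wrote in code.

def objective(x: float) -> float:
    """The system's only goal is whatever we code here; it has no others."""
    return -(x - 3.0) ** 2  # maximized at x = 3.0

def search(steps: int = 10_000) -> float:
    """The hard part: searching a large space for a high-scoring solution."""
    best_x, best_score = 0.0, objective(0.0)
    for _ in range(steps):
        candidate = best_x + random.uniform(-1.0, 1.0)
        score = objective(candidate)
        if score > best_score:
            best_x, best_score = candidate, score
    return best_x

def verify(x: float, tolerance: float = 1e-2) -> bool:
    """The easy part: checking a proposed solution is one cheap evaluation."""
    return objective(x) > -tolerance

solution = search()
print(solution, verify(solution))  # we, not the optimizer, judge the result
```

The point of the sketch is the asymmetry: however much computation the search consumes, the final check stays cheap, which is exactly what keeps humans in charge of accepting or rejecting the result.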

But the AI alarmists’ solution to all these hypothetical problems is, you guessed it, extensive new regulation. Not just of AI but, for good measure, of “large pools of computational capability” (presumably the entire cloud). Governments should intervene in AI and direct its development. Why all this would do more good than harm is left completely unaddressed by my wise colleagues. They cite past moratoria in support of theirs, all of which were in biology and medicine, where the stakes are entirely different. They refer to a “widely-endorsed” set of AI principles, most of whose signatories are in fact AI researchers. They back their claim that AI’s “profound risks” have been “shown by extensive research” with a short list of controversial books and ideologically driven articles rather than serious scientific studies. And they ignore that even if a near-term worldwide moratorium on some types of AI research were a good idea, it would be a completely impractical one, leading many to wonder what the actual purpose of the letter could possibly be.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” claims the letter. Good thing we didn’t do the same with fire, the wheel, the printing press, steam engines, electricity, cars, computers, and countless other technologies, because if we had, we’d still be living in caves. AI leaders like Yann LeCun and Andrew Ng have publicly opposed the idea of an AI moratorium, and I’d like to add my voice to theirs. Before we start being told that “science says AI must be regulated”, the public deserves to know, at a minimum, that there are two sides to this debate. And before we start panicking about AI’s hypothetical dangers, maybe we should consider the damage that such a panic would do.

Written by Pedro Domingos, professor of computer science at the University of Washington and author of “The Master Algorithm”. pedrodomingos.org
