AI’s Greatest Risk Is Not Having Enough of It

Pedro Domingos
12 min read · Jan 8, 2024


AI and Extinction

Does artificial intelligence risk leading to the extinction of humanity? Some say yes, including famous names in tech like Elon Musk and AI pioneers like Yoshua Bengio. According to this theory, if we create machines that are more intelligent than us, they could escape our control and decide to exterminate us. And recent developments like ChatGPT have convinced them that this is not a remote danger, but something that is fast approaching and requires urgent intervention from governments. Before anything else, they say, we need to limit and slow down the progress of AI, or risk losing control of the planet.

Before we succumb to hysteria, however, let’s take a closer look at the assumptions underlying this apocalyptic scenario.

A well-known error in AI is the so-called “homunculus fallacy”, the notion that within each AI system there is a “mini-man” with all the characteristics of human beings: emotions, desires, prejudices, intentions, motivations, autonomy, even consciousness. This error is natural and very common, especially among people who are not AI experts, because humans reason by analogy, and when they see a system behaving intelligently they immediately project onto it the other characteristics of the only intelligent entities they know: humans and other animals. The error is compounded by Hollywood, in whose films AIs and robots invariably appear as thinly disguised human beings, because that is the easiest way to make those films interesting to us.

But it’s a complete error. AI is extremely different from human intelligence, and without understanding this it’s difficult to understand what its real risks are. AI algorithms are just that: algorithms, i.e., sequences of instructions given by us that the computer executes without deviating an inch. ChatGPT, for example, does nothing more than predict the next word in a text based on the words so far, and thus, word by word, create a new text based on the text it is given.
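
To make this next-word prediction concrete, here is a deliberately toy sketch (a tiny bigram model, nothing like ChatGPT’s actual architecture or scale) that repeatedly picks the most likely next word given the words so far; the corpus and names are invented for illustration.

```python
# Toy next-word prediction: count which word follows which in a tiny corpus,
# then generate text word by word, always choosing the most frequent follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(start, length=6):
    """Greedy decoding: extend the text one word at a time."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

ChatGPT does the same thing in spirit, but with a neural network trained on vastly more text and a probability assigned to every word in its vocabulary rather than simple follower counts.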

The alarmism around AI focuses in particular on machine learning algorithms (my specialty for the last thirty years). In machine learning, the computer develops its own algorithms from data. It can, for example, learn to diagnose lung cancer from X-rays and the corresponding diagnoses produced by doctors, or it can, like ChatGPT, learn to fill in the blanks in a text based on similar texts. The source of the alarm is the idea that when a computer starts learning on its own, it is impossible to predict what it will learn, and it may start doing arbitrarily evil things. But this is an illusion. Machine learning systems, like many others, are governed by fixed objectives, such as maximizing engagement (as in social networks) or minimizing the error rate in predictions (as in ChatGPT). The system conducts a wide search to find the best model according to these criteria, and at each step the models that do not constitute improvements are discarded. It is physically and mathematically impossible for the system to decide on its own to start evolving in other directions, for example to satisfy its own desires (which it does not have). Machine learning is similar to selective breeding, except that instead of producing domesticated plants or animals it produces algorithms. No one lies awake at night fearing that their dog will kill them, yet dogs are just domesticated wolves. Why then fear being killed by a robot, whose evolution we controlled far more completely?
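
As a rough illustration of this fixed-objective search (a sketch on invented data, not any particular production system), the loop below mutates a one-parameter model at random and keeps a change only if it lowers a fixed error measure; by construction it cannot drift toward any other goal.

```python
# Fixed-objective search: propose random changes to a model and discard any
# that do not improve the (fixed) error on the data.
import random

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up (x, y) pairs, roughly y = 2x

def error(w):
    """The fixed objective: mean squared error of the model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0
for _ in range(1000):
    candidate = w + random.gauss(0, 0.1)  # small random mutation
    if error(candidate) < error(w):       # keep it only if it is an improvement
        w = candidate

print(f"learned weight: {w:.2f}, error: {error(w):.4f}")
```

This is the selective-breeding analogy in code: variation plus selection against a criterion we chose, and nothing else.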

But, alarmists insist, what if AI decides that the best way to achieve its goals is to exterminate humanity? The problem here would not be AI’s malice but its incompetence, and the solution to incompetence is more intelligence, not less! What if, despite everything, AI goes down the wrong path? Being much more intelligent than us, won’t it be impossible for us to detect it? No. Technically, AI is the subfield of computer science that deals with solving intractable problems, i.e., problems that, in the worst case, take exponential time to solve, but whose solutions can be quickly verified. In other words, even if AIs are exponentially more powerful than us, it is still easy to verify that they have not strayed from the right path. For example, one of AI’s greatest potential applications is curing cancer. The smarter a system for this purpose is, the better. Do we really want to limit its intelligence on the basis that it will magically decide to start killing patients, and, even more absurdly, that we won’t notice?
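
To see what “easy to verify” means here, consider subset-sum, a classic intractable problem: finding a subset of numbers that adds up to a target can take exponential time in the worst case, but checking a proposed answer is trivial. The numbers below are arbitrary examples.

```python
# Verifying a proposed subset-sum solution takes linear time, even though
# finding one can take exponential time in the worst case.
def verify_subset_sum(numbers, target, proposed):
    """Check that `proposed` is drawn from `numbers` and sums to `target`."""
    pool = list(numbers)
    for n in proposed:
        if n not in pool:
            return False
        pool.remove(n)
    return sum(proposed) == target

numbers = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(numbers, 9, [4, 5]))   # True: quick to confirm
print(verify_subset_sum(numbers, 9, [3, 34]))  # False: quick to reject
```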

Even more laughable is the notion that a takeover by rogue AIs is an imminent danger. ChatGPT’s real intelligence is much lower than its apparent intelligence, and is still far from ours; the surprisingly good results it often produces derive mainly from having processed an unimaginable amount of data. Since the beginnings of AI in the 1950s there has always been a marked tendency, even among experts, to grossly underestimate the difficulty of the problem. Intelligence seems easy because it is largely subconscious, but evolving it took billions of years and the human brain is the most complex entity in the universe. And even if a malevolent human-level AI emerged tomorrow, it would have little chance against us. Our brains have greater total computational capacity, our energy needs are vastly lower, our mobility and dexterity are much better, we learn more from much less data, we have millions of years of experience fighting adversaries, etc. The risk of AI exterminating humanity belongs to science fiction films, not reality.

AI and Disinformation

But the supposed risks of AI are not limited to existential ones. Another one that has received a disproportionate amount of attention is that AI will lead to a massive increase in online disinformation, humans will be easily manipulated by it, and this will be the end of democracy. The reality shown by several studies, however, is that online disinformation makes little difference. Voters are far more influenced by mass media, for example. Further, the amount of online disinformation is already vast, and the main limit to its penetration is our attention, not the production capacity that AI would increase. But, alarmists say, AI will be able to manipulate humans much better, and therein lies the danger. Fortunately, humans are less stupid than intellectuals think. In the fifties the big worry was that TV ads would easily manipulate us, but people quickly learned to discount them, and the same is already starting to happen with ChatGPT. Everyone knows that it often lies, and manipulating it has turned into a pastime with often hilarious results. At least for now, humans are much better at manipulating AI than the other way around.

Most importantly, however, the best way to combat online disinformation is precisely with AI. The notion that AI will generate disinformation on a massive scale is a hypothesis about the future; AI’s essential role in detecting it is a reality today. The main problem is that in many cases AI is not yet sophisticated enough to detect disinformation, and so it passes through automatic filters in greater quantities than human operators can handle. The solution, once again, is more intelligence, not less.

AI and Bias

Another supposed major risk of AI is that of “perpetuating biases”. AI is supposedly an instrument of oppression and discrimination, and an obstacle to justice and equity. Every day seems to bring new examples of AI systems treating men and women, or whites and blacks, differently. The purveyors of these examples, however, systematically confuse correlation with causality, attributing any and all differences in results to bias. There are typically good reasons for these differences. For example, they reflect reality: if there are more male engineers, do we want AI systems to communicate that fact to us, or do we want them to build an alternative Orwellian world in which the number of engineers of both sexes is equal? Or the differences reflect properties of machine learning that have nothing to do with bias. For example, if 90% of the population is of one race and 10% of another, it’s obvious that the model learned for the first will probably be better than the one learned for the second. Discriminating against one to compensate for the other, as some argue, would be perverse; the aim should be to learn the best possible model of each individual, regardless of their sex or race.

The reality is that AI is an unparalleled opportunity to combat bias and discrimination. Learning algorithms are mathematically incapable of biases related to sex or race, because they don’t even know they exist. (The data may be biased, but the solution here is to obtain faithful samples of the population, a well-known problem in statistics; and to use as data the real values of variables, not values hypothesized by humans.) In Thinking, Fast and Slow, Daniel Kahneman devotes a whole chapter to the fact that algorithms make better decisions than human experts, and quotes: “There is no controversy in social science which shows such a large body of qualitatively diverse studies coming out so uniformly in the same direction as this one.”

AI and Privacy

Another great evil of AI is supposedly its invasion of privacy. Using learning algorithms, big tech companies like Google and Meta infer our interests from our online behavior, and they use this to choose the ads they show us. This is allegedly an immoral exploitation of users. “If the product is free, you are the product,” goes the saying. The European Union attacked these and other uses of data with the General Data Protection Regulation (GDPR), a law it proudly considers a model for the entire world to follow. But this idea that targeted advertising is harmful to users misunderstands economics, technology and psychology all at once.

First of all, targeted advertising is not a zero-sum game. The fact that, for example, Google does not charge us anything for Web search does not mean that it has to extract money from us in some other way. On the contrary, everyone benefits from targeted advertising: Google, which is paid by advertisers; advertisers, who earn when users buy their products; content producers, who are indirectly paid by advertisers through Google; and users, who enjoy a valuable service (search) for free. And of all these, the biggest beneficiaries are by far the users. Google’s revenue per user is in the hundreds of dollars per year. The value of its search engine to the average user, according to an MIT study, is in the tens of thousands of dollars per year — a hundred times greater!

Furthermore, I, as a user, greatly prefer to see relevant ads rather than wasting time on generic ads unrelated to my interests. The biggest problem with systems like Google’s is that their ability to predict which ads will interest me is still very limited, partly because the data Google has about me is limited, and partly because learning algorithms could be much better. Once again, the problem is a lack of intelligence, not an excess of it. And our preference as users should be to provide Google with more and better data about us, not to “protect our privacy”. This idea that we have to protect our data from malevolent companies is similar to the idea that we have to keep our money under the mattress so that banks don’t steal it. Our data should be invested for our benefit, just like our money. The preoccupation with privacy that older generations have is not shared by the new ones, and in the long term the idea of hiding our data from companies will seem as ridiculous as keeping our money under the mattress.

But the GDPR, far from understanding this, wastes users’ time authorizing cookies and more cookies, without any visible benefits. Worse still, it creates unnecessary costs and obstacles for companies, and in particular for startups, which are less able to handle them. The EU’s main concern should be to encourage these companies and bring the enormous economic gains of advanced technology to Europe, but on the contrary, it keeps making their lives harder. One of the main requirements of the GDPR is that data can only be used for the purposes for which it was originally collected, which destroys most of its value in one fell swoop. Almost all of the most innovative and important uses of machine learning involve precisely using data for purposes that were not anticipated by anyone, such as using Web searches to detect the onset of an epidemic (not to mention all the great scientific discoveries that were made by accident, like penicillin and X-rays).

Not satisfied with this, the GDPR also requires algorithms to provide explanations for their decisions, apparently unaware that the most accurate learning algorithms, like deep learning, are typically those that are incapable of doing so. (Some, like ChatGPT, can make stuff up, but that’s a different matter.) I would rather be diagnosed by a system that is 99% accurate and provides no explanations than by one that is 90% accurate and provides them. Other people may have the opposite preference, but the decision should be made by each one of us, not arbitrarily imposed by law.

And as if this weren’t enough, the GDPR also invents a new right: the right to be forgotten, that is, the right to demand that data about us be erased. It sounds good, but my right to be forgotten infringes on your right to remember, and what kind of intelligence can computers have if their memory can be arbitrarily maimed?

At this point, with both users and companies screaming bloody murder, we would expect the EU to realize its mistake and repeal the GDPR, but instead, adding insult to injury, it passed the Digital Markets Act and the Digital Services Act. And these will soon be followed by the AI Act, which has the dubious honor of being one of the worst-conceived laws in EU history (and deserves an article of its own). Why some in the US and other countries think this series of own goals is an example to emulate completely mystifies me.

AI and Unemployment

And the list of AI’s imaginary dangers goes on, but I’ll end with one of the most salient: the wave of job losses that it will supposedly cause. According to this theory, the Industrial Revolution automated manual work, AI will quickly automate intellectual work, and when that happens there will be nothing left for humans to do. But the Industrial Revolution did not lead to large-scale unemployment: on the contrary, it created many more jobs than it eliminated, and the same will happen with AI. Each job consists of multiple tasks, and typically AI is capable of performing some but not others. What matters is that each worker understand which parts of their work they can use AI for, and by doing so they will increase their productivity and wind up earning more, not less. It’s not man vs. machine, it’s man with machine vs. man without, and the former will inevitably win.

Economically, the main effect of AI is to lower the cost of intelligence. And when the cost decreases, demand increases. With AI, things like medicine, law, journalism, programming and education will be done on a vastly larger scale than today, with all the benefits and additional jobs that entails. When the price of a product falls, the value of complementary products rises; and the great and indispensable complement of AI is precisely human intelligence.

But, say the alarmists, what about workers who are unable to retrain for new occupations? Surely these will suffer? No. When prices fall due to automation — and a new form of automation is what AI is — consumers have more money left, and can in turn spend it on, for example, dining out more often or buying a better house, creating new jobs for cooks, construction workers, and many others.

Developed countries, and soon the whole world, urgently need to boost productivity to make up for an aging population. AI is the best solution we have for this problem. Do we want to slow it down or speed it up?

The Real Risks and Benefits

One of the problems with the current obsession with imaginary AI risks is that it distracts from the real ones. One of the greatest is authoritarian regimes like China using AI to repress their people and attain military superiority. But we can’t stop them; the only solution is to develop our AI better and faster than them, and use it to improve the functioning of democracy, defend human rights, and defeat these regimes on battlefields both real and virtual. Similarly, the best way to combat the use of AI by criminals is to ensure that the police have better AI. Bank robbers fleeing by car is no reason to prohibit cars from being faster than horses; it’s a reason for the police to have faster cars than robbers.

But the biggest risk of AI is artificial stupidity. I don’t know of a single case to date of harm caused by overly intelligent AIs. Instances of damage caused by AIs that are too stupid, on the other hand, are too many to count. Every day, AI systems make millions of decisions, from who gets a credit card to who gets a job interview, but because they lack common sense, they make unnecessary mistakes. The most urgent task for us is to equip AIs with this common sense, and foolish alarmism doesn’t help at all.

Above all, the potential benefits of AI are extraordinary. Curing diseases, better managing cities and the environment, generating wealth, making new scientific discoveries, giving each of us an infinite number of personal assistants: the sooner the better. The biggest risk of AI is that obsessing over its risks will cause us to miss this once-in-a-generation opportunity.

Written by Pedro Domingos

Professor of computer science at U. Washington and author of “The Master Algorithm”. pedrodomingos.org