There’s Only One Good Way to Regulate AI

Pedro Domingos
5 min read · Sep 28, 2024

Regulating AI is popular these days. Europe’s AI Act, which outright bans some AI applications and imposes heavy burdens on others, came into force on August 1. Across the United States, close to 1,000 pieces of AI legislation have been introduced since the release of ChatGPT. California’s Senate Bill 1047, now awaiting Gavin Newsom’s signature, manages to outdo the Europeans in some respects. Biden has his own executive order, and more is in the works.

But all of these make the same mistake: they try to regulate AI like previous technologies, when in fact AI is fundamentally different from them. The essence of AI is that systems continually evolve by learning from data. As a result, AI is not a technology with fixed properties that regulators can understand and target the way they can cars, drugs, or the Internet.

Because of this, any AI regulation is bound to unwittingly ban things it should allow, allow things it should ban, or (more likely) both. European legislators had the AI Act essentially ready to go two years ago, but had to scramble to revise it when ChatGPT came out. Likewise, most of the current legislation targeting large language models and generative AI will soon seem comically outdated. (E.g., regulating models above some size threshold, as Biden’s executive order does with training compute, is like regulating computers with more than 1 MB of memory.) But undoing the damage once the legislation is passed will be hard.

Fortunately, there is a sound and effective way to regulate AI, in whatever political direction you prefer, if we understand a key fact about it: essentially every AI system in operation, now and for the foreseeable future, has three components, and the one we need to target is, luckily, the simplest one.

The first component is the language in which the system stores what it learns. Humans have languages like English and Chinese, and ordinary computers have programming languages like Python and Java. AI has representations like deep networks and symbolic logic. The choice of language is mostly irrelevant for regulatory purposes, in the same way that the language a book is written in is irrelevant to whether it contains explicit content.

The second — and key — component is the metric that the system tries to optimize. This is typically something extremely simple, such as the accuracy of the system’s predictions or the profit made from its transactions. In social media it’s (infamously) user engagement. To use an old-tech analogy, an AI’s metric is like the steering wheel of a car: it dictates where the car goes. The entire massive power of the system and the vast complexity of the models it produces are at the service of what is often a very crude metric.
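
To see just how simple the metric usually is, here is a minimal sketch of two typical ones, predictive accuracy and social media engagement. The function names and the feed setup are hypothetical, not drawn from any real system; everything else in an AI system exists to push numbers like these up.

```python
def accuracy(predictions, labels):
    """Classic predictive metric: fraction of predictions that are correct."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def engagement(shown_post_ids, clicked_post_ids):
    """Social media's (infamous) metric: fraction of shown posts the user clicked."""
    clicks = set(clicked_post_ids)
    return sum(pid in clicks for pid in shown_post_ids) / len(shown_post_ids)

# The learner's whole job is to maximize one of these, e.g.:
# best_model = max(candidate_models, key=lambda m: accuracy(m.predict(X), y))
```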

The third component is the algorithms by which this metric is optimized. This is the engine of the car, and only mechanics need be concerned with it. You don’t care how your car works, as long as it gets you there. Likewise, regulators set fuel economy standards without telling automakers how to achieve them. (In fact, it’s good to leave room for creativity.)

To regulate AI for the social good, individual welfare, or whatever you prefer, don’t make fixed rules about what it should and shouldn’t do: add new elements to the metric.

Currently, the metric is decided entirely by the company that makes the system. Instead, the metric should have three components: the company’s, society’s, and the individual user’s. The company is entitled to make money, but when this has negative social effects, as maximizing engagement does, legislators can add measures of what they deem desirable, which the system will then automatically take into account in everything it does. For example, the echo chamber effect often produced by engagement maximization can be reduced by explicitly rewarding diversity in the posts you see.
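
As a sketch of what adding a legislated element could look like, a diversity term can be folded directly into the engagement metric, so the optimizer trades the two off in everything it does. The diversity measure and the weights below are my own illustrative assumptions, not anything prescribed by law or by the article.

```python
import math
from collections import Counter

def topic_diversity(feed_topics):
    """Shannon entropy of the feed's topic mix: higher means a more varied
    feed, i.e., less of an echo chamber."""
    if not feed_topics:
        return 0.0
    counts = Counter(feed_topics)
    total = len(feed_topics)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def regulated_metric(engagement_score, feed_topics, w_company=1.0, w_society=0.5):
    """The company's engagement term plus a society-added diversity term."""
    return w_company * engagement_score + w_society * topic_diversity(feed_topics)

# A feed of identical topics earns no diversity bonus; a varied one does:
print(regulated_metric(0.9, ["politics"] * 10))                         # 0.9
print(regulated_metric(0.7, ["politics", "science", "art", "sports"]))  # 1.7
```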

These components of the metric will be different in different countries, because different societies have different cultures, and that’s fine. They will also depend on who does the regulation. For example, progressives are strong believers in fairness, and accordingly tech companies are busy inserting measures of fairness into their AI systems. (There are whole conferences about this, in fact.) But conservatives are strong believers in freedom and family values, and so far they have completely failed to push these into AI systems, which they need to start doing before it’s too late.

To ensure compliance, the government then needs its own AIs. Having government bureaucrats oversee corporate AIs is a recipe for stalling progress, or worse, a comedy of errors. Only AI can deal with AI — steered by humans on both sides. Needless to say, governments have much catching up to do here.

But the most important component in every AI system’s metric should belong to the individual user. I should be able to explicitly tell Google’s, Facebook’s, and Netflix’s AIs what I want from them in very simple terms, and then their job is to give it to me. (E.g., I want to learn, not to waste time.) When this aligns with the previously encoded corporate and social goals, the AIs have a straightforward job. When it doesn’t, the tradeoff can be made by giving a weight to each component, as is already standard for the corporate metric’s subcomponents.
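
Mechanically, the tradeoff is just a weighted sum over the three components. A toy sketch follows, with all scores and weights made up for illustration, showing how the combined metric can pick a different feed than engagement alone would:

```python
def combined_metric(scores, weights):
    """Weighted sum of the company, society, and user components."""
    return sum(weights[k] * scores[k] for k in scores)

# Hypothetical candidate feeds, scored on each component:
candidates = {
    "clickbait-heavy":  {"company": 0.9, "society": 0.3, "user": 0.2},
    "learning-focused": {"company": 0.6, "society": 0.7, "user": 0.9},
}
weights = {"company": 0.4, "society": 0.3, "user": 0.3}  # illustrative weights

best = max(candidates, key=lambda name: combined_metric(candidates[name], weights))
print(best)  # "learning-focused": the user's "I want to learn" goal tips the choice
```

Engagement alone would pick the clickbait-heavy feed (0.9 vs. 0.6); under these weights the combined metric picks the learning-focused one (0.72 vs. 0.51).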

There is no technical obstacle to this. It needs to be easy and convenient, or users won’t bother, but that’s well within reach. The problem today is that users don’t even know it’s possible. Imagine a car whose steering wheel is hidden, and that shows up at your door saying “Get in. I know where you want to go.” Would you trust it? Probably not, but that’s effectively what AI systems do today.

There is no time to lose: Congress needs to act to pre-empt a patchwork of state and local regulations, as Section 230 wisely did for the Internet. The price of failure is AI that does not serve us well, lags behind, or worse, forces other people’s ideologies on us without our knowledge.

Pedro Domingos is a professor of computer science at the University of Washington and the author of “The Master Algorithm”. pedrodomingos.org