You don’t stay awake at night worrying that your dog will attack you. Why would you worry about your robot? It was bred in much finer detail than your dog, and it’s that much less likely to go awry. Dogs are the result of selective breeding; robots are the result of machine learning. When you train a dog, it doesn’t pass what it learned to its puppies; but when you train a robot, the knowledge remains in its brain forever, and is easily copied to other robot brains. Machine learning is how we evolve robots and computers to do what we want without having to program them in detail. Instead, computers learn from their own experience, by looking at data and generalizing from it. Machine learning has made a lot of progress in recent years, and is responsible for most of the big successes of AI, from Watson to self-driving cars. But it also causes a lot of fear: if computers start to figure things out by themselves, what’s to stop them from turning against us? Will they drive us to extinction?
In reality, machine learning will make humans more powerful than ever. Learning algorithms will augment our brains in the same way that cars augment our legs and tools augment our hands. And the main thing we need to do to bring about this happier outcome is to become savvier consumers of AI. That starts with realizing how different machine intelligence is from the human kind.
The reason we fear AI is that AI is a mirror in which we see ourselves. From Terminator to Ex Machina, the robots are humans in disguise. Since we’re the most intelligent creatures we know, any talk of intelligence inevitably evokes an image of us — and humans are indeed dangerous. But being intelligent and being human are two very different things. The machine behind the mirror doesn’t work like you or me. If we understand what it’s really like, we’ll lose our fear of it, and — even more important — we’ll know how to use it. You can train your AI just like you can train your dragon, and then you’ll be able to fly on its back.
Machine learning algorithms come in two main flavors: supervised and unsupervised. Supervised algorithms need a teacher (us). A spam filter learns what spam is by generalizing from example emails that have been labeled “spam” or “not spam.” Most learning algorithms in use today are of this variety. Big data makes them very powerful — lots of examples to learn from — but they learn to do one very specific job at a time, and that’s all. Unsupervised algorithms learn on their own, like children at play, but even they are guided by a control signal, as children are guided by pleasure and pain. And the control signal is determined by us.
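The spam-filter idea can be made concrete with a toy sketch. This is not a production filter; it is a minimal naive-Bayes-style classifier over a handful of made-up labeled emails, just to show what “generalizing from labeled examples” means mechanically:

```python
from collections import Counter
import math

# Toy training set: each message is labeled "spam" or "not spam".
# All messages and words here are invented for illustration.
training = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for monday", "not spam"),
    ("lunch on monday with the team", "not spam"),
]

# The "learning" step: count how often each word appears under each label.
word_counts = {"spam": Counter(), "not spam": Counter()}
label_counts = Counter()
for text, label in training:
    label_counts[label] += 1
    word_counts[label].update(text.split())

def classify(text):
    """Pick the label with the higher log-probability, naive-Bayes style."""
    vocab = set(word_counts["spam"]) | set(word_counts["not spam"])
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Prior: how common this label is in the training data.
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("claim your free money"))   # → spam
```

A new email it has never seen ("claim your free money") lands on the spam side purely because its words look statistically more like the spam examples than the legitimate ones — and that is all the filter knows or will ever know.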
Computers could be infinitely more intelligent than us and still not a threat in any way, because we have the easy job, and they have the hard one. Our job is to define the problem and check the solution. The computer’s job is to solve it. We want the computer to be infinitely intelligent, so it can solve problems that seem beyond us, like curing cancer. By design, that infinite intelligence cannot diverge from the goal we set for it. Machine learning algorithms keep score for the programs they evolve, and only the fittest programs survive. Any program that’s not using all its resources to strive for the goal will quickly fall by the wayside. Skynet is as likely to arise from this process as we are to breed wolves from dogs.
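The score-keeping described above can be sketched in a few lines. This is a deliberately simplified hill-climbing loop, with a made-up goal (fitting the rule y = 3x + 2): candidates are scored against the goal we defined, and only the fitter one survives each round — a candidate has no way to pursue anything other than that score:

```python
import random

random.seed(0)  # make the run repeatable

# Hypothetical goal set by us: match the rule y = 3*x + 2.
data = [(x, 3 * x + 2) for x in range(10)]

def score(w, b):
    """Lower is better: total error against the goal we set."""
    return sum(abs((w * x + b) - y) for x, y in data)

# Start from a random candidate and keep only mutations that score better.
w, b = random.uniform(-10, 10), random.uniform(-10, 10)
for _ in range(20000):
    cand_w = w + random.uniform(-0.1, 0.1)
    cand_b = b + random.uniform(-0.1, 0.1)
    if score(cand_w, cand_b) < score(w, b):   # only the fitter candidate survives
        w, b = cand_w, cand_b

print(f"fit: w={w:.2f}, b={b:.2f}")   # ends up close to 3 and 2
```

The selection rule is the whole story: any candidate that drifts away from the goal scores worse and is discarded on the spot, which is the point of the Skynet comparison — the process structurally cannot reward divergence from the objective.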
Unfortunately, today’s learning algorithms can only be trained by experts, but it doesn’t have to be that way. You should be able to tell Amazon’s recommendation system what you like and don’t like, inspect its beliefs, and correct its mistakes, rather than hope it will gradually get the hint from seeing what you buy. Algorithms that let you do this already exist in the lab, and it’s time to start deploying them. In the future, you’ll have your own AI, and it will hold all your data and share it only as needed. And you’ll have a job that can be done neither by computers, which lack common sense, nor by unaided humans, who have only so much time and memory. AI is a horse for your mind, and horses don’t compete with their riders; they let them go farther and faster.
This is not to say that there’s nothing to worry about in relation to AI. Bad guys may get hold of it, so we need an AI police to catch their creations. But most of all we need to beware of AIs giving us what we asked for instead of what we wanted. Computers make a lot of bad decisions because they don’t know any better, from picking the wrong stock to buy to picking the wrong date for you. People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world. Let’s teach them better and reap the rewards.