Our Fear of Artificial Intelligence
A true AI might ruin the world—but that assumes it’s possible at all
Paul Ford writes: Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. “Don’t laugh at me,” he said, “but I was counting on the singularity.”

"The question 'Can a machine think?' has shadowed computer science from its beginnings."
My friend worked in technology; he’d seen the changes that faster microprocessors and networks had wrought. It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.

Agility: Rapid advances in technology, including machine vision, tactile sensors, and autonomous navigation, make today’s robots, such as this model from DLR, increasingly useful.
“Superintelligence: Paths, Dangers, Strategies,” by Nick Bostrom (Oxford University Press, 2014)
"Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term 'artificial intelligence' in 1955."
If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.
"Extreme AI predictions are 'comparable to seeing more efficient internal combustion engines… and jumping to the conclusion that the warp drives are just around the corner,' Rodney Brooks writes."
When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines.

Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware. As Kurzweil described it, this would begin a beautiful new era. Such machines would have the insight and patience (measured in picoseconds) to solve the outstanding problems of nanotechnology and spaceflight; they would improve the human condition and let us upload our consciousness into an immortal digital form. Intelligence would spread throughout the cosmos.
You can also find the exact opposite of such sunny optimism. Stephen Hawking has warned that because people would be unable to compete with an advanced AI, it “could spell the end of the human race.” Upon reading Superintelligence, the entrepreneur Elon Musk tweeted: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” Musk then followed with a $10 million grant to the Future of Life Institute. Not to be confused with the Future of Humanity Institute, the Oxford center that Bostrom directs, the Future of Life Institute is an organization that says it is “working to mitigate existential risks facing humanity,” the ones that could arise “from the development of human-level artificial intelligence.”
No one is suggesting that anything like superintelligence exists now. In fact, we still have nothing approaching a general-purpose artificial intelligence or even a clear path to how it could be achieved. Recent advances in AI, from automated assistants such as Apple’s Siri to Google’s driverless cars, also reveal the technology’s severe limitations; both can be thrown off by situations that they haven’t encountered before. Artificial neural networks can learn for themselves to recognize cats in photos. But they must be shown hundreds of thousands of examples and still end up much less accurate at spotting cats than a child.
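That appetite for labeled data is easy to demonstrate. Below is a minimal sketch, not anything from Ford's article: a single logistic "neuron" trained by gradient descent on synthetic "cat vs. not-cat" feature vectors. The dataset, the 64-dimensional features, and every parameter are invented for illustration; the point is only the trend the paragraph describes, that held-out accuracy improves as the number of labeled examples grows, and plateaus well short of perfect.

```python
# A toy illustration (synthetic data, not a real image classifier) of why
# neural networks need so many labeled examples: the same simple model is
# trained on more and more data, and its test accuracy climbs only gradually.
import numpy as np

rng = np.random.default_rng(0)
true_w = rng.normal(size=64)  # hidden rule that defines "cat" in this toy world

def labeled_photos(n):
    """Synthetic stand-in for n labeled photos: feature vectors plus noisy labels."""
    X = rng.normal(size=(n, 64))
    y = (X @ true_w + rng.normal(scale=2.0, size=n) > 0).astype(float)
    return X, y

def train(X, y, steps=300, lr=0.1):
    """Fit a single logistic 'neuron' by gradient descent on cross-entropy loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))    # predicted probability of "cat"
        w -= lr * X.T @ (p - y) / len(y)  # average gradient step
    return w

X_test, y_test = labeled_photos(5_000)    # held-out examples for evaluation
for n in (100, 1_000, 10_000, 100_000):
    w = train(*labeled_photos(n))
    acc = ((X_test @ w > 0) == y_test).mean()
    print(f"{n:>7} training examples -> {acc:.1%} test accuracy")
```

A real image classifier would use convolutional layers and actual photographs rather than random vectors, but the underlying phenomenon is the same: more labeled examples yield better generalization, and a child needs nothing like a hundred thousand of them.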
This is where skeptics such as Brooks, a founder of iRobot and Rethink Robotics, come in…
Paul Ford, a freelance writer in New York, wrote about Bitcoin in March/April 2014.