Image caption: A full-scale figure of a Terminator robot, used in the movie “Terminator 2.” Artificial intelligence experts say the risk of robot wars may be closer than we think. (Getty Images)
The risk of robot wars popularized in science fiction series like “The Terminator” may be closer than we think, and a group of technology and artificial intelligence leaders is trying to stop it.
Famed astrophysicist Stephen Hawking, Tesla Motors Inc. founder Elon Musk and Apple Inc. co-founder Steve Wozniak are among the signatories of a letter presented today calling for a ban on autonomous weapons. The missive, unveiled by the Future of Life Institute at the International Joint Conference on Artificial Intelligence in Argentina, labels autonomous weapons “the third revolution in warfare,” following gunpowder and nuclear arms.
“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” says the letter, whose endorsers include Skype co-founder Jaan Tallinn and prominent linguist Noam Chomsky. “Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.”
The experts warn that the deployment of autonomous systems may be possible within years, not decades. They say a ban on such weapons “beyond meaningful human control” must be enacted to avoid an arms race they believe is more dangerous than the Cold War.
The industry leaders are also striving to protect AI’s reputation in the public eye. “Most AI researchers have no interest in building AI weapons – and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits,” the letter says. It compares the effort to the bans on chemical and biological weapons supported by researchers in those fields.
The conference where the letter was presented will focus primarily on the “optimism” researchers have for the progress made toward creating machines that can think for themselves, according to event organizers. But as they discuss recent developments in the field, the record number of attendees are also expected to weigh the potential impact AI could have on society.
Skype’s Mr. Tallinn previously said he thought today’s AI is unlikely to pose a threat, but that could change moving forward.
AI has been a hotly contested issue as firms increasingly invest in the technology. From IBM’s Watson to Apple’s Siri, AI is already used commercially in finance, healthcare and iPhones. But many have raised questions about whether it is possible to create AI that acts ethically.
Some are already working to limit the use of AI research in the military. When Google acquired DeepMind, the AI company made it a condition of the deal that its technology not be used for military purposes. Demis Hassabis, chief executive of Google DeepMind, and Peter Norvig, Google research director, were among the letter’s endorsers.