Max Tegmark: Is An Arms Race With Killer Robots On Its Way?
Along a small strip of the demilitarized zone on the South Korean border, a weapon of terrifying proportions is being used. It’s called the Samsung SGR-A1 sentry gun, and its capabilities far outstrip any traditional aim-and-shoot weapon: it performs surveillance, uses voice recognition, and can track and fire at targets with a mounted machine gun. That might sound like a scene out of Terminator, but it is happening right in front of our eyes, which is why the Boston-based Future of Life Institute has taken the audacious step of writing an open letter to the UN warning of a future scenario where autonomous warfare will not have an off switch.
The letter, drafted by Future of Life Institute member Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, was also signed by other tech luminaries including Elon Musk, DeepMind founders Demis Hassabis and Mustafa Suleyman, and 116 specialists from across 26 countries. The letter states, “Once developed, lethal autonomous weapons will permit armed conflict to be fought on a scale greater than ever, and at timescales faster than humans can comprehend.”
In light of this open letter, we recently sat down with Max Tegmark, founder of the Future of Life Institute, who explained a little more about the risks we face and how serious they are. See the full interview next week.
“People obsess over the threat of future intelligent systems, asking questions like, ‘Are our jobs going to go?’ But we face even more urgent questions, like: are we going to start an arms race with killer robots?
There’s a UN meeting happening in November where that specific issue will be discussed, and it will become clear whether or not we will have an international treaty about it. That’s not science fiction; that is happening right now. And if we go full tilt with it, then forget about terrorist attacks with cars, Kalashnikovs, and vans. Instead, the terrorists of tomorrow will be using AI drones that can kill millions of people, perfect for assassinations or targeting a specific ethnic group…”
“…We’re going to end up in a horrible situation where nation states basically cede a lot of influence to terrorist groups and other non-state actors, so there’s a big push from the AI community to stop this. This open letter came from the people building the AI, saying we want our AI to be used to create a better future, not to just create a new arms race.
We failed epically with nuclear bombs: we have 15,000 of them now, with Kim Jong-un and Donald Trump in a nuclear pissing contest. With AI technology, we don’t want it to end up like that. Let’s try to be more proactive.
The scientists building AI technology are very idealistic. They want to cure cancer, eliminate poverty, and create a better future. But there are other people who want to use it for their own, less noble ends. We can’t just leave this discussion to the world’s generals.”