I didn’t even know the Future of Life Institute existed until a couple of days ago, when Elon Musk, Steve Wozniak, Stephen Hawking and many others signed a petition calling for a ban on the military use of artificial intelligence (AI), and in particular on autonomous weapons.
I tend to agree that over the next few years we’ll see some kind of breakthrough that will make AI viable. Maybe we’ll have a machine that, after a period of learning, can reason through simple, unfamiliar problems the way most mammals do. Maybe we’ll have something more. But that doesn’t scare me.
What scares me is that we could give birth to something more intelligent than us. Yes, it’s fiction. For now.
But what is the difference between autonomous and intelligent weapons?
Autonomous means pre-programmed with certain patterns and able to operate without further input – e.g. “kill all combatants with this insignia.” Building this kind of weapon is easier than you think.
Intelligent means able to learn about the enemy and the battlefield and make decisions on what to do – like a human would – with all the attached issues (can an intelligent weapon desert? maybe change its mind and turn against its commander?). This is hard.
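To make the distinction concrete, here is a deliberately toy sketch in Python (every name is hypothetical, and this is an abstract illustration of the two decision styles, not anything resembling a real weapon system): the autonomous system is a fixed rule written before deployment, while the intelligent one learns its rule from feedback and can therefore end up behaving in ways its builders never specified.

```python
# Toy contrast between the two notions above. "Entities" are just
# dicts of features, and a "decision" is a boolean flag.

# Autonomous: a fixed rule, baked in before deployment. Its behaviour
# is fully predictable, and no amount of experience can change it.
def autonomous_decide(entity):
    return entity.get("insignia") == "X"

# Intelligent: the rule itself is learned and keeps shifting with
# feedback, which is why questions like "could it change its mind?"
# make sense for this kind of system and not for the first.
class IntelligentDecider:
    def __init__(self):
        self.weights = {}  # learned association: feature -> score

    def decide(self, entity):
        score = sum(self.weights.get(f, 0.0) for f in entity["features"])
        return score > 0.0

    def learn(self, entity, feedback):
        # feedback > 0 reinforces the features just seen; feedback < 0
        # suppresses them. After enough negative feedback the agent's
        # decisions flip, with no change to its code.
        for f in entity["features"]:
            self.weights[f] = self.weights.get(f, 0.0) + feedback
```

The first function’s behaviour is fully determined the day it ships; the second’s depends on its entire history of feedback, which is exactly why questions about desertion or turning on a commander only arise for the second kind.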
Now, the problem with that petition, which I signed anyway, is that it’s not going to stop militaries from developing autonomous, and maybe even intelligent, weapons, for the simplest of reasons: someone is going to build such weapons, so everyone else will as well. As a deterrent, of course, but also to develop countermeasures.
I don’t like talking like this, as I know I sound cynical, but we have countless precedents. The most renowned is Albert Einstein’s letter to Roosevelt, advising him to develop nuclear weapons before the Germans did. It’s an obvious and very human reaction to a threat, and it happens all the time (think of any arms race, or of plain business competition).
The petition is… naive? I think so.
I’m a bit too lazy right now to dig up the documentation, but we have banned anti-personnel mines (the Ottawa Treaty) and chemical weapons (the Chemical Weapons Convention), and we have signed non-proliferation treaties; yet the Earth is still full of anti-personnel mines, chemical weapons, and nukes. And we keep building and improving them. All it takes is one bad guy, because as a species we are only as intelligent as the stupidest of us.
The same will happen with autonomous and intelligent weapons. Knowing what we know about how we, as a species, develop weapons, we should all build military AI as soon as possible. At least we’d reach a stalemate quickly and move on to the next military threat, and in the meantime we could put AI to better uses (as we are doing with computers and the Internet).