As a little kid I worried a lot, about everything from marauding robot armies laying waste to Americans with death rays (which I saw in a terrifying old sci-fi movie called Target Earth) to the prospect of starving hordes of old people ravaging the countryside after Social Security broke down (which I heard in an even more terrifying Barry Goldwater campaign speech). My father’s inevitable response to my regular heebie-jeebie attacks was, “Don’t worry, it’ll never happen.”
Good as my dad was at most things, his talents for prophecy were limited. Actuarial tables long ago blew the whistle on Social Security, a Ponzi scheme that starts to totter when population increases flatten out. And the killer robots aren’t just in our future, they’re already here, as I learned at a weekend conference on the ethical and legal implications of advancing robot technology.
My father was right about one thing: The military robots aren’t carrying the colors of the Venus Interplanetary Expedition forces, but those of the U.S. Army. The Pentagon already has, by its own count, $20 billion worth of robots in uniform, doing everything from reconnaissance missions to clearing land mines and booby traps.
Those assignments may sound relatively benign as martial arts go. But cruise missiles, which locate and navigate to targets on their own after being launched by humans, are a species of robot. So are the missile-firing drone aircraft that roam the skies of Afghanistan and Pakistan, blowing up suspected terrorists. They’ll soon have company on the ground. Great Britain’s QinetiQ Group is marketing a robot tank that packs a 7.62 mm machine gun and a four-barreled grenade launcher. Another machine-gun-equipped robot tank made by Samsung is already patrolling South Korea’s northern border.
The Samsung tank doesn’t open fire unless a human operator back at headquarters tells it to — but it could. It’s equipped with heat and motion sensors that enable it to identify human targets and shoot them. The same is true, or soon will be, for most of the other weapons. The University of Ottawa law school’s Ian Kerr and Katie Szilagy, in a paper delivered at the conference, said that more than 40 countries are developing so-called autonomous weapon systems in which machines rather than humans will deliver “targeting instructions and even decisions about whether and when to pull the trigger...”