Biometric security breakthroughs on the horizon would let the military capture an iris and facial scan of an individual from a distance and immediately match it against a biometrics-based "watch list" of suspected terrorists, combatants or criminals.
"Gathering biometrics covertly from a distance — there are dozens of technologies that hold promise," said U.S. Air Force Maj. Mark Swiatek, assistant professor and deputy head of the Department of Philosophy at the United States Air Force Academy. "They will be able to be deployed in the next few years."
But the idea of automated killing in war based on "tactical non-cooperative biometrics" — in which the military lets "the boxes and systems do all the dirty work" without meaningful human intervention in the decision — raises troubling questions, said Swiatek, who spoke on the topic at the Biometric Consortium Conference.
While this high-tech approach might well save innocent lives in a conflict, it is unclear whether such facial- and iris-recognition systems are accurate enough — and what accuracy rate, if any, would be high enough to justify the inevitable mistakes.
"Can we claim proper intention?" Swiatek asked the session audience, noting that even war rests on philosophical underpinnings that demand reasoning about what is just in war.
"Human beings always say they didn't mean to do it," Swiatek said, but with the robotic approach, he noted, the intent is programmed into the software. Such questions need to be carefully answered on both philosophical and legal grounds, he said, and perhaps it's time to "slow down the march" of automated war systems.
But the pace of advance might make that hard to do.