From Mechanization to Roboticization:

Since my 2006 blog post on the roboticization of warfare, a number of developments have emerged. Countries across the globe have continued their love affair with unmanned weapons of war, and the U.S. has committed nearly 3 billion USD to the cause for fiscal year 2008¹. The robotics community has finally weighed in on the issue – on both sides of the aisle. And because of this, campaigners have begun to push for a ban on the technology.

The two major figures emerging from the robotics community are Sheffield University’s Noel Sharkey and the Georgia Institute of Technology’s Ronald C. Arkin. In 2007 Arkin produced a 117-page paper detailing the mechanisms and philosophy behind robotic warfare and arguing that robotic fighters are preferable to human combatants because they can be given an unwavering, hardwired ethical code. Sharkey has taken the counterpoint, emphasizing his belief that taking the decision-making process away from humans is flawed, mainly because autonomous units do not (and probably never will) have the discriminating power to make such decisions.

When dissecting both arguments, I find some major flaws. Firstly, Arkin’s views do not stand up when applied to real-world conditions. In discussing his proposed override systems, Arkin inserts external review mechanisms as a way to ensure that culpability can be assigned. The problem with this approach is that it can simply be overridden by other human systems – namely government injunction. If we have learned anything from the policies enacted since 2001, we should recognize that tools such as ‘national security’ and ‘executive privilege’ are remarkably powerful when it comes to nullifying carefully planned oversight. This is not to say that removing overrides is the answer to this problem: a system with no external feedback is answerable to no one, and can therefore pursue whatever course of action it chooses without recourse.

Sharkey, on the other hand, is looking in the right direction but does not see the detail on the road ahead of him. When interviewed by New Scientist, Sharkey stated that “we should not use autonomous armed robots unless they can discriminate between combatants and noncombatants. And that will be never.” Strategically, this is an erroneous move. Personally, I believe that algorithms able to distinguish “friend” from “foe” will eventually be developed far enough to nullify any counter-arguments on the subject. That does not mean that such technology will, or even needs to, be able to perform this function in an objective, foolproof manner – it merely means that if the system can withstand the scrutiny it receives, it will be deployed.
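To make the shape of that objection concrete, consider what “discrimination” reduces to in practice: a statistical score compared against a chosen threshold. The sketch below is entirely hypothetical – the track features, scoring rule and threshold are my own illustrative inventions, not anyone’s fielded system – and its only purpose is to show that such a decision is probabilistic by construction, so “good enough to resist scrutiny” and “objective and foolproof” are very different standards.

```python
# A purely hypothetical sketch of threshold-based friend/foe discrimination.
# Nothing here reflects any real targeting system; it only illustrates that
# such a classifier outputs a probability and a cut-off, never certainty.

from dataclasses import dataclass


@dataclass
class Track:
    """A sensed object, described by illustrative features only."""
    emits_iff_signal: bool            # responds to friend-or-foe interrogation
    near_civilian_area: bool
    weapon_signature: float           # 0.0 - 1.0 confidence from a sensor model


def hostile_probability(track: Track) -> float:
    """Toy scoring rule standing in for whatever statistical model is used."""
    score = track.weapon_signature
    if track.emits_iff_signal:
        score *= 0.05                 # a 'friendly' transponder strongly lowers the score
    if track.near_civilian_area:
        score *= 0.6                  # proximity to civilians lowers, but never zeroes, the score
    return score


def engagement_decision(track: Track, threshold: float = 0.9) -> str:
    """The decision reduces to comparing a probability with a chosen threshold."""
    return "engage" if hostile_probability(track) >= threshold else "hold"


# The system is 'good enough' whenever its error rate survives review, yet a
# misread sensor or a mis-set threshold still turns a noncombatant into a target.
print(engagement_decision(Track(emits_iff_signal=False,
                                near_civilian_area=True,
                                weapon_signature=0.99)))
```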

I call Sharkey’s statement strategically erroneous because, at the heart of the matter, how these systems perform at friend/foe recognition is not the real issue. Just because an autonomous unit (built of silicon, steel and software) can kill the people its designers deem ‘suitable targets’ does not mean that we as a species should be introducing another layer of complexity to the madness of modern warfare.


¹ A figure that will rise to 4 billion in 2010. According to the roadmap, the total funding allocated for 2007–2013 is a little over 24 billion dollars.