Attack Of The Killer Robots. Part 1. Imagining Future Combat.

by Daniel Russ on June 26, 2013







On the 13th of June 1944, the world's first robot weapon began its lethal arc. The Vergeltungswaffe, or Vengeance Weapon, more commonly known as the V-1, launched off a ramp and, guided by a fairly sophisticated autopilot, found its way to London. There, over the city, the engine sputtered out and the flying bomb fell and exploded harmlessly. Nine more that day found London. One killed six people. Another destroyed a truss bridge.


The first robot weapon had killed human beings.

Professor Christof Heyns, U.N. Special Rapporteur on extrajudicial, summary or arbitrary executions, has called for the creation of "a high-level panel on LARs to articulate a policy for the international community on the issue." He adds: "It is very difficult to get states to abandon weaponry once developed, especially when it is so sophisticated and offers so many military advantages. I am not necessarily saying LARs should never be used, but I think we need to understand it much better before we cross that threshold, and we must make sure that humans retain meaningful control over life and death decisions."


Today, now that technology has crossed the threshold where processors can find viable targets outside the immediate purview of a decision-making person, the question of whether machines will be given autonomous lethal power has already been answered. It has already happened. The philosophers and moralists in high dudgeon over lethal robots are missing the point: the era of lethal, autonomous weapons is already upon us. It is all a question of degree.


Tacit Rainbow was a DARPA program created in the eighties for a cruise missile designed to loiter in enemy airspace, wait for anti-aircraft fire-control radars to switch on, and then attack them. Built on a cruise-missile platform, it acted as a UAV, finding targets ahead of strike packages and hitting them without asking permission. It was cancelled not because of the technology, but because of questionable expenditures.


Lethal Autonomous Robots are, in a speculative sense, machines that kill without a person in the decision loop. So it is natural to ask whether a LAR could recognize a child standing next to a target and know that the blast from the shot would likely kill or injure the child. If it did, how would its programming guide it? Don't shoot, because of the presence of an innocent? Or shoot, because the target is too important? Could a machine show compassion? Probably not in the sense that a machine feels anything. But it might also not panic or kill out of revenge. It might be possible to get LARs right, and prevent another My Lai or another Iraqi wedding party massacre.


In the scheme of things, is a LAR any more frightening than a crooked cop who will rob and beat and kill in his own neighborhood? Perhaps not. But at least a crooked cop isn't a problem we created in a lab.


To find out whether LARs will be allowed, we first have to look at how people behave around new weapons. When the first catapult launched a missile over the walls of a besieged city, did the attackers ever really consider that it might kill an innocent child or a mother? "I shot an arrow into the air, it fell to earth, I knew not where," goes the Longfellow line. Consider that every time a longbowman at Agincourt loosed an arrow into a crowd of cavalry, he was out of the decision-making loop. Human beings at war rarely consider the innocents, or the possibility that a weapon will boomerang. Successful weapons, ones that promise to vanquish many enemies with little loss of life on the part of the attackers, are adopted and developed.


Inside the control station of a drone operator sits a Judge Advocate General officer, a military attorney. This person, armed with intelligence, approves or disapproves the use of weapons against targets. Recently the X-47B, the Navy's carrier-launched drone, made a catapult takeoff and landed safely on board. How long will it be before the X-47B is patrolling? And how much longer after that before the UCAV is patrolling with no JAG assigned to an interlinked observation station?




So where is this all going?


Tomorrow, let's imagine combat in the future.


Source: Rolling Stone




