When we imagine robotic combatants, we naturally expect that they will be modelled on ourselves. Ancient Crete, for example, had the bronze, xenophobic robot Talos, who indiscriminately hurled missiles towards all foreign ships. Talos had autonomy, the power to reason (however dimly), and the power to determine his own behaviour. But most importantly, behind the face of Talos was a single agent, an agent modelled after a human subject.
But as the Economist reports, BAE Systems, in conjunction with several UK universities, has put forward a vastly different intelligence model for our future robotic warriors. Eschewing full-fledged autonomy, the individual combatants are designed to pool information about the environment, potential targets, and available resources, and then arrive collectively at a course of action; individual robots may also ‘bid’ to avail themselves of resources, but the allocation of resources is again decided globally. No agent can deviate from the plan for the whole, and the plan for the whole belongs to no individual agent. A team of such combatants is like one vast neural net spread over several agents.
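The Economist piece does not spell out ALADDIN's actual algorithms, but the ‘bid locally, allocate globally’ pattern it describes can be illustrated with a toy auction. The sketch below is an assumption for illustration, not BAE's implementation: the Bid record, the allocate function, and the agent and resource names are all invented.

```python
# Illustrative sketch only: ALADDIN's real mechanism is not public, so this
# toy auction merely shows the shape of "local bids, global allocation".
from dataclasses import dataclass

@dataclass
class Bid:
    agent: str      # which robot is bidding (hypothetical name)
    resource: str   # e.g. "medevac slot", "recon drone"
    value: float    # the agent's locally estimated utility of the resource

def allocate(bids: list[Bid]) -> dict[str, str]:
    """Resolve all bids globally: each resource goes to its single highest
    bidder, so no individual agent can override the plan for the whole."""
    winners: dict[str, Bid] = {}
    for bid in bids:
        best = winners.get(bid.resource)
        if best is None or bid.value > best.value:
            winners[bid.resource] = bid
    return {resource: bid.agent for resource, bid in winners.items()}

bids = [
    Bid("robot_a", "medevac slot", 0.9),
    Bid("robot_b", "medevac slot", 0.4),
    Bid("robot_b", "recon drone", 0.7),
]
print(allocate(bids))  # {'medevac slot': 'robot_a', 'recon drone': 'robot_b'}
```

The point of the design, on this reading, is that the allocation logic lives in one global function rather than in any agent: individual robots only submit bids, and the decision about the whole belongs to no one of them.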
The system, called ALADDIN (Autonomous Learning Agents for Decentralised Data and Information Networks), still lacks actual robotic bodies for its agents, but in simulations ALADDIN beats human soldiers at allocating resources for the wounded and at responding effectively to emergency scenarios like earthquakes and floods. As the Economist puts it, “No human egos [or artificial egos] get in the way.” ALADDIN has the capacity to save more lives and minimise collateral damage in battle scenarios because there is no possibility of diverging emotional or personal motives on the part of its members.
But a question emerges: is the emotional and moral confusion experienced by the rescuer or soldier an essential element of the moral structure of warfare and emergency response? BAE, aware of concern over the prospect of ALADDIN having the power to give or take life, proposes that the decision to strike an opponent, or not to save a given victim, could always be passed on to a human moderator. But this proposal seems only to entrench the idea that such choices in battle or an emergency should be made by a human being subject to the normal array of human passions, and so we write back into ALADDIN the very confusion we were trying to minimise or eliminate.