Alexander Kott is chief of the Network Science Division at the Army Research Laboratory; in a new paper, he rounds up several years' worth of papers that he wrote or co-authored, along with some essays and articles by others, on what an "Internet of Battle Things" will look like.
Kott describes a future in which sensing/actuating militarized devices are capable of gathering intelligence and acting on it, sometimes with lethal force (everything from launching a missile to firing a gun to using "directed energy weapons"), and predicts that "constraints" in "cognitive bandwidth" (that is, the limited ability of humans to interpret and act on the intelligence from these devices) make these things effectively autonomous, dealing out death untouched by human hands.
Of course, Kott also anticipates that malware will be a big problem for these systems — once you trust a lethal robot to act autonomously to kill your adversaries and not your own troops, anything that compromises that robot could turn it into a fifth columnist that wipes out its own side. Kott proposes that active-defense software will be common — that is, software that detects other software trying to compromise it and strikes back by attempting to compromise the enemy's automated systems.
Kott describes how these battlefields will consist primarily of gadgets with a small rump of people who oversee and maintain them, but what he omits is the political dimension of this: American wars of aggression often end when Americans sicken at the deaths of their children on the battlefield; the incredible staying power of the perpetual wars in the Middle East and Afghanistan (not to mention Yemen) owes much to the use of ranged weapons that are deployed with a minimum of exposure to American lives, and to the use of contractors to do the dirtiest, most dangerous jobs on the battlefield, so that the official soldier body-count stays low.
As battlefields become roboticized, Americans engaged in military adventurism primarily become riskers-of-treasure, not riskers-of-blood. What's more, a battle that kills a bunch of expensive robots is a profit center for the company that replaces those robots for the next battle, producing excess capital that can be used to lobby for more battles and more wars and more blown-up robots and more purchase orders for robots to replace them.
But also unspoken and implicit in the essay is that the typical American adversary is long on blood and short on treasure, pitting insurgent human flesh and IEDs against drones and other advanced materiel. The robots will not merely be killing other robots — they will be shedding oceans of blood. It just won't be American blood, and thus the blood will only do a little to shift American public opinion away from more and more war.
Yet another challenge that is uniquely exacerbated by battlefield conditions is the constraint on available electric power. Most successful AI relies on vast computing and electrical power resources, including cloud-computing reach-back when necessary. Battlefield AI, on the other hand, must operate within the constraints of edge devices, such as small sensors, micro-robots, and the handheld radios of warfighters. This means that computer processors must be relatively light and small, and as frugal as possible in their use of electrical power. One might suggest that a way to overcome such limitations on computing resources available directly on the battlefield is to offload the computations via wireless communications to a powerful computing resource located outside the battlefield. Unfortunately, this is not a viable solution, because the enemy's inevitable interference with friendly networks will limit the opportunities to use reach-back computational resources.

A team that includes multiple warfighters and multiple artificial agents must be capable of distributed learning and reasoning. Besides distributed learning, the challenges include decentralized mission-level task allocation; self-organization, adaptation, and collaboration; space management operations; and joint sensing and perception. Commercial efforts to date have been largely limited to single platforms in benign settings. Military-focused programs like the MAST CTA (Piekarski et al. 2017) have been developing collaborative behaviors for UAVs. Ground vehicle collaboration is challenging and is largely still at the basic research level at present. To address such challenges, a new collaborative research alliance called Distributed and Collaborative Intelligent Systems and Technology (DCIST) has been initiated (https://dcist-cra.org/). Note that the battlefield environment imposes yet another complication: because the enemy interferes with communications, all this collaborative, distributed AI must work well even with limited, intermittent connectivity.
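To make the reach-back constraint concrete, here is a minimal sketch (mine, not Kott's) of the control flow the quoted passage implies: an edge node tries to offload inference to a remote compute resource and degrades to a small local model when the link is jammed. The function names, the latency budget, and the simulated connectivity probe are all illustrative assumptions.

```python
import random

# Hypothetical illustration only (not from Kott's paper): an edge sensor
# node that prefers reach-back (remote) inference when the network link
# is usable, and falls back to a small, power-frugal on-device model
# when the link is jammed or intermittent.

LINK_TIMEOUT_S = 0.5  # assumed latency budget for a reach-back request


def link_available(timeout_s: float = LINK_TIMEOUT_S) -> bool:
    """Stand-in for a real connectivity probe (e.g. a heartbeat to the
    reach-back node). Here it just simulates intermittent jamming by
    failing most of the time."""
    return random.random() > 0.6  # roughly 40% of probes succeed


def remote_inference(observation: dict) -> str:
    """Placeholder for offloading the observation to a large model
    hosted outside the battlefield (cloud / command-post reach-back)."""
    return "remote:fine-grained:" + observation["sensor_id"]


def local_inference(observation: dict) -> str:
    """Placeholder for a small model that fits the edge node's compute
    and electrical-power constraints."""
    return "local:coarse:" + observation["sensor_id"]


def classify(observation: dict) -> str:
    """Prefer reach-back when the link answers within the latency budget;
    otherwise degrade gracefully to local computation so the node keeps
    operating under jamming."""
    if link_available(LINK_TIMEOUT_S):
        return remote_inference(observation)
    return local_inference(observation)


if __name__ == "__main__":
    for i in range(5):
        print(classify({"sensor_id": f"node-{i}"}))
```

The design point Kott is making is just this fallback direction: because the enemy can deny the reach-back path at any time, frugal local computation has to be what the system degrades to, not an afterthought.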
Challenges and Characteristics of Intelligent Autonomy for Internet of Battle Things in Highly Adversarial Environments [Alexander Kott/U.S. Army Research Laboratory]
(via Four Short Links)