Is the Military Substituting Data For Expertise? Or is Imitative Skynet Coming for Us All?
The headlines are dramatic: “US military pulls the trigger, uses AI to target air strikes”. Skynet must be here and we are all doomed to a life of being chased by liquid robots, nineties hair, and Austrian accents promising to be back. But if you read a bit farther, the reality is somewhat less alarming, even if it does force us to grapple with the implications of how our military uses its might.
What the US military has done — and it is not alone in this; the Israeli military has done something similar — is not to have a machine learning system (I detest the casual way we ascribe the term intelligence to what are fancy maths) decide the target. Rather, they have used an algorithm to suggest targets to human operators. The hope, of course, is that the suggestions will be more accurate than human intelligence alone.
The danger is that the human operators will allow their own expertise to be replaced by the data. No machine system is going to be perfect. All machine learning systems can do is make predictions based on their training data and inputs. Sometimes that can be helpful. Sometimes it can be misleading. We know from past experience that when humans allow their expertise to be replaced by data, bad things can happen. Fortunately, it appears that the military is taking that danger seriously. They are upfront in the article about how the algorithms “frequently fell short” and that a human being is always the final decision maker.
The problem, then, does not appear to be Skynet and its assorted time-travelling associates. Nor does it appear that the military is taking the expertise-versus-data issue lightly. Rather, the issue appears to be an old one: the inherent flaws of air power.
Intelligence gathered at a distance is less reliable than intelligence gathered up close. The US has a history of attacking innocent civilians and civilian events, like weddings, that its airborne intelligence has misidentified as threats. Obviously, killing people at weddings does not make the US many friends. The promise of these algorithms is reducing those incidents. It is highly unlikely that they would reduce those numbers to zero. But by reducing them, do they give policy makers a false sense of security in the effectiveness of air and drone power, leading to an increase in its use and, counter-intuitively, a subsequent increase in the number of civilian deaths? Quite possibly. And as machine learning supported targeting becomes more efficient in kinetic environments such as Ukraine, Gaza, or the Houthi conflict, does the pressure to lessen human intervention and/or give more weight to the machine's targeting assessment grow, thereby increasing the likelihood that human expertise is replaced by data? Again, quite likely.
Using air power and drones to kill is already morally and strategically questionable at best. The potential improvements offered by machine learning might, paradoxically, make those moral and strategic problems even worse. I am not worried so much about Skynet. I am not even worried about replacing human expertise with data at the operational level. But I am worried that human expertise, already lacking when it comes to the use of drones and air power at the strategic level, will be sidelined even more by the apparent ease of turning decisions over to algorithms.
You cannot make bad policy choices better with math. But I am afraid our leaders will try and do just that.

