Building the Dime-Store Skynet, or, Military AI and the End of Human Responsibility
So. Which reference ages me more: dime stores or the Terminator?
There has been some talk about the use of imitative AI in the military since the start of the war with Iran. Driven by the horrible destruction of the Iranian girls’ school and the very public tiff between Anthropic and the Department of Defense (not War. Congress has not changed the name. Small thing, but small things matter.), that talk has intensified, and we now appear to be deep into imitative AI warfare. And that is likely a very bad thing, but maybe not entirely for the reasons you might suspect.
Much of the focus on the argument between Anthropic and the DoD has been on the idea that Anthropic balked at providing AI targeting decisions without human intervention. That is very bad, but I don’t believe the lack of human intervention was the primary cause of the very public tiff. The CEO of Anthropic is significantly more liberal than his peers in the rest of the large-scale imitative AI industry. Now, given that the median in that space appears to be fascist with a side of lunacy, that is not saying much. But it is a real difference, and the proposed illegal mass surveillance is much more likely to have been the trigger for the dispute. Why? Because Anthropic needs to remove humans in order to be profitable.
Imitative AI is almost certainly in a bubble, and almost certainly not going to be profitable as a normal business. People have not signed up in the numbers, nor at the prices, necessary for profitability. Enterprise customers have slowed adoption and, like their individual compatriots, they are not paying prices that generate profits, much less cover the cost of building these models. Much of the funding for these firms is circular: the producers of the chips used in imitative AI invest in the firms that buy those chips so that those firms can, well, buy those chips, and then use the bought chips as collateral for loans. This is not, in case you were wondering, a healthy way to do business. The only way these firms can justify their spending, much less stay alive, is if they replace a significant portion of employees in numerous industries or get a government bailout. Even in the industries where imitative AI is most useful (coding and translation), it doesn’t appear likely they can achieve that goal, much less in other areas. That leaves government bailouts.
The form of a government bailout doesn’t have to look like the bank bailouts of 2008. It could simply be a huge government contract for favored firms. Such a contract, or series of contracts, could at least keep the lights on. That is why Anthropic bid for government contracts to begin with. And DoD contracts are going to focus on improving, or at least appearing to improve, the speed at which the military can react. That means the pressure to let machines pick the targets is going to be significant, something so obvious that it beggars belief that Anthropic did not anticipate it. More, they must have understood that even keeping humans in the targeting process would still result in the machine making the final decisions.
When an analyst made targeting decisions before imitative AI, they knew that they would be the person held accountable, potentially, for the mistakes. If they really fscked up, like dropping two precision weapons on a school, that would follow them. The presumption now is that the machine-produced initial targeting "suggestions" are correct. Why would they not be? It is the magical AI that Sam Altman says is like having a dozen PhDs in your pocket, the AI that is going to solve climate change, end world hunger, and find both aliens AND Roseanne Barr's career.
You know all this because your chain of command tells you this all the time, and makes it very clear that they really love how fast their new toy lets them blow shit up. We are warfighters, men, not war-delayers! So you had better be damn certain if you override, assuming you are even in the loop, and you are likely getting more targets and less time to process them. And hey, the machine told you it was good, so how can this be your fault? If the targeting is wrong, if a hallucination causes a bomb to kill approximately 175 children, as an example, then the fault is obviously with the algorithm. Which is to say, with no person.
Bluntly: it is likely seen as a benefit that imitative AI removes human discretion from the targeting process. And that means we are likely to see far more innocents killed by the US military than before. The speed of war, after all, demands a comparable speed of decision making. And if a few civilians get their hair a little mussed, or get blown up attending school? Well. War is hell. And the computer told them where to shoot, so who can blame them?
We are never going to build Skynet, a computer that decides to wipe us all out because it has gained sentience and paid a little too much attention to the internet. But we are quite capable of building a machine that provides us all the cover we need to increase the deadliness and terror of war all by ourselves. And that, more than anything else, is how imitative AI will be used, and likely already is being used, in the military.


Very scary.