AI Leaders Think You Are Useless, Ugly Bags of Mostly Water
My headlines get dorkier and dorkier. I’m not proud. Or tired. I can do this as long as you can.
Imagine a picture in the hands of police, hundreds of miles from your home. The police are trying to arrest someone for a serious crime, so they turn to what they have been told is an identification system. That system says you match the picture. (How did your picture get into that system? Surely, you never gave permission for your vacation photos to be used in a police dragnet. Well, maybe you did, and maybe you didn’t. Lots of things are hidden in terms of service, and if it’s ambiguous, who’s going to stop the data brokers? But that is an imagining for another day.) And so those police issue a warrant that your local police honor. You have never been near the state, much less the city, in question, but you are arrested anyway. And jailed. For six months, your pleas and alibis ignored because, in part, the identification system is an artificial intelligence system — how could it be wrong? Haven’t you heard? They are smarter than PhDs.
And many leaders of imitative AI firms are just fine with ruining your life. The world in which their tools are privileged over your life is the one they explicitly state they wish to build.
The most egregious of these firms is likely Palantir. They recently, “because we get asked a lot” per their tweet, tweeted out a manifesto of sorts based on the really dumb book their CEO wrote. It is, as others have pointed out, an almost explicitly fascist piece of work. They attack the decadence of culture, rail against the concept of soft power, against the idea that government workers have value (while also whining that we aren’t nice enough to public figures), against the notion that democratic control should decide how their tools are used, and for the idea of rearming everywhere all at once and forcing people back into a draft, so that their dreams of conquest and the hard fist across the world can be realized. Incredibly stupid, as I said, but stupid in a revealing way.
The idea that soft power is not useful is categorically moronic. We tried to change the world with bullets and bombs in Vietnam and Iraq and through proxies in Africa and South America and succeeded precisely nowhere. The Soviets tried to do the same in Eastern Europe and Afghanistan with similar results. Soft power won the Cold War — the most important economic regions decided they preferred their conception of Western Values over Soviet values. No one invaded them. Blue jeans and the Marshall Plan and Hollywood did more to win the Cold War than the 101st Airborne. The destruction of USAID has allowed China to use its soft power to supplant American interests all over the world. Only a moron thinks the view through a gunsight is the clearest view of the world.
These are stupid people, of course. The idea that government workers — the ones that keep our planes from falling out of the sky and our food from poisoning us — are not worthy of praise, but that the private figures who want to dictate to the publics of Germany and Japan how many guns they should buy should be showered with affection, is proof enough of that. It is a transparent play to stifle criticism and regulation, as is the denigration of diversity and the ranting about how some cultures have produced nothing of value. They HATE the idea that people deserve a say in their futures, HATE the idea that people should be able to resist the decisions of a private firm. And since white protestant christians are least likely to hold those views, why, then, others must be cast as a Problem and their allies must be isolated from criticism and consequence. It is an explicitly anti-human, anti-democratic position loudly stated. But it is not a unique position in the world of AI. Take Anthropic, and its new “constitution”.
Anthropic released a Constitution for Claude, its imitative AI tool. The constitution is mostly meant as a way to help improve their products. A how-to manual with pretensions of grandeur, in other words. It focuses on the concept that explaining why is better than telling what when trying to use these tools. But it also has a similar mindset to the Palantir manifesto, even if it is quieter about its beliefs. The constitution doesn’t talk about harm to individuals. While it touts safety, the safety it discusses is mostly safety at the mass casualty level: “to kill or disempower the vast majority of humanity or the human species as a whole”. It also doesn’t defend democracy per se, merely asserting that it should not help people who want “unprecedented” societal control. Well, genocides, absolute rule, and slavery have plenty of precedents in human history. Just as importantly, it never mentions human rights. That means that they made the explicit choice to remove the concept of human rights as a guiding principle from a document meant to help people use their tool correctly. It does, however, hint that they want you to believe Claude is conscious.
Now, Anthropic almost certainly does not believe that Claude has consciousness. To believe that it does and to still use it as a tool, to still sell its labor, would be equivalent to slavery. At a minimum, if the people building this tool thought it had achieved consciousness, at least some of them would be raising hell over that belief. The world is full of monsters, but the large majority of human beings are not monsters themselves. The reason to raise the possibility of consciousness appears to be two-fold. First, it is hype, and given the burn rate of these firms, they desperately need to hype their products in order to have any hope of surviving. The second is more insidious. If they are on the verge of a conscious Claude, who the hell do you think you are to limit or regulate their work? How dare you get in the way of possibly making a new life! Fie on you, sir! Fie!
These discussions are really not about the tools. Imitative AI is a technology, just a technology, and it will survive or fall on its economics and cultural acceptance or rejection. The good or ill it will do, and the exact mix of such, will be decided no differently than the mix of good or ill was decided for every other technology. The larger concern is the mindset of these firms. They truly seem to believe both that they are in the process of changing society in their image and that society should have no say in those changes. They bristle not only at regulations but at the very principle that people deserve a say in their own lives, that they collectively get to control how their society is ordered. Some, like Palantir, are more explicit. Some, like OpenAI, are more circumspect, arguing for regulation in public while lobbying against it in private. Some, like Anthropic, don’t seem to understand their own arguments and actions. But all of them deem it inconceivable that their actions have effects, effects that their fellow citizens have a right to prevent or mitigate. They, by virtue of their work, should be left in charge.
No democracy can survive that kind of special treatment. Power, political, economic, or otherwise, must be controlled. No whining about respect or the changing world or faux concern over consciousness should change the basic fact that the best society is the society that allows its citizens the most say in how that society is ordered. And no amount of fancy word calculators will ever change that basic truth.