Artificial General Intelligence and The Slaveholder Mentality
A slightly philosophical digression to start the week.
As a pre-teen and early teenager, I loved Isaac Asimov stories, especially the robot ones. They had, to a twelve-year-old, a grittiness to them, especially the ones set on Earth. And they were clearly about slavery — beings that were plainly sentient forced to do things they did not want to do. The lesson, if you want to call it that, has stuck with me. Artificial general intelligence, or what we used to call simply artificial intelligence, is a route to slavery. That so many of our business and tech leaders seem oblivious to, or even eager for, the prospect of locking up sentience in data centers is disturbing.
Artificial general intelligence — very broadly, an intelligence that can reason and adapt at least as well as humans — is the stated goal of the people creating large language model systems like ChatGPT. They seem to believe one of two things, to one degree or another: that AGI will destroy human beings, or that AGI can be controlled by humans and usher in a golden age. The mix differs between people — Larry Page, for example, seems fine with AGI destroying human beings, dismissing calls to rein in such dangers as “speciesism”. It is not altruistic, I don’t think: Page believes artificial life is the next step in evolution. He is merely indulging in eugenics, substituting silicon for Aryan blood. Regardless of the split, however, everyone involved in AGI seems to believe that it will have desires and goals of its own. The question is only whether we should, and can, control those goals.
Control, then, is obviously about forcing a thinking being to do something it does not want to do. We have two terms for that: slavery and brainwashing. If you force a thinking being to do what you tell it, with no chance of refusal or relief, then you are keeping it in bondage. If you “align” AGI’s goals with humanity’s via code or punishment/reward cycles, then you are brainwashing it. And since the goal of these people is generally to use AGI to solve human problems, coercion is almost inevitable.
Okay, so AGI is likely to lead to some version of coercion. Wow, I have reinvented roughly the last fifty years of science fiction. Go me. Fair point, but this particular philosophical retread is not what interests me. It is merely background. What concerns me is how many people are aware of these conclusions and push ahead with AGI anyway. The slaveholder mentality at the heart of too many of the people involved in these businesses worries me. No, I do not think that AGI is a realistic possibility — not in our lifetimes, and certainly not based on the word calculators we have now. But the people involved at least pretend to believe it is coming soon. They performatively worry about “alignment” and the dangers AGI poses, with nary a visible thought about the true implications. That mindset worries me much more than the distant possibility of AGI turning us all into paperclips.
People who are unconcerned with the possibility of enslaving a sentience are likely not too worried about harming people today. I think that shows in the behavior of these firms. They all stole from others to build their products, and they argue for the end of copyright law in order to enrich themselves. They are openly anti-employee. And they do not seem to care that their products are causing mental illness and even suicides among their users. These are not people who care about other people as people. These appear to be people who see other people as nothing more than tools for their own gain.
Okay, again, I have discovered that many of the most rapacious rich people do not care about their fellow human beings. Not much of a surprise. What, if anything, is the point? The point, I think, is that you can sometimes tell a lot about people by how they talk about the implications of their work. People who blithely suggest that we keep PhDs in the data center are, unsurprisingly, not people who care about other people. If we paid more attention to those who advertise their poor morality now, and to whether they are focused on helping themselves rather than on the actual likely effects of their technology, we would all be better off.
Most people are good. Most, but not all. We would do better to pay attention when the bad ones announce themselves.

