AI Discrimination Is a Larger Concern than AI-Caused Extinction
The EU passed a draft version of its AI Act, a law meant to rein in bad practices by so-called AI systems. It would require disclosure of training data and its sources, ban certain uses of biometric recognition, and require certain systems to identify risks before being put on the market, among other provisions. The full law won't be finalized until later in the year, so aspects of it are likely to change. But it is heartening to see a major government take the real risks of AI systems seriously.
Far too much has been made of the hypothetical danger of AI systems destroying humanity. Very often, such concern is a red herring, a way to shift regulatory attention away from contemporary risks and onto far-future possibilities. This, of course, serves the purposes of the people building AI systems in the here and now. The more we can be made to focus on the future, the less we focus on the real discrimination that AI systems have already been caught committing today.
You can argue about whether these regulations will achieve their goals. Frankly, I would like to see them made stronger in some areas and more forgiving in others. In my opinion, for example, the makers of AI systems that produce inaccurate or misleading information should bear some liability for how that information is used. Agree or disagree, the point is that this is a potential regulation dealing with the harms of today. It is focused on helping ensure that AI systems are beneficial, to the extent possible, and on controlling their potential damage.
That is what regulations should do. They should not be focused on the far future or on pie-in-the-sky possibilities. So, argue all you want about the merits of the specific EU regulations. Just don't get caught up in the idea that they must be bad because they won't save us from an infinite paperclip-producing machine. People focused on that scenario usually don't want you focused on mortgage discrimination or the theft of artists' work to create training data.


Big fan of your Substack in general; this *particular* post (esp. the title, which is false on its face, when you think about it) is, I think, pernicious, though. For me, the bottom line: both things can be true. Regulations, therefore, should address both, esp. since most regulations (forcing transparency, e.g.) that can address one can also address the other. It's really that simple.
Broaden the discussion beyond AI to all issues, for example, and we'd never do anything about calamitous climate change, since there's *always* going to be something more pressing to focus on in the more immediate time frame. It's a dead-end argument, imho.