AI Makes Doctors Worse
A recent study shows that doctors who started using imitative AI to screen for cancer got worse at screening for cancer. In a month.
This is not good.
It is not a good sign that the use of imitative AI for a critical function erodes doctors’ ability to perform that function. We cannot rely upon imitative AI to be doctors, since there is absolutely no way to prevent imitative AI from bullshitting you, no matter the situation. Just a few days ago, we saw an imitative AI system invent a body part. These are not entirely reliable services. And that might be okay under certain circumstances.
In programming, for example, if you know what you are doing, well, imitative AI will always type the code faster than you can. If you know enough to tell when it’s gone sideways, how to keep it on track, how to make it more deterministic and less, well, bullshit-y, there are circumstances in which it can help you. This is likely true in other fields. But the key is that the people controlling the situation can spot the bullshit. If doctors are getting worse at finding cancer after using these tools, how the hell can we trust the outputs?
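To make that concrete: the “keeping it on track” work in programming is mostly verification. You don’t accept what the machine types until it passes checks you wrote yourself. Here is a minimal sketch of that loop, with the model call stubbed out (generate_code is a hypothetical stand-in, not any real API):

```python
def generate_code(prompt: str) -> str:
    """Hypothetical stand-in for a model call (e.g. one pinned to temperature 0)."""
    # Canned response so this sketch runs without any model or API key.
    return "def slugify(s):\n    return '-'.join(s.lower().split())"

def passes_tests(source: str) -> bool:
    """Run candidate code in an isolated namespace against human-written tests."""
    namespace: dict = {}
    try:
        exec(source, namespace)  # run the candidate code
        slugify = namespace["slugify"]
        # The tests below are the human judgment the machine cannot supply.
        assert slugify("Hello World") == "hello-world"
        assert slugify("  extra   spaces ") == "extra-spaces"
        return True
    except Exception:
        return False  # bullshit detected: reject the output

candidate = generate_code("Write slugify(s): lowercase, hyphen-separated words.")
print("accepted" if passes_tests(candidate) else "rejected")
```

The model in that sketch is replaceable; the human-written tests are the part that catches the bullshit. That is exactly the skill the study suggests is atrophying.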
This isn’t a matter of “oh, the tool is better than people!” We know the tool, unlike a calculator, cannot be counted on for accuracy. We know its output needs verification to be useful, at least where accuracy matters. We need doctors to oversee these things if we want to get the best use out of them, if we want them to help rather than hurt.
In the larger sense, why do we put up with this? Why do we let these firms market their products as medical tools when they haven’t proven those products helpful and useful? No drug would be allowed on the market if using it made doctors 6% less likely to diagnose conditions properly. Or more likely to perpetuate racist, debunked medical theories. But somehow, imitative AI systems are allowed to do both with nary a hint of regulation or liability.
I know I must sound repetitive on this issue, but it is all so tiring. Imitative AI is a technology, nothing more. It likely can make some things better for some people, like many other technologies have done in the past. But we have allowed its owners to, well, bullshit us about the magical machine that is going to do everything, fix everything, take everyone’s job and be everyone’s doctor, writer, therapist, and best friend. We could choose to do better, choose to treat it like any other technology, take the good and control the bad. We could have a world where it’s not allowed to make things worse with no consequences.
But that, apparently, is too much to ask of our tech, business, and political leaders. Better, apparently, to make medicine worse so that a handful of people can become richer than they already are.

