AI as Accountability Destroyer
This is a point I have harped on before, but a handful of recent stories reminds me that it is an evergreen problem. I suspect that the primary purpose of most AI, imitative AI especially, is not so much to replace employees (though that will be part of it, as we will see) but rather to ensure a complete lack of accountability. If the machine made the decision, if the math is the reason, then it must be an impartial decision, correct? Well, no, but decision makers will certainly at least pretend that is the case.
The first story is the one that combines the two drives: the introduction of AI into medicine. An experienced nurse has written an article about how AI has been forced into her workflow. It is not going well. I should note that the nurse is not opposed to all automation; she thinks highly of a system that ensured all critical care steps were taken for each patient. But, more and more, machines are being used to replace judgment. Here she describes an assessment that nurses used to complete themselves:
This determines whether a patient was low, medium or high-need. And we’d figure out staffing based on that. If you had lots of high-need patients, you needed more staffing. If you had mostly low-need patients, you could get away with fewer.
We used to answer the questions ourselves and we felt like we had control over it. We felt like we had agency. But one day, it was taken away from us. Instead, they bought this AI-powered program without notifying the unions, nurses, or representatives. They just started using it and sent out an email saying, ‘Hey, we’re using this now.’
The new program used AI to pull from a patient’s notes, from the charts, and then gave them a special score. It was suddenly just running in the background at the hospital.
The problem was, we had no idea where these numbers were coming from. It felt like magic, but not in a good way. It would spit out a score, like 240, but we didn’t know what that meant. There was no clear cutoff for low, medium, or high need, making it functionally useless.
She goes on to point out that, since nurses could not understand why a decision was being made, they were no longer in a position to advocate for their patients. In the past, nurses both understood the reasoning behind a decision and could argue for or against it if they thought a patient’s health would be affected. Now they are out of that loop, and patients are entirely at the mercy of the machines’ decisions, regardless of the expertise of the nurses involved. The hospital is replacing expertise with data, and that is likely to have a direct impact on patient health:
“Efficiency” is a buzzword in Silicon Valley, but get it out of your mind when it comes to healthcare. When you’re optimizing for efficiency, you’re getting rid of redundancies. But when patients’ lives are at stake, you actually want redundancy. You want extra slack in the system. You want multiple sets of eyes on a patient in a hospital.
When you try to reduce everything down to a machine that one person relies on to carry out decisions, then there’s only one set of eyes on that patient. That may be efficient, but by creating efficiency, you’re also creating a lot of potential points of failure. So, efficiency isn’t as efficient as tech bros think it is.
To the bean counters and investment bankers who run hospitals, this is the best of both worlds. They get to reduce the amount spent on nurses, and they avoid responsibility for the fact that there are fewer nurses per patient. If something goes wrong, they can blame the nurses for not following the machine’s instructions or, in the worst case, blame the algorithm itself. Patients may suffer, but pinning the blame on a specific person or group of people will be more difficult.
This shift away from accountability does not just apply to large changes like the one in the nurse’s article. Smaller decisions, though no less important, are subject to the same dynamic. A recent study, for example, demonstrates that an AI resume reader discriminates wildly against non-white males, as if it were a member of the KKK. Another recent study shows that at least some verification processes are biased against minorities. While it might be a stretch to say that the people who used these tools wanted to discriminate, I do not think it a stretch to say that the people behind their use did not especially care. These limitations have been known for years. But the tools both cut the time needed to review the data in question and shifted accountability from a human making a discriminatory choice to a machine making one.
This is not a guarantee that accountability will always be avoided, but proving discrimination in our current legal context is already difficult, and pushing the decision to an algorithm makes it harder still. The same goes for staff reductions. It is easier to argue that machine intelligence can make up for the cuts, and that the machines decided where staff could be reduced, than to explain to regulators, staff, and patient advocates that you simply think fewer nurses doing more work is good.
We need, as a society, to harshly punish this shift away from accountability. Anyone who produces a discriminatory system should face severe penalties, and so should the firms that use these tools. Banning such systems outright in certain critical areas, like employment, benefit determination, and education, is probably justified at this point, given how often these issues recur. No machine should be allowed to make a critical decision about health, staffing, or the like. A person must be the ultimate decision maker and must therefore be held accountable.
Accountability is the magic sauce that makes pretty much every institution run well. When you remove accountability, you effectively license bad behavior, intentional or not. Humans must make the critical decisions for humans. We must not allow fancy algorithms to become a replacement for judgment, and thus for accountability. Ultimately, a human is still making the decision; it is just hidden behind a machine. That obfuscation can be used to destroy accountability and thus worsen outcomes for the people subject to an organization’s decisions.
It is bad enough when a hospital cuts nursing staff. It is worse when the people who make that decision can hide behind an algorithm.