AI Will Kill Your Dog. Or: Not Everything Is a Tech Problem
An essay I wrote a couple of weeks ago is going viral for something I did not discuss: the fact that an AI chatbot convinced a woman to euthanize her dog. (Apparently, I have no marketing sense at all, since I never mentioned that.) The CEO apparently took pleasure in the chatbot’s use of “escalating tactics” to convince the woman to put her dog down. This apparent glee is being met with justifiable anger, but I think this story, along with another from this week, shows that AI is a solution in search of a problem.
Why do we need a chatbot to give us veterinary advice? What purpose does it serve better than alternative solutions? Based on the story, there is precious little to suggest that the dog needed to be put down at that time. And we know that imitative AI is not good at diagnosing people, a domain with almost certainly better training data than veterinary medicine. It does not seem likely that imitative AI would be better at helping your sick dog than it is at helping your sick child.
The argument may be that some people cannot afford the vet, which is true. But why is the solution a bad, imitative vet with no ability to deal with human emotions effectively, little or no accountability, and no clear track record of accuracy? Why not take the money invested in imitative vet AIs and use it to hire more vets and vet assistants? Or fight the private equity investment that is driving up vet costs? Or even just provide free or reduced-price clinics for people who need them?
Part of the answer is that the makers of imitative AI need to make money, so they push these products. But that is only part of the problem. A significant aspect is that we, as a society, are simply too willing to reach for a technological solution. Take, for example, this MIT Tech Review article about a group of IT people and ethicists who want to build a “digital surrogate” of a person from emails, social media, and other publicly available data the person generated. They argue that such a duplicate could make end-of-life decisions when no clear prior instructions exist.
That is insane.
The idea that someone could use social media and emails to build something sufficiently like the real person that we would entrust it with whether that person lives or dies is something I couldn’t pitch to an agent as dystopian sci-fi. I would be laughed out of the room. And yet MIT covered it as if it were a serious proposal. The article was at least somewhat skeptical, but its very existence implies that the idea is at least worth exploring. It is, of course, not. No one who knows anything about how imitative AI systems work would seriously suggest that the word calculators we have now could imitate an entire person. But MIT giving the idea attention and validation pushes the notion that, well, maybe they might.
Look, I get it. I work in tech, and I, too, initially see many problems as IT-shaped nails. Because I am the dangerous kind of lazy, I personally spent hours building a system to automate a 30-second task that I do maybe once a month. The appeal is real. But it is also often misguided. Tech cannot improve everything, and it especially has a hard time improving things related to human interactions. While my first instinct might be to start attacking a problem with technology, I have also spent a lot of time walking away from ideas that don’t actually make the situation better. One of the most important skills of an IT person is realizing when IT won’t help. We as a society would do well to keep that in mind.
Tech doesn’t solve every problem. It would be better if we treated tech’s proposals with a lot more skepticism than we, as a society, do today. If the answer to the question “who does this help, and how?” is largely “the makers, by putting money in their pockets”, which it often is, we shouldn’t be afraid to say so just because the idea comes from tech.
And maybe, then, we and our dogs can live happier, healthier, longer lives.


Spot on. My dad, IBM's lead logician, worked/taught with Marvin Minsky ('father of AI' if you went to MIT) back when bellbottoms were the thing. Minsky was going on to the adoring roundtable about how AI would make possible a conversation with your thermostat. My dad brought that thread to a grinding halt by asking the assembled boffins: 'Who the hell wants to have a conversation with a thermostat?' This was c. 1975. Plus ça change...