Tech Should Be About Helpfulness, Not the Techies
I have a confession that may surprise readers: I think LLMs are kind of cool.
I don’t think that stealing from authors and artists with the intention of replacing them is cool. I don’t think that using enough energy to power Scandinavia (figures may vary) is cool. I don’t find pretending that hallucinations don’t matter, or ignoring the limits of the “knowledge” these models produce, cool. I don’t find the complete lack of concern about the bias inherent in these models cool. But from a purely technical perspective? They are kind of cool.
But that does not mean I support their use. In fact, I think that coolness highlights the largest problem I have with LLMs and the people trying to make money from them: they are making LLMs about themselves, not about their users.
Technology should be about being helpful, not about finding a use for the technology just for the sake of using it. That way lies bullshit and wasted time and money. Too much of the imitative AI “boom” has been about trying to find a use for a technology that really doesn’t have a strong commercial use. Take Edutech, for example. AI is heavily pushed by Edutech companies, with language implying that if you don’t use AI you are as unhip as, well, someone who still says “hip.” But teachers are rejecting it left and right.
Imitative AI tools are not helping teachers, according to the people who are experts on what helps teachers: teachers.
Let’s put it plainly: the usage numbers haven’t budged in a year.
In spite of the release of more sophisticated AI models, in spite of increased training in AI, in spite of a summer to slow down, regroup, and take an undistracted look at this new technology, Education Week found that teachers are using AI at lower rates in October 2024 than in December 2023—from 33% to 32%.
…
6% more teachers said they don’t think the technology is applicable to their subject matter or grade.
Many more teachers have learned about AI over the last year and, among the ones who don’t use it, more of them are saying “because it ain’t for me.” I would devour an interview with every one of those teachers right now, and if you build tools for teachers, I hope you would as well.
This is not surprising to me. The doyens of Silicon Valley do not try to figure out what people need and want. They decide that they need to make money on a given technology, and so they will make money on that technology, regardless of how helpful it is to the putative users. Crypto is largely only useful for speculation and for committing crimes. NFTs were useful to no one other than the people who sold them to the early marks. And now it is imitative AI’s turn.
No one seems to be making any real profit from imitative AI. This is not entirely unexpected. Most imitative AI is only good for small, low-level tasks, and even then you have to spend time and resources validating its output. If you do not, you have to spend even more time and resources dealing with the fallout of, say, providing tax advice via your chatbot that gets your users audited.
Those small tasks are simply not valuable enough to make up for the enormous energy and storage costs that these models generate. Microsoft, for example, is adding Copilot to Office 365 subscriptions, with an increase in price, in certain markets (in related news, I am now looking for a non-Google alternative to Microsoft Office). A successful product does not need to hold hostage the things people actually use in order to grow. The same principle applies to Google adding AI to search: no one wants a search engine that tells them to glue cheese to their pizza, but Google needs to justify its investment in imitative AI.
Tech simply doesn’t listen. In the education article mentioned earlier, when asked what teachers would need in order to increase uptake of imitative AI, its purveyors answered: better education about its capabilities. But we have had an intensive year of pushing imitative AI’s purported capabilities, and still teachers have not taken it up in serious numbers. In part because tech companies tend to be monopolies, and in part because their leadership tends toward arrogance, large tech companies can try to force their users to subsidize imitative AI investments. In areas where there is slightly more competition, like Edutech, customers either ignore those products or use them only along the edges of their work. In response, companies in those domains claim that people just need to be better educated on the glories of imitative AI.
As a technologist, I find all of this infuriating. When you find a new way to do something, it is natural to see if you can turn it into a product, or at least into something people might find useful. But if your new product turns out not to be commercially viable, or to carry too many external costs, you have two honest choices: improve it, or move on to something new. What you should not do is force the product on people for whom it does not work, or browbeat them into using it.
Technology is supposed to help. Yes, like everything else, it involves trade-offs. But with things like crypto, social media algorithms, and imitative AI, we are not talking about balancing the good against the bad. We are talking about things that society, by and large, simply does not want; no balancing required. If technology were focused on helping, this would be a non-controversial opinion. Instead, the industry is focused on believing that it and it alone knows best. Is it any wonder people have started to turn on the tech industry? A little humility would go a long way toward restoring trust. Unfortunately, it seems clear that the tech industry as a whole understands neither humility nor what its users actually find helpful.

