AI Hype and Help
The Verge has two stories about supposed AI systems (I will get to the supposed AI nature of them as we go) that seem pretty representative of the paths that AI-based automation can take. One is a story of hype, the other a story of real success.
First the hype. A company called Rabbit has released a device, the Rabbit R1, that can supposedly run your apps for you. The selling point of the device appears to be two things: AI and getting all your answers from one system. Per the article, you can train the device to do any action: it “learned” (this is where the supposed AI comes in) by watching people use apps and then copying those actions. Now, anyone who knows anything about apps sees the flaw: the settings icon may very well look different tomorrow, and how do you get it to work with apps it hasn’t been trained on? Well, you can train it yourself:
The R1 also has a dedicated training mode, which you can use to teach the device how to do something, and it will supposedly be able to repeat the action on its own going forward. Lyu gives an example: “You’ll be like, ‘Hey, first of all, go to a software called Photoshop. Open it. Grab your photos here. Make a lasso on the watermark and click click click click. This is how you remove watermark.’” It takes 30 seconds for Rabbit OS to process, Lyu says, and then it can automatically remove all your watermarks going forward.
Neat how the example is a way to help you remove proof of work from images. Tech CEOs do so despise creative people, don’t they?
Regardless of the intent, this is mostly hype. What they have built is a templating machine: it will watch what you do and then mimic it. It might be slightly more flexible than previous iterations because the math has advanced, allowing for less required certainty in the copying, but it is a templating machine. Which is fine; iteration is good and beneficial, though this specific device looks silly. I already have a voice assistant on my phone that can do most of this, and plenty of free templating apps for repeatable work on my computer. By slapping the term AI on it, they can jump on the hype train and get at least some credulous press coverage at places like, well, The Verge.
So it is mostly hype: a device that will do some things worse than existing interfaces, but that does seem to have the potential to give you a simpler action templating system. But “trade Siri for more flexibility in templating actions” doesn’t sound as cool as “large action model”, apparently.
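To make concrete what I mean by a templating machine, here is a minimal sketch in Python. The app names, action types, and watermark-removal steps are invented placeholders, not how Rabbit’s system actually works; the point is just that recording a sequence of actions and replaying it is old, unglamorous technology.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str        # e.g. "open_app", "click", "apply"
    target: str      # the app or UI element the action applies to
    value: str = ""  # optional payload, e.g. text to type

def record_demo() -> list[Action]:
    # In a real recorder these would be captured from the user's session;
    # here they are hard-coded for illustration.
    return [
        Action("open_app", "Photoshop"),
        Action("open_file", "photo.jpg"),
        Action("select_tool", "lasso"),
        Action("click", "watermark_region"),
        Action("apply", "content_aware_fill"),
    ]

def replay(template: list[Action], new_file: str) -> None:
    """Replay the recorded steps, swapping in a new input file."""
    for action in template:
        target = new_file if action.kind == "open_file" else action.target
        print(f"performing {action.kind} on {target}")

if __name__ == "__main__":
    template = record_demo()
    replay(template, "vacation_photo.png")
```

Note where the brittleness lives: the template records exact targets, so the moment an app moves a button or renames a menu, the replay breaks. Fancier matching relaxes that somewhat, which is the “more flexibility” part, but it does not change what the machine fundamentally is.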
The second story is one of direct benefit to people. A machine learning system has helped narrow down potential battery materials that are less likely to burst into flames. The system is pretty simple conceptually: it looked at the physical and chemical characteristics of known materials, screened for combinations that fit its requirements, such as less lithium usage, stability (the whole not bursting into flames thing), cost, and availability, and produced a short list of candidates. Human researchers then put those candidates through their paces to come up with the material combination that Microsoft is hopeful can create better batteries.
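As a rough illustration of that screen-then-rank pattern, here is a minimal sketch in Python. The property names, thresholds, and example candidates are all hypothetical; Microsoft’s actual pipeline relied on large-scale simulation and far more sophisticated models, but the overall shape, filtering a big pool against constraints and handing a short list to humans, is the same.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    lithium_fraction: float  # fraction of lithium in the composition
    stability_score: float   # higher = less likely to degrade or combust
    cost_per_kg: float       # estimated material cost
    abundance: float         # relative availability of the raw elements

# Hypothetical screening thresholds, for illustration only.
MAX_LITHIUM = 0.5
MIN_STABILITY = 0.8
MAX_COST = 20.0
MIN_ABUNDANCE = 0.3

def screen(candidates: list[Candidate]) -> list[Candidate]:
    """Keep only candidates that satisfy every constraint,
    then rank the survivors by how little lithium they use."""
    survivors = [
        c for c in candidates
        if c.lithium_fraction <= MAX_LITHIUM
        and c.stability_score >= MIN_STABILITY
        and c.cost_per_kg <= MAX_COST
        and c.abundance >= MIN_ABUNDANCE
    ]
    return sorted(survivors, key=lambda c: c.lithium_fraction)

if __name__ == "__main__":
    pool = [
        Candidate("A", lithium_fraction=0.7, stability_score=0.90, cost_per_kg=12, abundance=0.6),
        Candidate("B", lithium_fraction=0.3, stability_score=0.85, cost_per_kg=15, abundance=0.5),
        Candidate("C", lithium_fraction=0.2, stability_score=0.60, cost_per_kg=8, abundance=0.9),
    ]
    for c in screen(pool):
        print(c.name, c.lithium_fraction)  # only "B" passes every threshold here
```

The useful work still happens in the lab; the screening just keeps researchers from spending time on combinations that were never going to meet the requirements.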
This is not hype; this is using machine learning to help accelerate the work of research scientists. It has legitimate uses, it augments human ability rather than trying to replace it, and it represents a real commercial and societal benefit (I personally prefer when my batteries do not explode; your mileage may vary).
We are deep in an AI hype bubble. But as I have talked about before, the question is whether this is the crypto bubble, which produced nothing but gambling and crime, or the initial internet bubble, which left real skills and miles of fiber optic lines behind when it burst. The Rabbit device and ChatGPT demonstrate one path: very small incremental improvements that don’t help much and/or destroy human creativity. Microsoft’s machine learning system is another: significant augmentation of human skills, combined with technical expertise that is transferable to other problem domains.
Which bubble we get will depend in part on where the investment money, private and government, goes. This is one of the reasons I am so down on things like ChatGPT and other imitative AI systems. They are not really doing anything other than attempting to automate creativity out of the creative parts of human work, and doing so by taking creatives’ work for their own uses without recompense. That path never leads to anything interesting, just a mess of generic business speak and bland, middle-of-the-road art: a series of bots that write each other contentless emails and systems that produce the blandest, most generic art possible.
Microsoft’s system, however, shows that we can use this automation to accelerate human knowledge for the benefit of society as a whole. We should be spending a lot more time and money on those efforts than on the hype around imitative AIs.

