The Middle, Part 3
AI desire paths through the middle, a good interview, Gemini CLI, and an AI vending machine

A note: I didn’t initially anticipate ‘innovating in the middle’ turning into a multi-part thing, but it made sense when writing about these concepts to make it that way. Here are the first and second parts of this series:
AI is Creating Desire Paths:
What happens when AI creates a middle path all on its own? This article, about a web-based product adapting to what ChatGPT thought it did, answers that question in a super interesting way. Soundslice, a product I didn’t know existed until now, is a website / app that lets you upload a photo of sheet music, and it will help teach you how to play it by sounding it out for you, letting you slow down the audio, and so on. It seems very cool, and a unique use of AI (I’m assuming it relies on a multi-modal feature set) in a space where it didn’t exist before.
But back to the article. Instead of uploading images of sheet music, with the five lines, treble and bass clefs, and notes, users started uploading screenshots of ChatGPT sessions that had a guitar tab…
Obviously that’s not music notation. It’s ASCII tablature, a rather barebones way of notating music for guitar.
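If you’ve never seen it, ASCII tab looks something like this (a made-up riff for illustration, not one from the article): each line is a guitar string, and the numbers are the frets to press.

```
e|-----------------0---|
B|---------0---1-------|
G|-----0---------------|
D|-2-------------------|
A|---------------------|
E|---------------------|
```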
Our scanning system wasn’t intended to support this style of notation. Why, then, were we being bombarded with so many ASCII tab ChatGPT screenshots?
Problem is, we didn’t actually have that feature. We’ve never supported ASCII tab; ChatGPT was outright lying to people. And making us look bad in the process, setting false expectations about our service.
The developer trying to figure this out did what any good investigator does and went to the source, ChatGPT:
Turns out ChatGPT is telling people to go to Soundslice, create an account and import ASCII tab in order to hear the audio playback. So that explains it!
So that raised an interesting product question. What should we do?
Let’s pause there. ChatGPT is hallucinating a nonexistent feature of a real-world product. Hallucinations are bad, right? In law, the most commonly identified hallucination is the fake case citation. At least those are the most readily identified when they get caught inside the court system, often by the attorneys for the opposing party (or by the appellate court after the losing party appeals a judge’s order that relied on fake cases). Bad all around. But maybe there are other examples of ChatGPT getting things slightly wrong, such as suggesting people contact a particular legal aid org for help with a legal problem that org doesn’t actually handle.
The question the developer asks is super interesting: what should we do? Too often the easy answer is to do nothing. Companies know AI hallucinates, and companies also (should) have a roadmap of upcoming features that was previously agreed on and laid out. So just ignore ChatGPT and maybe put a disclaimer on the website that says “sorry, we don’t do that.”
The harder answer is to change the roadmap and add the hallucinated feature. And that’s what Soundslice did:
We ended up deciding: what the heck, we might as well meet the market demand. So we put together a bespoke ASCII tab importer (which was near the bottom of my “Software I expected to write in 2025” list). And we changed the UI copy in our scanning system to tell people about that feature.
This is a company listening to and acting on market demand, albeit demand created by an AI hallucination. ChatGPT basically short-circuited the process of running user feedback surveys over time, A/B testing potential features, and so on, creating an expansion of their customer base that they didn’t plan for or anticipate.
Desire paths, at least conceptually:
If you haven’t heard of a “desire path” before, the idea is pretty simple: in spaces like urban public parks, college campuses, and medical campuses, the sidewalks and landscaping are planned out beforehand. But once people start actually walking around, things get more interesting. Trails crop up that weren’t originally in the plan, because a person on foot isn’t especially inclined to stick to the sidewalk and make a 90-degree turn when they can just walk diagonally across the grass and get there faster. Once those trails get worn in, the property manager or whoever is in charge faces a choice: ignore the trail (or in some cases actively prevent it by putting up a fence), or accept that this is the way things are going to be and create a sidewalk or landscaped path that follows it.
Here’s what seems to me to be the most half-hearted attempt at fencing off a desire path:

And here’s a Google Maps image of the Quad where I went to college, where they decided to just make desire paths into actual sidewalks:
From above it’s not the most logical-looking series of sidewalks, but it works. Roll Tide.
The idea of AI-created desire paths is fascinating to me. We may well see more of this kind of thing, and when we face the choice of allowing the trail or putting up a fence, I wonder which path we’ll take.
But enough about the middle for now. On to other things.
Gemini CLI is Freaking Wild:
If you listen to the Top LinkedIn Legal AI Posters, it seems that agentic AI has all been figured out (by them). I don’t think that’s the case at all. I do know that the Gemini CLI, which by any definition is “agentic,” runs in the terminal and just does the stuff you tell it to. It’s absolutely mind-blowing. It will create the needed files directly on your computer. It will go read documentation before deciding how to do something. If you ask it to add a soundtrack to a minigame, it will search the internet for audio files it thinks will work and download them. It’ll look for a public API to get the data it needs and wire up the routines to do whatever you asked.
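For the curious, here’s roughly what getting started looks like, a minimal sketch assuming the npm package name and flags I remember from Google’s docs (verify against gemini --help before relying on them):

```
# Install the CLI globally (requires a recent Node.js)
npm install -g @google/gemini-cli

# Start an interactive, agentic session in your project folder;
# it proposes file edits and shell commands and asks before running them
cd my-minigame
gemini

# Or hand it a one-shot prompt non-interactively
gemini -p "Add background music to the minigame and wire up playback"
```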
Google only released this thing about two weeks ago, and, as someone posted on Hacker News the day it was released, “Hi - I work on this. Uptake is a steep curve right now, spare a thought for the TPUs today.” So no, I don’t think agentic AI is figured out.
A Good Interview:
If you have an hour, this interview with Ethan Mollick is well worth it:
The AI Store:
Anthropic decided to let Claude run its own vending service inside their offices, then reported on how things went. It’s an absolutely wild story, complete with the AI having a psychotic break that led to a brief identity crisis over the span of 48 hours. You can read their writeup here. The best take on the whole thing comes from a law professor I know, who you should follow on Bluesky if you’re at all interested in this stuff:
I haven’t seen a post that adequately describes how amazingly odd the whole thing reads. It’s like some odd madcap office sci-fi comedy where you can’t quite make out who’s in on the joke. Words fail me. Just read it.
I do imagine the AI critics could say something like this:
“Well, we can see AI is totally unfit to run its own small business operating vending machines! That’s why it’ll never be ready for prime time.”
Let’s pause for a second and just consider how insane it is that an AI model is capable of any of these functions at all, even if in a completely haphazard and slapstick way. Five or six years ago this kind of thing would have been science fiction. If I had approached you, dear reader, even three years ago and told you “soon there will be an AI capable of running a vending service, ordering goods, selling them to customers, setting prices, and managing inventory BUT! it will be prone to psychotic breaks,” you’d ask me if I was feeling OK, or maybe offer to call my wife to come drive me home. But here we are, busily moving goalposts on the jagged frontier.
What “desire paths” will we see in legal AI tools? Maybe people will want to predict outcomes: what are my chances of winning this case? Lawyers hate answering that question, but AI might create a prediction tool anyway.
Maybe our new smart cars should have restrictions concerning desire paths in rambling neighborhoods that were designed for varied views and the illusion of more territory. What if my BYD cuts new paths through my neighbor’s tulips?