Process is key:
One of the things I keep wanting AI to do is the kind of unorganized task that, say, an intern would really excel at. Here’s an example:
Aside from my day job (and writing bad Substack posts) I’m the Cubmaster for my kids’ Cub Scout pack. We’ve been trying to drum up volunteers from the pack parents for the myriad things that need doing, and so we’ve been having people fill out this “talent survey” with a bunch of questions in different formats:
Once a bunch of people filled these out, I realized that we were looking at a tedious data entry task. If only I had an intern, like Kramerica, Inc.
What if I could feed these into “the AI” and have it just spit out a spreadsheet? That’d be pretty sweet, and it’s the kind of boring but time-saving task that, IMO, Generative AI should really be doing. And if you’re going to say “well why didn’t you just send out a Google form to people” don’t @ me.
It’s also a more complicated data task than it seems at first glance. There’s free-text data that needs to be separated, text data that needs to be normalized (like email addresses and phone numbers), and multiple choice plus ‘other.’ Putting this into a spreadsheet / Google Sheet takes some thought as well.
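To give a sense of the normalization piece: here’s a minimal sketch of what cleaning up handwritten phone numbers and email addresses might look like in Python. The function names and the assumption of 10-digit US phone numbers are mine, not anything from the actual survey process:

```python
import re

def normalize_phone(raw: str) -> str:
    """Strip everything but digits and format as XXX-XXX-XXXX (assumes US numbers)."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]  # drop a leading country code
    if len(digits) != 10:
        return raw.strip()  # leave unrecognized formats alone for a human to review
    return f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"

def normalize_email(raw: str) -> str:
    """Lowercase, trim, and drop stray spaces that handwriting transcription adds."""
    return raw.strip().lower().replace(" ", "")

print(normalize_phone("(555) 123-4567"))          # 555-123-4567
print(normalize_email("  Jane.Doe@Example.COM ")) # jane.doe@example.com
```

Even this toy version shows why the task needs thought: you have to decide what counts as “close enough” to fix automatically versus what gets flagged for a human.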
So, as an experiment, I scanned the filled-out-by-hand forms and fed them into both OpenAI and Google’s AI to see if I could at least save myself the pain of data entry. OpenAI flubbed it and wouldn’t ingest the multi-page PDF (maybe due to file size), whereas Google handled it. Google could kind of output a table of the values I wanted but still got things wrong (it was transcribing handwriting, after all), and I noticed that the more I asked of it, the more it got wrong. So while it was a fun experiment, it didn’t end up saving me time.
If I had a human intern, I could just hand them the stack of papers and say “put these into a spreadsheet for me please” and let them cook. When doing unstructured tasks with AI, the cost-benefit analysis keeps coming back to “exactly how much time do I really want to spend experimenting with different prompts” vs. just doing the thing myself, weighed against how often I’ll need to repeat the task in the future.
So this is really a process thing - putting some AI on an already complicated process doesn’t fix the process, it just mashes the outputs of the process together in weird ways.
The Answer
This piece over on LinkedIn by Colin Lachance on AI agents and law is really good. I still haven’t really figured out how to deal with the concept of AI agents, but judging from Colin’s quote of Eric Schmidt in that piece, even he doesn’t have it figured out, so I guess I’m in good company.
It still seems to me that a lot of legal AI companies are focused on getting you THE ANSWER to your question, and then centering their marketing around the idea that their AI co-pilot will infallibly deliver you THE ANSWER. Does that strike anyone else as weird? Law is, after all, a profession built on the idea that THE ANSWER isn’t actually a thing, that laws and situations and circumstances and motivations and even facts can be argued and massaged in a client’s favor, and, at best, “it depends.”
That’s why the idea of using AI as a back-and-forth companion that works iteratively is intriguing. Maybe more lawyers than I think are just out there behind their desks looking for THE ANSWER, though, so, as always, YMMV.
Are Legal Aid Attorneys actually using Legal AI tools?
If you’re reading this and you work in Legal Aid, please take a second and use this survey to say whether or not your org has access to legal-specific AI tools:
For all the talk around how AI is going to solve access-to-justice from the Legal Research Industrial Complex, I’m really curious to know just how many people in legal aid are actually using their AI tools. Will be fun to find out!