I’ve been thinking lately about the difference between what’s possible with generative AI, what’s actually useful, and what’s actually profitable. Here’s a very bad drawing, since I’m a visual person:
I think right now, in terms of products and ideas, we’re seeing a heck of a lot of stuff being trotted out in the “possible” area, but very little that has made its way into the “profitable” area. Take, for example, the recent adoption of generative AI by both Westlaw and Lexis - that seems to me to be more of a “well, we’ve got to do something with this” move rather than “let’s do this and attract loads of new customers!” Granted, I don’t know how many firms switch from Westlaw to Lexis or vice-versa on a weekly basis, but I just don’t think there’s that much churn out there.
An interesting article ran in the WSJ recently about those “InsurTech” startups that were supposed to disrupt the insurance industry by using data. Spoiler: it turns out the big insurance companies already had all the data, and it’s very hard to compete with them on the premise of data when you have to go out and collect new data yourself.
As I’ve pointed out before, there’s no real “moat” anymore for legal tech companies offering “AI” products that are actually AI and not a glorified decision tree. I think the real moat is data. But to me the biggest issue is distinguishing between what’s possible and what’s actually useful.
Is it possible to tailor car insurance premiums to a particular driver’s habits? Well, sure. But is it actually useful? Will someone want to switch their car insurance in order to save $7.25 - while at the same time the InsurTech company is paying through the nose to collect enough data to justify it? IDK.
Likewise in law - is it possible to rub some generative AI on every process in a legal department? Well, sure. But what, exactly, are legal departments going to pay startups to do with generative AI that can’t be done by the companies that already have all the data? And what will legal departments actually find to be useful?
So maybe all this new technology does is further entrench the existing players.
Will it replace lawyers?
Hell, I don’t know. Tom Martin has a very good article here that I mostly agree with (you should read it and form your own conclusions). Two quotes struck me:
ChatGPT in the wild (as I refer to the consumer version of ChatGPT, chat.openai.com) is particularly well-suited to mislead those who are not legally trained into believing in the accuracy of the response provided because it states the responses confidently and due to the fact that a knowledge imbalance prevents a user from being able to see the cracks in the facade (errors) that a legal practitioner would notice right away.
Unfortunately, there are at least a few problems with this line of reasoning:
1. Users don’t care;
2. Users have had enough with not being able to get answers to the legal questions they have and are too broke (80% of them can’t afford a lawyer) or frustrated to hire a lawyer to help them; and
3. The answers users get are ‘good enough’ for them.
Exactly - personally, I think a lot of people in legal, and especially in legal aid, get caught up in making the perfect the enemy of the good.
Where I don’t agree with Tom is that I think the people who will use ChatGPT instead of hiring a lawyer are the people who don’t hire lawyers anyway. Millions of people use Google search as their legal issue triage system every day. In terms of people dealing with their own legal problems, there hasn’t been a real gate for a long time, just maybe a bunch of minefields that we as lawyers force unrepresented people to march through. If ChatGPT helps people avoid that, then I’m for it.
Incidentally, I think the biggest opportunities for this technology aren’t in legal research or legal brief writing, but in alleviating a lot of the drudgery crap that small firm lawyers deal with, like the first drafts of letters, sorting through things, even the first drafts of blog posts or website copy.
A Comparison on Generative Search
Speaking of search, Google is rolling out their version of generative search to try and catch up to Bing (which is something I didn’t think I would ever type). I tried asking it some legal questions:
What’s interesting is there seems to be a breaking point where it switches to straight search, particularly when you ask it to do more involved legal things:
When I asked Bing, it didn’t have the same breaking point:
To be fair, I don’t know if Google has built in a YMYL (Your Money or Your Life) evaluator around asking for legal advice, or if its search just defaults to this when the response is more complicated.
An Experiment
I’ve also been experimenting with what generative AI can do, and one interesting area is how it can explain things like new laws or changes to existing laws:
I created an app that you can feed bills and staff analyses to, so that people can then chat with those bills. One curious thing is how credulous the model is - if the bill or staff analysis says something, it will believe it. I’m still working on it, but as a voter education concept it’s kind of interesting.
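For the curious, the core of a “chat with a bill” app like this is usually retrieval plus a grounded prompt. Here’s a minimal sketch of that pattern - every name here is illustrative, not the actual app’s code, and the crude keyword scoring stands in for whatever embedding search a real app would use. A real version would send the prompt to a chat-completion API; this just shows how the pieces fit, including why the result is credulous: the model is told to answer only from the bill’s own text.

```python
# Sketch of retrieval-augmented "chat with a bill" (illustrative names only).

def split_into_chunks(bill_text: str) -> list[str]:
    """Split a bill into paragraph-sized chunks for retrieval."""
    return [p.strip() for p in bill_text.split("\n\n") if p.strip()]

def score(chunk: str, question: str) -> int:
    """Crude relevance score: count question words that appear in the chunk.
    A real app would use embeddings; this keeps the sketch self-contained."""
    chunk_words = set(chunk.lower().split())
    return sum(1 for w in question.lower().split() if w in chunk_words)

def build_prompt(bill_text: str, question: str, top_k: int = 2) -> str:
    """Pick the top_k most relevant chunks and wrap them in a grounded prompt.

    The instruction to answer ONLY from the excerpts is also the source of
    the credulity: if the bill asserts something, the model repeats it.
    """
    chunks = split_into_chunks(bill_text)
    best = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]
    excerpts = "\n\n".join(best)
    return (
        "Answer the question using ONLY the bill excerpts below.\n\n"
        f"Excerpts:\n{excerpts}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical bill text, for illustration.
bill = (
    "Section 1. This act may be cited as the Example Act.\n\n"
    "Section 2. The annual registration fee is increased from $20 to $35.\n\n"
    "Section 3. This act takes effect July 1."
)
prompt = build_prompt(bill, "How much is the registration fee increasing?")
```

In a real app the prompt string would then go to the model, with the staff analysis chunked and retrieved the same way as the bill text.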