On some level, it’s safe to be a cynic. There’s not a lot of risk in staying on the sidelines and calling out all the potential pitfalls something new brings with it. Here’s a little sample of what this sounds like:
“__company’s__ products are all rife with data privacy issues”
“Using _____ (AI is the current one) only perpetuates existing harms, like biases, climate change, and racism.”
“AI hallucinates, so we can’t use it for anything.”
“__product__ isn’t made for legal. We need something that’s specifically made for legal use cases.”
“__some technology__ isn’t as good as full legal representation, so it’s off the table.”
There’s a lot of this kind of thinking in the legal world. Some of it comes from the natural skepticism lawyers have toward anything new; some of it comes from the patently stupid hype around every new legal tech tool. And skepticism is good! But skepticism that hardens into cynicism will keep you mired in doing things the way you’ve always done them, which isn’t good.
Overcoming skepticism
Despite my ingrained skepticism (believe it or not, I’m a licensed attorney), I still think generative AI can be a powerful tool for people trying to access justice. I also think we need to be careful, and not do stupid things like oversell the benefits, or offer to pay a lawyer $1 million to use an AI tool during oral argument before the U.S. Supreme Court.
But there’s a lot of reluctance to experiment (safely) with AI tools in the legal help community. I’m of the opinion that we should experiment - small, controlled, and safe experiments, but experiments nonetheless. And if the result of an experiment is that everyone hates it, great! Let’s figure out why that is, and then not do that.
Here’s an example of a tool that commercial businesses are building with AI (in the video embedded below, the relevant demo starts around the 38:30 mark):
Imagine this kind of tool for a legal aid organization: you could upload a document you got in the mail, and the AI agent would build a workflow for you, pull in relevant resources, and help you decide what to do.
Or think about an AI agent for renters making a complaint to their landlord about unsafe or unsanitary conditions: they upload photos or a video of their apartment, and it drafts the letter for them, cites the relevant city or county code provisions, and even sends it to the landlord with the renter CC’d (I’ve sketched what this might look like below).
Or think about one for reviewing documents for people facing a denial of Social Security Disability benefits. Or one that helps people understand the rights of their loved ones in nursing homes. Or one that helps people navigate the asylum process. There are plenty of other use cases.
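To make the renter example concrete, here’s a rough sketch of what the core of such a tool could look like, assuming the OpenAI Python SDK for the drafting step. The model name, the code-provision table, and the mail server are all placeholders; a real tool would need a vetted, jurisdiction-specific source of code provisions and a lot more safeguards.

```python
# A rough sketch of the renter-complaint tool, assuming the OpenAI Python SDK.
# The model name, code-provision table, and mail server are placeholders.
import base64
import smtplib
from email.message import EmailMessage

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-in: a real tool would draw on a vetted, jurisdiction-
# specific database of housing code provisions, not a hardcoded dict.
CODE_PROVISIONS = {
    "mold": "Municipal Code § 8.04.050 (sanitation)",
    "no heat": "Municipal Code § 8.28.010 (heating requirements)",
}

def draft_complaint_letter(photo_paths: list[str], description: str) -> str:
    """Draft a repair-request letter from the renter's photos and description."""
    images = []
    for path in photo_paths:
        with open(path, "rb") as f:
            encoded = base64.b64encode(f.read()).decode()
        images.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{encoded}"},
        })

    provisions = "\n".join(f"- {k}: {v}" for k, v in CODE_PROVISIONS.items())
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": [{
                "type": "text",
                "text": ("Draft a polite but firm letter to a landlord "
                         "requesting repairs. Describe the conditions shown "
                         "in the photos and cite only the relevant provisions "
                         f"from this list:\n{provisions}\n\n"
                         f"Renter's description: {description}"),
            }] + images,
        }],
    )
    return response.choices[0].message.content

def send_letter(body: str, landlord_email: str, renter_email: str) -> None:
    """Email the letter to the landlord, with the renter CC'd."""
    msg = EmailMessage()
    msg["Subject"] = "Repair request for rental unit"
    msg["From"] = renter_email
    msg["To"] = landlord_email
    msg["Cc"] = renter_email
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:  # placeholder mail server
        smtp.send_message(msg)
```

The interesting design questions are all in the guardrails: keeping the model from citing provisions that aren’t in the vetted list, and letting the renter review and approve the letter before anything actually gets sent.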
People in and out of the legal system have to navigate many byzantine systems. One of the obvious things that we, as a society, should do is reduce that bureaucracy and complexity wherever possible. But until we actually do that, maybe there’s something we can do in the meantime.
Here’s a controlled, limited experiment a legal aid org could do:
What if a legal aid organization created its own “Custom GPT” in the OpenAI GPT store (now available to free subscribers) and ran a controlled, limited test to see how helpful people find it? They could enable it to look up a directory of legal aid organizations by the user’s zip code (this is surprisingly easy to do; see the sketch below) and upload helpful materials to it. One caveat: in the GPT’s settings, make sure “Use conversation data in your GPT to improve our models” is turned off.
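On the zip-code lookup: a Custom GPT calls out to external services through “actions,” which you configure with an OpenAPI schema pointing at your own API. Here’s a minimal sketch of what that backend could look like, assuming FastAPI (which generates the OpenAPI schema for you); the data file and its field names are made up for illustration.

```python
# A minimal sketch of the directory lookup a Custom GPT action could call.
# Assumes FastAPI; legal_aid_directory.json and its fields are hypothetical.
import json

from fastapi import FastAPI, HTTPException

app = FastAPI()

# Hypothetical data file: a JSON list of orgs and the zip codes they serve,
# e.g. {"name": ..., "phone": ..., "zip_codes": ["53703", ...]}.
with open("legal_aid_directory.json") as f:
    DIRECTORY = json.load(f)

@app.get("/orgs/{zip_code}")
def orgs_for_zip(zip_code: str) -> list[dict]:
    """Return the legal aid organizations serving a given zip code."""
    matches = [org for org in DIRECTORY if zip_code in org["zip_codes"]]
    if not matches:
        raise HTTPException(status_code=404, detail="No organizations found")
    return matches
```

Point the GPT’s action at the schema FastAPI serves at /openapi.json, and the GPT can hand people real referrals instead of guessing at them.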
Take a set of users from a clinic who volunteer to test it out. Give each one a scripted scenario, like “pretend you’ve just gotten an eviction complaint, and you’re using this tool to find out what to do.” Then watch how they use it, and measure whether they actually find it helpful. Margaret Hagan at Stanford has been doing similar research (I think this is pre-publication). Let’s see whether the thing we keep talking about has any actual value, and if it does, iterate on it and figure out what works best.
The cynical take is that this isn’t going to be perfect, and AI isn’t going to be a substitute for full legal representation, so we shouldn’t do it. Otherwise we’re perpetuating a two-tiered system of justice for the haves and the have-nots. Or something like that.
But I’m on the side of experimentation and improvement. The alternative is more of what we have now.