The NYSBA Task Force on Artificial Intelligence issued its Report and Recommendations this past week. In it, the Task Force discusses whether generative AI models are capable of the unlicensed practice of law.
To begin a discussion about what constitutes the unauthorized practice of law (UPL), and specifically how use of generative AI, including LLMs such as ChatGPT, Claude, and Bard, may be considered UPL, we first examine what the practice of law is. (The report also lists Midjourney here, though that is an image generator, not an LLM.)
While there is no nationally agreed-upon definition of what constitutes the practice of law, the ABA Model Rules provide one (discussed below). Some states have also fashioned their own definitions. Yet, without a uniform definition and precise meaning of the practice of law, we fall back on the adage: “You know it when you see it.”
The ABA defines the practice of law as the application of legal principles and judgment regarding the circumstances or objectives of a person that require the knowledge and skill of a person trained in the law. However, New York State does not offer a precise definition of the term.
[discussion of ABA Model Rule 5.5, which I’m removing]
Based on these rules, AI programs that provide legal advice without a human lawyer in the loop arguably violate the rules and may be considered UPL.
Not great! Especially when they just got done explaining that they can’t tell us what the practice of law, or legal advice, actually is. Later the committee report goes on:
The reality of the situation is that generative AI is here, and it is not going away but will rather become more advanced and more available to the general public as time goes on. It should be noted that the challenges facing the legal profession are not unique. The medical profession also is addressing the challenges presented by patients who have consulted with generative AI and arrive at an appointment with opinions on what is the correct medical advice. Lawyers will similarly be challenged by clients who have compiled information and learned about their legal options using generative AI.
We believe it is important not to dismiss innovation, and to allow vendors and companies to develop programs that will help guide the general public. It is just as important for attorneys to educate themselves on AI so they can utilize it and understand how their clients may be using it as well.
So are we allowing vendors and tech companies to develop stuff, or are we going to say that’s UPL? Maybe both!
I would say that regulatory uncertainty is a big roadblock to A2J (access-to-justice) innovation, especially at this moment: instead of fighting over whether making legal forms available online is the unlicensed practice of law, we can now fight over whether an AI that predicts what word comes next is the unlicensed practice of law.
From a great article by Damien Riehl in the Minnesota Bench & Bar Magazine:
Indeed, if you upload a statute into an LLM and ask it to consider how your specific facts apply to that statute, the LLM will provide a response. And that response might be shockingly similar to the words that a lawyer would write. Maybe even better.
Those LLMs are also likely to provide the same types of disclaimers that you provide in offering details about your firm and its practice areas on your website: “This is not legal advice.” Of course, these disclaimers help keep lawyers from creating attorney-client relationships. Do they also keep consumers from believing that any attorney-client relationship exists when those consumers use tools like LLMs?
That’s a good point about disclaimers. Right now I’d say most LLM responses include them whenever they’re closing in on whatever legal advice actually is. But it may be that, because nobody knows what legal advice is, we put disclaimers on stuff that people who aren’t attorneys would never consider to be actual legal advice, like tweets about sports or politics or what restaurant you just ate at:
But in a study I did in this paper, the presence of a disclaimer made a huge difference in whether people thought an AI response was legal advice or legal information:
So what conclusion should we draw from this? Disclaimers are clearly an important part of perception, but I’m concerned that the guild could (perhaps rightly) view them as mere window dressing and judge AI based on the content itself.
My biggest concern is that we move from this period of benign neglect to one of muscular enforcement “for the protection of the public” and let the guild gaslight its way into something far worse than regulatory uncertainty.
This article by Marc Lauritsen on regulating legal document software is great on many levels, but I want to pull this quote out:
What is the right regulatory response? Is it good policy to forbid automated legal assistance? Should lawyers be given a monopoly over legal software as well as over personal legal services?
Later on, the author draws some very important distinctions, such as:
The difference between help and representation.
Help can cover a lot of bases: giving information, giving advice, completing a form, and so on. In some sense, legal help is what people already get from non-lawyers, like friends and family members, or from the internet. Representation, however, carries a whole host of other relationship implications, not least that the person representing is ultimately responsible to their client. That is far different from just giving help.
The difference between doing things lawyers do, versus saying you are a lawyer.
I think everyone would agree that holding yourself out as a lawyer is bad. But lawyers end up doing many things. Doctors, as part of their medical practice, end up writing patient notes and recording symptoms. Do we think a parent writing down their child’s temperature, or what time they gave their child medication, is engaged in the unlicensed practice of medicine?
So what’s the right answer? Honestly, I’m not sure that a free-for-all is what we really want, but at the same time I don’t want the organized bar trying to regulate AI the way it tried to regulate books like Norman Dacey’s How to Avoid Probate.
Some self- and state-regulated professions allow for supervised ancillary personnel. Pharmacists have pharmacy technicians. Dentists have dental hygienists. Physicians have physician assistants and nurse practitioners (who are becoming more and more independent). If a flesh-and-blood licensed lawyer is the ultimate responsible party over an AI legal function, would that not be legal?