How Would Regulating Generative AI Even Work?
And the problems with the "I'll know it when I see it" test for what is legal advice.
No, my smock rampant. The right is, my master
Knows all, has pardon'd me, and he will keep them;
Doctor, 'tis true—you look—for all your figures:
I sent for him, indeed. Wherefore, good partners,
Both he and she be satisfied; for here
Determines the indenture tripartite
'Twixt Subtle, Dol, and Face. All I can do
Is to help you over the wall, o' the back-side,
Or lend you a sheet to save your velvet gown, Dol.
Here will be officers presently, bethink you
Of some course suddenly to 'scape the dock:
For thither you will come else.

Ben Jonson, The Alchemist
Regardless, we don't want to get involved with all them lawyers
And judges just to hold grudges in a courtroom
Outkast
As I’ve said before, unless you’ve been living under a rock you’ve heard about Generative AI and how it’s going to impact lawyers, courts, and people seeking legal help. I’m increasingly concerned about how the state bars are going to try and regulate things.
What follows is my in-progress analysis of this, and some recommendations. I say “in progress” because I’m still thinking this over, and am honestly not sure what the absolute best regulatory structure is for allowing innovation and access while still ensuring that the public isn’t harmed. In other words, these aren’t even hot takes, maybe warm takes.
I’m also including case citations! And footnotes! Don’t judge me on citation format since: 1. I don’t care, and 2. I no longer own a Bluebook.
Let’s get started with some Q&A…
Wait, can state bars even regulate Generative AI?
Great question - it seems counterintuitive that a state bar association that regulates lawyers and administers that state’s bar exam could extend its reach over a generalized technology. Generative AI isn’t marketed as a legal product, and people are using it for all kinds of things that aren’t seeking legal advice.
But state bars aren’t limited to just regulating lawyers. Any time someone, somewhere who isn’t a lawyer gives legal advice (whatever that is) or practices law, the regulators in state bar offices everywhere get a special tingly feeling.
It would indeed be an anomaly if the power of the courts to protect the public from the improper or unlawful practice of law were limited to licensed attorneys and did not extend or apply to incompetent and unqualified laymen and lay agencies. Such a limitation of the power of the courts would reduce the legal profession to an unskilled vocation, destroy the usefulness of licensed attorneys, as officers of the courts, and substantially impair and disrupt the orderly and effective administration of justice by the judicial department of the government; and this the law will not recognize or permit.
West Virginia State Bar v. Earley, 109 S.E.2d 420 (1959), emphasis mine.
So yes - state bars can step in if they think the unlicensed practice of law is happening.
But what if services using Generative AI put a disclaimer somewhere saying that it’s not giving legal advice?
That doesn’t matter. There are many, many UPL cases where the person convicted of practicing law without a license had some sort of disclaimer or waiver.1
Doesn’t there have to be actual harm to someone before a state bar can step in?
Nope. From my least favorite Florida Supreme Court case, TIKD:
There is also no requirement in cases involving the unlicensed or unauthorized practice of law that the Bar produce evidence of actual harm to the public; rather, the potential for such harm is sufficient.2
I think there’s a great “parade of horribles” argument to be made that the potential for harm is immense. Generative AI is a “great bullshitter” in that it can spew out facts as well as fiction, and all with a very high appearance of confidence. So just use your imagination on how someone could be harmed in asking ChatGPT for advice about their divorce, or better yet, go and try it.
So what is the unlicensed practice of law, anyway?
Put another way: Is there a definition of “legal advice” or “practice of law” somewhere that Generative AI models can get trained on?
No! Stop trying to impose logic and common sense onto this!
State bars determine what constitutes legal advice or the practice of law on a case-by-case basis, and state bars have so far resisted creating a firm definition of the concept.3 There is a kind of test laid out in Florida Bar v. Sperry:
[I]n determining whether the giving of advice and counsel and the performance of services in legal matters for compensation constitute the practice of law it is safe to follow the rule that if the giving of [the] advice and performance of [the] services affect important rights of a person under the law, and if the reasonable protection of the rights and property of those advised and served requires that the persons giving such advice possess legal skill and a knowledge of the law greater than that possessed by the average citizen, then the giving of such advice and the performance of such services by one for another as a course of conduct constitute the practice of law.
Florida Bar v. Sperry, 140 So.2d 587 (1962)
Part and parcel with the practice of law is giving legal advice. There are many cases where state courts said things like merely completing forms was not practicing law, but giving people information about what to do with the form was practicing law.4
The point of Generative AI is to generate - not to recite or fill in blanks. You can make it do whatever you want, to a point, by asking it. That’s why I think the case-by-case test doesn’t work well: there are myriad possible scenarios, all of them real.
So let’s say regulators do step in. What does that look like?
I think there’s at least 3 or 4 options here, and I’m still working my way through these but want to get them down on paper at this point. So let’s get down to brass tacks.
Option 1: A Complete Ban on Generative AI
First off, I think this is legally unenforceable due to the First Amendment. There’s a good analysis in Upsolve v. James5, and the upshot is that any complete ban on the use of Generative AI would be the antithesis of the narrow tailoring required to survive strict scrutiny.
Second, any kind of complete ban is unenforceable in a practical sense too: like the plaintiff in United States ex rel. Gerald Mayo v. Satan and His Staff, state bars would be forced to try to shut down a myriad of entities both here and abroad.
So why include this option? I think it’s not outside the realm of possibility that a complete ban gets proposed by some enterprising state bar regulators (looking at you, Florida and California).
Option 1.5: A Lightweight Ban
In this option, state bars decide that Generative AI is ok as long as it’s censored when creating content that involves legal concepts. So for example, you can ask ChatGPT “Where is the best place to park near the courthouse,” but as soon as you ask “Give me three good arguments the judge shouldn’t impound my car” it stops and says “Sorry I can’t give legal advice.”
Big problem - every damn thing can be a legal concept if you want it to be. Taxes. Cars. Social Security benefits. Hospital admissions. Just take a look at the list of topics in the two main legal help taxonomies, LIST and NSMI, and see how many everyday things are in there.
And as I said above, there’s no clear definition of what is and isn’t legal advice or the practice of law. So this is, IMO, pure fancy to think a regulation could be written this way and actually work.
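To see why the “lightweight ban” breaks down, consider a toy sketch of the screening it would require. Everything below is hypothetical - the keyword list, the function name, and the example prompts are mine, assuming a provider tried to flag “legal concepts” with a simple keyword match:

```python
# Toy sketch of an Option 1.5-style filter: flag prompts that touch
# "legal concepts" using a naive keyword list. Purely illustrative.
LEGAL_KEYWORDS = {"judge", "court", "lawsuit", "custody", "impound"}

def is_legal_topic(prompt: str) -> bool:
    """Return True if the prompt contains any 'legal' keyword."""
    words = {w.strip(".,?!").lower() for w in prompt.split()}
    return bool(words & LEGAL_KEYWORDS)

# Blocks the obvious phrasing...
is_legal_topic("Give me three good arguments the judge shouldn't impound my car")  # True

# ...but waves through a functionally identical request with no keywords:
is_legal_topic("What should I tell the city so they give my towed car back?")  # False
```

A fancier classifier doesn’t escape the problem; it just moves it. With no workable definition of “legal advice” to train against, any filter is drawing the very line that state bars themselves have refused to draw.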
Option 2: A Balancing Test
A balancing test would weigh the potential harm to the user against the potential benefit to them. This kind of test is outlined here at pages 41-42:
“the relevant inquiry is ‘whether the risk of harm is substantially greater [than with assistance of counsel] . . .; whether consumers are able to gauge that risk; and whether categorical prohibitions . . . are the best response.’ In its current form, AI has the potential to cause significant harm to a pro se litigant, which outweighs potential benefits, particularly as consumers would not be able to gauge whether the AI is giving them good advice or not”6
I think this is more of a step in the right direction, but in practice it could work the same as an outright ban, just without the First Amendment concerns.
Let’s say a state supreme court goes through the balancing test, holds hearings, the whole bit. At the end of it, the court issues a very well-reasoned opinion saying that because the potential harms outweigh the potential benefits, pro se parties aren’t allowed to use Generative AI. Who’s going to stop them? Consider that both Google and Bing are furiously toiling away to out-do each other in bringing Generative AI to search - which for most people is the front door to the internet.
One method of enforcement would be to make people without lawyers certify under penalty of perjury on their court filings that they did not use Generative AI. I think that would have a chilling effect on people filing a case, lead to people simply ignoring it, or some combination of both. All three are bad outcomes.
Option 3: State Bars Proactively Issue Guidelines for Generative AI
This is the one I think is most practical, which means it’s probably the one least likely to happen. State bars would issue a defined set of recommendations for Generative AI companies and applications that leverage it. Some points:
- Some way of showing accuracy and non-hallucination;
- Make any predefined prompts public, as well as any fine-tuning data that was used; and
- Provide a robust disclaimer on anything that uses Generative AI that it shouldn’t be taken as legal advice.
In essence, as long as a service using Generative AI follows these steps, it gets some type of “seal of approval” from the state bar. Given that the Florida Bar freaked out over people saying other people had areas of expertise on LinkedIn (yes, you read that right), I can’t imagine this kind of thing would ever happen. But it’s fun to think about.
Again, these are very much in-progress thoughts, and I’m still reading through the literature. I’d like to turn this into something more detailed, and I’m wondering if a state legislature is a more appropriate vehicle due to state bars’ tendency to clamp down on anything that they don’t remotely like.
1. “[D]isclosure of its nonlawyer status to the public does not permit it to do what its status as a nonlawyer prohibits it from doing.” Fla. Bar v. TIKD Servs. LLC, 326 So. 3d at 1082 (Fla. 2021)
2. TIKD, supra.
3. See Florida Bar v. Brumbaugh, 355 So.2d 1186 (1978), citing State Bar of Michigan v. Cramer, 399 Mich. 116 (1976): “[A]ny attempt to formulate a lasting, all encompassing definition of ‘practice of law’ is doomed to failure ‘for the reason that under our system of jurisprudence such practice must necessarily change with the everchanging business and social order.’”
4. The Florida Bar re Advisory Opinion — Medicaid Planning Activities by Nonlawyers, 183 So.3d 276 (2015), citing The Florida Bar v. Brumbaugh, 355 So.2d 1186 (Fla. 1978)
5. Upsolve, Inc. v. James, 22-cv-627 (PAC) (S.D.N.Y. May 24, 2022)
6. Gunder, Jessica, Why Can't I Have a Robot Lawyer? Limits on the Right to Appear Pro Se (March 16, 2023). Tulane Law Review, Forthcoming. Available at SSRN: https://ssrn.com/abstract=4391167 or http://dx.doi.org/10.2139/ssrn.4391167