What is best in life?
What should we really be teaching law students about AI? And Google I/O brings back a classic
What do I worry about with AI? Well, lots. But frequently I worry that we’re all worrying about the wrong things, however meta that sounds. For example, I see a version of this question get bounced around a lot on LinkedIn and other places:
“How should we go about teaching law students to adopt AI?”
There are a few assumptions underlying that question. First, there’s an assumption that everyone will have to be using AI in some form in the very near future. I don’t disagree with that, at least in the sense that everyone will be interacting with AI, or AI-managed systems, or consuming AI-generated content, in the very near future if we all aren’t already.
It’s the second assumption that worries me - that because AI is going to be ubiquitous, we should teach students to see themselves as parts of systems that will be using AI, and therefore to outsource some of their human judgment to it. Let me explain that better.
I worry that in schools teachers and administrators are wringing their hands over “how do we teach our students to use AI” instead of worrying about teaching students to think critically and for themselves.
Here’s the thing - Google, Meta, OpenAI, Anthropic, etc. are all busily spending billions of dollars on getting people to use their AI products. They’ve got entire office parks of shiny buildings in the most expensive real estate market in this arm of the Milky Way, full of high-IQ designers, researchers, developers, product managers, and sociopaths working around the clock on this very question, fueled on free, organic, non-GMO snack food and high-quality espresso. And it’s working. Look at these percentages from a study on how different people use AI - it’s from a year ago. ChatGPT is currently the fifth most-used website in the world.
I don’t think we should be worrying about whether or not students will know how to type words into the magic box on their phone/tablet/laptop/brain implant. Prompting is so 2024. It’s 2025, baybee.
Do we really want to be teaching students the ability to outsource their critical thinking skills to a machine? I would hope that we double down on the opposite - teaching the critical thinking skills necessary to navigate the world we’re creating for ourselves.
Let me be clear on one point though: I do not, in any way, think we should all become anti-technologists or whatever the atheists on BlueSky would call themselves. Admittedly, I am probably on the hype-guy end of the AI spectrum, at least according to my haters. Generative AI is on track, I think, to be one of the most important technological innovations we’ve yet seen. But that shouldn’t mean we teach our children to be its servants.
So what to do? I have two kids, both under the age of twelve, and this keeps me up at night. My hope is that my kids learn to think for themselves - that their education isn’t some kind of glorified training course aimed at slotting them into a cubicle at age 23, where they feed instructions into the magic AI box. We should make education about developing the mental framework to think, reason, and tell good from bad, right from wrong, plausible from implausible. To give students the tools to imagine what the world should be and the means to make it reality.

Anyway.
AI 2027
If you haven’t read AI 2027, you should. It’s eye-opening, although when they talk about the president doing something rational, I worry that they don’t know who the president currently is…
Google I/O
Another thing I worry about with AI is that we’re still stuck in old modes of thinking and framing things. Here’s an example - a lot of legal CLEs talk about how AI can:
create summaries;
create first drafts;
help with research;
re-word paragraphs;
create correspondence;
and so forth.
But as pointed out by Ethan Mollick in his Substack:
Companies used to pay tens of thousands of dollars for a single research report, now they can generate hundreds of those for free. What does that allow your analysts and managers to do? If hundreds of reports aren’t useful, then what was the point of research reports?
A lot of “legal work” has been quantified into documents and time. A typical billing entry for an associate looks like this:
3.8 hours: Research and analyze case law and statutes concerning legal issue X and draft memorandum concerning same, comparing and contrasting facts of current case and procedural posture.
If AI can do that in less than 10 minutes, essentially for free, what is the point of the associate doing it? Or rather, what is the value proposition to the client in asking them to pay $300 x 3.8 ($1,140) for that? Training up new lawyers?
And yes, I understand that AI does not have malpractice insurance, hasn’t passed the ethics exam, all of that. But what is the value, currently, of what we conceive of as legal work? Maybe we need to conceive of it differently.
But back to Google I/O - there was a point to this, I promise.
Google (and probably OpenAI) are currently rolling out AI interfaces that interact with the real world, in real time. If you subscribe to Gemini Advanced or whatever they’re calling it, you can use the app on your phone to feed video from your camera directly into the AI’s brain or whatever, and have it help you out with things. Google demoed this by showing a guy using it to help him fix his bike.
But imagine this - a self-represented person being able to point their phone at a court form and say “can you help me fill this out?” Someone using it to get advice on a homeowner’s or renter’s insurance claim by giving it their policy and then showing it the water damage from a leak. An automated car insurance system that, in the event of a crash, uses the telemetry from both vehicles, traffic cam footage, and the medical histories and health insurance data of everyone involved to produce a repair estimate and a medical cost forecast, and then offers to grease the entire process by going ahead and setting up medical and body shop appointments for its insured.
But we keep trying to put AI in a box where it will neatly summarize documents or create first drafts. It’s too big to fit in that box already.
One of the things Google demoed is a fully AI-integrated set of glasses that reads the world around you, talks to you, and does things for you. Google Glass vibes notwithstanding, we need to be thinking about what the world looks like when people wear those into court hearings, or a deponent is using them while giving testimony. They’re very stylish, unlike the OG Google Glass. But I don’t think law is ready for a world where legal advice is free and on-demand through a damn pair of Warby Parkers.