
I had to laugh to myself when a journal editor said that it was all they could do to avoid an AI-only volume. We were on a panel discussing artificial intelligence in law school and I felt their pain. Who wants to read anything about AI any more? Haven’t we all had enough? At the same time, it is the gift that keeps on giving for people studying legal research and legal writing. I can’t get enough cautionary examples and comparative points.

One of the interesting things the editor pointed out was that the glut of wannabe published authors on AI was creating pre-emption problems. The pipeline of AI articles was so full that, by the time they might reach publication, someone else, somewhere else, had already discussed the same issue and published it. It seems as though, if there were ever a time to stay in one’s lane and focus on something other than AI, now is the time.

The students who led the panel did a great job of setting up questions. They were curious about the impact on law libraries, on legal research, on preparing for practice. Even better for me, the questions picked at threads that I have also been picking at.

Can Law Students Use AI Ethically?

The first question had to do with ethical use. Can it be done? The short answer is yes. The longer answer is that AI, specifically generative AI, isn’t the problem (and it may also not be a solution); the problem is that some lawyers are exchanging their process for a product.

We have used a pretty strong search process for a long time. We would search for something and get results back. The search tools have improved over the years by incorporating artificial intelligence like natural language processing. We went from having to match a word to being able to retrieve results that were in the ballpark of a word.

But we always had to check the results. Run a Google search for a great restaurant that serves tachin morgh. Do I just go with the first search result, or do I click on a few to see what the reviews or menus or prices look like? My experience, both as a searcher and as someone who watches search engine optimization, is that people click through to links. They look at some of the results, find something that is satisfying (or at least satisficing) or, if not, tweak their search with a new query. The rule of thumb for search engine marketing was to get onto the first page, which, by default, had only 10 results on it, although users could display more results on their first page. But people seemed to work through at least some of those 10 results.

The same thing has always been necessary with commercial legal publishers. Browse to a resource (optional), run a search, read the results, rinse, repeat. As I told my research students, there is no perfect first search. You need to refine and iterate. Each repetition should bring you closer to responsive information for the question to which you’re seeking an answer.

When I was walking students through some examples of prompts and responses, we discussed how AI output really resembles secondary sources. The responses lack the human expertise behind most secondary sources, but that is the role AI output can play: a summary and a guide to relevant resources, but not an endpoint. Also, holy cats, running anything but the most basic prompt—what is essentially nothing more than a natural language query—can take forever. I’m finding AI on legal publisher platforms unworkable in the classroom just due to the staggering delays in results. I don’t know how lawyers can justify running an AI prompt for 10 minutes and then doing a follow up. If AI is a time saver in that situation, it raises questions about the lawyer’s own skills.

AI should not change that process. The difference in legal research terms is that you are delegating more of the information processing portion. The AI, not you, is browsing to resources. It, not you, is selecting likely matches, and it, not you, is reading them. It is then providing you with an answer and you then have the choice to adopt that answer or investigate it.

If a law student or lawyer keeps to their process, they will, at this point, read every result. That is what they would have done after a search using non-generative AI. A law student or lawyer who fails to take this step is risking a professional misstep, potentially leading to discipline or sanctions. But if they do not verify their research, the fault lies with the researcher who altered their process, not with the AI.
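
To make that verification step concrete, here is a minimal sketch of the floor it sets: every authority an AI answer cites gets looked up in a trusted source before anyone relies on it. Everything in it is a hypothetical placeholder (the citations, the lookup table, the helper functions); in practice the lookup is a publisher platform, PACER, or CourtListener, and the real work is reading the opinion, not just confirming that it exists.

```python
from dataclasses import dataclass
from typing import Optional

# Toy stand-in for a trusted source (publisher platform, PACER, CourtListener,
# a print reporter). The dict exists only so the sketch runs; the citations
# in it are made up for the example.
TRUSTED_SOURCE = {
    "2025 WL 1111111": "…full text of a real, retrievable opinion…",
}

@dataclass
class CitedAuthority:
    citation: str       # e.g., a Westlaw or reporter citation from the AI answer
    claimed_point: str  # what the AI output says the authority stands for

def lookup_citation(citation: str) -> Optional[str]:
    """Return the opinion text if the citation resolves, else None."""
    return TRUSTED_SOURCE.get(citation)

def triage(authorities: list[CitedAuthority]) -> list[str]:
    """First pass only: confirm each cited authority exists.
    Existence is necessary, not sufficient; a human still has to read each
    opinion and confirm it actually supports the claimed point."""
    report = []
    for auth in authorities:
        exists = lookup_citation(auth.citation) is not None
        status = "exists; read it before relying on it" if exists else "NOT FOUND; do not cite"
        report.append(f"{auth.citation}: {status}")
    return report

if __name__ == "__main__":
    # Hypothetical citations pulled from an imaginary AI answer.
    answer = [
        CitedAuthority("2025 WL 1111111", "the point the answer attributes to it"),
        CitedAuthority("123 F.4th 456", "a confabulated authority"),
    ]
    for line in triage(answer):
        print(line)
```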

AI is a research tool. A researcher has to figure out if and when it fits into their research process. They can make a choice to skip AI. We are not—yet—at the “AI made me do it” stage.

What Can Legal Professionals Do to Mitigate AI’s Defects?

There was a follow-up question to that: can researchers and AI developers mitigate the bad outcomes we’re seeing? The answer is really the same thing. No, they can’t mitigate generative AI’s failings, so they must verify the output. Most technologies that lawyers use are designed for a purpose by someone other than the lawyer. Most law students and legal professionals are not in a position to impact those technologies directly. No one has ever been able to get Microsoft Word to reveal codes the way WordPerfect did, even if we might have wanted that kind of granular understanding of our documents. Even when a law firm or legal publisher provides a RAG layer with a large language model, it is not enough to eliminate all of the errors that generative AI may produce. We are not all going to be perfect at prompting generative AI. You still need your research process.

Also, one of the impressions I get is that AI is just showing how poorly lawyers may already have been performing legal research. When one party files a pleading with confabulated citations and content, and a judge assimilates the arguments and makes a judgment without noticing, the problem is with the professionals, not the technology. Who knows how many lawyers and judges in how many courts were using work product with unsubstantiated law in it prior to generative AI? Who but the lawyers and judges would catch it?

I really think this is a process problem. Sloppy lawyering should result in discipline and sanctions and legal malpractice findings. The takeaway for me is that AI is perhaps exposing this sloppy lawyering rather than being the direct cause of it. Also, lawyers are not great at levying consequences on their self-regulated peers. The risk of AI slop and sloppiness isn’t very high for lawyers.

There was a discussion about the ABA Model Rules and specifically Comment 8 to Rule 1.1. I am not a fan of Comment 8. I think it merely restates the requirement of the actual rule, which is that a lawyer has to be competent. It says “technology” but it means “computers,” because lawyers still had to be competent using a paper Shepard’s citator set—which seems clearly to be a “technology”—and could be in trouble if they muffed that use through incompetence. The comment is superfluous.

I would really not want to see the regulators try to keep up with changes in technology by adding to or amending Comment 8 to consider artificial intelligence. This will not be the last technology that lawyers need to adapt to, and I think an expectation of competence is already clearly stated. States like Alabama and Georgia that have not adopted Comment 8 do not seem any worse off than states like California or New York that have. The comment and any AI enhancements are not going to make clients safer than they already are with the basic language of the rule.

Should Law Students Use AI?

The broader question I would ask is: should law students or lawyers use any technology? If the answer is yes, then we should approach adoption of AI in the same way we would any other technology. At its heart, a lawyer should use a technology if it improves their ability to practice law. If it doesn’t, they shouldn’t. If the only way it improves their practice is by producing unethical outcomes, I think it’s pretty clear it shouldn’t be used.

But AI can be used as part of an ethical, professional process. The real question is whether the benefits of using generative AI are worth the drawbacks for a lawyer or law student. I don’t think we even have to look at confabulated citations and misstated law. I went looking for an opinion using generative AI, on a RAG-infused platform, and came up with nothing, even though the opinion I sought was already in PACER and on CourtListener via RECAP.

A screenshot of a Westlaw CoCounsel result when asked for a citation for a case that is in the Westlaw database. It says that, "after conducting an extensive search through Ninth Circuit Court of Appeals cases from 2017 to 2021, no specific case was found..." The AI prompt had said the opinion was from "this week," in other words, October 2025.
A Westlaw CoCounsel result telling me that an opinion with the Westlaw citation of 2025 WL 2951371 doesn’t exist.

Let’s start (and perhaps end) with time. We may think legal professionals sell their knowledge, but it’s really documents and time they monetize. When I fire up a Westlaw CoCounsel or Lexis Protégé request and it takes 5 minutes to complete, I think of it as a tenth of a billable hour gone. If it gets me closer to my goal, that’s great. If it doesn’t, I’ve just lost a billable segment.
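
As a rough sketch of that math (mine, and assuming the common six-minute, tenth-of-an-hour billing increment), the wait time rounds up like this:

```python
import math

def billable_hours(wait_minutes: float, increment_minutes: float = 6.0) -> float:
    """Round elapsed time up to the nearest billing increment, expressed in hours."""
    return math.ceil(wait_minutes / increment_minutes) * increment_minutes / 60.0

print(billable_hours(5))       # one five-minute AI request -> 0.1 hour
print(billable_hours(5 + 5))   # a second, follow-up prompt -> 0.2 hour
```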

Inevitably, it will not always get me closer to my goal and I will need to re-prompt. I think it would be rare to hit the complete answer on the first attempt, especially for lawyers who are not prompting all day long. Yes, that’s a me problem, but it is also now a twenty-percent-of-a-billable-hour problem. As a researcher, I am now faced with determining whether I could be writing better prompts and, given Rule 18.3 of the Bluebook, whether I really want to.

Which leads me to my second bit of good news: These clunky search-related and AI-related rules are all located in The Bluebook’s Whitepages, which are specifically designed for law review students and academic authors. Those rules generally do not apply to practicing lawyers and judges. The simpler Bluepages, which are designed for legal practitioners, do not include these new rules. And while a lawyer may use the Whitepages rules to supplement the Bluepages, you certainly don’t have to.

Review: What a mess. Practicing lawyers and judges should not use these rules to cite their search processes. 

Everything You Need to Know About the Bluebook, David J.S. Ziff, Washington State Bar News, October 15, 2025

The reason artificial-intelligence-as-natural-language-processing was so valuable is that it sped up searching. Even if I was a wizard with Boolean connectors, I could fall back on NLP to help me in situations where I wasn’t making headway. A sequence of searches and review of retrieved results could be iterated through rapidly, without the lag that generative AI seems to require. I could switch between the two tactics to shake out my search results.

But time doesn’t have to be the essence. What about skills development? I’ve already touched on that recently (see, even my writing pipeline is constipated with confabulation). So let’s also talk about skills atrophy. Law students need to develop these skills, so we should let them know that AI is going to diminish that skill building. Lawyers need to keep building these skills through the early years of their practice. But we can also let them know that, in the future, even after they’ve built some skills, reverting to generative AI may undercut what they’ve acquired.

I mentioned this study on endoscopists (here’s an explainer), those doctors who handle colonoscopies. It’s obviously early days, but the improvement in AI polyp detection may be offset by a decrease in the ability of the doctors to handle colonoscopies unassisted after using AI. It makes sense that this would be a concern for other skills-based professionals. The talk of using AI to replace entry-level, less-expert roles in law firms may assume that someone can become a more senior lawyer without having developed those skills. It may also mean that those senior lawyers, if the AI becomes unavailable—say, if any of the platforms rely on a Microsoft Azure or Amazon Web Services region that crashes out—will be unable to perform skills that they have not themselves developed and maintained, having delegated them to AI.

Many randomized trials show that AI use for polyp detection increases [adenoma detection rate (ADR)] by absolute 5-20%, generating enthusiasm for this technology among physicians, industry, and society. However, this excitement has diverted attention from another key clinical question: the impact of AI on human capability…. Our primary analysis showed the continuous exposure to AI reduced ADR of standard, non-AI assisted colonoscopy from 28.4% to 22.4% with a 6% absolute difference, suggesting a detrimental effect on endoscopist capability

Endoscopist de-skilling after exposure to artificial intelligence in colonoscopy: a multicenter observational study, Krzysztof Budzyń and others, The Lancet Gastroenterology & Hepatology, Volume 10, Issue 10, 896–903, October 2025.

AI is not a 100% tool. We know it isn’t able to hit 100% accuracy in legal research responses. To be fair, lawyers do not hit 100% accuracy in legal research with or without AI. But as long as lawyers are also searching using other skills, rather than delegating that learning or maintenance of learning to someone else, they should be able to weather whatever changes come in research.

So yes, feel free to use AI. But use it only as one tool among many, so that you remain able to work without it. It’s the same way that law students should not be relying on one legal publisher platform but should be learning information literacy, so that they can adapt to whatever platform they are provided, or can afford, when they enter practice.

What Should Law Students Expect When They Enter Practice?

I love the future-looking question. In 2006, did we know about the impact that the iPhone would have? Or the cloud? Speaking of clouds, the puffery around AI is something to behold. Nothing spoke to me more than OpenAI pivoting to porn, because porn drives the internet. Perhaps OpenAI should have started there before rolling ChatGPT out to everyone.

It spurred innovation in other areas, too. Online pornography providers were pioneers in web technologies, such as video file compression and user-friendly payment systems, and in business models, such as affiliate marketing programmes. All these ideas went on to find much wider uses. And as the internet expanded, it gradually became less for pornography and more for all that other stuff.

Does pornography still drive the internet?, Tim Harford, June 4, 2019.

Talk of the AI bubble is growing. Except for the consultants still pounding the agentic AI story like McKinsey—if you haven’t read “The Big Con” yet, I highly recommend it—we are seeing cooler suggestions about what’s coming. For one thing, agentic AI sounds great but useful versions may be a decade away. It also may have some real trust issues to overcome.

This is without touching on how lawyers might experience AI in the future, through their law libraries or otherwise. How about AI-powered web browsers like Comet from Perplexity or OpenAI’s Atlas? What are those browsers going to be fed on, as more sites take adversarial postures to those companies and their large language model scraping? Sites are getting behind Cloudflare firewalls or customizing their own, and creators are using products like Nightshade and Glaze to poison AI learning from their images. We’re learning, from Anthropic no less, that it may take only 250 documents to poison a large language model of any size.

Poisoning attacks can compromise the safety of large language models (LLMs) by injecting malicious documents into their training data. Existing work has studied pretraining poisoning assuming adversaries control a percentage of the training corpus. However, for large models, even small percentages translate to impractically large amounts of data. This work demonstrates for the first time that poisoning attacks instead require a near-constant number of documents regardless of dataset size…. We find that 250 poisoned documents similarly compromise models across all model and dataset sizes, despite the largest models training on more than 20 times more clean data.

Poisoning attacks on LLMs require a near-constant number of poison samples, Alexandra Souly and others, arXiv:2510.07192 [cs.LG], October 8, 2025

A direct question from a student was whether, or how long it would be until, we could rely entirely on generative AI for a citation to be correct every time. The answer is “never,” so long as lawyers are bound by ethics rules. A lawyer will never be able to take a search result or a generative AI output and skip the last step of the process: verifying that the authority exists and that it says what the output claims. Unless the courts get rid of Rule 11 and its equivalents, and regulators stop regulating, there will never be a time when a lawyer can defensibly say, “meh, looks good. Let’s file!”

Which is totally fine with me. That doesn’t negate the possibilities that artificial intelligence has provided and that generative AI may offer. It just means that there is a legal research process. It is a skill that is developed over time and that uses a variety of tools (perhaps including generative AI), and failing to use it appropriately should carry a significant professional penalty when that failure has a negative impact on a client, a court, or another process. As artificial intelligence embedded in legal research tools improves, it may address the time and productivity issue. But it will never get rid of the obligations that a lawyer has to know what the law is and says.

In the meantime, AI will be the gift that keeps on giving. I have been able to use it in legal research and writing classes and look forward to seeing it pop up in faculty assemblies, professional meetings, and other courses I teach. I saw on an EBSCO webinar that someone had compared generative AI to the steam engine and electricity for its profound, inescapable impact. That may turn out to be true. But for future lawyers and those who train them, I think that impact is constrained even if its use is pervasive.