Benjamin Alarie and Abdi Aidid are legal experts who are deeply involved in the development of legal technology. Later this year they are releasing a new book, The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better.
Benjamin Alarie is a tax law professor at the University of Toronto and has been in the tax law profession since 2004. He became interested in the future of legal education and how artificial intelligence will affect the profession, which led him to co-found Blue J, a legal technology company in Toronto. Abdi Aidid, meanwhile, practiced as a commercial litigator in New York before becoming the Vice President of Legal Research at Blue J, where he led a team of lawyers and research analysts and helped develop AI-informed predictive tools that forecast how future courts are likely to rule on new legal situations. Abdi is now a full-time law professor at the University of Toronto, teaching subjects like torts and civil procedure.
Naming the book “The Legal Singularity” is a big claim, so we asked the authors to explain what they meant by it. According to Abdi Aidid, the legal singularity is the practical elimination of legal uncertainty and its impact on our institutions and society. It is a future state where the law is knowable in real time and on demand, and where we can start doing things we were not previously able to do because the law was either difficult to ascertain or lacked a normative consensus about what it ought to be. The concept is related to the idea of a technological singularity, but it is not a totalizing event in the way the technological singularity is. Instead, it is an equally socially important concept that focuses on how technological improvements affect the law and related institutions.
Alarie and Aidid suggest that the legal market needs to address bias in AI tools by keeping humans in the loop in arbitration and judicial contexts for a significant period of time. They believe that even as the legal singularity approaches and people begin to have confidence in algorithmic decision making, humans should still be involved in the process to audit machine-generated decisions. They argue that this is necessary because the law deals with deeply human questions, and there is more at stake than just ones and zeros: humans have to contribute to the legal system’s notions of mercy, fairness, empathy, and procedural justice. They also suggest that involving humans in the process helps to inform the technology before disastrous consequences occur, and helps to refine it. They therefore emphasize the need for human review of machine judgments, which will lead to accelerated learning in the law. Finally, they highlight that the legal market needs to distinguish between problems that are a reflection of unaddressed social ills and those that are genuinely new technological problems, and they stress that the legal market remains collectively responsible for resolving both.
Listen on mobile platforms: Apple Podcasts | Spotify
Transcript

Marlene Gebauer 0:08
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Marlene Gebauer,

Greg Lambert 0:14
And I’m Greg Lambert. So Marlene and I are in the office today. We had a presentation at lunch and we just decided to stick around here. But we have brought in a couple of authors from Toronto. We have Abdi Aidid, Assistant Professor of Law at the University of Toronto, and Benjamin Alarie, the Osler Chair in Business Law at the University of Toronto and also an affiliated faculty member at the Vector Institute for Artificial Intelligence, which I just love. They have a new book coming out in July of this year called The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better. Abdi and Ben, welcome to The Geek in Review.

Benjamin Alarie 1:00
Thanks, Greg. Thanks, Marlene.

Abdi Aidid 1:02
Thanks for having us.

Marlene Gebauer 1:03
So before we dive into the book, Abdi and Ben, tell us a little bit about yourselves and what got you interested first in the law, and then in AI and the law. Ben, why don’t you take a stab at this first?

Benjamin Alarie 1:14
Sure. This could be a very long answer, but I’m going to give you the shorter one. I’ve been a tax law professor here at the University of Toronto since 2004. I went straight through undergrad, went through law school, did graduate work in economics, graduate work in law, and then did a judicial clerkship at the Supreme Court. As an academic, my focus is tax law; I’ve been in the tax law professing game since 2004. In 2011, I was associate dean of the faculty here at the University of Toronto, and it was in the context of leading a review of the first-year curriculum that I got really interested in thinking about the future of legal education. It became totally obvious to me that artificial intelligence is going to dramatically affect the future of the profession and of legal education. That led me to ultimately co-found Blue J, a legal technology company here in Toronto, with a couple of my colleagues from the law school. It was in that context that I then became interested in writing this book with Abdi. So it’s a series of coincidences that led me to this point, but really an intense interest in artificial intelligence and the law. The law fascinates me, tax law included; it’s kind of the operating system for society. And so this is a continuous outgrowth and development of my thinking about the law.

Marlene Gebauer 2:48
Oh, that’s interesting. I’m sure many of our listeners are familiar with Blue J. Abdi, how about you?

Abdi Aidid 2:56
Sure. So my answer really should be shorter, because I’m a lot earlier in my career than Ben.

Greg Lambert 3:02
That’s a polite way to put it.

Benjamin Alarie 3:05
Yeah, he’s saying I am old.

Abdi Aidid 3:08
Hardly. Ben’s actually one of our youngest faculty here at the law school. So I, like Ben, grew up here in Canada, went through my undergraduate education, went off to law school, and later did a postgraduate law degree. I practiced for a few years as a commercial litigator in New York, focusing on complex corporate litigation and arbitration. In that time, I quite enjoyed what I was doing, but I noticed pretty quickly that a lot of it was drudge work, and it was, at least on occasion, a bit of a letdown compared to what I’d studied. You know, I went to law school and we were debating lofty ideas and thinking about complex legal questions, and I spent a lot of my time in the early years of practice, as your listeners are for sure familiar, in the legal research universe, trying to find the right cases, running endless Boolean search strings, right? So part of what I was doing was helping to supply information to more senior lawyers and helping to come up with legal advice for clients, but I was doing so in a manner that I recognized even then as being a little bit inefficient. So I had always had my eye on ways to improve the day-to-day practice, especially for early-career associates like myself, but I also had my eye on more creative pursuits. I wanted to affect the law at a more systemic level. I wanted to help think through complex problems, and the day-to-day, one-off client engagements weren’t exactly my speed, because I wanted to ponder the big questions. I was fortunate, when I moved back to Toronto, to get to know Ben a little bit better, learn about Blue J, and eventually join Blue J, later as the Vice President of Legal Research. I led the team of lawyers and research analysts that did the work that helped inform the tools that Blue J developed, the most notable of which for my own work were the AI-informed predictive tools helping to predict how future courts are likely to rule on new legal situations. In that time, Ben and I realized that there was a ton of overlap in our ideas, and we thought about the future of the law in similar ways. Since then, as part of that work, I was teaching part-time here at the law school, related courses in legal research and writing. And this past year, I joined the faculty as a full-time law professor, teaching foundational first-year subjects like torts and civil procedure.

Greg Lambert 5:45
So the name of the book is The Legal Singularity, which, first of all, is a pretty bold title, you know, thinking about law becoming self-aware. So, Ben or Abdi, what’s kind of the main thesis of the book? What should the reader expect?

Benjamin Alarie 6:03
The main thesis of the book is that we look at the big things that are happening with technology, and this is going to come as no surprise to your audience now, in the wake of ChatGPT and all of the developments that are pretty top of mind these days. Four years ago, when Abdi and I started working on the book, it was a lot less obvious that artificial intelligence was going to make a really big impact, at least among most lawyers; there was a lot of skepticism. The claim of the book is that as computational power continues to double every couple of years, as the cost of computing comes down, as more and more legal information becomes digitized, and as we see improvements in machine learning algorithms, the cost of predicting legal outcomes is going to essentially vanish. It’s going to become very clear what would happen in court with respect to a particular situation in terms of the legal outcome. And this becomes extremely interesting: what happens as a result? For Abdi and for me, we decided that it would make sense to do a book-length study of the implications for the law across a number of dimensions and explore: what does it mean for the judiciary? What does it mean for legal education? What does it mean for practicing attorneys? What does it mean for society generally? What does it mean for democracy and voting? So we decided to take it all on. But essentially, the key insight is this: the law is going to become complete, and it’s going to become much clearer and much fairer as a result. As we put it in the subtitle of the book, it’s how artificial intelligence can make the law radically better.

Marlene Gebauer 7:59
So how are you seeing that applied now? How are you seeing that movement now? Or is this something you think is truly going to be more in the future?

Abdi Aidid 8:10
One, you’re seeing widespread curiosity, right? You’re seeing a lot of lawyers recognizing that there’s great opportunity for them to do their jobs faster and better with additional technological support. Blue J is sort of proof positive of that: the company started in 2015 and since then has had many, many clients among large law firms and accounting firms, in both tax and employment law. The fact of the uptake is evidence that there’s curiosity and interest, and that lawyers are willing to put their money where their mouth is when it comes to being forward-looking. What we’re also seeing is, I would say, deep interest in the future of the profession. Sometimes it’s motivated by anxiety about what’s to come, but it’s a healthy conversation for us to engage in nevertheless, and part of why we wrote this book was not just to reassure people, but to help them see their place in this unfolding future. The fact that we’re having that conversation, the fact that ChatGPT has sort of woken everyone up and made them realize, hey, this is more around the corner than you think, gives us an opportunity to engage in a conversation with more rigor, to have a more sophisticated and mature conversation closer to what you’re seeing in other industries. For instance, right now there’s a pretty sophisticated and mature conversation happening in the world of transportation policy around how to accommodate self-driving vehicles. You don’t yet have the same widespread acceptance in the world of law about this being the imminent future, so to speak. And so part of what we’re trying to do is motivate that conversation and encourage people to really engage with us about what we want this future to look like, because we have a window right now to forge a consensus about how we want things to look: how we want the legal industry to look, how we want all of our legal institutions to look, and, by extension, how we want our society to look. And so the time for this book, the time for this conversation, is now, and the early uptake that we’re seeing in the legal industry through technologies like Blue J suggests that these tools are not going away; they’re only expected to proliferate.

Greg Lambert 10:26
Well, on that, I want to ask you about the intended audience for the book, but I want to preface that a little bit. Have you ever seen the legal market react as quickly as it has over the past three, three and a half months, since the launch of ChatGPT? Because here’s how I view it, and you can tell me if I’m wrong. For the past five years or so, we have had legal product after legal product that says “we use artificial intelligence,” except as the user you never really see it; you kind of have to trust that it’s going on in the background. And it wasn’t until we had a product like ChatGPT, not a legal product at all, just an everyperson product that all of a sudden engages the user, that, after five years of being told AI is coming, now all of a sudden it’s real, because we’ve interacted with it and we get to see the response. So the timing of this book may be perfect for the market. So first of all, have you ever seen the legal market react like it has now? And who do you think is going to be very interested in reading your book?

Benjamin Alarie 11:58
I think what’s really fascinating, Greg, is that we haven’t seen a reception of any kind of tech product like we’ve seen for ChatGPT. It launched at the end of November, and by the end of January, OpenAI reported that there were over 100 million active users of the platform, and that apparently is a faster adoption curve than any other software in history. So it’s not just the legal profession that has reacted aggressively in response to ChatGPT; it’s everybody, more generally. It’s very interesting. The particular genius of it, yes, the chat and the results that you get out of using ChatGPT are really engaging and curious, but I think one of the key, underappreciated aspects of it is the user interface and the user experience. We’re all used to sending messages back and forth to each other, and it makes people feel really comfortable having that interaction with the system: sending a message and then awaiting the response, seeing what it is, and then sending another, back and forth. That’s really, I think, the genius of ChatGPT. And I say that because I was very excited about OpenAI dropping what they call the davinci-003 model, their GPT-3 model, which ChatGPT is using. I was very excited about that weeks and weeks before ChatGPT dropped, and I was announcing it on social media and talking about it, saying, hey, read this output, I think this is really engaging. It was available to everybody on OpenAI’s playground, and people could use it, and it didn’t catch fire, because it wasn’t wrapped in that user interface of having a conversation. I think that was a really great innovation, and because that innovation has really caught people’s imagination, I think the intended audience for this book is going to be quite broad. People are really curious: what does this mean for things like medicine? What does this mean for education? What does this mean for areas like law that are really core to how we organize ourselves as a society? I think we have a lot to say about that in the book. We also ask a lot of questions in the book, and we provide far fewer answers, but we’re identifying the questions. We want to convene a conversation, we want to trigger the right kinds of conversations, and we are strong believers that these are conversations that need to happen. That’s why we’ve written the book. And, you know, lawyers don’t have a monopoly on good thoughts about what should happen to the legal system. We’re firm believers in that too.

Marlene Gebauer 14:59
I’m not sure if they do, but yeah. So, technological singularity: the definition of that is basically a hypothetical future point where technological growth becomes uncontrollable and irreversible and, you know, results in unforeseen change. How does that relate to the concept of the legal singularity, which is your title?

Abdi Aidid 15:28
The legal singularity is really, as Ben described earlier, the practical elimination of legal uncertainty and the impact on our institutions and our society that results from eliminating that uncertainty. It’s a future state where the law is knowable in real time and on demand, and where we can start to do things that we weren’t previously able to do because the law was either difficult to ascertain or we didn’t have a normative consensus around what the law ought to be. So it’s a concept that rests somewhat on the idea of a technological singularity, but it is not as though the two develop coextensively. Why? Because the technological singularity contemplates much more of a totalizing event. It’s about all of society dramatically changing as a result of technology reaching the ability to perform functions that we couldn’t previously imagine or anticipate. What we’re trying to do is account for the impact of technological improvement on law and related institutions. So in some ways it’s an equally socially important concept, but it doesn’t purport to describe as much as the technological singularity does. Our focus, as lawyers, law professors, and people interested in all of the things that emerge from the world of law, has been on law, its institutions, and the law-adjacent subject areas. We’re not exactly making predictions about the impact of the same technologies on radiology, or necessarily on the labor market or transportation. Our analysis is not cabined by any measure, but it is really a law-oriented approach.

Greg Lambert 17:24
So one of the things you mentioned earlier is using AI to enable legal prediction. I want to get to that. I mean, if you take that to an extreme, I could see some people going, well, why don’t we just do away with judges and let the computers, let the AI, take over and decide? That way we take the human element out of it and we just have, you know, the law. But I don’t think that’s necessarily what you’re going for here. When you talk about AI-enabled legal prediction, what are some of the potential benefits that you see?

Abdi Aidid 18:11
Sure, I can start us off. Let me first start by saying: prediction is already what we do in the law. I know some people are spooked by the idea of technologized legal prediction, but think about the legal statements that we make. If I say I own a home, I might be saying my name is on a document somewhere, but what am I really saying? I’m saying that if it came down to it, and someone contested my rights or my interest in my home, I’m confident that a future court would vindicate my property interest. I’m making a legal prediction. Almost all of our legal statements involve some prediction, and the thing that lawyers do day to day is try to forecast what’s likely to happen, based on their experience, based on their knowledge. And this has been law’s long-held aspiration: if you go back to Oliver Wendell Holmes in 1897 and “The Path of the Law,” he talked about the prophecies of what courts might do, and nothing less, as what is meant by the law. I’m maybe butchering that quote a little bit. But when we’re rendering legal advice, clients are not coming to us saying, explain the contours of the multistep doctrine involved in a given tax question. They’re saying: if I structure my transaction this way, am I likely to elicit scrutiny from regulators? Am I likely to run afoul of what courts have determined is the law in this given area? So really, what we’re trying to do all along, even in our, let’s call them, analog ways, is make legal predictions. And so part of the idea here is that technology, through harnessing computing power, AI, machine learning, and different data science techniques, is able to assist with that prediction in ways that we might be too limited to do ourselves. It’s helping to achieve the thing that we already do, the long-held aspiration of the law. So what does that practically mean? Well, it means a few things. It means that AI can help you predict how future courts are likely to rule in new situations. For litigators, that means evaluating the strength of your case, right? It means getting a sense of whether your case is a winner or a loser. It means being able to have the certainty of outcome such that you can actually plan your affairs accordingly. If you’re, say, a corporate lawyer or transactional lawyer engaged in, for instance, tax planning, you will have a sense of what kind of advice to give based on where the line is, so to speak. And so this is an idea that is broadly applicable and can fundamentally change our relationship to the law: not as the uncertain thing that we have to worry about, but as the thing that we know with absolute confidence operates on us, so we can adjust our behavior accordingly. Of course, I should add that there’s a long-tail potential here, which is you and I, in a retail sense, having a real-time sense of our legal rights and obligations. One of the major challenges that people have, and we talk a lot about access to justice, as lawyers will, is that one major problem of access to justice is inaccessible legal knowledge, not only because it’s actually hard to reach as a matter of fact, and sometimes behind lock and key, expensive lock and key, but also because sometimes it’s difficult to understand, difficult to parse. That’s why you very often need trained intermediaries. So one potential thing that could happen is that laypeople, individuals without sophisticated legal training, will have a sense of what the law is.
But long before that, we’re talking about a great opportunity for cost savings: for lawyers to be able to reduce the cost of legal research and perform analysis much more quickly, obviously layering their own experience and expertise atop that, but being able to bring down their costs so they can take on more cases, potentially more engagements, pivoting the economic model of legal services from margin to volume, being able to do more and satisfy more legal needs. And then it goes without saying that it’s a great opportunity for us to be creative, right? Think about why many of us went to law school. I didn’t go to law school to spend my waking hours searching through legal research databases; I dreamed of having a courtroom rhetorical flourish, like a Perry Mason moment or the Jack Nicholson “you can’t handle the truth” moment, right? Well, if we can start to automate some of the drudge work through technology, we can free up time to be more creative, to do things like provide strategic value to our clients, think through implications, and, socially, have conversations not just about what the law is, but about what the law ought to be and what we want it to look like.
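To make the mechanics of this kind of legal prediction concrete, here is a minimal sketch of the general approach Abdi describes. This is not Blue J’s actual system; the fact pattern, features, and data below are invented purely for illustration. The idea is simply to train an off-the-shelf classifier on the outcomes of past cases and ask it to estimate how a court might rule on a new, unlitigated fact pattern.

```python
# Illustrative sketch only; not Blue J's actual system. Assumes a tiny,
# invented dataset of past worker-classification cases.
from sklearn.linear_model import LogisticRegression

# Each row is one past case, encoded as yes/no (1/0) features:
# [sets_own_hours, supplies_own_tools, paid_per_project, has_other_clients]
past_cases = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
]
# Outcome label: 1 = court found "independent contractor", 0 = "employee".
rulings = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(past_cases, rulings)

# A new, unlitigated fact pattern: the worker sets their own hours and has
# other clients, but the company supplies the tools and pays a salary.
new_case = [[1, 0, 0, 1]]
prob_contractor = model.predict_proba(new_case)[0][1]
print(f"Estimated likelihood a court finds contractor status: {prob_contractor:.0%}")
```

Real predictive tools train on thousands of decisions and far richer features, but the core move is the same: turning a body of past rulings into a probability estimate for a new situation.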

Marlene Gebauer 22:57
So, you know, you highlighted a few areas where the law can improve. Do you have any other areas where there might be holes in the current state of law and legal institutions that could be improved? And how?

Benjamin Alarie 23:13
I think there’s no shortage of examples of where the law could be improved.

Marlene Gebauer 23:20
Another year, right.

Benjamin Alarie 23:22
Yeah, I’ll put my tax law professor hat on and say that one of the things we often see the courts talk about are the values of certainty, predictability, and fairness in tax law, and how taxpayers are entitled to plan their affairs on the basis of the Internal Revenue Code and the regulations. And there are so many instances where taxpayers really struggle to find out where the boundaries are, where the lines are that ought not to be crossed in setting up a tax structure. We know that there are doctrines, like the economic substance doctrine and the step transaction doctrine, where it really requires a judgment call: have I gone too far, have I overstepped in availing myself of tax benefits that I really want? I think I can get there. I’m reading the text of the Internal Revenue Code, or the text of these regs, or this private letter ruling, and I think I can get there by reading these documents, but I’m not entirely sure this is going to survive scrutiny if an auditor comes looking at it very closely. And I don’t want to ask, because if I ask, for sure they’re going to look very closely at it. So I’d prefer not to ask; I’m just going to go ahead and do it. But then, as a businessperson, if this is my business, I’m going to feel like there’s a sword of Damocles dangling over my head until those years are no longer auditable, because the IRS is either going to take a position or not. It’s an uncertain tax position; I don’t know for sure whether it’s going to survive. That’s just one really simple example. If and when we approach a legal singularity, many years from now, it’ll become absolutely clear what the tax law requires, and I will have confidence that that’s the right outcome. Equally, the IRS is going to have confidence if they audit me, and it won’t be an interesting question. It’ll be: okay, you overstepped right there, we see that in the documentation. It’ll just make for very good fences between taxpayers and the IRS. That’s one small example, but what we’ll see is that it leads to certainty, predictability, and fairness in the tax law, and that’s going to be a good outcome for everyone, except maybe for some of us who really enjoy tax games because they’re a really interesting puzzle. I love tax games. As a tax professor, I love looking at the complexity of the law and figuring out new strategies. But I can take that desire to work on really difficult puzzles offline, as it were, and maybe not do it professionally, for tax planning.

Greg Lambert 26:18
I’m just curious, do you see problems arising similar to what we’re seeing, say, in the automotive industry with self-driving cars, around who is responsible? If we get to a point, for example, where we use AI for tax advice, or something more practical: let’s say I’m a consumer and I want to return a product to the store where I bought it, and they have an arbitration clause. Do you see the AI being the arbitrator and coming up with an answer? Do we allow the AI to do that? And what if it contradicts what a judge would say? I can see some really interesting versions of this on the horizon.

Benjamin Alarie 27:19
Our strong view is that it really makes a ton of sense to keep humans in the loop in a lot of these arbitration contexts and judicial contexts for a significant period of time. Even as we approach the legal singularity, and people begin to have a lot of confidence in algorithmic decision making, I still think we’ll want humans involved, for the same reason that we have pilots in aircraft. If you’re taking a commercial flight virtually anywhere, a lot of that flight is going to be flown by algorithm, but I feel personally much more comfortable knowing that there are pilots in the front of that aircraft as we’re taking off, even though, intellectually, I understand they’re basically running the algorithms that are responsible for piloting that plane much of the time. I think the same thing is going to be true in our legal system. Even if an initial answer is machine generated, as a consumer in the case that you describe, Greg, I’m going to want the ability to appeal to a human to have a look at what the algorithm is suggesting. If I think it misfired, if I think it missed something, if I think I’m being treated unfairly, I’m going to want a human to interrogate and audit that machine-generated decision, and I don’t think that goes away for a very long time. I think we’re going to want those rights of appeal. We want the belt-and-suspenders kind of approach here, and I think it’s going to be a feature of the system for a long time. Now, a lot of your listeners are going to say, oh, thank goodness, I can still appeal decisions, and it actually expands the number of things I could object to: on appeal, I could argue that the algorithm is biased, based on what it was trained on and how it reached this decision. I think that’s very healthy for the system. And this ties into something Abdi was saying earlier: we think the composition of legal jobs is going to change over time. The activities we find ourselves engaged in as lawyers will change over the coming years, but I think we’re going to see even more lawyers and practitioners doing things a little bit differently going into the future, while providing much greater access to justice. Right now, it’s such an elite thing. Many folks don’t really have the resources to hire an attorney if they have a dispute; they just have to make the difficult decision to lump it and carry on as best they can, because they don’t have the resources necessary to fight for their rights and vindicate them. I think that’s going to change too. And so this is how we think the law becomes radically better: there’s better information about what your entitlements are, and there’s human review of machine judgments, which will lead to accelerated learning in the law.

Abdi Aidid 30:22
Greg, if I may, also: keeping human beings involved in the loop really improves the technology over time. I heard a story about an algorithm that was being used by a state government to determine whether certain people with mobility issues were entitled to a publicly funded personal support worker. It previously was a determination made by nurses acting as bureaucrats: they would interview people, review applications, and determine whether or not applicants should have a personal support worker. Well, as part of an austerity measure, they phased out the nurses in favor of an algorithm and a questionnaire that people would fill out. One of the questions on the questionnaire was: do you have foot pain? And of course, people had a binary choice of yes or no. Well, some of the people who answered no were amputees, people who are paralyzed, because, technically speaking, they don’t have foot pain. Now, if there had been a nurse conducting an interview, they would have spotted that immediately, right? And so keeping humans in the loop helps to inform the technology before the disastrous consequences occur, and helps to refine it, so that the next iteration of that questionnaire, which feeds the algorithm its information, actually takes into account the kinds of things a nurse would in a deeply human interaction. And we’re not saying humans should be involved in the loop for the foreseeable future as a concession to people who are skeptical of the technology. We’re saying this because we know that the things the law deals with are deeply human questions, and it’s a deeply human enterprise; there’s more than ones and zeros at stake. Our legal system has notions of mercy and fairness and empathy and procedural justice, and all of these are things that, right now, human beings still have to contribute. And then, of course, there are the situations where we say: okay, the algorithm is predicting this outcome as the correct or optimal one, but taking into account emerging social mores, or broader consequences that might escape the underlying data, we want to go in a different direction, we want to mount a social intervention, we want to treat this outcome differently, because we think that’s what’s going to put us on the best path down the line. So keeping human beings involved and engaged in that dialogue is, I think, absolutely critical at this stage. Now, of course, there could come a time when humans have done the job of improving the technology to the point where we can start to focus on other things. And part of our contention in this book is that if that day is going to come, it’s going to take a lot of effort and coordination to get there.
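The questionnaire story maps neatly onto code. Here is a minimal, hypothetical sketch of the failure mode Abdi describes: a rigid rule treats “no foot pain” as “no need,” wrongly denying amputees and paralyzed applicants, while a human reviewer in the loop catches the cases the proxy question was never designed to see. All field names and rules here are invented for illustration.

```python
# Hypothetical sketch of the failure mode in the questionnaire story.
# All field names and rules are invented for illustration.

def naive_eligibility(answers: dict) -> bool:
    """Rigid rule: treats foot pain as a proxy for mobility-related need."""
    return answers["foot_pain"] == "yes"

def reviewed_eligibility(answers: dict) -> bool:
    """Same rule, but with a human reviewer in the loop."""
    decision = naive_eligibility(answers)
    # A reviewer (like the nurses the rule replaced) recognizes cases the
    # proxy question cannot capture and overrides the denial.
    if not decision and answers.get("condition") in {"amputee", "paralyzed"}:
        decision = True
    return decision

applicant = {"foot_pain": "no", "condition": "amputee"}
print(naive_eligibility(applicant))     # False -> support wrongly denied
print(reviewed_eligibility(applicant))  # True  -> human review catches it
```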

Marlene Gebauer 33:11
So how do you address concerns about AI bias and ethics in the legal domain?

Benjamin Alarie 33:17
This is an Abdi answer for sure.

Abdi Aidid 33:19
Yeah. So we’ve been talking about this and thinking about this, because we’re optimists about the future state of the technology, but we’re not people who are burying our heads in the sand about the problems. I think you can sort of divide up the problems of bias and the ethical questions raised by AI; not all of those problems are created equal. In one sense, you have reflection-and-amplification types of problems, where what’s happening is that we’re projecting social harms, social ills, and negative histories into the future. For instance, if we built a prediction tool tomorrow that wanted to predict wages, what the right wages are for a given employee, well, if we’re not attentive to the history of the gender wage gap, we risk projecting it into the future. If the Federal Housing Administration built a prediction tool that was trying to determine who was eligible for a federally insured mortgage product of some kind, and you used the historical data the FHA has and didn’t pay attention to the history of redlining and racial discrimination, which the FHA has admitted to, then you risk projecting that into the future, right? And so part of the problem of bias is really a problem of us having social ills that, as a society, we haven’t adequately addressed. There is a history of, say, racism, gender discrimination, class discrimination, these kinds of problems, which of course are going to surface through technology if we’re not paying attention. The technology doesn’t take us off the hook; we still have to address those issues and make a decision about the kind of society we want to live in. But there’s a way in which it helps to disinfect the problem, so to speak. If we built a predictor, again, that was trying to predict future wages, and it did indeed project the gender wage gap into the future, and it said women get 78 cents and men get 100 cents, we wouldn’t stand for that. We would be shocked; it would be jarring to see. But the gender wage gap nevertheless persists in our society, in the diffuse way it exists in relationships, embedded in institutions, and, much to our shame, it continues. So there’s a way in which the technology can bring those things into stark relief, and even as an empirical research tool, the predictive tools have some value. Then we have to have the collective conversation around what the right interventions are. So those are problems of projecting our bad history into the future. But there are other kinds of problems which are distinctly posed by technology. For instance, there are some people who think that a decision rendered by technology of some kind is per se neutral, so we have to have conversations around transparency and all of those kinds of things. There have sometimes been questions, especially around some of the cases involving algorithmic prediction in criminal justice, about whether we can adequately cross-examine an algorithm because of, say, our trade secret laws. These are the kinds of things we need to contend with. But I think we need to distinguish between the kinds of problems that are really a reflection of unaddressed, or inadequately addressed, social problems, and the kinds of problems that are new technological problems.
I think, for those folks who are paying close attention, you’ll see there’s a lot of that first category, and we are still on the hook, collectively, as a society, for resolving those issues.
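To see how a model can reflect and amplify a biased history in the way Abdi describes, here is a minimal sketch with fabricated data: a wage predictor trained on a history in which women were paid roughly 78 cents on the dollar faithfully learns that gap, and a simple audit, comparing predictions that differ only in the protected attribute, makes the learned bias explicit.

```python
# Fabricated data, purely for illustration of bias "reflection":
# a wage history in which women earned roughly 78% of men's pay.
from sklearn.linear_model import LinearRegression

# Features: [years_of_experience, is_woman]
X = [[2, 0], [2, 1], [5, 0], [5, 1], [10, 0], [10, 1]]
y = [60_000, 46_800, 75_000, 58_500, 100_000, 78_000]

model = LinearRegression().fit(X, y)

# Audit: identical experience, differing only in the protected attribute.
man_wage, woman_wage = model.predict([[7, 0], [7, 1]])
print(f"Predicted wage (man):   ${man_wage:,.0f}")
print(f"Predicted wage (woman): ${woman_wage:,.0f}")
print(f"Ratio: {woman_wage / man_wage:.2f}")  # roughly the historical gap, projected forward
```

This is the “disinfecting” value he mentions: a diffuse, institutionally embedded gap becomes a single jarring number that can be pointed at, audited, and intervened on.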

Greg Lambert 36:58
Now, we’ve had, I think, maybe our fifth or sixth show just this year where we’ve focused on AI, and there has been a consistent answer to this question, so I’m going to throw that answer out first, but then I want you guys to address it in a broader sense. When we ask how this is going to affect the labor market, especially in the legal industry, typically the answer we get back, and I think Damien Riehl may have been the person who gave this answer, is that AI will not replace lawyers, but AI plus lawyers will replace lawyers who do not use AI. So I want you guys to think broader about that. Does that same concept apply to law professors, tax advisers, court clerks, judges? Look broader at the legal industry: where do you think there may be significant impacts on the labor involved?

Benjamin Alarie 38:02
It is really interesting. I think there are going to be so many impacts across different legal roles. Let’s focus on legal education. I gave a talk to the faculty here, all my colleagues, about a month ago, on February 6, about ChatGPT: what is it, why should we care, what should we do about it? And it was totally fascinating to see my colleagues really struggling with this. They went back and forth: should we try to ban it? Should we contain it? Should we make it clear to students that it would be an academic offense if they leveraged ChatGPT to help inform their academic writing? And to that I said: colleagues, in my classes (I teach two income tax law courses and a law and technology seminar), I’m encouraging my students to use these technologies. I’m convinced it’s part of the future of legal practice, and it behooves us to encourage our students to become familiar with this technology, to figure out how to prompt the systems to produce better work product that they’ll be able to use to do their jobs, and also, by extension, to improve their academic research, improve their writing, improve their thinking. They can effectively have a conversation with a vast body of literature that would have taken much longer to engage with in the past. So it’s a way to learn very quickly, to outline an argument, and to improve their own understanding. To me, it’s very clear that these are tools that can be harnessed to improve our understanding of the law and our ability to think critically and creatively about it. That’s one of the reasons why, in this book, Abdi and I are predicting that AI can make the law radically better: because it can make critical thinking and creative thinking about the law better. So I’m not interested, as an academic, in getting papers that have been produced in the absence of AI because somehow that preserves some purity of thought. I’m much more interested in students having a full-throated engagement with the literature, developing their own thinking, and outlining to the best of their ability, no holds barred, to produce an original analysis that they themselves are responsible for, maybe a little bit less directly than they traditionally would have been, but I think it’s a much better work product. And it’s going to accelerate our learning, as law professors and in the profession generally, if we as educators are open to it. Superadded onto that are all the considerations around whether you can actually, effectively stop students from leveraging these kinds of tools. There are questions about how this is really different from using Google to find research sources and reading them very quickly. Twenty years ago, we could have had a very similar conversation: well, if students are using Google or searching Google Books and systematically canvassing all this information, is that okay? Maybe we should insist on them using a card catalog to search the law library; isn’t that the pure way of doing research? I think we don’t want to battle the technology; we want to embrace it. And I think we can make the legal system work a lot better as a result.
And so I’m going to stick with our earlier answer, which is that I think it’s going to maintain or increase employment among legal professionals. It is going to change how we approach the job, though, and I think it’s going to raise different issues for regulators, for example. I think we’re likely to see a serious generational contest between those who are relatively young, those who are graduating from law school this year, next year, the year after, and an old guard. It’s going to be interesting to see how things play out, because the old guard have a ton of judgment, a ton of experience. They have the gray hair, or, like me, no hair, and they’re thinking about things in a very strategic, experienced way, so they have certain advantages from that experience in a contest, say, in the context of a particular litigation. The new generation are going to have new tools that they’re very comfortable with: they’re going to be comfortable using algorithms, using prediction tools, to figure out how a case is likely to go. And there’s going to be some turbulence in the interim, this battle between the older generation, who have intuitions and judgment informed by decades of experience, and the new generation, who are leveraging massive amounts of data and are comfortable making algorithmically informed decisions. I think it’s going to be bumpy for several years, and it’s going to be interesting, ultimately, to see how it plays out. I think Damien is right, though: ultimately, lawyers with AI are going to replace lawyers without AI.

Abdi Aidid 43:51
If I can just add one thing to that. Part of what we’re saying in the book is that the legal singularity doesn’t necessarily come about at the end of a perfectly linear trajectory. Like Ben said, it could be a rocky road for some periods, and there could be a messy interregnum. And I think it’s important to talk about that, because I continue to want to call people into the conversation; there’s a great risk right now, around ChatGPT, of overreacting. So let’s say Ben doesn’t win the day at the faculty meetings, and instead the old guard wins, and they say: we’re going to scrap all writing altogether and do oral exams. Students are not going to prepare anything written; they’re going to perform all their knowledge in front of us in the moment. Or perhaps they’ll still write, but with pen and pad under our watchful eye. Now, if that’s the case, what have you done there? Number one, you’ve pushed back on technology that’s going to continue to proliferate anyway, so the students trained under that old-guard system are going to be worse off, because at some point they’re not going to be your students anymore, and they’re going to have to contend with technologies they suddenly aren’t trained in. That’s one thing. The other major risk is that you cut off your nose to spite your face: suddenly you don’t teach writing skills at all, because you think it’s too confounding a problem, that we’re never going to be able to evaluate students and improve their writing, so we’re just not going to do it; we’re going to test them purely orally. What you’ve done in that situation, in overreacting, is put yourself in a far worse position than you would have been in had you intelligently, carefully, rigorously figured out how to absorb the technology into what you’re already doing. And as educators, I don’t think we should be intimidated at all. In fact, we should look at our colleagues in the world of mathematics, who have found ways to teach fundamentals even though calculators are capable of doing a lot of the major work. We should rid ourselves of that conservative impulse and really pay attention to the possibilities, because our students can become better-integrated legal thinkers, more creative, able to summon all of the resources, as opposed to having to perform narrow evaluation tasks under artificial constraints.

Greg Lambert 46:15
Yeah, and quite frankly, I think anyone can think back and see parallels to this. I remember when you couldn’t bring a laptop into a classroom. Then you could bring a laptop in, but it couldn’t be connected to the internet. There were times you could not use Google. And in the courts, same thing: technology was banned. So this is not a new reaction. But again, all of those barriers that they tried to artificially put up eventually fell by the wayside, and I think this is something else like that. I know the New York City public schools have banned ChatGPT access, except everybody has it on their phone. So, you know, it’s fighting about the rules of a game that we’ve already lost. It’s going to be kind of fun to see how people try to fight this, knowing that it’s a losing battle.

Abdi Aidid 47:25
Or, Greg, it’s arguing about the rules of a game that we’ve already won, right? Because it’s about access to more information. If you have a sort of humble approach and you say, ChatGPT can assist me with what I’m doing, but really my job here is not just to reiterate information but to synthesize it, to use it creatively, to figure out how to connect it up to bigger problem-solving tasks, then you have this really knowledgeable assistant, the world’s unlimited research assistant. That sounds like a win to me. Now, obviously, Ben and I wrote this book; we’re thinking about all the problems too. But we can’t lose sight of that massive potential gain, either. Let’s not be so cynical as to throw out our chance at having unlimited information.

Marlene Gebauer 48:25
Yeah, we’re super cynical, sorry. It’s good that you’re talking to us, though. We ask all of our guests at the conclusion of the interview to answer our crystal ball question. I know you guys are optimists about this, so I’ll couch it this way: what are the challenges you see on the horizon when it comes to the legal industry, AI, and legal prediction over the next two to five years? What are some of the pitfalls we should be looking out for so that we do move in that more positive direction?

Benjamin Alarie 49:09
I think this is a great question. What I’m going to say in response is that I think we’re going to see a mindset change on the part of the regulators of the profession around how they deal with attorneys who are not adopting technology and, for that reason, are giving poor advice. It’s still going to be about protecting the public interest, but there’s going to be a big change in the way the thinking goes for a lot of regulators. I think we see the start of that, but it’s going to be a really big change. The folks who are not adopting technology are predictably going to be delivering substantially lower-quality legal services to their clients, and we’re going to see state bar associations and other regulators around the world really struggling with: how do we deal with predominantly older practitioners who have a particular way of conducting their practices, which may not really be to the best advantage of their clients? I think it’s going to lead to serious problems for those regulators: how do we deal with the old guard who don’t want to change? And there’s an incontestable issue here, because the public interest is very clear: we need up-to-date, proper advice. I think it’s going to be a challenge. I don’t know exactly; is it two years away? Five years away? Eight years away? But I think it’s coming, and I see it coming very clearly.

Abdi Aidid 50:52
I’ll also add to that; I totally agree with everything Ben is saying. I think what we’re likely to see in the next two to five years, and what I actually hope to see, is more coordination among the profession about what kinds of legal predictive tools we need and should produce. Because right now, and you see this a lot in the world of algorithmic decision making, quite a lot of the technologies are being produced by vendors to solve a discrete problem. Someone sees an opportunity to improve bankruptcy law, and they build a tool for bankruptcy lawyers. Someone sees an opportunity to provide some algorithm for policing, and they sell it exclusively to police departments, and whatnot. I think you actually do need people sitting atop that structure and giving some thought to: what are the implications of these technologies? What are the right safeguards? Which tools are appropriate now, based on our technology and capabilities, and which kinds of tools need more research and study? If we all accept that the technology is going to develop, that we can’t stand in its way, and that it has real potential benefits for us, then we can start to come to the table and get on the same page about what we want it all to look like and the right cadence for developing these tools. Now, I’m not suggesting we need to put our thumb on the scale or centrally plan what’s likely to come out in the marketplace. It’s just to say that the sooner we accept all of this as real, and as the reality of our very near future, the sooner we can engage in meaningful and rigorous conversations about what we need and how to absorb it into our day-to-day practice. So I would love to see firms talking to each other. I’d love to see state bars looking at examples of other states and how they’ve approached things like regulating alternative legal service providers. I would love to see the judiciary in broader conversation with the profession and day-to-day practitioners about what our courts of the future look like. Maybe legislatures can get into that conversation too. We need a little more of that consensus building and a little more of that coordination, and I think we’re going to be forced to reckon with these questions in that two-to-five-year frame.

Greg Lambert 53:36
Well, I was with you all the way up until you said the legislatures getting involved.

Marlene Gebauer 53:41
Oh, sorry. Sorry. Legislatures and coordination.

Abdi Aidid 53:47
Yeah, so I hope so.

Greg Lambert 53:50
Well, Ben Alarie and Abdi Aidid, I want to thank both of you for coming in and joining us and talking about your book. Before we go, where can listeners go to pre-order your book? Everything now is about pre-ordering.

Benjamin Alarie 54:05
So yeah, folks can go to their favorite online bookstore, whether that’s Amazon or Barnes and Noble or whatever it may be. It’s available for pre-order from everyone’s favorite online bookstore. The title is The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better.

Marlene Gebauer 54:21
And we’ll have a link in the show notes.

Greg Lambert 54:23
Yeah, we’ll definitely link that out. So, Ben and Abdi, thank you again.

Marlene Gebauer 54:29
And I want to thank all of our audience for taking the time to listen to The Geek in Review podcast. If you enjoy the show, share it with a colleague. We’d love to hear from you. So reach out to us on social media. I can be found at @gebauerm on Twitter,

Greg Lambert 54:43
And I can be reached @glambert on Twitter. Ben, how about you? Where can people reach out to you?

Benjamin Alarie 54:49
I’m on Twitter @BAlarie

And Abdi?

Abdi Aidid 54:53
I’m also on Twitter as @AbdiAidid, that’s A-B-D-I-A-I-D-I-D.

Marlene Gebauer 54:59
And for the Luddites out there, you can leave us a voicemail on The Geek in Review hotline at 713-487-7821. And as always, the music you hear is from Jerry David DeCicca. Thank you so much, Jerry.

Greg Lambert 55:11
Thanks, Jerry. I’ll talk to you later.

Marlene Gebauer 55:14
Okay, bye-bye.

Transcribed by https://otter.ai