COGNITIVE WORLD

Will A.I. Put Lawyers Out Of Business?

AI in law. Image: Depositphotos enhanced by CogWorld

Source: COGNITIVE WORLD on Forbes

What is the law but a series of algorithms? Codified instructions prescribing dos and don'ts, ifs and thens. Sounds a lot like computer programming, right? The legal system itself, however, is not as straightforward as coding. Just consider the complicated state of justice today, whether the problem is backlogged courts, overburdened public defenders, or swathes of defendants disproportionately accused of crimes.

So, can artificial intelligence help?

Very much so. Law firms are already using AI to perform due diligence, conduct research, and bill hours more efficiently. But some expect AI's impact to be far more transformational: it has been predicted that AI will eliminate most paralegal and legal-research positions within the next decade. Could judges and lawyers share the same fate? My coauthor Michael Ashley and I spoke to experts about AI's impact on the legal system for our upcoming book, Own the A.I. Revolution: Unlock Your Artificial Intelligence Strategy to Disrupt Your Competition.

“It may even be considered legal malpractice not to use AI one day,” says Tom Girardi, the renowned civil litigator and real-life inspiration for the lawyer in the movie Erin Brockovich. “It would be analogous to a lawyer in the late twentieth century still doing everything by hand when that person could use a computer.”

There are many reasons to believe AI could benefit the legal industry as profoundly as the personal computer did. Today, the legal system relies on armies of paralegals and researchers to discover, index, and process information. For law firms, that reliance is expensive and drives up the rates they charge. And in understaffed public defender offices, investigators can spend only a few minutes interviewing each client, greatly diminishing the service they can provide.

AI, however, could conduct that time-consuming research in a fraction of the time and at a fraction of the expense, reducing the burden on courts and legal services and accelerating the judicial process. There are also situations in which using AI might be preferable to interacting with a human, such as client interviews: it has been demonstrated that people are more likely to be honest with a machine than with a person, since a machine isn't capable of judgment.

Of course, AI can't replace every means of collecting information; there are instances in which a deposition is better suited to fact-gathering. Still, when a lawyer is preparing to cross-examine an expert witness, AI could be deployed to determine every case in which that witness has testified, what their opinions were, and how juries reacted, far faster and more thoroughly than any human investigator ever could.

Such effectiveness is great, but will lawyers panic when they can’t bill as many hours? Not according to Girardi. He believes those firms willing to adopt AI will possess a strategic advantage. “It’s a lawyer’s job to solve a problem as quickly and inexpensively as possible,” Girardi explains. “AI will be a godsend because it’ll give lawyers the information they need to resolve conflicts faster.”

Yes, AI-wielding lawyers technically wouldn't be able to bill as many hours, since the AI would work far faster than they ever could. According to Girardi, however, these attorneys' enhanced effectiveness would likely garner repeat business and attract more clients. “If a lawyer can use AI to win a case and do it for less than someone without AI, who do you think the client will choose to work with next time?” he says. The promise for firms that adopt AI, then, is that they can generate the same revenue, or more, while expanding their client rosters. Conversely, firms too slow to adapt to AI and automation will suffer a competitive disadvantage.

While conventional wisdom still holds that lawyers and judges enjoy more job security than most professions, there have been calls to relieve our backlogged court system by outsourcing minor cases to AI. To this end, some courts are even considering using AI to determine eligibility for bail by detecting behavioral patterns that indicate flight risk, a decision traditionally made by flesh-and-blood judges.
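
To make the idea concrete, here is a minimal sketch, in Python, of the kind of flight-risk scoring such a tool might perform. Everything in it, from the features to the handful of training records to the choice of a simple logistic regression, is a hypothetical stand-in for illustration, not a description of any court's actual system.

```python
# A minimal, purely illustrative sketch of a flight-risk score.
# The features, records, and model are hypothetical stand-ins,
# not a description of any real pretrial risk-assessment tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records:
# [prior failures to appear, years at current address, open cases]
X_train = np.array([
    [0, 10, 0],
    [2,  1, 1],
    [1,  3, 0],
    [3,  0, 2],
    [0,  7, 1],
])
# 1 = failed to appear for a past court date, 0 = appeared
y_train = np.array([0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Score a new defendant; a judge would treat this as one input, not a ruling.
new_defendant = np.array([[1, 2, 1]])
flight_risk = model.predict_proba(new_defendant)[0, 1]
print(f"Estimated flight risk: {flight_risk:.0%}")
```

Even in this toy form, one design point stands out: the score is only as trustworthy as the historical records it was trained on, which is exactly the concern Richardson raises below.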

Courtrooms are likely to transform in other meaningful ways due to technological advances. Girardi believes courts might one day welcome AI technology to aid with jury selection. “If a civil dispute concerns a matter of fact—Did the doctor cut off the wrong leg?—today, it’s largely settled out of court,” says Girardi. “However, if a case concerns the interpretation of the law—Did he or she wait too long before performing a procedure? Did a doctor make a bad judgment call?—the case can go to trial, which is why the philosophical makeup of a jury is so important.”

AI could be valuable in a trial setting because it could predict that philosophical makeup. Adept at rapidly collecting information, it could gather data about potential jurors, including their accident history, whether they have served on a jury before, the verdicts in those trials, and their political and charitable affiliations. AI could also analyze facial reactions and body language that indicate how a potential juror feels about an issue. Before a potential juror even answers a question, the movement of their eyes, a change in skin coloration, or a shift in body positioning could nonverbally communicate an emotional response revealing a positive or negative bias. Such data could be used for optimal jury selection, facilitating greater fairness.
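
As a rough illustration of the data-aggregation half of that task, the sketch below scores hypothetical juror profiles with an invented heuristic. The profile fields, the weights, and the scoring formula are all assumptions made up for this example; they are not drawn from any real jury-consulting tool.

```python
# A purely illustrative sketch of aggregating juror data into a crude
# "leans toward the plaintiff" score. Fields, weights, and the formula
# are invented for this example.
from dataclasses import dataclass

@dataclass
class JurorProfile:
    name: str
    prior_trials: int            # prior jury service, number of trials
    plaintiff_verdicts: int      # of those trials, verdicts for the plaintiff
    accident_history: bool       # previously party to a similar accident claim
    relevant_affiliations: int   # memberships touching the case's subject matter

def plaintiff_lean(p: JurorProfile) -> float:
    """Crude heuristic between 0 and 1; higher = likelier to favor the plaintiff."""
    score = 0.0
    if p.prior_trials:
        score += 0.5 * (p.plaintiff_verdicts / p.prior_trials)
    if p.accident_history:
        score += 0.3
    score += 0.05 * p.relevant_affiliations
    return min(score, 1.0)

pool = [
    JurorProfile("Juror 14", prior_trials=2, plaintiff_verdicts=2,
                 accident_history=False, relevant_affiliations=1),
    JurorProfile("Juror 27", prior_trials=0, plaintiff_verdicts=0,
                 accident_history=True, relevant_affiliations=0),
]

for juror in sorted(pool, key=plaintiff_lean, reverse=True):
    print(juror.name, round(plaintiff_lean(juror), 2))
```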

In spite of such developments inside the courtroom, it remains hard to imagine trial lawyers being replaced by artificial intelligence. For now, the uniquely human ability to build empathy with jurors and judges alike makes lawyers indispensable to legal deliberations. But what if judges were one day replaced by robots? After all, we know humans are fallible creatures, prone to prejudices and biases.

Song Richardson, Dean of the University of California-Irvine School of Law, worries about just this possibility. “Why does someone become a lawyer or judge? It’s certainly not to become a cog in the wheel of an assembly line system of justice. The fact that we have backlogs resulting in a failure to give people the individualized attention they deserve tells us there’s something fundamentally wrong with our justice system. Expediting the mass processing of people using AI isn’t the answer. It’s the opposite of justice.”

Richardson believes AI can benefit the legal profession, yet she cautions us to be careful how we implement it. Even the best AI needs to be taught, which means it can only be as objective as the people who teach it. “People often view AI and algorithms as being objective without considering the origins of the data being used in the machine-learning process,” says Richardson, who specializes in the dangers of unconscious bias. “Biased data is going to lead to biased AI. When training people for the legal profession, we need to help future lawyers and judges understand how AI works and its implications in our field.”
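
Richardson's point is easy to demonstrate with a toy example. In the sketch below, a classifier is trained on invented historical labels that were harsher for one hypothetical neighborhood code regardless of the charge, and the model faithfully reproduces that disparity. The data, features, and model choice are assumptions made purely for illustration.

```python
# A toy demonstration of "biased data leads to biased AI."
# Features, labels, and the neighborhood code are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Features: [severity of charge (1-5), neighborhood code (0 or 1)].
# Neighborhood should be irrelevant, but the hypothetical historical labels
# below were harsher for neighborhood 1 regardless of severity.
X = np.array([[1, 0], [3, 0], [5, 0],
              [1, 1], [3, 1], [5, 1]])
y_biased = np.array([0, 0, 1, 1, 1, 1])  # past "high risk" designations

model = DecisionTreeClassifier(random_state=0).fit(X, y_biased)

# Two defendants with identical charges but different neighborhoods:
print(model.predict([[1, 0], [1, 1]]))  # -> [0 1]: the disparity is reproduced
```

Nothing in the algorithm is "prejudiced"; it simply learns whatever pattern the labels contain, which is why the provenance of training data matters so much.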

In spite of these concerns, Richardson still believes the net impact of new technology will be positive. Lawyers and judges are only as good as the information they receive, and AI has the potential to significantly increase the quality of information. Still, no matter how sophisticated the technology becomes, she and Girardi both agree it will never be a substitute for the judgment and decision-making only humans can provide. “AI isn’t going to replace the need for critical thinking. We still need to prepare students to think like lawyers, and I don’t think that’s ever going to change,” says Richardson.

Though no consensus yet exists as to how AI will ultimately shape the legal profession, we do know AI is poised to transform nearly every facet of our lives, and the new technologies it powers will create a host of unprecedented legal issues, including ownership, liability, privacy, and policing. For a taste of what's coming, just consider this: when self-driving cars start getting into accidents, who will be deemed responsible? The car owner? The manufacturer? The software designer? The very fact that these issues are so complicated, and about to be exacerbated by unprecedented technology, reveals the need for more lawyers, but not just any kind of lawyer. We need lawyers capable of making sense of our rapidly evolving society.

“What worries me is that we won’t have lawyers who understand algorithms and AI well enough to even know what questions to ask, nor judges who feel comfortable enough with these new technologies to rule on cases involving them,” says Richardson. In light of such valid concerns, it is increasingly clear that our law schools must prepare tomorrow’s lawyers to use the new technology. But even that isn’t enough. We also need today’s practicing attorneys and judges to grasp AI and all it promises, so they can better serve and protect our fellow humans.

Follow Neil Sahota on LinkedIn