Article by Andrew Chan, guest contributor
Having set my sights on a career in law, I was taken aback when Yuval Noah Harari described lawyers as part of a “useless class” of workers whose jobs will eventually be taken over by artificial intelligence. What follows is the curiosity, research and predictions that stemmed from this potentially forlorn prospect.
AI was born at Dartmouth College in 1956, at a summer workshop championed by John McCarthy and his colleagues. At this early stage, their goal was to create a computer that could function on its own, like a young child learning new processes and solving basic problems. A lot has happened since then, of course. AI is already (perhaps unbeknownst to you) embedded throughout society – in the recommendations for your online shopping list, or in voice assistants such as Siri and Alexa. With such large strides taking place in the relatively early stages of artificial intelligence, its potential appears boundless, and it is pressing ever further into the legal sphere.
A sub-category of AI is “machine learning”, and it is only a matter of time before this particular facet becomes instrumental to firms around the world. Why? To illustrate, it is useful to borrow Ronald Dworkin’s idea of a Herculean judge. According to Dworkin, this fictitious judge can recall every judicial decision that has ever existed and decide cases in the most morally coherent way possible. He will always arrive at the “right answer,” especially in hard cases.
There are parallels between this Herculean ideal and a machine’s ability to learn. As McCarthy intended, machines learn like toddlers encountering new things; a toddler formulates the idea of a dog from the various encounters it has had in parks or pet shops. A machine, much like the toddler’s mind, is fed encounters (data) and adjusts itself so as to achieve its goal – in this case, identifying dogs. Both machine and human can obtain information, process it, and make further decisions based on that information and their respective understandings.
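The “fed encounters” idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of supervised learning – a nearest-neighbour classifier with invented toy features (height and weight) – not any particular system described in the article.

```python
# Toy illustration of learning from "encounters": the machine stores
# labelled past examples and labels a new one by its closest match.
# The features (height_cm, weight_kg) and data are invented.

def nearest_neighbour(train, query):
    """Label a new encounter by its closest past encounter (1-NN)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], query))
    return label

encounters = [
    ((60, 25), "dog"), ((55, 20), "dog"),   # park visits
    ((25, 4), "cat"),  ((23, 5), "cat"),    # pet-shop visits
]

print(nearest_neighbour(encounters, (58, 22)))  # prints "dog"
```

The more encounters the machine is fed, the finer the distinctions it can draw – which is precisely where its Herculean advantage over the toddler begins.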
Where the two start to differ is in their capacity to learn. Both gain knowledge as time progresses, but the machine has the Herculean ability to draw on examples from across the world and from any period in history; the rate at which it can learn therefore far surpasses any human’s. Humans are susceptible to myriad frailties that make them inferior agents: we forget things, we get distracted, and we need ample sleep just to function properly, to name but a few. Judges strive to be impartial, but machines can be more so. In ‘Life 3.0,’ Max Tegmark coins the term “robojudges,” which could learn to “transparently [apply] the law in a truly unbiased fashion” whilst maintaining “the same high legal standards to every judgement without succumbing to human errors.” With such exceptional capabilities, a machine’s judicial decisions promise to be of higher quality.
Having outlined AI and its ability to learn, you may already have flirted with ideas of how it might influence the legal world. I have chosen a two-pronged approach to assessing what would happen were a firm to adopt an AI system: (1) effectiveness and (2) efficiency. Firms hire large numbers of employees to carry out research and find nuances within the law. Given AI’s learning ability, a costly one-time investment in an AI system may, from a firm’s point of view, prove effective in the long run: the yearly wages of thousands of workers over, say, a 10-year period amount to a tremendous sum – enough to pay for multiple AI systems. Costs would be lower, and for the public sector in particular this could be a game-changer.

However, the case for effectiveness is not so clear-cut. What if we become so heavily reliant on the system that we no longer remain in control? After all, we are all aware of computers being hacked or bugged. There are also obvious concerns regarding privacy. The dilemma is that the more privacy we keep, the less information is available to the computers – and with less information, the system’s ability to make the ‘fairest’ possible decision is compromised. So should we really allow AI into every aspect of our lives to obtain the best possible results? To what degree should we permit its invasion of our privacy? This scenario looks like an awfully dark, 1984-esque future, and AI’s antagonists will continue to offer a never-ending line of questions of this nature.
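The cost comparison behind the effectiveness argument can be made concrete with back-of-the-envelope arithmetic. Every figure below is an invented assumption for illustration, not data from any firm.

```python
# Break-even sketch: one-time AI system cost plus annual maintenance,
# versus the cumulative wages of the researchers it would replace.
# All figures are hypothetical.

def break_even_year(system_cost, annual_maintenance, staff, salary):
    """First year in which cumulative AI cost drops below cumulative wages."""
    for year in range(1, 51):
        ai_cost = system_cost + annual_maintenance * year
        wage_cost = staff * salary * year
        if ai_cost < wage_cost:
            return year
    return None  # never breaks even within 50 years

# Hypothetical: a £2m system with £100k/yr upkeep, replacing
# 20 researchers on £50k each.
print(break_even_year(2_000_000, 100_000, 20, 50_000))  # prints 3
```

On these made-up numbers the investment pays for itself within a few years – which is why, over a 10-year horizon, the one-time cost looks small against the wage bill.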
Having said that, the implementation of an AI system would undisputedly increase efficiency. Time is the biggest factor holding firms back – just think how time-consuming contract research and analysis is; an AI system can do it in a matter of seconds. But would a firm really want this? Firms seek to maximise profits, and they charge for precisely these tasks. By dramatically cutting the time needed to carry out the same tasks, firms will no longer be able to bill copious hours for them. Ultimately, a reduced profit margin (to the benefit of the client) may have to be accepted in order to avoid falling behind competing firms that offer the technology.
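To see why contract review is so fast for a machine, consider that scanning every clause for risk terms is a single linear pass over the text. The clause texts and the risk-term list below are invented for illustration; real contract-analysis tools are far more sophisticated.

```python
# Crude sketch of machine contract review: flag any clause
# containing a term from a (hypothetical) risk list.

RISK_TERMS = ("indemnify", "unlimited liability", "auto-renew")

def flag_clauses(clauses):
    """Return (index, clause) pairs containing any risk term."""
    return [(i, c) for i, c in enumerate(clauses)
            if any(term in c.lower() for term in RISK_TERMS)]

contract = [
    "The Supplier shall indemnify the Client against all losses.",
    "Payment is due within 30 days of invoice.",
    "This agreement shall auto-renew for successive one-year terms.",
]
print(flag_clauses(contract))  # flags clauses 0 and 2
```

A machine performs millions of such checks per second across thousands of documents – work that a junior lawyer would bill by the hour.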
The scales are very much at the tipping point of an industry shift in favour of AI. I am pessimistic about the future, as it seems only a matter of time before AI is fully developed and extensively implemented. For my sake, I just hope that it takes a long time – but then again, that would be optimistic.