AI and the Law

By Lewis Wright


Artificial intelligence has by now been adopted across most industries, with the most notable (and useful) applications being spam filters, smart personal assistants, and security surveillance.


Whilst Siri is great, the ramifications of AI have left a large number of people without jobs, as an ever-looming trend of replaceability overshadows the workforce. For example, I am sure we have all used a self-scan checkout or spoken to an automated call centre. These small changes can make a huge difference to some workers, as they can mean the difference between unemployment and a full wage.


But surely the legal sector is far too advanced to be so easily supplanted? Many within the profession certainly seem to think so, with over 90% of people surveyed stating that they did not know their firm used AI.


This assumption, whilst perhaps well founded given the profession's innately archaic nature, is completely false; the shift has arguably already begun through certain avenues. One such example is e-discovery, where machine learning streamlines an otherwise labour-intensive task. Slaughter and May and the University of Cambridge developed one such system, Luminance, which models how solicitors think and draws out key findings without needing to be told what to look for.
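Luminance's internals are not public, so the following is only a minimal sketch of the same flavour of unsupervised review: rather than being told what to look for, the program scores each contract clause against the rest of the corpus and flags the ones that stand out. The clauses, function names, and threshold here are all invented for illustration.

```python
# Illustrative sketch only: unsupervised flagging of unusual clauses.
# No rules are hand-written; a clause is flagged simply because it is
# dissimilar to everything else in the document set.
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Compute simple TF-IDF vectors for a list of tokenised documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency of each term
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def flag_outliers(clauses, threshold=0.1):
    """Flag clauses whose mean similarity to the others falls below threshold."""
    vecs = tf_idf_vectors([c.lower().split() for c in clauses])
    flagged = []
    for i, v in enumerate(vecs):
        others = [cosine(v, w) for j, w in enumerate(vecs) if j != i]
        if sum(others) / len(others) < threshold:
            flagged.append(clauses[i])
    return flagged

contracts = [
    "the supplier shall deliver the goods within thirty days",
    "the supplier shall deliver the services within thirty days",
    "the buyer shall pay the supplier within thirty days",
    "liability is unlimited for indirect and consequential losses",
]
print(flag_outliers(contracts))  # the unusual liability clause is flagged
```

A production system would of course use far richer language models than raw TF-IDF, but the design choice is the same: the "key findings" emerge from the statistics of the documents themselves, not from a predefined checklist.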


The introduction of such processes has certainly streamlined certain menial tasks, but this can be devastating for staff, especially those in lower grades. Some commentators suggest that those completing standardised legal work will be made obsolete, resulting in fewer traditional legal roles and more technology-specific ones, such as legal technicians.


Roles will almost certainly decline, but new roles will emerge that may go some way towards sustaining employment. For example, the rise of AI elsewhere (self-driving cars, robot surgeons, factory automation, and so on) poses new, difficult, and interesting legal questions that have never before been explored. And when new challenges are posed, AI faces a barrage of its own, as it relies primarily on previous situations to run its algorithms. Whilst these problems have their workarounds, for the time being human ingenuity prevails.


A better example of a distinctly human strength is the principle of stare decisis, which requires that similar cases be decided similarly. “While this doctrine puts the focus squarely on reasoning from case to case, it is silent on how ‘similarity’ should be determined. In fact, similarity is not static; it can depend on one’s viewpoint and desired outcome.”
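That viewpoint-dependence can be made concrete with a small, entirely hypothetical sketch: the same new case can resemble different precedents depending on which features each side chooses to weight. The cases, features, and weights below are invented for illustration.

```python
# Hypothetical sketch: "similarity" between cases depends on feature weighting.
def weighted_overlap(a, b, weights):
    """Similarity = total weight of the features two cases share."""
    return sum(weights.get(f, 0) for f in a & b)

precedents = {
    "Case A": {"breach of contract", "written agreement", "consumer"},
    "Case B": {"breach of contract", "oral agreement", "business"},
}
new_case = {"breach of contract", "oral agreement", "consumer"}

# A claimant stressing the consumer relationship weights that feature heavily...
claimant_view = {"breach of contract": 1, "consumer": 3, "oral agreement": 1}
# ...while a defendant stressing the form of agreement weights that instead.
defendant_view = {"breach of contract": 1, "consumer": 1, "oral agreement": 3}

def closest_precedent(weights):
    """Return the precedent most similar to new_case under these weights."""
    return max(precedents, key=lambda c: weighted_overlap(new_case, precedents[c], weights))

print(closest_precedent(claimant_view))   # Case A
print(closest_precedent(defendant_view))  # Case B
```

Both answers are "correct" under their own weighting, which is precisely why a doctrine built on similarity resists being reduced to a fixed algorithm.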

So the question arises: why bother implementing changes which may devastate the current workforce? Well, as it turns out, there are many reasons. One is that AI is less likely to make mistakes or fall prey to ‘human error’. In 2018, LawGeex, Duke University, and Stanford University pitted 20 highly trained U.S. lawyers with decades of experience against an AI. The task was reviewing an NDA. The AI did this in 26 seconds with an accuracy rate of 94%; the lawyers took an average of 92 minutes and achieved only 85% accuracy. The choice of candidate is a no-brainer, no?


The arguments for and against AI in any sector are numerous, but those engulfed in the legal occupation are opposed to change in more ways than one. And so we reach the biggest issue: public acceptance. This kind of technology is not yet trusted by the wider public, especially in situations that carry such life-altering consequences. As machines become ‘normal’, some room for error is slowly becoming acceptable, but the most misguided assumption in the legal sector is that AI is useless if it is anything less than perfect.

