The Rise of Legal Machines: Good or Bad?
Technology is becoming increasingly incorporated into, and relied on in, day-to-day life. The constant connectivity of most individuals means that we expect to access services on demand; good examples are how many of us now access television programmes and taxi services, and how we order and receive takeaway food.
Here Lawyer Monthly hears from Emma Stevens, Associate Solicitor for Dispute Resolution at Coffin Mew, who discusses ongoing advancements in legal tech and the subsequent benefits and risks of the AI world.
What is automation?
The immediacy that technology allows for has already impacted the way in which legal services are delivered, with consumers and clients expecting their legal services providers to be as connected and immediately available as they are.
Automation involves the use of technology to replace tasks currently undertaken by individuals; the technologies involved include computer programs, algorithms and robots. Amazon is a leading example: robots are estimated to make up 20% of its workforce. So, are humans in the workplace becoming redundant in a world of growing technology?
Consideration of, and the use of, automation and artificial intelligence in the workplace is becoming increasingly prevalent across a number of industries; sufficiently so that the Office for National Statistics (ONS) has seen fit to consider which jobs are at the highest risk of automation, and the Information Commissioner’s Office (ICO), along with many other bodies, has produced varying guidance and participated in consultations on the potential risks associated with more widespread implementation.
The automated future of the legal profession…or not
Opportunities for automation generally arise where routine and repetitive tasks can be carried out more swiftly and effectively by an algorithm than they would by a human being. This substitution is therefore more likely for tasks and roles which are process driven with consistent steps and outcomes, rather than more bespoke practice areas where the actions needed are more varied.
On 25 March 2019, the ONS released a report following an analysis in 2017 of 20 million jobs in England. The report focuses on occupations believed to be at the highest risk of automation and found around 1.5 million jobs which fell into the “high risk” category. In general, and perhaps unsurprisingly, those job roles which require particular training and education, which typically lead to higher skilled jobs (e.g. doctors and higher education teaching professionals), have a lower chance of becoming automated.
The ONS consider an occupation to be high risk if the probability of automation is above 70%; happily for those in the legal sector, the statistics suggest that the risks to “legal professionals” are well below this, at just 24.31%. As technology continues to evolve and becomes more affordable, it is evident that options and ideas for how this is used to benefit the delivery of legal services will continue to develop and with that evolution, the probability for the profession may change.
Understanding your client’s business and circumstances has always been key to delivering good client service, but it is unquestionable that there is now a significant level of demand from clients, particularly in the commercial sector, for professional advisers who understand the role which technology plays in their business. There is also a growing requirement that professional advisers will keep on top of trends and new developments in technology; this is rapidly becoming less something which differentiates firms and more something which is expected in the modern commercial world.
Benefits and risks?
For many, the concept of job automation automatically brings to mind the replacement of individuals in existing roles and, potentially, redundancies as a result. Although this is often how this concept is portrayed in the media, it is clear that technology is not yet at a stage to work completely autonomously.
As a common everyday example, the supermarket self-checkout demonstrates a number of the issues that can arise when automation requires customer or client interaction. These machines are far from perfect and remain prone to errors and difficulties, in some cases admittedly caused by user error, and are generally overseen by employees as part of their job roles.
For the legal profession, if the existing challenges can be overcome, then there could be benefits to the automation of certain roles. It could lead to a reduction in the likelihood of errors for certain types of work; the ability to speed up routine and repetitive tasks, such as due diligence, disclosure and research; and, perhaps, as a result, it could provide the opportunity for professionals to focus on more complex, bespoke work and advice, where automation cannot assist, which may allow for higher levels of productivity overall.
Whilst the benefits show potential for the profession, there are also clear risks associated with automation which need to be considered as part of any implementation. The initial coding and programming of systems is completed by humans, and there remains a risk of human error.
AI cannot empathise in the way that humans can, and automated systems do not yet deal particularly well with ambiguous scenarios, which many in law come across on a daily basis. There is also a question as to whether AI is able to ask relevant questions in all scenarios and to consider external factors – algorithms are, to a large extent, only as good as their programming.
The other key risk which causes concern for those considering automated systems is the risk of data breaches. The relatively recent implementation of the General Data Protection Regulation (“GDPR”) is at the forefront of most practitioners’ minds, with the ICO reporting 4,056 data security incidents in 2018/2019. One particular consideration from this angle is how an autonomous system would identify and report a data breach in the event that the automated process failed in some way. The immediate answer is that some human oversight would still seem to be needed.
In January 2019, the European Commission’s expert group produced ethical guidelines surrounding the use of AI and its potential governance and challenges. The ICO has recognised that there is currently a lack of public trust in AI generally, highlighting: “the importance of a shared ethical framework underpinning the international landscape of AI governance”. There is a clear view that trustworthy and ethical AI could complement existing services, but that the issue of public trust in AI systems would first need to be overcome, and clear governance and regulatory systems would need to be implemented, in order for this to happen.
The good news for lawyers is that, based on current reports and the risks of automation, it seems unlikely at present that we will be replaced completely; the costs, risks and existing challenges remain unresolved.
It seems more likely that firms will begin, or continue, to implement autonomous processes as they become available to complement and assist with the work conducted by humans in the profession, and that these systems will have limited direct client contact, if any. If the recent ethics consultation is anything to go by, there is still some way to go before automation and AI become more widespread and the current position changes significantly.