
AI proclamation is by verifying that information themselves. So while there may be some operational gains in time or cost savings when relying on AI to take over menial tasks, these benefits may be counteracted by the need for a human element: a user who checks and verifies all of the AI’s outputs.

A related concern inherent in language processing tools is their bias – something that even the best fact-checker may not be able to mitigate. How a language processing tool is trained determines the information it outputs. This means that the people who create the tool, and the decisions they make about where and how the training information is sourced, are critical to what a user receives. This bias may not necessarily be malicious, but it will be present – especially when the tool is used to deliver ‘opinions’ or make human-like decisions. There may well be future regulatory requirements that firms will have to adhere to around the use of language processing in law, to tackle the difficult task of eliminating bias.

Accuracy and bias concerns also go hand in hand with ethical considerations. Lawyers must serve the best interests of their clients – can they still do so if they rely more heavily on AI to deliver content and complete tasks? And what does it mean for the profession as a whole if lawyers spend their time fact-checking the work done by language processing tools? Firms and their lawyers go through rigorous training and are bound by strict regulations. They have an ethical obligation to uphold professional standards; ChatGPT does not. But it is the firms themselves that will be held liable if content from ChatGPT is used inappropriately. The malpractice implications could be huge.

Implications for Client Confidentiality

Firms must keep their clients’ data confidential and secure. This is an existential obligation; data mishandling or misuse could violate data protection laws or industry codes of conduct. The problem with AI tools is that users often do not know what is happening with the data they input. Relinquishing control of data in this way is a risk that firms really shouldn’t take.

Before using any AI tool to assist with legal work, firms should understand exactly how inputted data is processed and used. Where is this data stored? Is it shared with third parties? What security systems are in place to ensure that the risk of data leaks is minimised?

Firms already have multiple systems and processes in place to protect their clients’ data, with separate approaches for data stored on premises, in the cloud and across multiple devices. With the introduction of AI tools, it is no longer enough for firms simply to secure their own infrastructure. Are there processes in place to protect specifically against a data leak or misuse of data by AI technology? Firms might want to consider how their digital communications policies

Should law firms and their employees use ChatGPT and generative AI for legal work? “Yes”, replied 51% of the survey’s respondents.
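To make the data-protection questions above slightly more concrete, the sketch below is a purely hypothetical illustration, not a process described in this article, of one narrow safeguard a firm might consider: automatically redacting obvious client identifiers from a document before any text leaves the firm’s systems for an external AI tool. The function name, the patterns and the example text are invented for illustration, and regex-based redaction alone would fall far short of a real confidentiality policy.

    import re

    # Hypothetical illustration only: a minimal pre-processing step a firm might
    # run before any text is sent to an external AI tool, so that obvious client
    # identifiers never leave the firm's own systems. A real policy would need far
    # more than regex-based redaction (named-entity recognition, human review,
    # contractual controls with the vendor, and so on).

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact(text: str, client_names: list[str]) -> str:
        """Replace e-mail addresses, phone numbers and known client names."""
        text = EMAIL.sub("[REDACTED EMAIL]", text)
        text = PHONE.sub("[REDACTED PHONE]", text)
        for name in client_names:
            text = re.sub(re.escape(name), "[REDACTED CLIENT]", text, flags=re.IGNORECASE)
        return text

    # Example with fictional client details: nothing identifying survives redaction.
    draft = "Acme Holdings' CFO (jane@acme.example, +44 20 7946 0000) asked about the merger."
    print(redact(draft, client_names=["Acme Holdings"]))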
