20th Annual Law Reform Lecture
The Right Hon. Sir Geoffrey Vos
Bar Council of England and Wales
Thursday 21 June 2023
- I am grateful to the Bar Council for inviting me to deliver this short lecture. My instructions told me that I was to speak about the role of Artificial Intelligence in the law, the future of law in a virtual world and the modernisation of courts in a changing landscape. I fear that if I were even to attempt to cover those three topics, we would all be here a lot longer than was anticipated. But here goes …
- When I started at the Bar in 1977, technology was a typewriter, a bottle of Tippex and the occasional revolutionary telex machine. We have come a long way. We started with word processors, moved to fax machines and personal computers, then on to the internet, digitisation, the blockchain and artificial intelligence, and now towards the metaverse, quantum computing and the decentralised Web 3.0. That has been quite a journey. In many ways, the latest developments are coming so fast, compared with the earlier ones I mentioned, that they are harder to get to grips with.
- Let me start by stating a few guiding principles, from a legal and judicial perspective, as to the adoption of new technologies generally.
- First, I think we all owe a duty to those we serve – namely citizens and businesses here in England and Wales – to make constructive use of whatever technology is available if it helps to provide a better, quicker and more cost-effective service to clients, if you are a lawyer, and to provide a better, quicker and more cost-effective dispute resolution process if you are a judge.
- Secondly, it is an integral part of the adoption of new technologies that we need to do all we can to protect the very same citizens and businesses from their adverse effects. That means that, where appropriate, we need to promote effective regulation, rule-making, data protection, the protection of confidential material, and the minimisation of cyber-crime. All these present risks to the communities we serve to a greater or lesser extent.
- But the third principle, I believe, is that none of these risks means that we should forsake new technologies and the benefits they bring just because they offer risk or challenge to the way things were always done. If we were to do so, we would not be properly serving either the interests of justice or access to justice or the interests of citizens and business who have a right to live in a democratic society governed by the rule of law where the ability to vindicate legal rights quickly and effectively is central to that rule of law.
How then will new technologies affect legal practice?
- I have been vocal in the past few years about the impact of digital technologies and smart contracts on the issues judges will have to decide and on the subjects upon which lawyers will be asked to advise. In the three short months since GPT-4 was released by OpenAI on 14 March 2023, everyone has been sounding off about how generative AI in its current and future iterations will affect legal practice. The answer is obviously “a lot”. Goldman Sachs estimated last week that generative AI could automate 44% of legal tasks in the US. It is hard to see why it should be different here in the UK. Though, I must confess to being somewhat bemused by the seemingly precise figure that they have identified – how can we possibly know quite so exactly?
- ChatGPT itself says that its most valuable uses for lawyers are to assist them with (a) drafting, (b) document review, (c) predicting case outcomes to inform strategy, and (d) settlement negotiations. We all now know, weeks after GPT-4 was released, about the risk that generative AI will produce inaccuracies and downright false information. This has been epitomised in two recent cases in New York and Manchester, about which I spoke in some detail in a speech to the Law Society of Scotland last week. But I am more interested today in considering how AI might be truly useful.
- Let us consider first predicting case outcomes, before we turn to the creation of advice and transactional and court documentation.
- Current generative AI is capable of accessing a large proportion of the data on the internet. That makes it obvious, I would have thought, that any litigation client would want to know, if they could, what it thought as to their prospects of success. The putative client would also obviously want to know what their human lawyers thought. Since the AI has access to more and different data than the humans, its opinion would at least be worth taking into consideration. It is also perhaps likely that specialist legal AIs, such as Spellbook, will provide more accurate and reliable predictions than non-specialist programmes such as ChatGPT. Will the use of AI to predict litigation outcomes materially reduce the work needed to be done by lawyers? I doubt it. Similar, if less sophisticated, technology has been in use for years.
- I think the same question in relation to AI-created legal advice and legal documentation needs to be set against a background. I have for some time warned that lawyers have something of an unhealthy fixation with analogue programmes such as MS Word and PDF. They have, as yet, generally been reluctant to utilise machine-readable documentation that could already have revolutionised their work. Lawyers may be somehow determined that the information their documents contain should be created from scratch each time, missing out on the potential benefits of a machine-readable format. Perhaps there is a thought that having transferrable data, which allows even the possibility of computed solutions, could never be of immediate relevance in their practice area. But one of the things ChatGPT will have brought into focus is that retaining the natural language approach will not make you as immune from all this computing as you once thought. Whilst there is clearly some way to go, it is obvious that data can be drawn from analogue text.
- Moreover, the benefits of machine-readable documentation are that you can obtain data from a far greater variety of sources, that data fields offer you solutions at every stage, and that they allow you later to draw comprehensive data-driven conclusions from what has been done.
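The contrast between analogue text and machine-readable documentation can be sketched in a few lines of Python. The clause and its field names below are invented purely for illustration; no real contract standard is assumed.

```python
import json

# Hypothetical example: the same lease clause held as free text and as
# machine-readable data. Field names are illustrative, not a real standard.
free_text = "The rent of £1,200 per month is payable on the 1st of each month."

clause = {
    "type": "rent",
    "amount": 1200,
    "currency": "GBP",
    "period": "monthly",
    "due_day": 1,
}

# With structured data, computed solutions fall out directly...
annual_rent = clause["amount"] * 12

# ...and the data can still be rendered back into natural language.
rendered = (f"The rent of £{clause['amount']:,} per month "
            f"is payable on day {clause['due_day']} of each month.")

print(annual_rent)          # 14400
print(rendered)
print(json.dumps(clause))   # transferrable between systems as data
```

The free-text version carries the same information, but a programme must first extract it before it can compute with it; the structured version is immediately usable and transferrable.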
- The problem with using AI to produce submissions or draft contracts, for example, is not nowadays that it cannot do so in human language. It can. The problem is the accuracy and appropriateness of what is produced.
- So far as submissions are concerned, regulators and the rules committees, including the new Online Procedure Rules Committee that I am chairing, will need to consider whether there should be any rules concerning the use of AI in the synthesis of submissions. But one thing is certain. That is that any lawyer using an AI to help write their submissions will need to check them very carefully, because it will be they who are responsible for their accuracy, not the AI. Lawyers may find that the time needed to check the work of an AI is greater in some cases than writing the submissions themselves.
- I would expect AIs to be more useful in providing legal research and making sure that things are not missed in the lawyers’ preparatory work.
- I also think that large language models like ChatGPT with access to huge swathes of the internet are likely to be of less use to lawyers than either (a) the specialist legal AIs I have already mentioned, or (b) AIs with access to a more limited store of data.
- For example, an AI with access to the White Book, the National Archives case law database, BAILII, Westlaw, and Lexis Nexis, but unable to scrape the bulk of the internet, is more likely to create accurate material for lawyers to use.
- Will the ability of generative AI to create documents and submissions reduce that kind of work for lawyers? In time, it very likely will. GPT-4 is certainly not the last iteration, and future iterations, particularly specialist iterations, will be better and more useful. Lawyers will need to use them. It is likely, though, that other jobs for human lawyers will be created in explaining, adapting and controlling the privacy of what AI produces.
Use of AI within the Digital Justice System
- As many of you will know, we are introducing in England and Wales a digital justice system that will allow citizens and businesses, at its first tier, to go online to be directed to the most appropriate online pre-action portal or dispute resolution forum. It will also hopefully provide users with what the Lord Chancellor calls “ELSA” or “Early Legal Services and Advice”. Some of that early legal advice will undoubtedly be provided by AI, drawing on a limited database of quality-assured materials to ensure its accuracy. We all know that diagnosing the nature of the problem and an initial marshalling of the facts can be assisted by AI. We have all been faced with AI driven chatbots that, without great sophistication, steer you towards the obvious answers and identify where human intervention is needed. These processes will have a role in the digital justice system.
- I acknowledge also, of course, that some of that ELSA will need to be provided by real lawyers, so that the creation of this first tier of the digital justice system will need to be a partnership between MoJ, HMCTS, the judiciary, the OPRC and the legal profession.
- The second tier of the “funnel” of the digital justice system will consist of a range of ombuds and pre-action portals, many of which are already driven by AI: each will use available mechanisms to bring about resolution without the need for legal proceedings. Examples include the Official Injury Portal (the whiplash portal), ACAS, the FOS, and hopefully an SME portal that is now under active consideration.
- It is only if resolution is not achieved at the second tier, that the case data will be transmitted by API into the third tier, the court-based part of the digital justice system, now being created by the HMCTS reform programme for civil, family and tribunals cases.
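At bottom, an API handover of the kind just described is structured case data crossing a boundary between systems. The sketch below is purely illustrative: the field names and payload shape are invented, and no real HMCTS or portal schema is assumed.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative sketch only: the kind of case data a second-tier portal
# might hand over by API if resolution fails. All field names are invented.
@dataclass
class CaseHandover:
    portal: str                  # which second-tier service the case came from
    claim_type: str
    amount_in_dispute: int       # in pounds
    resolution_attempted: bool

case = CaseHandover(
    portal="whiplash-portal",
    claim_type="personal-injury",
    amount_in_dispute=3500,
    resolution_attempted=True,
)

# Serialised, this is the payload an API call would carry into the
# court-based third tier, with no re-keying of the facts.
payload = json.dumps(asdict(case))
print(payload)
```

The point of the handover is that the facts marshalled at the second tier travel with the case as data, rather than being re-entered when proceedings begin.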
- As I see it, AI will be used at every stage of the digital justice system: in giving ELSA, to diagnose the problem in simple cases, to enable everyone to be fully informed of every stage of the process that is being undertaken, to help people understand and interrogate complex sets of rules and instructions, and also, perhaps, to take simple decisions at different stages of the resolution process.
- As for robo-judging, the controls that will be required are (a) for the parties to know what decisions are taken by judges and what by machines, and (b) for there always to be the option of taking a case to appeal to allow it to be scrutinised by a human judge. The limiting feature for machine-made decisions is likely to be the requirement that the users have confidence in that system. There are some decisions – like, for example, intensely personal ones relating to the welfare of children – that humans are unlikely ever to accept being decided by machines. But in other kinds of disputes, such as commercial and compensation disputes, parties may come to have confidence in machine-made decisions more quickly than many might expect.
The future of law in a virtual world
- None of what I have described thus far is likely – overall – to lead to less work for lawyers. Some tasks will change as generative AI and other technologies make legal research and contract preparation easier. But generative AI is just one more technology. The lives of all our citizens have become more, not less, complex and will continue to do so. Greater complexity necessitates more advice and more simple explanations. Moreover, humans in general and lawyers in particular will require training to be able to interact effectively with AIs.
- Mr Steven Schwartz, the lawyer in New York who got into trouble using ChatGPT to prepare his submissions, found out, to his cost, that one does not always get entirely reliable AI answers to human questions. Mr Schwartz had asked ChatGPT if the case of Varghese, which apparently supported his submissions, was a real case. The AI replied in the affirmative. He then asked for its source, and the AI said “Upon double-checking, I found that the Varghese case does indeed exist”. When asked if the other cases it had provided were fake, it said “No, the other cases I provided are real and can be found in reputable legal databases”.
- The thing here is that bits of what ChatGPT had said about the case were real – for example, the reference. What Mr Schwartz would have learnt from this was that you need to ask AI much more granular questions if you want to be able to rely on the answers. So, he ought perhaps to have asked whether the case of Varghese was reported at such and such a reference, whether it positively decided such and such a point, and whether the words quoted were used by the judge in that case.
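The granular checks just described can be expressed as a simple routine: rather than asking an AI "is this case real?", each specific claim is tested against a trusted source. In the sketch below a plain dictionary stands in for a quality-assured index such as a law report database; the case data in it is invented for illustration.

```python
# A sketch of granular citation verification. The index and its entries
# are invented; a real system would query a trusted law report database.
TRUSTED_INDEX = {
    "[2001] 1 WLR 100": {"name": "Smith v Jones", "holding": "limitation point"},
}

def verify_citation(name: str, reference: str, claimed_holding: str) -> list[str]:
    """Return the granular checks that FAILED (an empty list means verified)."""
    failures = []
    entry = TRUSTED_INDEX.get(reference)
    if entry is None:
        # The most basic check: is anything reported at that reference?
        failures.append(f"no case reported at {reference}")
        return failures
    if entry["name"] != name:
        failures.append(f"{reference} reports {entry['name']}, not {name}")
    if claimed_holding not in entry["holding"]:
        failures.append("holding does not match the report")
    return failures

# A fabricated citation fails at the first, most basic check:
print(verify_citation("Varghese v China Southern Airlines",
                      "[2019] 3 F.3d 925", "tolling of limitation"))
```

The design point is that each question is narrow enough to be answered from the authoritative record, so a fabricated case cannot survive the sequence of checks the way it survived a single "is this real?" question.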
- Lawyers are not going to get away without using AI for the benefit of their clients – whether it is for legal research, predicting outcomes or undertaking negotiations. Clients will not pay for things that are available free, so lawyers will need to adapt quickly to the world of AI, the metaverse, and the decentralised Web 3.0, but that will perhaps need to be the subject of another lecture.