What a difference a year makes: speech about AI by the Master of the Rolls
Legal Geek Conference
“What a difference a year makes”
15 October 2025
Sir Geoffrey Vos, Master of the Rolls and Head of Civil Justice in England and Wales
Introduction
It is amazing what a difference a year makes in the world of legal tech.
Last year, lawyers were generally in denial about the value of AI to their treasured profession. Now, they are piling into using Harvey, Legora, ChatGPT 5, CoPilot, Claude and Gemini and everything else they can find for every purpose under the sun.
Perhaps the most extraordinary thing about this change of heart is that, as a result, the sun has not actually fallen from the sky, and the moon has not turned to green cheese.
Lawyers are beginning to realise what was obvious to most of us from the start, namely that AI is just a tool like so many other tech tools we use every day. True, it is an important, innovative and useful tool, but, just like a chain saw, a helicopter or a slicing machine, in the right hands it can be very useful, and in the wrong hands, it can be super-dangerous.
I have recited the three core rules of AI for lawyers and judges in numerous speeches now. They are not rocket science. They are: first, that you need to understand what an LLM is doing before you use it; secondly, that you need to avoid putting private data into a public LLM; and thirdly, that you need to check what comes out of an LLM before you use it for any purpose at all. All that would be obvious to a qualified lawyer anyway.
What I think, though, has surprised many lawyers and judges is the number of things that AI can help us with every day, in order to save time and drudgery. The summarising capability that appears on all our screens every time we open a Word document is, perhaps, the greatest labour-saving device. An AI-generated summary does not mean we can avoid reading an important document, legal precedent or judgment, but it does give us easy access to the guts of it far more quickly than was ever available before.
What should AI be used for?
Can AI properly be used to generate legal advice for lawyers and judgments for judges? Yes, of course it can. But the big question of our age is about what it should be used for. I see no reason why AI should not be used to draft contracts and to research legal questions. Lawyers and clients should always check what it has done carefully before using it, but that is a different issue.
The ethical issue that has occupied my attention constantly since I last addressed this conference, though, is what we, as a society, think AI should be used for in the way of judicial decision-making. The answer to that question is truly difficult and potentially troubling for a whole host of reasons.
The answer is not obvious, because nobody can really tell me why AI should not be used to assess, for example, personal injury damages by reference to the numerous authorities found in the textbooks. That task would take an AI a couple of minutes, whilst the wait for a judicial hearing and determination might be more like two years.
Having acknowledged, then, that there may be some judicial decisions that people might really want to be made by machines, why should we baulk at allowing that to happen?
The answer there is threefold.
First, judicial decisions are the last resort for everyone in our society. If the decision is wrong, at least after an appeal, nothing can be done about it in most cases – Parliament is unlikely to change the law to reverse a run-of-the-mill AI-generated judicial decision as to personal injury damages.
Secondly, machines, even those sporting the much-vaunted artificial general intelligence when it comes, will arguably never be able completely satisfactorily to mimic a human’s emotion, idiosyncrasy, empathy and insight.
Thirdly, with an AI judicial decision, you will be getting something generated from the state of intelligence at a given point in time, without the application of developing human thought. That may be fine for a while, but where will it leave us in generations to come? There is a potential problem if we, as humans, become unable to second-guess or even check what the machine is suggesting or deciding. In that situation, it might be very difficult for human thought processes to influence the law of the future in the way that many people might think remained appropriate.
What should we do now?
None of this is, in any sense, science fiction. That is why I have argued for some time that we need a serious debate now, before it is too late, to consider: (a) what human rights people should have in the light of ever more capable AI, and (b) what, as a matter of consensus, humans want human judges, rather than machines, to decide in the future.
The first question as to human rights is probably one of the most critical present-day legal questions. It is whether a machine-made decision can ever be properly regarded as having been made by an “independent and impartial tribunal established by law” for the purposes of article 6 of the European Convention on Human Rights and Fundamental Freedoms. Some think so, but many more think not.
The second question is very accessible. What do we, as humans, want human judges, rather than machines, to decide in the future? What do we, as a society, want machines to decide about our lives in preference to human judges, and ought we to have a choice? Ought a criminal, before being sentenced, to be able to say that they want to be sentenced by a machine rather than a human, or vice versa? In China, judges already routinely use AI in that process. Do we want judges to feed the facts of our cases into an AI tool, to see what an AI tool, or even a range of AI tools, think the answer should be? Or would we rather stick with the grumpy old judge – or even the vibrant young judge – whose experiences may differ one from another, and whose idiosyncrasies we cannot predict, and which only the Court of Appeal can correct?
I urge all of you to think carefully about these questions.
Finally, before I end, let me say something briefly about a new project that has begun this year in the field of digital assets and digital trading.
The International Jurisdiction Taskforce
This year, we have started the International Jurisdiction Taskforce, comprising leading legal minds and central bankers from Japan, the EU, the USA, the UK, France, Singapore and Australia. The idea is to see how the private laws of these jurisdictions (obviously English law, not UK law; NY law, not USA law; and not EU private law) could be better aligned so that transactions involving digital assets on-chain are not impeded by a clash between unaligned private law systems and the conflicts of those laws.
The IJT project is at an early stage, but it will start by looking at the differences between those private laws and their fast-changing regulatory environments as of today. It will then try to see how and whether the UKJT’s legal statements on digital assets as property, securitisation of digital assets, and digital assets in insolvency are applicable more broadly.
I have said before on this stage that I think it is critically important that the law and the lawyers try to set the stage for effective and well-regulated cross-border digital trade using digital assets. We already have the Electronic Trade Documents Act 2023 and should shortly, Parliament permitting, have the Property (Digital Assets etc) Act. I hope the IJT will be another brick in the wall towards the widespread adoption of global digital trading.
Conclusion
It is always an honour to address this well-attended and engaged audience. Many thanks for the opportunity to have done so again this year. I believe that the voices of those implacably opposed to the use of technology in the law are reducing in volume.
I think that most of us in this room are committed to two significant objectives. First, we want to see that AI is used responsibly, effectively and safely in legal systems and processes. Secondly, I think we all want to see the continuing creation of an efficient, economic and expeditious Digital Justice System to resolve people’s many legal disputes online and out of court and, thereby, to create greater access to justice for all.
In delivering these two admirable objectives, we must make sure that we bring the entire legal community with us.
I believe, as I indicated at the start, that we have made great strides in that direction in the last year.
Thank you for listening.