Speech by the President of the King’s Bench Division: The Mayflower Lecture 2025
Dame Victoria Sharp, President of the King's Bench Division
The rise of machine learning and generative AI (GenAI) tools – especially large language models (LLMs) – introduces opportunities and potential risks into the legal process. AI, when properly directed, raises the real prospect of increased efficiency and access to justice. On the other hand, its opaque “black box” decision processes are capable, if not properly regulated, of undermining core legal values underpinning the legitimacy of the law. The views of Sir Edward Coke provide a useful prism through which to consider some important questions about the role of judicial reasoning in the age of AI in common law systems.
Artificial Reason and Algorithmic Process: Judicial Reasoning from Coke to the Age of AI
Introduction
It is a great pleasure to be here this evening to deliver the Mayflower lecture to this distinguished audience, in the City of Plymouth.
When the Mayflower set sail in September 1620, almost 405 years ago, it had many things on board apart from the 135 or so souls who comprised its intrepid passengers and crew. Space was cramped: its gun deck, where the passengers had their living quarters, measured only twenty feet by eighty feet. The main deck, where no doubt some slept when the weather permitted, was only marginally bigger. Underneath, in the hold, was everything the Pilgrims thought they could fit into this small vessel for their new life – conscious as they would have been that previous attempts to settle in the New World had seen most of the earlier settlers die of starvation in their first winter. So they carried everything they could fit in that would be needed for the journey and for their new lives: tools, provisions and, obviously, food – including some live animals – and weapons.[i]
So it is not without interest that, amongst the things on board, there were some legal works of Sir Edward Coke (spelled Coke but pronounced Cook).
Now by any measure, Sir Edward was a most remarkable man.[ii] He lived a long life, between 1552 and 1634. His career in politics and the law spanned the reigns of three monarchs: Elizabeth I, James I and Charles I, and he managed to die peacefully in his own bed, not an insignificant achievement for someone navigating the politics of ambition, patronage and advancement in the Royal Court of the day.
Coke was a great legal scholar, a successful barrister, and a distinguished judge and parliamentarian. He was celebrated, during his lifetime, and venerated after his death, particularly in the United States, but on this side of the pond too, as someone who has had a profound influence on the underpinnings of our constitution and the development of the common law.
“All rising to great place is by a winding stair” wrote Sir Francis Bacon, Coke’s great rival throughout his life, in his famous essays.[iii] And never was a truer word said of those two great men.
There are many in this audience who are not lawyers and who may not be familiar with Coke’s life and times, so let me mention just a few things about him, some illustrious, some less so.
Coke became Chief Justice of two of the courts dispensing justice in the reign of James I. In 1606 he became Chief Justice of the Court of the Common Pleas, which dealt with actions between subjects; and in 1613, he was moved – with reluctance on his part – to become Chief Justice of the King’s Bench. This was, in theory, a more prestigious position, involving cases of interest to the King. Coke remained CJ for three years, until he was removed from judicial office and returned to politics.
On his way up “the winding stair,” Coke had had a stellar career at the Bar and rose rapidly in public service. He became a Member of Parliament in 1589 and was elected Speaker of the House of Commons by 1593. He was, for a short period, the Attorney General – notoriously prosecuting with great vigour – indeed brutality – some of the great treason trials of the day. Three in particular merit a mention: the trial of the Earl of Essex, the trial of Sir Walter Raleigh and the trials of the Gunpowder Plot conspirators, including Guy Fawkes (for those of you with an historical bent, the transcripts are worth reading: certainly, his manner of cross-examination was extraordinarily strong stuff, even for its day).
Between times, as a judge, Coke decided a large number of important cases, many of which are still relevant today. In Dr Bonham’s case,[iv] Coke proclaimed that laws against common reason are void – a decision thought to have been highly influential in the assertion by the US Supreme Court in Marbury v. Madison in 1803[v] of its powers of constitutional judicial review. In the Case of Proclamations,[vi] Coke, then Chief Justice of the Common Pleas, together with his fellow Chief Justices, famously held that an attempt to alter the law of the land by the use of the Crown’s prerogative powers was unlawful, concluding that “the King hath no prerogative, but that which the law of the land allows him”. In other words, the limits of prerogative powers were set by law, not the Monarch and were determined by the courts. This passage was cited over 400 years later, in Miller No 2,[vii] when the Supreme Court decided in 2019 that the decision of the Prime Minister of the day, to advise Her Majesty to prorogue Parliament, was unlawful because it had the effect of frustrating or preventing the ability of Parliament to carry out its constitutional functions without reasonable justification.
Coke was also a prolific author. He wrote the Institutes of the Lawes of England, a series of legal treatises published in four parts which many see as the foundational document of the common law.[viii] And in 1628, when the political tensions between Parliament and the Monarchy were at their height, Coke introduced into Parliament the Petition of Right. This Act, still in force, sets out some of our fundamental liberties, in confronting the belief of Charles I that he could govern by the Royal Prerogative without the advice or consent of Parliament.[ix]
Coke was no plaster saint, however. In the interests of balance, I should refer to the view of one distinguished judge (Sir Stephen Sedley) who said:
“…by most accounts [Coke was], a thoroughly unpleasant man, a bully and a domestic tyrant; in Macaulay’s view, he was a ‘pedant bigot and brute’, but – and this is perhaps the reason he made history – ‘an exception to the maxim … that those who trample on the helpless are disposed to cringe to the powerful’. If Macaulay was thinking of the tenacity with which Coke repeatedly faced down royal claims to be the ultimate source of English law, the concession was unnecessarily generous: Coke could cringe lavishly when he had to. When his insistence that it was for the judges, not the king, to determine the content of the common law provoked James into raising his fist (at least according to a bystander), he fell on all fours to beg for mercy.”[x]
On the topic of his domestic tyranny, Coke’s conduct makes for uncomfortable reading. Eliza Hatton, his second wife, was a highly intelligent woman: rich and beautiful, she was a sought-after prize in the marriage market of the day. After the death of her first husband, she chose Coke as her second. The marriage was not a happy one. In an incident notorious even in its day, she and her 14-year-old daughter were forced to flee when Coke, for reasons to do with his own ambitions, offered the daughter’s hand to a thoroughly unsuitable man (the younger brother of the King’s favourite: an “ugly simpleton”). Coke tracked his wife and daughter down; broke down the doors of the house in which they were hiding, dragged his daughter off on horseback, confined her and forced her to submit to the marriage.[xi] We can well understand why, when Coke died, his wife (reportedly) said: “We shall not see his like again…Thanks be to God.”
Moving, however, from his private life back to the law: it cannot be doubted that, occasional cringe or not, Coke thought that judicial reason and judgment lay at the heart of the common law and its development; nor can it be doubted that this view was central to his opinion that legal disputes were for judges to resolve, rather than the King.
The Prohibitions del Roy, or the Case of Prohibitions,[xii] concerned a land dispute involving the Archbishop of Canterbury, which King James thought it was for him to resolve. Coke gives the following account of the exchanges which took place between himself and his King:
“the King said, that he thought the Law was founded upon reason, and that he and others had reason, as well as the Judges: To which it was answered by me, that true it was, that God had endowed his Majesty with excellent Science, and great endowments of nature; but his Majesty was not learned in the Lawes of his Realm of England, and causes which concern the life, or inheritance, or goods, or fortunes of his Subjects; they are not to be decided by naturall reason but by the artificiall reason and judgment of Law, which Law is an act which requires long study and experience, before that a man can attain to the cognizance of it; that the law was the golden met-wand and measure to try the causes of the subject; and which protected his Majesty in safety and peace: with which the King was greatly offended, and said, that then he should be under the law, which was treason to affirm, as he said, to which I said, that Bracton saith quod Rex non debit esse sub homine, sed sub Deo et lege [That the King ought not be under any man but under God and the law].”[xiii]
Twenty years later Coke returned to the issue of the law and reason:
“Reason [said Coke] is the life of the law, nay the common law itself is nothing else but reason; which is to be understood of an artificial perfection of reason, gotten by long study, observation and experience, and not of every man’s natural reason; for Nemo nascitur artifex. [no one is born an artist or an expert] This legal reason est summa ratio. [is the highest reason] And therefore if all the reason that is dispersed into so many several heads, were united into one, yet could he not make such a law as the law in England is; because by many successions of ages it hath been fined and refined by an infinite number of grave and learned men, and by long experience grown to such a perfection, for the government of this realm, as the old rule may be justly verified of it, Neminem oportet esse sapientiorem legibus: No man out of his own private reason ought to be wiser than the law, which is the perfection of reason.”[xiv]
In Coke’s view, legal reasoning was artificial because it was a refined collective process – an accretion of wisdom if you like – that no single individual’s natural reason could match. He and his contemporaries saw the law as a living tradition, requiring human insight to continuously “fine and refine” rules in light of fairness and societal values. The common law was therefore the product of reason and experience, distilled over generations, with an approach to precedent rooted in concepts of consistency and fairness.
Coke’s concept of “artificial reason” remains a touchstone for our understanding of legal reasoning. Charles Gray described an experienced common lawyer’s trained intuition as “a refined sense of ‘what fits’” that goes beyond a layperson’s reasoning. And this hard-won “sense of what fits,” essentially the art of judging, reflects the human capacity to adapt principles to novel circumstances.[xv]
Why, you might ask, should we consider Coke’s views today in a discussion of AI and judging?
Two reasons, I would say. First, because his views should command respect. He was and remains an enormously influential jurist whose various publications (including what amounted to the first proper series of law reports in English, and his commentaries on them) brought coherence to the principles of the common law.
This is not just a parochial view.
John Rutledge, for example, one of the Founding Fathers and an Associate Justice of the US Supreme Court, advised one of the signatories of the Declaration of Independence (his brother, Edward) in 1769:[xvi]
“[W]ith regard to particular law books Coke’s Institutes seem to be almost the foundation of our law. These you must read over and over, with the greatest attention, and not quit him till you understand him thoroughly, and have made your own everything in him, which is worth taking out.”
Returning to this jurisdiction, in 1824, Chief Justice William Best said of Coke:[xvii]
“The fact is, Lord Coke had [often] no authority for what he states, but I am afraid we should get rid of a great deal of what is considered law in Westminster Hall, if what Lord Coke says without authority is not law. He was one of the most eminent lawyers that ever presided as a judge in any court of justice.”
As Sir Stephen says, “Coke’s law reports, many of them his own cases, continue to be uniquely relevant to the modern law governing the use and extent of prerogative powers and much else besides. Whilst Coke’s name and achievements may not be familiar to many, we have him to thank for much of our innate sense of what our society considers just and fair in the judicial approach to the law even now.”
The second reason can be shortly put. Coke highlights what is inherently human about judicial reasoning, with qualities such as wisdom and experience being central to legal systems such as ours, that prize stability and incremental change through analogical reasoning.
The transformative benefits of AI and mitigating risk
Let me move on then from the distant past to the astonishing present. Looking at the matter broadly, AI has the potential to bring enormous benefits, indeed, to be transformative in the justice system.
Firstly, AI has the capacity to increase the efficiency and effectiveness of the operation of the court system, and of the administrative processes that underpin decision making in the law itself. This is both timely and relevant as we face increasing stringency and a record backlog of cases waiting to be heard in every jurisdiction. Secondly, as two eminent academics have recently commented, “Generative AI is already being used in courtrooms worldwide, with lawyers and self-represented litigants using chatbots and other AI tools for research and drafting”, and the technology can improve access to justice by helping ordinary individuals, without means, through the use of generative AI tools, to “understand, articulate and assert their legal rights”.[xviii] The ability of ordinary members of the public to “self-diagnose” and “self-help” has implications for the resourcing of our legal system and the design of its processes which it is important to consider and address.
At the same time: [xix]
“Policymakers across various sectors are grappling with the same question: how can we harness the transformative benefits of AI while mitigating its risks? The justice sector is no exception to the impact of AI. Regardless of whether one views AI as a transformative opportunity or an existential threat, it is essential to anticipate how new and emerging technologies are changing the justice system.”
When considering the potential role of AI in adjudication (or that of a human judge when using AI) it is necessary to consider and confront what is called the “Black Box” Problem.
The ‘Black Box’ Problem in AI Systems
Modern AI systems – particularly those based on machine learning and neural networks – often function as “black boxes.” This means their internal decision-making processes are not visible or capable of interpretation by humans, even by the engineers who design them. In a typical machine-learning model, especially deep neural networks, there may be millions of artificial “neurons” with weighted connections that are adjusted through largely automated training. By the time such a model (say, an LLM like ChatGPT) produces an output, the process that has produced that output is distributed across countless mathematical parameters in a nonlinear fashion. No clear chain of logical steps can be traced; instead, the output emerges from complex pattern recognition in the training data.
An AI system might be trained on thousands of cases and statutes and be asked to predict an outcome or suggest a legal argument. It will generate an answer based on patterns in the data, not by applying the law or deploying legal reasoning. If we ask the AI system how it arrived at that answer, it does not have internal reasons to show us; it will simply generate a plausible explanation (after the event). Asking AI to justify or explain its answer results in a new sequence of computations that may be entirely unrelated to the computations leading to the first result. AI systems can also amplify biases (including racial, gender, or socioeconomic biases) present in training data. It matters therefore that even if the answer given appears to be correct, we cannot see inside the “black box” to understand how or why a particular result was reached.
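For the technically curious, the point can be made concrete in a few lines. The sketch below is a toy, with invented, hand-fixed weights standing in for the millions of trained parameters in a real system: the output emerges from weighted sums alone, and inspecting the model yields only numbers, never reasons.

```python
# Toy illustration (not a real legal AI) of why a trained network is a
# "black box": its behaviour lives entirely in numeric weights.
import math

# Hypothetical, hand-fixed weights standing in for millions of trained parameters.
W1 = [[0.8, -0.5], [0.3, 0.9]]   # input layer -> hidden layer
W2 = [0.7, -1.2]                 # hidden layer -> output

def predict(features):
    """Forward pass: the output emerges from weighted sums and squashing
    functions; no step corresponds to a rule, a precedent or a reason."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features))) for row in W1]
    score = sum(w * h for w, h in zip(W2, hidden))
    return 1 / (1 + math.exp(-score))  # a probability-like score in (0, 1)

# "Explaining" the decision can only mean printing the numbers:
print(predict([1.0, 0.0]), W1, W2)
```

Nothing in the computation corresponds to legal reasoning; any explanation has to be reconstructed, after the event, from the outside.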
It goes without saying that it is easier to deal with the challenges created by new technologies that we can see with our own eyes, rather than those that are invisible.
One visible problem created by AI is now, I hope, widely known amongst legal professionals at least. Tempted by the truly phenomenal powers of generative AI to process data, some legal professionals, as well as individual litigants, now use ChatGPT (other platforms are also available) to produce legal documents and written submissions to the court. The availability of AI can, as I have already said, assist in improving access to justice, but its use in this way can present a significant trap for the unwary.
Generative AI, designed as it is to produce coherent text rather than the “right” answer, if asked to generate a legal argument, can produce “hallucinations.” These are fake cases, or fake citations from fake cases or even fake quotations from real cases. The plausibility of these hallucinations has to be seen to be believed. In one case which I dealt with, the problem surfaced because AI – fortunately – did not know that the actual case was to be dealt with by the very High Court judge to whom it attributed what turned out to be a fake decision.[xx] Needless to say, she was not fooled. The risk of injustice and misinformation filtering into the legal system is nonetheless real, but at least such problems can be spotted, and those responsible for misusing technology can be held to account by the courts and professional regulators.
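The practical defence against hallucinated authorities is mundane: every citation must be checked against a trusted source, never against the model itself. A minimal sketch (with hypothetical data, including one deliberately invented citation) of such a check:

```python
# A stand-in for an authoritative law report index; hypothetical contents.
TRUSTED_CITATIONS = {
    "[2019] UKSC 41",
    "(1610) 77 ER 638",
}

def verify_citations(cited):
    """Return the citations that cannot be found in the trusted index."""
    return [c for c in cited if c not in TRUSTED_CITATIONS]

# One real citation and one invented ("hallucinated") one:
draft = ["[2019] UKSC 41", "[2021] EWHC 9999 (QB)"]
print(verify_citations(draft))  # flags only the invented citation
```

The point of the sketch is that verification is an external lookup, not something the generating model can be asked to do for itself.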
In technical terms, however, the black box problem is concerned with things that we cannot see. And the opacity of AI decision-making raises issues of transparency, accountability, and legitimacy.
Transparency, accountability, and legitimacy
If we return to Coke for a moment, and his artificial reason, we might reasonably think that his expectation would be not merely that a judge must reach a just decision based on her wisdom and experience, but also that she should provide her actual reasons for reaching it.
This accords with the fundamental legal principle with which we are all familiar, that justice must not only be done, it must be seen to be done. In this respect, one might think, this chimes with the requirements of Article 6 of the European Convention on Human Rights (the right to a fair trial), that decisions must be reasoned and delivered by an “independent and impartial tribunal”.[xxi]
Whilst researchers in explainable AI (XAI) are working on methods to make the outputs of AI easier to interpret (by extracting, for example, the factors that “influence” the output, or by designing models that follow logical rules), for large neural networks the real options currently remain limited. Methods typically use post-hoc techniques, which do not reveal the actual internal process but rather create a separate, simpler, interpretable model for a specific decision.[xxii] Moreover, accountability and legitimacy are closely tied to transparency. If we can see who made the decision, what they decided, and why, judges, consistent with judicial independence, can be held accountable for their decisions through the appellate process. If, however, an AI system makes a decision or influences it, who is responsible for an AI-generated judgment or recommendation? The judge who adopted it? The programmers who developed the AI? The government or company that provided the tool? Or is it “no one’s” decision (what some have called the “accountability gap”)?
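The limits of these post-hoc techniques can themselves be illustrated in a few lines. The toy sketch below (invented weights, not any real system) “explains” a black-box score by perturbing each input and watching the output move: what results is a separate, after-the-event account of influence, not a view of the model's internal process.

```python
# Post-hoc "explanation" by local perturbation: a simplified sketch of the
# family of techniques that approximate influence around one decision.
import math

def black_box(features):
    # Stand-in for an opaque model, with hypothetical fixed weights.
    score = 0.9 * math.tanh(features[0]) - 0.4 * features[1]
    return 1 / (1 + math.exp(-score))

def post_hoc_attribution(features, eps=1e-4):
    """Nudge each feature slightly and record how the output moves.
    This builds a local, after-the-event surrogate for 'influence';
    it does not reveal the model's actual internal computation."""
    base = black_box(features)
    return [
        (black_box([x + (eps if i == j else 0.0)
                    for j, x in enumerate(features)]) - base) / eps
        for i in range(len(features))
    ]

# Positive values suggest a feature pushed the score up, negative down:
print(post_hoc_attribution([1.0, 0.5]))
```

The attribution is a second, simpler model fitted around one decision; two different explanation techniques applied to the same output can, and often do, disagree.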
Time does not permit a fuller consideration of the issue of legitimacy but let me touch on two additional points.
First, an algorithm cannot be impartial in the human sense, nor can it be held responsible in the way a person can. Even if AI could perfectly predict the outcomes a human judge would reach, we might still object that no one is actually judging – and for some, that might not accord with their concept of justice. Justice being seen to be done involves more than the giving of reasons. Sometimes (not always) but sometimes, it is important for a particular issue to be ventilated in a court room, so that the opposing arguments can be seen by the public as well as the judge, and so the parties can be seen as well as (feel) heard.
Secondly, judicial independence connotes amongst other things, that decisions are made free from inappropriate influence. If courts started relying on AI systems in decision making, we might want to ask who built those systems and on what data, who controls that data and whether the data embeds hidden biases or priorities.
Values
Turning next to ethical issues, law is not a purely rule-based construct. English law, for example, developed equitable principles to mitigate the rigour of the common law, in order to prevent outcomes that would be unconscionable and to enable justice to be done in the particular case. Similarly, in criminal law, judges may temper justice with mercy: at sentencing, they consider the offender’s personal circumstances, remorse, and capacity for rehabilitation. These are profound human determinations that engage empathy and discretion.
AI lacks practical wisdom, or what Aristotle called phronesis – the virtue that allows humans to navigate moral dilemmas and exercise good judgement. The fact that AI, currently at least, in contrast to a human judge, can only mimic the outcomes seen in the data has practical implications if a legal question hinges on values: what is “in the best interests of a child”, for example, in a family case; or whether, in a defamation case, the right to freedom of expression outweighs reputational harm.
We may want to see what is “under the hood” or inside the black box – the factors or variables that influence the algorithms behind particular inputs or outputs – but we are in the world of proprietary AI systems, commercial confidentiality and complex machine learning, and it is unrealistic to suppose, as things stand, that we will be able to do so any time soon – or indeed that we, as mere mortals, would understand the processes that lead to a particular result even if we did.
Legal development
There is one further issue to consider. By integrating AI into substantive legal reasoning there is the risk that we create a “closed loop.” The common law’s vitality stems from the way it functions in a changing society. Judges bring their understanding of social norms to their application of legal principles, allowing the law to adapt over time. If AI models begin to make judicial decisions, their outputs will inevitably become part of the body of case law. Future AI models will then be trained on this AI-generated data. This could risk a self-referential system – a “closed loop of recycled data” – in which the law is no longer tethered to the experience and values of the human community it is meant to serve.
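The dynamic can be illustrated with a deliberately simplified simulation. In the sketch below the “model” is reduced to a single number – the spread (standard deviation) of its outputs – and the stated assumption is that each generation is refitted only to a finite sample of the previous generation's outputs, which in expectation shrinks the fitted variance by a factor of (n - 1)/n per generation. This is a toy for intuition, not a claim about any particular system.

```python
# Toy "closed loop": each generation of a model is fitted only to n samples
# of its predecessor's outputs. Refitting a population variance to its own
# samples shrinks it, in expectation, by (n - 1) / n each generation, so the
# diversity of outputs (here, of "decisions") decays over time.
n = 50            # samples drawn from each generation's model
spread = 1.0      # generation 0: grounded in real, human-made data
history = [spread]
for generation in range(200):
    spread *= ((n - 1) / n) ** 0.5   # expected shrinkage from self-training
    history.append(spread)

print(history[0], history[-1])  # the spread decays steadily toward zero
```

After 200 generations most of the original diversity is gone; the only cure, in the sketch as in the argument, is to keep feeding in data from outside the loop.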
Where next?
In considering these issues I am not, I hope, setting up a straw man.
Returning to the question posed some moments ago (how can we harness the transformative benefits of AI while mitigating its risks), we can see an emerging consensus that some decisions involve profoundly human judgments and are never going to be for AI to resolve alone. In what is a dynamic debate, courts and policy makers across the world are generally keeping humans in the loop and proceeding carefully. Where judgment is required, AI is seen as a tool, not a decision-maker, providing valuable assistance with research, or summaries, or transcription, for example. In other words: thus far, but for now, no further.
The regulatory landscape is rapidly evolving, just as the technology itself is. Taking three jurisdictions – the U.S., the EU and this jurisdiction – there is a clear recognition that “AI in the courtroom” needs rules. The EU has gone the furthest in codifying those rules. England and Wales and the U.S. rely more on guidance and existing legal principles, but these jurisdictions converge on certain core ideas: human responsibility, transparency and fairness, with AI kept under human control by rules or guidance.
Looking abroad for a moment, the U.S. is broadly addressing the issue through a combination of internal judicial policies and broader AI governance principles. The U.S. federal courts formed an AI Advisory Task Force in 2023, which by mid-2025 issued interim guidance encouraging controlled experimentation with AI but stipulating that “the integrity and independence” of the courts be preserved. The American Bar Association has similarly drafted guidelines for judges, emphasising that AI tools can be used for administrative efficiencies, but stating that judges must not delegate judicial decisions and must ensure any AI use is consistent with ethical obligations.
The Council of Europe’s 2018 Ethical Charter on AI in Judicial Systems highlights the principles to uphold, including respect for fundamental rights, non-discrimination, quality and security of datasets, transparency, impartiality, fairness, and “user control”, which means human control of decisions. It cautions against letting AI undermine judicial independence or equality before the law.
The Council of Europe’s Treaty on AI and human rights, democracy and the rule of law [xxiii] was opened for signature on 5 September 2024 and was billed as the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is consistent with human rights, democracy and the rule of law. The Treaty has already been signed by the EU, the UK and the USA, amongst others.
The EU’s AI Act (2024) started coming into force on 1 August 2024. Article 6(2) and paragraph 8(a) of Annex III to the EU’s AI Act[xxiv] categorises AI systems intended to be used in the administration of justice and intended to assist in judicial decision-making into “High Risk AI systems” requiring human oversight. Specifically, Annex III of the Act lists AI systems “intended to be used by a judicial authority… to assist in researching and interpreting facts and the law and in applying the law to a concrete set of facts” as high-risk. For high-risk AI, the Act imposes further requirements (primarily on providers of AI). These include implementing a risk management system (Article 9) to identify and mitigate potential harms, ensuring high-quality training data to avoid bias, and human oversight (Article 14) to supervise the AI’s operation.
In this jurisdiction the judiciary has published guidance on the use of AI since 2023.[xxv] The guidance was updated in April 2025 and again on 31 October. It assists Judicial Office Holders in relation to the use of AI and reinforces personal responsibility for material produced in a judge’s name. Examples of appropriate uses of AI are set out. The guidance warns judges to be aware of possible bias in training data which may influence outputs.[xxvi] It reminds judges that they alone are responsible for their decisions. It advises caution in the use of AI in certain cases, including legal research and analysis, because of AI’s limitations in relation to verification and inferential thinking, noting that current systems “do not produce convincing analysis or reasoning.” It says that “the accuracy of any information […] provided by an AI tool must be checked before it is used or relied upon.”
It is necessary to consider too, the General Data Protection Regulation (the GDPR), and its potential to affect the adoption and development of AI processes in general and in automated decision making.[xxvii] For the law students amongst you, this raises what has been described as “The article 22 problem”. Article 22 of the GDPR (and of the UK GDPR) provides that: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. It is certainly open to argument, whether article 22 prohibits decision-making within its ambit or provides rights for the data subject to enforce in the event of its violation.[xxviii] As has been pointed out, this “obviously has repercussions if AI or automated decision-making were to be used in our judicial processes in the future. If AI were ever to be used in judicial decision-making, an automated decision could arguably not be effective.”[xxix]
Conclusion
When considering judicial reasoning on the one hand, and the AI “black box” on the other, Sir Edward Coke’s words from 400 years ago remind us that the common law is an “artificial perfection of reason” that is adaptive, nuanced, and deeply tied to principles of justice. In this context, it is worth reflecting on what Oliver Wendell Holmes said about the common law in 1881, well before computing was a twinkle in Alan Turing’s eye:
“The life of the law has not been logic; it has been experience… The law embodies the story of a nation’s development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics.”[xxx]
In the common law traditions of England, the legitimacy of courts flows from transparent reasoning, the accountability of its decision-makers and the infusion of equity and conscience into legal rules. These are qualities that a “black box” cannot perform, display, or reproduce. And we should avoid anthropomorphism when talking or thinking about AI. Using words that describe human attributes (wisdom, reasoning, intelligence etc.) to describe computational functions can significantly mislead. AI is not human. For all its computational prowess, it is neither wise nor humanly intelligent. It is a powerful mimic that lacks consciousness and hence understanding.
The conflict is not merely procedural – pitting transparency against opacity – but is more profoundly epistemological. It represents two disparate ways of knowing and validating a conclusion. Judicial reasoning is a method of justification, built upon cumulative logic, precedent, and deliberation. In contrast, AI is a method of calculation, deriving outputs from statistical correlations within vast datasets, through pathways that are inscrutable even to its creators.
That said, AI is here to stay in the area of justice, as everywhere else, and there are safe and beneficial ways to harness it in the judicial process under careful human oversight. We can see this even now (in its ability to digest vast volumes of information in case management and in e-discovery, for example). The current consensus, however, is that in matters where judgment is required, we can use AI as a tool, not a substitute for judges; and that in applying legal rules and in filtering the output of AI, human judgement matters, as do transparency, accountability, and fairness.
The goal therefore is not to resist technology but to channel it in service of justice. As one of my colleagues observed, AI can be a “jolly useful tool,” but true judgment – the weighing of arguments in the service of justice – remains entirely human, augmented by innovation but never supplanted by it. By this means we can ensure that reason and justice prevail, with AI as a “jolly useful” obedient servant, not an inscrutable and unaccountable master.
Thank you.
Dame Victoria Sharp
President of the King’s Bench Division
[i] Godfrey Hodgson, A Great and Godly Adventure: The Pilgrims and the Myth of the First Thanksgiving (PublicAffairs, 2007).
[ii] See Catherine Drinker Bowen, The Lion and the Throne (Little Brown and Co, 1957, reprinted 1990); Stephen D. White, Sir Edward Coke and “The Grievances of the Commonwealth” (Manchester University Press, 1979); and John Hostettler, Sir Edward Coke: A Force for Freedom (Barry Rose Law Publishers, 1997).
[iii] For a delightful account of this rivalry (mostly fact, and some fiction): see Jesse Norman, The Winding Stair (Biteback Publishing, 2023).
[iv] (1610) 77 ER 638.
[v] 5 U.S. (1 Cranch) 137 (1803); see the opinion of Chief Justice John Marshall.
[vi] (1611) 12 Co Rep 74, 76.
[vii] [2019] UKSC 41.
[viii] Some 30 years earlier in Rooke’s Case (1598) 5 Co Rep 99b, 100a, Coke had described [judicial discretion] as “a science or understanding to discern between falsity and truth, between right and wrong, and between shadows and substance, between equity and colourable glosses and pretences, and not to doe according to their wills and private affections.”
[ix] The long title of the Act is as follows: “The Petition Exhibited to His Majestie by the Lordes Spirituall and Temporall and Commons in this present Parliament assembled concerning divers Rightes and Liberties of the Subjectes: with the Kinges Majesties Royall Aunswere thereunto in full Parliament.” The Act concerned taxation without consent, in particular, the imprisonment of certain individuals who refused to lend money to the King, martial law and the billeting of troops in people’s homes.
[x] Sir Stephen Sedley, ‘Coke v. Bacon’ (2023) 45(15) London Review of Books.
[xi] Thomas Longueville, The Curious Case of Lady Purbeck (Longmans, Green and Co, 1909), 41.
[xii] Prohibitions Del Roy (1607) 12 Co Rep 63; 77 ER 1342, 1342: “The King in his own person cannot adjudge any case, either criminal or betwixt party and party; but it ought to be determined and adjudged in some Court of Justice, according to the law and custom of England. The King may sit in the King’s Bench, but the Court gives the judgment. No King after the conquest assumed to himself to give any judgment in any cause whatsoever which concerned the administration of justice, within the realm; but these causes were solely determined in the Courts of Justice.” See and note the introduction to Gibson’s Codex: E. Gibson, Codex Juris Ecclesiastici Anglicani (1713), vol 1, 20-21.
[xiii] Prohibitions Del Roy (1607) 12 Co Rep 63, 64.
[xiv] Sir Edward Coke, A Commentary upon Littleton (1628) 97b.
[xv] Charles Gray, ‘Reason, Authority, and Imagination: The Jurisprudence of Sir Edward Coke’, in Perez Zagorin (ed), Culture and Politics from Puritanism to the Enlightenment (University of California Press 1980) 61.
[xvi] John Belton O’Neall, Biographical Sketches of the Bench and Bar of South Carolina, vol 2 (Charleston, S G Courtenay & Co 1859) 124.
[xvii] Garland v Jekyll (1824) 2 Bing 273, 296-297; 130 ER 311, 320.
[xviii] Mimi Zou and Ellen Lefley, ‘Generative AI and Article 6 of the European Convention on Human Rights: The Right to a Human Judge?’ in Mimi Zou, Cristina Poncibò, Martin Ebers, Ryan Calo (eds), The Cambridge Handbook of Generative AI and the Law (Cambridge University Press 2025) ch 25.
[xix] Ibid, 451.
[xx] R. (on the application of Ayinde) v Haringey LBC [2025] EWHC 1383 (Admin).
[xxi] European Convention on Human Rights, art 6(1): “In the determination of his civil rights and obligations or of any criminal charge against him, everyone is entitled to a fair and public hearing within a reasonable time by an independent and impartial tribunal established by law. Judgment shall be pronounced publicly but the press and public may be excluded from all or part of the trial in the interests of morals, public order or national security in a democratic society, where the interests of juveniles or the protection of the private life of the parties so require, or to the extent strictly necessary in the opinion of the court in special circumstances where publicity would prejudice the interests of justice.”
[xxii] Such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), which can produce explanations that are unstable and computationally expensive, and which may fail to capture the complex, non-linear interactions within the original model.
[xxiii] Council of Europe, Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (opened for signature 5 September 2024).
[xxiv] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) [2024] OJ L1689/1, Annex III, para 8(a): “AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution.”
[xxv] Artificial Intelligence (AI) Guidance for Judicial Office Holders (31 October 2025).
[xxvi] Ibid, 5; the Guidance provides: “AI tools based on LLMs generate responses based on the dataset they are trained upon. Information generated by AI will inevitably reflect errors and biases in its training data, perhaps mitigated by any alignment strategies that may operate. You should always have regard to this possibility and the need to correct this. You may be particularly assisted by reference to the Equal Treatment Bench Book.”
[xxvii] Data Protection Act 2018, sch 1 (UK GDPR).
[xxviii] See: Case C-634/21 SCHUFA Holding (Scoring) [2023] ECLI:EU:C:2023:957.
[xxix] Sir Geoffrey Vos, ‘AI and the GDPR’ (Speech, Judiciary of England and Wales, 9 October 2024).
[xxx] Oliver Wendell Holmes, The Common Law (Little, Brown and Co 1881) 1.