The aspiring litigation lawyer spends years in the library poring over often dry legal texts and cases, slogging through attritional exams before their reward: cutting their teeth before a judge in court. The law comes to life; the courtroom is their stage, and they and the judges its lead actors in the honourable pursuit of justice. Often enough, however, they fall back down to earth with a thud. Watertight legal arguments painstakingly crafted over many months disintegrate in a matter of minutes before a judge who simply does not see things the same way. Legal rules appear to be cast aside and carefully prepared submissions are largely ignored (worse, perhaps, than being acknowledged and torn to shreds). As a trainee solicitor, I vividly remember working on an employment law case that our client, it seemed, simply could not lose. We did. The experience was chastening, even demoralising.

No doubt this is a relatively common experience for the early-career litigation lawyer. Over time, of course, they grow battle-hardened, quickly realising that predicting the likelihood of their client’s case succeeding is not just about the strength of the evidence or the subtleties of language in legal rules and principles. It can be as much about knowing the judge. With experience, the litigation lawyer learns the personalities of the judges they appear before most often, developing a sense of their ‘form’ in particular areas of law. Often, however, this exercise of getting to know the judge is ad hoc, intuitive and unscientific. Might approaching it through a more empirically rigorous, data-driven lens improve their chances?

Knowing the judge

Rather than doggedly work on my own litigation battle-readiness, I defected to the role of observer as a law academic. I immersed myself in a remarkable body of research from disparate academic disciplines that investigates the role of the judge, asking what beyond the law can make the difference between winning and losing. The product is How Judges Judge: Empirical Insights into Judicial Decision-Making, a book that compiles and analyses this research.

Over the years, political scientists, psychologists, economists, law academics, and even professionals seemingly removed from the world of law altogether, such as neurologists and computer scientists, have applied their knowledge and understanding of their respective fields to better understand judicial decision-making. These researchers have investigated how, among other things, psychological effects, numerical reasoning, implicit biases, court rules and processes, influences from political and other institutions, and new technologies can all affect judicial decision-making in different contexts.

This research is increasingly broad in scope, ever more sophisticated and sometimes revelatory. It is undoubtedly valuable material for the lawyer, the law student and, of course, the judge. However, these researchers often operate in disciplinary silos and their findings sometimes pass each other like ships in the night. As a consequence, judges and lawyers may not be as aware of this research, and of its value to them, as they could be.

How Judges Judge ties these strands of research together and presents them to those who stand to benefit from them the most. Understanding this work should help litigation lawyers to be more categorical and confident in their advice, better able to answer their client’s all-important question: ‘what are my chances?’ Judges will be better able to understand their own role and to consider factors beyond their primary discipline of law in their everyday work. Judicial training experts, policy experts and professionals working in court systems who are looking for ways to understand and improve judicial performance will also benefit.

Judging factors

To briefly peer through the window of this diverse body of research, many studies correlate external factors in judges’ lives with trends in their decision-making, often with memorable findings. Judges may sometimes let rather trivial emotions infiltrate their judging: sentence lengths meted out to young black men in the US state of Louisiana increased immediately after a hugely popular local college football team (made up mostly of young black men) lost unexpectedly. Some researchers suggest that a judge’s mood can make a difference: a well-known study suggested that whether or not Israeli judges had had their lunch affected the likelihood that they would grant parole (although this correlation has been challenged). Relationships in a judge’s life may matter too: US judges who are parents to daughters tend to be more sympathetic to plaintiffs in gender discrimination claims.

Other studies investigate correlations between demographic and political characteristics and judicial outcomes. Women barristers appearing before the Australian High Court fare worse than male barristers do, with the effect considerably curtailed by the presence of a woman judge on the High Court panel. Black US appellate judges are more likely to be overturned by their white colleagues on appeal than white judges are.

The influence of a judge’s politics is a heavily researched area. A now-global body of research seeks to demonstrate that judges’ political affiliations and loyalties correlate with trends in their judicial decision-making. Case outcomes often align with a judge’s preferred political party’s policies, particularly (and unsurprisingly) in highly polarised political systems such as the US. That said, the effect appears far less pronounced, or even non-existent, in the apex courts of the UK and Ireland.

Studies demonstrating correlations between judges’ and litigants’ characteristics and case outcomes, while intriguing, can only go so far. They are, of course, confined to specific courts over particular time spans, and many more variables may be at play in the real world. More probative still, perhaps, is a rich vein of experimental research that has developed over the last 20 years or so, in which practising judges serve as judicial guinea pigs in hypothetical mock trials. Here, researchers isolate and control the specific factors that they think will affect participating judges’ decision-making, testing for cognitive errors, emotion, motivated reasoning and implicit biases. With their work under the forensic microscope, often as part of judicial training days, there is no hiding place for the judge who bravely puts their skills to the test.

Judicial guinea pigs

One strand of this experimental work looks at judges’ numerical reasoning. Judges make numerical judgements all the time, often converting something qualitative (the culpability of a criminal’s conduct, or the extent of liability in negligence) into something quantitative (the length of a sentence, or the amount of an award of damages). A handful of studies demonstrate the profound consequences of the anchoring effect – the tendency to be drawn towards an initial value when making a numerical judgement, even where that value is irrelevant or unrealistic. In one experiment, German judges were asked to hand down a sentence in a hypothetical case. They were told to roll dice and to take whatever value emerged as the prosecutor’s recommended sentence length. Even though they knew it to be a game of chance, and even though all judges heard the same case facts, the number on the dice affected sentencing outcomes considerably. In another experiment, US administrative judges were asked to decide an employment discrimination claim. Where the claimant referred to an outlandish amount of compensation plucked from a similar case she had seen on a court reality TV show, this irrelevant anchor had a sizeable effect, raising the amount of compensation the judges were prepared to award. ‘Ask and ye shall receive’ seems to be the gist of these findings.
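For the data-minded reader, anchoring is often modelled as a weighted blend of a judge’s own unanchored estimate and the anchor. The minimal sketch below simulates that idea; the weight, anchor values and distribution are my own illustrative assumptions, not the design of the studies described above:

```python
import random

random.seed(42)

def mean_sentence(anchor_months, anchor_weight=0.3, n_judges=1000):
    """Toy anchoring model: each simulated judge blends their own
    case-based estimate (centred on 12 months) with the anchor."""
    total = 0.0
    for _ in range(n_judges):
        own_estimate = random.gauss(12, 2)  # judge's unanchored view
        blended = anchor_weight * anchor_months + (1 - anchor_weight) * own_estimate
        total += max(0.0, blended)
    return total / n_judges

# Same case facts; only the irrelevant dice-roll anchor differs
print(f"low anchor (3 months):  mean sentence {mean_sentence(3):.1f} months")
print(f"high anchor (9 months): mean sentence {mean_sentence(9):.1f} months")
```

Even a modest weight on the anchor visibly shifts the average sentence, which is the gist of the dice-roll finding.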

More unsettling, perhaps, was an experimental mock trial which found that US judges were unable to ignore inadmissible evidence about an alleged rape victim’s prior sexual history. Judges exposed to this irrelevant, inadmissible evidence, which ought to have been disregarded, were much less likely to convict the accused.

There is at least some comfort in knowing that real-life litigants do not suffer in such experimental studies, and that judges may learn from the experience of participating in them. It is also worth acknowledging that researchers present null findings on plenty of occasions – judges’ impartiality and objectivity often win the day.

The potential for change

Some of the findings from this research are undoubtedly unnerving. It seems logical that judges (both practising and in training) ought at least to be exposed to this research in a systematic way, to know about it, and perhaps even to participate in similar exercises during judicial training programmes. This may be particularly important in common law jurisdictions where, in the main, successful litigation lawyers are appointed to the bench. After forging a career out of ferociously fighting on one side of every case they have ever taken, overnight they must morph into the blindfolded, scales-wielding judge envisaged by Lady Justice. The skills required to be a good lawyer and those required to be a good judge overlap to some extent, but by no means entirely. This line of research could inform appointments processes and could be integrated into initial judicial training. If judging is about making decisions, newly appointed judges ought to know what factors affect decision-making beyond the law.

The overarching conclusion from this body of work is hardly surprising, yet easily overlooked: judges, of course, are human. They are social actors and, to varying extents, political actors. Judges are overwhelmingly motivated to get it right and very often do, but sometimes they make mistakes, in ways they may not even be aware of.

Some argue that the inexorable march of technology will come to the rescue: artificial intelligence systems can help to refine and improve judges’ decision-making, or even supplant human judges altogether. That era is already upon us in several legal contexts in a handful of jurisdictions. To give two recent examples, AI systems determine sentence lengths in drug-crime cases in Malaysia, while in Chinese courts they issue “abnormal judgment warnings” to judges whose decisions go against the grain of similar decisions in the case database.
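How those warnings are actually generated is not publicly documented, but the general shape of such a check is easy to imagine: compare a proposed sentence against sentences in comparable past cases and flag outliers. A minimal sketch, with the statistic and threshold entirely my own assumptions:

```python
from statistics import median

def abnormal_judgment_warning(proposed_sentence, similar_case_sentences,
                              threshold=3.0):
    """Flag a proposed sentence (in months) that deviates strongly from
    comparable past cases, using the median absolute deviation (MAD)."""
    med = median(similar_case_sentences)
    mad = median(abs(s - med) for s in similar_case_sentences) or 1.0
    deviation = abs(proposed_sentence - med) / mad
    if deviation > threshold:
        return (f"WARNING: proposed sentence of {proposed_sentence} months "
                f"deviates {deviation:.1f} MADs from the median "
                f"({med} months) of {len(similar_case_sentences)} similar cases")
    return None

# Hypothetical database of sentences (months) for comparable offences
past = [10, 12, 11, 13, 12, 14, 11, 12]
print(abnormal_judgment_warning(36, past))  # flagged as abnormal
print(abnormal_judgment_warning(13, past))  # None: within the usual range
```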

Robo-judges, proponents argue, will take subjectivity and human error out of the equation, clearing clogged-up case lists for good measure by spitting out judgments almost instantaneously. However, an AI judging platform is only as good as the programming and data that go into it. AI judges’ decisions may recreate, or even exacerbate, pre-existing biases embedded in that data. Transparency is also a significant issue. The ‘sweet mystery’ of machine learning, as some have put it, the potential for governments to abuse their control over the algorithms, and the public acceptability of such systems all present thorny questions that are tricky to answer. That said, the benefits of AI judging should not be dismissed. Big data and predictive analytics are already transforming how litigation is done, and it seems both inevitable and sensible that judicial systems will cautiously harness the technology.

A warning

Some jurisdictions appear to fear the consequences that new technologies can bring. In 2016, French tax lawyer and machine-learning expert Michaël Benesty published data on his website about French asylum judges’ decision-making trends. The data highlighted wide discrepancies in how individual judges decided whether to grant asylum: some rejected asylum seekers’ claims nearly 100% of the time while others, even colleagues on the same courts, had very low rejection rates. It was a probative piece of research, and one that did not portray some asylum judges in a good light. The French parliament’s response was sweeping. Article 33 of the Justice Reform Act made it a criminal offence to evaluate, analyse, compare or predict the behaviour of individual judges – the first such ban in the world. The maximum sentence for the offence is an extraordinary five years in prison. The law effectively prohibits scholarship on the French judiciary altogether, insofar as individual judges and their decisions must not be identified or analysed. It is a regressive, overbroad and wholly disproportionate step, effectively casting judicial scholars as potential criminals. Closing off academic scrutiny of judiciaries in this way is unwarranted and a disservice to the ideals of transparent, better-quality justice.
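To give a flavour of the kind of analysis France has now criminalised, the sketch below computes per-judge rejection rates from case records. The records, names and format here are invented for illustration; Benesty’s actual dataset and methods were far more extensive:

```python
from collections import defaultdict

def rejection_rates(decisions):
    """Per-judge rejection rates from (judge, outcome) records,
    where outcome is 'granted' or 'rejected'."""
    counts = defaultdict(lambda: {"rejected": 0, "total": 0})
    for judge, outcome in decisions:
        counts[judge]["total"] += 1
        if outcome == "rejected":
            counts[judge]["rejected"] += 1
    return {j: c["rejected"] / c["total"] for j, c in counts.items()}

# Invented records: two judges on the same court, similar case mix
decisions = [
    ("Judge A", "rejected"), ("Judge A", "rejected"),
    ("Judge A", "rejected"), ("Judge A", "granted"),
    ("Judge B", "granted"), ("Judge B", "granted"),
    ("Judge B", "rejected"), ("Judge B", "granted"),
]
for judge, rate in sorted(rejection_rates(decisions).items()):
    print(f"{judge}: {rate:.0%} of claims rejected")
```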

Empirically-driven research on judges is useful and important. However, it is striking how often researchers identify problems, inconsistencies or irrationalities in judicial decision-making, yet how infrequently they propose and test solutions to make judging less error-prone, more consistent and more rational. Researchers should now change tack, emphasising solutions and interventions to improve how judges perform their function as much as they continue to identify flaws and errors in judicial decision-making. This will lead to better, fairer justice.

How Judges Judge: Empirical Insights into Judicial Decision-Making (hardback £150; ebook £135) is available now, published by Informa Law from Routledge.

By Dr Brian Barry, Lecturer in Law, Technological University Dublin

Twitter: @brianbarryirl


Featured image: The Judgment of Solomon, painting on ceramic, Castelli, 18th century, Lille Museum of Fine Arts (via Wikimedia Commons)