Understanding Litigation Risk: Methods of Measurement

Two methods are currently employed, often in combination, to evaluate litigation risk: (i) consultation of subject matter experts, generally lawyers with experience in the relevant area of the law, and (ii) decision tree analysis, which forces lawyers to be more precise about particular risk evaluations and therefore brings additional rigor to the process. A third, promising avenue for continuing research is the use of quantitative methods to evaluate litigation risk. Each of these is discussed in turn.


Subject Matter Experts: Qualitative Methods

Lawyers have always helped their clients by attempting to predict the outcomes of lawsuits. Indeed, as early as 1897, Oliver Wendell Holmes wrote: “The object of our study, then, is prediction, the prediction of the incidence of the public force through the instrumentality of the courts.” Lawyers often predict outcomes based on their own or their colleagues’ past experience with similar cases. These predictions frequently rest on a fine-grained understanding of the interplay between facts and law, and of local conditions. How do judges in the particular court where the suit was filed treat such cases and the various types of motions that are brought? Is the case likely to be consolidated with others like it? What are the jurors like in that geographic location?

Depending on the lawyer’s comfort level, these evaluations may not be precise, but they can give a client a general sense of the prognosis. For example, a lawyer might describe the chances of prevailing on a motion to dismiss as “low” or “high.” It is not clear, as we shall see below, whether the appearance of precision that comes with using percentages in fact increases the reliability or usefulness of litigation risk evaluations in concrete cases. Clients are sometimes frustrated when lawyers are vague about the likelihood of success along various pathways in litigation. On the other hand, more concrete predictions have limits to their accuracy: they may create an illusion of reliability while in fact masking uncertainty.

Reliance on subject matter expertise raises three problems, all of which come down to either bias or simple inaccuracy.

First, lawyers basing their evaluation of a case on their own experience may be relying on an unrepresentative sample. In the full run of cases, liability may be more or less likely than that experience suggests, but the limits of the lawyer’s own knowledge base prevent an accurate assessment.

Second, in the firm or in-house context, groupthink and path dependency may also hinder an accurate assessment of risk. For example, a particular case may have a higher chance of success if a particular procedural path is chosen, but a past practice of not choosing that path, or simple unfamiliarity with the options, may produce an inaccurate assessment of risk.

Finally, there is evidence from other fields that people are inconsistent in their evaluations and tend to be error-prone even when their inaccuracies are not biased in any particular direction.

Decision Tree Analysis

One way to harness subject matter expertise with a greater degree of precision is to use decision tree analysis.  This method was pioneered by Marc Victor, who remains the most prominent consultant in the area of litigation risk analysis.  It has been used by insurers and lawyers for over thirty years.

Decision tree analysis for litigation operates as follows. Lawyers map out the various pathways a litigation may take and predict the likelihood of each outcome at each stage. Those probabilities are then combined to yield a range of possible outcomes and an expected value (or range of values) for the suit. Decision tree analysis encourages lawyers to be more precise and rigorous in their analysis of case outcomes and pushes them to articulate specific numerical probabilities. By laying out alternative outcomes explicitly in the decision tree format, subject matter experts are encouraged to consider all possibilities, and biases are (potentially) counteracted. Using arithmetic to calculate the ultimate expected value can guard against predictions driven by fear, over-optimism, or a mistaken focus on particular elements of the case.
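To make the arithmetic concrete, the following minimal sketch rolls back a small hypothetical tree. The stages, probabilities, and damages figure are invented for illustration; the rollback logic (multiplying each outcome by the probability of reaching it and summing) is the standard expected value calculation.

```python
# A minimal sketch of decision tree rollback for a lawsuit. All numbers
# here are hypothetical, chosen only for illustration. A node is either
# a terminal payoff (a number) or a list of (probability, subtree)
# branches whose probabilities sum to 1.

def expected_value(node):
    """Recursively roll back a tree to its expected value."""
    if isinstance(node, (int, float)):
        return node  # terminal outcome: a recovery (or cost)
    return sum(p * expected_value(sub) for p, sub in node)

# Hypothetical plaintiff's-side tree: a 30% chance the complaint is
# dismissed; surviving cases face a 40% chance of losing at summary
# judgment; cases that reach trial win $1,000,000 half the time.
suit = [
    (0.30, 0),                      # motion to dismiss granted
    (0.70, [                        # case survives dismissal
        (0.40, 0),                  # summary judgment for defendant
        (0.60, [                    # case reaches trial
            (0.50, 1_000_000),      # plaintiff verdict
            (0.50, 0),              # defense verdict
        ]),
    ]),
]

print(expected_value(suit))  # 0.70 * 0.60 * 0.50 * 1,000,000 = 210,000.0
```

In practice a tree would also net litigation costs out of each terminal node and might carry ranges rather than point estimates, but the rollback mechanics are the same.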

Although decision tree analysis introduces more rigor and consistency into the analysis of subject matter experts, it is still dependent on the knowledge base and perception of experts.  That is, if the expert believes that there is an x% chance of prevailing on a motion to dismiss, but has missed some key factor that changes that probability to 2x%, the decision tree itself will not help to correct that error.  A decision tree does not answer the question of what the probabilities are that go into the calculation. It is therefore subject to some of the same concerns raised with subject matter experts in the previous section.
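Continuing the hypothetical sketch above, the point is easy to demonstrate: feed the rollback a misestimated dismissal probability and it returns a confidently wrong expected value. Nothing in the arithmetic flags the error.

```python
# Continuing the hypothetical tree from the sketch above: suppose the
# expert's 30% dismissal estimate should really have been 60%. The
# rollback propagates whatever inputs it is given; it cannot detect
# which estimate is right.

def expected_value(node):  # as defined in the earlier sketch
    if isinstance(node, (int, float)):
        return node
    return sum(p * expected_value(sub) for p, sub in node)

downstream = [(0.40, 0), (0.60, [(0.50, 1_000_000), (0.50, 0)])]

as_estimated = [(0.30, 0), (0.70, downstream)]  # expert's assumed tree
as_corrected = [(0.60, 0), (0.40, downstream)]  # same tree, corrected input

print(expected_value(as_estimated))  # 210,000.0
print(expected_value(as_corrected))  # 120,000.0
```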


Quantitative Methods

The rise of big data may ultimately lead to significant progress in evaluating litigation risk, particularly in providing more accurate assessments of the probability of litigation events occurring, whether independently of or in conjunction with subject matter experts. At the moment, however, access to data is too limited for such methods to be particularly useful. This section considers the reasons for the limits of current quantitative analysis.

In order to predict case outcomes based on outcomes in past cases, a great deal must be known about each case, and most of this information is not available in easily searchable form. For example, it is not enough to know what claims were made. It also matters whether the case turns on questions of fact or questions of law, how complex the underlying facts are, the extent to which dueling expert evidence will play a role and which experts were retained, the skill of the lawyers, and the extent to which the parties have asymmetric information, asymmetric stakes, and asymmetric funding to pursue the litigation. If the dataset being analyzed does not contain this information, then any prediction will be flawed, potentially too flawed to be useful.
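To see what such a dataset would have to contain, the sketch below lays out a hypothetical feature record for a single case. Every field name here is invented for illustration; the point is that almost none of these fields can be populated from any searchable docket source today.

```python
# A hypothetical schema for the case-level features a predictive model
# would need. None of these fields is available in machine-readable
# form from standard docket databases; most would require subjective
# coding by a knowledgeable reader of the case file.
from dataclasses import dataclass

@dataclass
class CaseFeatures:
    claims: list[str]             # causes of action pleaded
    turns_on_law: bool            # legal versus factual dispute
    factual_complexity: int       # e.g., an analyst rating from 1 to 5
    expert_evidence_weight: int   # role of dueling experts, 1 to 5
    experts_retained: list[str]   # identities of retained experts
    lawyer_skill_plaintiff: int   # subjective skill rating, 1 to 5
    lawyer_skill_defendant: int
    information_asymmetry: float  # degree of asymmetric information
    stakes_asymmetry: float       # degree of asymmetric stakes
    funding_asymmetry: float      # asymmetric funding to litigate
```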

The primary problem with the data currently being analyzed by large services such as Bloomberg Law, Westlaw, and smaller start-up companies is that it is selective. For example, some services provide information about judicial decision patterns: they analyze a judge’s published or unpublished opinions to determine the chance of a party prevailing before that judge in a particular category of case. But the data being analyzed is just the tip of the iceberg: many cases generate no written opinion at all. Cases resolved by summary order or by settlement, for example, are not included in the analysis of the judge’s decision patterns, yet their outcomes are also influenced by the judge. Nor are these services able to distinguish between cases that turn on legal as opposed to factual questions, or between levels of factual complexity. So an analysis might reveal that a judge grants summary judgment in 60% of the published opinions in employment discrimination cases, while ignoring the fact that those opinions constitute only 10% of all the employment discrimination cases before that judge. If the remaining 90% of the employment cases are invisible to the observer, the data on published decisions is not very useful for predicting the outcome of a newly filed case or one yet to be filed. Moreover, there is no reason to think that the observable 10% of cases are representative of the universe; indeed, there is reason to think there is something special about a case that is selected for a published opinion. The same is true of many services that purport to measure lawyer success.
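The distortion this creates is easy to quantify with a toy simulation. The rates below are invented: assume a true summary judgment grant rate of 25% across the full docket, and assume grants are ten times more likely than denials to generate a published opinion. The published slice then makes up roughly 10% of the docket, as in the example above, yet shows a grant rate near 77%.

```python
# A toy simulation of publication selection bias. All rates are
# hypothetical, chosen so that published opinions make up about 10%
# of the docket, as in the example in the text.
import random

random.seed(0)
TRUE_GRANT_RATE = 0.25   # assumed rate across the judge's full docket
N_CASES = 10_000

# True outcomes for every case (True = summary judgment granted).
docket = [random.random() < TRUE_GRANT_RATE for _ in range(N_CASES)]

# Hypothetical selection rule: grants yield a published opinion 30%
# of the time, denials only 3% of the time.
published = [g for g in docket if random.random() < (0.30 if g else 0.03)]

print(f"true grant rate:           {TRUE_GRANT_RATE:.0%}")
print(f"published share of docket: {len(published) / N_CASES:.0%}")
print(f"observed grant rate:       {sum(published) / len(published):.0%}")
```

An analyst who saw only the published opinions would conclude that the judge grants summary judgment about three-quarters of the time, roughly three times the true rate.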

There is no publicly available dataset of all court filings in any U.S. jurisdiction. The largest database of lawsuit information is the federal PACER database. Access to this database is costly, although a lawsuit is currently in progress to make the data freely accessible and there has been some suggestion of Congressional action. Some entities have been able to collect partial PACER data. Yet even if a dataset of all court filings became available, it would still not provide information about the parties, such as their level of risk aversion, information asymmetries, and funding, although some of this information could presumably be deduced from other sources and combined with the court dataset. Nor is it clear that technology or artificial intelligence can, at this juncture, sift through the data and determine, for example, which cases within a given subject matter category turn on questions of law as opposed to complex factual scenarios.

Importantly, even with full access, there is no way to account for the fact that the system is not stable. Even if one could reliably correlate outcomes with certain factors for past cases using the entire universe of federal cases, that knowledge would itself change the conduct of participants in the system, rendering predictions based on past behavior useless for predicting future behavior. An analysis that could identify the mechanisms producing past outcomes would be more useful for predicting future ones, but that is a question not of access alone but of models for analyzing case progress and outcomes. So far, no satisfactory model exists, and this may be the most important area for future research.

In sum, the available data is too partial to produce very good results, and even if unlimited data were available, a better theory of what actually causes case outcomes would be needed to create predictive value.