As a former trial lawyer and lifelong baseball fan, I was interested to learn about an upstart legal technology company that claims to have developed an Artificial Intelligence system it has applied to litigation. I had a lengthy conversation with its CIO/inventor, who explained that the system creates complex tables providing statistical data on trial results culled from court records. This includes “won-loss” records of law firms as well as success rates for individual lawyers. He noted that the data is broken down in many other ways, including statistics on “win rates” for partners versus associates, attorney/firm performance before a particular judge, the average length of a lawyer’s cases (his metric for efficiency), win rates compared to billing rates, etc. He characterized all this as an invaluable predictive tool enabling clients to make the “right” choice in selecting counsel as well as to predict outcomes. His thesis is that counsel selection and professional reputation have historically been guided by subjective factors often at odds with empirical data. Of course, I liked his analogy to baseball but noted that in baseball–as in trial practice–some things “don’t show up in the box score.”
Baseball and the Evolution of Metrics
Baseball has been a game of statistics since its inception. Over time, especially during the past few decades, a sophisticated overhaul of baseball metrics (statistics) has occurred, resulting in new metrics and a reordering of established ones. “Sabermetrics” has given rise to “Moneyball.” Translation: astute students of the game have determined–and demonstrated empirically–that certain traditional metrics are not as important as newly developed ones that more accurately measure player contributions and, hence, value. What is significant to this discussion is not so much what those changes were but rather how and why established metrics were reevaluated and, ultimately, accorded less weight. The answer is–and this is apposite to BigLaw–that teams with the biggest payrolls and star-stocked rosters were not necessarily the ones that won championships. Why? That’s what the baseball statisticians sought to find out. What they came up with were new metrics that more reliably correlate performance with outcome. If this sounds like law’s “cost: value” gap, it should.
So let’s get back to the inventor and his claim that his distillation of Big Data has yielded metrics that are “game changers” (his words) for clients evaluating attorneys. Put another way, has he asked the right questions of the data and provided metrics that are relevant–if not crucial–to attorney performance and value? Metrics are valuable but only when they measure something meaningful.
Some General Comments on Litigation
Can we stipulate that litigation is generally a lousy way to resolve disputes? It is expensive, protracted, uncertain, lacking finality (appeals, new trials, etc.), as well as high in lost opportunity and human costs.
Add to that list that 98% of all cases settle prior to trial. That statistic is itself misleading because it fails to reveal how much expense, time, and other collateral costs were expended from filing to (pre-trial) resolution. It raises the question: “Why are more cases not settled pre-suit?” A lawyer’s ability to avoid litigation altogether is a big “win.” I would argue that litigation avoidance is one of the most important things a good litigator (or corporate lawyer) can do. Compared to providing metrics for these more meaningful measures of performance, our inventor’s trial metrics pale. And while we’re at it, even the won-lost records are of questionable relevance, because appeals and post-trial settlements are not captured, meaning that the ultimate disposition of the case is not factored into the “won-lost” data.
Are trial statistics reliable tools on which to base the selection of counsel or, more broadly, to gauge how effective a lawyer or firm is? The short answer is “no”; such data, at best, provides one piece of the puzzle. Baseball again provides guidance for the development of more meaningful metrics. For example, won-lost percentage was once deemed the most significant measure of a pitcher’s ability (as the CIO/inventor contends it is in evaluating a trial lawyer). But earned run average–the number of earned runs a pitcher allows on average over nine innings–has emerged as a far more significant measure of performance than winning percentage. The pitcher who lost a 1-0 “mound duel” performed better than the winning pitcher in an 11-9 slugfest. Same with trial work. “Winning” a case is, of course, meaningful, but it does not tell the whole story. David Boies “lost” Bush v. Gore, but does that mean he did not perform at the highest level and is not a superior lawyer even though tagged with a loss? The old maxim “tough cases make great lawyers” speaks to the “degree of difficulty” that is an integral component of trial work. There are so many factors that go into the evaluation of trial performance that “won-lost” percentages are virtually meaningless. One caveat: it is important to know how many cases a lawyer has tried, because it is a good indicator of experience and client confidence. Unfortunately, the degree of sophistication achieved by baseball statisticians is not presently matched by most litigation metrics. Put another way, there are many important case and trial variables that “don’t show up in the box score” and for which no metric presently exists.
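The ERA comparison above can be made concrete. Here is a minimal sketch using the standard ERA formula (nine times earned runs divided by innings pitched); the two pitchers and their stat lines are hypothetical, chosen to mirror the 1-0 loser and the 11-9 winner:

```python
def era(earned_runs, innings_pitched):
    """Earned run average: earned runs allowed per nine innings pitched."""
    return 9 * earned_runs / innings_pitched

# Hypothetical complete games: the losing pitcher in the 1-0 "mound duel"
# versus the winning pitcher in the 11-9 slugfest.
loser_era = era(earned_runs=1, innings_pitched=9)    # 1.00
winner_era = era(earned_runs=9, innings_pitched=9)   # 9.00

# Won-lost record says the second pitcher "won"; ERA says the first
# pitcher performed far better.
```

The point of the sketch is simply that a better-designed metric can rank the “loser” above the “winner” once the noise of outcome is separated from the signal of performance.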
Metrics and Litigation
Metrics came late to the legal vertical, although businesses and other professional service providers have been managed by them for years. One obvious place for their application to legal services is addressing the “cost: value divide” and answering the question: “Which providers–internal and/or external–are working efficiently and cost-effectively relative to the significance attached to the matter(s) being handled?” And though the potential utility of metrics extends to all segments of the legal value chain and across practice areas and geographies, an obvious place to focus is on litigation. After all, it has long been the largest element of legal spend, especially in the highly litigious U.S. market.
There is certainly a need for metrics in litigation, on the performance, cost, and value axes alike (the cost: value divide). Recent years have seen the emergence of metrics for document review, data management, legal research, and a host of other “high-volume/low-value” tasks once performed by law firms and now routinely handled by service providers. Likewise, data analytics companies focused on the legal vertical now offer detailed information related to fees broken down by size of firm, practice area, geography, level of attorney experience, etc. So too has the cost of particular tasks–such as motions, depositions, or other aspects of litigation–been scrutinized by Big Data. Another recent manifestation of this objective performance focus is the emergence of Procurement Directors in the legal space. This is yet another sign that the C-Suite–if not General Counsel–is getting serious about applying the same metrics and fiscal accountability to legal delivery as is routine in other parts of the business and professional service arenas. Metrics have hit the legal shore and are likely to have a profound influence on the delivery system as well as on the “freedom” of lawyers to operate with the independence they once did.
But let’s return to the issue of metrics and lawyer selection/evaluation. Here are some key things worth measuring:
- Domain expertise
- Grasp of client’s business/ethos
- Effective use of resources (and willingness to collaborate with others outside the law firm)
- Efficiency in delivery (measured by willingness to provide a detailed, transparent, fixed-price scope of work specifying who, what, and when, with deliverables broken down by tasks/functions). Think: engaging a general contractor
- Data on historical delivery performance
Performance and value can be measured against realistic expectations agreed upon by client and lawyer/firm at the inception of an engagement. True, most cases take unexpected turns, but an experienced attorney–as well as a sophisticated client–can reasonably foresee the incidence and variety of those turns. That goes to experience and the “professional judgment” that is the crux of what a lawyer should bring to each engagement. Insurance companies (among others) have long applied “reserves” for cases broken down into: (1) liability; and (2) legal spend. An easy measure for gauging effectiveness is to answer the question: “How did the lawyer/firm perform against these two metrics?” That goes to “cost: value” and performance.
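The reserves-based measure described above lends itself to a simple calculation: compare the actual outcome and actual legal spend against the two reserves set at the outset. A minimal sketch, with entirely hypothetical dollar figures (the function name and structure are my own illustration, not any company’s actual methodology):

```python
def performance_vs_reserves(liability_reserve, spend_reserve,
                            actual_liability, actual_spend):
    """Compare an engagement's actual outcome and legal spend against
    the liability and spend reserves set at its inception."""
    return {
        # Negative values mean the result came in under the reserve.
        "liability_vs_reserve": actual_liability - liability_reserve,
        "spend_vs_reserve": actual_spend - spend_reserve,
        "beat_both_reserves": (actual_liability <= liability_reserve
                               and actual_spend <= spend_reserve),
    }

# Hypothetical engagement: reserves of $500k (liability) and $200k
# (legal spend); the case resolves for $350k with $180k in fees.
result = performance_vs_reserves(500_000, 200_000, 350_000, 180_000)
# result["beat_both_reserves"] is True: under both reserves.
```

The virtue of this measure is that both benchmarks are fixed before the work begins, so the comparison captures exactly the “cost: value” performance the client cared about at inception.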
Conclusion
Big Data has its place in law, especially litigation. But metrics are only as useful as the criteria they measure. Like evidence, metrics themselves must be evaluated on a spectrum from highly probative to irrelevant. We, as lawyers, should focus on relevant metrics that help to reveal our true value. If we don’t, others will do it for us.