How do I prove that TAR makes sense?
By Mark Walker, VP Advisory Services at iControl ESI
Please post a response with your thoughts, especially if you disagree with any of this or if I get anything wrong. This post is intended to prompt discussion on the topic.
Introduction
When does it make sense to use a TAR workflow? Those of us who work with predictive analytics (a/k/a predictive coding) and TAR workflows have been asked this question more times than we can count. The answer is usually the unpopular “it depends.” At the end of the day, the decision should come down to a cost vs. benefit math exercise, but other factors weigh in as well: the time allotted to get the production out the door, the resources available, and the budget. Even case strategy can factor into the equation. There are a lot of variables. Virtually everyone agrees that in most cases we simply cannot review everything, yet most teams resort to nothing more than date, file type, and search term filters. We can do better.
Those of us who have been using TAR workflows for years know that a well-planned TAR workflow using machine learning (preferably active learning) will save both time and money. We know that the right technology, applied with sound sampling methods in which humans teach the technology to find what they seek, is highly accurate. But how do we prove it to someone who has never traveled that road? Lawyers are all about proof. That’s what they do. We have a tough audience.
Defining the Problem
A few weeks ago, I reconnected with a LitSupport manager at a major law firm. He has been in the industry a very long time and closely follows the most cutting-edge technology. As a LitSup manager, he has had success convincing lawyers within his firm to use TAR workflows. Well, some of them. This time, I asked him the dreaded question, but in a slightly different way: “What kinds of cases should your lawyers consider using predictive analytics for?” His answer, tongue in cheek: “Every case!” We both got a good chuckle out of that answer. But while we chuckled, he is exactly right. Like everyone else in the industry, though, he is frustrated with our collective inability to make the argument in a way that resonates with lawyers. Some use fear to convince: if you don’t do it, others will. Lawyers like litmus tests. Bright lines. They don’t like grey. And lawyers don’t react well to threats and attempts to invoke fear.
When reviewing documents, lawyers want documents that are relevant. Sure, good lawyers are concerned about cost, and one would think they would be interested in anything that makes them more efficient. But they are also concerned about risk and trust.
Here’s the root of the problem: relevancy rates in collected documents are often as low as 1%. That means 99 out of every 100 documents collected have no value. Sure, there are exceptions, but it is rare for a document review relevancy rate to exceed 50% using traditional search and review workflows. No matter how you cut it, when 50% of what you review (in the best case) is wasted effort, there is an expensive problem that needs to be solved. By the way, a search and review workflow that achieves a 50% relevancy rate during review is a phenomenal achievement; we traditionally see closer to 30% without leveraging a TAR workflow. We can do better! We must get as close as possible to the 1% we seek.
Using a document count litmus test to determine whether to use predictive analytics doesn’t work. For example: “use predictive analytics when you have 10,000 documents to review.” A single custodian (witness) has, on average, 10,000 documents collected. If 1% of that is what we expect to be relevant, then out of 10,000 documents you’re seeking 100 that are relevant. There are too many other factors that might make it more cost effective to just review the 10,000 documents. Document count is not the right litmus test.
Solving the Problem - Do the Math
Using our 10,000-document, single-custodian example, we arrive at a conservative 50% relevancy rate litmus test. That is, if you expect that whatever method you use to filter down before review will yield less than a 50% relevancy rate during review, then it makes sense to deploy TRUSTED predictive analytics technology in your review, often in conjunction with validating search terms to exchange with the opposition. See Combining Search and Predictive Coding in 5 Easy Steps. While you can’t know for certain what the actual relevancy rate will be up front, you can usually have a pretty good idea of whether it will be above 50%.
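As a quick illustration of that litmus test (a sketch of my own, not a tool from iControl ESI), the decision rule fits in a few lines of Python:

```python
# A minimal sketch of the 50% relevancy litmus test described above.
# The 50% threshold comes from this post; the function itself is illustrative.
def should_use_tar(expected_relevancy_rate: float, threshold: float = 0.50) -> bool:
    """Return True when the relevancy rate you expect from traditional
    filtering is low enough that predictive analytics should pay off."""
    return expected_relevancy_rate < threshold

print(should_use_tar(0.30))  # True: the rate we traditionally see without TAR
```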
In our 10,000-document example, using traditional filter, search, and review methods, one might cut the review in half and review only 5,000 documents. At a billing rate of $250 per hour and a typical review rate of 55 documents per hour, the cost to review 5,000 documents is $22,727.27 (5,000 ÷ 55 ≈ 90.9 hours). And $250 an hour is low compared to the market rate for associates. Make your estimates conservative.
If the predictive analytics rate is $0.06 per document, the cost to classify the 10,000 documents available for review is $600. All other technology costs, such as processing and hosting, will be incurred no matter which review method you choose.
Leveraging predictive analytics, you should typically see an 80% or better relevancy rate during review. If you only achieve 50% using traditional search and review, then spending $600 on analytics buys at least a 30% improvement, which is very conservative. In this very conservative example, you reduce the review by 1,500 documents (30% of 5,000) and avoid about 27.3 hours of review time. At $250 per hour, that’s $6,818.18 of review cost avoided. Since the analytics cost just $600, the net savings is $6,218.18. How can anyone ignore that advantage?
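To make that arithmetic easy to check, here is a minimal sketch of the model in Python. The function name and structure are my own; the review rate, billing rate, per-document analytics cost, and 30% reduction are the assumptions stated above:

```python
# A sketch of the cost model described in this post, not iControl ESI's
# actual calculator. All defaults are this article's assumptions.
def review_savings(docs_to_review: int,
                   total_docs: int,
                   review_rate: float = 55,              # documents reviewed per hour
                   billing_rate: float = 250.0,          # dollars per reviewer hour
                   analytics_cost_per_doc: float = 0.06,
                   review_reduction: float = 0.30) -> float:
    """Net savings from layering predictive analytics onto a traditional review."""
    docs_avoided = docs_to_review * review_reduction      # 30% fewer docs reviewed
    hours_avoided = docs_avoided / review_rate
    review_cost_avoided = hours_avoided * billing_rate
    analytics_cost = total_docs * analytics_cost_per_doc  # classify everything once
    return review_cost_avoided - analytics_cost

# 10,000 collected documents; 5,000 survive traditional filtering for review.
print(f"${review_savings(5_000, 10_000):,.2f}")  # $6,218.18
```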
Ah, naysayers might say, we are going to use contract reviewers at $55 per hour! Even at that dramatically reduced billing rate, there is still a net savings of $900 (27.3 hours × $55 = $1,500, minus the $600 analytics cost), and don’t discount speed, either.
Predictive analytics is not just for big cases anymore.
In the example above, we used a very small case; a 10,000-document case hosted in a review platform is, well, rare these days. Many of the cases we deal with are multi-million document cases, and 100,000 hosted documents is common. Using the same modeling as outlined above, the savings achieved on a 100,000-document population are persuasive and undeniable.
At a $250 per hour review rate: a 30% reduction avoids 15,000 of the 50,000 documents reviewed (about 272.7 hours), or $68,181.82 of review cost; less $6,000 in analytics ($0.06 × 100,000), the net savings is $62,181.82.
At a $55 per hour review rate: the same 272.7 hours avoided comes to $15,000 of review cost; less the $6,000 in analytics, the net savings is still $9,000.
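Plugging the larger population into the review_savings sketch from the previous section reproduces those figures:

```python
# 100,000 collected documents; 50,000 survive traditional filtering.
print(f"${review_savings(50_000, 100_000):,.2f}")                   # $62,181.82
print(f"${review_savings(50_000, 100_000, billing_rate=55):,.2f}")  # $9,000.00
```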
Conclusion
With very few exceptions, leveraging a TAR workflow that includes predictive analytics (a/k/a predictive coding) will save considerable time and money. The courts have been encouraging lawyers to leverage technology. Clients are demanding that their outside counsel reduce costs. Fixed fee arrangements, where lawyers have skin in the game to keep the time they spend on matters low, are becoming commonplace. For contingent fee lawyers, time really is money.
Do the math yourself. Apply whatever assumptions you feel are appropriate: increase document decisions per hour, lower the hourly rates, increase the per-document cost of analytics. What you will find is that even under the most extreme and efficient assumptions, leveraging predictive analytics simply makes financial sense for everyone involved. Reach out to me and I’ll provide you with a calculator so you can input your own assumptions.
So, what’s keeping you from
leveraging predictive analytics? Inquiring minds want to know.