Yes, even during the pandemic, people are looking for better ways to manage Technology Assisted Review (TAR) and provide transparency into the process. In a recent article on Law360 (subscription required, also available here), Christine Payne of Redgrave LLP and Michelle Six of Kirkland & Ellis LLP proposed a new framework for using TAR in the context of civil litigation. Christine and Michelle were nice enough to sit down with me for an interview to discuss the framework, how it came about and where they see it heading from here.
The article detailed the inconsistent standards applied to TAR versus attorney review, as well as the need to recognize that the benefits of TAR can be eviscerated by lengthy negotiations and overbearing protocols. The authors proposed a technique to reduce cost and contention, addressing legitimate concerns about the accuracy of large-scale responsiveness reviews while also mitigating unhelpful and expensive arguments over process.
About The Authors
Christine Payne is a nationally recognized advocate specializing in eDiscovery and litigation strategy. She handles all aspects of case strategy and discovery for complex commercial litigation, restructuring-related litigation, products-liability litigation, antitrust matters, Section 220 requests, and ongoing or anticipated investigations. Before joining Redgrave in May 2019, Christine was a partner at Kirkland & Ellis, where she led the Firmwide Electronic Discovery Committee. At Redgrave, Christine leads the Restructuring Discovery group and is the chair of the firm’s Diversity Committee. Christine lives in Austin, TX and is the mom of two cool kids, Ty and Addie.
Michelle Six is a partner in Kirkland’s New York office, where she focuses exclusively on electronic discovery law, concentrating on creating, monitoring, and implementing best practices and strategies for e-discovery. As Vice-Chair of the Firmwide Electronic Discovery Committee, she leads the Firm’s eDiscovery efforts and counsels clients on litigation readiness, developing eDiscovery strategy, and data privacy considerations and compliance. She frequently speaks at conferences and CLE programs on issues and solutions related to electronic discovery. Michelle is a Chambers-ranked attorney and mother of two budding Shakespeare lovers, Oliver and Emma.
Part 1 of our interview is being published today. We will publish the conclusion tomorrow.
What has been your experience with regard to TAR approaches and how did that lead the two of you to develop this new proposed framework involving a “report card” system?
Michelle: TAR definitely can be faster and cheaper than attorney review, and so it’s pretty attractive to clients. But a lot of them end up rejecting the idea. They don’t want to create an opening for contentious opposing counsel to come in and make a lot of unnecessary hay, creating discovery disputes and driving up attorneys’ fees. I know there are a lot of plaintiffs’ attorneys out there who feel like you have to be up in everybody’s business to get reliable results. And that’s too bad; maybe it’s reflective of defense attorneys being too … defensive? Some parties also see it as too risky because they don’t trust the idea of technology identifying relevant documents as reliably as actual live humans, and the available research isn’t always convincing for them. So the promise of TAR as the “review of the future” hasn’t really taken off. If we can re-orient the discussion to focus on results in real-world scenarios, we think everyone benefits.
Why do you think that lawyers are so much more demanding of transparency with TAR approaches than they are with traditional attorney review?
Christine: Because they can be. In the early days of TAR, any doubts about the technology’s capabilities were smoothed over by offers of unprecedented transparency. It made TAR very attractive, but also set us up in this weird posture where using TAR now opens a door to enhanced transparency that isn’t required with traditional attorney review.
We’re just getting started! It’s another cliffhanger! Tune in for the conclusion tomorrow!
So, what do you think? Have you had cooperation challenges with opposing counsel over TAR? Please share any comments you might have or if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
“Some parties also see it as too risky because they don’t trust the idea of technology identifying relevant documents as reliably as actual live humans, and the available research isn’t always convincing for them.”
Follow up question: So why don’t those parties do their own experiments? That is, instead of relying on the available research, why not use your own data and the actual process that the humans followed without TAR, and then, via simulation, show how that same exact process would have worked with TAR? There are standard methodologies for doing this kind of simulation that date back to the 1960s; such comparisons are not new or unique in any way, i.e., there is enough precedent in how to do them to satisfy even the most conservative among us.
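To make that concrete, here is a minimal sketch of what such a retrospective simulation might look like, assuming a matter that has already been fully reviewed by humans so that every document carries a ground-truth responsiveness label. The file name, column names, seed-set size, and the TF-IDF-plus-logistic-regression ranker are illustrative assumptions only, not any particular vendor’s protocol:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Full set of documents from a matter that humans already reviewed end to end.
docs = pd.read_csv("reviewed_matter.csv")  # columns: doc_id, text, responsive (0/1)

# Simulate the TAR side: train a ranker on a modest seed set, score the rest.
seed = docs.sample(n=2000, random_state=42)
rest = docs.drop(seed.index)

vec = TfidfVectorizer(max_features=50_000)
model = LogisticRegression(max_iter=1000)
model.fit(vec.fit_transform(seed["text"]), seed["responsive"])

rest = rest.assign(score=model.predict_proba(vec.transform(rest["text"]))[:, 1])
rest = rest.sort_values("score", ascending=False).reset_index(drop=True)

# Because every document already has a human label, we can ask exactly how deep
# a score-ordered review would have had to go to hit a given recall target.
target_recall = 0.80
total_responsive = docs["responsive"].sum()
found_so_far = seed["responsive"].sum() + rest["responsive"].cumsum()
depth = int((found_so_far >= target_recall * total_responsive).idxmax()) + 1

print(f"Docs reviewed to reach {target_recall:.0%} recall with TAR: {len(seed) + depth}")
print(f"Docs reviewed by the original linear human review: {len(docs)}")
```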
I work for a legal technology vendor, and I have been making this offer to attorneys for literally the past 9 years, yet very few avail themselves of the opportunity to actually do it. I don’t say this to talk up my own offering, but rather to point out that there are more options available than either blind trust/distrust in the technology, or generic published research. What are Michelle and Christine’s thoughts on that?
Hi Jeremy. You’re certainly “preaching to the choir” as far as I’m concerned. My opinion is that it is at least partly the general challenge of getting lawyers to embrace and keep up with technology trends. The same is true with regard to understanding preservation duties for mobile device technology and messaging apps, or form of production options.
I will reach out to Christine and Michelle and see if they want to weigh in on your comment as well. Thanks!
Oh, agreed that it is part of the general challenge to get ’em to keep up with the trends. But thankfully that’s not a challenge that you and I have to solve. That’s what the ABA Model Rules of Professional Conduct, specifically the duty of technological competence, are for. So what I’m curious about from Christine and Michelle is (a) how folks do (or don’t) make good on that duty, and (b) how folks _should_ make good on that duty.
Because what I’m hearing is that folks don’t believe that TAR is more accurate than the human’s ability, but that they also don’t accept (aren’t convinced by?) the peer-reviewed scientific research that has been published on the topic.
I maintain that it’s axiomatic that a belief without evidence does not meet the standards of a technological competence duty. So if peer-reviewed scientific research does not satisfy someone’s belief, to what are they turning to garner that evidence — either for or against? It’s certainly not to self-experimentation, in the vast majority of scenarios I’ve seen. So then, to what are they turning? How are they resolving it?
Or are they abdicating their technological competence duty? Is it as simple as that?
(BTW: I don’t think one should have to keep up with any and every minute little technological change. But it’s also not an all-or-nothing proposition. If one can’t keep up with everything, that doesn’t mean one doesn’t have to keep up with anything, either. And the notion of whether humans or machines can get to more relevant documents, faster, is a basic idea that is common to almost every technological variation, one that has been around for a decade or more in the legal industry. So pretty much every lawyer should have had a plan by now to figure out how they’re going to go from belief to testing that belief. Right?)
Jeremy, we are so thankful for your engaging question. This type of discourse is precisely what we were hoping the article might spur. To answer your question, I can tell you that I’ve been involved with a number of parties that have indeed done their own informal testing. But those are rare occasions—most litigants don’t have the time or resources to run these types of tests, or to invest the personnel required to ensure that they’re done in a way that would be legally defensible (if it were real) and that the statistics are analyzed correctly. And in those circumstances where we have run large-scale comparisons, the results aren’t what you might imagine. But then also, there’s something important about published studies that are available to everyone. The public nature of research helps us in our discussions with folks not in the tent of privilege—judges, special masters, opposing counsel. So I would say if you’ve got results from the last 9 years, publish them! Continually feeding the current understanding of new and developing technologies always pushes us in the right direction.
And thank you for your response as well! I’m tempted to jump into the requisite questions around all the details of the tests you’re talking about, but I think for this forum I should keep it on a high level. And then move the more detailed discussion to other forums.
So that said, let me just say that the types of comparisons that I’ve been doing for the past ten years in this industry require literally no extra time and resources from the client other than what it takes to ship the data over, which should be relatively trivial, especially when compared against the value that the knowledge brings when the comparison is done. There are even ways of doing it without shipping the ground truth over all at once, too, which sets up a trusted environment.
I’ve also witnessed some entities literally go about the whole process in the completely wrong way, including hiring a summer intern to do nothing but manually input / transfer doc coding values from one platform to another, one at a time. That’s a mindset that needs changing here: there are other ways of doing comparisons that don’t require heavy resources from the client.
I’m not sure what you mean by doing the comparison in a “legally defensible” way, though. For any comparison, the four most important factors are (a) consistency of collection scope, (b) consistency of ground truth, (c) choice of baseline, and (d) choice of metric. If what you are saying is that the metric needs to be aligned with what the actual task is (e.g., if a minimum of 80% recall is legally required, then the metric needs to be precision@80% recall, time@80% recall, or cost@80% recall), then yes, I absolutely agree, especially when I see tons of market claims all throughout the industry that don’t even come anywhere near the necessary metrics. There needs to be raised awareness of what one should actually measure.
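For what it’s worth, here is a toy sketch of what a recall-aligned metric like precision@80% recall means computationally; the function name, the assumption of complete ground-truth labels, and the example numbers are mine, purely for illustration:

```python
import numpy as np

def precision_at_recall(scores, labels, target_recall=0.80):
    # Review in descending-score order, exactly as a TAR workflow would.
    order = np.argsort(scores)[::-1]
    hits = np.cumsum(np.asarray(labels)[order])
    needed = target_recall * hits[-1]           # responsive docs required for the recall floor
    depth = int(np.argmax(hits >= needed)) + 1  # shallowest depth meeting the floor
    return hits[depth - 1] / depth              # precision at that depth

# Six documents, three responsive; an 80% recall floor means all three must be
# found, which happens at depth 4, so precision there is 3/4 = 0.75.
print(precision_at_recall([0.9, 0.8, 0.2, 0.7, 0.6, 0.1], [1, 1, 0, 0, 1, 0]))
```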
Sometimes I feel like there needs to be more discussion not only around report cards, but around how one sets up a proper comparison.
Separately, the nice thing about comparing against human review is that the statistics are trivial to compute. Because the documents have been judged in full, one does not need to rely on sampling to get accurate numbers.
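As a quick illustration of that point (the function name and document IDs below are made up), when every document carries a human judgment, recall and precision reduce to exact set arithmetic:

```python
def exact_recall_precision(tar_responsive_ids, human_responsive_ids):
    # With every document judged, both denominators are fully known:
    # no sampling, no confidence intervals.
    tar, truth = set(tar_responsive_ids), set(human_responsive_ids)
    true_pos = len(tar & truth)
    return true_pos / len(truth), true_pos / len(tar)  # (recall, precision)

# e.g. TAR flagged docs {1, 2, 3, 5}; the full human review marked {1, 2, 4, 5, 6} responsive
print(exact_recall_precision({1, 2, 3, 5}, {1, 2, 4, 5, 6}))  # (0.6, 0.75)
```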
And yes, I’ve published a number of experiments over the years, both peer-reviewed studies and white papers. And I will continue to do so. 🙂