Here’s a New Industry Initiative to Develop a Proportionality Benefit-Burden Model: eDiscovery Best Practices

Since the 2015 changes to the Federal Rules of Civil Procedure (FRCP), proportionality in eDiscovery has received greater emphasis – and rightly so.  But the process of balancing benefit and burden to determine what’s proportional has been highly subjective, both before and since the Rules changes.  Here’s a new industry initiative that is looking to develop a model to make the process less subjective and more defensible.

FRCP Proportionality Amendments of 2015

In case you didn’t know (or forgot), the 2015 Amendments to FRCP 26(b)(1) specify six factors for determining whether propounded discovery is “proportional to the needs of the case”: 1) the importance of the issues at stake in the action; 2) the amount in controversy; 3) the parties’ relative access to relevant information; 4) the parties’ resources; 5) the importance of the discovery in resolving the issues; and 6) whether the burden or expense of the proposed discovery outweighs its likely benefit. It’s important to note that much of the language associated with the Rule 26(b)(1) amendment actually comes from the old Rule 26(b)(2)(C)(iii), but was raised to higher prominence in the Rules with the 2015 amendments. Notably, the comparison of burden or expense to benefit is the last proportionality factor in the rule.

Assessing Proportionate Benefit and Burden ESI Model

The James F. Humphreys Complex Litigation Center of The George Washington University Law School has embarked on a project (listed here among the current Litigation Center projects) to develop a proportionality benefit-and-burden model that provides a practical means of assessing claims of proportionality by plaintiff and defense counsel.  The model, which is based on Prism Litigation Technology’s Evidence Optix® proportionality assessment framework, is a process that ranks custodians and their respective data sources by priority and discovery burden.  Once custodians are sorted into four broad categories, those ranked highest in priority with the least burden are quickly identified.  Under the process, a table of projected discovery costs for every custodian and every data source is also developed, and updated during the litigation, to refine proportionality assessments.
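
To illustrate the general idea (and only the general idea: the scoring scale, category labels and cost field below are hypothetical stand-ins of mine, not anything taken from the Evidence Optix framework or the GW model), a priority-and-burden ranking of custodians could look something like this minimal sketch:

    # Hypothetical sketch of a priority/burden quadrant ranking for custodians.
    # The 1-5 scoring scale, category labels and cost field are illustrative only;
    # they are not taken from the Evidence Optix framework or the GW model.
    from dataclasses import dataclass

    @dataclass
    class Custodian:
        name: str
        priority: int          # 1 (low) to 5 (high): likely importance to the claims
        burden: int            # 1 (low) to 5 (high): effort/cost to preserve, collect, host
        projected_cost: float  # estimated discovery cost for this custodian's data sources

    def quadrant(c: Custodian) -> str:
        """Sort a custodian into one of four broad benefit/burden categories."""
        if c.priority >= 3 and c.burden < 3:
            return "high priority / low burden"   # collect first
        if c.priority >= 3:
            return "high priority / high burden"  # negotiate, phase or narrow
        if c.burden < 3:
            return "low priority / low burden"
        return "low priority / high burden"       # strongest candidate to defer or exclude

    custodians = [
        Custodian("VP of Sales", priority=5, burden=2, projected_cost=12_000),
        Custodian("Legacy file share admin", priority=2, burden=5, projected_cost=45_000),
    ]
    for c in sorted(custodians, key=lambda c: (-c.priority, c.burden)):
        print(f"{c.name}: {quadrant(c)} (projected ${c.projected_cost:,.0f})")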

The model is intended to provide a structured methodology that enhances a party’s proportionality assessments, facilitates discovery negotiations with the opposing party, and better informs judicial resolution of discovery disputes.  It also provides an early snapshot identifying the custodians and data sources most likely to bear fruit at the least burden, which can lead to a better understanding of the needs of the case and identify the custodians and data sources that should be examined next.  Participants on the steering committee and editorial board include numerous judges, attorneys, and eDiscovery experts.

John Rabiej, who was previously Director of Duke Law School Center for Judicial Studies (which included an active role in leadership of EDRM when it was owned by Duke Law), is partnering with GW’s Humphreys Complex Litigation Center to lead this initiative.  With regard to the model and how some might view it skeptically as a tool to be weaponized against themselves in actual litigation, Rabiej said, “The model has two strong safeguards to prevent that.  First, the model targets only the benefit and burden/cost factor.  It does not claim to make the final proportionality assessment finding.  That can be done only after considering the six Rule 26(b) factors.  Second, the model is not static.  Its benefit and burden assessments are based on criteria that can be updated throughout the litigation as new information is learned.”

Mandi Ross, who is Founder and CEO of Prism Litigation Technology and one of the members of the steering committee, said: “The time and energy that is dedicated to eDiscovery still often overshadows the merits of the case, making it too expensive, time consuming, and overly broad.  The GW framework we are developing is designed to be an industry-standard model that operationalizes proportionality and enables legal teams to create a defensible, transparent approach to right-size discovery early.  It’s also designed to be applicable both to MDL class action litigation, which is traditionally asymmetrical, and to commercial B-to-B litigation.”

The proportionality model is not without controversy.  Although the finished model will be adopted and built by a responding party (often the defense, but increasingly the plaintiff as well) and will depend largely on information that is exclusively in its hands (e.g., burden and cost), the project at the outset invited 12 plaintiff lawyers to participate.

Rabiej added, “Their suggestions resulted in many edits – clarifying and refining the project’s goals.  Nonetheless, the plaintiff lawyers expressed concerns that any proportionality assessment might exclude relevant information and concluded that the model must mandate fulsome party cooperation in its development to safeguard against losing important relevant matter.  Although the federal rules promote party cooperation as an aspirational goal, cooperation is not required. The project leaders concluded that the model could not mandate how counsel are to develop it, and the user must decide for themselves the value and extent of transparency and party cooperation.”

“Instead of mandating cooperation, the model itself is drafted neutrally, leaving the decision to the user whether to share and work with opposing counsel in developing the rankings and estimating costs, or to maintain confidentiality and develop the model on their own terms,” Rabiej continued.  “The Complex Litigation Center plans to develop best practices in the future implementing the proportionality model, which will focus on transparency and party cooperation.  The plaintiff lawyers disagreed and determined to go in an entirely different direction, however, and withdrew from the project.”

The model is targeted to be published for public comment by the end of the year.  In addition, the Complex Litigation Center plans to hold an online bench-bar conference on the proportionality model on March 25-26, 2021.  The views from all quarters of the legal profession and eDiscovery experts will be sought and seriously considered by the project’s editorial board before it issues a final version.

Proportionality arguments are among the most common eDiscovery-related disputes there are (believe me, I know, as I cover 60 to 70 cases a year), so a model that helps operationalize proportionality determinations will certainly be a good thing for the industry and the legal profession.  Even if it means I might have less eDiscovery case law to cover!  ;o)

So, what do you think?  Do we need a formalized approach to proportionality determinations in litigation? Would the judiciary appreciate a more standardized model? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

9 comments

  1. Two comments. First, it’s a good idea to prioritize custodians. But presumably if a TAR platform is working well, you should be getting more/most of the responsive docs up front anyway. So I’m trying to understand the essence of the proposal here: Is it that it saves you from having to collect from certain custodians in the first place, and thus save you the pennies in GB hosting costs for those additional custodians?

    By itself, that’s not a bad thing. But of course it all hinges on (a) the quality of the custodian prediction, and (b) the amount that it saves you. (a) Do they have any hard numbers that they’re willing to share, with respect to what the prediction quality is, presumably by having tested it on a dozen cases that were “fully” collected (i.e. full collection being the ground truth “gold standard” baseline), and how many fewer custodians would have had to be collected using the prediction, and (b) even assuming that the custodian prediction is 100% accurate, what would that end up having saved you? I.e. imagine that you overcollected custodians doing it the manual way. How many more custodians would you have collected, and what would the hosting cost for those additional custodians have been (again, presuming that you’d be doing TAR post-collection anyway, and would not have to review the docs from those non-responsive custodians).

    Let’s ground the discussion in concrete numbers.

  2. Second comment: “The model has two strong safeguards to prevent that. First, the model targets only the benefit and burden/cost factor. It does not claim to make the final proportionality assessment finding. That can be done only after considering the six Rule 26(b) factors. Second, the model is not static. Its benefit and burden assessments are based on criteria that can be updated throughout the litigation as new information is learned.”

    So it’s a continuous (CAL-like, TAR 2.0) model, rather than a static SAL / TAR 1.0 model? That’s great; I’m all for that. But having been working on TAR 2.0 for literally 10 years now (since Oct 2010), I know firsthand that predictions can fluctuate. So it’s great that they’re “continuously actively predicting”, but how does that relate to proportionality? That is to say, how do they guarantee that the prediction about what is proportional keeps ahead of what is actually proportional? Maybe if the reviewers reviewed just a few more documents (thereby giving feedback on the value of this or that custodian), the predictions would change, and something that wasn’t proportional before, becomes proportional after. That is, how do they keep from falling into local maxima?
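
    To make the question concrete, here is roughly the kind of update I have in mind (purely my own sketch for discussion, not anything from the actual GW model or Evidence Optix; the 0-to-1 scale, blending weight and field names are my assumptions):

    # Purely illustrative: one way a "not static" custodian benefit score could be
    # refreshed as review feedback comes in. The 0-to-1 scale, blending weight and
    # field names are assumptions, not part of the GW model or Evidence Optix.
    def updated_benefit_score(prior_score, docs_reviewed, docs_responsive, weight=0.5):
        """Blend the original (pre-collection) benefit score with the responsiveness
        rate actually observed in review so far, both on a 0-to-1 scale."""
        if docs_reviewed == 0:
            return prior_score  # no review feedback yet; keep the original assessment
        observed_rate = docs_responsive / docs_reviewed
        return (1 - weight) * prior_score + weight * observed_rate

    # A custodian scored 0.2 up front, but 40 of the first 100 reviewed docs are responsive:
    print(round(updated_benefit_score(prior_score=0.2, docs_reviewed=100, docs_responsive=40), 2))  # 0.3

    The question remains whether refreshing scores like that keeps the proportionality line ahead of reality, or whether it just confirms whatever the initial collection happened to see.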

Thanks for the comments, Dr. J. In this case, this is in the identification/preservation/collection phase and establishing an appropriate level of custodians to collect from to feed the review process (whether TAR or otherwise). A lot of organizations over-collect and include custodians with a very low probability of having responsive ESI. There are also custodians that are very likely to have responsive ESI and some that may or may not. The question is where to draw the line and this initiative is designed to help with that process — to balance the burden and benefit associated with the discovery effort in a methodical and defensible manner.

    TAR can certainly help get to the responsive docs quicker, but there is still a burden associated with preservation and collection of ESI from custodians. To the extent that decisions on which custodians to include are handled in a methodical and defensible manner, the process becomes more efficient, regardless of downstream efficiencies gained by review technologies and protocols.

    My two cents.

  4. Written discussions are difficult, so apologies if I didn’t express myself clearly the first time around. I look forward to the day when we can all chill out in a conference room or in the hotel lounge and have these sorts of discussions, casually.

    So yes, I’m with you on everything you said above, about how there is value in preventing over-collection. I just see difficulty in _proving_ that value, in real time, particularly if the approach is a “continuous learning” one, as it seems to be in the sentence that says “the model is not static”.

    What I’m trying to get at is that custodian prediction is actually a form of TAR, if you think about it. Only instead of categorizing or prioritizing individual documents, you’re categorizing or prioritizing entire chunks of documents. Which chunks are grouped by custodian.

    And if the model is not static, if things are constantly changing, then imagine the following: Suppose that the model currently says that a particular chunk of documents (i.e. a particular custodian) is of low likely relevance to the matter. That therefore it wouldn’t be proportional to collect from that custodian. But it’s just on the cusp, so you collect anyway. Then you discover upon reviewing the docs from that custodian that there was a lot more there than you thought there was going to be. Models aren’t always right, and the whole point of the “model is not static” safeguard you mention is that the model can change. So now because this new custodian turns out to be a lot more relevant than the model originally thought, it makes other custodians appear to be more relevant than they did before, too.

    So at the end of the day, you still end up collecting from a number of custodians. Maybe fewer than you would have before. But maybe not… maybe you collect from just as many custodians, but they’re the right custodians instead of the wrong custodians. So you don’t actually save any money, but you do do a better job of finding all the information that you’re legally required to find.

    So I’m curious about two things:

    (1) What sort of experiments have they done to show empirically how these predictions change the volume that is collected? What was their ground truth? Did they test the “null set” of custodians to see if their model was correct? If the model did save you from having to do some collection, how many custodians does it save? And more to the point: (2) If it does save a particular number of custodians, what is the total cost savings? I’ll give an example in my next comment.
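
    For question (1), here is a toy version of the kind of measurement I mean, assuming a handful of test cases that were fully collected so the responsive custodians are actually known (all custodian IDs and counts below are invented for illustration):

    # Hypothetical evaluation sketch: compare a predicted custodian list against a
    # fully collected baseline case where we know which custodians actually held
    # responsive documents. Custodian IDs and counts are invented for illustration.
    def evaluate_custodian_prediction(predicted, responsive, all_collected):
        """Return recall of responsive custodians, reduction in collection scope,
        and the custodians the prediction missed (worth spot-checking)."""
        predicted, responsive, all_collected = set(predicted), set(responsive), set(all_collected)
        recall = len(predicted & responsive) / len(responsive) if responsive else 1.0
        reduction = 1 - len(predicted) / len(all_collected)
        missed = responsive - predicted
        return recall, reduction, missed

    recall, reduction, missed = evaluate_custodian_prediction(
        predicted=["c01", "c02", "c03", "c04"],
        responsive=["c01", "c02", "c05"],
        all_collected=[f"c{i:02d}" for i in range(1, 31)],
    )
    print(f"Recall of responsive custodians: {recall:.0%}")       # 67%
    print(f"Reduction in custodians collected: {reduction:.0%}")  # 87%
    print(f"Missed custodians to spot-check: {sorted(missed)}")   # ['c05']

    A half dozen cases measured that way would tell us both how much the prediction trims and what it misses.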

  5. Unless one can assess total cost savings, one cannot make a proportionality argument, non? So let’s imagine the following. Let’s pretend there is some matter, where the manual, human-driven collection approach yields 30 custodians. And that the predictive approach yields 20. Chops 1/3 of the custodians away, which sounds great, right?

    However, that might not be the full story. Let’s suppose the predictive approach initially narrowed the original selection down to (let’s say) 12 but, because it is not a static model and learns continuously the entire way through, found 8 more custodians that (a) the human hadn’t identified, and (b) were originally not thought to be that relevant — but that later turn out to be relevant. This was learned because of the model’s safeguards, i.e. “[the model’s] benefit and burden assessments are based on criteria that can be updated throughout the litigation as new information is learned.”

    But all in all, let’s say 30 custodians for the manual approach, 20 for the predictive approach. For the sake of discussion, let’s pretend that each custodian has the same number of documents, d.

    The cost of hosting for the manual approach is:

    30d * perdoc_hosting_cost.

    The cost of hosting for the predictive custodian approach is:

    20d * perdoc_hosting_cost.

    TAKEAWAY: More custodians (and their documents), higher hosting costs.

    Let’s further presume that, where there are responsive documents to be found, TAR saves you from having to review half (50%) of them. Where there are not responsive documents to be found, though, because it’s a non-relevant custodian that has been overcollected, let’s say TAR saves you from having to review 99% of that custodian’s collection. Right? Because that’s the whole point of TAR. If that custodian has no responsive documents, TAR won’t guide you to reviewing (most of) those documents, even if you’ve overcollected that custodian.

    With the manual approach, only 12 of the 30 custodians were “relevant” custodians. The other 18 were overcollected and not so relevant. Recall that d was the number of documents for each custodian. The cost to review documents for these custodians with TAR would be:

    ((12 * 0.5 * d) + (18 * 0.01 * d)) * perdoc_review_cost =
    (6d + 0.18d) * perdoc_review_cost =
    6.18d * perdoc_review_cost

    With the custodian-predictive approach, all 20 of the 20 custodians are “relevant” custodians. Right? Because it whittled the original 30 down to 12, and then found 8 more that were initially unknown. So the cost to review documents for these custodians with TAR would be:

    ((20 * 0.5 * d) + (0 * 0.01 * d)) * perdoc_review_cost =
    10d * perdoc_review_cost

    TAKEAWAY: More relevant documents, higher review costs. Even with TAR.

    Alright, so let’s put it all together. For the manual collection approach we have:

    (30d * perdoc_hosting_cost) + (6.18d * perdoc_review_cost)

    And for the custodian-predictive approach we have:

    (20d * perdoc_hosting_cost) + (10d * perdoc_review_cost)

    Next, let’s use the RAND estimate that review is approximately 73% of the total cost, while the tech side is 27%. So whatever the actual cost c of every document, 0.27 of that amount goes to hosting, and 0.73 goes to review. This leaves us with the following figure for the manual collection approach:

    (30d * 0.27) + (6.18d * 0.73) =
    8.1d + 4.51d =
    12.61d

    And for the custodian-predictive approach:

    (20d * 0.27) + (10d * 0.73) =
    5.4d + 7.3d =
    12.7d

    Thus, it _could be_ slightly more expensive (12.7 * d) to do custodian prediction than it is to not do custodian prediction (12.6 * d).

    TAKEAWAY: When arguing proportionality, one has to look at the total cost of the entire endeavor, not just the hosting costs separately, or the review costs separately, or any other cost separately.

    These numbers will of course be different on real data, and will furthermore be different from case to case. How different depends on a number of factors: (1) how much overcollection is done manually, (2) how much custodian prediction both takes away and adds, and (3) the various other TAR efficiencies, collection richnesses, document counts for each custodian, etc.

    So my point isn’t to insinuate that these numbers are the final answer. And I absolutely do not mean to suggest that custodian prediction will necessarily yield a worse result. I simply wish to show that, in order to demonstrate how much custodian prediction bends the cost curve, one needs both real data and a proper baseline. And not just for one case, but for a half dozen or so, because of the variability among cases.

    You know my ongoing motif is evaluation, evaluation, evaluation. I want to see more of it, everywhere in the industry. It took me much longer than probably necessary to say all that, but that’s really all I’m after here: Show me the money. Show what the various tradeoffs and curves do, on real data. Only then can one answer your final question, which is “Do we need a formalized approach to proportionality determinations in litigation? Would the judiciary appreciate a more standardized model?” In order to answer that, I need to see the model in action, on real data, and what effects it has.
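
    For anyone who wants to play with the arithmetic above, here is the same back-of-the-envelope comparison as a small script; the custodian counts, TAR review fractions and the RAND 73/27 split are the hypotheticals from this comment, not real case data:

    # Back-of-the-envelope cost comparison from this comment, parameterized so the
    # assumptions can be varied. All numbers are hypothetical, not real case data.
    def total_cost(relevant, overcollected, docs_per_custodian=1.0,
                   review_frac_relevant=0.5, review_frac_overcollected=0.01,
                   hosting_share=0.27, review_share=0.73):
        """Relative total cost, in units of per-document cost times d."""
        hosted = (relevant + overcollected) * docs_per_custodian
        reviewed = (relevant * review_frac_relevant
                    + overcollected * review_frac_overcollected) * docs_per_custodian
        return hosted * hosting_share + reviewed * review_share

    manual = total_cost(relevant=12, overcollected=18)     # 30 custodians collected
    predictive = total_cost(relevant=20, overcollected=0)  # 20 custodians, all relevant
    print(f"Manual collection:    {manual:.2f}d")      # 12.61d
    print(f"Custodian prediction: {predictive:.2f}d")  # 12.70d

    Change any of those inputs and the comparison flips easily, which is exactly why real data and a proper baseline matter.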

  6. Absolutely…I think this is an excellent idea! I don’t believe anyone will argue that we are going to see lower data and document volumes in eDiscovery in the future, so the ability to apply proportionality is going to be every bit as important as addressing that exponential data growth itself.

    It seems to me that a model to serve as a guideline in data collection aids all parties, but especially the producing party, who wishes to be as accurate and complete as possible at the lowest cost. A model that is implemented within the organization also seems at least close to qualifying as work product, arguably providing protection for the organization. And I like that it is similar to an attorney legal hold certification form and process I developed: just another “tool in the toolkit” for the firm or Legal Department to be more efficient and effective in their eDiscovery program.

    Best Regards,

    Aaron Taylor
