In his latest post, Craig Ball gives his assessment of the Reconstruction-Grade eDiscovery Standard. Here are his thoughts in a nutshell.
In his post (A Dog and Its Tail: Don’t Let Version Uncertainty Cloud Linked Attachment Production, available here), Craig begins by referencing the pair of posts he wrote (3/29/24 and 4/8/24) about linked attachments—what Microsoft calls “Cloud Attachments” (a name at least one Microsoft technologist has expressed regret over)—arguing that producing parties had been getting away with murder by not collecting and searching them. Craig also notes: “The landscape has shifted since, and largely in the right direction”, referencing last year’s Carvana case, which required defendants to conduct what was in effect a pilot test of their capability to produce hyperlinked files (with the expectation of retrieving the contemporaneous version to the extent possible).
The focus of the post, however, is the Reconstruction-Grade eDiscovery Standard, authored by Peter Kozak and Brandon D’Agostino (of Cloudficient), which articulates an architectural framework for what preservation of collaborative evidence should look like. As Craig notes: “It’s ambitious and thoughtful” and “I think it gets several things right”.
In his analysis, Craig points out that the RG standard identifies what it calls the “Preservation Gap” (the referenced content is never preserved at all) and the “Context Gap” (the content is preserved, but not in the state in which it existed at the relevant time). Craig says: “That’s a useful distinction” and I agree.
The standard treats deterministic version resolution—preserving the as-sent version of a linked document, the version that existed when the message was transmitted—as a core conformance requirement, which Craig refers to as the “gold standard”.
So, what’s the problem?
Craig’s problem is that “the gold standard can become the enemy of any standard at all… To my eye, the versioning concern has been weaponized. It goes like this: a requesting party asks for linked attachments. The producing party raises the specter of versioning—’Which version do you want? The as-sent version? The as-accessed version? The current version? We can’t be sure which is the ‘right’ one, so the whole exercise is fraught with uncertainty.’ And that uncertainty becomes the justification for producing no version. Not the wrong version. No version.”
I get Craig’s concerns – he’s been involved in a lot more negotiations about ESI issues than most of us, including me. Having said that, it should be feasible to propose an approach that gets the contemporaneous version when possible, with a fallback to the most recent version when it’s not. As the tools continue to improve, that contemporaneous version will be easier to get – it already is.
Craig also says: “That’s the tail wagging the dog”, where the “dog” is the threshold obligation to collect and search linked attachments, and the “tail” is the versioning issue (now you understand today’s image!). While Craig says he doesn’t dismiss the versioning issue, he questions how often the version of the document actually comes into play. While renewing his call for meaningful stats on what percentage of cloud attachments are actually modified after transmittal, Craig offers his own “intuition based on experience, not evidence… that fewer than ten- to twenty percent of linked attachments are meaningfully modified after being shared, and perhaps far fewer than that.”
Fair point. I don’t have any more than my own intuitive guess at the percentage of linked files modified after being sent. I do think the norms are changing – as we collaborate more, I think the percentage is going up. In the meantime, I’ll add my call to Craig’s for more meaningful stats.
Craig does include a section on “What the Standard Gets Right”, including exception transparency (documenting what couldn’t be collected and why), the aforementioned Preservation Gap vs. Context Gap distinction, and capability testing as an emerging judicial norm (via more cases like Carvana).
Craig concludes by drawing a comparison between “the immediate obligation” to collect what you can today and “the aspirational architecture” of reconstruction-grade fidelity, which is “where the industry needs to go”. Craig states: “the bridge between those two isn’t ‘wait until perfect tools exist.’ The bridge is ‘do what you can now, document what you can’t, and improve your capabilities over time.’”
Couldn’t agree more. Still, we need “the aspirational architecture” because the data and the technology are evolving rapidly, and it’s important to think ahead on standards that address those rapid changes. If you’re not thinking ahead, you’re falling behind.
So, what do you think? Have you heard of the RG Standard for eDiscovery? You have now! 😊 Please share any comments you might have or if you’d like to know more about a particular topic.
P.S.: Thanks (once again) for the kind words, Craig!
Image created using DALL-E 3, using the prompt “robot dog looking at its tail quizzically”.
Disclosure: Cloudficient is an Educational Partner and sponsor of eDiscovery Today
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.