Craig Ball is on fire lately! He has just released a new practitioner’s guide to detecting deep fakes and authenticating digital evidence!
In Craig’s guide, titled (wait for it!) Forensic Tells: A Practitioner’s Guide to Detecting Deep Fakes and Authenticating Digital Evidence (introduced in his blog post here and available here), he discusses a concept in the Introduction that I agree with wholeheartedly: “Metadata, I’ve preached, is the DNA of digital evidence. Now, as we enter an era when any photograph, video, or audio recording can be convincingly fabricated by artificial intelligence, that metadata has become more than a curiosity for the technically inclined. It has become the last line of defense against manufactured reality.”
He adds this: “We have arrived at a moment I long feared: the democratization of deception. What once required a Hollywood studio, a team of visual effects artists, and a budget measured in millions can now be accomplished by a teenager with a laptop and a few hours to spare. Deep fake* technology—the use of artificial intelligence to create synthetic media depicting events that never occurred or words that were never spoken—has matured from a novelty to a genuine threat to the integrity of our courts.”
Craig concludes his Introduction with this: “In an age when seeing is no longer believing, the lawyers who understand digital authenticity will be the lawyers who win.”
The guide begins by explaining how deep fake technology works and why it poses such a profound risk to the justice system. Modern AI tools can now generate entirely fabricated media, from face-swapped videos to fully synthetic recordings and cloned voices. These tools are increasingly accessible, making the creation of false evidence easier than ever.
This problem is not theoretical. Courts are already seeing synthetic media in disputes ranging from family law to employment litigation. In some cases, fabricated recordings are introduced as “proof” of misconduct; in others, parties attempt to dismiss authentic evidence by claiming it is fake. This dual threat is sometimes referred to as the “Liar’s Dividend”: the ability of wrongdoers to deny real evidence simply because deep fake technology exists.
As a result, lawyers must now be prepared to both challenge fabricated evidence and defend authentic evidence against accusations of falsification.
Regarding the important theme that metadata is the most powerful tool for authenticating digital evidence (because digital files contain hidden contextual information about their origin, creation, and handling), Craig notes, for example, that photographs typically contain EXIF metadata showing the device used, camera settings, timestamps, and sometimes GPS location data. This information functions much like a chain of custody, documenting the file’s history and integrity.
Importantly, deep fake media usually lacks this authentic metadata. AI-generated images are not captured by cameras, so they often contain missing, incomplete, or inconsistent metadata. As the guide notes, authentic digital evidence carries a “birth certificate,” while fabricated evidence often appears with no verifiable origin.
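To make the “birth certificate” idea concrete, here’s a minimal Python sketch (my own illustration, not from Craig’s guide) that checks whether a JPEG file even contains an EXIF segment. In the JPEG format, EXIF metadata lives in an APP1 segment tagged with the identifier “Exif”; a file with no such segment has no camera-style birth certificate at all. Real forensic tools like ExifTool parse the full tag structure — this only detects presence or absence:

```python
def has_exif(path: str) -> bool:
    """Return True if a JPEG file contains an EXIF (APP1) metadata segment.

    A presence check only -- it does not validate or parse the EXIF tags.
    """
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker in (0xDA, 0xD9):  # start-of-scan or end-of-image: no more metadata
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")  # includes the 2 length bytes
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True  # APP1 segment carrying EXIF
        i += 2 + length
    return False
```

Absence of EXIF doesn’t prove fabrication (as discussed below, ordinary sharing can strip it), but a check like this is the kind of first-pass triage a reviewer might run across a production set.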
Here are two cases (here and here) I’ve covered in recent years where the lack of metadata was instrumental in determining whether images were real or fake. The first one was not an AI-generated image, but it still illustrates how metadata is key for validating evidence; the second one involved both what appeared to be deep faked images and videos.
However, the absence of metadata alone does not prove fabrication, since legitimate processes such as texting, social media sharing, or screenshots can strip metadata. Instead, lawyers must evaluate metadata in context alongside other indicators.
Beyond metadata, Craig’s guide outlines visual and auditory “tells” that may reveal deep fake manipulation.
These include:
- Lighting inconsistencies or unnatural shadows
- Flickering or instability across video frames
- Poor audio synchronization between speech and lip movements
- Unnatural voice patterns or background noise inconsistencies
These clues occur because AI systems mimic reality without fully understanding the physical world. While these artifacts are becoming less obvious as technology improves, they remain important investigative tools – for now, at least.
Still, the guide cautions that visual inspection alone is insufficient. Metadata analysis and forensic examination provide the most reliable evidence of authenticity.
Craig also stresses that lawyers too often rely on screenshots, printed photos, or compressed clips, all of which strip critical metadata. Instead, discovery requests should seek:
- The original digital file in its native format
- All application metadata associated with the file
- File system metadata
- Transmission records from email or cloud systems
Craig also notes that you should request the source device, or a forensic image thereof, if the authenticity of the evidence is genuinely contested and the stakes warrant it.
Interrogatories and depositions can also help lock opposing parties into specific claims about how evidence was created, making inconsistencies easier to expose.
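As a concrete illustration of the file system metadata and integrity side of these requests, here’s a small Python sketch (my own example, not from the guide) showing the kind of baseline a receiving party might record when a native file is produced: size, modification time, and a SHA-256 hash that can later confirm the file hasn’t changed in handling. The `fingerprint` helper name is hypothetical:

```python
import datetime
import hashlib
import os

def fingerprint(path: str) -> dict:
    """Record basic file system metadata plus a cryptographic hash for a file.

    Illustrative chain-of-custody baseline; real forensic collection captures
    far more (created/accessed times, ownership, and a verified acquisition log).
    """
    st = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "size_bytes": st.st_size,
        "modified_utc": datetime.datetime.fromtimestamp(
            st.st_mtime, datetime.timezone.utc
        ).isoformat(),
        "sha256": digest,
    }
```

Re-running the same hash at trial and getting the same value is strong evidence the produced file is bit-for-bit identical to what was collected.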
Craig’s guide also explains how deep fake issues fit within existing evidentiary rules. Under Rule 901, evidence must be authenticated by showing it is what it purports to be. Metadata, chain of custody, and expert testimony can all help satisfy – or challenge – this requirement. Of course, there are proposed rule changes to address this challenge (including this one), so any eventual rule changes will have to be considered here. Guides are meant to be updated, right? 😉
Notably, authentication disputes often become battles over weight rather than admissibility. Courts may allow contested evidence to be presented to the jury, leaving jurors to evaluate competing expert opinions and technical findings.
Looking ahead, Craig’s guide highlights emerging technologies designed to combat deep fakes. Initiatives such as Content Credentials and C2PA standards aim to embed cryptographic provenance into digital media at the moment of creation. These tools may eventually make authentication easier, but widespread adoption will take time.
Ultimately, Craig’s guide emphasizes that understanding digital authenticity is no longer optional. Ethical rules require lawyers to remain competent in relevant technology, and deep fake issues are rapidly becoming a routine part of litigation.
The key takeaway is clear: lawyers who understand metadata, digital forensics, and synthetic media risks will be better equipped to protect their clients, as well as the integrity of the justice system itself.
In an era when fabricated reality is increasingly indistinguishable from truth, Craig contends that the ability to authenticate digital evidence may become one of the most critical advocacy skills of all. I couldn’t agree more.
Again, Craig’s practitioner’s guide to detecting deep fakes and authenticating digital evidence is available here.
So, what do you think? How concerned are you about deep fakes? Please share any comments you might have or if you’d like to know more about a particular topic.
Image created using Google Gemini, using the term “robot lawyer reviewing a photo on a computer workstation”. I guess I should have told it to forget the previous image! 🤣
*Note: Craig spells “deep fake” as two words (which MS Word also prefers) while many people spell it as one word.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
Thanks for the shout out (shoutout??). Always much appreciated. I don’t know if deep fake is one word or two; I never bothered to check. MEA CULPA.