After reading about Microsoft’s new VASA-1 image generator, I decided to do some research and found resources for learning how to spot deepfakes.
As I discussed this morning, Microsoft announced VASA-1, a framework for generating lifelike talking faces of virtual characters with appealing visual affective skills (VAS), given a single static image and a speech audio clip. VASA-1 is capable of not only producing lip movements that are exquisitely synchronized with the audio, but also capturing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness. And it’s absolutely mind-blowing, as you’ll see if you check out some of the clips in their announcement.
With that in mind, I decided to look for resources for learning how to spot deepfakes. I conducted a brief search and decided to share only resources published within the past few months. My quick search – which was aided by Microsoft Copilot in Bing Search – yielded one video, one article and two research papers:
- The top 5 ways to spot ‘deepfake’ videos and images (video): This is a 3-minute video published by my very own hometown news station KPRC 2 Click2Houston five months ago and reported on by Joel Eisenbaum. As you can imagine, it’s a very high-level discussion of the topic, but does give a few common sense pointers for identifying potential deepfakes.
- How to Spot Deepfakes: Trends, Regulations, Best Practices & Tips (article): A good in-depth article published by Techopedia back in March and written by Tim Keary that discusses the deepfake crisis, the current regulatory landscape, how to detect deepfakes and more. A good 5-10 minute read on the topic.
- A Contemporary Survey on Deepfake Detection: Datasets, Algorithms, and Challenges (research paper): An in-depth 22-page paper published in January, which is “a comprehensive overview of several typical facial forgery detection methods proposed from 2019 to 2023” designed to “provide a reference for further research to develop more reliable detection algorithms.”
- Deepfake Generation and Detection: A Benchmark and Survey (research paper): An in-depth 24-page paper published earlier this month (v2), which “comprehensively reviews the latest developments in deepfake generation and detection, summarizing and analyzing the current state of the art in this rapidly evolving field.”
I will admit I haven’t read the last two (at least yet), but if you want to get “into the weeds” on deepfake detection, they look like excellent and current guides to do so.
Based on these resources, here are six telltale signs to look for to spot deepfakes (again, with help from Microsoft Copilot):
- Unnatural Eye and Hand/Body Movements: Pay attention to any unusual or unnatural eye movements, as well as inconsistencies in hand or body gestures. Deepfakes often struggle to replicate natural human movements accurately.
- Lip Sync: Check if the lip movements match the audio. In deepfakes, the mouth movements may not sync perfectly with the spoken words, especially during speech or conversations.
- Lighting and Shadows: Observe the lighting and shadows in the video. Deepfakes may have inconsistencies in lighting, especially if the source material was taken under different conditions.
- Blinking Frequency: Deepfakes might alter the blinking frequency of the subject. Pay attention to any irregularities in blinking patterns.
- Mouth Details: The mouth is a significant giveaway. Beyond overall lip sync, observe fine details like teeth and tongue; deepfakes often struggle to render the interior of the mouth convincingly.
- Consider the Source: Investigate the original source of the video. Look for credible sources, such as official accounts or reputable news networks. Be cautious if the video originates from anonymous or disreputable accounts.
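The six signs above amount to a manual checklist. As a purely illustrative sketch (the sign names, equal weighting, and the `deepfake_suspicion` helper are my own assumptions, not part of any detection tool), the checklist could be expressed as a simple scorer a reviewer might tally against:

```python
# Hypothetical sketch: the six manual telltale signs expressed as a
# checklist scorer. Names and equal weights are illustrative only --
# real deepfake detection relies on trained models, not hand-set scores.

SIGNS = [
    "unnatural_eye_or_body_movement",
    "lip_sync_mismatch",
    "inconsistent_lighting_or_shadows",
    "irregular_blinking",
    "unconvincing_mouth_details",
    "untrusted_source",
]

def deepfake_suspicion(observed: set) -> float:
    """Return the fraction (0.0-1.0) of the six telltale signs observed."""
    unknown = observed - set(SIGNS)
    if unknown:
        raise ValueError("unknown signs: %s" % sorted(unknown))
    return len(observed) / len(SIGNS)

# Example: a reviewer noticed a lip-sync mismatch and an anonymous source.
score = deepfake_suspicion({"lip_sync_mismatch", "untrusted_source"})
print(round(score, 2))  # higher fractions warrant closer scrutiny
```

The point of the sketch is simply that no single sign is conclusive; the more of them a video exhibits, the more scrutiny it deserves.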
Of course, these tips were published before VASA-1 was introduced, and several of these usual telltale signs are now practically impossible to detect in many of its videos. Good luck!
So, what do you think? Do you agree that deepfakes are getting harder to spot than ever? Please share any comments you might have or if you’d like to know more about a particular topic.
Image created using GPT-4’s Image Creator Powered by DALL-E, using the term “robot sitting at a desk in front of a computer showing a picture of another robot”.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.