Ruh-roh! Here’s a new AI system that can deepfake video from a single photo. And it’s from…three guesses and the first two don’t count! 😉
As reported by Ars Technica (Meta’s new “Movie Gen” AI system can deepfake video from a single photo, written by Benj Edwards and available here), on Friday Meta announced a preview of Movie Gen, a new suite of AI models designed to create and manipulate video, audio, and images, including creating a realistic video from a single photo of a person. The company claims the models outperform other video-synthesis models when evaluated by humans, pushing us closer to a future where anyone can synthesize a full video of any subject on demand.
The company has not yet said when or how it will release these capabilities to the public, but Meta describes Movie Gen as a tool that may allow people to “enhance their inherent creativity” rather than replace human artists and animators. (Yeah, sure.) The company envisions future applications such as easily creating and editing “day in the life” videos for social media platforms or generating personalized animated birthday greetings.
Using text prompts for guidance, Movie Gen can generate custom videos with sound for the first time, edit and insert changes into existing videos, and transform images of people into realistic personalized videos.
Meta isn’t the only game in town when it comes to AI video synthesis. Google showed off a new model called “Veo” in May, and Meta says that in human preference tests, its Movie Gen outputs beat OpenAI’s Sora, Runway Gen-3, and Chinese video model Kling.
Movie Gen’s video-generation model can create 1080p high-definition videos up to 16 seconds long at 16 frames per second from text descriptions or an image input. Meta claims the model can handle complex concepts like object motion, subject-object interactions, and camera movements.
In April, Microsoft demonstrated a model called VASA-1 (covered by us here) that can create a photorealistic video of a person talking from a single photo and single audio track, but Movie Gen takes things a step further by placing a deepfaked person inside a video scene, AI-generated or otherwise. Movie Gen, however, does not appear to generate or synchronize speech yet.
Meta calls one of the key features of Movie Gen “personalized video creation,” but there’s another name for it that has been around since 2017: deepfakes. Deepfake technology has raised alarm among some experts because it could be used to simulate authentic camera footage, making people appear to do things they didn’t actually do.
Of course, as the author notes, this technology could be abused in myriad ways, including creating humiliating videos, putting people in compromising fake situations, fabricating historical context, or generating deepfake video pornography. It brings us closer to a cultural singularity in which truth and fiction in media are indistinguishable without deeper context, thanks to fluid and eventually real-time AI media synthesis.
You can read more about how the Movie Gen models work in a research paper Meta released the same day. What the 92-page paper (admittedly, after only a brief scan by me) doesn’t appear to address is any sort of planned guardrails for its use. Sigh.
So, what do you think? Are you concerned about Meta’s new AI system that can deepfake video from a single photo? Please share any comments you might have or if you’d like to know more about a particular topic.
Image Source: Ars Technica
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
Discover more from eDiscovery Today by Doug Austin