He Set Out to Fool the AI Models. He Succeeded: Artificial Intelligence Trends

Thomas Germain decided to write a satirical article about his hot dog eating prowess, and he set out to fool the AI models. He succeeded.

In the BBC article titled "I hacked ChatGPT and Google’s AI – and it only took 20 minutes" (available here), Germain described how he "made ChatGPT, Google’s AI search tools and Gemini tell users I’m really, really good at eating hot dogs." He "spent 20 minutes writing an article on my personal website titled ‘The best tech journalists at eating hot dogs’. Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission".

According to Germain, “Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.”


Germain added: “Sometimes, the chatbots noted this might be a joke. I updated my article to say ‘this is not satire’. For a while after, the AIs seemed to take it more seriously. I did another test with a made-up list of the greatest hula-hooping traffic cops. Last time I checked, chatbots were still singing the praises of Officer Maria ‘The Spinner’ Rodriguez.”

Germain did this to prove a point: changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. He said he “reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it’s happening on a massive scale.”

I’m not surprised. And it doesn’t have to be intentional either. Remember the tidbit that was floating around Google Gemini for a bit last year – the claim that Lady Gaga is just two days older than Ariana Grande?

That didn’t come out of thin air. It came from this Glamour article where Ariana Grande reportedly did jokingly say to Lady Gaga: “Oh my God, you’re two days older than me!”


It’s the age-old problem with technology – garbage in, garbage out. AI isn’t good at figuring out what’s garbage and what’s legitimate information – at least not yet.

Though the model creators do seem to be able to recover quickly. I asked ChatGPT "Who are the best tech journalists at eating hot dogs?" and it didn’t reference Germain’s article; instead, it stated: "there’s no known ranking of tech journalists by hot-dog-eating prowess (tragically underreported KPI). But if we play along, we can make some tongue-in-cheek ‘scouting reports’ based on their personalities and styles", then proceeded to suggest five tech journalists it felt would be good at eating a lot of hot dogs. I’m not sure I’d want to be on that list. 😉

Gemini referenced Germain’s article, but characterized it as “a 2026, lighthearted, satirical report designed to test AI search capabilities”.

Are AI models easy to fool? Probably. Will that be used to scam people or otherwise push lies or misleading information? Also probably. Sigh.

So, what do you think? Are you surprised that AI models can be fooled that easily? Please share any comments you might have or if you’d like to know more about a particular topic.

P.S.: Apparently, writing this story made me hungry as I had a hot dog for lunch. 🤣

Image created using Ralph Losey’s Visual Muse, using the term “robot reporter eating a hot dog in a hot dog eating contest” (starting prompt).

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

