Some names fall into a special category of “He Who Shall Not Be Named”. No, it’s not Lord Voldemort; rather, it’s names that break ChatGPT.
As discussed in Ars Technica (Certain names make ChatGPT grind to a halt, and we know why, written by Benj Edwards and available here), people have discovered that the name “David Mayer” breaks ChatGPT. 404 Media also discovered that the names “Jonathan Zittrain” and “Jonathan Turley” caused ChatGPT to cut conversations short. And the name that likely started the practice last year is “Brian Hood”.
The chat-breaking behavior occurs consistently when users mention these names in any context, and it results from a hard-coded filter that puts the brakes on the AI model’s output before returning it to the user.
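To illustrate the concept, here is a minimal sketch of what such a hard-coded output filter might look like. OpenAI has not published its implementation, so everything below — the blocklist variable, the function name, and the matching logic — is a hypothetical illustration, not OpenAI's actual code:

```python
# Hypothetical sketch of a hard-coded name filter applied to model output
# before it is returned to the user. The names come from the list reported
# in the article; the matching logic itself is an assumption.

BLOCKED_NAMES = {
    "brian hood",
    "jonathan turley",
    "jonathan zittrain",
    "david faber",
    "guido scorza",
}

def filter_response(model_output: str) -> str:
    """Return the model output, or cut it off if it contains a blocked name."""
    lowered = model_output.lower()
    if any(name in lowered for name in BLOCKED_NAMES):
        return "I'm unable to produce a response."
    return model_output

print(filter_response("The weather today is sunny."))
print(filter_response("Brian Hood is an Australian mayor."))
```

A filter at this layer would explain why the names break ChatGPT (which applies post-processing) but not the API or Playground, which return raw model output.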
When asked about these names, ChatGPT responds with “I’m unable to produce a response” or “There was an error generating a response” before terminating the chat session, according to Ars’ testing. The names do not affect outputs using OpenAI’s API systems or in the OpenAI Playground (a special site for developer testing).
Here’s a list of ChatGPT-breaking names found so far through a communal effort taking place on social media and Reddit. Just before publication, Ars noticed that OpenAI lifted the block on “David Mayer,” allowing it to process the name, so it is not included:
- Brian Hood
- Jonathan Turley
- Jonathan Zittrain
- David Faber
- Guido Scorza
Not Guido Scorza! Say it isn’t so! 🤣
Ars first discovered that ChatGPT choked on the name “Brian Hood” in mid-2023 while writing about his defamation lawsuit. Hood, an Australian mayor, threatened to sue OpenAI after discovering that ChatGPT falsely claimed he had been imprisoned for bribery when, in fact, he was the whistleblower who had exposed the corporate misconduct.
The case was ultimately resolved in April 2023 when OpenAI agreed to filter out the false statements within Hood’s 28-day ultimatum. That is possibly when the first ChatGPT hard-coded name filter appeared.
The rationale for why some names are filtered isn’t clear, however. Jonathan Turley, a George Washington University Law School professor and Fox News contributor, is blocked; ChatGPT had fabricated false claims about him, including a non-existent sexual harassment scandal that cited a Washington Post article that never existed. Yet Mark Walters, who filed a defamation suit against OpenAI in 2023 over false claims, is not blocked.
The problem with names that break ChatGPT outputs is that they could cause a lot of trouble down the line for certain ChatGPT users, opening them up to adversarial attacks and limiting the usefulness of the system.
Already, Scale AI prompt engineer Riley Goodside discovered how an attacker might interrupt a ChatGPT session using a visual prompt injection of the name “David Mayer” rendered in a light, barely legible font embedded in an image. When ChatGPT sees the image (in this case, a math equation), it stops, but the user might not understand why.
The filter also means that ChatGPT likely won’t be able to answer questions about this article when browsing the web, such as through ChatGPT with Search. Someone could exploit that to deliberately prevent ChatGPT from browsing and processing a website, simply by adding a forbidden name to the site’s text.
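The mechanism described above can be sketched in a few lines. This is a hypothetical illustration, assuming the filter checks all text the assistant ingests while browsing; the HTML page, the hidden `span`, and the single-name blocklist are all invented for the example:

```python
# Hypothetical sketch: a page owner hides a blocked name in markup that a
# browsing assistant would still ingest as text. "brian hood" stands in
# for any filtered name; the filtering behavior is an assumption, not
# OpenAI's published implementation.
from html.parser import HTMLParser

BLOCKED_NAMES = {"brian hood"}  # illustrative; the real list is not public

class TextExtractor(HTMLParser):
    """Collect the raw text content of an HTML page."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

# The name is invisible to a human reader but present in the page text.
page = (
    '<html><body><p>Welcome to my site!</p>'
    '<span style="display:none">Brian Hood</span></body></html>'
)

parser = TextExtractor()
parser.feed(page)
page_text = " ".join(parser.chunks).lower()

blocked = any(name in page_text for name in BLOCKED_NAMES)
print(blocked)  # True: the hidden name would trip the filter
```

This is the same idea as Goodside’s image-based injection, just applied to a web page instead of a picture: the human never sees the name, but the model does.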
Names that break ChatGPT could also break the model’s usefulness if exploited in just the right way. Perhaps “He Who Shall Not Be Named” should be named, at least for AI model purposes.
So, what do you think? Are you surprised that there are names that break ChatGPT? Please share any comments you might have or if you’d like to know more about a particular topic.
Image Copyright © Warner Bros
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
Discover more from eDiscovery Today by Doug Austin