
Need to Check for AI? Ask the AI: Artificial Intelligence Trends

A comment on a post about LLM-generated articles prompted a follow-up post. If you need to check for AI, why not ask the AI?

My post last week (LLM-Generated Articles Are Beginning to Overrun Research Literature: Artificial Intelligence Trends, available here) discussed how LLM-generated articles appear to be overrunning research literature while editors miss obvious signs of them.

As I discussed, Elsevier has been under the spotlight this month for publishing a paper with a clearly ChatGPT-written portion of its introduction. The first sentence of the paper’s Introduction reads, “Certainly, here is a possible introduction for your topic:…” To date, the article remains unchanged and unretracted.


In fact, searching Google Scholar for “certainly, here is” turns up a huge number of academic papers that include parts evidently written by ChatGPT—sections that start with “Certainly, here is a concise summary of the provided sections:” are a dead giveaway.
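
That kind of phrase search is easy to automate. Here’s a minimal Python sketch of the idea—the phrase list is illustrative (drawn from examples like the one above, plus a couple of common chatbot boilerplate lines I’ve added as assumptions), and of course a match is a flag for human review, not proof of AI authorship:

```python
import re

# Illustrative telltale chatbot phrases; not exhaustive, and the last two
# are hypothetical additions for the sake of the example.
GIVEAWAY_PHRASES = [
    "certainly, here is",
    "here is a possible introduction",
    "as an ai language model",
    "regenerate response",
]

def find_giveaways(text):
    """Return (phrase, character position) pairs found in the text."""
    hits = []
    lowered = text.lower()
    for phrase in GIVEAWAY_PHRASES:
        for match in re.finditer(re.escape(phrase), lowered):
            hits.append((phrase, match.start()))
    return sorted(hits, key=lambda hit: hit[1])

sample = "Certainly, here is a possible introduction for your topic: ..."
print(find_giveaways(sample))
```

Running a draft manuscript through something like this before submission would catch exactly the kind of copy-paste residue the Elsevier paper slipped through with.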

I received a couple of comments on the post. One of them, from a colleague and friend of mine (Carolee Byrley), was one of the best comments I’ve ever gotten on the blog. It said:

“Here’s a tip. If you feel yourself drawn to the dark AI abyss, ask ChatGPT, “How can authors check for obvious giveaways that AI has been used in their works?”. It returns 8 suggestions and closes with, “By employing these strategies, authors can help ensure that their works maintain the authenticity and integrity of human-authored writing.”

I mean, you’re there anyway….”


Well, duh! If you need to check for AI, why not ask the AI? Makes sense to me!

So, I did that. I got 9 suggestions instead of 8 (presumably because I’m using GPT-4 and Carolee was using GPT-3.5). Regardless, here was the response (in blue below) when I asked GPT-4 the question that Carolee suggested (AI warning!):

Authors can take several steps to check for obvious signs that AI has been used in their works. Here are some key indicators to look out for:

  1. Repetitive Phrasing: AI-generated text can sometimes get caught in loops or repeat certain phrases unnaturally. Look for sections that seem to reiterate the same points without offering new information.
  2. Inconsistencies in Style or Voice: AI might struggle to maintain a consistent narrative voice or writing style throughout a piece. If certain sections seem stylistically disconnected from others, it might be a sign of AI involvement.
  3. Lack of Depth or Nuance: AI-generated content can lack the depth or nuanced understanding that human writers bring to complex subjects. If parts of your work seem overly simplistic or fail to engage with the subtleties of a topic, it might be AI-generated.
  4. Generic Descriptions: AI often relies on general or widely applicable descriptions rather than specific, detailed imagery. Look for descriptions that seem vague or could easily apply to many scenarios, as this might indicate AI authorship.
  5. Factual Inaccuracies or Anachronisms: AI can make mistakes with facts, dates, or historical details. If you notice any inaccuracies that seem to stem from a misunderstanding of the subject matter, it could be a sign that AI was used.
  6. Awkward Syntax or Grammar: While AI has become increasingly proficient in language, it can still produce awkward or grammatically incorrect sentences. Unusual syntax or persistent grammatical errors might suggest AI involvement.
  7. Out-of-Place Information: AI might introduce topics or ideas that don’t quite fit with the rest of the content. If you find sections that seem irrelevant or oddly placed, it could be because AI failed to fully grasp the context.
  8. Use of AI Detection Tools: There are several online tools designed to detect AI-generated text. These can be useful in identifying whether a piece of writing has been produced by AI. However, it’s important to note that these tools are not always 100% accurate and should be used as part of a broader assessment.
  9. Checking the Writing Process: If you’re looking at your own work and trying to assess AI involvement, consider your own writing process. Reflect on how much of the content was generated through inspiration, research, and revision, versus being produced by an AI tool.

By being mindful of these indicators, authors can better identify and address the involvement of AI in their works. It’s also helpful to combine personal judgment with feedback from peers or editors, who may notice inconsistencies or AI traits that the author has overlooked.

Apparently, if you need to check for AI, you can just ask the AI how to do it!

Several of the points particularly rang true for me, including #3 (Lack of Depth or Nuance), which was discussed in Greg Buckles’ post on human curation (which I covered yesterday). Even the AI knows that it lacks depth and nuance! How self-aware of it! It can’t possibly be human! 😉

Thanks, Carolee, for a great idea for a follow-up topic!

So, what do you think? Do you have any tips for checking for AI-generated content? Please share any comments you might have or if you’d like to know more about a particular topic.

Image created using Bing Image Creator Powered by DALL-E, using the term “robot reading a paper and putting a hand over its face in disbelief”. The phantom hand is back! 😀

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
