Conversational-Amplified Prompt Engineering: What Is It and How Can You Use It?

Here’s an interesting article about something called Conversational-Amplified Prompt Engineering, discussing what it is and how you can use it.

In a Forbes article (Conversational-Amplified Prompt Engineering Is Gaining Traction In Generative AI, available here), Dr. Lance B. Eliot showcases a prompt engineering technique that he refers to as conversational-amplified prompt engineering (CAPE). Now, you see what I did there! 🤣

The underlying concept is that you can substantively improve your prompting by carrying on a conversation with generative AI and large language models (LLMs), so that the AI pattern-matches on, and adapts to, how you write your prompts.


As the author discusses, generative AI is highly capable of identifying patterns in how humans write. Indeed, the initial data training for LLMs is done by widely scanning the Internet for human-composed essays, narratives, stories, poems, and the like. Via computational and mathematical pattern-matching, AI figures out the underlying patterns associated with human compositions. That’s how AI is so seemingly fluent when conversing with the user.

That same pattern-matching facility can be used to learn how someone tends to write their prompts, essentially data-training the generative AI on your prompting style. This means the AI will be more likely to interpret your prompts as you intend, rather than wandering afield of what you have in mind.
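Mechanically, this adaptation happens in context: each new request you send in a chat session is accompanied by the prior turns, so the model can pattern-match against your earlier prompts. Here is a minimal sketch of that idea; the class and method names are illustrative, not taken from any particular LLM vendor's API.

```python
class ChatSession:
    """Accumulates conversation turns so each new prompt carries prior context."""

    def __init__(self):
        self.turns = []  # list of (role, text) pairs, oldest first

    def add_turn(self, role, text):
        self.turns.append((role, text))

    def build_context(self, new_prompt):
        """Assemble everything the model would actually see for this request:
        all earlier turns, followed by the new user prompt."""
        lines = [f"{role}: {text}" for role, text in self.turns]
        lines.append(f"user: {new_prompt}")
        return "\n".join(lines)


session = ChatSession()
session.add_turn("user", "Summarize this article.")
session.add_turn("assistant", "Here's a summary...")

# The new request is sent together with the earlier exchange, which is
# what lets the model adapt to how this user phrases their prompts.
context = session.build_context("Now summarize it as bullet points.")
```

The point of the sketch is simply that nothing about the model itself changes; the "training" on your style comes from your earlier prompts riding along as context.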

According to the author, here are some notable benefits of the CAPE technique:

  • Enables generative AI to undertake personalized prompt interpretations.
  • Reduces the overall prompt engineering effort required by the user.
  • Increases efficiency, since you don’t have to be laborious in your prompts.
  • Enhances prompting, including incorporating popular prompt engineering techniques.
  • Promotes adaptation to domain-specific language or instructions.
  • Saves on cost, because fewer miscast prompts need clarification.

The author also provides some examples of how to apply CAPE. Here’s one of them:

  • My entered prompt: “Summarize this article.”
  • Generative AI response: “Here’s a summary. Let me know if you want a different focus.”
  • My entered prompt: “Looks good, but I prefer bullet points over paragraphs.”
  • Generative AI response: “Got it! Here’s the summary in bullet points.”
  • My entered prompt: “Thanks, I want you to remember that when I ask for summaries, I normally intend that bullet points are to be used rather than paragraphs.”
  • Generative AI response: “I will remember that preference and abide by it accordingly.”
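The dialogue above can be thought of as building a small "preference memory" that gets applied to later requests. Here is an illustrative sketch of that behavior (this is not any vendor's actual memory feature; the names are hypothetical):

```python
class PreferenceMemory:
    """Stores standing preferences stated once in conversation and
    prepends them to later prompts, so a terse prompt still carries intent."""

    def __init__(self):
        self.preferences = []

    def remember(self, preference):
        # e.g. the user says: "when I ask for summaries, use bullet points"
        self.preferences.append(preference)

    def apply_to(self, prompt):
        """Expand a short prompt with everything the user previously asked
        the AI to remember."""
        if not self.preferences:
            return prompt
        prefs = "; ".join(self.preferences)
        return f"[Standing preferences: {prefs}]\n{prompt}"


memory = PreferenceMemory()
memory.remember("summaries should use bullet points, not paragraphs")

# Later, a plain three-word prompt is automatically enriched with the
# earlier stated preference before the model sees it.
expanded = memory.apply_to("Summarize this article.")
```

This is why the author's final prompt in the example pays off: one sentence of stated preference keeps working across every future summary request without being repeated.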

I’ve discussed this type of conversational “training” of the model with a couple of people in the past week or so – it’s definitely a terrific way to be more efficient and effective when prompting LLMs. I’m not sure if the term “Conversational-Amplified Prompt Engineering” will become a standard, but I will have more graphics ready to go if it does! 😁

So, what do you think? What best practices have you learned for prompting AI models? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.


Discover more from eDiscovery Today by Doug Austin
