Rage is normally not a good thing. But when it comes to large language models (LLMs), RAGE may be useful for explaining them!
By “RAGE”, we mean a new tool created by a team of researchers based at the University of Waterloo, designed to explain where LLMs like ChatGPT get their information and whether that information can be trusted.
As you know (or should know), LLMs like ChatGPT rely on “unsupervised deep learning,” making connections and absorbing information from across the internet in ways that can be difficult for their programmers and users to decipher. LLMs are also prone to “hallucination” – that is, they write convincingly about concepts and sources that are either incorrect or nonexistent.
Wouldn’t it be great if you could supply some of the knowledge sources that influence the answer and cut down on those hallucinations?
As discussed in this study from Joel Rorseth, Parke Godfrey, Lukasz Golab, Divesh Srivastava and Jaroslaw Szlichta, RAGE is an interactive tool for explaining LLMs augmented with retrieval capabilities – that is, LLMs that can query external sources and pull relevant information into their input context. Because it applies explainability to retrieval-augmented generation (RAG), they’ve nicknamed it “RAGE Against the Machine”. See what they did there? Now, the blog post image makes sense! 😀
Their explanations are counterfactual in the sense that they identify parts of the input context that, when removed, change the answer to the question posed to the LLM. RAGE includes pruning methods to navigate the vast space of possible explanations, allowing users to view the provenance of the produced answers.
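To make the counterfactual idea concrete, here’s a minimal sketch in Python of the remove-and-recheck loop described above. Everything here is illustrative: the `toy_llm` function is a hypothetical stand-in for a real LLM call, and the real RAGE system uses pruning methods rather than this naive one-passage-at-a-time scan.

```python
from typing import Callable, List

def counterfactual_explanations(
    question: str,
    passages: List[str],
    answer_fn: Callable[[str, List[str]], str],
) -> List[int]:
    """Return indices of context passages whose removal changes the answer.

    This is the naive version: try removing each passage individually and
    compare against the baseline answer. A real system would prune the
    (exponential) space of passage subsets instead.
    """
    baseline = answer_fn(question, passages)
    critical = []
    for i in range(len(passages)):
        pruned = passages[:i] + passages[i + 1:]
        if answer_fn(question, pruned) != baseline:
            critical.append(i)
    return critical

# Hypothetical stand-in for an LLM: "answers" using whichever passage
# mentions "champion", returning that passage's first word.
def toy_llm(question: str, passages: List[str]) -> str:
    for p in passages:
        if "champion" in p:
            return p.split()[0]
    return "unknown"

passages = [
    "Novak won the most recent championship final.",
    "The stadium seats 15,000 spectators.",
]
print(counterfactual_explanations("Who won?", passages, toy_llm))  # [0]
```

Removing the first passage changes the toy model’s answer from “Novak” to “unknown”, so that passage is flagged as part of the answer’s provenance; removing the second passage changes nothing, so it isn’t.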
The study is a mere four pages, so it’s a quick read (albeit a bit technical in places). The team introduces the topic, describes the system (via a problem description, architecture and RAG explanations) and walks through three use cases (involving champion tennis players) to illustrate how RAGE can work. None of them were John McEnroe, who was really good at expressing the other kind of rage. 😉
So, if hallucinations from LLMs tend to make you want to rage, maybe RAGE Against the Machine can address those hallucinations! Will it be a terrific tool for explaining LLMs? We’ll see, but it’s an interesting concept!
So, what do you think? Have you applied any RAG concepts to generative AI? Please share any comments you might have or if you’d like to know more about a particular topic.
Image Copyright © Wikipedia
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
Discover more from eDiscovery Today by Doug Austin



