Earlier this month, OpenAI announced a new GPT builder for making custom GPTs. So far, however, the GPT builder has been a bit of a “downer” in terms of accurate results.
I decided my first GPT should be something I know about very well – case law covered on eDiscovery Today! I’ve covered at least 225 cases over 3 1/2 years (among the 440 total posts that reference case law, not counting this one).
So, I told the GPT builder that I wanted to “make an eDiscovery case law expert that can answer questions about case law on ediscoverytoday.com”. I also suggested the name “eDiscovery Today Case Law Navigator” for it and accepted its suggestion for a profile picture “featuring a balance scale and a digital circuit pattern in a hand-drawn style”.
For “specific types of questions or topics you’d expect to ask this GPT”, I said: “I would expect to ask the GPT questions about types of issues associated with the cases, such as: which cases have sanctions for spoliation of ESI? or in which cases did an ESI Protocol factor into the decision?” I also didn’t give it any “types of information that it should focus on or steer clear of” and told it to “ask for clarification if a question is too vague or broad”.
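Behind the scenes, the builder distills answers like these into the plain-text “Instructions” field you can see later on the Configure tab. For illustration only, here’s a rough sketch of what a generated instruction set along these lines might look like — this is my reconstruction from the answers above, not the builder’s actual text:

```text
Name: eDiscovery Today Case Law Navigator

Instructions (illustrative sketch, not the builder's actual output):
- You are an expert on eDiscovery case law covered on ediscoverytoday.com.
- Answer questions about the issues associated with those cases, e.g.,
  which cases involve sanctions for spoliation of ESI, or in which cases
  an ESI Protocol factored into the decision.
- If a question is too vague or broad, ask the user for clarification
  before answering.
- Use a formal and professional tone.
```

If the GPT isn’t behaving as expected, editing this field directly (rather than re-answering the builder’s questions) is one way to refine it.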
Interestingly, while the GPT builder asked: “Should it use a formal and professional tone, or something more conversational?”, it didn’t wait for an answer and assumed a formal and professional tone. Hmmm.
Once built, I started off with one of my example questions: “what cases involve an ESI protocol?”
The eDiscovery Today Case Law Navigator gave me a list of ten items; the first nine each cited a different case, and the last one cited two cases. Eleven cases involving ESI protocols! Great!
Or maybe not that great. Each of the items included a link to the source. As I started checking the links, I realized that every one of them pointed to this blog post from April: Ten ESI Protocol Lessons Learned from Case Law Rulings: eDiscovery Best Practices. The list that my GPT gave was the exact list of cases (in the exact order) as the blog post. Not a true test, as I had technically given it the answer. Let’s try something a little more difficult.
My next prompt: “what cases involve potential sanctions for spoliation of video?”
This time, I got six cases. The first three were Castro v. Smith, Walkie Check Productions, LLC v. ViacomCBS Inc. and Reed v. Royal Caribbean Cruises, Ltd. The first two of those were covered within the past month, and the other one 2 1/2 years ago. All three were correct cases.
The other three? Three cases from a 2015 blog post by an unfamiliar law firm! Hey, this was supposed to be a GPT for cases from eDiscovery Today!!
As Tom O’Connor would say: C’mon man!
I tried another prompt: “what cases have involved terminating sanctions, default judgment or case dismissal?”
This time, I got four cases. The first one – DR Distribs. v. 21 Century Smoking – was from eDiscovery Today, but it doesn’t even involve terminating sanctions (though it involves just about every other kind there is).
The other three? Not from eDiscovery Today. One was from the ABA site, one was from the National Law Review, and one was from my blog at my old company! Zero cases in the list were cases involving terminating sanctions from eDiscovery Today. Missed cases included this one, this one, this one, this one and this one, just a few of the terminating sanctions cases I’ve covered.
So, even though I wouldn’t think I’d have to specify the source to a GPT dedicated to my site, I tried this prompt: “what cases covered by eDiscovery Today involve potential sanctions for spoliation of video?”
This time, I did get four cases and they were all from eDiscovery Today. The first two were the same two from the video query above, and it also retrieved Hollis v. CEVA Logistics U.S., Inc. from last year. The other case was In re Google Play Store Antitrust Litigation, which involved spoliation of Google Chat data, but no video (the word “video” doesn’t even appear in the case write up). The result failed to include at least three other cases (here, here and here), plus the third case from the previous list above.
My final prompt was an attempt to get all the cases involving potential sanctions for spoliation of video (“give me ALL cases covered by eDiscovery Today that involve potential sanctions for spoliation of video”). I ran that one twice – both times, it cranked away for a while, then gave me the message “There was an error generating a response”.
So far, I’m less than impressed with my first attempt to use the GPT builder. Perhaps it’s not optimized for live content on websites yet – after all, the “Browse with Bing” feature was only moved out of beta about a month ago. Perhaps I need to refine my instructions regarding how the GPT should work or be more precise with them (though I’m not sure how much more precise I can be). More testing to come. Regardless, my first attempt using the GPT builder was a bit of a “downer”. Womp, womp!
So, what do you think? Have you used OpenAI’s GPT builder yet? If so, what did you think? Please share any comments you might have or if you’d like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.