OpenAI announced yesterday that its “Browse with Bing” feature is moving out of beta, which means you can search for information past the November 2021 training cutoff.
This applies to Plus and Enterprise users (i.e., paid users), who no longer need to switch on the beta toggle to use browsing and can choose “Browse with Bing” from the GPT-4 model selector. Here’s what it looks like in the platform:
As you’ll notice, the feature is still labeled “Beta”, but presumably that label will be removed soon. The browsing feature was originally rolled out back in May, but was temporarily disabled in July, then rolled back out in beta on September 27.
Of course, I had to try it out. So, I thought of a perfect, highly relevant 2023 topic well past the November 2021 cutoff and asked it:
Have lawyers filed any cases in the last year with bogus case citations?
In the past year, there have been instances where lawyers filed cases with bogus or incorrect case citations. Notably, two New York lawyers faced sanctions for submitting a legal brief containing fictitious case citations generated by an AI, specifically ChatGPT, which caused a U.S. judge to impose sanctions on them [1]. This case saw the legal professionals, Peter LoDuca and Steven A. Schwartz, along with their law firm Levidow, Levidow & Oberman, being ordered to pay a penalty of $5,000 by a federal district judge, after they relied on ChatGPT for legal research and ended up filing a brief laden with bogus cases [2]. In a related hearing, one of the attorneys, Steven Schwartz, pleaded ignorance regarding AI, with a federal judge indicating that citing AI-generated fake cases could be the onset of punishable legal malpractice [3]. In their defense, the lawyers claimed that ChatGPT tricked them into including fictitious legal research in their court filing [4]. This incident highlights the potential pitfalls and legal ramifications of relying on AI for legal research, especially when it comes to verifying the accuracy and authenticity of case citations.
Notice anything unique about this response? That’s right, it included actual footnote links to the sources of the information! The sources included Reuters, Bob Ambrogi’s LawNext site, Courthouse News Service and KPLC News in Lake Charles, Louisiana (how did they get in there?). They are reproduced here, with the links shown at the bottom.
Of course, there have been other instances of fake case citations, including this one (perhaps my use of the term “bogus” caused it to be missed?), but it did a good job with the famous Avianca case example many of us know.
Using the “Browse with Bing” option is (not surprisingly) much slower than searching using the Default method (where GPT-4 has already indexed the content being searched). Still, it delivered results within one to two minutes of my search.
I also tried to see if I could get it to find a blog post I wrote earlier this year that compared the lessons learned from the Blackberry to the adoption of generative AI. My question “Have there been any stories in the past year comparing the adoption of generative ai to the blackberry?” didn’t retrieve information about the post.
So, then I asked point blank: “What does the article ‘the blackberry lesson for ediscovery providers’ discuss?”
This time, GPT-4 gave me a detailed synopsis of my post, with an introductory paragraph, six key points discussed in the post, and a concluding paragraph (for space considerations, I won’t include it here). It was well done and even included links to two publications that cited my post, one of which was a terrific article from Tom O’Connor that I wasn’t aware of; I’ll have to cover it in a follow-up post. ChatGPT finally knows who I am! 😉
Needless to say (but I’ll say it anyway), the ability to ask GPT-4 questions past the November 2021 cutoff and the ability to get answers with footnoted sources are both huge developments. It shows just how fast generative AI technology is moving!
So, what do you think? Are you excited about the ability to use GPT-4 past the November 2021 cutoff? Please share any comments you might have, or let me know if you’d like to hear more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
1. https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/#:~:text=NEW%20YORK%2C%20June%2022%20,case%20citations%20generated%20by%20an
2. https://www.lawnext.com/2023/06/court-imposes-sanctions-on-lawyers-who-filed-bogus-cases-after-relying-on-chatgpt-for-legal-research.html
3. https://www.courthousenews.com/lawyer-who-cited-bogus-legal-opinions-from-chatgpt-pleads-ai-ignorance/#:~:text=Lawyer%20who%20cited%20bogus%20legal,2023%2C%20hearing%20in%20which
4. https://www.kplctv.com/2023/06/09/lawyers-blame-chatgpt-tricking-them-into-citing-bogus-case-law/