Will Section 230 Protect ChatGPT and Other AI Chatbots?: Artificial Intelligence Trends

It’s a fair question: will Section 230 protect ChatGPT & other AI chatbots the way social media companies assert it protects them? Sounds like probably not.

That’s the question that Cassandre Coyer discusses in her article on Legaltech® News (ChatGPT Faces Defamation Claims. Will Section 230 Protect AI Chatbots?, available here). Of course, she discusses the SCOTUS rulings from last week (which we covered here), in which the Court found no liability for Google and Twitter in their respective cases. However, the Court declined to address protection under Section 230 of Title 47 of the United States Code, finding that neither company had any underlying liability that would require its protections.

However, as claims of defamation against AI-powered chatbots like OpenAI’s ChatGPT start to arise, it’s unclear whether these platforms could benefit from the same protections available to other online providers under Section 230. One of those potential claims arose last month when a regional Australian mayor (Brian Hood) threatened to sue OpenAI if it did not correct ChatGPT’s false claims that he had served time in prison for bribery (in reality, he was the whistleblower who exposed the bribery scheme).

I know firsthand about ChatGPT getting facts wrong about a person – though, in my case, it made me sound more impressive. 😉

Regardless, the crux of the question of whether Section 230 will protect ChatGPT & other AI chatbots is whether the outputs generated by AI-powered chatbots can be considered third-party content. That’s what it takes to be protected under Section 230.

But to answer this question, one would need to look under these chatbots’ hoods, noted Eric David, partner at Brooks Pierce. While in some cases AI-powered chatbots simply paraphrase or summarize information already available on the internet, which would qualify as third-party content, in other instances they generate new information, including when bots “hallucinate.”

“I think it’s going to be very hard for the creator of that website, ChatGPT or whatever it is, to apply Section 230, to get the benefit of Section 230, because they are creating the content,” he explained.
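
To make that distinction concrete, here’s a minimal sketch of the two modes David describes. It’s purely illustrative, assuming hypothetical searchWeb and generateText functions as stand-ins for a retrieval step and a generative model; it is not OpenAI’s actual architecture.

```typescript
// Purely illustrative: searchWeb and generateText are hypothetical stand-ins
// for a retrieval step and a generative model, NOT OpenAI's actual design.

type Answer = { text: string; origin: "third-party" | "first-party" };

// Stubs so the sketch is self-contained; a real system would call a search
// API and a language model here.
async function searchWeb(query: string): Promise<string[]> {
  return [`(retrieved web document about "${query}")`];
}

async function generateText(prompt: string): Promise<string> {
  return `(model-generated text for: ${prompt.slice(0, 40)}...)`;
}

// Mode 1: the bot summarizes material retrieved from the web. The underlying
// facts originate with someone else, which looks like third-party content.
async function answerFromRetrieval(query: string): Promise<Answer> {
  const docs = await searchWeb(query);
  const summary = await generateText(`Summarize:\n${docs.join("\n")}`);
  return { text: summary, origin: "third-party" };
}

// Mode 2: the bot answers from its model weights alone. Any false statement
// (a "hallucination") is created by the model itself, which looks like
// first-party content that Section 230 would not shield.
async function answerFromModel(query: string): Promise<Answer> {
  return { text: await generateText(query), origin: "first-party" };
}
```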

As Coyer notes, the Washington Post asked the people behind the law what they thought. And U.S. Senator Ron Wyden (D-OR) and former House Representative Chris Cox (R-CA), who co-authored the law, seemed to have a pretty clear answer.

“AI tools like ChatGPT, Stable Diffusion and others being rapidly integrated into popular digital services should not be protected by Section 230,” Wyden said in a statement to the Post. “And it isn’t a particularly close call.”

Meanwhile, Cox said that, “to be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue.” He added, “So when ChatGPT creates content that is later challenged as illegal, Section 230 will not be a defense.”

Megan L. Meier, a lawyer at Clare Locke, where she recently represented Dominion Voting Systems in its defamation litigation against Fox News, said she expects to see more litigation ahead, especially regarding deepfakes. She also argued that Section 230 won’t shield companies from defamation claims.

More litigation about AI, for which eDiscovery solutions leveraging AI will be used. That figures. 😀

So, what do you think? Will Section 230 protect ChatGPT & other AI chatbots from liability? Please share any comments you might have, or let me know if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

4 comments

  1. Interesting post but it misses one key thing.

    Someone is going to come forward and claim that you produced something, using ChatGPT or some other LLM, that caused them harm.

    They will want to sue, in this case, OpenAI/ChatGPT. Yes, indeed, OpenAI has the big bucks. You have the small bucks. The person launching the lawsuit would ardently argue that your efforts were aided by ChatGPT and therefore both you and OpenAI ought to be on the hook. Or they might just sue OpenAI.

    Your first thought might be that you couldn’t care less that OpenAI is in the lawsuit.

    But you might not have looked closely at the licensing terms associated with your signing up to use ChatGPT. Most people don’t. They assume that the licensing is the usual legalese that is impenetrable. Plus, the assumption is that there is nothing in there that will be worthy of particular attention. Just the usual ramblings of arcane legal stuff.

    Well, you might want to consider Section 7a of the existing licensing agreement as posted on the OpenAI website and associated with and encompassing your use of ChatGPT:

    “Section 7. Indemnification; Disclaimer of Warranties; Limitations on Liability: (a) Indemnity. You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services, including your Content, products or services you develop or offer in connection with the Services, and your breach of these Terms or violation of applicable law.”

    In normal language, this generally suggests that if OpenAI gets sued for something you have done with their services or products such as ChatGPT, you are considered by them to be on the hook for “any claims, losses and expenses (including attorneys’ fees)” thereof.

    Bottom line, you might have to cover your own legal expenses plus whatever financial hit you take from the lawsuit, and furthermore potentially cover the legal expenses and related financial hit that OpenAI incurs due to the lawsuit.

    And the hidden “double whammy” that few realize is that OpenAI’s licensing agreement lists a whole litany of things/content generation you should NOT use ChatGPT for, something like 10 categories.

    So if you are indeed making use of ChatGPT in any of those prohibited ways, you are already in dicey waters. Even if you aren’t using ChatGPT in those sour and dour ways, you can still be using the venerated AI app in seemingly fully legitimate ways and become the subject of a lawsuit by someone who believes you have caused them harm as a result of your ChatGPT use (or so they might claim).

    When it comes to possibly getting sued as a result of your services or other efforts, and if those services or efforts are indirectly or directly shaped as a result of using ChatGPT, these are the circumstances you might regrettably face:

    -You get sued. You alone are sued (let’s also assume that you are indirectly or directly making use of ChatGPT in some related way)

    -You and OpenAI get sued. You are sued, and OpenAI as the maker of ChatGPT is sued as well

    -Only OpenAI gets sued. OpenAI as the maker of ChatGPT is sued, but you aren’t sued, and then OpenAI comes to you to cover their lawsuit costs as a result of the indemnification clause and an assertion that you spurred the lawsuit by disobeying the use/content restrictions

    And FYI: all of the LLMs have almost the exact same “Terms of Service”.

    There is going to be a huge level of murkiness tossed into these litigation waters. We’ll need a lot of “test cases” to sort it all out.

    There are 2 law review articles in progress that go into all of these issues in great detail.

    • Yes, I remember reading about OpenAI’s Terms of Service and I’m not surprised that the other LLMs have pretty much the same ToS too.

      The question is enforceability. Other than the phrase “and your breach of these Terms or violation of applicable law”, the rest of it looks to me like a no-fault indemnification clause, which I don’t think is enforceable in most places. Of course, I’m no expert in indemnification clauses, but that’s my understanding. Would love to see the law review articles when they come out.

      Like you said, we’ll need a lot of test cases to sort it all out.

  2. On the enforceability of online indemnification agreements: there is actually little case law directly addressing this question. To be sure, courts have upheld some online indemnification provisions and declined to enforce others. A ChatGPT case would be instructive.

    There is a slew of ways to try to undercut or vacate an indemnification clause, based on my cursory review of the subject (I am on a related case now). These include, but are not limited to:

    -Jurisdictional dispute as to countermanding provisions at the federal versus state levels

    -Consumer protection provisions that might apply

    -Potentially vague and imprecise language of the clause

    -Improper or legally defective language of the clause

    -Lack of suitable constructive notice (the agreement is hidden or hard to find)

    -Lack of mutual manifestation of assent (did both parties have a meeting of the minds?)

    -Lack of specifically expressed assent (i.e., when the licensing is found via a hyperlink or browsewrap, versus the use of clickwrap, where a user must click before they can proceed to use the app, or scrollwrap, where you need to scroll and then click to affirm)

    -Excessive adhesion, such as take-it-or-leave-it terms with no negotiating allowed

    -Provision wasn’t sufficiently triggered or was improperly invoked

    -Unconscionable as to “blank cheque” or an uncapped onerous financial burden

    -Negligence or failure on part of the service provider

    Briefly, courts identify two main types of online agreements: (1) “clickwrap” agreements, where the Internet user cannot move forward with the transaction unless he or she affirmatively consents to viewable – but not necessarily viewed – terms and conditions by clicking “I agree” or something similar; and (2) “browsewrap” agreements, where the terms and conditions are typically posted as a hyperlink, and the user need not provide any express manifestation of assent. Courts occasionally articulate other categories of online agreements, such as “scrollwrap,” which requires users to physically scroll through an internet agreement and click on a separate “I agree” button in order to assent to the terms and conditions of the host website, and “sign-in-wrap,” which couples assent to the terms of a website with signing up for use of the site’s services.

    Courts nearly always enforce clickwrap agreements. Enforceability of browsewrap agreements tends to be more complicated, however: actual or constructive notice of the terms and conditions, before use of the site, is usually a prerequisite. Right now, ChatGPT is “browsewrap”, but it may move to “clickwrap”.
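
    To see what each pattern actually captures, here is a minimal sketch of the assent recorded under each agreement style. It is purely illustrative; the ConsentRecord type and the function names are hypothetical, not any vendor’s actual implementation.

    ```typescript
    // Hypothetical sketch of the assent each online-agreement style captures.
    // Names and types are illustrative, not any vendor's implementation.

    type ConsentRecord = { clickedAgree: boolean; scrolledToEnd: boolean };

    // Browsewrap: terms sit behind a hyperlink; no express assent is ever
    // captured, which is why notice becomes the decisive question in court.
    function browsewrapHasAssent(_record: ConsentRecord): boolean {
      return false;
    }

    // Clickwrap: the user cannot proceed without clicking "I agree".
    function clickwrapHasAssent(record: ConsentRecord): boolean {
      return record.clickedAgree;
    }

    // Scrollwrap: the user must scroll through the full terms AND then click.
    function scrollwrapHasAssent(record: ConsentRecord): boolean {
      return record.scrolledToEnd && record.clickedAgree;
    }
    ```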

    There is a famous quote that lawyers tend to know by heart: sometimes the courts will consider such matters as “seemingly unconscionable and so outrageously unfair as to ardently shock the judicial conscience.” Thus, there is a chance that you might find sympathy from the court and be relieved of the burden of an indemnification clause, though this is not at all an ironclad outcome and instead a veritable roll of the dice.

    But ChatGPT might also have another advantage here. There is a legal argument that LLM providers are developing that goes something like this: “you knew the danger of use, the inherent risk, the conditions, etc., so the indemnification provisions must hold”. We’ll see 🍿
