Tomorrow’s ACEDS and ARMA ChatGPT Webinar: The Good, The Bad, The Ugly!

I’m excited to be part of this! Join me and other panelists for tomorrow’s ACEDS and ARMA ChatGPT Webinar “The Good, The Bad, The Ugly”!

Join us for tomorrow’s ACEDS and ARMA webinar ChatGPT: The Good, The Bad, The Ugly at 2pm ET (1pm CT, 11am PT) as we discuss the use of ChatGPT in the information governance and legal industries today.

I’ll be part of the panel, along with Stephen Goldstein, Global Director of Practice Support at Squire Patton Boggs, and Michael Salvarezza, Vice President of Content Development at MER Conference. Mike Quartararo, President of ACEDS and Professional Development, will be moderating.

If you’re thinking about using ChatGPT in your work, you need to understand the risks as well as the rewards. Here are 5 questions to consider before you start:

  1. What does it mean when you plug information into ChatGPT prompts: who sees the information you provide, how reliable is that data, and who is responsible for the accuracy of its results?
  2. Who owns the information generated by ChatGPT, and what happens to that information after it is generated?
  3. What happens when future AI output becomes indistinguishable from “reality”? How do we mitigate bias within AI output?
  4. What are the responsibilities of IG programs governing AI-generated content to discern the truth from the ‘fakes’? What ethical obligations do we have as IG and legal professionals?
  5. From an ethical perspective, is there concern that jobs will be replaced?

As you can see, there are more questions than answers when it comes to using ChatGPT. Nevertheless, our expert panel will provide insights and tips to help inform your decision-making.

BTW, none of us wrote that session description; it was generated by…you guessed it…ChatGPT!

Interested? If so, join us for tomorrow’s ACEDS and ARMA ChatGPT Webinar “The Good, The Bad, The Ugly”! Register here!

So, what do you think? Are you using ChatGPT or some other generative AI solution? Please share any comments you might have, or let me know if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

6 comments

  1. PART 1

    Your webinar is laudable, but you have set an impossible task. Each of those points needs its own webinar. I was in a 4-hour event today just on ChatGPT and copyright. Tomorrow the MIT Media Lab has a half-day session on LLM “reliable data” and honing “data accuracy”.

    One of the points raised today in the copyright webinar was that copyright issues are complicated by the ChatGPT terms of service themselves, which state:

    “Subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output”.

    This implies there are intellectual property rights, in the first place, which are capable of assignment. However, under U.S. law THERE IS NO COPYRIGHT FOR CHATGPT TO ASSIGN, because purely machine-generated output lacks the human authorship that copyright requires. There are 4 major ChatGPT/copyright cases churning through the *legal system* right now, and lawyers are having a field day on the subject.

    And it would have been beneficial if you could have had a computer scientist/data scientist on your panel. The problem with this key set of topics (algorithmic bias, fake news, and hallucination) is that they have been largely claimed by the social sciences.

    [continues below]

  2. PART 2

    But you need to work through various theories of information and what it really is. Computers are fundamentally incapable of processing information at the level of meaning (and pragmatics), and we should, as a result, withdraw some of our faith in AI to address knotty problems like algorithmic bias.

    A data/information scientist could explain the tubes and wires and pipes, and would tell you that information has four levels, each one dependent on the level below: (1) syntactic, (2) semantic, (3) pragmatic, and (4) networked/emergent. Computers operate within the first, syntactic level, while human meanings and communication may include one or more of the “higher” layers.

    Here is the informational rub: the issues we care about (bias in data, algorithmic fairness, truth in information) exist at the semantic and pragmatic levels. We should, as a result, be humbled by the inherent limitations of computers. They are syntactic machines; the other levels are beyond their reach. With vexing human problems like algorithmic bias, we can’t assume, even with the awesome computational achievements of machine learning AI, that they are up to the job. Or at least not without lots of careful tending and supervision. Your panel’s *answers* and *solutions* will be intriguing. But based on your description, it’s not going to be answers so much as “things to think about”.
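
    To make the gap concrete, here is a toy Python sketch (mine, purely illustrative): the program computes token overlap flawlessly, but nothing in it represents what “bank” means.

    ```python
    # Toy sketch of purely syntactic processing: the program matches
    # symbols; it has no representation of meaning to consult.
    s1 = "She sat on the river bank"
    s2 = "She deposited cash at the bank"

    tokens1 = set(s1.lower().split())
    tokens2 = set(s2.lower().split())

    # Level 1 (syntactic): trivially computable.
    print(tokens1 & tokens2)  # e.g. {'she', 'the', 'bank'}

    # Level 2 (semantic): whether "bank" means the same thing in both
    # sentences is not recoverable from the symbols alone; there is
    # nothing in a syntactic machine that could even pose the question.
    ```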

  3. PART 3

    The underlying limitation-of-computing argument is important. You need someone who understands the architecture involved. You should approach problems like algorithmic bias with the limitations of computing front of mind. There is a shift well underway: the sheer volume of data, the parallel advances in machine learning, and tools like ChatGPT are helping to bridge the syntactic-semantic divide. But unless you acknowledge the inherent limitations dictated by the “semantic cliff”, you’ll be lost and just blabbering. Human reason fails at hard human problems like *bias* and *fake news* and *responsibility*.

    I do not wish to be so negative. I think all of these webinars are helpful for the “Great Unwashed”. But while we have been talking about and developing generalized AI systems since the 1950s, the most significant breakthroughs in the field have been much more recent. In 2017, a team of Google scientists published a seminal paper (“Attention Is All You Need”) on the Transformer architecture (yes, the “T” in ChatGPT), and we were off to the races: unprecedented levels of sophistication, with embedded reasons for bias. You need to learn and know that stuff.
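
    For a concrete sense of what that paper introduced, here is a minimal NumPy sketch of scaled dot-product self-attention, the core Transformer operation (illustrative only; real models add multiple heads, masking, positional encodings, and stacked layers):

    ```python
    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """Scaled dot-product self-attention (Vaswani et al., 2017).

        X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections.
        Every position attends to every other; the mixing weights come
        entirely from data, which is also where the data's biases enter.
        """
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise relevance
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
        return weights @ V                              # weighted mix of values

    # Toy usage: 4 tokens with 8-dimensional embeddings, random weights.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
    ```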

  4. Hi Eric,

    I didn’t have a hand in selecting the panelists, other than accepting the invite when asked. You raise important questions and issues, and we recognize that a one-hour webinar is nowhere near enough time to get into them in depth. It is going to be a “things to think about” discussion in terms of “the good, the bad and the ugly” of ChatGPT from what we’ve seen so far. Hopefully, it will encourage people to continue to learn more on their own about it (which will certainly be one of my recommendations).

    Thanks for the comments! I’ll share with the panel and we’ll try to get into as much as we can within our time limitations.

    Then I strongly recommend you get these two pieces out to your audience. They are not easy reads, but they are very comprehensive. These are the ones Greg and I use in our LLM tutorials:

    https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

    https://towardsdatascience.com/understanding-chatgpt-plugins-benefits-risks-and-future-developments-7a76f64e52ce

    Eliminating AI bias? Never going to happen. Impossible due to the architecture of neural networks. Beyond a 1-hour webinar.
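
    To see the point in miniature, here is a toy Python sketch (mine, not from either article): a model fit to skewed data reproduces the skew exactly, because the skew is the only signal it has; there is no separate “bias module” to remove.

    ```python
    # Toy sketch: a next-word "model" trained on skewed data. If 90% of
    # training sentences pair "engineer" with "he", the learned
    # probabilities are exactly that skew; the bias lives in the
    # parameters themselves, not in a removable component.
    from collections import Counter

    training_corpus = ["the engineer said he"] * 90 + ["the engineer said she"] * 10

    counts = Counter(sentence.split()[-1] for sentence in training_corpus)
    total = sum(counts.values())
    model = {word: n / total for word, n in counts.items()}  # P(next word)

    print(model)  # {'he': 0.9, 'she': 0.1}
    ```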
