Apple Has Forbidden Employees from Using ChatGPT While Offering it as an App: Artificial Intelligence Trends

ChatGPT is now a smartphone app available on the App Store for iOS users. But Apple has forbidden its employees from using ChatGPT in any form.

According to this story from Newser, the free ChatGPT app became available on iPhones in the US on Thursday and will come to Android phones later. Unlike the web version, the app also lets you ask questions using your voice. The company that makes it, OpenAI, said it will remain ad-free but “syncs your history across devices.” A blog post announcing the new app, which is described in the App Store as the “official app” by OpenAI, said, “We’re starting our rollout in the US and will expand to additional countries in the coming weeks.”

However, according to the Wall Street Journal, Apple is also forbidding workers from using external AI tools like ChatGPT and Microsoft-owned GitHub’s Copilot, which automates the writing of software code, for fear that confidential data entered into the programs could be leaked.


ChatGPT stores user interactions, which are used to train the AI model. OpenAI identified a bug in March that exposed elements of users’ chat history. According to Reuters, ChatGPT has since introduced an “incognito mode” that allows users to turn off chat history. But the Verge reports that “even with this setting enabled, OpenAI still retains conversations for 30 days with the option to review them ‘for abuse’ before deleting them permanently”.

Apple joins JP Morgan, Verizon, Amazon, and others that have restricted the use of ChatGPT. Even Italy banned it temporarily, allowing it to return only after OpenAI made changes to satisfy Italian regulators.

Speaking of regulations, you probably have already read that Sam Altman, CEO of OpenAI, testified before members of a Senate subcommittee and largely agreed with them on the need to regulate the increasingly powerful AI technology being created inside his company and others like Google and Microsoft, as CNN reported here.

“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman said in his opening remarks.


One way the US government could regulate the industry is by creating a licensing regime for companies working on the most powerful AI systems, Altman said on Tuesday. This “combination of licensing and testing requirements,” Altman said, could be applied to the “development and release of AI models above a threshold of capabilities.”

Will that actually happen, much less happen soon? It seems doubtful. In the meantime, it appears that companies will be their own regulators, even as one of them offers the ChatGPT app within its own App Store. Considering that ChatGPT is already approaching a billion monthly visits without the aid of an official mobile app up to this point (and mobile devices are where a majority of people use the internet these days), it appears that the ride will only get wilder and crazier.

So, what do you think? Are you surprised that Apple has forbidden employees from using ChatGPT? Should there be government regulations on AI? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.
