OpenAI, maker of the generative AI platform ChatGPT, has been dragged to court in California over allegations that it illegally used private information to train its AI models.
The lawsuit, filed on June 28, alleges that OpenAI used the private data of millions of individuals to train ChatGPT without seeking their express consent. According to the court filing, the plaintiffs claim that the data in question were gleaned from blog posts, social media comments, and even meal recipes posted on the internet.
“By collecting previously obscure and personal data of millions and permanently entangling it with the Products, Defendants knowingly put Plaintiffs and the Classes in a zone of risk that is incalculable — but unacceptable by any measure of responsible data protection and use,” the filing read.
Clarkson Law Firm is handling the class action lawsuit, which names five OpenAI-related entities as defendants. Microsoft Corporation (NASDAQ: MSFT), an early investor in OpenAI, is also named as a defendant, and the plaintiffs are demanding a jury trial.
The plaintiffs allege that OpenAI’s misuse of data violated the Electronic Communications Privacy Act, the Computer Fraud and Abuse Act, the California Invasion of Privacy Act, and Illinois’ Biometric Information Privacy Act, among others.
The class action suit further claims that OpenAI’s conduct amounts to negligence, invasion of privacy, unjust enrichment, failure to warn, and conversion. Beyond obtaining data without consent, the plaintiffs argue that ChatGPT was not designed to be appropriate for children and that OpenAI deceptively tracked children without their consent.
“While holding themselves out publicly as respecting privacy rights, Defendants tracked the information, behaviors, and preferences of vulnerable children solely for financial gain in violation of well-established privacy protections, societal norms, and the laws encapsulating those protections,” the filing states.
In early June, Japanese lawmaker Takashi Kii predicted an avalanche of copyright cases stemming from AI platforms’ rogue data collection methods.
Hurtling toward AI regulations
The surge in AI adoption has forced governments worldwide to scramble for new regulations to ensure the safe use of the technology. The European Union (EU) is currently finalizing its AI Act, while other jurisdictions are holding public consultations to settle on appropriate approaches.
Amid the regulatory scramble, consumer groups are urging governments to move faster, citing growing risks posed by AI in health, finance, mass media, and Web3. Others are calling for a moratorium on AI development until regulators have put the necessary safeguards in place.