Following a backlash over a proposed data scraping plan to train artificial intelligence (AI) models, video conferencing giant Zoom announced that it will always seek its customers' consent before using their content.
In a company blog post, Zoom confirmed an update to its terms of service to reflect its stance on obtaining consent. The company's Chief Product Officer, Smita Hashim, stated that user-generated content will not be used for training AI models "without customer consent."
Zoom recently caught public attention after X (formerly Twitter) users threatened to boycott the platform over seemingly far-reaching permissions. The users claimed that Zoom's terms did not provide an opt-out option for customers, effectively allowing the platform to use customer data to train its AI and machine learning (ML) models.
“It’s important to us at Zoom to empower our customers with innovative and secure communication solutions,” read the post. “We’ve updated our terms of service (in section 10.4) to further confirm that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent.”
Zoom has been experimenting with AI in recent months, specifically dabbling in generative AI following the successes of ChatGPT and Bard. The company rolled out two generative AI features—Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose—to summarize meetings automatically.
In response to privacy concerns, Zoom disclosed that it would make key changes to how its generative AI features operate. According to the post, users will be notified whenever Zoom's generative AI features are in use, as an added layer of protection.
“When you choose to enable Zoom IQ Meeting Summary or Zoom IQ Team Chat Compose, you will also be presented with a transparent consent process for training our AI models using your customer content,” Zoom said.
The company added that upon giving consent, customer data is used solely to improve the accuracy of its AI services. Zoom disclosed that user data will not be used outside its platform, saying that "it will not be used for training of any third-party models."
In July, Google (NASDAQ: GOOGL) updated its privacy policy to allow the use of public data for training its AI models. Per the update, the company may use any “information that is publicly available online” to train large language and other AI models.
Privacy at the center of AI regulation
The need to protect user data and install privacy safeguards for safe AI use is at the core of the push for AI regulation. Several class-action lawsuits are accusing OpenAI and Meta (NASDAQ: META) of illegal data scraping in training their AI models.
X reduced the number of tweets accessible to users because it was “getting data pillaged so much that it was degrading service for normal users.” In Europe, consumer protection watchdogs are leading the charge for stricter AI regulation in the face of privacy threats and risks to key sectors like Web 3, finance, and security.
Watch: Does AI know what it's doing?