As Ad Ops and Rev Ops professionals gear up to integrate generative AI into their daily routines, they must understand the legal consequences of failing to adhere to the privacy and copyright laws designed to protect consumers.
Amid its legal battle with The New York Times, OpenAI faces privacy scrutiny in Europe following a multi-month investigation into ChatGPT’s data collection practices. Italy’s Data Protection Authority gave OpenAI 30 days to respond to the allegations.
Critics have long called out generative AI’s spread of misinformation and its data privacy risks. There are also concerns about deepfakes, such as cloning artists’ voices to create new music or the horrific fake explicit images of American pop princess Taylor Swift.
This technology is evolving much faster than regulation can keep pace with. But regulators like Italy’s Data Protection Authority are working to ensure we can all reap AI’s benefits while companies comply with data ethics.
As Italy’s privacy watchdog readies its case against OpenAI and OpenAI prepares its response, it could set new precedents in AI regulation standards.
A History of Data Collection Missteps and the Lawsuits They Spawned
Should Italy’s Data Protection Authority confirm the breach, OpenAI faces a potential fine of €20 million or up to 4% of global annual turnover, whichever is higher. Beyond financial penalties, data protection authorities can mandate changes to a company’s data processing methods for violating privacy laws. This could lead to altered data collection practices or even a halt to the technology’s use in regions where regulators enforce compliance.
Given its history of legal challenges over its data collection practices, OpenAI is no stranger to the intricacies of AI data handling. That includes the notable lawsuit brought by The New York Times, which alleges the company used copyrighted content to enhance its chatbot’s intelligence.
Attorney Justin Nelson, representing the New York Times in the lawsuit, accused OpenAI of “building this product on the back of other people’s intellectual property. OpenAI is saying they have a free ride to take anybody else’s intellectual property since the dawn of time, as long as it’s been on the internet.”
In both cases, OpenAI maintained that the claims were without merit (a big shocker). In response to the NYT lawsuit, OpenAI released a public statement arguing that training on publicly available internet materials is fair use.
In response to Italy’s DPA, the company said, “We believe our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy. We want our AI to learn about the world, not about private individuals. We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people.”
The Regulatory Perspective and the Implications for the Ops Industry
Last year, Italian authorities raised GDPR concerns about OpenAI, temporarily banning ChatGPT’s local data processing. The March 30 provision cited issues such as the lack of a legal basis for collecting personal data, AI ‘hallucinations,’ and child safety problems. The authority suspected GDPR breaches of Articles 5, 6, 8, 13, and 25.
AI regulators are fighting tooth and nail for industry-wide standards, and there’s no sign of their momentum slowing. For example, the FTC launched a new inquiry into five major AI players (Alphabet, Amazon, Microsoft, Anthropic, and OpenAI), investigating how their investments and partnerships affect competition. More specifically, the FTC is examining “whether tech giants are using their power to trick the public, and whether the AI investments allow giants to ‘exert undue influence or gain privileged access’ to secure an advantage across the AI sector.”
“Just as we’ve seen behavioral advertising fuel the endless collection of user data, model training is emerging as another feature that could further incentivize surveillance,” said FTC Chair Lina Khan. “The FTC’s work has made clear that these business incentives cannot justify violations of the law.”
As the Ad Ops and Rev Ops industries build out their AI capabilities, and I know they are, they must also be cognizant of the technology’s potential pitfalls. Mark Sturino, VP of Data and Analytics at Good Apple, said in his keynote address at AdMonsters Ops 2023 that publishers can differentiate themselves by using AI to provide insights and transparency. Still, they must be careful when using AI to create targeted audiences.
“AI is playing more of a role from a publisher selection perspective. At least at Good Apple, it is less and less about flash, and it’s more about the actual results you’re giving us because everybody will be judged based on performance,” said Sturino.