The EU AI Act is the first comprehensive law to regulate generative AI. Still, the tech community is concerned that EU lawmakers created loopholes for public authorities and weak regulation for monopolistic practices.
When generative AI hit the streets, it was every publisher and consumer’s favorite shiny new toy. A tool that can generate content and images and help improve workflows, among other time-saving tasks. What more could one ask for?
However, as this new technology gained traction, its flaws became apparent. Accusations of misinformation and privacy breaches surfaced, and there was a growing unease about its development outpacing regulatory measures.
In March of last year, the US and EU joined forces to discuss ways to regulate the fast-paced growth of AI technology, with officials saying they wanted "our economy to get those benefits, but there's also real worry about it." At the time, the EU overhauled its draft AI regulation, the Artificial Intelligence Act.
Fast forward to today: Europe has proven ahead of the curve again with the passage of the EU AI Act, the world's first comprehensive law regulating AI. President Biden issued an executive order on AI last October, and though we've seen no further action yet, movement in the EU may spur some motion in the US.
What is the EU AI Act?
Lawmakers in the European Parliament overwhelmingly approved the Artificial Intelligence Act last week, and they expect implementation by the end of the year. This groundbreaking legislation garnered both enthusiasm and concern about the future.
The EU AI Act adopts a horizontal, risk-based approach applicable across various AI development sectors. It categorizes AI into four groups:
- prohibited
- high-risk
- limited-risk
- minimal-risk
Systems that violate human rights, such as those enabling social scoring or mass surveillance, are banned outright.
Before entering the market, high-risk systems, such as those used in biometric identification, education, health, and law enforcement, must meet stringent requirements, including human oversight and security assessments. Systems involving user interaction, such as chatbots and image-generation programs, fall under limited risk and must disclose AI involvement while offering users opt-out options.
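The tiered obligations described above can be summarized in a small lookup table. This is an illustrative simplification drawn from the descriptions in this article, not the act's legal text:

```python
# Illustrative sketch of the AI Act's four risk tiers as described
# above; a simplification for orientation, not legal guidance.
RISK_TIERS = {
    "prohibited": "banned outright (e.g. social scoring, mass surveillance)",
    "high-risk": "pre-market requirements: human oversight, security assessments",
    "limited-risk": "must disclose AI involvement and offer users opt-outs",
    "minimal-risk": "largely unregulated under the act",  # assumption: the act imposes no specific duties here
}

def obligations(tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    return RISK_TIERS[tier]

print(obligations("limited-risk"))
```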
Non-compliance with the AI Act carries hefty penalties, reaching up to EUR 35 million (USD 38 million) or 7% of the company’s global annual turnover from the previous financial year—nearly double the maximum penalty for GDPR breaches introduced six years ago.
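The penalty ceiling works as a simple maximum of the two figures. A minimal sketch, assuming the "whichever is higher" rule the act applies to its most serious violations:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on an AI Act fine for the most serious violations:
    EUR 35 million or 7% of the prior financial year's worldwide
    turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, the 7% figure dominates:
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For smaller companies whose 7% figure falls below EUR 35 million, the flat EUR 35 million cap applies instead.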
It’s essential to be up to date on AI compliance or prepare to run your pockets.
The Bittersweet Reaction to the AI Act
The European Commission has asked Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X to report on their efforts to control generative AI risks.
The EU mainly focuses on AI hallucinations, the spread of deepfakes, and AI manipulation in elections. However, some in the tech community and researchers express concerns and dissatisfaction with the legislation.
According to Max von Thun, Europe director of the Open Markets Institute, there are “significant loopholes for public authorities” and “relatively weak regulation of the largest foundation models that pose the greatest harm.” Foundation models are machine learning models trained on broad datasets that can be adapted to a wide range of tasks; ChatGPT is built on one.
Von Thun’s biggest concern is tech monopolies, and he warns that the EU should be wary of monopolistic abuse in the AI ecosystem. Of course, walled gardens will grab their piece of the pie any way they can.
“The AI Act is incapable of addressing the number one threat AI currently poses: its role in increasing and entrenching the extreme power of a few dominant tech firms,” said von Thun.
In addition, startups and small and medium-sized enterprises may face increased workloads because of the new regulations. Marianne Tordeux Bitker, public affairs chief at France Digitale, remarked, “This decision leaves a bittersweet taste. Although the AI Act addresses transparency and ethics, it imposes significant obligations on all companies using or developing artificial intelligence.”
Tordeux Bitker is concerned the act will create more regulatory hurdles that favor American and Chinese competitors and limit opportunities for European AI innovation to emerge.
The Complex Balance of Advancement and Regulation
Other countries, such as Brazil, China, Israel, and Japan, have already drafted AI legislation, but none quite as comprehensive as the EU’s. Will the US be next on the docket to draft AI regulation?
If US federal privacy law is our benchmark, we’ll be waiting a while. But publishers and advertisers should still watch how these regulations will affect them, especially those with audiences in the EU.
Futurist and generative AI expert Bernard Marr, author of the new book Generative AI in Practice: 100+ Amazing Ways Generative Artificial Intelligence Is Changing Business and Society, believes the act will affect the advertising and digital media industry in two ways. On the one hand, it could introduce new compliance challenges, requiring adjustments to how AI is used for content creation, personalization, and ad targeting so that companies meet ethical standards.
On the other hand, it offers an opportunity to build trust with consumers by adhering to high data protection standards and ethical AI use.
“Ultimately, the EU AI Act could encourage more responsible and innovative uses of AI in advertising, enhancing consumer experiences while safeguarding against potential abuses of the technology,” said Marr.
He agrees that the “bittersweet sentiments” in reaction to the act display the complex balance of fostering technological advancement and ensuring robust safeguards. Yet, for Marr, the act sets a precedent for companies that use AI to “prioritize transparency, accountability, and ethical use of AI technologies.”