EU Investigates Risks of AI on Major Online Platforms
The European Commission has launched an extensive investigation into the potential risks posed by generative AI technology on major online platforms and search engines operating in the EU.
This inquiry, initiated on March 14, targets eight prominent online services: Google Search, Microsoft Bing, Facebook, X, Instagram, Snapchat, TikTok, and YouTube.
According to the European Commission, the investigation covers both the creation and dissemination of generative AI content. The platforms are required to provide detailed information on their risk-management strategies, particularly concerning AI-induced phenomena such as “hallucinations,” the proliferation of deepfakes, and the automated manipulation of content that could mislead voters.
The inquiry addresses a wide range of concerns, encompassing the impact of generative AI on electoral integrity, the spread of illegal content, the protection of fundamental rights, gender-based violence, child protection, and mental health.
The emphasis on election issues aligns with the broader efforts of the European Commission to mitigate AI-related risks, including through the Digital Services Act (DSA).
Under the DSA, Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) are obligated to comply with comprehensive regulations aimed at combating the dissemination of illegal content and minimizing adverse effects on fundamental rights, electoral processes, mental well-being, and child protection.
The platforms must provide information regarding elections by April 5 and information on other categories by April 26. Failure to provide accurate, complete, and transparent information may result in significant penalties, including fines.
This initiative underscores the EU’s commitment to enforcing the DSA and addressing risks associated with digital technologies to ensure a safe online environment. It follows previous reporting on the EU’s Artificial Intelligence Act, which regulates certain biometric applications of AI, with exceptions for law enforcement purposes.