The European Commission is set to require tech platforms such as TikTok, X, and Facebook to detect artificial intelligence (AI)-generated content, in an effort to safeguard the upcoming European elections from misinformation.
In a move towards enhancing election security, the commission has launched a public consultation on proposed guidelines for very large online platforms (VLOPs) and very large online search engines (VLOSEs).
The recommendations seek to mitigate the democratic threats posed by generative AI and deepfakes.
Outlined in the draft guidelines are various measures to counter election-related risks, including specific strategies for generative AI content, pre- and post-election risk-mitigation planning, and clear directives for the European Parliament elections.
Generative AI has the potential to mislead voters and manipulate electoral processes by fabricating and circulating inauthentic synthetic content, including false depictions of political figures, events, polls, and narratives.
The draft election security guidelines are presently open for public consultation in the European Union until March 7.
They recommend that the relevant platforms alert users to potential inaccuracies in content produced by generative AI.
According to the draft, the guidelines also propose directing users to authoritative information sources and advocate for tech giants to implement safeguards against the creation of misleading content that could significantly influence user behaviour.
Regarding AI-generated text, the current recommendation for VLOPs/VLOSEs is to “indicate, where possible, in the outputs generated the concrete sources of the information used as input data to enable users to verify the reliability and further contextualize the information.”
The proposed “best practices” for risk mitigation outlined in the draft guidance draw inspiration from the EU’s recently approved legislative proposal, the AI Act, and its non-binding counterpart, the AI Pact.
Concerns surrounding advanced AI systems, such as large language models, have escalated since the widespread adoption of generative AI in 2023, bringing tools like OpenAI’s ChatGPT into the spotlight.
The commission has not specified a timeline for companies to label manipulated content under the EU's content moderation law, the Digital Services Act. Meta, however, announced in a company blog post that it plans to introduce new guidelines for AI-generated content on Facebook, Instagram, and Threads in the coming months.
Any content identified as AI-generated, whether through metadata or intentional watermarking, will be visibly labelled.
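As a rough illustration of how metadata-based detection can work, the sketch below scans an image file for common AI-provenance markers, such as the IPTC "trainedAlgorithmicMedia" digital source type and C2PA Content Credentials labels. The file name, the marker list, and the simple byte-scan approach are illustrative assumptions, not the method prescribed by the draft guidelines or adopted by Meta.

```python
# Illustrative sketch: naive check for AI-provenance markers in an image file.
# A real platform would parse XMP/IPTC/C2PA structures properly and also check
# invisible watermarks; scanning raw bytes is only a rough heuristic.

from pathlib import Path

AI_PROVENANCE_MARKERS = [
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType value for AI-generated media
    b"c2pa",                     # label used in C2PA / Content Credentials manifests
]

def looks_ai_generated(image_path: str) -> bool:
    """Return True if the raw file bytes contain a known AI-provenance marker."""
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)

if __name__ == "__main__":
    # Hypothetical example file; prints whether a provenance marker was found.
    print(looks_ai_generated("upload.jpg"))
```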