The European Union has expressed concerns about the potential risks that readily available generative AI tools pose to democratic societies, particularly in the context of elections. Vera Jourova, the EU’s values and transparency commissioner, emphasized the threat of AI-generated disinformation and urged platforms to implement effective safeguards to combat this issue.
Key Takeaway
The European Union has called for increased safeguards against the risks of generative AI tools in the context of elections. The EU is urging platforms to be vigilant and implement effective measures to combat AI-generated disinformation. Adherence to the EU’s anti-disinformation Code is considered essential, especially in light of the upcoming EU AI Act and the Digital Services Act.
In an update on the EU’s Code of Practice on Disinformation, Jourova acknowledged the efforts by some mainstream platforms to address the risks associated with AI, such as informing users about the synthetic origin of online content. However, she emphasized the need for continued and intensified efforts, especially considering the high potential of realistic AI products in creating and disseminating disinformation.
The EU commissioner emphasized the importance of platforms being vigilant and providing effective safeguards in the context of elections. In line with this, she mentioned a meeting with representatives from OpenAI, the maker of ChatGPT, to discuss the issue further. OpenAI is not currently a signatory to the EU’s anti-disinformation Code, so it may face pressure to join the effort.
How badly will AI-generated images impact elections?
Jourova’s remarks reflect the earlier pressure applied to platforms this summer to label deepfakes and other AI-generated content. The EU is working on an AI regulation, known as the EU AI Act, which is expected to make user disclosures a legal requirement for generative AI technologies. However, the legislation is still in the draft stage and is not expected to come into effect for a few more years.
As a temporary solution, the EU has turned to the Code of Practice to encourage platforms to proactively address deepfake disclosures. Adherence to the Code is also seen as a favorable signal for compliance with other legal requirements, such as the Digital Services Act (DSA), which applies to large platforms and search engines.
Jourova highlighted the upcoming national and EU elections as an important test for the Code, urging platforms to take their responsibility seriously. She emphasized that platforms need to mitigate the risks they pose for elections, particularly in light of the binding DSA.
The EU’s voluntary Code of Practice on Disinformation has 44 signatories, including major social media and search platforms, as well as entities from the ad industry and civil society organizations involved in fact-checking. Several of these signatories, including Google, Meta, Microsoft, and TikTok, have published reports covering their efforts to combat disinformation.
Google, for example, has been working on large-scale AI models and has published AI principles to guide its work responsibly. It has also established a governance team to conduct ethical reviews and has published guidance on AI-generated content. Google plans to integrate new innovations in watermarking and metadata into its generative models.
Similarly, Microsoft has developed Responsible AI Principles and Information Integrity Principles to ensure responsible implementation of AI. It has also set up partnerships to combat manipulated media and false narratives and has focused on Bing Search’s policies and practices to mitigate risks related to generative AI.
TikTok has updated its community guidelines to address the use of AI-generated content and has committed to the transparency obligations of the proposed AI Act, as well as to avoiding the manipulative practices it would prohibit.
Meta, the parent company of Facebook and Instagram, is working with partners to tackle AI-generated misinformation and is launching a Community Forum on Generative AI. Meta intends to develop sustainable solutions in collaboration with governments, industry, civil society, and academia.
Jourova also emphasized the importance of platforms’ efforts in combating Kremlin propaganda, particularly given the Russian state’s ongoing disinformation campaigns. The EU expects signatories to adapt their actions to the information war being waged against Europe and to the upcoming elections.
In summary, the EU is calling for stricter safeguards and labeling of AI-generated content in order to address the risks of disinformation in democratic processes. The Code of Practice, while voluntary, is seen as an essential step towards a co-regulatory framework for addressing disinformation risks.