Meta, the parent company of Facebook, Instagram, and Threads, is expanding its labelling of AI-generated imagery on its social media platforms. The expansion will cover synthetic imagery created with other companies’ generative AI tools, provided the content bears “industry standard indicators” of AI generation that Meta’s detection technology can identify.
Key Takeaway
Meta is expanding its labelling of AI-generated imagery on its social media platforms, aiming to provide users with more transparency regarding the origin of the content they encounter.
Meta’s Efforts to Detect AI-Generated Imagery
Meta has been working with industry partners to establish common technical standards for identifying AI-generated content. The company already detects and labels “photorealistic images” created with its own generative AI tool, “Imagine with Meta,” which launched in December, but it has not previously labelled synthetic imagery generated with other companies’ tools. Nick Clegg, Meta’s president of global affairs, announced the expansion in a recent blog post, saying the company will roll out the expanded labelling in the coming months and apply labels in all supported languages.
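Meta has not published the details of its detection pipeline, but one widely used “industry standard indicator” is the IPTC `DigitalSourceType` metadata value `trainedAlgorithmicMedia`, which some generative-AI tools embed in the images they produce. The sketch below is a simplified illustration of that idea only, assuming detection by scanning a file’s raw bytes for the IPTC marker; it is not Meta’s actual method, and real pipelines parse the XMP/C2PA metadata properly rather than string-matching.

```python
# Illustrative heuristic, NOT Meta's actual detection technology:
# check whether an image file carries the IPTC "trainedAlgorithmicMedia"
# digital-source-type marker that some AI generators embed in metadata.

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC DigitalSourceType for AI-generated media

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI marker.

    A crude byte scan: a production system would parse the XMP or C2PA
    metadata blocks instead of searching the whole file.
    """
    with open(path, "rb") as f:
        data = f.read()
    return AI_MARKER in data
```

Note that such signals are fragile, which is exactly the limitation Clegg raises for video and audio: re-encoding or editing an image can strip the metadata entirely, so absence of the marker proves nothing.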
Challenges in Detecting AI-Generated Video and Audio
While Meta is making progress in labelling AI-generated imagery, detecting AI-generated video and audio remains a challenge. Clegg noted that these fakes are harder to identify because marking and watermarking have not yet been widely adopted for those media, and because such signals can be stripped out through editing and manipulation.