
YouTube Implements Stricter Policies To Regulate AI Content Depicting Deceased Individuals


YouTube recently announced an update to its harassment and cyberbullying policies that cracks down on content that “realistically simulates” deceased minors or victims of fatal or violent incidents. The Google-owned platform will begin enforcing the new rules on January 16.

Key Takeaway

YouTube is implementing stringent measures to address the use of AI in creating content that depicts deceased individuals, particularly minors, in a realistic manner. The platform’s updated policies aim to curb the dissemination of such sensitive and potentially harmful material.

AI Recreation of Deceased Individuals

The policy shift comes in response to certain true crime content creators using AI to recreate the likeness of deceased or missing children. These creators have given the children an AI-generated, childlike “voice” to narrate the details of their deaths in high-profile cases, a practice that has raised significant concerns.

YouTube’s Enforcement and Penalties

YouTube has outlined strict consequences for violations of the updated policies. Offending content will be removed, and the users behind it will be barred from uploading videos, livestreams, or Stories for one week. Repeat violations will result in the permanent removal of the user’s channel from the platform.

YouTube’s Response to AI Content

These changes follow YouTube’s recent introduction of updated policies related to responsible disclosures for AI content, as well as the introduction of tools to request the removal of deepfakes. Users are now required to disclose the creation of altered or synthetic content that appears realistic. Failure to comply with these disclosure requirements may lead to severe penalties, including content removal, suspension from the YouTube Partner Program, or other sanctions.

Furthermore, YouTube has warned that AI-generated content depicting “realistic violence,” even if labeled as such, may be subject to removal.

Industry-wide Response

YouTube’s actions align with a broader trend of social media platforms addressing the proliferation of AI-generated content. In a similar vein, TikTok earlier introduced a tool for creators to label their AI-generated content, mandating the disclosure of synthetic or manipulated media depicting realistic scenes. The platform’s policy allows it to remove undisclosed realistic AI imagery.
