Swift Retaliation: Fans Strike Back After Explicit Deepfakes Flood X

You know you’ve screwed up when you’ve simultaneously angered the White House, the TIME Person of the Year, and pop culture’s most rabid fanbase. That’s what happened last week to X, the Elon Musk-owned platform formerly called Twitter, when AI-generated, pornographic deepfake images of Taylor Swift went viral.

Key Takeaway

Social platforms need a complete overhaul of how they handle content moderation to protect users from abusive content.

Content Moderation Failure

One of the most widespread posts of the nonconsensual, explicit deepfakes was viewed more than 45 million times, with hundreds of thousands of likes. That doesn’t even factor in all the accounts that reshared the images in separate posts – once an image has been circulated that widely, it’s basically impossible to remove.

X lacks the infrastructure to identify abusive content quickly and at scale. Even in the Twitter days, this issue was difficult to remedy, but it’s become much worse since Musk gutted so much of Twitter’s staff, including the majority of its trust and safety teams.
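The missing piece is not mysterious: once an image is confirmed abusive, a platform can store a perceptual fingerprint of it and automatically flag re-uploads, even lightly cropped or re-encoded copies. The Python sketch below illustrates the idea with a simple average hash; the stored hash value and distance threshold are illustrative assumptions, not X's actual system, and production tools such as Microsoft's PhotoDNA use far more robust fingerprints.

```python
# Minimal sketch of perceptual-hash matching for re-upload detection.
# Requires Pillow (pip install Pillow); hash values here are hypothetical.
from PIL import Image

HASH_SIZE = 8  # 8x8 grid -> 64-bit hash

def average_hash(path: str) -> int:
    """Grayscale, downscale, then threshold each pixel against the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Fingerprints of images already confirmed abusive (hypothetical value).
KNOWN_ABUSIVE_HASHES = {0x9F3B7C01D2E45A88}

def is_known_abusive(path: str, max_distance: int = 5) -> bool:
    """Flag an upload whose hash lands within a few bits of a known-bad
    fingerprint, catching near-duplicates rather than only exact copies."""
    h = average_hash(path)
    return any(hamming(h, bad) <= max_distance for bad in KNOWN_ABUSIVE_HASHES)
```

The matching is the easy part. Someone still has to confirm the first copy quickly, and that is exactly the human capacity those staff cuts removed.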

Community Response

Taylor Swift’s massive and passionate fanbase took matters into their own hands, flooding search results for queries like “taylor swift ai” and “taylor swift deepfake” to make it more difficult for users to find the abusive images. As the White House’s press secretary called on Congress to do something, X simply banned the search term “taylor swift” for a few days.
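X has never explained how that ban was implemented, but the behavior users saw, exact searches failing while obvious variants still worked, is consistent with a blunt query denylist. A hypothetical sketch of why that approach is both overbroad and easy to evade:

```python
# Hypothetical sketch of an exact-match search denylist; X has not
# published how its "taylor swift" block actually worked.
BLOCKED_QUERIES = {"taylor swift"}  # illustrative entry

def normalize(query: str) -> str:
    # Lowercase and collapse whitespace so trivial variants still match.
    return " ".join(query.lower().split())

def search_allowed(query: str) -> bool:
    return normalize(query) not in BLOCKED_QUERIES

print(search_allowed("Taylor  Swift"))    # False: every legitimate search blocked
print(search_allowed("taylor ai swift"))  # True: reordered queries slip through
```

Word-order tricks like this reportedly bypassed the real block too, so the ban mostly punished fans searching for ordinary news while the abusive queries kept working.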

Why It Matters

This content moderation failure became a national news story, since Taylor Swift is Taylor Swift. But if social platforms can’t protect one of the most famous women in the world, who can they protect?

Proposed Solutions

Dr. Carolina Are, a fellow at Northumbria University’s Centre for Digital Citizens in the U.K., suggests that social platforms need to be more transparent with individual users about decisions regarding their account or their reports about other accounts. She also recommends a more personalized, contextual, and speedy response to reports of abuse.
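Are's transparency point can be read as a concrete engineering requirement: every report should resolve to a record the reporting user can actually see, naming the rule at issue, the action taken, and the reasoning. The sketch below is purely illustrative; every field name is an assumption, not any platform's actual schema.

```python
# Illustrative shape of a user-facing moderation decision record; the
# field names are assumptions, not any platform's real API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModerationDecision:
    report_id: str
    reported_content_id: str
    rule_violated: str   # the specific policy, not a generic "rules violation"
    action_taken: str    # e.g. "content removed", "account suspended", "no action"
    rationale: str       # plain-language explanation sent back to the reporter
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# The speedy, contextual response Are describes would return something
# like this to the person who filed the report.
decision = ModerationDecision(
    report_id="r-1024",
    reported_content_id="post-98765",
    rule_violated="nonconsensual intimate imagery",
    action_taken="content removed; uploader suspended",
    rationale="The image matched a confirmed nonconsensual explicit deepfake.",
)
print(decision.rationale)
```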

Platform Response

X announced this week that it would hire 100 content moderators to work out of a new “Trust and Safety” center in Austin, Texas. However, under Musk’s ownership, the platform has not set a strong precedent for protecting marginalized users from abuse.

Responsibility of AI Companies

In the case of AI-generated deepfakes, the onus is not just on social platforms. It’s also on the companies that create consumer-facing generative AI products. According to an investigation by 404 Media, the abusive depictions of Swift came from a Telegram group devoted to creating nonconsensual, explicit deepfakes, reportedly using Microsoft’s generative AI tools to make the images.

Call for Accountability

Shane Jones, a principal software engineering lead at Microsoft, highlighted vulnerabilities in the AI model used to create the deepfakes and urged companies to take accountability for the safety of their products and to disclose known risks to the public.

Conclusion

As the world’s most influential companies bet big on AI, platforms need to take a proactive approach to moderating abusive content. The Taylor Swift deepfake incident underscores how unreliable platforms remain at protecting people online, and marginalized communities most of all.
