Swifties Rally To Protect Taylor Swift From Nonconsensual Deepfakes

Nonconsensual deepfake porn of Taylor Swift went viral on X this week, with one post garnering more than 45 million views, 24,000 reposts and hundreds of thousands of likes before it was removed.

Key Takeaway

Swifties are mobilizing to protect Taylor Swift from nonconsensual deepfake porn, showcasing the power of dedicated fanbases in combating online abuse.

Swifties Take Action

The pop star has one of the world’s most dedicated, extremely online, and incomprehensibly massive fanbases. Now, the Swifties are out for blood. When mega-fandoms get organized, they’re capable of immense things, like when K-pop fans reserved hundreds of tickets to a Donald Trump rally in an attempt to tank attendance numbers.

But today isn’t election day, and Swifties are focused on something more immediate: making the nonconsensual deepfakes of the musician as difficult to find as possible. Searching terms like “taylor swift ai” or “taylor swift deepfake” on X now surfaces thousands of posts from fans trying to bury the AI-generated content, and the phrase “PROTECT TAYLOR SWIFT” has been trending on the platform with over 36,000 posts.

Sometimes, these fandom-driven campaigns can cross a line. Some fans are encouraging each other to dox the X users who circulated the deepfakes, while others worry about fighting harassment with more harassment, especially since the suspected perpetrator has a relatively common name and the Swifties could be going after the wrong person.

Challenges and Legislative Response

With the rise of accessible generative AI tools, this harassment tactic has become so widespread that last year, the FBI and international law enforcement agencies issued a joint statement about the threat of sextortion. According to research from cybersecurity firm Deeptrace, about 96% of deepfakes are pornographic, and they almost always feature women.

Lawmakers are making some headway toward criminalizing nonconsensual deepfakes. Virginia has banned deepfake revenge porn at the state level, and in Congress, Representative Yvette Clarke (D-NY) recently reintroduced the DEEPFAKES Accountability Act, which she first proposed in 2019.

This abuse campaign is emblematic of the problems with AI’s steep ascent: companies are building too fast to properly assess the risks of the products they’re shipping. So, maybe Taylor Swift fans will take up the fight for thoughtful regulation of fast-developing AI products — but if it takes a mass harassment campaign against a celebrity for undertested AI models to face any sort of scrutiny, then that’s a whole other problem.
