AI threat: Taylor Swift fans trend #ProtectTaylorSwift as explicit deepfakes spark outrage on X

Taylor Swift is considering legal action against a deepfake porn website responsible for circulating explicit AI-generated images of the artist.

By Storyboard18 | January 27, 2024, 1:08 pm
A source close to Taylor Swift told the Daily Mail, "Whether or not legal action will be taken is being decided, but there is one thing that is clear: these fake AI-generated images are abusive, offensive, exploitative, and done without Taylor's consent and/or knowledge." (Image sourced via Forbes)

Taylor Swift is reportedly considering taking legal action against a deepfake NSFW website that hosts explicit AI-generated images of the artist. The controversy began earlier this week when explicit images appeared on various social media platforms such as X (formerly Twitter), Reddit, and Instagram.

Swift’s loyal followers quickly mobilized, launching efforts to have the images taken down and flooding social media with messages of support for the singer under the hashtag #ProtectTaylorSwift. The offending images were traced back to an X account with the handle @FloridaPigMan, which remained on the platform for a shocking 19 hours before being removed; the New York Times reported that one image shared on X, formerly Twitter, had been viewed 47 million times before the account was suspended, drawing criticism of the platform’s content moderation policies. The seriousness of the situation has enraged Swift’s family, friends, and fans, who say the explicit AI-generated images are abusive, offensive, and exploitative, and were created without Taylor’s permission or knowledge.

A source close to Taylor Swift told the Daily Mail, “Whether or not legal action will be taken is being decided, but there is one thing that is clear: these fake AI-generated images are abusive, offensive, exploitative, and done without Taylor’s consent and/or knowledge.” The incident has reignited the debate over AI-generated fake explicit content and the challenges social media platforms face in preventing its spread.

The controversy took a new turn when it emerged that the explicit AI-generated images may have originated from a Telegram group. Tech website 404 Media traced the images to a Telegram group dedicated to making non-consensual AI-generated sexual images of women, where users reportedly created the content using Microsoft Designer. Despite X’s explicit policies prohibiting synthetic media and non-consensual nudity, the platform faced criticism for allowing the images to remain.

X (formerly Twitter) issued a public statement condemning the misuse of AI on its platform but did not mention the Taylor Swift images directly. The incident serves as a stark reminder of the challenges platforms face in tackling the spread of AI-generated content.

In response to the explicit deepfakes, Swift’s fan base took to X, flooding the hashtag #ProtectTaylorSwift and circulating her original content. One fan wrote, “‘Taylor Swift is a billionaire she’ll be fine’ THAT DOESN’T MEAN U CAN GO AROUND POSTING SEXUAL AI PICS OF HER LIKE A FREAK SHE’S STILL HUMAN BEING WITH FEELINGS.” Others chimed in, saying, “using ai generated pornography of someone is awful and inexcusable. you guys need to be put in jail.”

The incident serves as a cautionary example, emphasizing the importance of ongoing discussions about the ethical use of AI technology and protecting individuals from the possible consequences of deepfakes.
