
Explicit AI deepfakes of Taylor Swift have fans and lawmakers up in arms


If you checked in on X, the social network formerly known as Twitter, sometime in the last 24-48 hours, you stood a good chance of coming across AI-generated deepfake still images and videos featuring the likeness of Taylor Swift. The images depicted her engaged in explicit sexual activity with an assortment of fans of the Kansas City Chiefs, the NFL team of her boyfriend, pro football player Travis Kelce.

This explicit nonconsensual imagery of Swift was resoundingly condemned by her legions of fans, with the hashtag #ProtectTaylorSwift trending alongside “Taylor Swift AI” on X earlier today and prompting headlines in news outlets around the world, even as X struggled to remove and block the content, playing “whack-a-mole” as new accounts re-posted it.

It has also led to renewed calls by U.S. lawmakers to crack down on the fast-moving generative AI marketplace.

But big questions remain about how to do so without stifling innovation or outlawing parody, fan art, and other unauthorized depictions of public figures that have traditionally been protected under the U.S. Constitution’s First Amendment, which guarantees citizens the right to freedom of speech and expression.

It’s still unclear exactly which AI image and video generation tools were used to make the Swift deepfakes. Leading services such as Midjourney and OpenAI’s DALL-E 3, for example, prohibit the creation of sexually explicit, and even sexually suggestive, content at both the policy and the technical level.

According to Newsweek, the X account @Zvbear admitted to creating some of the images and has since set their account to private.

Independent tech news outlet 404 Media traced the images to a group on the messaging app Telegram, reporting that its members used “Microsoft’s AI tools,” specifically Microsoft Designer. Designer is powered by OpenAI’s DALL-E 3 image model, which also prohibits even innocuous creations featuring Swift or other famous faces.

These AI image generation tools, in our experience (VentureBeat uses these and other AI tools to generate article header imagery and text content), actively flag such user instructions (known as “prompts”), block the creation of imagery containing prohibited content, and warn users that they risk losing their accounts for violating the terms of use.
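Vendors don’t publish their internal moderation pipelines, but the general shape of such a pre-generation check is easy to illustrate. Below is a minimal sketch using OpenAI’s public Moderation API as a stand-in screening step; the function name, prompt, and wiring are illustrative assumptions, not any vendor’s actual implementation.

```python
# Illustrative pre-generation prompt check, NOT any vendor's actual pipeline.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes moderation and may proceed."""
    result = client.moderations.create(input=prompt).results[0]
    if result.flagged:
        # In a production tool, this is where the user would be warned that
        # repeat violations risk account suspension.
        print(f"Blocked prompt; flagged categories: {result.categories}")
        return False
    return True

if screen_prompt("a watercolor painting of a lighthouse at dusk"):
    print("Prompt accepted; forwarding to the image model.")
```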

Still, the popular Stable Diffusion image generation AI model created by the startup Stability AI is open source, and can be used by any individual, group, or company to create a wide variety of imagery, including sexually explicit imagery.
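That openness is precisely what makes safeguards best-effort rather than enforceable. Here is a minimal sketch of running Stable Diffusion locally with Hugging Face’s open-source `diffusers` library, assuming a CUDA GPU and the v1.5 checkpoint: the bundled NSFW safety checker executes on the user’s own hardware, making it a removable component rather than a server-side gate.

```python
# Minimal sketch of running Stable Diffusion locally via Hugging Face's
# `diffusers` library; assumes a CUDA GPU and the SD v1.5 checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

output = pipe("a watercolor painting of a lighthouse at dusk")
# The bundled safety checker runs on the user's own machine and reports
# whether each image was flagged (and blacked out) as NSFW. Because it is
# just a local pipeline component, nothing technically prevents a user
# from detaching it, which is the crux of the open-source safety debate.
print(output.nsfw_content_detected)
output.images[0].save("lighthouse.png")
```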

In fact, this is exactly what got the image generation service and community Civitai into trouble with journalists at 404 Media, who observed users creating a stream of nonconsensual pornographic and deepfake AI imagery of real people, celebrities, and popular fictional characters.

Civitai has since said it is working to stamp out the creation of this type of imagery, and there has been no indication yet that it is responsible for enabling the Swift deepfakes at issue this week.


Additionally, model creator Stability AI’s implementation of the Stable Diffusion AI generation model on the website Clipdrop also prohibits explicit “pornographic” and violent imagery.

Despite all of these policy and technical measures designed to prevent the creation of AI deepfake porn and explicit imagery, users have clearly found ways around them, or found other services that will produce such imagery, leading to the flood of Swift images over the last few days.

My take: even as AI is readily embraced for consensual creations by increasingly famous names in pop culture, such as the new HBO series True Detective: Night Country, the rapper and producer formerly known as Kanye West, and before that, Marvel, the technology is also clearly being used for increasingly malicious purposes, which may stain its reputation among the public and lawmakers.

AI vendors and those who rely on them may suddenly find themselves in hot water for using the tech at all, even for something innocuous or inoffensive, and need to be prepared to answer how they will prevent or stamp out explicit and offensive content. If and when new regulation does come into effect, it could severely limit AI generation models’ capabilities, and with them, the work products of those who depend on the models for less offensive uses.

Litigation incoming?

A report from UK tabloid newspaper The Daily Mail notes that the nonconsensual explicit images of Swift were uploaded to the website Celeb Jihad, and that Swift is reportedly “furious” about their dissemination and considering legal action. Whether that action would target Celeb Jihad for hosting the images, or AI tool companies such as Microsoft and OpenAI for enabling their creation, is not yet known.

The rapid spread of these AI-generated images has prompted renewed concern over generative AI creation tools and their ability to create imagery depicting real people, famous or otherwise, in compromising, embarrassing, and explicit situations.

Perhaps then it is not surprising to see calls from lawmakers in the U.S., Swift’s home country, to further regulate the technology.

Tom Kean, Jr., a Republican Congressman from the state of New Jersey who has recently introduced two bills designed to regulate AI — the AI Labeling Act and the Preventing Deepfakes of Intimate Images Act — released a statement to the press and VentureBeat today, urging Congress to take up and pass said legislation.

The first of Kean’s bills would require AI multimedia generator companies to add “a clear and conspicuous notice” identifying their generated works as “AI-generated content.” It’s not clear, however, how such a notice would stop the creation or dissemination of explicit AI deepfake porn and images.


Meta already applies one such label, in the form of a visible logo, to images generated with its Imagine AI art generator, which launched last month and was trained on user-generated Facebook and Instagram imagery. OpenAI recently pledged to begin adding content credentials from the Coalition for Content Provenance and Authenticity (C2PA) to its DALL-E 3 generations, as part of its work to prevent misuse of AI in the run-up to the 2024 elections in the U.S. and around the globe.

C2PA is a non-profit effort by tech and AI companies and trade groups to label AI-generated imagery and content with cryptographically signed provenance metadata, so that it can be reliably identified as AI-generated going forward.
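For the curious, here is a minimal sketch of what checking a downloaded image for C2PA Content Credentials might look like, using the open-source `c2pa-python` bindings; the call names and JSON keys reflect one version of the library and should be treated as assumptions, since the API surface varies across releases.

```python
# Minimal sketch of inspecting a file's C2PA Content Credentials with the
# `c2pa-python` bindings (pip install c2pa-python). Treat the exact call
# names and JSON keys as assumptions; they differ between library versions.
import json
import c2pa

try:
    # read_file returns the manifest store as JSON; the second argument is
    # a directory where embedded resources (e.g. thumbnails) are extracted.
    manifest_json = c2pa.read_file("image.jpg", "./c2pa_resources")
    store = json.loads(manifest_json)
    active = store["manifests"][store["active_manifest"]]
    print("Claim generator:", active.get("claim_generator"))
except Exception as err:
    # Either no manifest is present, or its signature failed to validate.
    print("No valid Content Credentials found:", err)
```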

The second bill, cosponsored by Kean and his colleague across the political aisle, Joe Morelle, a Democratic Congressman from New York state, would amend the Violence Against Women Act Reauthorization Act of 2022 to allow victims of nonconsensual deepfakes to sue their creators, and possibly the software companies behind the tools, for damages of $150,000, plus legal fees and any additional damages shown.

Both bills stop short of banning AI generations of famous faces wholesale, which is probably a smart move, given that such a prohibition would likely be struck down by the courts, up to and including the U.S. Supreme Court. Unauthorized artworks of public figures have traditionally been viewed by the courts as allowable speech under the U.S. Constitution’s First Amendment, and even before AI, they could be found widely in the form of editorial cartoons, caricatures, editorial illustrations, fan art (even explicit fan art), and other media not signed off on by the subjects depicted.

This is because courts have found public figures and celebrities to have waived their “right to privacy” by capitalizing on their image. However, celebrities have successfully sued those who misappropriated their image for commercial gain under the “right of publicity,” a term coined by federal appeals court judge Jerome N. Frank in a 1953 case, which essentially comes down to celebrities being able to control the commercial usage of their own image. If Swift sues, it would likely be under this latter right. The new bills are unlikely to help her particular case, but would presumably make it easier for future victims to successfully sue those who deepfaked them.

To actually become law, each of the new bills must be taken up by the relevant committees and voted through the full House of Representatives, and an analogous bill must be introduced in, and passed by, the U.S. Senate. Finally, the U.S. President would need to sign a reconciled bill uniting the work of both chambers of Congress. So far, the only thing that has happened to either bill is its introduction and referral to committee.


Read Kean’s full statement on the Swift deepfake matter below:

Kean Statement on Taylor Swift Explicit Deepfake Incident

Contact: Dan Scharfenberger

(January 25, 2024) BERNARDSVILLE, NJ – Congressman Tom Kean, Jr. spoke out today after reports that fake pornographic images of Taylor Swift, generated using artificial intelligence, circulated and went viral on social media.

“It is clear that AI technology is advancing faster than the necessary guardrails,” said Congressman Tom Kean, Jr. “Whether the victim is Taylor Swift or any young person across our country – we need to establish safeguards to combat this alarming trend. My bill, the AI Labeling Act, would be a very significant step forward.” 

In November 2023, students at Westfield High School used similar artificial intelligence tools to make fake pornographic images of other students at the school. Reports found that students’ photos were manipulated and shared around the school, raising concern in the school and the community about the lack of legal recourse against AI-generated pornography. These kinds of altered pictures are known online as “deepfakes.”

Congressman Kean recently co-hosted a press conference in Washington, DC with the victim, Francesca Mani, and her mother, Dorota Mani. The Manis have become leading advocates for AI regulations.

In addition to introducing HR 6466, the AI Labeling Act, a bill that would help ensure people know when they are viewing AI-made content or interacting with an AI chatbot by requiring clear labels and disclosures, Kean is also cosponsoring H.R. 3106, the Preventing Deepfakes of Intimate Images Act.

Kean’s AI Labeling Act would:  

  • Direct the Director of the National Institute of Standards and Technology (NIST) to coordinate with other federal agencies to form a working group to assist in identifying AI-generated content and establish a framework on labeling AI.
  • Require that developers of generative AI systems incorporate a prominently displayed disclosure to clearly identify content generated by AI. 
  • Ensure developers and third-party licensees take responsible steps to prevent systematic publication of content without disclosures.  
  • Establish a working group of government, AI developers, academia, and social media platforms to identify best practices for identifying AI-generated content and determining the most effective means of transparently disclosing it to consumers.   






