Surge in Threats Faced by Women and Children in AI Scandals
In recent weeks, the circulation of sexually explicit AI-generated images purportedly depicting megastar Taylor Swift has brought the dark side of artificial intelligence (AI) into sharp focus. The unauthorized use of AI to create manipulated images of women is not a new phenomenon, but the incident involving the pop superstar is a stark reminder of the escalating threats faced by individuals across the globe, from schoolchildren to young adults.
Taylor Swift is said to be considering legal action against the deepfake website that generated explicit AI images of her which circulated online, Daily Mail reports.
— Pop Base (@PopBase) January 25, 2024
The issue goes beyond celebrities, as ordinary people are increasingly becoming targets of malicious AI-generated content. Easy access to AI tools exacerbates the problem: reports have emerged of high school students whose faces were grafted onto explicit images by AI and shared online by their peers, and a prominent female Twitch streamer found her likeness inserted into a fake pornographic video that rapidly circulated within the gaming community.
Danielle Citron, a professor at the University of Virginia School of Law, emphasizes that the impact is widespread, affecting people in various professions and walks of life. The recent targeting of Swift, a beloved celebrity with a massive fan base, has brought attention to the growing issues surrounding AI-generated imagery.
The unauthorized use of AI to manipulate and distribute explicit content is akin to the well-known practice of "revenge porn." What sets the current trend apart is the difficulty in discerning the authenticity of manipulated images and videos. Social media platforms, where much of this content is shared, struggle to effectively monitor and moderate such material due to a lack of sufficient guardrails.
Swift's case underscores the challenges faced by victims: the response to the incident took considerable time, and many of the manipulated images continue to circulate in less regulated corners of the internet. Social media platforms, including X (formerly Twitter), have policies against sharing synthetic or manipulated media, but their reliance on automated systems and shrunken content moderation teams raises doubts about how effective those policies are in practice.
Deepfake technology means anyone can take your face, your voice and steal your identity for whatever they want.
They don’t need your consent, and can generate entire videos from a single picture.
96% of deepfakes are non-consensual and pornographic - those that aren’t, are…
— Control AI (@ai_ctrl) December 19, 2023
As AI tools become more accessible, creating manipulated images and videos grows ever easier, and unmoderated AI models hosted on open-source platforms compound the challenge. Currently, only nine U.S. states have laws against the creation or sharing of non-consensual deepfake imagery, highlighting the need for comprehensive federal legislation.
Experts are calling for changes to Section 230 of the Communications Decency Act, which protects online platforms from liability over user-generated content. However, addressing the complex issue of AI-generated malicious content requires a multifaceted approach, combining legal reforms, improved content moderation, and public awareness.
To avoid becoming victims of non-consensual imagery, individuals are advised to keep their social media profiles private and share photos only with trusted people. As AI systems grow more capable, limiting what is shared publicly becomes increasingly important. Maintaining robust cybersecurity practices, such as using strong passwords and safeguarding personal data, can also reduce the risk of hackers obtaining private images in the first place.