Surge in Threats Faced by Women and Children in AI Scandals

Cyber violence against women and children is a growing concern.

Feb. 1, 2024, Published 4:09 a.m. ET


In recent weeks, the unsettling circulation of explicit and pornographic images purportedly featuring megastar Taylor Swift has brought the dark side of artificial intelligence (AI) into sharp focus. While the unauthorized use of AI-generated content, particularly manipulated images of women, is not a new phenomenon, the incident involving the country-pop artist serves as a stark reminder of the escalating threats faced by individuals across the globe, from school-going children to young adults.

The issue goes beyond celebrities, as ordinary people are increasingly becoming targets of AI-generated malicious content. The ease of access to AI tools exacerbates the problem, with reports emerging of high school students having their faces manipulated by AI and shared online by peers. A well-known female Twitch streamer also fell victim when her likeness was used in a fake explicit video that rapidly circulated within the gaming community.

Danielle Citron, a professor at the University of Virginia School of Law, emphasizes that the impact is widespread, affecting people in various professions and walks of life. The recent targeting of Swift, a beloved celebrity with a massive fan base, has brought attention to the growing issues surrounding AI-generated imagery.

The unauthorized use of AI to manipulate and distribute explicit content is akin to the well-known practice of "revenge porn." What sets the current trend apart is the difficulty of discerning whether manipulated images and videos are authentic. Social media platforms, where much of this content is shared, struggle to effectively monitor and moderate such material due to a lack of sufficient guardrails.

Swift's case underscores the challenges faced by victims, as the response to the incident took considerable time, and many manipulated images continue to circulate in less regulated spaces. Social media platforms, including X (formerly Twitter), have implemented policies against the sharing of synthetic or manipulated media, but their reliance on automated systems and reduced content moderation teams raises concerns about the effectiveness of these measures.

As AI tools become more accessible, creating manipulated images and videos becomes easier, and unmoderated AI models available on open-source platforms add to the challenge. Currently, only nine U.S. states have laws against the creation or sharing of non-consensual deepfake imagery, highlighting the need for comprehensive federal legislation.

Experts are calling for changes to Section 230 of the Communications Decency Act, which protects online platforms from liability over user-generated content. However, addressing the complex issue of AI-generated malicious content requires a multifaceted approach, combining legal reforms, improved content moderation, and public awareness.

To reduce the risk of becoming victims of non-consensual imagery, individuals are advised to keep their profiles private and share photos only with trusted people. As AI systems become more capable, limiting what is shared publicly becomes increasingly important. Maintaining robust cybersecurity practices, such as using strong passwords and safeguarding personal data, can also help reduce the risk of hackers exploiting victims.

    © Copyright 2026 Market Realist. Market Realist is a registered trademark. All Rights Reserved. People may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.