The Rising Threat of Deepfake Pornography in the AI Era
In the age of artificial intelligence, a disturbing trend is emerging: the rise of deepfake pornography. Fueled by photo apps that digitally undress women, sexualized text-to-image prompts that create “AI girls,” and manipulated images used in “sextortion” rackets, the phenomenon is growing at a pace that outstrips regulatory efforts in the US and Europe.
Deepfakes, once associated with viral images of famous personalities, are now more commonly used to generate non-consensual porn that can devastate ordinary lives. Women are particular targets of AI tools and apps that are widely available for free, require no technical expertise, and allow users to digitally strip clothing from pictures or insert faces into sexually explicit videos.
Sophie Maddocks, a researcher at the University of Pennsylvania who tracks image-based sexual abuse, warns that using a woman’s image or likeness without her consent is becoming normalized, a trend that raises serious questions about societal attitudes toward consent.
The proliferation of online deepfakes underscores the threat of AI-enabled disinformation, which can damage reputations and lead to bullying or harassment. While celebrities have been prominent victims of deepfake porn, women outside the public eye are targeted as well. A 2019 study by the Dutch AI company Sensity found that 96 percent of deepfake videos online were non-consensual pornography, most of it depicting women.
New technologies such as Stable Diffusion, an open-source AI model developed by Stability AI, have made it possible to conjure up realistic images from text descriptions. This advancement has given rise to an “expanding cottage industry” around AI-enhanced porn, with many deepfake creators taking paid requests to generate content featuring a person of the customer’s choice.
The FBI recently issued a warning about “sextortion schemes,” in which fraudsters harvest photos and videos from social media to create “sexually themed” deepfakes used to extort money. The victims include minor children and non-consenting adults.
The rapid proliferation of AI tools has outpaced regulation. While some US states and the UK have proposed laws criminalizing the sharing of pornographic deepfakes, victims often have little legal recourse when perpetrators live outside those jurisdictions. Dan Purcell, chief executive and founder of the AI brand protection company Ceartas, emphasizes the need for a unified international law to protect people against this form of exploitation.
As we navigate the AI era, it’s crucial to address the ethical and legal implications of technologies like deepfakes. The battle against deepfake porn is not just about regulating technology, but also about redefining societal norms around consent and respect for individuals’ privacy.