First at Night, Then in the Digital Space: The Rise of AI-Generated Abuse

From June 20 to mid-September 2025, an anonymous YouTube user operating under the profile “Woman Shot A.I” published a series of disturbing videos, with titles like “Japanese Schoolgirls Shot in the Breast,” “Sexy Housewife Shot in the Breast,” “Tragic End of a Reporter,” and “Headshot AI.” This AI-generated content depicted women pleading for their lives moments before being stabbed, shot in the chest, or beheaded. The channel garnered over a thousand subscribers and more than 175,000 views. “Woman Shot A.I” brought these violent fantasies to life using Veo 3, a Google AI Studio tool promoted for its “greater realism… and unprecedented control to realize your most ambitious visions.”

Crucially, this content wasn’t hidden in the dark corners of the internet; it was readily available on a global video platform used daily by millions of children and adults. The failure of existing verification mechanisms, first within Veo 3, which is supposed to block prompts that violate Google’s policies, and then on YouTube, underscores the superficial approach taken towards violence against women. The controversial profile was removed only after the independent portal 404 Media reported the case and sought a response from YouTube; in essence, “Woman Shot A.I” had to become a PR liability before any real action was taken. While profit motives prevailed, officially prohibited sexually explicit and violent material slipped past both companies’ detection systems unnoticed.

The True Boundaries of Deepfake

AI-generated visual content that lacks any basis in reality (a point we will revisit) is known as a deepfake. The term’s first recorded use reveals an original purpose strikingly similar to the “Woman Shot A.I” case: as early as 2017, members of the r/deepfakes subreddit were superimposing the faces of famous women onto existing pornographic material. The widespread concern surrounding AI’s intrusion into every aspect of life has largely focused on its political implications and destructive potential: creating false narratives, spreading hate, and persecuting individuals or social groups. Far fewer approach with equal gravity the use of AI to degrade and intimidate women: the creation and dissemination of misogynistic deepfake pornography and the fueling of fantasies of murdering women, all enabled by readily accessible applications and programs.

It’s no coincidence that the titles of the controversial deepfake content mentioned earlier sound like hybrids of crime news and Pornhub material. In each of the 27 deleted videos, a menacing male figure, his back to the viewer, loomed over a frightened woman and fired a weapon at her.

Statistics consistently remind us that fantasies of killing and mutilating the female body are not new, nor are attempts to discredit and control women by associating them with sex and sexuality. The objective remains constant; only the means and methods have grown technologically more sophisticated. Consequently, anyone with a few dollars and malicious intent can exact revenge on an ex-partner by digitally “undressing” her with AI and sharing the fabricated nude images with friends on WhatsApp. All it takes is a few photos of her, usually effortless to obtain in an age of extreme technological dependence, and the deed is done. By the time the victim realizes what has happened, if she ever does, it’s already too late.

Pornographic material, much like a cockroach, burrows deep into computers, laptops, other electronic devices, clouds, and digital chambers, capable of surviving even a nuclear catastrophe. While we might like to believe we can distinguish reality from artificially generated content, AI learns and evolves at such a pace that it can now deceive even the most technologically savvy eye. In practice, this means the boundaries of our worlds are increasingly blurred: even if something didn’t happen in reality, it becomes part of our reality simply through interaction and consumption. AI-generated reality has real consequences; victims of deepfake pornography often compare their experience to rape.

Therapy Not Included

Deepfakes are increasingly becoming tools for blackmail, manipulation, and intimidation. In her book, The New Age of Sexism, British journalist and activist Laura Bates transports readers to the once-tranquil Spanish town of Almendralejo, where in September 2023, over 20 high school girls became targets of AI-driven abuse. Their fabricated nude images, created using the “ClothOff” app, circulated in local WhatsApp groups and online, causing many victims—the youngest just 11 years old—to refuse to leave their homes for days. The perpetrators were identified as a group of boys, their peers, who, out of sheer boredom, traumatized an entire generation of local girls.

However, before we cast stones at the boys, I believe it’s far more crucial to address the very existence of publicly available applications designed to “undress” anyone, especially minors. The behavior of these adolescents should not be viewed as an isolated incident. Instead, their decision to sexualize and humiliate their classmates should be interpreted within a broader social context where violence against women is routinely normalized—arguably, presented as the inherent burden of being a woman.

Because tech giants feed their algorithms data imbued with the dominant values and ideologies of the societies that produce it, technology merely perpetuates existing power structures. This is why, for instance, facial recognition systems often struggle to identify Black women, and many “undressing” apps simply fail when fed photos of men. The most literal example of this ideological mirroring can be found in the design of the new generation of female sex robots. As promotional videos for RealDoll products reveal, these robots “do everything for you,” which, together with hourglass figures, white skin, and customizable nipples, supposedly makes them perfect partners. One can only hope that the astronomical price tag of $11,349.99 for a model like Tanya includes a psychologist.

More Than Punishment

As with other forms of gender-based violence, education about the perils of modern technology often includes a section on how girls and women should be more careful in digital spaces, mindful of whom they friend and what photos they share, because, as the narrative goes, “the internet is full of crazy people.” Instead of focusing attention on perpetrators, or on tech companies like Meta, whose platforms regularly advertise deepfake-creation apps, or Google and YouTube, whose control mechanisms evidently fail to detect explicit content, blame is once again shifted to the victims. Why is the only solution we seem capable of offering one that silences women and restricts their freedom of movement, first at night, and now in the digital realm?

Bates shrewdly concludes that the only positive aspect of AI technology is that it definitively establishes that victims of revenge pornography were never responsible for the crimes and suffering inflicted upon them: “The great irony is that the very existence of deepfakes directly proves the absurdity of victim-blaming. When image-based sexual abuse first emerged… one of the most common responses to the problem… was, of course, the suggestion that women should stop taking intimate photos… And then came deepfake technology, which utterly debunked the notion that women who had never taken intimate photos were somehow protected from pornographic abuse. How ridiculous all those police officers, headteachers, and publicists now look when any woman, anywhere, regardless of whether she ever took such photos, can still have nude images of herself circulated across the internet, used to victimize and shame her. While we were so busy controlling women, perpetrators gladly used the time to develop increasingly sophisticated tools to ‘undress’ them.”

The term “non-consensual pornography,” in contrast to the commonly used “revenge porn,” acknowledges that the reasons someone might share another person’s intimate content without their consent can be more complex than mere vengeance. In the Almendralejo case, the primary motivation was a desire for popularity among peers. And let’s not forget the revenues of the pornography industry, valued at no less than $15 billion, with some recent estimates soaring to $100 billion.

A recent case involving Elon Musk’s xAI product, the Grok AI chatbot, being used to “undress” real adults and minors ultimately confirms the thesis that, for tech giants, profit always comes first. On New Year’s Day 2026, Musk shared a swimsuit photo of himself, created with Grok, on his X social media platform, encouraging other users to do the same, though presumably not with their own photos. Only after media reports highlighted a flood of controversial deepfake content on the former Twitter did Musk threaten “consequences” for anyone using Grok to create child pornography. Shortly thereafter, he began charging for the AI content-creation service. Taken together with OpenAI CEO Sam Altman’s announcement that ChatGPT would soon feature an “adult mode,” it is abundantly clear that the use of AI to fulfill users’ pornographic fantasies, whether innocent or pathological, is being intensely promoted.

All member states of the European Union (EU) have collectively recognized the dangers of AI technology. In Croatia, for example, Article 144.a of the Penal Code has, since 2022, allowed perpetrators who create or share sexually explicit material without the depicted person’s consent to be sentenced to up to one year in prison, extendable to three years if the material was made accessible to a wider audience. However, the question remains how this provision translates into practice. The boys from Almendralejo, Spain, were punished, but this was more an exception than the rule. While I am not advocating for prosecuting more children, it is unacceptable for perpetrators to evade punishment while their victims’ digital bodies remain at the mercy of unpredictable internet currents, especially when approximately 50% of non-consensual pornography victims contemplate suicide.

From videos of kittens chasing each other across beds to unsettling images of Donald Trump with Elon Musk’s foot in his mouth, anyone who hasn’t deleted their social media will tell you that the internet has devolved into a cesspool brimming with AI-generated content. Implementing a stricter regulatory framework is a crucial first step, but only an ideological and intellectual shift can truly rescue us from the dystopian nightmare into which we are increasingly sinking. It is, of course, important to teach children from an early age how to navigate dangerous digital waters, but it is even more important to instill in them empathy and solidarity, so that they would never consider using their classmates’ school photos to create fake pornographic material.