Campaigners from Stop Image-Based Abuse have delivered a petition bearing more than 73,000 signatures to Downing Street, as new legislation criminalising the creation of non-consensual AI-generated explicit images comes into force. Advocates welcome the measures as a step forward but insist they do not go far enough to safeguard victims.
Legal Changes and Victim Advocacy
The new law, an amendment to the Data (Use and Access) Act 2025, received royal assent last July but has only recently become enforceable. Victims and campaigners have voiced frustration over the delay, which they say may have created countless additional victims in the interim. Jodie, a victim who chose to remain anonymous, expressed relief at the law’s implementation but lamented: “The delay has caused millions more women to become victims, and they won’t be able to get the justice they desperately want.”
Jodie, who discovered in 2021 that deepfake images of her were circulating online, has spoken openly about her experience. Alongside 15 other women, she testified against Alex Woolf, a 26-year-old convicted of posting non-consensual explicit images online. “I had a really difficult route to getting justice because there simply wasn’t a law that really covered what I felt had been done to me,” she said, highlighting the obstacles victims face in seeking redress.
Calls for Broader Protections
The petition presented to the government not only calls for stronger criminal penalties but also advocates for civil remedies such as takedown orders for abusive content on various platforms. Additionally, campaigners are pushing for enhanced relationships and sex education and increased funding for support services like the Revenge Porn Helpline.
Madelaine Thomas, a sex worker and founder of the tech forensics firm Image Angel, expressed mixed feelings about the new legislation. Although elated at the progress, she pointed out gaps in the law that leave sex workers particularly vulnerable. “When commercial sexual images are misused, they’re only seen as a copyright breach… By discounting commercialised intimate image abuse, you are not giving people who are going through absolute hell the opportunity to get the help they need,” she said. Having dealt with the trauma of her own images being shared without consent for years, Thomas highlighted the urgent need for more comprehensive protections.
Government Response and Future Actions
Responding to growing concerns about deepfake technology, a Ministry of Justice spokesperson said: “Weaponising technology to target and exploit people is completely abhorrent.” The official emphasised that creating non-consensual deepfakes is now a criminal offence, and that action is being taken against companies producing harmful applications. The government also intends to classify the creation of such content as a priority offence under the Online Safety Act, which will place additional responsibilities on digital platforms to prevent its dissemination.
As the landscape of digital abuse evolves, campaigners remain vigilant, advocating for further measures to protect individuals from the misuse of technology.
Why It Matters
Deepfake image abuse is not merely a technological concern; it is a pressing social justice issue that disproportionately affects women. As digital platforms expand, so do the risks of non-consensual imagery and the exploitation of personal data. The new legislation marks a critical step towards accountability and protection for victims, but the continued push for civil remedies, better education, and funded support services underscores the need for systemic change. Addressing these gaps is essential to fostering a safer online environment and ensuring victims can access the justice they deserve.