An advertisement for the PixVideo AI editing application has been banned by the UK’s Advertising Standards Authority (ASA) following public backlash over its portrayal of women. The YouTube ad, which appeared in January, featured a “before” and “after” transformation of a young woman, implying that users could digitally erase clothing, and raised alarms about the objectification of women in advertising.
The Controversial Ad
The advertisement showed a young woman with a red scribble obscuring her midriff in the “before” image, while the “after” image revealed bare skin, accompanied by the phrase “Erase anything” and a heart-eyes emoji. The ASA received eight complaints alleging that the ad sexualised and objectified women and was both offensive and irresponsible.
While the ASA did not determine whether the model featured was a real person or an AI-generated image, it concluded that the ad suggested viewers could use the app to alter women’s bodies without their consent. “Because the ad implied that viewers could use an app to remove a woman’s clothing, we considered it condoned digitally altering and exposing women’s bodies without their consent,” the ASA stated in its ruling.
Saeta Tech’s Response
Saeta Tech, the company behind PixVideo, acknowledged that the advertisement could cause offence but attributed the backlash to its misleading presentation rather than the app’s intended functionality. The company said it prohibits the creation of nude or sexually explicit content and uses automated tools to detect and block such imagery. Following the ruling, Saeta Tech has paused all advertising while it conducts an internal review.

Broader Implications in the Tech Industry
The controversy surrounding PixVideo’s advertisement arrives at a time when the tech industry is grappling with ethical considerations surrounding AI and digital content creation. Earlier this year, a similar situation arose when Elon Musk’s chatbot, Grok, was implicated in generating sexualised images, prompting a global outcry and leading to restrictions on the bot’s functionalities in certain jurisdictions.
In response to mounting concerns over digital manipulation and its effects, the UK government announced plans in December to criminalise the development and distribution of AI tools that enable users to edit images to appear as if clothing has been removed. These new regulations aim to expand the existing framework governing sexually explicit deepfakes and the misuse of intimate images.
The Fight Against Digital Exploitation
The ASA’s decisive action against PixVideo reflects a growing intolerance for advertisements that perpetuate harmful stereotypes and objectify individuals, particularly women. As technology advances, the intersection of AI and digital representation is under increasing scrutiny, with regulators and society alike demanding accountability from tech companies.

The UK’s commitment to legislate against the misuse of AI in this context underscores a broader societal shift towards safeguarding individuals from potential exploitation in the digital realm.
Why It Matters
This incident is a stark reminder of the ethical responsibilities tech companies bear in today’s digital landscape. As AI capabilities evolve, so does the potential for misuse, necessitating stringent oversight and proactive measures to protect individuals from exploitation. The ASA’s swift action may signal a turning point, pushing the industry towards greater accountability for the societal impacts of digital content creation.