In a bid to safeguard the integrity of this year’s Scottish and Welsh elections, election officials are racing to implement a pilot project using software designed to detect AI-generated deepfake videos and images. The Electoral Commission in Scotland, working with the Home Office, expects the software to be operational before the election campaigns begin in late March.
Sarah Mackie, the commission’s chief in Scotland, has emphasised the importance of this initiative, stating that if the software identifies a hoax video or image, officials will promptly notify the police, the candidate concerned, and the public. However, Mackie acknowledged that the software cannot provide a 100% guarantee of accuracy.
The commission is also seeking legally enforceable “takedown” powers that would require social media platforms to remove any identified hoax material. Mackie lamented the current voluntary nature of such actions, saying, “What we don’t have at the moment, and what we want, is called takedown powers – where we approach social media companies and demand something is taken down.”
The use of deepfakes has surged in elections abroad, a trend exacerbated by the proliferation of free AI image-generation tools. British elections and referendums have repeatedly been targeted by fake social media accounts, often state-sponsored, from countries including Russia, Iran and North Korea, designed to sow discord and amplify controversy.
Alongside the deepfake detection efforts, the commission is also collaborating with the Scottish Parliament and police on a “safety and confidence” project to support women and ethnic minority candidates who experience gender- or race-based abuse or harassment. A 2022 study found that about half of all female election candidates had experienced abuse, with many saying they would not stand again. Candidates from minority ethnic backgrounds also reported that the abuse made them too scared to run, undermining diversity in Holyrood.
Mackie emphasised that the commission is also working to address the emerging threat of AI-driven pornographic “undressing” technology, particularly that generated by Elon Musk’s Grok AI platform, which could be used to target candidates during an election. Such content would be reported to the police.
The pilot project, if successful, could be rolled out for all UK elections, as the commission and Home Office seek to navigate the legal complexities surrounding the regulation of deepfakes during elections. Mackie acknowledged the “empty space” in the current regulatory framework, saying, “We don’t regulate campaigning, but there is an empty space here where it’s a little bit like there’s lots of regulations surrounding the edge of the ring.”
The Home Office has stated that the UK’s Online Safety Act requires social media companies to remove unlawful content and prevent the spread of false information that could cause harm. The pilot project, the Home Office says, “will help to detect and tackle deepfake material and protect the public from the impact of disinformation.”