The UK’s law enforcement agencies are increasingly adopting artificial intelligence (AI) to enhance their crime-fighting capabilities. The journey, however, is fraught with challenges, particularly the inherent biases present in many AI systems. Alex Murray, the director of threat leadership at the National Crime Agency (NCA), acknowledges these risks but remains optimistic that a newly established AI centre can mitigate them.
Acknowledging Bias in AI
During a recent interview, Murray candidly addressed the unavoidable bias embedded in AI technologies used for policing. He highlighted that these systems often rely on historical data, which can reflect and perpetuate existing societal prejudices. This raises significant concerns, especially regarding the potential for algorithms to disproportionately target minority communities or misidentify individuals based on race, gender, or socio-economic background.
Murray stated, “It’s essential to recognise and minimise bias. However, the next step involves training officers to effectively interpret the outputs to ensure further minimisation.” He emphasised the importance of engaging data scientists and engineers to refine the data, train models appropriately, and rigorously test the systems before deployment. “There is no point in introducing biased technology into policing without acknowledging its flaws. Our goal is to mitigate these biases to a comprehensible and manageable level,” he added.
The Role of the National AI Centre
The establishment of the £115 million national AI centre aims to streamline the use of AI in policing across England and Wales. At present, individual forces make independent decisions about technology adoption, an approach that is often inefficient. Murray envisions that the centre will not only reduce bias but also evaluate private sector products for effectiveness.

“It’s an arms race,” he remarked, referring to the race between law enforcement and criminals, who increasingly leverage technology for nefarious purposes. This was exemplified in a recent case where a convicted paedophile attempted to claim that incriminating images were deepfakes, prompting police to work diligently to disprove this assertion.
Real-World Applications of AI in Policing
While Murray acknowledges the concerns surrounding AI, he is also keen to highlight its transformative potential across policing scenarios. He noted that AI could significantly expedite tasks traditionally slowed by manual effort: for instance, it can drastically reduce the time required to sift through extensive CCTV footage or to analyse digital evidence from seized devices.
In one notable case, police in Luton successfully apprehended suspects involved in cashpoint thefts through AI-assisted investigations. The technology enabled officers to quickly analyse data from the suspects’ phones, translating Romanian material and identifying key evidence, which led to swift guilty pleas.
Trevor Rodenhurst, the chief constable of Bedfordshire, remarked on the positive shift in frontline officers’ attitudes towards AI: “As they experience the benefits, they are no longer sceptical; they are eager to utilise these capabilities. This transformation is significant.”
Balancing Innovation and Oversight
Despite the enthusiasm surrounding AI’s potential, there are calls for greater oversight and accountability in its implementation. The Association of Police and Crime Commissioners (APCC) has voiced concerns over the lack of transparency regarding system failures, particularly in retrospective facial recognition technologies. Darryl Preston, the APCC’s forensic science lead, underscored the need for independent scrutiny, stating, “The presence of bias in the police national database’s systems indicates that technology must be thoroughly tested to eliminate any unfairness before deployment.”

Why it Matters
As UK law enforcement agencies navigate the complexities of integrating AI into their operations, the focus on bias and oversight will be crucial. The establishment of a national AI centre represents a proactive step towards harnessing technology’s potential while addressing its flaws. How successfully the police manage these challenges will set a precedent for the future of policing and technology, ultimately shaping public trust and safety in a rapidly evolving digital landscape. The stakes are high, and the path forward will require balancing innovation with ethical responsibility.