ADVERTISEMENT FEATURE:

Fighting the bad and finding the good in generative AI

SAS Feature

As generative AI bursts onto the scene, it promises to transform people’s personal and professional lives forever. It could hold the key to police forces operating more effectively – but it is also already being used by criminals for nefarious purposes. UK forces must consider how they can harness GenAI while preventing its misuse.

Like many emerging technologies, AI, and more recently GenAI, are provoking both excitement and fear. Excitement, because although it has been in the making for a long time, AI is delivering tangible benefits for real-world use cases; GenAI, as an evolution of AI, promises the same in terms of benefits. Fear, because of value, risk and ethical considerations, both known and unknown.

As criminals become smarter about technology, UK police forces must do the same. Thankfully, solutions are available: forces have an opportunity to apply AI techniques to sift through data, identify investigative opportunities and risks to the public more quickly and accurately, prioritise reports, and respond more effectively.

Police forces are on the front line when it comes to identifying and stopping new ways of committing crime. AI techniques are already being used by criminals to cause harm; and although GenAI is so novel that there is still no widely accepted definition of it, criminals are exploiting this newer technology too.

For example, there is a disturbing trend of using AI-generated imagery, videos and voice recordings to target individuals for extortion or blackmail. Criminals can use this material to convince a victim that a loved one is in danger, extracting money from them through deception. Alternatively, criminals can threaten to ruin someone’s reputation or livelihood using materials that misrepresent the victim engaging in embarrassing or illegal activities. They threaten to release these materials publicly unless payment is made.

Distressingly, offenders are also using existing child sexual exploitation (CSE) images to generate new materials using AI. Forces are under pressure to find the real-life source(s) of the materials so they can identify and help victims.

In all of these scenarios, UK police forces must sift through the huge number of reports they receive to home in on the most urgent cases. Since social media networks have enabled reporting functionality, there has been a huge leap in the amount of information handed over to police.

The sheer volume and complexity of data represent a massive challenge, and with limited resources most organisations are keen to find help making sense of this data deluge. Solutions exist to recognise whether images are AI-generated, but criminals are finding ways to make this an increasingly difficult task.

Although still at the prototype stage, SAS digital assistants can also support UK police forces. These assistants scan large volumes of information, then cross-reference a force’s standard operating procedures and investigation standards documentation to suggest possible avenues of enquiry – injecting a minimum standard of investigation while bolstering the human decision-making process. As forces contend with the sheer volume of data and a diminishing level of experience, these assistants can help officers manage heavy caseloads by ensuring all actions have been considered, presented in a way that is relatable to the investigator.

Generative AI

While there is no official or generally agreed upon definition of exactly which technologies fall under the generative AI umbrella, SAS considers digital twins, synthetic data generation and large language models all to have generative qualities.

“So much of policing is conducted under immense time pressure and with a huge burden of responsibility. While generative AI is adding to the types and volume of crimes that forces have to deal with, it can also support officers in making better decisions faster. At SAS, we’re committed to helping UK police forces harness this emerging technology to improve people’s lives.”
Ashley Beck, CSE Lead, SAS,
and former Police Scotland officer

Digital twins – virtual models of an object, system or process – are arguably the most widely used GenAI technique today.

High demand exists for synthetic data generation as a technique for preserving privacy and scaling insights. Generating data by rules or algorithms has many applications where sensitive, secret or hard-to-obtain data must be handled. For example, because data around CSE events is limited, building a model to surface trends and patterns in the traditional way is difficult. With synthetic data generation, more robust detection models can be built and risk scenario analysis can be performed to help identify emerging CSE trends. The technique is also yielding results in the protection of other vulnerable citizens, such as in efforts to prevent child labour and people trafficking.
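To illustrate the idea in its simplest form – and this is purely a sketch, not a SAS product or a real policing dataset; every field name, rule and threshold below is invented – a rule-based synthetic data generator produces plausible labelled records from encoded assumptions, so that a detection model can be trained even when real examples are scarce:

```python
import random

def make_synthetic_records(n, seed=0):
    """Generate synthetic 'report' records from simple hand-written rules.

    Illustrative only: the fields and rules are invented assumptions,
    not derived from any real data.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    records = []
    for _ in range(n):
        high_risk = rng.random() < 0.2  # rule: roughly 20% of reports are high risk
        records.append({
            # rule: high-risk cases show far more contact activity
            "messages_per_day": rng.gauss(40, 10) if high_risk else rng.gauss(8, 3),
            "new_contacts": rng.randint(5, 15) if high_risk else rng.randint(0, 3),
            "label": "high" if high_risk else "low",
        })
    return records

records = make_synthetic_records(1000)
high = sum(1 for r in records if r["label"] == "high")
print(f"{len(records)} synthetic records, {high} labelled high risk")
```

Because the rules are explicit, analysts can vary them to run risk-scenario analysis – for example, shifting the behavioural thresholds to test how well a detection model copes with an emerging pattern.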

Natural language processing (NLP) techniques are already used by police forces to deconstruct and make sense of huge volumes of textual data, such as crime and interview notes. Large language models (LLMs) are an evolution of NLP, designed to process and generate natural language text. Typically trained on massive amounts of text data, they capture complex relationships in language, allowing patterns to be more easily identified and surfaced to aid police investigations.

Crucially and regardless of the techniques deployed, AI solutions should be as transparent and explainable as possible. We understand that UK police forces must be able to demonstrate in court how they reached a decision, and AI should help with building that audit trail so that every technology-augmented step is provably documented, auditable, logical and defensible. Governance and traceability aren’t always possible with open-source tools.

As UK police forces grapple with the new types of crime emerging from the use of GenAI, teaming up with the right partners can help them get ahead of the game. SAS is already known for leadership in emerging technologies – our foundation in data analytics informed our later work in AI and machine learning – and we now find ourselves on the cusp of another iteration with GenAI.

We believe developing AI responsibly and in a trustworthy manner requires a comprehensive approach that puts AI governance first, then technology. Trustworthy AI starts before the first line of code is written and should always prioritise the best interests of people, be they citizens or members of police forces.

  • For more information read A Comprehensive Approach to Trustworthy AI Governance.
  • Or contact [email protected] or [email protected] to discuss.

