Digital technologies, devices and the internet are producing huge volumes of data, along with ever greater capacity to store it, and those developments are likely to accelerate.
For law enforcement, a critical capability lagging behind the pace of innovation is the capacity to screen, analyse and render insights from the ever-increasing volume of data – and to do so in accordance with the constraints on access to and use of personal information within a democratic system.
Artificial intelligence (AI) and machine learning offer the public and private sectors valuable tools for screening big and live data. AI is also commonly considered and marketed as a solution that removes human bias, although AI algorithms and the datasets they’re built on can perpetuate human bias, so they aren’t value- or error-free.
In light of the many and varied solutions AI offers, the Australian Government is building the necessary policy and regulatory frameworks to pursue the goal of positioning Australia as a ‘global leader in AI’.
Recent initiatives include the AI Action Plan launched in 2021 as part of the Digital Economy Strategy, CSIRO’s 2019 AI Roadmap, and the voluntary AI Ethics Framework, which sets out eight principles for AI to be safe and democratically legitimate.
In addition, more than $100 million has been pledged to develop the expertise and capabilities of an Australian AI workforce and to establish public-private partnerships to develop AI solutions to national challenges.
Informing judgements
The federal government and many private companies broadly conceptualise AI as an exciting technological solution to ‘strengthen the economy and improve the quality of life of all Australians’ by inevitably ‘reshap[ing] virtually every industry, profession and life’.
There’s some truth there, but how that reshaping occurs depends on choices. For policing, those include choices about how data and insights are used, and about how those technologies can inform direct human judgements and relationships rather than discount and disempower them.
In a new ASPI report I explore some of the limitations on the use of AI in policing and law enforcement scenarios, possible strategies to mitigate the potential negative effects of AI data insights and decision-making in the justice system, and implications for regulation of AI use by police and law enforcement in Australia.
It is problematic, to say the least, that the ethics framework designed to ‘ensure AI is safe, secure and reliable’ is entirely voluntary for both the public and private sectors. Nor is it supported by any laws specifically addressing the use of AI or other emerging technologies, along the lines of the European Union’s General Data Protection Regulation.
For policing agencies, AI is considered a force multiplier not only because it can process more data than human analysts conceivably could within required time frames, but also because it can yield insights that complement the efforts of human teams solving complex analytical problems.
Many types of AI, serving many purposes, are under consideration, in trial or in use across policing globally. Examples include recidivism risk assessments used to inform parole decisions or to prompt pre-emptive, deterrent police visits to offenders’ homes; public-safety video and image analysis, using facial recognition to identify people of interest or to decipher low-quality images; and forensic DNA testing.
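To make the first of those examples concrete, here is a deliberately simplified sketch in Python of what such a risk score can reduce to. The features, weights and values are invented for illustration and are not drawn from any real tool.

```python
# Hypothetical recidivism risk score of the kind used to inform parole
# decisions. Features and weights are invented for illustration only.

def recidivism_risk(prior_recorded_offences: int, age: int, employed: bool) -> float:
    """Return a risk score in [0, 1] from a hand-weighted linear model."""
    score = 0.5
    score += 0.08 * prior_recorded_offences  # more recorded offences -> higher risk
    score -= 0.01 * max(age - 18, 0)         # risk assumed to decline with age
    score -= 0.15 if employed else 0.0       # employment assumed protective
    return max(0.0, min(1.0, score))

# Note the first feature: it counts *recorded* contact with police, not
# offending itself, so any bias in past patrolling or charging decisions
# flows directly into the score.
print(round(recidivism_risk(prior_recorded_offences=3, age=25, employed=False), 2))  # 0.67
```

The point is not the arithmetic; it’s that such a score is only as neutral as the records feeding it.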
Safe, secure and reliable
AI algorithms, or models, promise to process high volumes of data at speed while identifying patterns; to supercharge knowledge management while (supposedly) removing human bias from the process (we now know that AI can in fact learn and act on human bias); and to operate with ethical principles coded into their decision-making. That promise, however, is not a guarantee.
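A minimal sketch, with invented numbers, shows how that learning happens. Suppose two districts have identical true offence rates, but one was historically patrolled twice as heavily; a naive allocator trained on recorded offences never corrects the skew:

```python
# Two districts with IDENTICAL true offence rates per resident.
true_offence_rate = {"district_a": 0.05, "district_b": 0.05}
population = {"district_a": 10_000, "district_b": 10_000}

# Historical bias: district_a was patrolled twice as heavily, so twice as
# many offences were *recorded* there, not committed there.
patrol_share = {"district_a": 2 / 3, "district_b": 1 / 3}

def recorded_offences(share):
    # Recorded offences scale with patrol presence as well as offending.
    return {d: true_offence_rate[d] * population[d] * share[d] for d in share}

for period in range(3):
    records = recorded_offences(patrol_share)
    total = sum(records.values())
    # The 'model': allocate next period's patrols in proportion to records.
    patrol_share = {d: records[d] / total for d in records}
    print(period, {d: round(s, 3) for d, s in patrol_share.items()})
# The 2:1 patrol skew persists every period, despite equal true rates.
```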
Holding AI to the ‘safe, secure and reliable’ standard of the Australian ethics framework requires the ability to know and understand, comprehensively, how an algorithm makes decisions, and how ethical decision frameworks are coded into its design, development and training on historical and live datasets.
In fact, there are significant technical and implementation barriers to achieving those aims (for example, the market disincentive of sharing the finer details of how a proprietary AI product works).
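One way to see the implementation barrier: when a model ships as a sealed product, an external reviewer can only probe inputs and outputs. A hypothetical sketch (the vendor model here, including its postcode dependence, is invented):

```python
def vendor_score(record):
    # Stand-in for a proprietary model the auditor cannot open; imagine
    # it ships as a sealed binary. Its postcode dependence is hidden.
    base = 0.3 + 0.05 * record["prior_offences"]
    return base + (0.2 if record["postcode"] == "2999" else 0.0)

def flip_probe(record, field, alternative):
    """Compare scores on two records identical except for one field."""
    flipped = dict(record, **{field: alternative})
    return vendor_score(record) - vendor_score(flipped)

person = {"prior_offences": 1, "postcode": "2999"}
# Postcode alone shifts the score, which the auditor can detect only by
# probing, and only for the probes they think to run.
print(round(flip_probe(person, "postcode", "3000"), 2))  # 0.2
```

Probing of this kind can reveal that something is wrong, but not why; closing that gap is exactly what the ‘safe, secure and reliable’ standard demands.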
It’s broadly understood that human bias can compromise both police outcomes (reducing and preventing crime, successfully prosecuting perpetrators, and delivering justice for victims of crime) and the community trust that enables effective policing.
This can make AI seem like a solution; if it’s adopted without knowledge of its limitations and potential errors, however, it has the potential to create more, and compounding, problems for police.
While researchers are fond of analysing ‘human bias’ in systems, the humanity of individuals also really matters for how they do their work and engage with their communities. It’s a strength of the community policing function, not something to be edited out by technology, no matter how powerful and large the datasets may be.
This insight can help shape how policing works with AI and other new technologies, and how human analysts can prevent coded human bias from running unchecked in AI systems.
We can be certain that AI is here to stay. Appropriate regulation of its use in law enforcement scenarios is imperative to mitigate the significant potential impacts on justice outcomes and civil liberties.
If Australia wants to ensure AI is safe, secure and reliable, we need at the very least an ethics framework that is compulsory and legally enforceable.
This article first appeared on the ASPI Strategist website, and is republished here with permission. Dr Teagan Westendorf is an analyst in the Northern Australia Strategic Policy Centre and the strategic policing and law enforcement program at ASPI. Her new report, Artificial Intelligence and Policing in Australia, is also available to download from the ASPI website.