ANALYSIS: How the Australian Border Force can exploit artificial intelligence

As part of his speech to the recent Milipol Asia-Pacific Conference in Singapore, Australian Border Force Commissioner Michael Outram highlighted the potential that artificial intelligence offers his force and others in detecting and disrupting criminal activities and responding to crises.

Our ability to plan for crises, to reduce the uncertainties they present and to quickly diagnose the effectiveness of our actions in novel scenarios – all of this opens up a conversation about the possibilities and challenges of artificial intelligence (AI).

The extraordinary power of AI to support the Australian Border Force (ABF), analysing data in close to real time and at scale, is already helping our officers to detect and disrupt all manner of criminal activities. AI is giving us more capacity to detect and disrupt new threats at the border, and before they even reach it.

The ABF is well advanced in developing our Targeting 2.0 capability, which will incorporate all of our assessments of border-related threats, risks and vulnerabilities, along with new data from industry and partners, to support our decision-making.

Targeting 2.0 seeks to apply the extraordinary power of AI to complement and amplify the deep expertise of our people, to identify new patterns at speed and at scale, to detect and disrupt crime as it happens, and, in time, to get ahead of the perpetual evolution of criminal activities.

As AI continues to evolve we’re going to be able to look at an increasingly bigger picture and start addressing problems at the systems level – whether in terms of threat discovery, modelling or disruption.

Reducing uncertainty

Our jobs and the world in which we operate are going to be very different in the coming years because of AI – whether it’s strategic planning, preparedness, operational planning and response, augmented decision-making, or being able to respond to or get ahead of threats.

The concept of digital twins – virtual models designed to accurately mirror a physical process, object or system – is one of the things that has grabbed our attention. And social systems are well in scope, opening the door for policy twins. A digital representation of a policy could include legislation as code, relevant data, modelling tools, impact monitoring and more.
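
To make ‘legislation as code’ concrete, here is a minimal, hypothetical sketch in Python of one rule a policy twin might contain. The Traveller type, the concession rule and the dollar thresholds are all invented for this illustration; they do not describe ABF systems or actual legislation.

```python
from dataclasses import dataclass

@dataclass
class Traveller:
    goods_value_aud: float
    is_adult: bool

def duty_free_concession_applies(t: Traveller) -> bool:
    # One policy rule, expressed as an executable, testable function.
    # The thresholds are hardcoded here purely for illustration.
    threshold = 900.0 if t.is_adult else 450.0
    return t.goods_value_aud <= threshold

# A policy twin can replay the rule over real or simulated traveller data
# to measure the impact of a proposed change before it is enacted.
travellers = [Traveller(650.0, True), Traveller(1200.0, True), Traveller(500.0, False)]
over = sum(not duty_free_concession_applies(t) for t in travellers)
print(f"{over} of {len(travellers)} travellers exceed the concession")
```

Because the rule is executable, a proposed change to a threshold can be replayed against historical or simulated data and its impact measured before the policy takes effect.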

AI is only going to accelerate our ability to design and implement policy twins as well as other digital twins. Add in the incredible horsepower of quantum computing, and we’ll be able to have digital and policy twins of things as complex as the entire Australian border and all its related infrastructure and systems.

We should eventually be able to model the effects of a crisis across the whole border continuum, more easily, and on an enterprise scale.

There are many other technological advances contributing to the immense power of AI, including neural network architectures, edge computing, blockchain, and augmented/virtual reality.

Another one of the tools at our disposal is Bayesian belief networks – advanced decision-making maps that consider how different variables are connected and how certain or uncertain those connections are in determining an outcome.

They aim to reduce uncertainty in decision-making, and help to determine the probability of an event based on what we know.
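
As a concrete sketch, the toy belief network below is written in plain Python, with an invented three-node structure (a high-risk route influences the chance of contraband, which influences the chance of an x-ray alert) and made-up probabilities; it simply shows how observing evidence updates belief in an outcome.

```python
from itertools import product

# Prior and conditional probability tables; all numbers are illustrative.
P_route = {True: 0.10, False: 0.90}                 # P(HighRiskRoute)
P_contraband = {                                    # P(Contraband | Route)
    True:  {True: 0.30, False: 0.70},
    False: {True: 0.02, False: 0.98},
}
P_alert = {                                         # P(XrayAlert | Contraband)
    True:  {True: 0.85, False: 0.15},
    False: {True: 0.10, False: 0.90},
}

def joint(route: bool, contraband: bool, alert: bool) -> float:
    """Joint probability, factorised along the network's edges."""
    return P_route[route] * P_contraband[route][contraband] * P_alert[contraband][alert]

def posterior_contraband(alert: bool) -> float:
    """P(Contraband = True | XrayAlert = alert), by enumerating the joint."""
    num = sum(joint(r, True, alert) for r in (True, False))
    den = sum(joint(r, c, alert) for r, c in product((True, False), repeat=2))
    return num / den

print(f"P(contraband | x-ray alert)    = {posterior_contraband(True):.3f}")
print(f"P(contraband | no x-ray alert) = {posterior_contraband(False):.3f}")
```

With these illustrative numbers, the prior probability of contraband is about 4.8 per cent, rising to 30 per cent once an x-ray alert is observed – the network quantifies exactly how much the new evidence reduces uncertainty.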

Imagine an array of sensors and data feeds, technology stacks with learning ability and visualisation tools; now incorporate digital twins, Bayesian belief networks and quantum computing. We’ll be able to model crises and our responses, with augmented decision-making and the ability to monitor those decisions’ impact on complex social systems, during a crisis.

Building readiness

But the future is hard to predict, and we always have to factor people into our equation, because AI won’t supplant human judgment, accountability and responsibility for decision-making; it will augment them.

For many governments, building and maintaining trust will be key to gaining the social licence to implement AI systems like those I’ve described.

People must trust that our data is secure; trust the information we push to them and that they pull from us; trust our people; trust that we won’t misuse personal information; and trust that we won’t act unlawfully or unethically.

One of the best ways to build trust is to demonstrate, measurably, the benefits to people of sharing their information and data with us. Take truly seamless and contactless travel through digital borders: to collect the data we need from travellers, we have to emphasise the benefits of providing biometrics, for example. Travellers will reap economic and personal benefits like time savings and convenience.

I think we have to introduce human-centric measures of success into our success criteria, budgeting and operating models, so that our AI systems aren’t judged solely on value for money, but also on their positive effect on people. We’ll have to monitor outputs and public impacts to ensure systems are operating as they should, and not leading to unintended bias or harm.

In Australia more broadly, we’re currently developing whole-of-government AI policy and legislation, along with a consistent approach to AI assurance.

In the ABF we’re focused on developing practical and effective AI guardrails and governance, and robust data science.

For us, it’s not just about the ethical and responsible design of AI systems. It’s also about assurance: monitoring outputs and impact, ensuring independent oversight of our systems, and putting appropriate transparency measures in place.

Given the threats we will face at our borders, and the likelihood of future crises, we have to start building now to be ready for the future – by assembling vast amounts of data ready to be fed into AI, by getting our people ready to use it, and by genuinely reinforcing trust.

Those who don’t start building readiness for AI into their systems now are going to have a hard time adapting when it becomes imperative to do so.

This article first appeared on the ASPI Strategist website, and is drawn from ABF Commissioner Michael Outram’s speech to this month’s Milipol Asia-Pacific conference; the article is republished with permission – you can read the original here.

