When is a military aircraft not a military aircraft, and what does that mean for the rules of war? Would you emotionally unload to an AI dog about difficult personal experiences?
These questions should interest security sector governance and reform (SSG/R) professionals.
SSG/R professionals are driven by the goal of improving the quality of security provision around the world – making it more effective, more accountable and, generally speaking, more people-centric. In other words, we are interested in both security and safety, and in their governance. AI is relevant because, while it has the capacity to transform society as a whole, we have no idea yet how it will change our security and political landscape in particular.
The therapeutic robodog Therabot presents intriguing opportunities for filling gaps in the provision of security and safety services. According to its creators at Mississippi State University, Therabot can help sufferers of PTSD by providing a friendly and non-judgemental interlocutor. Such a capability has clear uses for sensitive police investigations.
Spot, another AI quadruped, has been deployed by police in the US for tasks such as assessing a child kidnapping situation without endangering officers. In European fire-fighting, Fotokite's tethered drone and Shark Robotics' firefighting unit both offer civil defence officers extra capability and greater safety.
So much for potential enhancements in service provision, but what about governance and accountability? Responding to the dizzying speed of AI deployment in public life, prominent intellectuals and AI developers are issuing frightening warnings about the risk to humanity. This is clearly a concern for people-centred security since security actors are likely to be equipped with new tools while regulation struggles to catch up.
In conflict situations, partially or fully automated weapons systems are also generating demanding governance questions, including how to assign responsibility when they are hacked. Speaking in Geneva in July, Dr William Boothby raised the conundrum of whether a military aircraft retains its status as a military aircraft if it is partly or fully piloted by AI. If it loses that status because it is not fully under the command of a military entity, it may also lose its right to take part in military operations under international law.
These challenges to accountability fall clearly within the purview of SSG/R, and they are only a tiny sample of what is coming. Yet, so far, SSG/R institutions have been rather tentative in their engagement.
SO WHAT SHOULD WE DO?
An important first step is to engage in the public conversation about the relationship between security and AI, and to inject SSG/R expertise into emerging regulatory frameworks. In doing so, we can weave the threads of accountability and people-centred security into their development. SSG/R practitioners already have decades of experience building accountability through oversight mechanisms, capacity building and other tools. These can also be applied to AI technology.
Second, we need to skill up and collaborate. SSG/R practitioners tend to have strong qualitative and people skills; now they need to learn about AI and be willing to collaborate with technologists. This will put us in a better position for the third step: exploring ways to apply AI as an accountability tool in the security sector. Towards this goal, we can look to good examples in adjacent fields, where impressive projects are underway in public procurement and human rights – for instance, the Office of the United Nations High Commissioner for Human Rights is deploying AI to track attacks on human rights defenders.
All of this suggests a wealth of applications for SSG/R, and we must put ourselves at the forefront of the conversation to help influence and guide its development. The dogs of AI are barking; it is time to pay heed.