The Pentagon is on the hunt for an ethicist to “help guide the Defense Department’s development and application” of AI.
Joint Artificial Intelligence Center (JAIC) director Jack Shanahan said the integration of AI into US military operations was an ongoing process.
"We still have a long way to go to help bring pilots, prototypes, and pitches across the technology 'valley of death' to fielding and updating artificial intelligence-enabled capabilities at speed and at scale," he said.
A year ago, Shanahan’s team was small and had almost no money.
Now, it has 60 employees and a requested budget of nearly $400 million.
“Our team is spending a lot of time working with the Defense Innovation Board, but also just internally and with the service components on this question about the ethical use of AI, the safe use of AI, the lawful use of AI," Shanahan said.
"In 35-plus years in uniform, I have never spent the amount of time I am spending now thinking about the ethical employment of AI.
“We do take it very seriously.”
The deployment of autonomous weapons systems is a controversial topic.
In August, the UN Convention on Certain Conventional Weapons (CCW) met to once again discuss the use of lethal autonomous weapons.
The meeting did not reach any conclusions.
Prominent Australian opponent of autonomous weapons, Toby Walsh, told Information Age that the recent talks seemed to have taken a backwards step on the path to limiting the use of killer robots.
“The number of nations calling for a preemptive ban did go up to 29, but there is still a lot of resistance from Russia, the UK, and the US to move forwards on any sort of regulation,” Walsh said.
“One of the challenges here is that AI is a tech that is dual use – it can be used for good or for bad."
“Much of the research isn’t building a weapon as much as it’s about making more informed decisions and fewer mistakes – and that’s a good thing," Walsh said.
“But if you take some of the same software and put it into a drone, it could remove the human completely. So it’s a short step from working on a system that’s just offering advice to humans and reducing casualties, to building a fully autonomous weapon.”
Examples of military hardware with autonomous elements already exist.
Israel Aerospace Industries, for example, has been advertising its HARPY weapon for years.
The ‘loitering munition’ flies around an area until it detects a radar antenna, at which point the missile homes in and detonates its warhead on the target.
A Department of Defence spokesperson told Information Age that Australian defence forces do not currently use fully autonomous weapons.
“Defence is investigating the use of artificial intelligence and autonomous systems for weapons that augment human intelligence within a ‘system of control’,” the spokesperson said.
“A system of control is the way any weapons system, whether autonomous or not, is to be adopted in compliance with international humanitarian law.
“The Australian Defence Force acquires and operates all weapons systems – now and into the future – in accordance with Australia’s international and domestic legal obligations, including humanitarian law.
“If a weapon system is not compliant, it will not be deployed.”
Because of the competitive edge AI can offer a modern military, there are obvious strategic reasons why the likes of the US and Russia have so far blocked a prohibition.
But Toby Walsh warns that, as these systems become more advanced, the odds of them being used in abhorrent acts of violence greatly increase.
“There is a real risk of proliferation of weapons that can fall into the wrong hands and be misused,” he said.
“They would be terrible weapons in the sense that they would do whatever you programmed them to do, however evil it might be.
“And there are people who would remove whatever ethical safeguards a military might put in and use them to target civilians; to use them against people in terrible ways.
“I’m sure we will one day see it regulated by the UN. We need to regulate them.”