Artificial intelligence (AI) is the next frontier in armed conflict, and it looks like the US is ready to set the rules of engagement.

The US Defense Innovation Board (DIB) released a report proposing rules of conduct for the use of artificial intelligence in military applications.

And the list of recommendations makes no mention of outright banning AI's military use, thereby cementing its place in the future of US military strategy.

The report was commissioned by the US Department of Defense (DoD) to lay a foundation for including AI in its ethical framework, to shape international norms on AI use in warfare, and to develop a military advantage.

“Now is the time, at this early stage of the resurgence of interest in AI, to hold serious discussions about norms of AI development and use in a military context – long before there has been an incident,” said the report.

AI is a rapidly emerging technology, and norms for its ethical use must be established, especially among governing bodies.

Many of the recommendations in the report draw from existing ethical and legal commitments, including the Law of War, international treaties such as the Geneva Conventions, and customary international law.

However, there was no mention of implementing a new treaty to address situations unique to AI.

US first

Humanity can collectively decide, by treaty, not to use a weapon – much as the world banned the use of chemical weapons in war through the 1925 Geneva Protocol.

This push for ethics in AI warfare does not just spring from the bleeding heart of the US military-industrial complex; it comes with strategic value, too.

The global push in AI development prompted the US to step up and set guidelines for AI use before the rest of the world sets them for it.

“It is important to note the US finds itself in a technological competition with authoritarian powers that are pursuing AI applications in ways inconsistent with the legal, ethical, and moral norms expected by democratic countries,” the report said.

Both China and Russia are identified in the report as the leading rivals in the AI arms race.

“What I don't want to see is a future where our potential adversaries have a fully AI-enabled force and we do not,” said Lieutenant General Jack Shanahan, director of the DoD's Joint Artificial Intelligence Center (JAIC).

The report details five principles for the ethical application of AI in warfare.

The principles

1. Responsible. Ensure that human beings remain entirely responsible for the “development, deployment, use, and outcomes” of AI systems in war.

2. Equitable. Take deliberate steps to avoid bias in combat and non-combat AI systems that could cause harm to people.

3. Traceable. Ensure the technical workings of AI, and the expertise behind them, are transparent and auditable for the sake of any future investigations.

4. Reliable. Ensure the “security, safety and robustness” of AI systems, and test and maintain them continually.

5. Governable. Avoid unintended harm or functions, and make sure an AI system does not develop behaviours outside its official function.

But with the DoD as the final arbiter of what to implement, some recommendations may fall by the wayside for tactical reasons.

“The Department has the sole responsibility to determine how best to proceed with the recommendations made in this report,” the report said.

Whether the time, energy, and effort to develop these recommendations will bear fruit at all is questionable.

The report took 15 months to complete and deliberately drew on a diverse set of opinions from academia, industry, the military, human rights advocacy, and political theory.

These conversations were never meant to reach a consensus, but to deliver a wide range of views and opinions as an “earnest attempt to provide an opening for a thought-provoking dialogue internally to Department and externally in our wider society,” according to the report.

But these opinions present themselves only as suggestions for US military leaders to implement.

One question remains: is it tactically sound to apply ethics to killer robots?