How will advances in artificial intelligence shape human rights in the future?
The Australian Human Rights Commission has officially launched a three-year project examining the impact of technology on human rights. Its Human Rights and Technology Issues Paper will assess the implications of AI for privacy and freedom and investigate what protections are needed in areas such as insurance, social media and justice.
Launching the project in Sydney on Tuesday, Human Rights Commissioner Edward Santow said human rights should lead innovation.
“Human rights must shape the future these incredible innovations have made possible,” he said.
“We must seize the opportunities technology presents but also guard against threats to our rights and the potential for entrenched inequality and disadvantage.
“Working collaboratively with industry and government, we will develop a practical roadmap for reform in Australia.”
Joining Santow to announce the project was Australia’s Chief Scientist, Dr Alan Finkel.
He implored Australia to embrace the promises of AI and become “leaders in the field of ethics and human rights.”
“That is my aspiration for Australia,” he said. “To be human custodians.”
“In my mind, that means showing the world how an open society, a liberal democracy, and a fair-minded people can build artificial intelligence into a better way of living.”
He compared contemporary concerns about AI to those surrounding IVF 40 years ago.
“No-one could hand us a readymade rule-book,” he said.
“There wasn’t one. So we had to create one. And we did. We were the first country to collate and report on birth outcomes through IVF.
“We published the first national ethical guidelines on IVF, anywhere in the world.”
He urged Australia to draw on the gradual mainstreaming of IVF as a lesson for the path ahead.
“Don’t expect a single answer or a one-shot, set-and-forget AI law,” Dr Finkel said.
“That wasn’t the secret to adopting IVF.
“No, we had a spectrum of approaches that worked together and evolved in line with the state of the technology, and the levels of comfort in the community.
“There were laws and regulations, there were industry codes and practices, and there were social norms.
“We will need to develop a similar spectrum of responses to AI – so that we can strike the balance between opportunity and risk.”
One response he highlighted was a potential ethical mark to certify technology.
“What if we had a recognised mark for ethical technology vendors, like the Fairtrade stamp for ethical suppliers?
“In my mind, it’s called the Turing Certificate.
“The standards would be developed by a designated expert body, in close consultation with consumer groups and industry.
“Then companies that wanted to display the mark would submit both the specific product and their company processes for an ethics audit, by an independent auditor.”
Submissions for the Human Rights and Technology Issues Paper close on 2 October.