International efforts to regulate artificial intelligence are ramping up as countries across the globe grapple with rules around the complex technology.
The OECD and World Economic Forum (WEF) are the latest in a growing list of groups working on AI system guidelines.
WEF’s Centre for the Fourth Industrial Revolution launched the Global Artificial Intelligence Council last month, to be co-chaired by Sinovation Ventures CEO Kai-Fu Lee and Microsoft President Brad Smith.
Along with five other councils addressing emerging technologies, the global AI council will help form a set of best-practice policies to operate across member countries.
Forty-two countries adopted the OECD principles on artificial intelligence last month, and in April the European Commission presented its ethics guidelines for trustworthy AI.
In Australia, the debate about how best to respond to emerging artificial intelligence systems is ongoing.
Last year’s federal budget saw the government put $30 million toward AI research.
Part of that money funded CSIRO’s Data61 to develop Australia’s AI ethics framework discussion paper.
Last June, India released a discussion paper for its national AI strategy, while the Japanese Society for Artificial Intelligence handed down its ethical guidelines in 2017.
Alongside these international guidelines, organisations and AI researchers around the world have signed a pledge calling for a ban on autonomous weapons, with signatories agreeing that “the decision to take a human life should never be delegated to a machine”.
But while killer robots are the most obvious apocalyptic science-fiction scenario, AI is already proving problematic in far more mundane ways.
Current issues
Privacy protection, fairness, and transparency regularly feature in AI guidelines.
AI runs on big data analysis, which is inherently vulnerable to breaches that put private information at risk.
And as Alibaba Cloud’s chief machine intelligence scientist, Dr Wanli Min, recently told Information Age, AI operates like a black box, which makes transparency almost impossible.
“It’s like magic,” Min said. “We still struggle with the interpretability of the AI technology.”
Fairness also remains a massive problem for AI systems that manage and serve human beings.
The OECD’s new guidelines say that “AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.”
Late last year, Amazon stopped developing an AI recruitment tool when it showed a bias against women after being trained on data from a decade’s worth of résumés.
According to deep learning expert Fei-Fei Li, AI development is, unfortunately, a case of “bias in, bias out”: what appears to be an objective, neutral tool tends to carry and entrench the biases (conscious or not) of its developers, who tend to be male.
This was underscored by a recent UN report into the gender divide in digital skills, which found that AI voice assistants like Cortana and Alexa reinforce negative notions around gender.
“Because the speech of most voice assistants is female, it sends a signal that women are obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command like ‘hey’ or ‘OK’,” the report says.