Rapidly improving artificial intelligence (AI) technology may be delivering real wins across nearly every industry, but a senior AI scientist has warned developers and marketers not to “stretch the credibility of AI”.
Dr Wanli Min, Chief Machine Intelligence Scientist at Alibaba Cloud, told Information Age that developers should instead take a “prudent” approach when discussing AI’s promise.
A former IBM research scientist with specialties in traffic monitoring and prediction, Min moved into the big-data e-commerce space before developing the ET Brain platform – on which the Chinese online retail giant has deployed application-specific AI tools for smart cities, aviation, environment, agriculture, and industry.
A matter of priority
Given Alibaba’s nearly 700 million monthly users, ET Brain – which emerged from the firm’s efforts to analyse and shape users’ online behaviour in real time – would be one of the largest AI-based optimisation platforms in a Chinese market that continues to explode.
China’s government has positioned AI as a national priority, executing so aggressively on its multibillion-dollar investment strategy that analysts argue the country is rapidly overtaking the inertia-plagued United States market.
A recent Deloitte analysis found that Chinese businesses currently lag those in other countries: just 11 percent said they are “seasoned” adopters of AI, compared with 17 percent in Australia and 24 percent in the US.
However, Chinese companies are embracing AI more aggressively: fully 46 percent have a comprehensive, company-wide AI strategy, compared with 37 percent of US companies and 34 percent of Australian companies.
Dr Wanli Min speaking at Mobile World Congress in Barcelona in February. Photo: Twitter
Given the billions of dollars in industrial and agricultural efficiency gains that AI promises to unleash, ceding first-mover advantage could have a direct impact on industries’ global competitiveness – and China is counting on it, with the Deloitte survey finding that 55 percent of Chinese companies believe AI will give them a “strong competitive advantage”.
By contrast, just 37 percent of US companies felt the same – as did 22 percent of Australian companies, the lowest proportion amongst respondents in the seven surveyed countries.
A broad range of national AI initiatives is working to build up capabilities in the area as quickly – and, as a recent Ericsson-UNESCO initiative showed, as early – as possible.
Australia’s AI capabilities may even get a boost in this month’s federal election, if Labor takes government and delivers on its promised $3m injection into a National Artificial Intelligence Centre already announced by Victoria’s Andrews government.
From research labs to commercial prospects
For the researchers charged with commercialising their inventions, the high expectations being placed on AI have turned the technology into something of a double-edged sword.
Few dispute assertions that machine-learning techniques have proven to be the best way to help increasingly data-heavy industries deal with the volumes of information their equipment, sensors, controls and users produce on a daily basis.
Alibaba Cloud, for its part, has used its AI capabilities to successfully court a growing roster of markets across the Asia-Pacific, including Australia, where it opened shop just three years ago.
The expansion strategy has been so successful that a March Gartner analysis gave the company’s infrastructure-as-a-service offerings the market lead, with 19.6 percent market share – up from 14.9 percent a year earlier – putting it well ahead of rivals Amazon Web Services (11 percent) and Microsoft (8 percent).
Yet as marketers continue to woo customers with Alibaba’s AI expertise – which brings hosted capabilities including image recognition, real-time customer behaviour analysis, and natural-language speech recognition to market – Min says it’s important to remember that AI isn’t a silver bullet for every business problem.
“We always start the customer conversation asking the customer what is the problem they are trying to solve, and how much of their opportunity cost is due to their pain points,” he explains.
“Once you have identified the largest problem to solve, it’s only then that you can talk about what kind of technology is the right technology,” he adds.
“It cannot be just a single play for AI technology, but is always a combination of different technologies. We are not in the business of over-selling AI just to make a profit.”
Looking inside the black box
Given industry’s recent obsession with AI, calls for restraint on its adoption may seem surprising.
But as a researcher, Min knows that – despite its undeniable wins in a variety of applications – AI still has a long way to go before it meets corporate expectations around areas such as accountability and transparency.
Most AI technology “is like a black box,” he explains. “It’s like magic. We still struggle with the interpretability of the AI technology.”
That lack of visibility can create problems when businesses jump into AI without fully understanding how it is arriving at its results and recommendations.
Extrapolating the results of limited pilot programs to real-world environments requires prudence and care, he adds, noting that “extrapolation is a dangerous business, which is why we are prudent in applying AI to real, practical situations.”
The need to improve visibility into AI operations has become more pressing as the technology’s adoption increases.
Facing questions such as where to assign liability if an AI-generated recommendation causes physical damage or loss of life, ethicists and scientists – including a who’s-who of Australian scientists that recently penned an open letter to industry – have been clamouring for restraint and open discussions about the ethical issues that AI introduces.
“The biggest challenge is for us to stay cool, practical, and objective,” he says. “As researchers we have to know our limitations and boundaries, and not exaggerate.”
Even as companies compete by tweaking their ‘black box’ AI solutions with new features, Min notes the importance of developing open ‘white box’ AI that would offer accountability through clear documentation, consultation, and interoperability.
White-box solutions might initially lag behind black-box offerings as vendors keep their cards close to their chests, but over time “the position of the white-box approach will improve as we add more and more features,” he says.
“If that happens, I would rather go for a white-box approach rather than the black-box approach,” Min adds. “Performance-wise they are comparable, but from a maintenance and explanation perspective the white box approach is much better in the long term.”
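Min’s white-box versus black-box distinction can be illustrated with a toy sketch in the traffic-prediction domain he worked in. All function names, rules, weights, and thresholds below are hypothetical, invented purely for illustration: the point is that the interpretable model can report which rule produced its answer, while the opaque scorer returns only a number.

```python
# Toy contrast between a 'black box' and a 'white box' predictor.
# Every rule, weight, and feature name here is hypothetical.

def black_box_score(features: dict) -> float:
    """Opaque model: returns a congestion score with no explanation.

    Stands in for a trained network whose internal weights are hidden.
    """
    weights = {"traffic_volume": 0.7, "avg_speed": -0.4, "incidents": 0.9}
    return sum(weights.get(name, 0.0) * value for name, value in features.items())


def white_box_predict(features: dict) -> tuple:
    """Interpretable model: each prediction carries a human-readable reason."""
    if features.get("incidents", 0) > 2:
        return "congested", "rule: more than 2 incidents reported"
    if features.get("traffic_volume", 0) > 0.8:
        return "congested", "rule: traffic volume above 80% of capacity"
    return "flowing", "rule: no congestion trigger matched"


obs = {"traffic_volume": 0.9, "avg_speed": 0.3, "incidents": 1}

score = black_box_score(obs)            # a number, but *why* this number?
label, reason = white_box_predict(obs)  # a label plus an audit-ready reason
print(f"black box score: {score:.2f}")
print(f"white box: {label} ({reason})")
```

The maintenance advantage Min describes falls out directly: when a white-box prediction is challenged, the fired rule can be cited, documented, and amended, whereas the black-box score can only be retrained.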
Taking AI to the limit
So where is AI headed? Min isn’t prepared to put limits on it, although he is loath to buy into hype that AI could ever be smarter than human beings.
“Initialisation of an AI program needs an initial input of knowledge,” he points out, “and this knowledge comes from human beings. So, an AI program can only be as smart as a human being.”
Yet, he concedes, AI-powered robots offer advantages over human workers because they don’t eat, drink, or rest – though he argues that the potential societal impacts of this need to be managed not by technologists or technology companies, but by politicians.
“At the end of the day, the politicians have to come up with legislation, policies, and regulation in order to guide AI from the legal perspective, and to work through the liability from unfortunate events.
“We expect that we will reach a generic, universal practice of principles, but it will take a while for healthy debate to guide us through to that destination.”