Lee Hickin will steer a $21.6 million federal government investment in Australia’s AI capability and advise on the implementation of hotly debated safeguards for emerging technology.
Hickin, who confirmed that he will resign from his current role overseeing Microsoft’s AI policy in Asia, said in a LinkedIn post that his primary responsibility as director of the National AI Centre would be “helping to drive Australia's AI future.”
Innovation vs regulation
The National AI Centre, hosted by the Department of Industry, Science and Resources (DISR), oversees initiatives to finance local AI startups and to support small and medium businesses in harnessing machine learning (ML).
Alongside DISR’s AI Expert Group, Hickin will help industry implement Australia’s Voluntary AI Safety Standard, which may be elevated to mandatory guardrails for high-risk AI use cases; vendors are lobbying against that move over concerns it will stifle innovation.
Hickin declined Information Age’s requests for comment on whether he supports the Albanese Government's proposal for EU-style AI regulations, but said at a roundtable hosted by the Malaysia Digital Economy Corporation last year that “AI needs guardrails to protect society and AI needs flexibility to address global issues”.
In his post yesterday, he said: “I have long been an advocate for the positive potential for AI in our lives, communities and industry.”
The 'ideal fit'
Industry Minister Ed Husic said on LinkedIn that Hickin’s “30 years of commercial experience” at “companies like Microsoft and Amazon” made “him an ideal fit” to be the National AI Centre’s director.
Hickin has had two stints at Microsoft: from 2005 to 2015 he held titles including security technology specialist and IoT product manager, then worked at AWS for two years, first as its APAC IoT business development lead and later as APAC head of platform technology business development.
He returned to Microsoft in 2018 as its chief technology officer and was promoted in 2023 to Asia AI Policy Lead.
Husic added that Hickin was also “bringing… his involvement in shaping AI policies with Government” to the role, including his support in developing the “AI Action Plan” and his four years’ experience as a committee member of Australia’s first AI-specific government watchdog, NSW’s “AI Review Committee” (AIRC).
Balancing AI innovation with risk mitigation
Hickin’s experience in both encouraging companies to embrace AI before they get left behind and auditing emerging technology (internally as head of Microsoft ANZ’s Office of Responsible AI, which audits ML projects, and externally as a member of the NSW Government’s AIRC) makes him a safe pick.
The Government is at a crossroads: it can follow the EU in passing laws, as regulators and civil society groups are calling for, to better protect citizens from the privacy and procedural fairness risks posed by the private sector’s rapid uptake of AI, or it can side with lobbyists like the Business Council of Australia and follow the US in abstaining from regulations that could hinder AI-enabled productivity.
While working simultaneously for AIRC and Microsoft, Hickin deployed to NSW government agencies Microsoft technology that AIRC had expressed concerns about, highlighting the difficulty of balancing AI innovation with risk mitigation.
At the time, he said that it was “a privilege to work alongside NSW Police” when implementing the AI Insights platform because it “can speed up the analysis of evidence, accelerating justice”.
However, AIRC’s review of Insights warned that the AI-powered tool could bias investigations against communities more likely to feature in its surveillance feeds.
Further, legal academics expressed concerns over its use of biometric ML models, which carry a disproportionately high risk of misidentifying minorities, but Hickin refused to release the audits Microsoft’s Office of Responsible AI conducted to risk-assess the platform, or to provide a summary of them.