The federal government is considering banning “high-risk” artificial intelligence tools as it looks to get on the front foot and ensure safeguards are in place for the emerging technology.

Minister for Industry and Science Ed Husic on Thursday released two papers on artificial intelligence, signalling an intent to make governance and regulatory changes to ensure the safe and responsible development and use of AI in Australia.

The government’s Safe and Responsible AI in Australia discussion paper examines the existing Australian regulations and laws overseeing AI, along with some of the responses overseas, while the National Science and Technology Council’s Rapid Response Information Report: Generative AI investigates some of the risks and opportunities associated with the technology.

The papers come amid a heightened state of concern over the growth of AI, driven by generative AI tools such as ChatGPT, which was launched late last year and became one of the fastest-growing consumer applications in history.

In recent weeks a number of high-profile experts and tech leaders have warned that AI poses a “risk of extinction” and could lead to “many, many” deaths.

The Labor government is now looking to put in place appropriate safeguards to prevent these potential harms and ensure Australia benefits from the immense opportunities on offer.

“There’s been a growing sense that there’s been an accelerated development, a big leap forward in the technology, and people are wanting to think about whether or not that technology and the risk that might be presented have been thought through in a way that gives people assurance and comfort about what is going on around them,” Husic told the media on Thursday morning.

“Ultimately what we want are modern laws for modern technology, and that’s what we have been working on.”

Australia will look to follow international best practice in relation to AI regulation, but should also lead the world in some aspects, Husic said.

“We should develop a framework that levers off our advantages – we’re a great liberal democracy, we’re a trusted partner and I want us to set up a legal framework or model of regulations that can be used by other countries as well,” he said.

“We should be world leaders in this.”

The government’s discussion paper examines Australia’s current regulatory approach to AI and some of the initiatives adopted around the world, identifying some of the gaps locally.

It compares the voluntary, principles-based approaches seen in countries such as Singapore with the more prescriptive, regulation-based approach being adopted in the EU and Canada.

Australia’s current approach is largely technology neutral, with elements of AI governed by data protection and privacy law, Australian Consumer Law, competition law and online safety regulations, among others, the paper said.

“I want to assure you that we’re not starting from scratch,” Husic said.

“Australia already has strong laws and guardrails in place, but what we’re asking is, are they enough?”

Among the questions stakeholders have been asked to address in a whirlwind eight-week consultation period is whether some high-risk AI applications or technologies should be banned completely, and what criteria should be used to do so.

“If that consultation process does bring up high risk areas that do need a regulatory approach, that is something that will be considered by government,” Husic said.

The Rapid Response Information Report: Generative AI paper was commissioned by Husic in February and delivered by the National Science and Technology Council the following month, and looks at the opportunities and risks AI poses over the coming years.

“The current concentration of generative AI activities poses risks for Australia and raises questions about our capabilities, capacities, investments and regulatory frames,” the paper found.

The paper identified a wide range of risks associated with AI, spanning technical risks within AI systems themselves, contextual and social risks, and systemic social and economic risks.

“The extent that these risks are realised or mitigated will depend on the actions of governments, industries, developers and consumers,” it said.

The paper also found that the “concentration of generative AI resources within a small number of large multinational and primarily US-based technology companies poses potential risks to Australia”.

Australian Computer Society (ACS) CEO Chris Vein said he welcomed the release of the two government documents for consultation.

“ACS believes that the manifest benefits from uses of AI must not come at the cost of harms to humans and the environment,” he said.

“We have been a long-standing advocate for frameworks, methodologies and education to promote responsible and ethical use of AI.

“We also recognise that regulation can provide appropriate incentives for organisations to evaluate risk of harms to humans and the environment from uses of AI, and to mitigate those risks. We also believe regulation has a role to play in creating adverse consequences for bad actors.”

There are concerns that Australia is lagging behind on AI regulation, with a focus on voluntary principles rather than dedicated legislation.

This week a number of business leaders and public figures co-signed a short statement released by the Center for AI Safety, warning that AI poses a “risk of extinction”.

Earlier this year, a group of more than 1,000 high-profile executives signed an open letter calling for a six-month pause in the development of AI models so robust regulatory systems can be put in place.