In early September, the federal government shared its proposal for regulating artificial intelligence and began its latest consultation process.

What the proposals paper reveals is that the government is still undecided on core, non-trivial components of how to resolve the complex issue of AI regulation, and so companies are still a long way from getting regulatory certainty.

Should specific AI use cases, settings or contexts be classified as ‘high risk’ by default and thus attract greater regulatory scrutiny?

Should certain forms of AI be outright banned in some cases?

How should the government even go about creating and enforcing AI regulation?

All these questions remain unanswered in the government’s proposal.

Rather, five years on from the establishment of Australia’s voluntary AI ethics principles, we’re still largely talking in terms of principles and voluntary measures.

Yes, particularly diligent organisations could implement the Voluntary AI Safety Standard and be in compliance with the near-identical proposed mandatory guardrails.

But the fact remains that there is today no clearer articulation of what the real boundaries for AI development and use are beyond existing laws – many of which have struggled to keep pace with the internet, let alone machines that can mimic human speech, parse real-time video feeds, and create realistic images of any imaginable scenario.

An incomplete and flawed proposal

Here’s how the federal AI regulation would work in its current phase of development:

1. Every business and organisation must check if they are classified as an AI 'deployer' or 'developer' under the proposed definitions.

2. All deployers or developers must then conduct a self-assessment against a proposed set of principles to determine whether their AI use should be deemed high-risk. Only the use of general-purpose AI (GPAI) systems would be automatically classed as high-risk.

3. All self-identified high-risk AI uses are subject to the mandatory guardrails. How compliance would be incentivised and enforced is unspecified.

This proposal, while a sensible foundation, is flawed and incomplete.

In large part, this is because it completely fails to prioritise harm mitigation in areas that are straightforwardly high-risk (eg, law enforcement, education, healthcare, the administration of welfare).

Prioritising where AI safety standards are enforced is especially important because I believe the proposed definition of an AI ‘deployer’ will capture every organisation and business in the country, requiring everyone to assess their computer use for the risk of AI harm.

Here’s the government’s proposed definition of an AI deployer:

“Any individual or organisation that supplies or uses an AI system to provide a product or service. Deployment can be for internal purposes, or used externally impacting others, such as customers or individuals.”

Without pre-defined high-risk uses, mandated licences for certain settings, or outright bans on AI in specific contexts, a single regulator (or small group of regulators) would find adequate and equitable enforcement a monumental, if not impossible, task.

Deploying AI is deploying computers

Let's start by extrapolating from the current state of things.

The handful of major consumer operating systems (Windows, ChromeOS, Android, iOS, macOS) are already getting jammed full of AI.

If you go out today and buy your employees brand new devices running Windows or iOS, there's a case to be made that you're deploying AI systems.

Every major enterprise software vendor has also spent the last two years boasting about how their products are filled with AI.

If your business uses any piece of enterprise software developed or updated in the last 24 months, you might be considered an AI deployer.

Remember, the definition just requires that your business or organisation “supplies or uses an AI system to provide a product or service”.

For the proposed regulatory approach to be effective, every such supplier and user of an AI system will have to self-assess their use’s risk level and, if it crosses some as-yet-undetermined risk threshold, apply the mandatory guardrails.

Will businesses that find themselves in the high-risk category have to report to some central authority?

How will accurate risk assessments be properly incentivised within organisations?

Who will audit compliance?

You can see how many questions remain unanswered here, which likely means more consultation, which means more time before any legislation hits parliament, which means a serious lag in keeping people safe and providing certainty for AI use.

It can’t simply be about intentionality

You might say that being a deployer must involve a meaningful amount of intent, such that merely providing employees with AI-rich hardware or software is not sufficient to classify an organisation as a deployer.

Intent is not a feature of the proposed definition of AI deployer, but for the sake of argument, suppose it was.

Consider what this means in the context of the recent Office of the Victorian Information Commissioner (OVIC) report about a Victorian child protection worker who used ChatGPT to draft documents that were used in court proceedings.

Not only did that worker input personal and sensitive data about other people, but OVIC also found ChatGPT's output actively downplayed risks of harm to the child.

This type of situation – where a state employee uses opaque AI software to generate official documents for the purpose of informing material decisions about a child’s welfare – should clearly be deemed high-risk.

Let's look at that set of circumstances through the lens of the government's proposed AI regulation and a definition of AI deployer that requires intentionality.

A worker opens ChatGPT in their web browser and starts getting help with a child protection case.

Since ChatGPT is a general-purpose AI, its use would be automatically deemed high-risk and thus the mandatory guardrails should apply.

Yet the Victorian Department of Families, Fairness and Housing never intended for its workers to use ChatGPT; rather, it simply failed to stop ChatGPT from being used.

This is a deeply unsophisticated and obvious example of high-risk AI use.

Yet it would not fall under the proposed regulatory regime if organisations have to intentionally deploy AI.

If the government’s proposed AI regulatory approach is not to be thwarted by basic shadow IT, any organisation that provides IT equipment – hardware, network access, software licences – without taking every reasonable step to ensure AI software can’t easily be used must be an AI deployer.

You can imagine how this would have immense practical implications for everyday business activity and regulatory enforcement.

If that’s what the government wants, then it’s certainly ambitious, but I’m not sold on its effectiveness.

Click here to read the full joint Australian Computer Society and ISACA response to the government’s mandatory AI guardrails proposal.