Tick-box approaches to AI assurance are too abstract for the average employee to apply to their daily work, according to a panel of experts that emphasised the importance of trust as Australian firm Hypereal debuted its AtomEthics continuous AI assessment tool.

One of a growing body of frameworks for responsible AI, AtomEthics was designed to help companies apply policy guardrails to control genAI technology – and the deepfakes, misinformation, and other issues it can create.

The technology’s flexibility is also creating data confidentiality issues – for example, a recent Netskope analysis found the volume of business data fed into genAI prompts increased more than 30-fold over the last year – and security concerns, with unsanctioned ‘shadow AI’ commonplace.

“We are already seeing evidence that AI is fundamentally changing the relationship between humans and machines,” AtomEthics co-founder Chris O’Connor said.

“While business leaders are eager to realise the opportunities presented by AI, large data models and automated decision making, there is a new and evolving threat landscape that must be navigated with care.”

While some industry groups have created sector-specific genAI guidance – financial services body FS-ISAC recently published a framework for that sector, while NSW court guidance highlights the importance of human oversight – most are still feeling their way.

Aiming to simplify the integration of ethical guardrails into business AI projects, AtomEthics was designed by the “values-driven” consultancy Hypereal to steer project teams through the process of documenting their data and AI lifecycles “from initial concept to sunset”.

Integrating ethics into every stage of the process makes it pervasive, but O’Connor noted that most companies are still using checklist-based policies that “are completed as point-in-time interventions, just before a deadline such as a review or committee meeting.”

“Long, checklist-based documents place a high cognitive load on employees who need to switch out of their day jobs and connect with an abstract set of questions,” O’Connor said, adding that making ethical AI truly relevant requires transparent, ubiquitous policy guardrails.

This includes guiding teams through ethical decision making, identifying gaps in policies, and providing continuous assessment of everyday policies, with clear visibility of the processes, regulations, data sources, parameters, algorithms, and verification of their operation.

Ghost in the machine

The other crucial part of having an ethical AI policy is ensuring there is an appeals process in place so that employees and customers know there is a way to audit AI decisions – a key part of building and maintaining crucial trust in AI and its applications.

“The disconnect between aspirational statements about data ethics and actual business practices has created a significant trust deficit,” Sarah Kaur, principal of Portable, noted during a panel discussion that also included past ACS president Dr Ian Oppermann.

“Our digital footprints have been collected extensively, social profiles scraped, and consumers have every reason to approach corporate data practices with scepticism.”

With news reports regularly surfacing scandals about unethical use of data – think Facebook’s Cambridge Analytica debacle or Amazon’s $38 million fine for keeping sensitive data collected from children by its Alexa assistant – consumer trust in digital services is crumbling.

Indeed, Thales recently found 82 per cent of consumers had abandoned brands over concerns about how their personal data was being used.

“Consumers are more aware of the consequences of their data falling into the wrong hands,” Thales executive Sebastien Cano said.

“As cyber threats evolve so does consumer scepticism, and brands must adapt security measures to stay ahead and rebuild confidence.”

Companies must meet such scepticism head on by demonstrating their commitment to ethical data use, Kaur said, noting that “this authentic dialogue forms the foundation of rebuilding trust, or changing things.”

AI success hinges on getting the guardrails right

Addressing ethical concerns is more than just an academic exercise: with a growing number of companies notching up genAI failures, increasingly strong evidence suggests that building to an ethical AI framework can make or break a project.

A third of businesses have scaled genAI solutions, but just 13 per cent report creating “significant” business value from the technology, Accenture recently found after analysing more than 2,000 genAI projects and talking with over 3,000 C-level executives about their work.

Fully 49 per cent see responsible AI – the application of ethical and operational guardrails to genAI’s operations – as a “key contributor” to AI-related growth, with those reporting business value from genAI 2.7 times more likely to have it in place.

Companies that have followed five steps – leading with value; reinventing talent and ways of working; building an AI-enabled, secure digital core; introducing responsible AI; and driving continuous reinvention – boosted results by 2.5 times in the past year, the report adds.

With companies spending three times as much on AI technology as on training their people to use it, Accenture noted that successful genAI adopters scored 88 per cent higher if they had actively reshaped the workforce as part of the process.

“Challenges around data readiness, process redesign and a lack of C-level sponsorship continue to hinder progress,” the report notes.

“While technology anchors any transformation, it’s the alignment of people, processes and tech that drives reinvention.”