Humans may have spent decades adapting to technology, but the rise of AI is now driving a “moment for reinvention” in which technology amplifies human traits instead – and that has made a human-by-design approach to AI essential.

“Where once we adapted to technology – such as changing our habits for a new app or computer interface – technology is beginning to adapt to us,” Accenture chair and CEO Julie Sweet said in introducing the new Accenture Tech Vision 2024 report, which she said “charts the steps to a future where technology transitions from a passive proxy to an active collaborator that engages with us through more natural interaction.”

That transition was clearly evident at the recent Consumer Electronics Show (CES 2024), where the integration of AI and generative AI (genAI) capabilities into nearly every gadget showed how AI has helped developers integrate technology into all kinds of human activities.

An AI-powered fortune teller, for example, was on hand alongside AI-based masseurs, first-aid training dummies that talk back, glasses that display real-time AI-based transcriptions for hearing-impaired users, technology for mind-controlled computer interaction, and more.

Such innovations are furthering the integration of the digital and human worlds, in which AI is already helping diagnose and characterise melanoma, pancreatic and other cancers, predict cognitive decline, and improve countless other aspects of human life. They demonstrate how forward-looking innovators are already embracing four key modes of innovation that Accenture believes will help AI “unleash the next level of human potential”.

Those modes include reshaping humans’ relationship with knowledge through widespread use of genAI; building ecosystems of AI agents that “could command major aspects of business”; creating “new realities” through spatial computing; and designing new human-computer interfaces that “are starting to understand people more deeply”.

“Where AI once focused on automation and routine tasks,” the report notes, “it’s now shifting to augmentation, changing how people approach work, and is rapidly democratising the technologies and specialised knowledge work that were once reserved for the highly trained or deep-pocketed.”

Indeed, genAI tools have proven so intuitive and easy to use that many businesses are seeing the technology permeate their operations from the ground up – becoming ubiquitous long before those companies can develop policies to manage it.

“By building fundamentally intuitive bridges between people and the most advanced technologies of our age,” Accenture predicts, “productivity and value creation are poised to grow exponentially across every industry.”

“It’s an entire universe of new ideas and new actions for businesses and consumers alike.”

Humanising tech demands humanising protections

For all AI’s benefits, Accenture warns that “appropriate human guidance and oversight is critical” as technology not only brings knowledge to people who have never had it before, but enables “the creation of things and ideas to which enterprises have never had access…. Their wealth of knowledge will become enterprise-actionable for the first time.”

That prospect is both promising and scary to many observers, including myriad academics who worry that – despite its newly released plans for light-touch AI regulation – the federal government is moving too slowly to meaningfully shape the AI transition.

“There is an urgent need to move forward now, particularly in areas like enhanced misinformation laws, the establishment of an expert advisory group, and mandatory guard rails for high-risk AI uses,” Dr Katherine Woodthorpe, president of the Australian Academy of Technological Sciences & Engineering (ATSE) said as the government’s “encouraging” interim response was published.

“It is also essential to involve and consult consumer advocates when setting risk thresholds for AI applications,” she added, noting the urgency of “Australia’s AI moment” and arguing that a consumer-focused approach “will ensure a balanced and comprehensive approach to AI regulation, reflecting a wide range of societal needs and concerns.”

And while the interim response “makes promising steps” towards regulating high-risk AI technologies – a term that an expert advisory panel will define in the near future – RMIT University law lecturer Dr Nicole Shackleton believes the government is still not seeing the bigger picture.

Despite its eagerness to regulate AI-generated deepfakes, she said, the government “shows little interest in issues of sexual privacy… or the use of AI in sex technologies such as personal and intimate robots”.

It may be a niche concern now, but the increasing integration of genAI into sexbots and other domestic companions – Elon Musk this week demonstrated Tesla’s Optimus robot folding shirts and Figure’s 01 robot recently taught itself to make coffee – will push humans into Her-like introspection as they revisit issues of trust, companionship, intimacy, and even identity.

“It is vital that any future AI advisory body be capable of tackling such issues,” Shackleton said, “and that the risk-based framework does not result in unintended consequences.”

Oversight will be crucial as AI extends into every corner of humans’ lives and experiences, RMIT University dean of research and innovation Professor Mark Sanderson warned.

“As smart as AI has become, these computer systems are still prompted and controlled by something smarter: human beings…. It is critically important to monitor how people interact with AI systems and observe how those systems react.”