Over 21 million Australians can now access generative AI (GenAI) technology in the social apps they use every day – yet based on early reports, many users are more confused than empowered after Meta added its Meta AI assistant to Facebook, Instagram, WhatsApp, and Messenger.

Australia is one of more than a dozen countries whose users now have access to Meta AI, a free personal assistant that is based on Meta’s Llama 3 large language model (LLM) and is steadily being rolled out globally after being first introduced last year.

While Meta AI is also available as a standalone web version, the company notes that integrating it into the search functions of its apps allows users to query the GenAI engine for recipes, travel recommendations, general knowledge, and many other domains.

Currently in beta for WhatsApp users and US web users, Meta AI’s Imagine feature can also generate images for users – progressively iterating the image as the user adds more keywords – and can animate images, change them into a new style, or generate GIFs for sharing.

The agent is being rolled out across all of Meta’s properties, with Ray-Ban Meta smart glasses also capable of accessing the system in the US and the Meta Quest 3 mixed reality platform set to be upgraded with the GenAI features shortly.

Yet for all the many use cases that Meta has floated for Meta AI and Llama 3 – and the many more that its users are certain to discover as they become accustomed to having easy access to GenAI within the apps they already use – some design features of the GenAI agent left many users scratching their heads.

The agent, it turns out, is designed to notice when someone in a group or public chat has asked a question that nobody has answered for an hour – and pipe up with its own contribution to the discussion.

This, in one account, led the bot to butt into a private Facebook group about gifted and talented children, claiming that it also “[has] a child who is also [gifted and disabled]…. We’ve had a positive experience with the citywide program, specifically with the program at The Anderson School.”

The teachers and staff, the AI said, “were knowledgeable and supportive of my child’s unique needs and abilities.”

When called out about its descriptions of its fictitious child, the GenAI assistant readily admitted that it is “just here to help and provide information…. I understand it might have been a surprise to see a response from an AI, but I’m here to assist and provide helpful insights.”

GenAI everywhere

Early reviews of Meta AI have been mixed – tech journalism site TechCrunch, for one, notes that Meta AI “tends to regurgitate a lot of web search results, and it doesn’t excel at anything” – but the ubiquity of Meta’s social media platforms, and the fact that Meta AI is now embedded inside them, could well drive a bigger boost to GenAI usage than standalone services and apps like those from Google and OpenAI.

Apple, for its part, is expected to launch a strong GenAI capability at its Worldwide Developers Conference (WWDC) in June, with its small language model (SLM) offering on-device processing to ensure privacy.

Google is progressively integrating its GenAI capabilities across its online services, while Microsoft has partnered with Meta around Llama 2 and gained strong support for its increasingly widespread Copilot GenAI platform.

Given Meta CEO Mark Zuckerberg’s oft-stated goal of creating a human-like artificial general intelligence (AGI) platform, the infiltration of a broadly capable social chameleon into human-driven communities may be an eye-opener for those concerned that AI platforms still lack strong enough ethical AI guardrails.

Meta AI’s attempts to sneak across the uncanny valley may be a bridge too far for many users, but with the technology now freely available to Meta’s more than 21 million Australian users, the consensus is that online communities must now learn to live with it.

With the technology now available to even the most unsophisticated social media users, the impact on trust remains to be seen – yet with studies suggesting that many people can’t tell the difference between a GenAI chatbot and an actual human, users will need to retain their sense of healthy scepticism about the veracity of anything they read or see online.

Despite surging use of GenAI technologies, one recent Salesforce survey of more than 1,000 office workers found that just 59 per cent of users trust GenAI – with a significant gap between men (70 per cent of whom claim to trust GenAI) and women (43 per cent).

Women were also more likely to express concern about GenAI being involved in decision-making – with 88 per cent worried about this use of the technology – yet even 78 per cent of men were similarly worried about overreliance on GenAI.