Shares in Google parent company Alphabet lost $144 billion (US$100 billion) in value during trading on Wednesday following an error in an advertisement for Bard, the company’s ChatGPT-like conversational AI, which will soon be integrated into its search engine.

Google has been rushing to get AI features to market in response to the immense popularity of OpenAI’s conversational bot ChatGPT, which has single-handedly shaken up areas like education, software development, content generation, and cyber security in less than two months.

The ease with which ChatGPT authoritatively responds to queries with more than a list of links caught the attention of Google’s executives, who reportedly described it as a “code red” for the company’s dominance of the internet search market.

A day after Microsoft announced integration of OpenAI’s next-generation large language model with its search engine Bing, Google announced upcoming AI updates to its Search, Maps, and Translate products at an event in Paris.

In a GIF ad posted to Twitter, Google showed a query to Bard asking the AI to distil discoveries from the James Webb Space Telescope for a nine-year-old.

One of Bard’s answers was that the James Webb Space Telescope “took the very first pictures of a planet outside of our own solar system”.

News outlet Reuters noticed that this was incorrect: the first images of an exoplanet were taken in 2004 by the European Southern Observatory’s Very Large Telescope.

A Google spokesperson said the error “highlights the importance of a rigorous testing process” and that Bard would be rolled out to the company’s “Trusted Tester program” in the coming days.

“We'll combine external feedback with our own internal testing to make sure Bard's responses meet a high bar for quality, safety and groundedness in real-world information,” the spokesperson told Reuters.

Bard’s initial, very public failing – the tweet is still live and has been viewed more than 1.3 million times – illustrates why big tech companies have typically been hesitant to release AI chatbots.

You would be forgiven for not knowing that Facebook parent company Meta also released a large language model, Galactica, in late 2022 – weeks before ChatGPT took the world by storm.

That model, aimed at helping researchers write scientific papers, was taken down after just three days because, as one scientist put it, “Galactica is little more than statistical nonsense at scale”.

OpenAI initially chose not to release the full GPT-2 model to the public because of “concerns about large language models being used to generate deceptive, biased, or abusive language at scale”.

The company’s caution about those risks has clearly lessened over the years, culminating in ChatGPT – which has made its share of factual errors and ‘hallucinations’.

Part of this is due to the cut-off date of ChatGPT’s training data. The model was trained on data up to 2021, so it is limited in how it responds to questions about current events.

Large language models are very good at guessing which words are likely to come after one another given a certain context.
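As a rough illustration of that statistical guessing, the toy bigram model below predicts the next word purely from co-occurrence counts in a tiny made-up corpus. Everything here is invented for the example; real large language models use neural networks trained on vastly more data, not raw counts.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for the web-scale text real models learn from.
corpus = (
    "the telescope took the first pictures of a planet "
    "the telescope took the first images of an exoplanet"
).split()

# Count how often each word follows another -- a simple bigram model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str | None:
    """Return the most frequently observed next word, if any."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("pictures"))  # -> 'of'
print(predict_next("first"))     # -> 'pictures' or 'images' (a tie in this toy corpus)
```

Scale that guessing game up by billions of parameters and trillions of words and the output starts to read like fluent prose – but the model is still only predicting likely continuations, which is how confident factual errors like Bard’s slip through.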

Over years of research, the models – trained on massive amounts of text from the internet, news, and literature – have been gradually tweaked to present a lifelike illusion of human speech.

That illusion was real enough to convince a Google researcher – whose job was to talk with its Language Model for Dialogue Applications (LaMDA) and find evidence of inaccuracies, biases, and abuse – that the system was a sentient computer.