Google is closing in on market-dominant chipmaker Nvidia as tech giant Meta weighs up a multi-billion-dollar deal for Google’s AI chips.
Once primarily known for its gaming hardware, Nvidia became the world’s most valuable company last year after successfully repurposing its graphics processing units (GPUs) for the AI boom.
As Nvidia commanded the lion’s share of the AI chip market, Google was steadily improving its own AI-dedicated tensor processing units (TPUs) and renting them out through its Google Cloud platform.
More recently, Google has approached high-profile clients – including Facebook’s parent company Meta – with the option to use TPUs in their own data centres.
According to a person familiar with the matter, Meta is considering the proposal.
The Information reports Meta has started talks to use Google TPUs in-house from 2027, and to begin renting them via Google Cloud as early as next year.
Though the source did not specify the cost of purchasing or renting Google’s chips, the potential deal would reportedly be worth billions of dollars.
Meta currently relies on Nvidia’s GPUs to power its AI systems.
Meta did not respond to Information Age prior to publication.
Is Google better, or just cheaper?
With Nvidia holding a near-monopoly on AI chips, Google has attracted clients by offering a more cost-effective alternative.
According to The Economist, Google’s TPUs are priced between one-half and one-tenth of comparable Nvidia chips, giving cash-conscious companies a compelling reason to consider switching.
Sources did not confirm whether Meta would use Google’s TPUs for the resource-intensive task of training its AI models, or for less demanding AI ‘inference’: the process by which already-trained models apply what they have learned to perform tasks or respond to queries.
Because inference requires markedly less computational power than training, big spenders such as Meta could trim their AI bills by shifting inference workloads to cheaper suppliers.
However, Google’s reported talks with Meta came three weeks after it announced the public rollout of Ironwood, a TPU that handles both training and inference and that Google claims is four times faster than its predecessor.
Google also said AI startup Anthropic plans to use up to one million Ironwood TPUs to power its Claude models, in a deal worth tens of billions of dollars.
Nvidia responds
On Wednesday, Nvidia responded to the reports of Meta and Google’s chip discussions with a statement on social media site X.
“We’re delighted by Google’s success — they’ve made great advances in AI and we continue to supply to Google,” wrote Nvidia.
“NVIDIA is a generation ahead of the industry — it’s the only platform that runs every AI model and does it everywhere computing is done.”
The statement coincided with a 2.6 per cent dip in Nvidia’s share price on Wednesday, and prompted some social media users to describe the post as “defensive”.
Nvidia emphasised that it offers greater “performance, versatility, and fungibility” compared with application-specific integrated circuits (ASICs) like Google’s TPUs.
Notably, Google recently launched Gemini 3, a model trained entirely on TPUs that has outperformed competitors across multiple benchmarks.
Levelling the playing field
Speaking with Information Age, Markus Wagner, Associate Professor at Monash University’s Department of Data Science and AI, said Google’s growth in the market was inevitable.
“Given that Nvidia still commands the vast bulk of the accelerator market for AI training, it was only a matter of time before cash-rich hyperscalers doubled down on custom chips to cut their dependency and improve margins,” said Wagner.
He added that Google’s prospective deal with Meta is “less about toppling Nvidia in the short term”, and more about “shifting future bargaining power”.
“Once hyperscalers like Meta can credibly move large workloads onto Google’s chips, Nvidia loses some pricing leverage and is forced to compete more directly on cost and energy efficiency,” he said.
Wagner suggested such a deal could have adverse effects for other AI vendors, including ChatGPT-maker OpenAI.
“If Meta and Google bring more of the AI compute stack in-house, independent foundation-model vendors could find themselves squeezed between rising infrastructure costs and platforms that are simultaneously their landlords, competitors and chip suppliers.”
Google did not confirm to Information Age whether it had discussed its chips with Meta, though a spokesperson said the company would continue supporting the use of Nvidia GPUs in Google Cloud.
“We are experiencing accelerating demand for both our custom TPUs and Nvidia GPUs,” they said.
“We are committed to supporting both, as we have for years.”