TPUs: Google's home advantage

The ITPro Podcast • December 12, 2025 • Solo Episode


Description

In the race to train and deploy generative AI models, companies have poured hundreds of billions of dollars into GPUs, chips that have become essential for the parallel processing needs of large language models.

Nvidia alone has forecast $500 billion in sales across 2025 and 2026, driven in large part by surging demand for inference. As Jensen Huang, founder and CEO of Nvidia, recently put it, “inference has become the most compute-intensive phase of AI — demanding real-time reasoning at planetary scale”.

Google is meeting these demands in its own way. Unlike firms reliant on chips from Nvidia, AMD, and others, Google has long used its in-house ‘tensor processing units’ (TPUs) for AI training and inference.

What are the benefits and drawbacks of Google’s reliance on TPUs? And how do its chips stack up against the competition?

In this episode, Jane and Rory discuss TPUs – Google’s specialized processors for AI and ML – and how they could help the hyperscaler outcompete its rivals.
