Google Releases Two New AI Chips 

Updated: April 23, 2026

Reading Time: 3 minutes

On Wednesday, April 22, 2026, Google released not one but two brand-new custom AI chips.

This time, the tech giant is playing a smarter game: splitting its work between two very different jobs.

The chips are called the TPU 8t and the TPU 8i. They are the eighth generation of Google’s famous tensor processing units, or TPUs. 

One chip handles training, and the other handles inference. It’s a focused approach, and it could change how AI workloads get done in the cloud.

AI Chips

AI has two big phases. First, you teach the model; that's training. Then, users actually talk to it; that's inference.

Google’s new TPU 8t chip is built for training. The TPU 8i chip handles inference, meaning it’s what kicks in every time someone types a prompt and waits for an answer.
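The split between the two phases can be sketched in plain Python. This is a deliberately tiny, hypothetical illustration (nothing to do with Google's actual TPU software stack): training repeatedly updates weights, while inference is a single cheap forward pass with the weights frozen.

```python
def train(data, lr=0.1, epochs=200):
    """Training: repeatedly adjust a weight to fit the data (the TPU 8t's job)."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # gradient of squared error
            w -= lr * grad             # weight update: the expensive, iterative part
    return w

def infer(w, x):
    """Inference: one forward pass with frozen weights (the TPU 8i's job)."""
    return w * x

# Learn y = 3x from a few examples, then answer a "prompt".
model = train([(1, 3), (2, 6), (3, 9)])
print(round(infer(model, 4)))  # → 12
```

The asymmetry is the point: training loops over the data many times and dominates compute cost, while inference runs once per request. Hardware tuned for one pattern can skip circuitry the other pattern needs.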

Splitting these tasks into two separate chips is a smart engineering choice. Each chip can be fine-tuned for what it does best. That leads to better performance and lower costs.

Google is making some big claims about these chips. Compared to the last generation, the new TPUs offer up to 3x faster AI model training. 

They also deliver 80% better performance per dollar spent. But perhaps the most jaw-dropping stat? More than one million TPUs can now work together inside a single cluster. 

That’s an enormous amount of computing power working in sync. The result should be more AI computing for less energy, and less cost for customers. 

That matters a lot as companies try to run bigger AI models without watching their cloud bills spiral out of control.

Image: AI chips (Image Credits: Google)

Competition 

Google isn’t replacing Nvidia with these chips. Instead, it’s adding them alongside Nvidia-based systems. That’s the same strategy Amazon and Microsoft are using with their own custom chips.

In fact, Google went out of its way to say its cloud will soon offer Nvidia’s next-generation chip, the Vera Rubin, later this year. 

So the two companies are still very much working together, not against each other. Google even announced a joint effort with Nvidia to improve networking software. 

Specifically, the two companies are teaming up to upgrade a software-based networking tool called Falcon. 

Google originally created Falcon and open-sourced it back in 2023 through the Open Compute Project, the leading nonprofit group behind open-source data center hardware.

The goal is to make Nvidia-based systems run even more efficiently inside Google’s cloud.

History

Back in 2016, Google launched its very first TPU. At the time, chip market analyst Patrick Moorhead predicted it could spell bad news for Nvidia and Intel.

That prediction, as he himself jokingly noted on X, did not hold up. Nvidia is now worth nearly $5 trillion. It remains the dominant force in AI chips, and betting against it has been a losing game.

So while Google’s new chips are impressive, reality tells a more nuanced story.

Cloud AI

Amazon and Microsoft are also building their own custom AI chips. 

The idea is that, as more businesses move their AI work to the cloud, the hyperscalers (the big three cloud providers) could eventually lean less on Nvidia.

But that takes time. Right now, Nvidia supplies a massive portion of the AI hardware that powers the internet. 

Even when workloads run on Google’s custom chips, Nvidia still benefits because Google’s overall growth as an AI platform drives more hardware demand across the board.

In other words, it’s not a zero-sum game. At least not yet.