Meta is going all-in on AI infrastructure. It’s no longer just talking about AI at scale. It’s building for it.
On Monday, Mark Zuckerberg announced Meta Compute, a new initiative focused on expanding the company’s AI infrastructure.

The goal is simple but massive: build enough computing power to support Meta’s long-term AI ambitions.
This move follows earlier signals from Meta that big spending was coming.
Last year, the company said strong AI infrastructure would be a major edge in building better models and products.
Now, the plans are becoming real.
Why Power and Energy Matter More Than Ever
AI systems need huge amounts of electricity. Training models. Running data centers. Serving billions of users. None of it comes cheap.
Zuckerberg says Meta plans to build tens of gigawatts of capacity this decade, with hundreds of gigawatts over time.
To put that in perspective, one gigawatt equals one billion watts, roughly the output of a large nuclear power plant. Demand at that level could reshape how tech companies think about power, partnerships, and location.
Some estimates suggest U.S. AI power demand could jump from 5 gigawatts to 50 gigawatts within the next decade. Meta wants to be ready.
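For a rough sense of scale, here is a back-of-envelope sketch converting continuous power capacity into annual energy. It is illustrative only; the 30-gigawatt input is an assumed example within “tens of gigawatts,” not a figure Meta has announced.

```python
# Back-of-envelope: annual energy use for a given continuous power capacity.
# Illustrative sketch only; the 30 GW input is an assumed example, not a
# number from Meta's announcement.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_energy_twh(capacity_gw: float) -> float:
    """Energy in terawatt-hours if capacity_gw runs continuously for a year."""
    return capacity_gw * HOURS_PER_YEAR / 1_000  # GW * hours = GWh; /1,000 -> TWh

print(annual_energy_twh(1))   # ~8.8 TWh per year for a single gigawatt
print(annual_energy_twh(30))  # ~263 TWh per year for an assumed 30 GW build-out
```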
The Team Leading Meta Compute
Zuckerberg also named three leaders who will run different parts of the project.
Santosh Janardhan: The Infrastructure Builder
Santosh Janardhan, Meta’s head of global infrastructure, will lead the technical side. He’s been with the company since 2009 and knows its systems inside out.
His role includes:
- Data center design and operations
- AI software and hardware stacks
- Silicon programs
- Global networks and developer tools
In short, he’s building the engine.
Daniel Gross: Planning for the Long Term
Daniel Gross, who joined Meta last year, will focus on long-range capacity planning. He co-founded Safe Superintelligence with former OpenAI chief scientist Ilya Sutskever.
His team will handle:
- Supplier partnerships
- Industry research
- Capacity planning
- Business modeling
This is about making sure Meta doesn’t hit a wall five or ten years from now.
Dina Powell McCormick: Working With Governments
The third leader is Dina Powell McCormick, Meta’s president and vice chairman. She brings government and policy experience.
Her role is to help Meta work with governments to build, fund, and deploy large-scale infrastructure projects around the world.
For projects this big, politics and policy matter.
Meta Isn’t Alone in the AI Infrastructure Race
Meta’s move comes as other tech giants push in the same direction.
- Microsoft continues to partner with cloud and infrastructure providers
- Alphabet, Google’s parent company, recently acquired a data-center firm
- AI-ready cloud capacity is now a competitive weapon
Everyone wants more compute. No one wants to be left behind.
Why Meta Compute Is a Strategic Bet
Meta doesn’t just want more servers. It wants control.
By owning more of its infrastructure stack, Meta can:
- Reduce long-term costs
- Optimize performance for its AI models
- Move faster than rivals
- Avoid dependency on outside providers
Zuckerberg says how Meta builds and partners around this infrastructure will become a core strategic advantage.
What Comes Next
Meta hasn’t shared detailed timelines or locations yet. But Meta Compute signals a shift from planning to execution.
As AI models grow larger and more demanding, the companies that win won’t just have the smartest algorithms. They’ll have the power to run them.
And Meta is betting big that it will be one of those companies.

