Meta moves to lock in chips, capacity, and long-term power as AI training costs surge in 2026

Mark Zuckerberg is making Meta’s next chapter clear: the company doesn’t just want to build smarter AI models — it wants to own the machinery that makes them possible.
On Monday, Zuckerberg announced Meta Compute, a new AI infrastructure initiative that formalizes what’s become one of Meta’s biggest priorities: scaling the physical backbone behind generative AI. The effort centers on expanding data center capacity, securing long-term energy supply, and planning compute at a level that Meta says will be required for the next wave of AI systems.
Meta’s message arrives at a moment when AI training is pushing Big Tech into an industrial race, one measured not only in model performance but in megawatts, GPUs, and buildable land. According to Reuters, Meta’s initiative will aim for gigawatt-scale computing capacity, with Zuckerberg projecting expansion to tens or even hundreds of gigawatts over time, an energy footprint comparable to that of small countries.
The company also appears to be reshaping leadership around the initiative. The Financial Times reported that Dina Powell McCormick has been appointed Meta president and vice-chair, with oversight tied to major AI infrastructure investments, including data centers and energy strategy.
Behind the announcement sits a growing industry reality: compute is scarce, expensive, and increasingly strategic. Across the sector, hyperscalers are competing not just for chips but for the electricity and permitting needed to keep giant clusters running continuously. Reuters has reported that AI-driven demand is helping fuel record data-center dealmaking and aggressive capital expansion across the market.
Meta’s infrastructure push also builds on earlier signals that it wants tighter control of its stack. Reuters previously reported that Meta has been testing in-house AI training chips, part of a broader attempt to reduce reliance on third-party hardware over time and control costs as model training scales.
For Zuckerberg, “Meta Compute” is now the label for a long-term buildout that’s already underway — and a signal to rivals that Meta intends to compete not only at the model layer, but at the infrastructure layer where the AI race is increasingly being decided.