Google boosts Thinking Machines ties in multi-billion AI deal

Misryoum reports Google signed a multi-billion AI infrastructure agreement with Mira Murati’s Thinking Machines Lab, bringing access to Nvidia’s latest GPU systems and Google Cloud’s reinforcement-learning stack.

Google has deepened its relationship with Mira Murati’s Thinking Machines Lab through a large, multi-year infrastructure agreement to expand the startup’s access to frontier AI compute.

A multi-billion cloud bet for a fast-growing AI lab

The deal, valued in the single-digit billions, centers on Google Cloud providing advanced AI infrastructure for training and deploying Thinking Machines’ models. The package includes systems built on Nvidia’s newest GPU platform, along with the supporting cloud services needed to run large-scale model development: training, tuning, and deployment.

For Google, the timing is strategic. Cloud providers are increasingly competing not just on servers, but on the full stack: compute plus orchestration plus data and operational services. Misryoum sees this as a clear attempt to become the default environment for leading “frontier” AI labs while demand for reinforcement learning and high-throughput training continues to rise.

Why reinforcement learning needs serious compute muscle

A key clue inside Google’s own framing is Thinking Machines’ use of reinforcement learning workloads. The company’s flagship product, Tinker—released in October—automates parts of building custom frontier AI models, and its architecture relies on reinforcement learning.

Reinforcement learning is known for being compute-hungry because systems often require many rounds of training, simulation, and iterative improvement. That is precisely the kind of workload that pushes cloud buyers toward providers that can deliver both scale and stability. Misryoum’s interpretation is that this is less about “having GPUs available” and more about ensuring those GPUs can be used efficiently for repeated training cycles.

Thinking Machines is also positioned at a moment when the industry’s most visible recent advances have leaned heavily on reinforcement-style approaches. That creates a competitive environment where the “cheapest” cloud option can still lose if it can’t deliver consistent performance for sustained experimentation.

The GPU pipeline: GB300-powered access and speed gains

This agreement includes early access to Google Cloud’s AI systems powered by Nvidia’s GB300 chips. Google says these systems provide a 2x improvement in both training and serving speed compared with prior-generation GPUs.

In plain terms, faster training can shorten development timelines, improve the speed of iteration for experimental architectures, and reduce the number of cycles needed to reach performance targets. Serving speed matters too: once a model is deployed, real-world usage depends on latency and throughput. Misryoum views the speed claim as important because reinforcement learning products often evolve quickly; performance improvements don’t stay “static” once they hit production.

The deal is not exclusive, which means Thinking Machines could use more than one cloud provider over time. Still, being among Google Cloud’s early customers for these systems signals a push to lock in momentum with a company that appears to be moving quickly.

Competitive pressure is reshaping AI infrastructure deals

Google’s push fits into a broader market pattern. Cloud providers have been striking agreements with AI developers to secure capacity and deepen integration across their platforms. Misryoum notes that this is increasingly common as demand for specialized AI hardware outpaces supply.

Competition shows up in recent moves by other major players. For example, Anthropic has also secured multi-gigawatt capacity commitments with Amazon for training and deployment of Claude. The takeaway for readers is that AI infrastructure has become a board-level topic: compute isn’t just an operating expense anymore; it’s a strategic resource.

In that environment, early partnerships can create a compounding advantage. If a lab builds workflows optimized for one provider’s hardware and tooling, switching costs rise over time, both technically and operationally.

What it means for Thinking Machines—and the industry

Thinking Machines was founded by former OpenAI chief technologist Mira Murati in February 2025 and raised a $2 billion seed round shortly afterward, valuing the company at $12 billion. It has remained relatively secretive, but its product launch of Tinker suggests a concrete push into automating parts of frontier model development.

Misryoum expects deals like this to do more than expand compute access. They can influence what kinds of models the lab attempts first, how quickly it can scale experiments, and how confidently it can move from prototypes to production systems.

There is also a broader industry implication: as more reinforcement-learning and agent-like workflows emerge, AI labs will increasingly treat cloud infrastructure as part of the product. Providers that bundle compute with the operational pieces, such as orchestration layers and databases, will be positioned to reduce friction for teams trying to deploy quickly.

For readers watching the AI economy, this agreement is a reminder that the next wave of capability will be constrained not only by algorithms, but by chips, capacity planning, and the engineering discipline required to run training at scale.

Google’s “wraparound stack” strategy meets frontier demand

Google has been aiming to integrate its cloud offering more tightly with adjacent services, such as storage, Kubernetes-style orchestration, and database capabilities, so AI teams can build and deploy within a single ecosystem. In Misryoum’s view, that approach becomes more persuasive as frontier labs demand end-to-end reliability rather than just raw horsepower.

With Thinking Machines now plugged into Google Cloud’s GB300-powered environment, the question shifts from whether frontier AI needs massive compute to which provider can deliver the most usable, dependable platform for the next generation of model iteration.
