Thinking Machines’ chips-to-talent pull as Meta loses ground

Thinking Machines Lab is expanding fast—powered by a multibillion-dollar Google cloud deal for Nvidia GB300 chips—while drawing researchers away from Meta and across the AI industry.
Thinking Machines Lab is quietly turning a talent reshuffle into a competitive strategy, even as Meta’s recent staffing shifts continue to make headlines.
The most immediate change is infrastructure. Thinking Machines Lab has signed a multibillion-dollar cloud agreement with Google, positioning the startup to access Nvidia's latest GB300 chips, an advantage that matters because AI performance is often limited as much by hardware access as by model ideas. The deal also places Thinking Machines in a higher infrastructure tier alongside major AI players, a signal that the company is no longer trying to "catch up" from the sidelines.
There’s a second story running in parallel: people. Weiyao Wang, who spent eight years at Meta working on multimodal perception and open-world segmentation projects, has joined Thinking Machines Lab. He’s only one example in a broader pattern of movement that now cuts both directions: some Meta employees are leaving, while others appear to be heading out from Thinking Machines as well. The industry is treating this like a chess match where chips, compute credits, and research leadership all move together.
For readers watching the AI economy from the outside, the practical impact is simple: the companies that secure better compute access and attract research talent tend to accelerate their iteration cycles. That can reshape timelines for training, evaluation, and deployment, especially for startups that must prove momentum quickly. In other words, this is not just about who got hired. It’s about who can build and test faster when the window for funding and market relevance narrows.
Thinking Machines’ hiring streak includes several high-profile researchers with long Meta roots. Soumith Chintala, Thinking Machines’ CTO, previously spent 11 years at Meta and co-founded PyTorch, the open-source deep learning framework widely used across AI research. Other Meta veterans now involved include Piotr Dollár, who worked on influential segmentation research and has joined Thinking Machines’ technical staff, along with additional researchers and engineers from Meta’s FAIR and related teams. The theme is consistent: deep model and systems expertise, not just general AI interest.
But the startup’s talent pipeline doesn’t rely on Meta alone. It has also recruited researchers and engineers from a wider set of companies across the ecosystem, reflecting how the AI labor market increasingly operates like a network. Neal Wu, for example, brings both elite technical credentials and startup experience. Other hires add backgrounds from major tech firms and adjacent AI development efforts, including people associated with work at Apple, Microsoft, and established AI organizations.
This kind of cross-company recruiting has become a defining feature of the current AI phase. During earlier waves, new AI companies often built credibility by publishing research or partnering for compute. Now, credibility is increasingly tied to a combination of enterprise-grade infrastructure and recognizable research leadership. When compute partnerships land early, especially those involving cutting-edge Nvidia hardware, startups can turn scarce resources into sustained progress.
The financial backdrop adds another layer to the story. Thinking Machines is reported to be valued at around $12 billion, a figure that would have been hard to imagine for a young company in earlier cycles. That valuation won’t automatically guarantee success, but it can change how aggressively a startup hires and how quickly it scales experimentation. Meta’s well-known compensation packages, particularly for researchers, have long shaped the flow of talent. The current shift suggests that for some, the calculus is no longer salary alone; it’s also access to next-generation infrastructure and the chance to build a new technical agenda with a longer runway.
Meanwhile, the broader industry implication is that competition is increasingly a three-way race: infrastructure, talent, and execution speed. Cloud deals can lock in compute capacity and shorten the time from idea to iteration. Senior researchers can convert that compute into better model results and more reliable engineering. And if a startup like Thinking Machines can keep those elements aligned, it can challenge even the largest incumbents, not by matching everything at once but by moving faster at the most important bottlenecks.
With Thinking Machines’ headcount now around 140, the immediate question is whether the company can translate rapid growth into sustained product and research momentum. For Meta, the loss of seasoned researchers is a staffing issue. For the market, it’s a signal: the next competitive advantage in AI may be decided as much by who secures the best chips and teams at the same time as by which models sound most impressive on paper.