According to people familiar with the matter, Google has reached a new multi-year cloud computing and strategic partnership agreement with artificial intelligence startup Thinking Machines Lab, a deal reportedly worth billions of dollars and a sign that the search giant is deepening its investment in frontier large-model customers. The agreement follows Thinking Machines Lab's earlier large-scale compute procurement deal with NVIDIA, meaning the company will now be tied to industry-leading suppliers at both the chip and cloud-platform layers.

Thinking Machines Lab was founded in 2025 by former OpenAI CTO Mira Murati and is headquartered in San Francisco. It completed a US$2 billion seed round in its founding year at a valuation of approximately US$12 billion, with investors including Andreessen Horowitz, Accel, NVIDIA, and AMD, and it is regarded as one of the most closely watched frontier AI labs. The company positions itself as a developer of "general AI systems for human collaboration," emphasizing interpretability, customizability, and interdisciplinary capability, with the goal of narrowing the gap between frontier AI capabilities and the scientific community's understanding of them.

In March this year, Thinking Machines Lab announced a multi-year compute partnership with NVIDIA, under which it will deploy at least 1 GW of NVIDIA Vera Rubin systems in its training and inference infrastructure starting in 2027; NVIDIA will also make a strategic investment in the company. Based on NVIDIA CEO Jensen Huang's earlier estimate that a 1-gigawatt AI data center can cost "up to $50 billion," industry observers expect the overall value of that partnership over the contract term to reach billions of dollars or more.

Against this backdrop, the latest agreement with Google is seen as a key addition to the lab's compute footprint: NVIDIA supplies chips and dedicated systems, while Google provides large-scale GPU/TPU clusters, networking, storage, and engineering support through its cloud platform to train the lab's next generation of multimodal large models. Thinking Machines Lab had already established a relationship with Google Cloud around the time it closed its seed round; the new agreement is seen as expanding and locking in that relationship, giving Google a firmer infrastructure and ecosystem foothold in a lab viewed as a potential "next OpenAI or Anthropic."

According to people close to the transaction, beyond renting cloud compute, the agreement includes a package of joint engineering and commercial terms, such as co-developing training and inference systems around Google's next-generation TPU platform, optimizing networking and data pipelines for large-scale distributed training, and deep cooperation on security and compliance. What Google values is that by building deep ties with early frontier labs, it positions itself to capture meaningful returns as these customers grow, whether through model hosting, API distribution, or enterprise-grade solutions.

For Thinking Machines Lab, back-to-back heavyweight partnerships with NVIDIA and Google substantially strengthen its long-term supply of compute, supporting its research agenda of "building frontier AI models with reproducible results." At a time when the AI industry's demand for high-end GPUs and compute remains tight, these commitments reduce the risk that training plans will be constrained by resource supply, and lay the groundwork for a possible future launch of commercial APIs and research tools.

However, compute and cloud contracts of this size also mean both parties must offer convincing answers on cost recovery and paths to commercialization. For Google, turning high-risk, capital-intensive frontier-lab customers into a long-term growth engine for Google Cloud will be a focus of capital markets; for Thinking Machines Lab, still in its early stages, the test is whether it can reliably ship products, generate revenue, and realize its vision of "more understandable and customizable general AI systems" while continuing to spend heavily on compute.