Anthropic has expanded its partnership with Google, agreeing to use up to one million of Google’s artificial intelligence chips, in a deal valued in the tens of billions of dollars, to train future versions of its Claude chatbot. The arrangement, announced on Thursday, grants Anthropic access to more than one gigawatt of computing capacity beginning in 2026. The startup will use Google’s tensor processing units (TPUs), which were previously reserved primarily for internal use.
Anthropic said it selected Google’s TPUs because of their efficiency, their price-performance ratio, and the company’s prior experience using them to train and run its Claude models. Google, a subsidiary of Alphabet, will also provide additional cloud computing services under the agreement.
The deal reflects rising competition in the AI sector and the growing demand for high-performance chips needed to train large language models. Google’s TPUs, available through Google Cloud, serve as an alternative to Nvidia’s graphics processing units, which remain in short supply. OpenAI, a major rival, has signed multiple agreements reportedly exceeding $1 trillion to secure about 26 gigawatts of computing capacity, relying on chips from Nvidia and AMD to meet its training needs.
