Google has signed a new agreement with the Pentagon granting its classified networks access to Google's AI tools, permitting nearly all "lawful uses," after Anthropic declined a request from the U.S. Department of Defense to authorize sensitive uses. According to multiple media reports, the arrangement allows the Department of Defense to deploy Google's AI more widely in classified environments for intelligence analysis, decision support, and other scenarios.

The dispute was sparked by Anthropic's public stance against the Trump administration. The Pentagon had sought to use Anthropic's models with few restrictions, including for domestic mass surveillance and autonomous weapons systems, but Anthropic insisted on adding explicit "guardrails" to the contract prohibiting such uses. Because Anthropic refused to cooperate with these use cases, the Department of Defense subsequently placed the company on a "supply chain risk" list, a label typically reserved for entities deemed "foreign adversaries."
Anthropic has sued over the designation. In March of this year, a judge granted Anthropic's request for a temporary injunction against the "supply chain risk" determination while the case is pending, temporarily blocking its administrative consequences from spreading.
Amid this conflict, other large AI companies moved quickly to turn Anthropic's loss into their own gain. OpenAI promptly signed an agreement with the Department of Defense. Later, xAI also gained access to classified networks, prompting some members of the U.S. Congress to question the Pentagon's decision-making. Google has now become the third company to reach a similar arrangement with the military, further expanding the lineup of large models available to the U.S. defense system.
According to the Wall Street Journal, the text of Google's agreement with the Pentagon also includes language stating that the company "does not intend" for its AI to be used for domestic mass surveillance or autonomous weapons, similar to parts of the OpenAI contract. However, the report noted that it remains unclear whether these provisions are legally binding, and how they would be monitored and enforced in practice.
Notably, Google is advancing this large military contract while internal opposition has yet to subside. So far, 950 Google employees have signed an open letter calling on the company to follow Anthropic's lead and not sell AI capabilities to the Department of Defense without clear, enforceable usage restrictions. Google has not responded to the open letter or disclosed the specifics of the new contract, nor has it answered media requests for comment.
As artificial intelligence rapidly penetrates public security and military domains, the episode reflects an increasingly acute clash of values among technology giants, governments, and the companies' own employees. On one side, the military seeks to harness cutting-edge AI tools as fully as possible to enhance its capabilities; on the other, some technology companies and their employees try to set limits on military and security applications of AI through contract terms and public pressure. Google's decision to expand cooperation with the Pentagon rather than follow Anthropic's harder line has further intensified the debate over who should draw the red lines for AI's use.