The Trump administration is weighing a package of administrative measures to tighten controls on cutting-edge artificial intelligence models in response to escalating national security risks. Discussions within the White House over new rules, including a formal review-and-release mechanism for advanced AI models, have been under way for some time, according to seven technology industry representatives and policy advisers who were granted anonymity.

In recent closed-door talks with industry, one idea the White House floated was an executive order establishing a review system to evaluate the potential impact of so-called "frontier" AI models, multiple people familiar with the matter said. Under this scheme, companies might need a "green light" from the federal government before releasing high-capability models, according to an AI policy expert and an industry source. The New York Times previously reported that the White House was considering a similar review structure.
A White House spokesman said that any formal policy would be announced by President Donald Trump himself, and that current talk of possible executive orders remains "speculation." Meanwhile, a growing number of technology companies are voluntarily cooperating with the government, submitting new models for review before release. On Tuesday, the Trump administration reached an agreement with Microsoft, xAI, and Google DeepMind allowing the government to conduct national security risk assessments of next-generation models before they are made public.
These moves come amid rising public unease about AI, including concerns about the safety of the technology itself and questions about the industry’s enormous spending on political campaigns. A POLITICO poll released earlier this month showed that U.S. voters are broadly skeptical of artificial intelligence. Against this backdrop, a formal pre-deployment review mechanism is just one of several administrative measures the White House is considering. Other ideas include taking a tougher line on the security risks posed by AI and narrowing the technology industry’s room to push back against government security and policy requirements.
The administration is preparing a 16-page draft executive order that would bar the private sector from “interfering” in the government’s use of AI models, according to four people familiar with the matter. The draft would also tighten federal procurement and contracting standards and give the government greater leverage in its dealings with AI vendors. The provisions are widely seen as a direct response to a recent standoff between the White House and AI company Anthropic, which has refused to allow the military to use its model Claude to surveil U.S. citizens or power autonomous weapons, drawing an outcry from the Defense Department.
In response, Defense Secretary Pete Hegseth in March designated Anthropic a "supply chain security risk," a rare move that immediately restricted federal agencies' ability to use the company's products. Many observers say the current round of AI policymaking marks an important shift in the Trump administration’s regulatory thinking. Until now, under lobbying from laissez-faire venture capitalists such as David Sacks and Marc Andreessen, the White House had taken a relatively permissive, "light-touch" approach to regulating and overseeing the AI industry.
This apparent about-face is putting the technology sector on alert. Some industry representatives worry that tighter government control will slow the pace of innovation. Daniel Castro, vice president of the Information Technology and Innovation Foundation, a think tank, said no one wants to enter a world where "every new version of the model needs to be submitted to the government for approval first." He warned that "Silicon Valley speed" is very different from "Washington speed," and that the United States must keep moving quickly if it wants to compete with China on AI.
The proposed executive order also takes aim at new cybersecurity threats posed by cutting-edge AI, particularly Anthropic’s new model Mythos. Though the model has not yet been released to the public, early test results from governments and large institutions show that Mythos can find and exploit software vulnerabilities in ways far beyond the capabilities of human hackers. Two people familiar with the discussions said the draft contemplates technical guidelines and best practices for "open-weight models" to strengthen security protections. Such models expose their trained weights, allowing users to retrain and modify them for different tasks. The White House is also considering mobilizing the intelligence community to help defend critical systems against threats from cutting-edge AI, three people familiar with the matter added.
The potential risks posed by Mythos have put senior Trump administration officials on high alert. Several officials worry that, amid the standoff with Anthropic, federal agencies will struggle to obtain access to Mythos for "stress-testing" critical systems. In recent weeks, the White House has moved to cool tensions with the company. The administration is working to establish a review board to reassess the supply chain risk designation imposed on Anthropic, two sources said, though it is unclear whether this arrangement will appear in the text of the final AI executive order.
Viewed from the outside, the emergence of Mythos is reshaping the White House's internal debate over AI and national security. Saif Khan, who served as an emerging technology adviser in the Biden administration and is now a researcher at the Institute for Progress, a think tank, said the government had previously shown a degree of dismissiveness toward such risks, "but now, many people are beginning to take this matter extremely seriously." In his view, the era of AI policy guided solely by the logic of Silicon Valley venture capital may be coming to an end within the Trump administration.