In the spring of 2026, Dario Amodei suddenly became a Silicon Valley nuisance. Jensen Huang, all but naming Amodei directly, fiercely criticized CEOs who keep predicting that AI will eliminate jobs on a large scale as suffering from a "God complex": once you become a CEO, it is easy to think you know everything. Altman said Amodei was using "fear" for marketing, and Yann LeCun flatly said he understood nothing about how technological revolutions affect employment.

Even the media began to ask: why doesn't an AI boss who keeps warning about the end of the world simply stop himself?
Amodei offends people, of course, not merely because his moral sense runs too high and his emotional intelligence too low.
What is subtle is that he has genuinely believed in the risks of AI for a long time, and has genuinely turned that belief into Anthropic's sharpest commercial weapon.
The company he founded after leaving OpenAI is no longer just a research team flying the banner of "AI safety." Claude Code has become one of the most powerful products in the enterprise AI market, with annual revenue reaching billions of dollars. More striking still, according to Business Insider, Anthropic's valuation in the private secondary market has exceeded US$1 trillion.
When a person stands on the moral high ground reminding everyone to slow down, while at the same time betting in the middle of the field, and betting ever larger, it is hard for him not to become a target.
01
Public Enemy
Amodei is becoming the most unwelcome person in the AI industry.
The latest to open fire is Jensen Huang.
On a podcast, Jensen Huang said that technology CEOs who frequently predict AI will cause large-scale unemployment, or even bring about the risk of human extinction, suffer from a "God complex."
His general point: once you sit in the CEO's chair, it is easy to start believing you know everything, but public discussion about AI should return to the facts rather than be led by exaggerated doomsday narratives.

He named no names, but it is hard not to think of Amodei.
And this is not the first time Huang has been provoked by Amodei's AI risk narrative.
Amodei has long supported stricter chip export controls and even wrote a long article calling for stronger restrictions. At this year's Davos Forum, he compared exporting advanced AI chips to China to "selling nuclear weapons."
Jensen Huang, naturally, would not accept that framing.
On another podcast, when asked about the analogy, Huang called it outright "ridiculous," arguing that comparing AI chips to nuclear weapons is a poor and illogical comparison.

If Jensen Huang's counterattack stems from the direct conflict of interest between Nvidia and Anthropic over chip policy, OpenAI's attack looks more like a head-on clash between old rivals.
In a podcast conversation, Altman said Anthropic is using "fear" for marketing.
He used a rather harsh metaphor: it's like someone saying, we built a bomb and will drop it on your head soon, and then offering to sell you a bomb shelter for $100 million.

The remark clearly points at the Claude Mythos Preview that Anthropic had just released. By Anthropic's own account, this cybersecurity model is too powerful to be opened to the public for now; instead it will be put into a project called "Wings of Glass" to do defensive security work for partner organizations.
This is not the only "attack" from OpenAI. On April 13, The Verge disclosed a four-page internal memo that OpenAI chief revenue officer Denise Dresser had sent to employees. Its theme is how to win the enterprise AI market, but one section is devoted to Anthropic, dismantling the rival almost point by point.

The memo writes that Anthropic's story is built on "fear, restrictions, and the notion that a small elite should control AI."
Dresser also argued that Anthropic relies too heavily on coding scenarios and has insufficient computing power reserves, and she questioned its revenue claims: because Anthropic books its revenue shares with Amazon and Google on a gross basis, its annualized revenue of US$30 billion is overstated by roughly US$8 billion.
Amodei and Altman are old enemies, and OpenAI and Anthropic are direct competitors; Jensen Huang's chip business has been directly affected by Amodei's policy proposals. Their counterattacks are easy to understand.
But it’s not over yet.
Yann LeCun also publicly criticized Amodei on X. This time the dispute was over AI's impact on employment.
In 2025, Amodei said in an interview with Axios that AI may eliminate half of entry-level white-collar jobs in the next one to five years and push the unemployment rate to 10% to 20%. He also said that AI companies and regulators cannot continue to sugarcoat or downplay the coming impact.
In response to Amodei's judgment on employment, Yann LeCun wrote on X: "Dario is wrong" and "he knows nothing about how technological revolutions affect the labor market." Such questions, he added, should be left to economists rather than to anyone in the AI circle, LeCun himself included.

Even public opinion began to waver.
Two examples from well-known outlets: The Times asked why an AI giant who warns of the end of the world doesn't stop himself. TechCrunch pointed out that Anthropic's restricted release of Mythos, ostensibly to protect internet security, may also help it lock in large corporate customers and prevent small companies from replicating its capabilities through distillation, thereby protecting its own business interests.

Clearly, Amodei has led Anthropic to occupy a kind of moral high ground. But the higher the ground you stand on, the bigger a target you become.
The question is: is Amodei under attack simply because he is too principled, too insistent on "safety," too willing to offend?
The harder Amodei tries to be a "moral pacesetter," the more glaring his moral paradox becomes: he stands in the middle of the field placing bets while reminding everyone that the game may be toxic.
It is an awkward position.
02
How the "Security King" Came About
In the memo, OpenAI chief revenue officer Denise Dresser used one word: "elite."
It is indeed one of the most eye-catching labels attached to Amodei.
His confrontations from the moral high ground have almost become the pinnacle of his reputation. Facing demands from the U.S. Department of War, Anthropic refused to relax the boundaries on Claude's use, insisting it could be used neither for large-scale surveillance nor in fully autonomous weapons without human participation in decision-making.

Amodei said at the time that they "could not in good conscience agree" to the request. In that moment the public cheered with a long-lost sense of release: a Silicon Valley elite had finally stood up again and said "no" to a greater power.
To understand where this "security king" comes from, we have to look back at his origins.
Amodei was born in San Francisco in 1983. His father, an Italian leatherworker from Tuscany, suffered long-term health problems and died when Amodei was young; his mother, a Jewish-American born in Chicago, worked as a library project manager.
Amodei was a model science prodigy from childhood. He attended one of San Francisco's most famous elite public high schools and was selected for the U.S. Physics Olympiad team in 2000. For college he went to the California Institute of Technology, one of America's top science and engineering schools, then transferred to Stanford for an undergraduate degree in physics, and finally completed a PhD in biophysics at Princeton.
He is a technical figure who spans physics, neuroscience, and AI research, which also shapes his layered perspective.

In 2014, Andrew Ng recruited Amodei into Baidu's artificial intelligence laboratory in Silicon Valley to work on speech recognition, and he became one of the authors of the Deep Speech 2 paper. The system focused on end-to-end speech recognition covering English and Mandarin, and attempted to redo the traditional speech recognition pipeline with larger-scale data, compute, and training. It was during this period that Amodei developed his early instinct for scale. He later recalled in an interview that when the model was larger, the data more plentiful, and the training longer, the results kept getting better. The scaling belief that would later run through GPT's and Claude's generation of large models was, for him, laid down in Baidu's speech recognition work.
Daniela Amodei also plays an important role. She too left OpenAI, and the siblings co-founded Anthropic.
The siblings form a dual-core structure: Amodei is the face of the technical line and the safety narrative, while Daniela handles company operations, organization building, and business development. Anthropic's distinctive temperament today, looking like a security research institution on one side and a fast-financing, fast-expanding AI company on the other, comes largely from this pairing.

This family bond also gives Anthropic a particular stability, making the company feel like a small group that split from the old organization, with a strong self-identity and a conviction that it wants to do AI another way.
At the end of 2020, OpenAI issued a very polite organizational update announcing that Amodei, then vice president of research, was leaving. The post thanked him for his contributions over the previous five years, noting his work on GPT-2 and GPT-3 and his role in setting research directions alongside Ilya Sutskever and others.
Most subtly, OpenAI also wrote that Amodei and several colleagues planned to start a new project that "may focus less on product development and more on research." Most of the rest of the page was devoted to showcasing OpenAI's commitment to safety.

Years later, when the conflict between Anthropic and OpenAI became public and people looked back on that amicable "breakup," it was not hard to see that the two sides already differed over how the growth of AI capabilities and safety boundaries should be prioritized.
A few months later, Anthropic was founded. Since then, security has slowly transformed from Amodei’s personal stance into the backbone of the company.
It has the RSP, the "Responsible Scaling Policy," which uses ASL levels to set risk requirements for models at different capability stages, a bit like an AI version of biosafety levels; it guides model behavior through a set of "constitutional" principles; and it keeps investing in interpretability research, attempting to open the model's black box.
Amodei is certainly profiting from "safety," but only because he also genuinely believes in it over the long term. From leaving OpenAI, to founding Anthropic, to the RSP, the AI "constitution," interpretability, model risk classification, and the boundaries he drew around defense contracts, his behavior has been remarkably consistent.
This is where Amodei gets complicated.
He has a strong elite self-confidence: I have seen the greater risks, so I am qualified to remind everyone to be slower, stricter, more afraid. But it is the same confidence that makes him look condescending.
And once Amodei started running a commercial company, once "morality" met business, that delicate balance became even harder to maintain.
03
When "Safety" Becomes the Company's Main Line
“Safety” is both Anthropic’s business and its screening mechanism.
What makes Anthropic special is that it was, from the beginning, a new organization formed by a group of people who left OpenAI, for reasons spanning technical routes, safety philosophy, and disagreement over "who should define the future of AI."
So culture matters particularly at Anthropic; it is more like an operating system.

Amodei is an organizational designer with a researcher's temperament. He once said in an interview that about a third, even 40%, of his time goes to making sure Anthropic's culture stays healthy.
On the surface Anthropic talks constantly about caution, restraint, and boundaries, but internally the culture is anything but lukewarm. Anthropic runs Slack channels that work like personal public notebooks, where employees write down their ideas and work progress, and even debate Amodei directly.
Predictably, both Anthropic's hiring bar and the continuous shaping and collision of its internal culture have formed a kind of filtering mechanism for talent.
Over the past year, Meta has waved its checkbook everywhere to strengthen its AI team. Talent prices in the AI industry have been bid up to exaggerated levels, and top researchers and engineers have become virtual superstars in a free market. Facing this kind of poaching, some companies instinctively match the price, or at least declare that raising salaries and keeping people comes first.

Amodei did not. He has publicly explained that Anthropic will not break its compensation principles just because an outside company makes someone a sky-high offer. If Zuckerberg throws a dart at someone's name at random, he said, that does not mean the person should earn ten times more than the equally talented colleague beside them.
More interesting still, Amodei said some Anthropic employees, when approached by Meta, were unwilling even to talk to Zuckerberg. There is certainly an element of showing off here, but what it really conveys is that the "consensus" among Anthropic employees exists and is strong.
Amodei must convince employees that Anthropic is not another AI company that just wants to run faster; at the same time, he must lead this company to really run fast enough. Otherwise, no matter how beautiful the safety narrative is, it will just become a moral commentary on the sidelines of the game.
"Safety" has also proven a great success as a business.
From the beginning, Anthropic naturally attracted customers with higher requirements for reliability and controllability. When Claude first launched, it emphasized reliability, predictability, and steerability; its early partners were knowledge production, search, education, and workflow tools such as Notion, Quora, DuckDuckGo, and Juni Learning.
Anthropic does not sell the "cheapest model." Enterprise, government, code, finance, healthcare, education, the public sector: these customers care more about stability, compliance, safety boundaries, and long-term liability. The pricing makes clear that Claude is not on the low-price route. Claude Opus 4.7 costs US$5 per million input tokens and US$25 per million output tokens; Claude Sonnet 4.6 is US$3 input and US$15 output. By comparison, xAI's Grok 4.20 charges $2 input and $6 output, clearly more aggressive on price, while OpenAI's GPT-5.5, at $5 input and $30 output, sits in the same high bracket as Claude Opus.
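As a rough sketch of how these list prices translate into per-request cost, the arithmetic can be laid out as below. The prices are the ones quoted above as reported by the article (not independently verified), and the 2,000-input / 500-output request size is a hypothetical workload chosen purely for illustration:

```python
# Per-million-token list prices (USD) as quoted in the article above;
# figures are the article's claims, not verified against vendor pages.
PRICES = {  # model: (input price, output price) per 1M tokens
    "Claude Opus 4.7": (5.0, 25.0),
    "Claude Sonnet 4.6": (3.0, 15.0),
    "Grok 4.20": (2.0, 6.0),
    "GPT-5.5": (5.0, 30.0),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request at the quoted list prices."""
    p_in, p_out = PRICES[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

# Hypothetical workload: 2,000 input tokens, 500 output tokens per request.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 500):.4f} per request")
```

At this workload, the Opus-class request costs roughly three times the Grok-class one, which is the gap the enterprise customers described above are evidently willing to pay for.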

Reuters also previously reported that Anthropic has more than 300,000 commercial and enterprise customers, contributing about 80% of revenue. Enterprise customers pay by usage, retention is more stable, and the room for expansion is larger; once Claude is embedded in code, office workflows, cloud platforms, and government systems, the revenue is incomparable to ordinary subscription products.
By contrast, OpenAI has made the biggest splash on the consumer side with ChatGPT, and it has indeed educated mass users at unprecedented scale; but having many consumer users does not mean having the best revenue structure.
OpenAI's latest statement is that enterprise revenue has exceeded 40% of the total, and it hopes to catch up with the consumer side by the end of 2026. In other words, even OpenAI is working hard to break into the enterprise market.

Anthropic's cooperation with the U.S. Department of War (DoW) is a typical example. For a long time, Anthropic was the only AI company the DoW worked with in the "classified domain": Claude is used for tasks such as intelligence analysis, modeling and simulation, combat planning, and cyber operations.
Seen this way, the fierce "struggle" between Anthropic and the DoW shows Amodei's ability to run safety as a business: by holding the line on "mass surveillance" and "autonomous weapons," even at the cost of the DoW deal, Anthropic raised its public visibility and its reputation for "reliability" another level, and won unprecedented favor in the capital market.
Anthropic's annualized revenue surged from US$9 billion at the end of 2025 to US$30 billion in April 2026; in February this year it closed a US$30 billion financing at a US$380 billion valuation, after which Google planned to invest up to US$40 billion. Some reports put its secondary-market valuation at US$1.1 trillion.
Security sounds like a restraint, but at Anthropic it ends up being a higher-level business language.
But this management technique certainly has its dangerous side.
When a company turns "safety" into an organizational belief, it naturally acquires a sense of moral superiority, and it becomes easier to portray competitors as irresponsible and to package business choices as value choices.
This is also why the more successful Anthropic becomes, the more uncomfortable Amodei makes everyone else.
04
Two Sides of "Security"
It would be a bit naive to regard Amodei simply as a "STEM guy" with ideals and bottom lines but low emotional intelligence, who offends people only because he always tells the truth.
The Wall Street Journal once ran an article looking back on the ten-year feud between Amodei and Altman.
Amodei's story at OpenAI is far from explained by "differences over safety philosophy."

In 2018, after Musk withdrew, Amodei agreed to stay on the condition that co-founder Brockman and Sutskever not hold power: his first conversation was about the distribution of power, not technical routes.
Later, on a key model project, he and his sister Daniela teamed up to block Brockman from joining, one reason being that a certain core researcher "didn't want to work with him." That researcher later described being used as a "proxy weapon" by senior executives, which shows how well Amodei knew how to use people to build alliances.
As GPT-2 and GPT-3 took off, he grew increasingly sensitive about credit and exposure: he was unhappy that Brockman "stole the limelight" on a podcast, and angry when he learned that Altman and Brockman were going to meet Obama without him.
The following year, he asked to be promoted to vice president of research. Altman agreed, and the board email included the line that "Amodei promises not to belittle projects he does not approve of," which read like a ceasefire clause.
These details show that safety is both Amodei's creed and a sharp organizational weapon. He negotiates terms, fights for projects and for the right to speak, and uses the language of "risk" to define who stays in the core.
His eventual departure from OpenAI to found Anthropic was less an idealist's exit than a relocation of the battlefield after losing an internal power struggle.
Once he took the helm at Anthropic, the situation grew more complicated. Anthropic is no longer an idealized testing ground inside a research lab but a fast-growing commercial battleship, and safety has shifted from a pure principle into a solid product selling point.
The more Anthropic emphasizes the risks of AI in a high-profile manner, the more it highlights the need for its own existence; the more it names others for running too fast, the more it can package itself as the most reliable partner of enterprises, governments and regulators.
Its narrative naturally carries a moral comparison. Choosing Claude is not just choosing a model, but also choosing a more responsible and controllable route.
For competitors, this is infuriating. You are just selling products, and in passing you make everyone else look like irresponsible gamblers. Safety has become a moat of differentiation and a marketing tool: enterprise customers will pay more for "peace of mind," and government departments are more willing to hand sensitive projects to the "safest" company.
In this process, security has also become a political bargaining chip.
Axios reported that in the first quarter of 2026, Anthropic’s federal lobbying expenditures reached US$1.6 million, surpassing OpenAI’s US$1 million and setting its own record; it had previously announced an investment of US$20 million in a bipartisan advocacy organization focusing on AI transparency and safety guardrails, and plans to expand its policy team and establish a long-term office in Washington.
To put it bluntly, Amodei is already competing for the rules of the game in the AI era.
The complexity of Amodei lies in the fact that these two things may be true at the same time: he does believe in AI security, and he has indeed turned AI security into Anthropic’s sharpest commercial weapon. Security is both his religion and his lever of power.
Principles and interests are not simply opposites, but feed and strengthen each other. It is this intertwining of truth and falsehood that makes him both praised as the "security king" and also regarded by some as a shrewd trader.
The better Anthropic can explain risks, the more opportunity it has to define risks; the more opportunity it has to define risks, the more it can place itself at the center of the rules of the AI era.
This is Amodei's "safety magic," and "public enemy" is a necessary side effect of performing it.
Maybe he enjoys it.