After announcing strong first-quarter 2026 financial results, AMD CEO Lisa Su said on the earnings call that the advent of Agentic AI (agent-based artificial intelligence) is pushing CPU usage in data centers to unprecedented heights. Under this trend, she said, the ratio of CPUs to GPUs in a single computing node is gradually moving from the one-to-many of the past toward one-to-one, and in the future there may even be more CPUs than GPUs.

Answering analysts' questions, Su noted that traditional AI training and inference clusters typically pair one CPU with four to eight GPUs, with the CPU acting mainly as a "host" responsible for scheduling and launching GPU compute tasks. In the Agentic AI mode, however, large numbers of autonomous agents rely on the host CPU for continuous state updates, task orchestration, and coordination, which is fundamentally changing the shape of the compute node.
According to Su, as the number of agents grows rapidly, the CPU-to-GPU ratio is moving toward 1:1. She even suggested that if future clusters run "extremely large numbers" of agents, a single node could plausibly contain more CPUs than GPUs. In other words, the GPU-led wave of accelerated-computing expansion of the past few years is now being overlaid by a wave of CPU demand driven by agent workloads.
Agentic AI, in essence, runs multiple autonomous "agents" on top of a large language model (LLM) to complete complex task workflows automatically. In a software development scenario, for example, an agent can review code on its own, implement changes, wait for compilation to finish, and go on fixing newly discovered bugs, with almost no human intervention along the way. To coordinate, schedule, and orchestrate these agents running in parallel, however, the system must rely on the CPU for continuous control and management.
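The division of labor described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the agent names and the `llm_inference` stand-in are invented for this example, not taken from any real system): only the model-inference call would be served by GPUs, while the loop that tracks agent state, decides next steps, and fans out concurrent agents runs entirely on the host CPU — the bookkeeping that, at scale, drives the CPU demand Su describes.

```python
import asyncio
import random

async def llm_inference(agent: str, task: str) -> str:
    # Stand-in for a model call: in a real system this would be
    # served by GPUs; everything around it stays on the host CPU.
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return f"{agent}: result for {task!r}"

async def agent_loop(name: str, tasks: list[str], results: list[str]) -> None:
    # CPU-side work: tracking each agent's state, choosing the
    # next step, and scheduling every inference call.
    for task in tasks:
        output = await llm_inference(name, task)
        results.append(output)  # state update handled on the CPU

async def orchestrate() -> list[str]:
    # The orchestrator fans out many agents concurrently; with
    # thousands of agents, this coordination dominates CPU load.
    results: list[str] = []
    agents = {
        "reviewer": ["review diff"],
        "builder": ["compile", "run tests"],
        "fixer": ["patch bug"],
    }
    await asyncio.gather(
        *(agent_loop(name, tasks, results) for name, tasks in agents.items())
    )
    return results

if __name__ == "__main__":
    for line in asyncio.run(orchestrate()):
        print(line)
```

As the number of concurrent `agent_loop` instances grows, the scheduling and state-update work scales with the agent count while the GPU work per step stays fixed, which is the dynamic behind the shift toward a 1:1 CPU-to-GPU ratio.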
Under such workloads, the CPU is no longer a supporting player that merely kicks off GPU training or inference; it has become the hub driving the entire Agentic AI system. As more and more tasks are split up and delegated to agents, CPU utilization is pushed to extremely high levels, even amid the rapid expansion of GPU-accelerated computing. The report quoted AMD as saying the company is currently "selling almost all the CPUs it can supply to AI labs and hyperscale cloud providers" to meet this new wave of agent-driven demand.
This also suggests that in future AI infrastructure design, the relationship between CPU and GPU may shift from master-slave to something more balanced, or even CPU-heavy. For chip suppliers, Agentic AI not only sustains demand for GPUs but could also open a new round of growth in the server CPU market.