Meta, OpenAI and Microsoft said at AMD's investor event on Wednesday that they will use AMD's latest artificial intelligence chip, the Instinct MI300X. The move shows that the tech industry is actively looking for alternatives to expensive Nvidia GPUs. The industry has high hopes for the MI300X; after all, as the saying goes, "the world has long suffered under Nvidia," whose flagship GPUs are not only expensive but also in very short supply. If the MI300X is adopted at scale, it could lower the cost of developing artificial intelligence models and put competitive pressure on Nvidia.

How much faster is it than Nvidia's GPUs?

AMD says the MI300X is built on a new architecture and delivers significantly better performance. Its standout feature is 192 GB of cutting-edge HBM3 high-bandwidth memory, which moves data faster and can hold larger artificial intelligence models.

AMD CEO Lisa Su directly compared the MI300X, and the systems built around it, with Nvidia's previous flagship GPU, the H100.

On basic specifications, the MI300X offers roughly 30% higher floating-point throughput than the H100, about 60% more memory bandwidth, and more than twice the memory capacity.
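
These ratios roughly line up with publicly reported peak specs. Below is a minimal sanity-check sketch; the figures used (about 1,307 TFLOPS FP16, 5.3 TB/s and 192 GB for the MI300X versus about 989 TFLOPS FP16, 3.35 TB/s and 80 GB for the H100 SXM) are assumptions taken from public spec sheets, not numbers given in the article:

```python
# Rough sanity check of the MI300X-vs-H100 ratios cited above.
# Spec figures are approximate public numbers (assumed), not from AMD's event.
mi300x = {"fp16_tflops": 1307, "mem_bw_tbs": 5.3, "mem_gb": 192}
h100   = {"fp16_tflops": 989,  "mem_bw_tbs": 3.35, "mem_gb": 80}

for key, label in [("fp16_tflops", "FP16 throughput"),
                   ("mem_bw_tbs", "memory bandwidth"),
                   ("mem_gb", "memory capacity")]:
    ratio = mi300x[key] / h100[key]
    print(f"{label}: MI300X is {ratio:.2f}x the H100 ({(ratio - 1) * 100:.0f}% higher)")
```

With these assumed figures the script prints roughly 1.3x FP16 throughput, 1.6x memory bandwidth and 2.4x memory capacity, consistent with the percentages AMD cited.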

The more natural benchmark, of course, is Nvidia's newest GPU, the H200. The MI300X still leads on specifications, but by a smaller margin: its memory bandwidth is only a single-digit percentage higher, while its memory capacity is nearly 40% larger.


Su said:

“This performance translates directly into a better user experience. When you ask a model a question, you want it to answer faster, especially as the answers become more complex.”

Lisa Su: AMD doesn’t need to beat Nvidia; second place can still live well

The main question facing AMD is whether companies that have been relying on Nvidia will invest the time and money to add another GPU supplier.

Su also acknowledged that switching to AMD's chips does take "work" on these companies' part.

AMD told investors and partners on Wednesday that it has improved ROCm, its software suite that competes with Nvidia's CUDA. CUDA has been one of the main reasons AI developers favor Nvidia today.
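
One practical reason the switching cost can be manageable for framework-level users is that ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda device interface, so much CUDA-oriented Python code runs unchanged. A minimal sketch, assuming a PyTorch build with either ROCm or CUDA support is installed:

```python
# Device-agnostic PyTorch sketch: on a ROCm build of PyTorch, AMD GPUs are
# surfaced through the same torch.cuda API, so this code runs unchanged on
# an MI300X or an Nvidia GPU (falls back to CPU if neither is available).
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Running on:",
      torch.cuda.get_device_name(0) if device.type == "cuda" else "CPU")

# A toy inference-style step: one half-precision matmul, the dominant
# operation in transformer workloads.
x = torch.randn(4096, 4096, dtype=torch.float16, device=device)
w = torch.randn(4096, 4096, dtype=torch.float16, device=device)
y = x @ w
print(y.shape)
```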

Price also matters. AMD did not reveal MI300X pricing on Wednesday, but it will almost certainly undercut Nvidia's flagship chips, which sell for around $40,000 each. Su said AMD's chips must cost less to buy and to operate than Nvidia's in order to convince customers to switch.

AMD also said it has already signed MI300X orders with some of the large companies that need GPUs the most.

Meta plans to use the MI300X for artificial intelligence inference workloads. Microsoft Chief Technology Officer Kevin Scott said the company will deploy the MI300X in its Azure cloud computing service, and Oracle's cloud will use the chip as well. OpenAI, for its part, will support AMD GPUs in Triton, its GPU programming software.
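
Triton is OpenAI's open-source, Python-embedded language for writing GPU kernels; the significance of AMD support is that a kernel like the one below can in principle be compiled for AMD hardware as well as Nvidia's. A minimal vector-add sketch in the style of the standard Triton tutorial (not code shown at AMD's event):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

# Usage: runs on whichever GPU backend the installed Triton build targets.
a = torch.randn(10_000, device="cuda")
b = torch.randn(10_000, device="cuda")
assert torch.allclose(add(a, b), a + b)
```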

According to the latest report from research firm Omdia, Meta, Microsoft and Oracle are all among the biggest buyers of Nvidia's H100 GPUs in 2023.

AMD gave no sales forecast for the MI300X, estimating only that its total data-center GPU revenue in 2024 will be roughly US$2 billion. Nvidia's data-center revenue exceeded $14 billion in the most recent quarter alone, though that figure includes chips other than GPUs.

Looking ahead, AMD now expects the market for artificial intelligence GPUs to climb to US$400 billion by 2027, double its previous forecast. That is a measure of how high expectations for high-end AI chips have become.

Su also told the media frankly that AMD does not need to beat Nvidia to do well in this market; the implication is that second place can live well too.

When talking about the AI chip market, she said:

"I think it's clear to say that Nvidia is definitely the market leader now, and we believe this market could be over $400 billion by 2027. We can get a piece of that pie."