Today, Moore Threads officially announced that the unveiling ceremony for its KUAE Intelligent Computing Center, the country's first domestically produced thousand-card, hundred-billion-parameter model training platform, was successfully held. This also marks the official launch of the country's first large-scale computing cluster built on domestically produced full-featured GPUs.

Moore Threads CEO Zhang Jianzhong said that the company has built an intelligent computing product line spanning chips, graphics cards, and clusters. Relying on the versatile computing capabilities of its full-featured GPUs, it can meet the growing demands of large model training and inference.

According to reports, the Moore Threads KUAE intelligent computing center solution is based on full-featured GPUs and aims to solve the construction and operations-management problems of large-scale GPU computing power through integrated delivery.

The solution works out of the box, greatly reducing the time traditionally spent on building computing infrastructure, developing applications, and standing up operations and maintenance platforms, enabling a rapid market launch for commercial operation.

Currently, Moore Threads supports the training and fine-tuning of various mainstream large models, including LLaMA, GLM, Aquila, Baichuan, GPT, Bloom, and Yuyan.

On a Moore Threads KUAE thousand-card cluster, large model training at 70B to 130B parameters achieves a linear speedup ratio of 91%, with compute utilization remaining essentially unchanged as the cluster scales.
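As an illustration, a linear speedup ratio can be read as measured cluster throughput divided by the ideal throughput of N cards scaling perfectly. A minimal sketch (the function name and the throughput numbers below are hypothetical illustrations, not Moore Threads figures):

```python
def scaling_efficiency(single_card_tps: float, cluster_tps: float, num_cards: int) -> float:
    """Linear speedup ratio: measured cluster throughput vs. ideal N-card throughput."""
    return cluster_tps / (num_cards * single_card_tps)

# Hypothetical example: 1,000 cards, each delivering 1,000 tokens/s on its own;
# the cluster measures 910,000 tokens/s overall -> a 91% linear speedup ratio.
print(scaling_efficiency(1000.0, 910_000.0, 1000))  # 0.91
```

A ratio of 1.0 would mean perfect linear scaling; communication and synchronization overhead typically pull real clusters below that.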

Taking 200 billion tokens of training data as an example, Zhiyuan Research Institute's 70-billion-parameter Aquila2 can complete training in 33 days, and a model with 130 billion parameters can complete training in 56 days.
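From these wall-clock figures one can back out the implied end-to-end training throughput, assuming the 200 billion figure counts tokens (a rough sketch, ignoring checkpointing and restarts):

```python
def implied_throughput(total_tokens: float, days: float) -> float:
    """End-to-end training throughput (tokens/second) implied by a wall-clock figure."""
    return total_tokens / (days * 86_400)  # 86,400 seconds per day

# 70B-parameter model: 200B tokens in 33 days -> roughly 70,000 tokens/s
print(round(implied_throughput(200e9, 33)))
# 130B-parameter model: 200B tokens in 56 days -> roughly 41,000 tokens/s
print(round(implied_throughput(200e9, 56)))
```

The larger model's lower implied throughput is expected: at a fixed cluster size, compute per token grows roughly with parameter count.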