The "GPU poor" are about to bid farewell to their predicament! Just now, NVIDIA released an open source software TensorRT-LLM, which can accelerate the reasoning of large language models on H100. So, how many times can it be improved? After adding TensorRT-LLM and its series of optimization functions (including In-Flight batch processing), the total model throughput increased by 8 times.
GPT-J-6B: A100 vs. H100, with and without TensorRT-LLM
In addition, taking Llama 2 as an example, an H100 with TensorRT-LLM delivers 4.6x the inference performance of an A100 alone.
Llama 2 70B: A100 vs. H100, with and without TensorRT-LLM
Netizens say that the super-powerful H100, combined with TensorRT-LLM, will completely change the state of large language model inference!
TensorRT-LLM: A powerful tool for accelerating large model inference
Currently, the enormous parameter counts of large models make deployment and inference difficult and expensive.
TensorRT-LLM, developed by NVIDIA, aims to dramatically increase LLM throughput and reduce cost on GPUs.
Specifically, TensorRT-LLM wraps TensorRT's deep learning compiler, FasterTransformer's optimized kernels, pre- and post-processing, and multi-GPU/multi-node communication in a simple, open-source Python API.
NVIDIA has further enhanced FasterTransformer to make it a productized solution.
As a result, TensorRT-LLM offers an easy-to-use, open-source, modular Python application programming interface.
Developers need no deep expertise in C++ or CUDA to deploy, run, and debug a wide range of large language models, while still getting top performance and rapid customization.
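As a rough illustration, here is a minimal sketch of what generating text through the Python API can look like. It is based on the high-level `LLM` entry point found in public TensorRT-LLM releases; class and argument names in the early-access build described here may differ, and the model checkpoint is just a placeholder.

```python
# Minimal sketch of text generation via the TensorRT-LLM Python API.
# Based on the high-level `LLM` class in public releases; names may
# differ in the early-access version, and the checkpoint is a placeholder.
from tensorrt_llm import LLM, SamplingParams

# Build an optimized engine from a Hugging Face checkpoint and load it.
llm = LLM(model="meta-llama/Llama-2-7b-hf")

prompts = ["TensorRT-LLM accelerates inference by"]
params = SamplingParams(max_tokens=32, temperature=0.8)

for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```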
According to NVIDIA’s official blog, TensorRT-LLM optimizes LLM inference performance on NVIDIA GPUs in four ways.
First, TensorRT-LLM comes with support for more than 10 popular large models out of the box, which developers can run immediately.
Second, TensorRT-LLM is an open-source software library that lets an LLM run inference across multiple GPUs and multiple GPU servers simultaneously.
The GPUs within a server are connected via NVIDIA's NVLink, and the servers are connected via InfiniBand.
Third is "in-flight batching," a brand-new scheduling technique that allows requests to enter and leave the GPU independently of one another.
Finally, TensorRT-LLM is optimized to use the H100's Transformer Engine, reducing memory usage and latency during model inference.
Next, let’s take a closer look at how TensorRT-LLM improves model performance.
Support for a rich LLM ecosystem
TensorRT-LLM provides very good support for the open source model ecosystem.
The largest and most advanced language models, such as Meta's Llama 2 70B, require multiple GPUs working together to respond in real time.
Previously, to achieve optimal LLM inference performance, developers had to rewrite the AI model, manually split it into fragments, and coordinate execution across GPUs.
TensorRT-LLM uses tensor parallelism to distribute the weight matrices across devices, simplifying this process and enabling efficient inference at scale.
Each model can run in parallel on multiple GPUs and multiple servers connected via NVLink, without developer intervention or model changes.
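To make the idea concrete, here is a toy NumPy sketch of tensor parallelism: a single weight matrix is split column-wise across hypothetical "devices," each computes a partial matmul, and the shards are concatenated. This only illustrates the math; TensorRT-LLM implements it across real GPUs with collective communication.

```python
import numpy as np

# Toy illustration of tensor parallelism: one matmul's weight matrix is
# split column-wise across "devices", each computes a partial result,
# and the shards are concatenated. TensorRT-LLM does this across real
# GPUs, using collective communication to gather the pieces.
num_devices = 2
x = np.random.randn(1, 8)                    # activations for one token
W = np.random.randn(8, 16)                   # full weight matrix

shards = np.split(W, num_devices, axis=1)    # one column shard per device
partials = [x @ w for w in shards]           # each device's local matmul
y_parallel = np.concatenate(partials, axis=1)

assert np.allclose(y_parallel, x @ W)        # matches the unsharded result
```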
As new models and model architectures emerge, developers can optimize them using the latest NVIDIA AI kernels open-sourced in TensorRT-LLM.
The supported kernel fusions include a cutting-edge FlashAttention implementation and masked multi-head attention for the context and generation phases of GPT model execution, among others.
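For reference, the operation those fused kernels accelerate looks like the following single-head causal (masked) attention, written naively in NumPy. Real kernels such as FlashAttention compute the same result without materializing the full score matrix.

```python
import numpy as np

# Naive single-head causal (masked) attention, for illustration only.
def masked_attention(q, k, v):
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (T, T) attention scores
    mask = np.triu(np.ones_like(scores), k=1)      # 1 above the diagonal
    scores = np.where(mask == 1, -np.inf, scores)  # block attending ahead
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ v                             # weighted sum of values

T, d = 5, 8
q, k, v = (np.random.randn(T, d) for _ in range(3))
print(masked_attention(q, k, v).shape)  # (5, 8)
```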
In addition, TensorRT-LLM includes fully optimized, ready-to-run versions of many currently popular large language models,
including Meta's Llama 2, OpenAI's GPT-2 and GPT-3, Falcon, MosaicML's MPT, BLOOM, and more, over 10 models in all, each callable through the simple, easy-to-use TensorRT-LLM Python API.
These features can help developers build customized large language models faster and more accurately to meet the different needs of various industries.
In-flight batching
Large language models are extremely versatile nowadays.
A single model can be used simultaneously for multiple, seemingly unrelated tasks, from simple Q&A in a chatbot to document summarization to generating long blocks of code; workloads are highly dynamic, and output sizes can vary by orders of magnitude depending on the task.
This diversity makes it difficult to batch requests efficiently and execute them in parallel, because some requests finish much earlier than others.
To manage these dynamic loads, TensorRT-LLM includes an optimized scheduling technology called "In-flight batching".
Its core principle is that the entire text generation process of a large language model can be broken down into multiple execution iterations on the model.
With in-flight batching, the TensorRT-LLM runtime releases completed sequences from the batch immediately, rather than waiting for the entire batch to finish before processing the next set of requests.
While new requests are being executed, unfinished requests from the previous batch continue to be processed.
In-flight batching, together with additional kernel-level optimizations, improves GPU utilization and at least doubles throughput on real-world LLM request benchmarks on the H100.
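The toy simulation below illustrates the scheduling idea (it is not the actual TensorRT-LLM scheduler): requests with different output lengths share a batch, finished sequences are evicted at each iteration, and waiting requests are admitted immediately instead of stalling until the whole batch drains.

```python
from collections import deque

# Toy simulation of in-flight (continuous) batching, not the actual
# TensorRT-LLM scheduler. Each request needs a different number of
# decode iterations; finished sequences leave the batch immediately
# and waiting requests are admitted in their place.
def inflight_batching(requests, max_batch=4):
    waiting = deque(requests)                # (request_id, remaining_tokens)
    active = []
    step = 0
    while waiting or active:
        while waiting and len(active) < max_batch:
            active.append(list(waiting.popleft()))   # admit a new request
        for req in active:
            req[1] -= 1                      # one decode iteration each
        done = [rid for rid, left in active if left == 0]
        active = [req for req in active if req[1] > 0]
        if done:
            print(f"step {step}: finished {done}")
        step += 1

inflight_batching([("a", 2), ("b", 5), ("c", 3), ("d", 1), ("e", 4)])
```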
Using the H100 Transformer Engine with FP8
TensorRT-LLM also leverages the H100's Transformer Engine, which effectively reduces memory consumption and latency during large-model inference.
Because an LLM contains billions of weights and activations, it is usually trained and represented with FP16 or BF16 values, each occupying 16 bits of memory.
However, at inference time, most models can be efficiently represented with lower precision using quantization techniques, such as 8-bit or even 4-bit integers (INT8 or INT4).
Quantization is the process of lowering the numerical precision of model weights and activations without sacrificing accuracy. Lower precision means each parameter is smaller, so the model takes up less GPU memory.
This enables inference on larger models with the same hardware, while spending less time on memory operations during execution.
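As a generic illustration of why this helps (ordinary symmetric INT8 quantization, not TensorRT-LLM's actual quantizer), the sketch below converts an FP32 weight tensor to INT8 and back, shrinking storage by 4x at a small rounding cost.

```python
import numpy as np

# Toy symmetric per-tensor INT8 quantization of a weight matrix. This
# is a generic illustration, not TensorRT-LLM's actual quantizer.
w = np.random.randn(4096, 4096).astype(np.float32)

scale = np.abs(w).max() / 127.0              # map the largest value to 127
w_int8 = np.round(w / scale).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale

print("fp32 bytes:", w.nbytes, "int8 bytes:", w_int8.nbytes)  # 4x smaller
print("max abs error:", np.abs(w - w_dequant).max())
```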
Through the Transformer Engine, an H100 GPU running TensorRT-LLM lets users easily convert model weights to the new FP8 format, and the model is automatically compiled to take advantage of optimized FP8 kernels.
None of this requires writing any code! The FP8 data format introduced with the H100 enables developers to quantize their models and dramatically reduce memory consumption without reducing model accuracy.
Compared with other data formats such as INT8 or INT4, FP8 quantization retains higher accuracy while achieving the fastest performance, and it is the most convenient to implement.
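Some back-of-the-envelope arithmetic shows what is at stake. Using Llama 2 70B's parameter count as the example, weight storage alone scales directly with the bit width:

```python
# Rough weight-storage math at different precisions, using Llama 2 70B
# (70 billion parameters) as the example. Weights only; activations and
# the KV cache add more on top.
params = 70e9
for name, bits in [("FP16", 16), ("FP8", 8), ("INT4", 4)]:
    print(f"{name}: {params * bits / 8 / 1e9:.0f} GB")
# FP16: 140 GB, FP8: 70 GB, INT4: 35 GB
```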
How to obtain TensorRT-LLM
Although TensorRT-LLM has not been officially released yet, users can now have early access.
The application link is as follows:
https://developer.nvidia.com/tensorrt-llm-early-access/join
NVIDIA also said that TensorRT-LLM will soon be integrated into the NVIDIA NeMo framework.
NeMo is part of NVIDIA AI Enterprise, recently launched by NVIDIA, which provides enterprise customers with a secure, stable, and highly manageable enterprise-grade AI software platform.
Developers and researchers can access TensorRT-LLM through the NeMo framework on NVIDIA NGC or as a project on GitHub.
However, it should be noted that users must register for the NVIDIA Developer Program to apply for the early access version.
Hot discussion among netizens
Netizens on Reddit have launched a heated discussion on the launch of TensorRT-LLM.
One netizen commented that it is hard to imagine how much of an improvement hardware optimized specifically for LLMs will bring.
But some netizens believe the point of this release is to help Jensen Huang sell more H100s.
Others disagree, however: one argues that TensorRT also helps users who deploy Stable Diffusion locally, so anyone with an RTX GPU should be able to benefit from similar products in the future.
From a more macro perspective, a whole series of hardware-level optimizations may emerge for LLMs, and hardware designed specifically for LLMs may even appear to improve their performance. This has already happened with many popular applications, and LLMs will be no exception.