Torch Compile: 2x Faster Llama 3.2 with Low Effort


But the speedup depends on your GPU


Torch Compile (torch.compile) was first introduced with PyTorch 2.0, but it took several updates and optimizations before it could reliably support most large language models (LLMs).

When it comes to inference, torch.compile can genuinely speed up decoding with only a small increase in memory usage.

In this article, we’ll go over how torch.compile works and measure its impact on LLM inference performance. Using torch.compile only requires adding a single line of code, as shown in the sketch below. I tested it with Llama 3.2, and also tried it with bitsandbytes quantization, on two different GPUs: Google Colab’s L4 and A100.
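Here is a minimal sketch of what that looks like with the Hugging Face transformers API. The checkpoint name, dtype, and compile settings are my assumptions for illustration, not necessarily what the article’s notebook uses:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint: the article tests Llama 3.2, exact variant not specified here
model_name = "meta-llama/Llama-3.2-1B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,   # 16-bit weights so the model fits comfortably on an L4 or A100
    device_map="cuda",
)

# A static KV cache keeps tensor shapes fixed, which avoids recompiling at every decoding step
model.generation_config.cache_implementation = "static"

# The single added line: compile the forward pass into optimized kernels.
# The first call pays the JIT compilation cost; later calls reuse the compiled graph.
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

inputs = tokenizer("torch.compile speeds up decoding because", return_tensors="pt").to("cuda")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that the first generation call will be noticeably slower because of compilation; the speedup only shows up on subsequent calls.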

I’ve also created a notebook demonstrating how to use torch.compile and benchmarking its performance here:

Get the notebook (#120)

torch.compile accelerates models by converting standard PyTorch code into optimized machine code. This approach, called JIT (Just-In-Time) compilation, generates kernels tailored to the target hardware, so the code runs faster than regular eager-mode Python. It’s particularly useful for complex models, where even small per-operation speedups add up.
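A toy example (not from the article) illustrates the mechanism: torch.compile can fuse a chain of element-wise operations into a single GPU kernel, and the gain over eager mode varies with the GPU you run it on.

```python
import time
import torch

def pointwise_chain(x):
    # Several element-wise ops that the compiler can fuse into one GPU kernel
    return torch.sin(x) * torch.cos(x) + x * 0.5

compiled_fn = torch.compile(pointwise_chain)

x = torch.randn(4096, 4096, device="cuda")

compiled_fn(x)              # first call pays the one-time compilation cost
torch.cuda.synchronize()

def bench(fn, iters=200):
    start = time.perf_counter()
    for _ in range(iters):
        fn(x)
    torch.cuda.synchronize()
    return time.perf_counter() - start

print(f"eager:    {bench(pointwise_chain):.3f} s")
print(f"compiled: {bench(compiled_fn):.3f} s")
```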
