
2 posts tagged with "Optimization"


vLLM Optimization Techniques: 5 Practical Methods to Improve Performance

· 26 min read
Jaydev Tonde
Data Scientist

vLLM optimization techniques cover artwork with five performance methods highlighted

Running large language models efficiently can be challenging. You want good performance without overloading your servers or exceeding your budget. That's where vLLM comes in - but even this powerful inference engine can be made faster and smarter.

In this post, we'll explore five cutting-edge optimization techniques that can dramatically improve your vLLM performance:

  1. Prefix Caching - Stop recomputing what you've already computed
  2. FP8 KV-Cache - Store cached keys and values in 8-bit floats so more context fits in memory
  3. CPU Offloading - Spill part of the model into CPU memory when GPU memory runs short
  4. Disaggregated P/D - Run prefill and decode on separate workers for better scaling
  5. Zero Reload Sleep Mode - Keep your models warm without wasting resources

Each technique addresses a different bottleneck, and together they can significantly improve your inference pipeline performance. Let's explore how these optimizations work.
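To make the list concrete, here is a minimal sketch of how several of these options can be switched on through vLLM's offline LLM API. It assumes a recent vLLM release; exact parameter names and supported values vary by version and hardware, the model name is just an example, and disaggregated prefill-decode is a deployment-level pattern configured separately rather than a single engine flag.

```python
from vllm import LLM, SamplingParams

# Sketch, not a tuned configuration: several of the techniques above map to
# engine arguments on vLLM's offline LLM class. Availability depends on your
# vLLM version and GPU.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # example model, swap in your own
    enable_prefix_caching=True,   # 1. reuse KV cache for shared prompt prefixes
    kv_cache_dtype="fp8",         # 2. store the KV cache in 8-bit floats
    cpu_offload_gb=4,             # 3. spill up to ~4 GB of weights to CPU memory
    enable_sleep_mode=True,       # 5. allow sleeping/waking without a full reload
)

outputs = llm.generate(
    ["Explain prefix caching in one sentence."],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)

# Sleep mode frees GPU memory while keeping the engine process alive;
# wake_up() restores it without reloading weights from disk.
llm.sleep(level=1)
llm.wake_up()
```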

Disaggregated Prefill-Decode: The Architecture Behind Meta's LLM Serving

· 11 min read
Vishnu Subramanian
Founder @JarvisLabs.ai

Disaggregated Prefill-Decode Architecture

Why I'm Writing This Series

I've been deep in research mode lately, studying how to optimize LLM inference. The goal is to eventually integrate these techniques into JarvisLabs - making it easier for our users to serve models efficiently without having to become infrastructure experts themselves.

As I learn, I want to share what I find. This series is part research notes, part explainer. If you're trying to understand LLM serving optimization, hopefully my journey saves you some time.

This first post covers disaggregated prefill-decode - a pattern I discovered while reading through the vLLM router repository. Meta's team has been working closely with vLLM on this, and it solves a fundamental problem that's been on my mind.