📖 A curated list of awesome LLM inference papers with code: TensorRT-LLM, vLLM, streaming-llm, AWQ, SmoothQuant, WINT8/4, Continuous Batching, FlashAttention, PagedAttention, etc.
Updated May 27, 2024
A lightweight, fast inference server for Llama
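The PagedAttention technique referenced above manages the KV cache the way an OS manages virtual memory: the cache is split into fixed-size blocks, and each sequence keeps a block table mapping logical token positions to physical blocks, so memory is allocated on demand rather than reserved for the maximum sequence length. A minimal sketch of the bookkeeping (an illustrative simplification; `PagedKVCache`, `BLOCK_SIZE`, and all names are hypothetical, not vLLM's actual API):

```python
BLOCK_SIZE = 4  # tokens per KV-cache block (illustrative; vLLM defaults to 16)

class PagedKVCache:
    """Toy block-table bookkeeping in the spirit of PagedAttention."""

    def __init__(self, num_blocks):
        self.free_blocks = list(range(num_blocks))  # pool of physical block ids
        self.block_tables = {}  # seq_id -> list of physical block ids
        self.seq_lens = {}      # seq_id -> number of cached tokens

    def append_token(self, seq_id):
        """Reserve KV-cache space for one new token of a sequence."""
        n = self.seq_lens.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:  # last block is full (or no block yet)
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted")
            self.block_tables.setdefault(seq_id, []).append(self.free_blocks.pop())
        self.seq_lens[seq_id] = n + 1

    def slot(self, seq_id, pos):
        """Map a logical token position to (physical_block, offset)."""
        table = self.block_tables[seq_id]
        return table[pos // BLOCK_SIZE], pos % BLOCK_SIZE

    def free(self, seq_id):
        """Return a finished sequence's blocks to the pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.seq_lens.pop(seq_id, None)

cache = PagedKVCache(num_blocks=8)
for _ in range(6):
    cache.append_token("seq0")
print(len(cache.block_tables["seq0"]))  # 6 tokens need ceil(6/4) = 2 blocks
```

Because blocks are allocated one at a time, a sequence wastes at most one partially filled block instead of a whole preallocated buffer, which is what lets servers like vLLM batch many more concurrent sequences.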