SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
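As a toy illustration of what low-bit quantization means in practice, here is a minimal sketch of symmetric per-tensor INT8 quantization (an assumption-laden illustration, not the API of Intel Neural Compressor or any other toolkit):

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization (illustrative sketch only).
    Maps the float range [-max|x|, max|x|] onto the integers [-127, 127]."""
    scale = float(np.abs(x).max()) / 127.0 if x.size else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original floats.
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
# Rounding error is at most half a quantization step.
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

Real toolkits add per-channel scales, zero-points for asymmetric ranges, and calibration over representative data; this sketch only shows the core round-and-rescale idea.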
Must-read research papers and links to tools and datasets related to using machine learning for compilers and systems optimisation
Stretching GPU performance for GEMMs and tensor contractions.
Kernel Tuner
CLTune: An automatic OpenCL & CUDA kernel tuner
Benchmark scripts for TVM
Machine Learning Framework for Operating Systems - Brings ML to Linux kernel
Phoebe
Backoff uses an exponential backoff algorithm to back off between retries, with optional auto-tuning functionality.
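The exponential-backoff-with-jitter pattern behind such libraries can be sketched in a few lines (a generic illustration, not the Backoff package's actual API; `retry_with_backoff` and its parameters are hypothetical names):

```python
import random
import time

def retry_with_backoff(fn, max_retries=5, base_delay=0.1, max_delay=10.0):
    """Call fn, retrying on exceptions with exponential backoff and full jitter.
    Illustrative sketch; real libraries expose this as a decorator with
    configurable exception filters and giveup conditions."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: propagate the last error
            # Delay doubles each attempt, capped at max_delay,
            # with random jitter to avoid synchronized retry storms.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))

# Usage: a flaky call that fails twice, then succeeds.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = retry_with_backoff(flaky, base_delay=0.001)
```

Full jitter (sleeping a uniform random fraction of the capped delay) is a common choice because it spreads concurrent clients' retries apart instead of having them collide in lockstep.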
Collective Knowledge crowd-tuning extension that lets users crowdsource their experiments (using portable Collective Knowledge workflows), such as performance benchmarking, auto-tuning, and machine learning, across diverse Linux, Windows, macOS, and Android platforms provided by volunteers. Includes a demo of DNN crowd-benchmarking and crowd-tuning.
eBPF profiler for the JVM
A GPU benchmark suite for autotuners
A pattern-based algorithmic auto-tuner for graph processing on GPUs
A self-hosted language learning website
A Generic Distributed Auto-Tuning Infrastructure
Autotuner for Spark applications
This software package accompanies the paper "A Methodology for Comparing Auto-Tuning Optimization Algorithms" (https://doi.org/10.1016/j.future.2024.05.021), making the guidelines in the methodology easy to apply.
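The tools above all search a space of configurations for the fastest variant. A minimal random-search auto-tuner conveys the core loop (a generic sketch under assumed names — `autotune` and `toy_kernel` are hypothetical, and this is not the methodology from the cited paper, which compares far smarter search strategies):

```python
import itertools
import random
import time

def autotune(kernel, search_space, trials=8, seed=0):
    """Time the kernel under randomly sampled configurations and keep
    the fastest one. Real auto-tuners replace random sampling with
    model-based or evolutionary search and average repeated timings."""
    rng = random.Random(seed)
    configs = [dict(zip(search_space, values))
               for values in itertools.product(*search_space.values())]
    best_cfg, best_time = None, float("inf")
    for cfg in rng.sample(configs, min(trials, len(configs))):
        start = time.perf_counter()
        kernel(**cfg)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_cfg, best_time = cfg, elapsed
    return best_cfg, best_time

# Hypothetical tunable "kernel": chunked summation whose speed
# depends on the chosen block size.
def toy_kernel(block_size, unroll):
    total = 0
    for i in range(0, 10_000, block_size):
        total += sum(range(i, min(i + block_size, 10_000)))
    return total

space = {"block_size": [16, 64, 256, 1024], "unroll": [1, 2, 4]}
best, seconds = autotune(toy_kernel, space)
```

On a GPU the "kernel" would be a compiled variant with tunable tile sizes and unroll factors, and each timing would involve compilation, warm-up runs, and median-of-N measurement.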