# aimet

**Repository Path**: superpig2021/aimet

## Basic Information

- **Project Name**: aimet
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: BSD-3-Clause
- **Default Branch**: develop
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2023-12-22
- **Last Updated**: 2025-03-14

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# AI Model Efficiency Toolkit (AIMET)

AIMET is a library that provides advanced model quantization and compression techniques for trained neural network models. It provides features that have been proven to improve the run-time performance of deep learning neural network models, with lower compute and memory requirements and minimal impact on task accuracy.

AIMET is designed to work with [PyTorch](https://pytorch.org), [TensorFlow](https://tensorflow.org) and [ONNX](https://onnx.ai) models. We also host the [AIMET Model Zoo](https://github.com/quic/aimet-model-zoo) - a collection of popular neural network models optimized for 8-bit inference - and provide recipes for users to quantize floating-point models using AIMET.

## Table of Contents

- [Installation](#quick-installation)
- [Why AIMET?](#why-aimet)
- [Supported features](#supported-features)
- [What's New](#whats-new)
- [Results](#results)
- [Resources](#resources)
- [Contributions](#contributions)
- [Team](#team)
- [License](#license)

## Installation

To install the latest version of AIMET for supported frameworks and compute platforms, see [Install and run AIMET](https://quic.github.io/aimet-pages/releases/latest/install).

### Building from source

To build the latest AIMET code from source, see [Build, install and run AIMET from source in a *Docker* environment](./packaging/docker_install.md).

## Why AIMET?

* **Supports advanced quantization techniques**: Inference using integer runtimes is significantly faster than inference using floating-point runtimes. For example, models run 5x-15x faster on the Qualcomm Hexagon DSP than on the Qualcomm Kryo CPU, and 8-bit precision models have a 4x smaller footprint than 32-bit precision models. However, maintaining model accuracy when quantizing ML models is often challenging. AIMET addresses this using novel techniques like Data-Free Quantization, which provide state-of-the-art INT8 results on several popular models, as the sketch after this list illustrates.
* **Supports advanced model compression techniques** that enable models to run faster at inference time and require less memory.
* **AIMET is designed to automate optimization** of neural networks, avoiding time-consuming and tedious manual tweaking. AIMET also provides user-friendly APIs that allow users to make calls directly from their [TensorFlow](https://tensorflow.org) or [PyTorch](https://pytorch.org) pipelines.

For more details, please visit the [AIMET GitHub Pages](https://quic.github.io/aimet-pages/index.html).
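A typical post-training workflow looks roughly like the following minimal sketch, based on the `aimet_torch` quantization-simulation API. The toy model, random calibration data and bit-widths are illustrative placeholders, and exact signatures can vary between AIMET releases:

```python
import torch
from torch import nn
from aimet_torch.quantsim import QuantizationSimModel

# Toy network standing in for a real trained FP32 model.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(8, 10)).eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Insert simulated quantization ops: 8-bit weights and activations.
sim = QuantizationSimModel(model, dummy_input=dummy_input,
                           default_param_bw=8, default_output_bw=8)

def pass_calibration_data(sim_model, _):
    # Run a few batches of representative data so AIMET can compute
    # quantization encodings (scale/offset) for every quantizer.
    with torch.no_grad():
        for _ in range(4):
            sim_model(torch.randn(8, 3, 224, 224))

sim.compute_encodings(forward_pass_callback=pass_calibration_data,
                      forward_pass_callback_args=None)
```

After calibration, `sim.model` mimics on-target quantized behaviour, so evaluating it gives an estimate of INT8 accuracy without needing an actual device runtime.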
## Supported Features

### Quantization

* *Cross-Layer Equalization*: Equalizes weight tensors to reduce amplitude variation across channels (a usage sketch appears before the Results section below)
* *Bias Correction*: Corrects the shift in layer outputs introduced by quantization
* *Adaptive Rounding*: Learns the optimal rounding given unlabelled data
* *Quantization Simulation*: Simulates on-target quantized inference accuracy
* *Quantization-aware Training*: Uses quantization simulation to train the model further and improve accuracy

### Model Compression

* *Spatial SVD*: A tensor decomposition technique that splits a large layer into two smaller ones
* *Channel Pruning*: Removes redundant input channels from a layer and reconstructs the layer weights
* *Per-layer compression-ratio selection*: Automatically selects how much to compress each layer in the model

### Visualization

* *Weight ranges*: Inspect visually whether a model is a candidate for Cross-Layer Equalization, and the effect after applying the technique
* *Per-layer compression sensitivity*: Get visual feedback about how sensitive any given layer in the model is to compression

## What's New

Some recently added features include:

* Adaptive Rounding (AdaRound): Learn the optimal rounding given unlabelled data
* Quantization-aware Training (QAT) for recurrent models (including RNNs, LSTMs and GRUs)
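As an illustration of the Cross-Layer Equalization feature listed above, here is a minimal sketch using the `aimet_torch` API. The toy model is a placeholder for a trained network with conv/batch-norm patterns, which is what CLE operates on:

```python
import torch
from torch import nn
from aimet_torch.cross_layer_equalization import equalize_model

# Toy stand-in for a trained network; CLE acts on conv/BN layer pairs.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
).eval()

# One call performs batch-norm folding, cross-layer scaling and
# high-bias absorption, modifying the model in place.
equalize_model(model, input_shapes=(1, 3, 224, 224))
```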
## Results

AIMET can quantize an existing 32-bit floating-point model to an 8-bit fixed-point model without sacrificing much accuracy and without model fine-tuning.

| Models | FP32 | INT8 Simulation |
|---|---|---|
| MobileNet v2 (top1) | 71.72% | 71.08% |
| ResNet-50 (top1) | 76.05% | 75.45% |
| DeepLab v3 (mIOU) | 72.65% | 71.91% |
For an example ADAS object detection model, which was particularly challenging to quantize to 8-bit precision, AdaRound can recover the accuracy to within 1% of the FP32 accuracy.
| Configuration | mAP (Mean Average Precision) |
|---|---|
| FP32 | 82.20% |
| Nearest Rounding (INT8 weights, INT8 activations) | 49.85% |
| AdaRound (INT8 weights, INT8 activations) | 81.21% |
For some models like the DeepLabv3 semantic segmentation model, AdaRound can even quantize the model weights to 4-bit precision without a significant drop in accuracy.
| Configuration | mIOU (Mean Intersection over Union) |
|---|---|
| FP32 | 72.94% |
| Nearest Rounding (INT4 weights, INT8 activations) | 6.09% |
| AdaRound (INT4 weights, INT8 activations) | 70.86% |
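As a rough illustration of how AdaRound is invoked, here is a minimal sketch using the `aimet_torch` AdaRound API with 4-bit weights, matching the table above. The toy model and random unlabelled data are placeholders, and signatures may differ across AIMET releases:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset
from aimet_torch.adaround.adaround_weight import Adaround, AdaroundParameters

# Toy stand-in for a trained FP32 model.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(8, 10)).eval()
dummy_input = torch.randn(1, 3, 224, 224)

class RandomImages(Dataset):
    """Unlabelled random images standing in for real calibration data;
    AdaRound needs only inputs, not labels."""
    def __len__(self):
        return 32
    def __getitem__(self, idx):
        return torch.randn(3, 224, 224)

data_loader = DataLoader(RandomImages(), batch_size=8)
params = AdaroundParameters(data_loader=data_loader, num_batches=4)

# Learns per-weight rounding (up vs. down) layer by layer and writes
# the resulting parameter encodings to the given path.
adarounded_model = Adaround.apply_adaround(model, dummy_input, params,
                                           path='./adaround_out',
                                           filename_prefix='model',
                                           default_param_bw=4)
```

In a full workflow the AdaRounded model is then wrapped in a `QuantizationSimModel` and the saved parameter encodings are frozen before simulating or exporting; see the AIMET documentation for the complete recipe.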
AIMET supports quantization simulation and quantization-aware training (QAT) for recurrent models (RNN, LSTM, GRU). Using the QAT feature in AIMET, a DeepSpeech2 model with bi-directional LSTMs can be quantized to 8-bit precision with a minimal drop in accuracy.
| DeepSpeech2 (using bi-directional LSTMs) | Word Error Rate |
|---|---|
| FP32 | 9.92% |
| INT8 | 10.22% |
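In the PyTorch flow, QAT amounts to fine-tuning the quantization-simulation model with an ordinary training loop. The sketch below uses a toy convolutional model, synthetic data and arbitrary hyperparameters purely for illustration; the same pattern applies to a recurrent network:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from aimet_torch.quantsim import QuantizationSimModel

# Toy classifier standing in for a real trained network.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(8, 10))
dummy_input = torch.randn(1, 3, 32, 32)

# Create the simulation model and compute initial encodings.
sim = QuantizationSimModel(model, dummy_input=dummy_input,
                           default_param_bw=8, default_output_bw=8)
sim.compute_encodings(lambda m, _: m(dummy_input),
                      forward_pass_callback_args=None)

# Synthetic labelled data standing in for the real training set.
train_loader = DataLoader(TensorDataset(torch.randn(64, 3, 32, 32),
                                        torch.randint(0, 10, (64,))),
                          batch_size=16)

# QAT is ordinary PyTorch training on sim.model: the inserted
# quantization ops stay in the graph during forward passes, so the
# weights learn to compensate for quantization noise.
optimizer = torch.optim.SGD(sim.model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
sim.model.train()
for inputs, labels in train_loader:
    optimizer.zero_grad()
    loss_fn(sim.model(inputs), labels).backward()
    optimizer.step()
```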
AIMET can also significantly compress models. For popular models such as ResNet-50 and ResNet-18, compression with spatial SVD plus channel pruning achieves a 50% MAC (multiply-accumulate) reduction while retaining accuracy within approximately 1% of the original uncompressed model.
| Models | Uncompressed model | 50% Compressed model |
|---|---|---|
| ResNet-18 (top1) | 69.76% | 68.56% |
| ResNet-50 (top1) | 76.05% | 75.75% |
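As a rough sketch of the compression API, the snippet below runs automatic spatial SVD compression toward a 50% MAC target with `aimet_torch`. The evaluation callback is a placeholder you would replace with a real validation routine, and the parameter classes and signatures follow published `aimet_torch` examples but may differ across releases:

```python
from decimal import Decimal
import torch
from torchvision.models import resnet18
from aimet_common.defs import (CompressionScheme, CostMetric,
                               GreedySelectionParameters)
from aimet_torch.defs import SpatialSvdParameters
from aimet_torch.compress import ModelCompressor

model = resnet18(pretrained=True).eval()

def eval_callback(model, iterations, use_cuda=False):
    """Placeholder: return top-1 accuracy of `model` on a validation set."""
    return 0.0

# Greedy per-layer ratio selection toward an overall 50% MAC target.
greedy_params = GreedySelectionParameters(target_comp_ratio=Decimal('0.5'),
                                          num_comp_ratio_candidates=10)
auto_params = SpatialSvdParameters.AutoModeParams(
    greedy_params, modules_to_ignore=[model.conv1])
params = SpatialSvdParameters(SpatialSvdParameters.Mode.auto, auto_params)

compressed_model, stats = ModelCompressor.compress_model(
    model,
    eval_callback=eval_callback,
    eval_iterations=10,
    input_shape=(1, 3, 224, 224),
    compress_scheme=CompressionScheme.spatial_svd,
    cost_metric=CostMetric.mac,
    parameters=params)
print(stats)  # per-layer ratios and overall compression statistics
```

The channel-pruning scheme used in the results above follows the same `compress_model` pattern with its own parameter class; the two techniques are applied one after the other to reach the reported 50% MAC reduction.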