nncf 2.18.0
Neural Networks Compression Framework
Requires Python >=3.9
Dependencies
- jsonschema >=3.2.0
- natsort >=7.1.0
- networkx >=2.6,<3.5.0
- ninja >=1.10.0.post2,<1.14
- numpy >=1.24.0,<2.3.0
- openvino-telemetry >=2023.2.0
- packaging >=20.0
- pandas >=1.1.5,<2.4
- psutil
- pydot >=1.4.1,<=3.0.4
- pymoo >=0.6.0.1
- rich >=13.5.2
- safetensors >=0.4.1
- scikit-learn >=0.24.0
- scipy >=1.3.2
- tabulate >=0.9.0
- kaleido >=0.2.1; extra == "plots"
- matplotlib >=3.3.4; extra == "plots"
- pillow >=9.0.0; extra == "plots"
- plotly-express >=0.4.1; extra == "plots"
Neural Network Compression Framework (NNCF)
Neural Network Compression Framework (NNCF) provides a suite of post-training and training-time algorithms for optimizing inference of neural networks in OpenVINO™ with a minimal accuracy drop.
NNCF is designed to work with models from PyTorch, TorchFX, TensorFlow, ONNX and OpenVINO™.
NNCF provides samples that demonstrate the usage of compression algorithms for different use cases and models. See compression results achievable with the NNCF-powered samples on the NNCF Model Zoo page.
The framework is organized as a Python package that can be built and used in standalone mode. Its architecture is unified to make it easy to add different compression algorithms for both the PyTorch and TensorFlow deep learning frameworks.
For more information about NNCF, see:
- NNCF repository
- User documentation
- NNCF API documentation
- Usage examples
- Notebook tutorials
- NNCF Compressed Model Zoo
Key Features
Post-Training Compression Algorithms
Compression algorithm | OpenVINO | PyTorch | TorchFX | TensorFlow | ONNX |
---|---|---|---|---|---|
Post-Training Quantization | Supported | Supported | Experimental | Supported | Supported |
Weights Compression | Supported | Supported | Experimental | Not supported | Not supported |
Activation Sparsity | Not supported | Experimental | Not supported | Not supported | Not supported |
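As a quick illustration of the post-training path, the sketch below applies 8-bit post-training quantization to an OpenVINO model with nncf.quantize. The model path, the (1, 3, 224, 224) input shape, and the random calibration data are placeholders; in practice you would feed a few hundred real samples through nncf.Dataset.

```python
import numpy as np
import nncf
import openvino as ov

# Read a floating-point OpenVINO IR model ("model.xml" is a placeholder path).
core = ov.Core()
model = core.read_model("model.xml")

# A small calibration set; random tensors stand in for real samples here,
# and the input shape is an assumption about the model.
calibration_data = [
    np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(10)
]
calibration_dataset = nncf.Dataset(calibration_data)

# 8-bit post-training quantization with default settings.
quantized_model = nncf.quantize(model, calibration_dataset)
ov.save_model(quantized_model, "model_int8.xml")
```

For weight-only compression of large models, nncf.compress_weights(model) provides an analogous one-call entry point.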
Training-Time Compression Algorithms
Compression algorithm | PyTorch | TensorFlow |
---|---|---|
Quantization Aware Training | Supported | Supported |
Weight-Only Quantization Aware Training with LoRA and NLS | Supported | Not supported |
Mixed-Precision Quantization | Supported | Not supported |
Sparsity | Supported | Supported |
Filter pruning | Supported | Supported |
Movement pruning | Experimental | Not supported |
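For the training-time path, a common flow is to insert quantization operations first and then fine-tune the model with an ordinary PyTorch loop so the fake-quantize operations are trained through. The toy model, random inputs, and all-zero labels below are placeholders; this is a minimal sketch rather than a full recipe.

```python
import nncf
import torch
import torch.nn as nn

# A toy classifier stands in for a real network (placeholder).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 30 * 30, 10),
)
samples = [torch.randn(1, 3, 32, 32) for _ in range(8)]

# Insert fake-quantization operations via nncf.quantize...
model.eval()
quantized_model = nncf.quantize(model, nncf.Dataset(samples))

# ...then fine-tune the quantized model with a regular training loop (QAT).
quantized_model.train()
optimizer = torch.optim.SGD(quantized_model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for x in samples:
    optimizer.zero_grad()
    logits = quantized_model(x)
    # All-zero labels are a placeholder for real targets.
    loss = loss_fn(logits, torch.zeros(logits.shape[0], dtype=torch.long))
    loss.backward()
    optimizer.step()
```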
- Automatic, configurable model graph transformation to obtain the compressed model.
NOTE: Limited support for TensorFlow models. Only models created using Sequential or Keras Functional API are supported.
- Common interface for compression methods.
- GPU-accelerated layers for faster compressed model fine-tuning.
- Distributed training support.
- Git patch for a prominent third-party repository (huggingface-transformers) demonstrating the process of integrating NNCF into custom training pipelines.
- Exporting PyTorch compressed models to ONNX checkpoints and TensorFlow compressed models to SavedModel or Frozen Graph format, ready to use with the OpenVINO™ toolkit.
- Support for accuracy-aware model training pipelines via Adaptive Compression Level Training and Early Exit Training.
Installation Guide
For detailed installation instructions, refer to the Installation guide.
NNCF can be installed as a regular PyPI package via pip:
pip install nncf
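The dependency list above also defines a "plots" extra for the optional plotting packages (kaleido, matplotlib, pillow, plotly-express), which can be installed with:

pip install nncf[plots]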
NNCF is also available via conda:
conda install -c conda-forge nncf
NNCF's system requirements depend on the backend used. The system requirements for each backend and the matrix of corresponding versions can be found in installation.md.
Third-party Repository Integration
NNCF may be easily integrated into training/evaluation pipelines of third-party repositories.
Used by
- NNCF is used as a compression backend within the renowned transformers repository in HuggingFace Optimum Intel. For instance, the command below exports the Llama-3.2-3B-Instruct model to OpenVINO format with INT4-quantized weights:

  optimum-cli export openvino -m meta-llama/Llama-3.2-3B-Instruct --weight-format int4 ./Llama-3.2-3B-Instruct-int4

- NNCF is integrated into the Intel OpenVINO export pipeline, enabling quantization for the exported models.
- NNCF is used as the primary quantization framework for the ExecuTorch OpenVINO integration.
- NNCF is used as the primary quantization framework for the torch.compile OpenVINO integration.
- NNCF is integrated into OpenVINO Training Extensions as a model optimization backend. You can train, optimize, and export new models based on the available model templates as well as run the exported models with OpenVINO.
NNCF Compressed Model Zoo
The list of models and their compression results can be found on our NNCF Model Zoo page.
Citing
@article{kozlov2020neural,
title = {Neural network compression framework for fast model inference},
author = {Kozlov, Alexander and Lazarevich, Ivan and Shamporov, Vasily and Lyalyushkin, Nikolay and Gorbachev, Yury},
journal = {arXiv preprint arXiv:2002.08679},
year = {2020}
}
Telemetry
NNCF as part of the OpenVINO™ toolkit collects anonymous usage data for the purpose of improving OpenVINO™ tools. You can opt-out at any time by running the following command in the Python environment where you have NNCF installed:
opt_in_out --opt_out
More information is available on the OpenVINO telemetry page.