nncf 2.14.1: Neural Networks Compression Framework

pip install nncf

Requires Python: >=3.9

Dependencies:
- jsonschema >=3.2.0
- jstyleson >=0.0.2
- natsort >=7.1.0
- networkx >=2.6, <=3.3
- ninja >=1.10.0.post2, <1.12
- numpy >=1.19.1, <2.2.0
- openvino-telemetry >=2023.2.0
- packaging >=20.0
- pandas >=1.1.5, <2.3
- psutil
- pydot >=1.4.1, <3.0.0
- pymoo >=0.6.0.1
- rich >=13.5.2
- scikit-learn >=0.24.0
- scipy >=1.3.2
- tabulate >=0.9.0
- tqdm >=4.54.1
- safetensors >=0.4.1

Optional dependencies (extra "plots"):
- kaleido >=0.2.1
- matplotlib >=3.3.4, <3.6
- pillow >=9.0.0
- plotly-express >=0.4.1
Neural Network Compression Framework (NNCF)
Neural Network Compression Framework (NNCF) provides a suite of post-training and training-time algorithms for optimizing inference of neural networks in OpenVINO™ with a minimal accuracy drop.
NNCF is designed to work with models from PyTorch, TensorFlow, ONNX and OpenVINO™.
The framework is organized as a Python package that can be built and used as a standalone tool. Its architecture is unified to make adding different compression algorithms easy for both PyTorch and TensorFlow.
NNCF provides samples that demonstrate the usage of compression algorithms for different use cases and models. See compression results achievable with the NNCF-powered samples on the NNCF Model Zoo page.
Key Features
Post-Training Compression Algorithms
| Compression algorithm | OpenVINO | PyTorch | TensorFlow | ONNX |
|---|---|---|---|---|
| Post-Training Quantization | Supported | Supported | Supported | Supported |
| Weight Compression | Supported | Supported | Not supported | Not supported |
| Activation Sparsity | Not supported | Experimental | Not supported | Not supported |
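As context for the table above, the core arithmetic behind post-training quantization can be sketched in a few lines: map a floating-point value range onto an 8-bit integer grid via a scale and zero-point. This is a conceptual illustration of the affine quantization scheme, not the NNCF API.

```python
# Conceptual sketch of affine (asymmetric) 8-bit quantization, the kind of
# per-tensor mapping post-training quantization computes from calibration
# statistics. Illustration only; function names are not NNCF API.

def quantize_params(values, num_bits=8):
    """Compute scale and zero-point mapping [min, max] onto the int range."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # range must contain zero
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    return [min(max(round(v / scale) + zero_point, qmin), qmax) for v in values]

def dequantize(qvalues, scale, zero_point):
    return [(q - zero_point) * scale for q in qvalues]

weights = [-1.2, -0.3, 0.0, 0.4, 1.5]
scale, zp = quantize_params(weights)
q = quantize(weights, scale, zp)          # integers in [0, 255]
restored = dequantize(q, scale, zp)       # close to the original floats
```

The round-trip error of each value stays within one quantization step (the scale), which is why 8-bit quantization typically costs little accuracy.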
Training-Time Compression Algorithms
| Compression algorithm | PyTorch | TensorFlow |
|---|---|
| Quantization Aware Training | Supported | Supported |
| Mixed-Precision Quantization | Supported | Not supported |
| Sparsity | Supported | Supported |
| Filter pruning | Supported | Supported |
| Movement pruning | Experimental | Not supported |
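The sparsity and filter-pruning rows above are commonly based on a magnitude criterion: the weights (or whole filters) with the smallest magnitudes are zeroed out. A minimal sketch of that idea in plain Python (the function name is hypothetical, not an NNCF API):

```python
# Magnitude-based pruning sketch: zero out the fraction of weights with the
# smallest absolute values. Illustration of the criterion only.

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-|w| fraction set to zero."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.05, -0.8, 0.02, 1.3, -0.4, 0.01]
pruned = magnitude_prune(w, 0.5)  # half of the weights become zero
```

In training-time pruning the mask is typically recomputed or relaxed during fine-tuning so the remaining weights can adapt; filter pruning applies the same criterion to whole output channels so the zeros can be physically removed from the model.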
- Automatic, configurable model graph transformation to obtain the compressed model.
NOTE: Limited support for TensorFlow models. Only models created using Sequential or Keras Functional API are supported.
- Common interface for compression methods.
- GPU-accelerated layers for faster compressed model fine-tuning.
- Distributed training support.
- A Git patch for a prominent third-party repository (huggingface-transformers) demonstrating the process of integrating NNCF into custom training pipelines.
- Seamless combination of pruning, sparsity, and quantization algorithms. Refer to optimum-intel for examples of joint (movement) pruning, quantization, and distillation (JPQD), end-to-end from NNCF optimization to compressed OpenVINO IR.
- Exporting PyTorch compressed models to ONNX checkpoints and TensorFlow compressed models to SavedModel or Frozen Graph format, ready to use with the OpenVINO™ toolkit.
- Support for Accuracy-Aware model training pipelines via the Adaptive Compression Level Training and Early Exit Training.
Installation Guide
NNCF can be installed as a regular PyPI package:
pip install nncf
For detailed installation instructions, refer to the Installation guide.
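Once installed, a typical post-training quantization flow builds on the public `nncf.Dataset` and `nncf.quantize` entry points. The helper below is a sketch: the default `transform_fn`, which unpacks an `(inputs, labels)` batch, is an assumption about your dataloader's format and will usually need adapting.

```python
# Sketch of the post-training quantization workflow with the public
# nncf.quantize / nncf.Dataset API. The transform_fn default assumes each
# batch is an (inputs, labels) tuple -- adapt it to your dataloader.

def quantize_model(model, data_loader, transform_fn=lambda batch: batch[0]):
    import nncf  # imported lazily so the helper can be defined without NNCF

    # nncf.Dataset wraps an iterable of calibration samples; transform_fn
    # converts each batch into the input the model expects.
    calibration_dataset = nncf.Dataset(data_loader, transform_fn)
    return nncf.quantize(model, calibration_dataset)
```

The same `nncf.quantize` call works across the supported backends (OpenVINO, PyTorch, TensorFlow, ONNX); the model type is detected from the object you pass in.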
System Requirements
- Ubuntu 18.04 or later (64-bit)
- Python 3.9 or later
- Supported frameworks:
- PyTorch >=2.2, <2.5
- TensorFlow >=2.8.4, <=2.15.1
- ONNX ==1.16.0
- OpenVINO >=2022.3.0
Third-party Repository Integration
NNCF can be easily integrated into the training and evaluation pipelines of third-party repositories.
- NNCF is integrated into OpenVINO Training Extensions as a model optimization backend. You can train, optimize, and export new models based on the available model templates, as well as run the exported models with OpenVINO.
- NNCF is used as a compression backend within the renowned transformers repository in HuggingFace Optimum Intel.
NNCF Compressed Model Zoo
A list of models and their compression results can be found on the NNCF Model Zoo page.
Citing
@article{kozlov2020neural,
title = {Neural network compression framework for fast model inference},
author = {Kozlov, Alexander and Lazarevich, Ivan and Shamporov, Vasily and Lyalyushkin, Nikolay and Gorbachev, Yury},
journal = {arXiv preprint arXiv:2002.08679},
year = {2020}
}
Telemetry
NNCF as part of the OpenVINO™ toolkit collects anonymous usage data for the purpose of improving OpenVINO™ tools. You can opt-out at any time by running the following command in the Python environment where you have NNCF installed:
opt_in_out --opt_out
More information is available on the OpenVINO telemetry page.