adapters 1.0.1
A Unified Library for Parameter-Efficient and Modular Transfer Learning
pip install adapters
Requires Python: >=3.8.0
Dependencies
- transformers ~=4.45.2
- pytest <8.0.0,>=7.2.0; extra == "dev"
- pytest-rich; extra == "dev"
- pytest-xdist; extra == "dev"
- timeout-decorator; extra == "dev"
- parameterized; extra == "dev"
- psutil; extra == "dev"
- datasets !=2.5.0; extra == "dev"
- dill <0.3.5; extra == "dev"
- evaluate >=0.2.0; extra == "dev"
- pytest-timeout; extra == "dev"
- black ~=24.4.0; extra == "dev"
- sacrebleu <2.0.0,>=1.4.12; extra == "dev"
- rouge-score !=0.0.7,!=0.0.8,!=0.1,!=0.1.1; extra == "dev"
- nltk; extra == "dev"
- GitPython <3.1.19; extra == "dev"
- sacremoses; extra == "dev"
- rjieba; extra == "dev"
- beautifulsoup4; extra == "dev"
- pillow; extra == "dev"
- accelerate >=0.26.0; extra == "dev"
- torch; extra == "dev"
- sentencepiece !=0.1.92,>=0.1.91; extra == "dev"
- protobuf; extra == "dev"
- isort >=5.5.4; extra == "dev"
- flake8 >=3.8.3; extra == "dev"
- docutils ==0.16.0; extra == "dev"
- Jinja2 ==2.11.3; extra == "dev"
- markupsafe ==2.0.1; extra == "dev"
- myst-parser; extra == "dev"
- sphinx ==5.0.2; extra == "dev"
- sphinx-markdown-tables ==0.0.17; extra == "dev"
- sphinx-rtd-theme ==2.0.0; extra == "dev"
- sphinx-copybutton ==0.5.2; extra == "dev"
- sphinxext-opengraph ==0.4.1; extra == "dev"
- sphinx-intl ==2.1.0; extra == "dev"
- sphinx-multiversion ==0.2.4; extra == "dev"
- scikit-learn; extra == "dev"
- docutils ==0.16.0; extra == "docs"
- Jinja2 ==2.11.3; extra == "docs"
- markupsafe ==2.0.1; extra == "docs"
- myst-parser; extra == "docs"
- sphinx ==5.0.2; extra == "docs"
- sphinx-markdown-tables ==0.0.17; extra == "docs"
- sphinx-rtd-theme ==2.0.0; extra == "docs"
- sphinx-copybutton ==0.5.2; extra == "docs"
- sphinxext-opengraph ==0.4.1; extra == "docs"
- sphinx-intl ==2.1.0; extra == "docs"
- sphinx-multiversion ==0.2.4; extra == "docs"
- black ~=24.4.0; extra == "quality"
- datasets !=2.5.0; extra == "quality"
- isort >=5.5.4; extra == "quality"
- flake8 >=3.8.3; extra == "quality"
- GitPython <3.1.19; extra == "quality"
- sentencepiece !=0.1.92,>=0.1.91; extra == "sentencepiece"
- protobuf; extra == "sentencepiece"
- scikit-learn; extra == "sklearn"
- pytest <8.0.0,>=7.2.0; extra == "testing"
- pytest-rich; extra == "testing"
- pytest-xdist; extra == "testing"
- timeout-decorator; extra == "testing"
- parameterized; extra == "testing"
- psutil; extra == "testing"
- datasets !=2.5.0; extra == "testing"
- dill <0.3.5; extra == "testing"
- evaluate >=0.2.0; extra == "testing"
- pytest-timeout; extra == "testing"
- black ~=24.4.0; extra == "testing"
- sacrebleu <2.0.0,>=1.4.12; extra == "testing"
- rouge-score !=0.0.7,!=0.0.8,!=0.1,!=0.1.1; extra == "testing"
- nltk; extra == "testing"
- GitPython <3.1.19; extra == "testing"
- sacremoses; extra == "testing"
- rjieba; extra == "testing"
- beautifulsoup4; extra == "testing"
- pillow; extra == "testing"
- accelerate >=0.26.0; extra == "testing"
- torch; extra == "torch"
- accelerate >=0.26.0; extra == "torch"
Adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
Website • Documentation • Paper
Adapters is an add-on library to HuggingFace's Transformers, integrating 10+ adapter methods into 20+ state-of-the-art Transformer models with minimal coding overhead for training and inference.
Adapters provides a unified interface for efficient fine-tuning and modular transfer learning. It supports a diverse range of features, such as full-precision or quantized training (e.g. Q-LoRA, Q-Bottleneck Adapters, or Q-PrefixTuning), adapter merging via task arithmetic, and the composition of multiple adapters via composition blocks, enabling advanced research in parameter-efficient transfer learning for NLP tasks.
Note: The Adapters library has replaced the adapter-transformers package. All previously trained adapters are compatible with the new library. For transitioning, please read: https://docs.adapterhub.ml/transitioning.html.
Installation
adapters currently supports Python 3.8+ and PyTorch 1.10+.
After installing PyTorch, you can install adapters from PyPI ...
pip install -U adapters
... or from source by cloning the repository:
git clone https://github.com/adapter-hub/adapters.git
cd adapters
pip install .
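To verify the installation, a minimal check can be run (this sketch assumes the package exposes its version as adapters.__version__):
import adapters
print(adapters.__version__)  # prints the installed version, e.g. "1.0.1"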
Quick Tour
Load pre-trained adapters:
from adapters import AutoAdapterModel
from transformers import AutoTokenizer
model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf", set_active=True)
print(model(**tokenizer("This works great!", return_tensors="pt")).logits)
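The returned logits can be turned into a class prediction with standard PyTorch operations, for example (a minimal sketch continuing the snippet above):
import torch

outputs = model(**tokenizer("This works great!", return_tensors="pt"))
# Pick the class with the highest score from the adapter's classification head
predicted_class = torch.argmax(outputs.logits, dim=-1).item()
print(predicted_class)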
Adapt existing model setups:
import adapters
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("t5-base")
adapters.init(model)
model.add_adapter("my_lora_adapter", config="lora")
model.train_adapter("my_lora_adapter")
# Your regular training loop...
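After training, only the adapter weights need to be stored; a rough sketch (the output directory name is arbitrary):
# Save just the adapter weights (typically a few MB) instead of the full model
model.save_adapter("./my_lora_adapter", "my_lora_adapter")
# A fresh model (after adapters.init) can load them back via:
#   model.load_adapter("./my_lora_adapter", set_active=True)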
Flexibly configure adapters:
from adapters import ConfigUnion, PrefixTuningConfig, ParBnConfig, AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/deberta-v3-base")
adapter_config = ConfigUnion(
    PrefixTuningConfig(prefix_length=20),
    ParBnConfig(reduction_factor=4),
)
model.add_adapter("my_adapter", config=adapter_config, set_active=True)
Easily compose adapters in a single model:
from adapters import AdapterSetup, AutoAdapterModel
import adapters.composition as ac
from transformers import AutoTokenizer

model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
qc = model.load_adapter("AdapterHub/roberta-base-pf-trec")
sent = model.load_adapter("AdapterHub/roberta-base-pf-imdb")
with AdapterSetup(ac.Parallel(qc, sent)):
    print(model(**tokenizer("What is AdapterHub?", return_tensors="pt")))
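Beyond runtime composition, adapters trained for different tasks can also be merged into a single new adapter via task arithmetic. The following is only a sketch, assuming both adapters share the same architecture and configuration; see the merging documentation for the exact options of average_adapter():
# Hypothetical example: merge two bottleneck adapters of identical configuration
# into one adapter whose parameters are a weighted average of both
model.add_adapter("task_a", config="seq_bn")
model.add_adapter("task_b", config="seq_bn")
model.average_adapter("merged", ["task_a", "task_b"], weights=[0.6, 0.4])
model.set_active_adapters("merged")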
Useful Resources
HuggingFace's great documentation on getting started with Transformers can be found here. adapters is fully compatible with Transformers.
To get started with adapters, refer to these locations:
- Colab notebook tutorials, a series of notebooks providing an introduction to all the main concepts of (adapter-)transformers and AdapterHub
- https://docs.adapterhub.ml, our documentation on training and using adapters with adapters
- https://adapterhub.ml to explore available pre-trained adapter modules and share your own adapters
- Examples folder of this repository containing HuggingFace's example training scripts, many adapted for training adapters
Implemented Methods
Currently, adapters integrates all architectures and methods listed below:
Method | Paper(s) | Quick Links |
---|---|---|
Bottleneck adapters | Houlsby et al. (2019) Bapna and Firat (2019) | Quickstart, Notebook |
AdapterFusion | Pfeiffer et al. (2021) | Docs: Training, Notebook |
MAD-X, Invertible adapters | Pfeiffer et al. (2020) | Notebook |
AdapterDrop | Rücklé et al. (2021) | Notebook |
MAD-X 2.0, Embedding training | Pfeiffer et al. (2021) | Docs: Embeddings, Notebook |
Prefix Tuning | Li and Liang (2021) | Docs |
Parallel adapters, Mix-and-Match adapters | He et al. (2021) | Docs |
Compacter | Mahabadi et al. (2021) | Docs |
LoRA | Hu et al. (2021) | Docs |
(IA)^3 | Liu et al. (2022) | Docs |
UniPELT | Mao et al. (2022) | Docs |
Prompt Tuning | Lester et al. (2021) | Docs |
QLoRA | Dettmers et al. (2023) | Notebook |
ReFT | Wu et al. (2024) | Docs |
Adapter Task Arithmetics | Chronopoulou et al. (2023) Zhang et al. (2023) | Docs, Notebook |
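Each method in the table is selected through its configuration class (or an equivalent shorthand string) when adding an adapter. A brief sketch with a few of the mappings (class and string names as used by recent releases; check the documentation for your installed version):
from adapters import AutoAdapterModel, SeqBnConfig, PrefixTuningConfig, LoRAConfig, IA3Config, CompacterConfig

model = AutoAdapterModel.from_pretrained("roberta-base")
model.add_adapter("bottleneck", config=SeqBnConfig())      # Bottleneck adapters
model.add_adapter("prefix", config=PrefixTuningConfig())   # Prefix Tuning
model.add_adapter("lora", config=LoRAConfig(r=8))          # LoRA
model.add_adapter("ia3", config=IA3Config())               # (IA)^3
model.add_adapter("compacter", config=CompacterConfig())   # Compacter
# Shorthand strings map to the same configs, e.g. config="lora" or config="seq_bn"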
Supported Models
We currently support the PyTorch versions of all models listed on the Model Overview page in our documentation.
Developing & Contributing
To get started with developing on Adapters yourself and learn more about ways to contribute, please see https://docs.adapterhub.ml/contributing.html.
Citation
If you use Adapters in your work, please consider citing our library paper: Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
@inproceedings{poth-etal-2023-adapters,
title = "Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning",
author = {Poth, Clifton and
Sterz, Hannah and
Paul, Indraneil and
Purkayastha, Sukannya and
Engl{\"a}nder, Leon and
Imhof, Timo and
Vuli{\'c}, Ivan and
Ruder, Sebastian and
Gurevych, Iryna and
Pfeiffer, Jonas},
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.13",
pages = "149--160",
}
Alternatively, for the predecessor adapter-transformers, the Hub infrastructure, and adapters uploaded by the AdapterHub team, please consider citing our initial paper: AdapterHub: A Framework for Adapting Transformers
@inproceedings{pfeiffer2020AdapterHub,
title={AdapterHub: A Framework for Adapting Transformers},
author={Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Poth, Clifton and
Kamath, Aishwarya and
Vuli{\'c}, Ivan and
Ruder, Sebastian and
Cho, Kyunghyun and
Gurevych, Iryna},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
pages={46--54},
year={2020}
}