adapters 1.0.1

- **Summary:** A Unified Library for Parameter-Efficient and Modular Transfer Learning
- **Home page:** https://github.com/adapter-hub/adapters
- **Author:** The AdapterHub team and community contributors
- **Requires Python:** >=3.8.0
- **License:** Apache
- **Keywords:** NLP, deep learning, transformer, pytorch, BERT, adapters, PEFT, LoRA
- **Uploaded:** 2024-11-02 18:46:40
            <!---
Copyright 2020 The AdapterHub Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

<p align="center">
<img style="vertical-align:middle" src="https://raw.githubusercontent.com/Adapter-Hub/adapters/main/docs/img/adapter-bert.png" width="80" />
</p>
<h1 align="center">
<span><i>Adapters</i></span>
</h1>

<h3 align="center">
A Unified Library for Parameter-Efficient and Modular Transfer Learning
</h3>
<h3 align="center">
    <a href="https://adapterhub.ml">Website</a>
    &nbsp; • &nbsp;
    <a href="https://docs.adapterhub.ml">Documentation</a>
    &nbsp; • &nbsp;
    <a href="https://arxiv.org/abs/2311.11077">Paper</a>
</h3>

![Tests](https://github.com/Adapter-Hub/adapters/workflows/Tests/badge.svg?branch=adapters)
[![GitHub](https://img.shields.io/github/license/adapter-hub/adapters.svg?color=blue)](https://github.com/adapter-hub/adapters/blob/main/LICENSE)
[![PyPI](https://img.shields.io/pypi/v/adapters)](https://pypi.org/project/adapters/)

_Adapters_ is an add-on library to [HuggingFace's Transformers](https://github.com/huggingface/transformers), integrating [10+ adapter methods](https://docs.adapterhub.ml/overview.html) into [20+ state-of-the-art Transformer models](https://docs.adapterhub.ml/model_overview.html) with minimal coding overhead for training and inference.

_Adapters_ provides a unified interface for efficient fine-tuning and modular transfer learning. It supports a wide range of features, such as full-precision or quantized training (e.g. [Q-LoRA, Q-Bottleneck Adapters, or Q-PrefixTuning](https://github.com/Adapter-Hub/adapters/blob/main/notebooks/QLoRA_Llama_Finetuning.ipynb)), [adapter merging via task arithmetics](https://docs.adapterhub.ml/adapter_composition.html#merging-adapters), and the composition of multiple adapters via [composition blocks](https://docs.adapterhub.ml/adapter_composition.html), enabling advanced research in parameter-efficient transfer learning for NLP tasks.
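
As an illustration of adapter merging, the sketch below averages two pre-trained adapters into a new one via a weighted combination of their parameters (task arithmetic). This is a minimal sketch based on the merging documentation linked above; treat the exact `average_adapter` signature as an assumption to verify against your installed version.

```python
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")

# Two pre-trained adapters from the Hub (also used in the Quick Tour below).
model.load_adapter("AdapterHub/roberta-base-pf-imdb", load_as="imdb")
model.load_adapter("AdapterHub/roberta-base-pf-trec", load_as="trec")

# Merge them into a new adapter as a weighted average of their parameters.
# Argument names follow the merging docs and may differ across versions.
model.average_adapter(
    adapter_name="merged",
    adapter_list=["imdb", "trec"],
    weights=[0.7, 0.3],
)
model.set_active_adapters("merged")
```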

> **Note**: The _Adapters_ library has replaced the [`adapter-transformers`](https://github.com/adapter-hub/adapter-transformers-legacy) package. All previously trained adapters are compatible with the new library. For transitioning, please read: https://docs.adapterhub.ml/transitioning.html.


## Installation

`adapters` currently supports **Python 3.8+** and **PyTorch 1.10+**.
After [installing PyTorch](https://pytorch.org/get-started/locally/), you can install `adapters` from PyPI ...

```bash
pip install -U adapters
```

... or from source by cloning the repository:

```bash
git clone https://github.com/adapter-hub/adapters.git
cd adapters
pip install .
```


## Quick Tour

#### Load pre-trained adapters:

```python
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf", set_active=True)

print(model(**tokenizer("This works great!", return_tensors="pt")).logits)
```

**[Learn More](https://docs.adapterhub.ml/loading.html)**

#### Adapt existing model setups:

```python
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("t5-base")

adapters.init(model)

model.add_adapter("my_lora_adapter", config="lora")
model.train_adapter("my_lora_adapter")

# Your regular training loop...
```
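
Instead of a hand-written loop, training can also be run with the library's `AdapterTrainer`, a drop-in replacement for the Hugging Face `Trainer` that updates only the activated adapter weights. A minimal sketch continuing the example above; `train_dataset` is assumed to be a tokenized dataset prepared elsewhere:

```python
from adapters import AdapterTrainer
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./my_lora_adapter",
    learning_rate=1e-4,
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

# `train_dataset` is assumed to be a tokenized dataset prepared elsewhere.
trainer = AdapterTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()

# Save only the adapter weights instead of the full model.
model.save_adapter("./my_lora_adapter", "my_lora_adapter")
```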

**[Learn More](https://docs.adapterhub.ml/quickstart.html)**

#### Flexibly configure adapters:

```python
from adapters import ConfigUnion, PrefixTuningConfig, ParBnConfig, AutoAdapterModel

model = AutoAdapterModel.from_pretrained("microsoft/deberta-v3-base")

adapter_config = ConfigUnion(
    PrefixTuningConfig(prefix_length=20),
    ParBnConfig(reduction_factor=4),
)
model.add_adapter("my_adapter", config=adapter_config, set_active=True)
```
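
Configuration classes also expose the hyperparameters of each method, so defaults can be overridden explicitly. A minimal sketch; the parameter names (`r`, `alpha`, `reduction_factor`) follow the configuration docs and should be checked against your installed version:

```python
from adapters import AutoAdapterModel, LoRAConfig, SeqBnConfig

model = AutoAdapterModel.from_pretrained("roberta-base")

# LoRA with a custom rank and scaling factor.
model.add_adapter("my_lora", config=LoRAConfig(r=8, alpha=16))

# A sequential bottleneck adapter with a smaller bottleneck dimension.
model.add_adapter("my_bottleneck", config=SeqBnConfig(reduction_factor=16))
```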

**[Learn More](https://docs.adapterhub.ml/overview.html)**

#### Easily compose adapters in a single model:

```python
from adapters import AdapterSetup, AutoAdapterModel
from transformers import AutoTokenizer
import adapters.composition as ac

model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

qc = model.load_adapter("AdapterHub/roberta-base-pf-trec")
sent = model.load_adapter("AdapterHub/roberta-base-pf-imdb")

with AdapterSetup(ac.Parallel(qc, sent)):
    print(model(**tokenizer("What is AdapterHub?", return_tensors="pt")))
```
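
Compositions can also be activated for all subsequent forward passes rather than only inside a context manager, for example by assigning them to the model's active adapters. A short sketch reusing the adapters loaded above; the `active_adapters` attribute and the `Stack` block follow the composition docs:

```python
# Activate the parallel composition persistently.
model.active_adapters = ac.Parallel(qc, sent)

# Alternatively, stack the adapters so one is applied on top of the other.
model.active_adapters = ac.Stack(qc, sent)
```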

**[Learn More](https://docs.adapterhub.ml/adapter_composition.html)**

## Useful Resources

HuggingFace's great documentation on getting started with _Transformers_ can be found [here](https://huggingface.co/transformers/index.html). `adapters` is fully compatible with _Transformers_.

To get started with adapters, refer to these locations:

- **[Colab notebook tutorials](https://github.com/Adapter-Hub/adapters/tree/main/notebooks)**, a series of notebooks providing an introduction to all the main concepts of (adapter-)transformers and AdapterHub
- **https://docs.adapterhub.ml**, our documentation on training and using adapters with _Adapters_
- **https://adapterhub.ml** to explore available pre-trained adapter modules and share your own adapters
- **[Examples folder](https://github.com/Adapter-Hub/adapters/tree/main/examples/pytorch)** of this repository containing HuggingFace's example training scripts, many adapted for training adapters

## Implemented Methods

Currently, _Adapters_ integrates all of the adapter methods listed below:

| Method | Paper(s) | Quick Links |
| --- | --- | --- |
| Bottleneck adapters | [Houlsby et al. (2019)](https://arxiv.org/pdf/1902.00751.pdf)<br> [Bapna and Firat (2019)](https://arxiv.org/pdf/1909.08478.pdf) | [Quickstart](https://docs.adapterhub.ml/quickstart.html), [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/01_Adapter_Training.ipynb) |
| AdapterFusion | [Pfeiffer et al. (2021)](https://aclanthology.org/2021.eacl-main.39.pdf) | [Docs: Training](https://docs.adapterhub.ml/training.html#train-adapterfusion), [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/03_Adapter_Fusion.ipynb) |
| MAD-X,<br> Invertible adapters | [Pfeiffer et al. (2020)](https://aclanthology.org/2020.emnlp-main.617/) | [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/04_Cross_Lingual_Transfer.ipynb) |
| AdapterDrop | [Rücklé et al. (2021)](https://arxiv.org/pdf/2010.11918.pdf) | [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/05_Adapter_Drop_Training.ipynb) |
| MAD-X 2.0,<br> Embedding training | [Pfeiffer et al. (2021)](https://arxiv.org/pdf/2012.15562.pdf) | [Docs: Embeddings](https://docs.adapterhub.ml/embeddings.html), [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/08_NER_Wikiann.ipynb) |
| Prefix Tuning | [Li and Liang (2021)](https://arxiv.org/pdf/2101.00190.pdf) | [Docs](https://docs.adapterhub.ml/methods.html#prefix-tuning) |
| Parallel adapters,<br> Mix-and-Match adapters | [He et al. (2021)](https://arxiv.org/pdf/2110.04366.pdf) | [Docs](https://docs.adapterhub.ml/method_combinations.html#mix-and-match-adapters) |
| Compacter | [Mahabadi et al. (2021)](https://arxiv.org/pdf/2106.04647.pdf) | [Docs](https://docs.adapterhub.ml/methods.html#compacter) |
| LoRA | [Hu et al. (2021)](https://arxiv.org/pdf/2106.09685.pdf) | [Docs](https://docs.adapterhub.ml/methods.html#lora) |
| (IA)^3 | [Liu et al. (2022)](https://arxiv.org/pdf/2205.05638.pdf) | [Docs](https://docs.adapterhub.ml/methods.html#ia-3) |
| UniPELT | [Mao et al. (2022)](https://arxiv.org/pdf/2110.07577.pdf) | [Docs](https://docs.adapterhub.ml/method_combinations.html#unipelt) |
| Prompt Tuning | [Lester et al. (2021)](https://aclanthology.org/2021.emnlp-main.243/) | [Docs](https://docs.adapterhub.ml/methods.html#prompt-tuning) |
| QLoRA | [Dettmers et al. (2023)](https://arxiv.org/pdf/2305.14314.pdf) | [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/QLoRA_Llama_Finetuning.ipynb) |
| ReFT | [Wu et al. (2024)](https://arxiv.org/pdf/2404.03592) | [Docs](https://docs.adapterhub.ml/methods.html#reft) |
| Adapter Task Arithmetics | [Chronopoulou et al. (2023)](https://arxiv.org/abs/2311.09344)<br> [Zhang et al. (2023)](https://proceedings.neurips.cc/paper_files/paper/2023/hash/299a08ee712d4752c890938da99a77c6-Abstract-Conference.html) | [Docs](https://docs.adapterhub.ml/merging_adapters.html), [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/06_Task_Arithmetics.ipynb)|
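
Each method in the table maps to a configuration class that can be passed to `add_adapter()`, just like in the Quick Tour above. A brief sketch; apart from the configs already shown, the class names below follow the methods documentation and should be verified against your installed version:

```python
from adapters import AutoAdapterModel, CompacterConfig, IA3Config, LoRAConfig

model = AutoAdapterModel.from_pretrained("roberta-base")

# One adapter per method; typically only one would be activated for training.
model.add_adapter("compacter_adapter", config=CompacterConfig())
model.add_adapter("ia3_adapter", config=IA3Config())
model.add_adapter("lora_adapter", config=LoRAConfig())
```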


## Supported Models

We currently support the PyTorch versions of all models listed on the **[Model Overview](https://docs.adapterhub.ml/model_overview.html) page** in our documentation.

## Developing & Contributing

To get started with development on _Adapters_ and to learn more about ways to contribute, please see https://docs.adapterhub.ml/contributing.html.

## Citation

If you use _Adapters_ in your work, please consider citing our library paper: [Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning](https://arxiv.org/abs/2311.11077)

```
@inproceedings{poth-etal-2023-adapters,
    title = "Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning",
    author = {Poth, Clifton  and
      Sterz, Hannah  and
      Paul, Indraneil  and
      Purkayastha, Sukannya  and
      Engl{\"a}nder, Leon  and
      Imhof, Timo  and
      Vuli{\'c}, Ivan  and
      Ruder, Sebastian  and
      Gurevych, Iryna  and
      Pfeiffer, Jonas},
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-demo.13",
    pages = "149--160",
}
```

Alternatively, if you use the predecessor `adapter-transformers`, the Hub infrastructure, or adapters uploaded by the AdapterHub team, please consider citing our initial paper: [AdapterHub: A Framework for Adapting Transformers](https://arxiv.org/abs/2007.07779)

```
@inproceedings{pfeiffer2020AdapterHub,
    title={AdapterHub: A Framework for Adapting Transformers},
    author={Pfeiffer, Jonas and
            R{\"u}ckl{\'e}, Andreas and
            Poth, Clifton and
            Kamath, Aishwarya and
            Vuli{\'c}, Ivan and
            Ruder, Sebastian and
            Cho, Kyunghyun and
            Gurevych, Iryna},
    booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
    pages={46--54},
    year={2020}
}
```

            
