fast-bert

- Name: fast-bert
- Version: 2.0.24
- Home page: https://github.com/kaushaltrivedi/fast-bert
- Summary: AI Library using BERT
- Upload time: 2024-01-30 11:19:09
- Author: Kaushal Trivedi
- License: Apache 2.0
- Keywords: bert, nlp, deep learning, google

# Fast-Bert

[![License Apache 2.0](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/deepmipt/DeepPavlov/blob/master/LICENSE)
[![PyPI version](https://badge.fury.io/py/fast-bert.svg)](https://badge.fury.io/py/fast-bert)
![Python 3.6, 3.7](https://img.shields.io/badge/python-3.6%20%7C%203.7-green.svg)

**New - Learning Rate Finder for Text Classification Training (borrowed with thanks from https://github.com/davidtvs/pytorch-lr-finder)**


**Supports LAMB optimizer for faster training.**
Please refer to https://arxiv.org/abs/1904.00962 for the paper on LAMB optimizer.

**Supports BERT and XLNet for both Multi-Class and Multi-Label text classification.**

Fast-Bert is a deep learning library that lets developers and data scientists train and deploy BERT- and XLNet-based models for natural language processing tasks, beginning with text classification.

The work on FastBert is built on solid foundations provided by the excellent [Hugging Face BERT PyTorch library](https://github.com/huggingface/pytorch-pretrained-BERT), is inspired by [fast.ai](https://github.com/fastai/fastai), and strives to make cutting-edge deep learning technologies accessible to the vast community of machine learning practitioners.

With FastBert, you will be able to:

1. Train (more precisely fine-tune) BERT, RoBERTa and XLNet text classification models on your custom dataset.

2. Tune model hyper-parameters such as epochs, learning rate, batch size, optimiser schedule and more.

3. Save and deploy trained model for inference (including on AWS Sagemaker).

Fast-Bert supports both multi-class and multi-label text classification for the models listed below; in due course it will support other NLU tasks such as Named Entity Recognition, Question Answering and custom corpus fine-tuning.

1. **[BERT](https://github.com/google-research/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.

2. **[XLNet](https://github.com/zihangdai/xlnet/)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov and Quoc V. Le.

3. **[RoBERTa](https://arxiv.org/abs/1907.11692)** (from Facebook), a Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du et al.

4. **DistilBERT** (from HuggingFace), released together with the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5) by Victor Sanh, Lysandre Debut and Thomas Wolf.

## Installation

This repo is tested on Python 3.6+.

### With pip

Fast-Bert can be installed with pip as follows:

```bash
pip install fast-bert
```

### From source

Clone the repository and run:

```bash
pip install [--editable] .
```

or

```bash
pip install git+https://github.com/kaushaltrivedi/fast-bert.git
```

You will also need to install NVIDIA Apex.

```bash
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```

## Usage

## Text Classification

### 1. Create a DataBunch object

The databunch object takes training, validation and test CSV files and converts the data into the internal representation for BERT, RoBERTa, DistilBERT or XLNet. It also instantiates the correct data loaders based on the device profile, batch_size and max_seq_length.

```python

from fast_bert.data_cls import BertDataBunch

databunch = BertDataBunch(DATA_PATH, LABEL_PATH,
                          tokenizer='bert-base-uncased',
                          train_file='train.csv',
                          val_file='val.csv',
                          label_file='labels.csv',
                          text_col='text',
                          label_col='label',
                          batch_size_per_gpu=16,
                          max_seq_length=512,
                          multi_gpu=True,
                          multi_label=False,
                          model_type='bert')
```

#### File format for train.csv and val.csv

| index | text                                                                                                                                                                                                                                                                                                                                | label |
| ----- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----- |
| 0     | Looking through the other comments, I'm amazed that there aren't any warnings to potential viewers of what they have to look forward to when renting this garbage. First off, I rented this thing with the understanding that it was a competently rendered Indiana Jones knock-off.                                                | neg   |
| 1     | I've watched the first 17 episodes and this series is simply amazing! I haven't been this interested in an anime series since Neon Genesis Evangelion. This series is actually based off an h-game, which I'm not sure if it's been done before or not, I haven't played the game, but from what I've heard it follows it very well | pos   |
| 2     | his movie is nothing short of a dark, gritty masterpiece. I may be bias, as the Apartheid era is an area I've always felt for.                                                                                                                                                                                                      | pos   |

If the column names differ from the usual `text` and `label`, provide those names through the databunch `text_col` and `label_col` parameters.

**labels.csv** will contain a list of all unique labels. In this case the file will contain:

```csv
pos
neg
```

For multi-label classification, **labels.csv** will contain all possible labels:

```
toxic
severe_toxic
obscene
threat
insult
identity_hate
```

The file **train.csv** will then contain one column for each label, with each column value being either 0 or 1. Don't forget to change `multi_label=True` for multi-label classification in `BertDataBunch`.

| id  | text                                                                       | toxic | severe_toxic | obscene | threat | insult | identity_hate |
| --- | -------------------------------------------------------------------------- | ----- | ------------ | ------- | ------ | ------ | ------------- |
| 0   | Why the edits made under my username Hardcore Metallica Fan were reverted? | 0     | 0            | 0       | 0      | 0      | 0             |
| 0   | I will mess you up                                                         | 1     | 0            | 0       | 1      | 0      | 0             |

`label_col` will be a list of label column names. In this case it will be:

```python
['toxic','severe_toxic','obscene','threat','insult','identity_hate']
```
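
Putting this together, a multi-label databunch for the toxic-comments style file above might be created as sketched below (the paths and batch size are assumptions, not values from this README):

```python
from fast_bert.data_cls import BertDataBunch

labels = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']

# Multi-label setup: label_col is the list of label columns and multi_label=True
databunch = BertDataBunch(DATA_PATH, LABEL_PATH,
                          tokenizer='bert-base-uncased',
                          train_file='train.csv',
                          val_file='val.csv',
                          label_file='labels.csv',
                          text_col='text',
                          label_col=labels,
                          batch_size_per_gpu=8,
                          max_seq_length=512,
                          multi_gpu=False,
                          multi_label=True,
                          model_type='bert')
```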

#### Tokenizer

You can either create a tokenizer object and pass it to DataBunch or you can pass the model name as tokenizer and DataBunch will automatically download and instantiate an appropriate tokenizer object.

For example, to use the XLNet base cased model, set the tokenizer parameter to 'xlnet-base-cased'. DataBunch will automatically download and instantiate an XLNetTokenizer with the vocabulary for the xlnet-base-cased model.
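
If you prefer to construct the tokenizer yourself, a minimal sketch is shown below. It assumes the Hugging Face transformers package is installed and relies on the behaviour described above, namely that the databunch accepts a pre-instantiated tokenizer object:

```python
from transformers import AutoTokenizer
from fast_bert.data_cls import BertDataBunch

# Build the tokenizer explicitly instead of passing a model name string
tokenizer = AutoTokenizer.from_pretrained('xlnet-base-cased')

databunch = BertDataBunch(DATA_PATH, LABEL_PATH,
                          tokenizer=tokenizer,   # pre-instantiated tokenizer object
                          train_file='train.csv',
                          val_file='val.csv',
                          label_file='labels.csv',
                          text_col='text',
                          label_col='label',
                          batch_size_per_gpu=16,
                          max_seq_length=512,
                          multi_label=False,
                          model_type='xlnet')
```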

#### Model Type

Fast-Bert supports XLNet-, RoBERTa- and BERT-based classification models. Set the model type parameter to **'bert'**, **'roberta'** or **'xlnet'** to instantiate the appropriate databunch object.

### 2. Create a Learner Object

BertLearner is the ‘learner’ object that holds everything together. It encapsulates the key logic for the lifecycle of the model such as training, validation and inference.

The learner object takes the databunch created earlier as input, along with other parameters such as the location of one of the pretrained models, FP16 training, and the multi_gpu and multi_label options.

The learner class contains the logic for the training loop, validation loop, optimiser strategies and key metrics calculation. This helps developers focus on their custom use cases without worrying about these repetitive activities.

At the same time, the learner object is flexible enough to be customised, either through its parameters or by creating a subclass of BertLearner and redefining the relevant methods.

```python

from fast_bert.learner_cls import BertLearner
from fast_bert.metrics import accuracy
import logging
import torch

logger = logging.getLogger()
device_cuda = torch.device("cuda")
metrics = [{'name': 'accuracy', 'function': accuracy}]

learner = BertLearner.from_pretrained_model(
						databunch,
						pretrained_path='bert-base-uncased',
						metrics=metrics,
						device=device_cuda,
						logger=logger,
						output_dir=OUTPUT_DIR,
						finetuned_wgts_path=None,
						warmup_steps=500,
						multi_gpu=True,
						is_fp16=True,
						multi_label=False,
						logging_steps=50)
```

| parameter           | description                                                                                                                                                                                                                    |
| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| databunch           | Databunch object created earlier                                                                                                                                                                                               |
| pretrained_path     | Directory for the location of the pretrained model files or the name of one of the pretrained models i.e. bert-base-uncased, xlnet-large-cased, etc                                                                            |
| metrics             | List of metrics functions that you want the model to calculate on the validation set, e.g. accuracy, beta, etc                                                                                                                 |
| device              | torch.device of type _cuda_ or _cpu_                                                                                                                                                                                           |
| logger              | logger object                                                                                                                                                                                                                  |
| output_dir          | Directory for model to save trained artefacts, tokenizer vocabulary and tensorboard files                                                                                                                                      |
| finetuned_wgts_path | provide the location for fine-tuned language model (experimental feature)                                                                                                                                                      |
| warmup_steps        | number of training warmup steps for the scheduler                                                                                                                                                                              |
| multi_gpu           | multiple GPUs available e.g. if running on AWS p3.8xlarge instance                                                                                                                                                             |
| is_fp16             | FP16 training                                                                                                                                                                                                                  |
| multi_label         | multilabel classification                                                                                                                                                                                                      |
| logging_steps       | number of steps between each tensorboard metrics calculation. Set it to 0 to disable tensorboard logging. Keeping this value too low will slow down training, as the model will be evaluated each time the metrics are logged |

### 3. Find the optimal learning rate

The learning rate is one of the most important hyperparameters for model training. We have incorporated the learning rate finder proposed by Leslie Smith and subsequently built into the fastai library.

```python
learner.lr_find(start_lr=1e-5, optimizer_type='lamb')
```

The code is heavily borrowed from David Silva's [pytorch-lr-finder library](https://github.com/davidtvs/pytorch-lr-finder). 

![Learning rate range test](images/lr_finder.png)

### 4. Train the model

```python
learner.fit(epochs=6,
			lr=6e-5,
			validate=True, 	# Evaluate the model after each epoch
			schedule_type="warmup_cosine",
			optimizer_type="lamb")
```

Fast-Bert now supports the LAMB optimizer. Because of its training speed, LAMB is set as the default optimizer. You can switch back to AdamW by setting optimizer_type to 'adamw', as shown below.
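
For example, to train with AdamW instead of LAMB:

```python
learner.fit(epochs=6,
            lr=6e-5,
            validate=True,
            schedule_type="warmup_cosine",
            optimizer_type="adamw")
```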

### 5. Save trained model artifacts

```python
learner.save_model()
```

Model artefacts will be persisted in the output_dir/'model_out' path provided to the learner object. The following files will be persisted:

| File name               | description                                      |
| ----------------------- | ------------------------------------------------ |
| pytorch_model.bin       | trained model weights                            |
| spiece.model            | sentence tokenizer vocabulary (for xlnet models) |
| vocab.txt               | wordpiece tokenizer vocabulary (for bert models) |
| special_tokens_map.json | special tokens mappings                          |
| config.json             | model config                                     |
| added_tokens.json       | list of new tokens                               |

As the model artefacts are all stored in the same folder, you can instantiate a learner object for inference by pointing pretrained_path to this location, as sketched below.
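
A minimal sketch, reusing the databunch, metrics, device, logger and OUTPUT_DIR objects created earlier in this section:

```python
from fast_bert.learner_cls import BertLearner

# Point pretrained_path at the folder written by learner.save_model()
inference_learner = BertLearner.from_pretrained_model(
    databunch,
    pretrained_path=OUTPUT_DIR/'model_out',
    metrics=metrics,
    device=device_cuda,
    logger=logger,
    output_dir=OUTPUT_DIR,
    finetuned_wgts_path=None,
    warmup_steps=500,
    multi_gpu=False,
    is_fp16=False,
    multi_label=False,
    logging_steps=0)
```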

### 6. Model Inference

If you already have a Learner object with a trained model, just call the predict_batch method on the learner object with a list of texts:

```python
texts = ['I really love the Netflix original movies',
		 'this movie is not worth watching']
predictions = learner.predict_batch(texts)
```

If you have a persisted trained model and just want to run inference on it, use the second approach: the predictor object.

```python
from fast_bert.prediction import BertClassificationPredictor

MODEL_PATH = OUTPUT_DIR/'model_out'

predictor = BertClassificationPredictor(
				model_path=MODEL_PATH,
				label_path=LABEL_PATH, # location for labels.csv file
				multi_label=False,
				model_type='xlnet',
				do_lower_case=False,
				device=None) # set custom torch.device, defaults to cuda if available

# Single prediction
single_prediction = predictor.predict("just get me result for this text")

# Batch predictions
texts = [
	"this is the first text",
	"this is the second text"
	]

multiple_predictions = predictor.predict_batch(texts)
```

## Language Model Fine-tuning

A useful approach to using BERT-based models on custom datasets is to first fine-tune the language model on the custom dataset, an approach also followed by fast.ai's ULMFiT. The idea is to start with a pre-trained model and further train it on the raw text of the custom dataset. We use the masked LM task to fine-tune the language model.

This section will describe the usage of FastBert to finetune the language model.

### 1. Import the necessary libraries

The necessary objects live in the modules with the '\_lm' suffix.

```python
# Language model Databunch
from fast_bert.data_lm import BertLMDataBunch
# Language model learner
from fast_bert.learner_lm import BertLMLearner

import torch
from pathlib import Path
from box import Box
```

### 2. Define parameters and setup datapaths

```python
# Box is a nice wrapper to create an object from a json dict
args = Box({
    "seed": 42,
    "task_name": 'imdb_reviews_lm',
    "model_name": 'roberta-base',
    "model_type": 'roberta',
    "train_batch_size": 16,
    "learning_rate": 4e-5,
    "num_train_epochs": 20,
    "fp16": True,
    "fp16_opt_level": "O2",
    "warmup_steps": 1000,
    "logging_steps": 0,
    "max_seq_length": 512,
    "multi_gpu": True if torch.cuda.device_count() > 1 else False
})

DATA_PATH = Path('../lm_data/')
LOG_PATH = Path('../logs')
MODEL_PATH = Path('../lm_model_{}/'.format(args.model_type))

DATA_PATH.mkdir(exist_ok=True)
MODEL_PATH.mkdir(exist_ok=True)
LOG_PATH.mkdir(exist_ok=True)


```
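
The LM databunch and learner below also expect a `texts` list, a `logger` and a `device`, which are not defined in this section. A minimal setup, mirroring the classification example above (the sample texts are placeholders only), might be:

```python
import logging
import torch

logger = logging.getLogger()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# `texts` is the list of raw documents used for language model fine-tuning,
# e.g. loaded from your own dataset; shown here only as a placeholder.
texts = ["first raw document ...", "second raw document ..."]
```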

### 3. Create DataBunch object

The BertLMDataBunch class contains a static method 'from_raw_corpus' that takes a list of raw texts and creates a DataBunch for the language model learner.

The method first preprocesses the text list by removing HTML tags, extra spaces and so on, and then creates the files `lm_train.txt` and `lm_val.txt`. These files are used for training and evaluating the language model fine-tuning task.

The next step is to featurize the texts. The text is tokenized, numericalized and split into blocks of 512 tokens (including special tokens).

```python
databunch_lm = BertLMDataBunch.from_raw_corpus(
    data_dir=DATA_PATH,
    text_list=texts,
    tokenizer=args.model_name,
    batch_size_per_gpu=args.train_batch_size,
    max_seq_length=args.max_seq_length,
    multi_gpu=args.multi_gpu,
    model_type=args.model_type,
    logger=logger)
```

As this step can take some time depending on the size of your custom dataset, the featurized data will be cached in pickled files in the data_dir/lm_cache folder.

The next time, instead of using the from_raw_corpus method, you may want to instantiate the DataBunch object directly, as shown below:

```python
databunch_lm = BertLMDataBunch(
    data_dir=DATA_PATH,
    tokenizer=args.model_name,
    batch_size_per_gpu=args.train_batch_size,
    max_seq_length=args.max_seq_length,
    multi_gpu=args.multi_gpu,
    model_type=args.model_type,
    logger=logger)
```

### 4. Create the LM Learner object

BertLMLearner is the ‘learner’ object that holds everything together. It encapsulates the key logic for the lifecycle of the model, such as training, validation and inference.

The learner object takes the databunch created earlier as input, along with other parameters such as the location of one of the pretrained models, FP16 training and the multi_gpu option.

The learner class contains the logic for the training loop, validation loop, and optimizer strategies. This helps developers focus on their custom use cases without worrying about these repetitive activities.

At the same time, the learner object is flexible enough to be customized, either through its parameters or by creating a subclass of the learner and redefining the relevant methods.

```python
learner = BertLMLearner.from_pretrained_model(
							dataBunch=databunch_lm,
							pretrained_path=args.model_name,
							output_dir=MODEL_PATH,
							metrics=[],
							device=device,
							logger=logger,
							multi_gpu=args.multi_gpu,
							logging_steps=args.logging_steps,
							fp16_opt_level=args.fp16_opt_level)
```

### 5. Train the model

```python
learner.fit(epochs=6,
			lr=6e-5,
			validate=True, 	# Evaluate the model after each epoch
			schedule_type="warmup_cosine",
			optimizer_type="lamb")
```

Fast-Bert now supports the LAMB optimizer. Because of its training speed, LAMB is set as the default optimizer. You can switch back to AdamW by setting optimizer_type to 'adamw'.

### 6. Save trained model artifacts

```python
learner.save_model()
```

Model artefacts will be persisted in the output_dir/'model_out' path provided to the learner object. The following files will be persisted:

| File name               | description                                      |
| ----------------------- | ------------------------------------------------ |
| pytorch_model.bin       | trained model weights                            |
| spiece.model            | sentence tokenizer vocabulary (for xlnet models) |
| vocab.txt               | wordpiece tokenizer vocabulary (for bert models) |
| special_tokens_map.json | special tokens mappings                          |
| config.json             | model config                                     |
| added_tokens.json       | list of new tokens                               |

The pytorch_model.bin file contains the fine-tuned weights; you can point the classification-task learner object to this file through the `finetuned_wgts_path` parameter, as sketched below.
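
A minimal sketch of a classification learner that starts from the fine-tuned language model weights. It assumes a classification `databunch`, `metrics`, `OUTPUT_DIR` and `logger` have been set up as in the Text Classification section, and that `finetuned_wgts_path` points at the pytorch_model.bin saved by the LM learner:

```python
from fast_bert.learner_cls import BertLearner

learner = BertLearner.from_pretrained_model(
    databunch,                       # classification databunch, not the LM databunch
    pretrained_path=args.model_name,
    metrics=metrics,
    device=device,
    logger=logger,
    output_dir=OUTPUT_DIR,
    finetuned_wgts_path=MODEL_PATH/'model_out'/'pytorch_model.bin',
    warmup_steps=500,
    multi_gpu=args.multi_gpu,
    is_fp16=args.fp16,
    multi_label=False,
    logging_steps=0)
```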

## Amazon Sagemaker Support

The purpose of this library is to let you train and deploy production-grade models. As transformer models require expensive GPUs to train, I have added support for training and deploying models on AWS SageMaker.

The repository contains the docker image and code for building BERT based classification models in Amazon SageMaker.

Please refer to my blog [Train and Deploy the Mighty BERT based NLP models using FastBert and Amazon SageMaker](https://towardsdatascience.com/train-and-deploy-mighty-transformer-nlp-models-using-fastbert-and-aws-sagemaker-cc4303c51cf3), which provides a detailed explanation of using SageMaker with FastBert.

## Citation

Please include a mention of [this library](https://github.com/kaushaltrivedi/fast-bert) and the HuggingFace [pytorch-transformers](https://github.com/huggingface/pytorch-transformers) library, and a link to the present repository, if you use this work in a published or open-source project.

Also include my blogs on this topic:

- [Introducing FastBert — A simple Deep Learning library for BERT Models](https://medium.com/huggingface/introducing-fastbert-a-simple-deep-learning-library-for-bert-models-89ff763ad384)
- [Multi-label Text Classification using BERT – The Mighty Transformer](https://medium.com/huggingface/multi-label-text-classification-using-bert-the-mighty-transformer-69714fa3fb3d)

- [Train and Deploy the Mighty BERT based NLP models using FastBert and Amazon SageMaker](https://towardsdatascience.com/train-and-deploy-mighty-transformer-nlp-models-using-fastbert-and-aws-sagemaker-cc4303c51cf3)

            
