pytransformers

Name: pytransformers
Version: 0.1.0
Summary: create transformer models (the architecture behind ChatGPT and other large language models)
Upload time: 2023-08-12 16:29:58
Author: omer mustafa
Requires Python: >=3.9
Keywords: python, ai, chat gpt, transformer model, transformers, bert, seq2seq, sequence to sequence, classification, chat bot, deep learning, keras
Requirements: no requirements were recorded
## PyTransformers

PyTransformers is a powerful library for data processing and implementing Transformer-based models using Keras and TensorFlow. This library simplifies the data preprocessing steps and allows you to build and train Transformer models for various natural language processing tasks.

### Installation

To install the pytransformers library, use pip:

```
pip install pytransformers
```
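
Note that this release records no dependencies of its own, so TensorFlow (which provides Keras) must be installed separately if it is not already available:

```
pip install tensorflow
```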

### DataProcessor Class

The `DataProcessor` class in pytransformers is designed for data preprocessing and tokenization. It prepares the data for training and evaluation by cleaning the input and target sentences and creating TextVectorization objects for inputs and targets.

#### Constructor

| Parameter           | Description                                                                                                  |
|---------------------|--------------------------------------------------------------------------------------------------------------|
| `inputs`            | List of input sentences.                                                                                    |
| `targets`           | List of target sentences.                                                                                   |
| `maxlen` (optional) | Maximum length of input and target sentences. If not provided, it will be set to the maximum sentence length in the data.  |
| `remove_target_punc`| Boolean value to indicate whether to remove punctuation from the target sentences during data processing. |
| `remove_input_punc` | Boolean value to indicate whether to remove punctuation from the input sentences during data processing.  |


#### Methods

| Method                 | Description                                                                        |
|------------------------|------------------------------------------------------------------------------------|
| `get_Dataset(batch_size)` | Returns a preprocessed TensorFlow Dataset, batched and ready for training.    |
| `save_input_vectoriser(name)`  | Saves the input TextVectorization object to a pickle file with the given name.      |
| `save_target_vectoriser(name)` | Saves the target TextVectorization object to a pickle file with the given name.     |
| `load_vectoriser(name)` | Loads a TextVectorization object from a pickle file with the given name.             |

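For a concrete feel, here is a minimal sketch of the round trip, using placeholder sentences and the import path assumed in the Usage section below:

```python
from pytransformers import DataProcessor

# Clean and vectorise a toy corpus (placeholder data).
dp = DataProcessor(inputs=['hi there'], targets=['hello'],
                   maxlen=100, remove_input_punc=True, remove_target_punc=True)
dataset = dp.get_Dataset(batch_size=24)  # preprocessed tf.data.Dataset

# Persist the fitted vectorisers, then reload one from its pickle file.
dp.save_input_vectoriser('inp_vec')
dp.save_target_vectoriser('tar_vec')
inp_vec = DataProcessor.load_vectoriser('inp_vec.pkl')
```
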
### Transformer Class

The `Transformer` class combines the encoder and decoder layers into the full Transformer model. Its constructor takes the sequence length, vocabulary size, latent dimension, embedding dimension, number of attention heads, and the number of encoder and decoder layers.

#### Constructor

| Parameter       | Description                            |
|-----------------|----------------------------------------|
| `seq_length`    | Maximum sequence length for inputs and targets. |
| `vocab_size`    | Vocabulary size (number of unique tokens). |
| `latent_dim`    | Latent dimension for the model.        |
| `embd_dim`      | Embedding dimension for the model.     |
| `num_heads`     | Number of attention heads in the model.|
| `EncoderUnits`  | Number of encoder layers in the model. |
| `DecoderUnits`  | Number of decoder layers in the model. |

#### Methods

| Method                    | Description                                                                   |
|---------------------------|-------------------------------------------------------------------------------|
| `model()`                 | Returns the Keras model for the Transformer.                                  |
| `save_transformer(name)`  | Saves the trained Transformer model to an h5 file with the given name.       |
| `answer()`                | Performs prediction for a given input sentence using the trained model.       |
| `Chat()`                  | Allows interactive chat with the trained model for question-answer tasks.     |
| `load_transformer(name)`  | Loads the trained Transformer model from an h5 file with the given name.      |
| `train()`                 | Fine-tunes the Transformer model with new data and saves the updated model.   |
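
As a quick illustration of the save/load pair (a sketch reusing only calls from the Usage example, with a hypothetical file name `my_model`):

```python
from pytransformers import Transformer

# 'transformer' is assumed to be a trained Transformer instance,
# built and fitted as in the Usage example below.
transformer.save_transformer(name='my_model')        # writes my_model.h5
model = Transformer.load_transformer('my_model.h5')  # restores the Keras model
```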

### Usage

```python
# Example usage for DataProcessor and Transformer

import pandas as pd
from tensorflow import keras
from pytransformers import Transformer, DataProcessor

# Example data
data = pd.DataFrame({
    'text': ['this is the first example', 'and here comes the second example'],
    'code': ['print("hello")', 'print("world")']
})

inputs = data['text'].tolist()
targets = data['code'].tolist()

dp = DataProcessor(inputs=inputs, targets=targets, maxlen=100, remove_input_punc=True, remove_target_punc=True)
dataset = dp.get_Dataset(batch_size=24)

seq_len = dp.maxlen
embd_dim = 512
dense_dim = 8000
vocab_size = dp.vocab_size
encoder_units = 6
decoder_units = 12
num_heads = 16

transformer = Transformer(vocab_size=vocab_size, embd_dim=embd_dim, seq_length=seq_len,
                          latent_dim=dense_dim, num_heads=num_heads,
                          EncoderUnits=encoder_units, DecoderUnits=decoder_units)
model = transformer.model()
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(dataset, epochs=1)

# Saving the model and vocabulary
transformer.save_transformer(name='transformer_model')
dp.save_input_vectoriser('transformer_inp_vec')
dp.save_target_vectoriser('transformer_tar_vec')

# To use the trained model for prediction, load the model and the
# vectorisers, then call Transformer.Chat.

input_vocab = DataProcessor.load_vectoriser('transformer_inp_vec.pkl')
tar_vocab = DataProcessor.load_vectoriser('transformer_tar_vec.pkl')

model = keras.models.load_model('transformer_model.h5')

# Run prediction with the correct 'max_len' value
max_len = 100

Transformer.Chat(input_vectoriser=input_vocab, target_vectoriser=tar_vocab, model=model, maxlen=max_len)


# Fine-tuning the model

# Load example data
data = pd.DataFrame({
    'text': ['this is the first example', 'and here comes the second example'],
    'code': ['print("hello")', 'print("world")']
})

inputs = data['text'].tolist()
targets = data['code'].tolist()

# Load model and vectorizers
input_vec = DataProcessor.load_vectoriser('transformer_inp_vec.pkl')
target_vec = DataProcessor.load_vectoriser('transformer_tar_vec.pkl')
model = Transformer.load_transformer('transformer_model.h5')

# Train the model with new data
Transformer.train(model=model, input_vectoriser=input_vec, target_vectoriser=target_vec,
                  batch_size=128, epochs=5, inputs=inputs, targets=targets,
                  name='transformer_model')
# train() trains the model on the new data and saves it to the local directory.

```
# News:
### OBERT model coming soon!
OBERT is a model that closely resembles the BERT architecture but incorporates a few modifications. It is designed for classification and for predicting the next token in a sequence.

## Contributing
If you want to contribute to the pytransformers library, feel free to email me at omermustafacontact@gmail.com.

