tsdae

Name: tsdae
Version: 1.1.0
Summary: Transformer-based Denoising AutoEncoder for Sentence Transformers Unsupervised pre-training.
Upload time: 2024-05-26 11:48:12
Home page / author / maintainer / license / requires_python: not specified
Keywords: transformers, sentence-transformers, tsdae, bert, machine-learning, nlp, sentence-similarity, nltk, pre-training, embeddings
Requirements: none recorded
# Transformer-based Denoising AutoEncoder for Sentence Transformers Unsupervised pre-training
[![Python](https://img.shields.io/pypi/pyversions/tsdae.svg)](https://badge.fury.io/py/tsdae) [![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) ![Maintainer](https://img.shields.io/badge/maintainer-@louisbrulenaudet-blue)

Learning good sentence embeddings usually requires a substantial volume of labeled data. However, in many domains labeled data is scarce and costly to obtain. This project employs an unsupervised approach based on the pre-trained Transformer-based Sequential Denoising Auto-Encoder (TSDAE), introduced by the Ubiquitous Knowledge Processing (UKP) Lab at TU Darmstadt, which can reach 93.1% of the performance of in-domain supervised methods.

The TSDAE architecture comprises two components: an encoder and a decoder. During training, TSDAE encodes corrupted sentences into fixed-sized vectors, and the decoder must reconstruct the original sentences from these sentence embeddings. For good reconstruction quality, the semantics must be captured accurately in the encoder's sentence embeddings. At inference time, only the encoder is used to produce sentence embeddings.

![Plot](https://github.com/louisbrulenaudet/tsdae/blob/main/thumbnail.png?raw=true)

Moreover, TSDAE serves as an effective pre-training technique, surpassing the classical Masked Language Model (MLM) pre-training task in performance.
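To make the denoising step concrete, the sketch below shows the kind of corruption applied to input sentences during training: roughly 60% of the tokens are deleted before the sentence is passed to the encoder (the deletion ratio suggested in the TSDAE paper). This is a self-contained illustration, not the package's internal implementation.

```python
import random

import nltk

nltk.download("punkt", quiet=True)  # tokenizer data required by nltk.word_tokenize


def corrupt(sentence: str, del_ratio: float = 0.6) -> str:
    """Corrupt a sentence by randomly deleting a fraction of its tokens."""
    tokens = nltk.word_tokenize(sentence)
    if not tokens:
        return sentence
    kept = [token for token in tokens if random.random() > del_ratio]
    # Keep at least one token so the encoder always receives some input
    if not kept:
        kept = [random.choice(tokens)]
    return " ".join(kept)


print(corrupt("The acquisition of sentence embeddings often requires labeled data."))
```

The encoder only ever sees the corrupted sentence, while the decoder is trained to reproduce the original one from the resulting embedding.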

## Dependencies

Below is a list of the main dependencies for TSDAE:
- `nltk`: The Natural Language Toolkit (NLTK) is a suite of libraries and programs for symbolic and statistical natural language processing. In TSDAE, it's primarily used for text preprocessing tasks, such as tokenization.
- `re`: A library for regular expression operations in Python, utilized for text cleaning and splitting operations within TSDAE to prepare text data for further processing.
- `random`: Provides functionalities for generating random numbers, used in TSDAE for shuffling datasets and sampling data subsets in various preprocessing steps.
- `logging`: Facilitates logging events for debugging and tracking the execution process. TSDAE uses the logging module to record critical information, errors, and progress updates during the model's operation.
- `datasets`: Part of the Hugging Face ecosystem, this library simplifies the loading and processing of large-scale datasets. TSDAE uses it to fetch and prepare datasets for training the sentence embeddings.
- `sentence_transformers`: A framework for state-of-the-art sentence, text, and image embeddings. TSDAE leverages this library to create and train the underlying Transformer models for generating sentence embeddings.
- `torch`: The PyTorch library provides a wide array of deep learning and tensor computation tools with GPU acceleration support. It is central to TSDAE for modeling and training the denoising autoencoder.
- `ssl`: Used for handling Secure Sockets Layer (SSL) and Transport Layer Security (TLS) encryption in Python. In TSDAE, it is optionally employed to create unverified HTTPS contexts, facilitating dataset retrieval in environments with strict SSL certificate requirements.

Together, these dependencies cover dataset handling, model training, and embedding generation, and form the backbone of TSDAE's workflow.

## Installation

Before proceeding with TSDAE, ensure that all dependencies are installed through the Python package manager pip:

```bash
pip install tsdae nltk datasets sentence-transformers torch
```

Note: Additional steps, such as configuring unverified HTTPS contexts, may be necessary depending on your execution environment.
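For instance, when strict certificate verification blocks the NLTK resource download, a common (and deliberately permissive) workaround is to disable verification before fetching the tokenizer data. The snippet below illustrates that optional step; it is not required in every environment.

```python
import ssl

import nltk

try:
    # Some Python builds do not expose this private helper
    _unverified_https_context = ssl._create_unverified_context
except AttributeError:
    pass
else:
    # Disable certificate verification for subsequent HTTPS downloads (use with care)
    ssl._create_default_https_context = _unverified_https_context

nltk.download("punkt")  # tokenizer data used for text preprocessing
```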

## Usage
Here's how you can use `tsdae`:

1. **Installation**: Install the required libraries (`nltk`, `datasets`, `sentence-transformers`, `torch`), as described in the Installation section above.
2. **Initialization**: Create an instance of the `TSDAE` class.

### Model Architecture

The TSDAE model is built from two primary components:

- **Encoder**: The encoder processes input sentences that have been deliberately corrupted, converting them into fixed-sized sentence embeddings. Essential to the model's success is the encoder's ability to distill and encode the semantic essence of the sentences into these embeddings.
  
- **Decoder**: Tasked with reconstruction, the decoder uses the sentence embeddings produced by the encoder to recreate the original sentences. Reconstruction quality depends directly on how much semantic information is retained in the embeddings.
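This encoder/decoder pairing corresponds to the TSDAE recipe exposed by `sentence_transformers` itself. The sketch below wires it up directly with the library's `DenoisingAutoEncoderDataset` and `DenoisingAutoEncoderLoss`, following the library's documented TSDAE example; the sentences and hyperparameter values are illustrative, and the `tsdae` package's own `train` method may differ in its defaults.

```python
from torch.utils.data import DataLoader

from sentence_transformers import SentenceTransformer, models
from sentence_transformers.datasets import DenoisingAutoEncoderDataset
from sentence_transformers.losses import DenoisingAutoEncoderLoss

model_name = "bert-base-multilingual-uncased"

# Encoder: a Transformer followed by CLS pooling, as recommended for TSDAE
word_embedding_model = models.Transformer(model_name)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), "cls")
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# The dataset wrapper pairs each sentence with a noised (token-deleted) copy
sentences = ["A first example sentence.", "Another sentence from the corpus."]
train_dataloader = DataLoader(DenoisingAutoEncoderDataset(sentences), batch_size=8, shuffle=True)

# The loss attaches a decoder (weights tied to the encoder) that must rebuild the original sentence
train_loss = DenoisingAutoEncoderLoss(model, decoder_name_or_path=model_name, tie_encoder_decoder=True)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    weight_decay=0,
    scheduler="constantlr",
    optimizer_params={"lr": 3e-5},
    show_progress_bar=True,
)
```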

## Usage Example

Below is a concise illustration of employing TSDAE to train a model on a dataset named "louisbrulenaudet/cgi", showcasing the seamless integration of components from dataset preparation to model training.

```python
from tsdae import TSDAE

# Initialize an instance of TSDAE
instance = TSDAE()

# Load a dataset
train_dataset = instance.load_dataset_from_hf(
    dataset="louisbrulenaudet/cgi"
)

# Train the model with the dataset
model = instance.train(
    train_dataset=train_dataset,
    model_name="bert-base-multilingual-uncased",
    column="output",
    output_path="output/tsdae-lemon-mbert-base"
)
```

This example encapsulates the simplicity and power of TSDAE, guiding users from dataset acquisition to model optimization with minimal overhead.
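Once training completes, the saved model can be loaded like any other Sentence Transformers model and used to embed new text. The snippet below is a usage sketch: the path matches the `output_path` above, and the sentences are placeholders.

```python
from sentence_transformers import SentenceTransformer

# Load the TSDAE pre-trained encoder saved by the training step above
model = SentenceTransformer("output/tsdae-lemon-mbert-base")

sentences = [
    "Les plus-values de cession sont imposables.",
    "Capital gains on disposals are taxable.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, hidden_size), e.g. (2, 768) for a BERT-base encoder
```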

## References

Wang, K., Reimers, N., & Gurevych, I. (2021). TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning. arXiv. Retrieved from http://arxiv.org/abs/2104.06979.

## Citing this project
If you use this code in your research, please use the following BibTeX entry.

```BibTeX
@misc{louisbrulenaudet2023,
	author = {Louis Brulé Naudet},
	title = {Transformer-based Denoising AutoEncoder for Sentence Transformers Unsupervised pre-training},
	howpublished = {\url{https://github.com/louisbrulenaudet/tsdae}},
	year = {2024}
}

```
## Feedback
If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com).

            
