one-line-llm-tuner


Name: one-line-llm-tuner
Version: 0.0.15
Home page: https://github.com/metriccoders/one-line-llm-tuner
Bug tracker: https://github.com/metriccoders/one-line-llm-tuner/issues
Summary: A Large Language Model fine-tuning package. It fine-tunes an LLM in a single line by taking care of all the boilerplate in the backend.
Author: Suhas Bhairav
Author email: info@metriccoders.com
Requires Python: >=3.10
Upload time: 2024-08-05 23:21:49
# 🔥 One Line LLM Tuner 🔥

Fine-tune any Large Language Model (LLM) available on [Hugging Face](https://www.huggingface.co) in a single line. Created by [Suhas Bhairav](https://www.metriccoders.com).

## Overview

`one-line-llm-tuner` is a Python package designed to simplify fine-tuning large language models (LLMs) such as GPT-2 and Llama-2. With just one line of code, you can fine-tune a pre-trained model on your specific dataset. Think of it as a wrapper around the `transformers` library, much as `keras` is a wrapper around `tensorflow`.
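
To give a sense of what that one line hides, here is a rough, illustrative sketch of the kind of `transformers` boilerplate such a wrapper has to handle. This is not the package's actual internals, just a typical causal-LM fine-tuning setup:

```python
# Illustrative sketch of typical causal-LM fine-tuning boilerplate in
# `transformers`; NOT the actual internals of one-line-llm-tuner.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    TextDataset,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# TextDataset chunks a plain-text file into fixed-size blocks of token ids.
train_dataset = TextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./results", num_train_epochs=2),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()
```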

## Features

- **Simple**: Fine-tune models with minimal code.
- **Supports Popular LLMs**: Works with models from the `transformers` library, including GPT, BERT, and more.
- **Customizable**: Advanced users can customize the fine-tuning process with additional parameters.

## Installation

You can install `one-line-llm-tuner` using pip:

```bash
pip install one-line-llm-tuner
```

## Usage

After installation, the package can be used as follows:

```python
from one_line_llm_tuner.tuner import llm_tuner

# Create a fine-tuner with the default model and settings.
fine_tune_obj = llm_tuner.FineTuneModel()

# Fine-tune the model on a plain-text file.
fine_tune_obj.fine_tune_model(input_file_path="train.txt")

# Generate a completion from the fine-tuned model.
fine_tune_obj.predict_text("Elon Musk founded SpaceX in ")
```
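
`fine_tune_model` expects a path to a plain-text training file. A minimal way to assemble one for a quick smoke test (the one-document-per-line layout is an assumption on our part; the README only specifies a text file):

```python
# Build a tiny plain-text corpus for a quick smoke test. One document per
# line is an assumption; the README only asks for a text file path.
documents = [
    "SpaceX designs, manufactures, and launches advanced rockets.",
    "The company was founded in 2002 to reduce space transportation costs.",
]
with open("train.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(documents))
```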

To override the defaults, such as the model, the tokenizer, and more, pass them to the constructor:

```python
from one_line_llm_tuner.tuner import llm_tuner

fine_tune_obj = llm_tuner.FineTuneModel(
    model_name="gpt2",
    test_size=0.3,                          # fraction of data held out for testing
    training_dataset_filename="train_dataset.txt",
    testing_dataset_filename="test_dataset.txt",
    tokenizer_truncate=True,                # truncate over-long sequences
    tokenizer_padding=True,                 # pad short sequences
    output_dir="./results",                 # where checkpoints and logs go
    num_train_epochs=2,
    logging_steps=500,
    save_steps=500,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    max_output_length=100,                  # maximum length of generated text
    num_return_sequences=1,                 # number of completions to generate
    skip_special_tokens=True,               # strip special tokens from output
)

fine_tune_obj.fine_tune_model(input_file_path="train.txt")

fine_tune_obj.predict_text("Elon Musk founded SpaceX in ")
```
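
The README does not document what `predict_text` returns. Assuming it returns the generated string, the completion can be captured and reused:

```python
# Assumes predict_text returns the generated text; the README does not
# document the return type, so treat this as a sketch.
completion = fine_tune_obj.predict_text("Elon Musk founded SpaceX in ")
print(completion)
```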


## Contributing
We welcome contributions! Please see the [contributing guide](Contributing.md) for more details.

## License
This project is licensed under the terms of the MIT license. See the [LICENSE](LICENSE.txt) file for details.

            
