<!--- BADGES: START --->
[![GitHub - License](https://img.shields.io/badge/License-MIT-yellow.svg)][#github-license]
[![Docs - GitHub.io](https://img.shields.io/static/v1?logo=github&style=flat&color=pink&label=docs&message=promptzl)][#docs-package]
![Tests Passing](https://github.com/lazerlambda/promptzl/actions/workflows/python-package.yml/badge.svg)
[#github-license]: https://github.com/LazerLambda/Promptzl/blob/main/LICENSE.md
[#docs-package]: https://promptzl.readthedocs.io/en/latest/
<!--- BADGES: END --->
# <p style="text-align: center;">Pr🥨mptzl (Under Development)</p>
Promptzl is a simple library for turning LLMs into traditional PyTorch-based classifiers using the 🤗 Transformers library.
Classify large datasets quickly and easily while maintaining full control!
## Installation
Clone or download this repository, navigate to the project folder, and run:
`pip install .`
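
Promptzl is also published on PyPI, so the latest release can likely be installed directly with `pip install promptzl` (assuming the published package name, as the package metadata suggests).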
## Getting Started
In just a few lines of code, you can transform an LLM of your choice into an old-school classifier with all of its desirable properties:
```python
from promptzl import *
from datasets import load_dataset

# Load the AG News dataset
dataset = load_dataset("SetFit/ag_news")

# Map each class index to its label word and build the prompt around the dataset's text field
verbalizer = Vbz({0: ["World"], 1: ["Sports"], 2: ["Business"], 3: ["Tech"]})
prompt = Txt("[Category:") + verbalizer + Txt("] ") + Key()

# Wrap a masked language model as a classifier and predict on the test split
model = MaskedLM4Classification("roberta-large", prompt)
output = model.classify(dataset['test'], show_progress_bar=True).predictions

# Accuracy on the test split
accuracy = sum([int(prd == lbl) for prd, lbl in zip(output, dataset['test']['label'])]) / len(output)
```
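
Under the hood this follows the prompt-plus-verbalizer idea: the masked language model predicts a token at the mask position inside the prompt, and the probabilities it assigns to the label words ("World", "Sports", ...) are compared to obtain class scores. The sketch below illustrates that mechanism for a single sentence using plain 🤗 Transformers only; it is not promptzl's internal implementation, and the example sentence is made up.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")

label_words = {0: "World", 1: "Sports", 2: "Business", 3: "Tech"}
text = "Stocks rallied after the central bank left interest rates unchanged."

# Build "[Category: <mask>] <text>" and locate the mask position
inputs = tokenizer(f"[Category:{tokenizer.mask_token}] {text}", return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]  # distribution over the vocabulary

# Keep only the logits of the label words (leading space for the BPE vocabulary)
label_ids = [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" " + w)[0]) for w in label_words.values()]
scores = torch.softmax(logits[label_ids], dim=-1)
print({w: round(float(s), 3) for w, s in zip(label_words.values(), scores)})
```

Promptzl wraps this pattern so it scales to whole 🤗 datasets, as in the example above.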
## Installation (Dev)

For development, install the package in editable mode and add the test requirements:

`pip install -e .`

`pip install -r test-requirements.txt`
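
Assuming a standard pytest-based setup (which test-requirements.txt is expected to provide), the test suite can then be run from the repository root, e.g. with `pytest`.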