<!--- BADGES: START --->
[![GitHub - License](https://img.shields.io/badge/License-Apache-yellow.svg)][#github-license]
[![Docs - GitHub.io](https://img.shields.io/static/v1?logo=github&style=flat&color=pink&label=docs&message=promptzl)][#docs-package]
![Tests Passing](https://github.com/lazerlambda/promptzl/actions/workflows/python-package.yml/badge.svg)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/promptzl?logo=pypi&style=flat)][#pypi-package]
[![PyPI - Package Version](https://img.shields.io/pypi/v/promptzl?logo=pypi&style=flat)][#pypi-package]
[#github-license]: https://github.com/LazerLambda/Promptzl/blob/main/LICENSE.md
[#docs-package]: https://promptzl.readthedocs.io/en/latest/
[#pypi-package]: https://pypi.org/project/promptzl/
<!--- BADGES: END --->
<h1 align="center">Pr🥨mptzl</h1>
Turn state-of-the-art LLMs into zero<sup>+</sup>-shot PyTorch classifiers in just a few lines of code.
Promptzl offers:
- 🤖 Zero<sup>+</sup>-shot classification with LLMs
 - 🤗 Turning [causal](https://huggingface.co/models?pipeline_tag=text-generation) and [masked](https://huggingface.co/models?pipeline_tag=fill-mask) LMs into classifiers without any training
- 📦 Batch processing on your device for efficiency
- 🚀 Speed-up over calling an online API
- 🔎 Transparency and accessibility by using the model locally
 - 📈 A probability distribution over the class labels
 - ✂️ No need to extract predictions from free-form generated text
For more information, check out the [**official documentation**.](https://promptzl.readthedocs.io/en/latest/)
## Installation
`pip install -U promptzl`
## Getting Started
In just a few lines of code, you can transform an LLM of your choice into an old-school classifier with all of its desirable properties:
Set up the dataset:
```python
from datasets import Dataset
dataset = Dataset.from_dict(
{
'text': [
"The food was absolutely wonderful, from preparation to presentation, very pleasing.",
"The service was a bit slow, but the food made up for it. Highly recommend the pasta!",
"The restaurant was too noisy and the food was mediocre at best. Not worth the price.",
],
'label': [1, 1, 0]
}
)
```
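If you want to try the same workflow on a real benchmark, any 🤗 dataset with a text column can be passed in the same form; the `rotten_tomatoes` split below is only an illustrative choice on our part, not taken from the original example:
```python
from datasets import load_dataset

# Illustrative only: rotten_tomatoes is a small sentiment dataset with
# 'text' and 'label' columns, matching the toy dataset defined above.
reviews = load_dataset("rotten_tomatoes", split="test")
print(reviews[0]['text'], reviews[0]['label'])
```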
Define a prompt that guides the language model toward the correct predictions:
```python
from promptzl import FnVbzPair, Vbz
prompt = FnVbzPair(
lambda e: f"""Restaurant review classification into categories 'positive' or 'negative'.
'Best pretzls in town!'='positive'
'Rude staff, horrible food.'='negative'
'{e['text']}'=""",
Vbz({0: ["negative"], 1: ["positive"]}))
```
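The `Vbz` verbalizer maps each class index to a list of label words. Since the values are lists, supplying several surface forms per class appears to be supported (an assumption on our part; check the documentation), which can make the label distribution less sensitive to a single token choice:
```python
from promptzl import Vbz

# Sketch under the assumption that Vbz accepts multiple label words per
# class; the additional words below are illustrative, not from the original.
verbalizer = Vbz({
    0: ["negative", "bad", "poor"],
    1: ["positive", "good", "great"],
})
```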
Initialize a model:
```python
from promptzl import CausalLM4Classification
model = CausalLM4Classification(
'HuggingFaceTB/SmolLM2-1.7B',
prompt=prompt)
```
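The feature list above also mentions masked LMs. Assuming the package exposes a masked-LM counterpart with an analogous interface (the class name `MaskedLM4Classification` and the `roberta-base` checkpoint are assumptions here, so verify them against the documentation), the setup would look similar:
```python
from promptzl import MaskedLM4Classification  # assumed class name

# Hypothetical sketch: same prompt object, but a fill-mask checkpoint
# instead of a causal one.
mlm_model = MaskedLM4Classification(
    'roberta-base',
    prompt=prompt)
```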
Classify the data:
```python
from sklearn.metrics import accuracy_score
output = model.classify(dataset, show_progress_bar=True, batch_size=1)
accuracy_score(dataset['label'], output.predictions)  # 1.0
```
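Because `output.predictions` holds the class indices that `accuracy_score` compares against `dataset['label']`, turning them back into readable labels is a plain dictionary lookup; the mapping below simply mirrors the `Vbz` definition and is written out by hand for illustration:
```python
# Human-readable report; label names mirror the Vbz mapping above.
label_names = {0: "negative", 1: "positive"}
for text, pred in zip(dataset['text'], output.predictions):
    print(f"{label_names[int(pred)]:>8} | {text}")  # int() in case of tensor/numpy scalars
```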
For more detailed tutorials, check out the [documentation](https://promptzl.readthedocs.io/en/latest/)!