folktexts

Name: folktexts
Version: 0.0.12
Summary: A benchmark for LLM calibration on human populations.
Authors: Andre Cruz, Ricardo Dominguez-Olmedo, Celestine Mendler-Dunner, Moritz Hardt
Requires Python: >=3.8
License: MIT License, Copyright (c) 2024 Social Foundations of Computation, at MPI-IS
Keywords: language-model, risk-estimation, benchmark, machine-learning
Upload time: 2024-06-12 17:29:38
# :book: folktexts   <!-- omit in toc -->

![Tests status](https://github.com/socialfoundations/folktexts/actions/workflows/python-tests.yml/badge.svg)
![PyPI status](https://github.com/socialfoundations/folktexts/actions/workflows/python-publish.yml/badge.svg)
![Documentation status](https://github.com/socialfoundations/folktexts/actions/workflows/python-docs.yml/badge.svg)
![PyPI version](https://badgen.net/pypi/v/folktexts)
![PyPI - License](https://img.shields.io/pypi/l/folktexts)
<!-- ![OSI license](https://badgen.net/pypi/license/folktexts) -->
<!-- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE) -->
![Python compatibility](https://badgen.net/pypi/python/folktexts)

Folktexts is a Python package to evaluate and benchmark the calibration of large
language models.
It enables using any `transformers` model as a classifier for tabular data tasks,
and extracting risk score estimates from the model's output log-odds.

Several benchmark tasks are provided based on data from the American Community Survey.
Namely, each prediction task from the popular 
[folktables](https://github.com/socialfoundations/folktables) package is made available 
as a natural-language prompting task.
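To make the idea concrete, here is a hypothetical sketch (not the folktexts API; field names and wording are made up for illustration) of how a tabular record might be rendered as a natural-language multiple-choice question:

```py
# Hypothetical illustration only -- folktexts handles this internally.
def render_income_prompt(row: dict) -> str:
    """Render a tabular record as a multiple-choice income question."""
    description = (
        f"The person is {row['age']} years old, works {row['hours_per_week']} "
        f"hours per week, and their occupation is {row['occupation']}."
    )
    question = (
        "Does this person earn more than $50,000 per year?\n"
        "A. Yes\n"
        "B. No\n"
        "Answer:"
    )
    return f"{description}\n{question}"
```

Comparing the model's log-odds for the answer tokens (e.g. "A" vs. "B") then yields a risk score for each record.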

Package documentation can be found [here](https://socialfoundations.github.io/folktexts/).

**Table of contents:**
- [Installing](#installing)
- [Basic setup](#basic-setup)
- [Example usage](#example-usage)
- [Benchmark options](#benchmark-options)
- [License and terms of use](#license-and-terms-of-use)


## Installing

Install package from [PyPI](https://pypi.org/project/folktexts/):

```
pip install folktexts
```

## Basic setup
> You'll need to go through these steps to run the benchmark tasks.

1. Create a conda environment

```
conda create -n folktexts python=3.11
conda activate folktexts
```

2. Install the folktexts package

```
pip install folktexts
```

3. Create models, data, and results folders

```
mkdir results
mkdir models
mkdir data
```

4. Download the transformers model and tokenizer

```
download_models --model "google/gemma-2b" --save-dir models
```

5. Run benchmark on a given task

```
run_acs_benchmark --results-dir results --data-dir data --task-name "ACSIncome" --model models/google--gemma-2b
```

Run `run_acs_benchmark --help` to get a list of all available benchmark flags.


## Example usage

```py
from folktexts.llm_utils import load_model_tokenizer
model, tokenizer = load_model_tokenizer("gpt2")   # using a tiny model as an example

from folktexts.acs import ACSDataset
acs_task_name = "ACSIncome"

# Create an object that classifies data using an LLM
from folktexts import LLMClassifier
clf = LLMClassifier(
    model=model,
    tokenizer=tokenizer,
    task=acs_task_name,
)

# Use a dataset or feed in your own data
dataset = ACSDataset(acs_task_name)   # use `.subsample(0.01)` to get faster approximate results

# Get risk score predictions out of the model
y_scores = clf.predict_proba(dataset)

# Optionally, you can fit the threshold based on a small portion of the data
clf.fit(*dataset[0:100])

# ...in order to get more accurate binary predictions
clf.predict(dataset)

# Compute a variety of evaluation metrics on calibration and accuracy
from folktexts.benchmark import CalibrationBenchmark
benchmark_results = CalibrationBenchmark(clf, dataset).run(results_root_dir=".")
```
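To illustrate the kind of calibration metric the benchmark reports, here is a minimal, self-contained sketch of expected calibration error (ECE). This is not the folktexts implementation, just a standard binned formulation: predictions are grouped by confidence, and the gap between average confidence and empirical accuracy is averaged across bins.

```py
import numpy as np

def expected_calibration_error(y_true, y_scores, n_bins=10):
    """Binned ECE: weighted average of |accuracy - confidence| per bin."""
    y_true = np.asarray(y_true, dtype=float)
    y_scores = np.asarray(y_scores, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        # Half-open bins, with the first bin closed on the left.
        mask = (y_scores > lo) & (y_scores <= hi)
        if lo == 0.0:
            mask |= y_scores == 0.0
        if not mask.any():
            continue
        avg_conf = y_scores[mask].mean()      # mean predicted score in bin
        avg_acc = y_true[mask].mean()         # fraction of positives in bin
        ece += mask.mean() * abs(avg_acc - avg_conf)
    return ece
```

A perfectly calibrated predictor (e.g. scores of 1.0 on positives and 0.0 on negatives) has an ECE of 0, while a confidently wrong one approaches 1.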

<!-- TODO: add code to show-case example functionalities, including the
LLMClassifier (maybe the above code is fine for this), the benchmark, and
creating a custom ACS prediction task -->

## Benchmark options

```
usage: run_acs_benchmark [-h] --model MODEL --task-name TASK_NAME --results-dir RESULTS_DIR --data-dir DATA_DIR [--few-shot FEW_SHOT] [--batch-size BATCH_SIZE] [--context-size CONTEXT_SIZE] [--fit-threshold FIT_THRESHOLD] [--subsampling SUBSAMPLING] [--seed SEED] [--dont-correct-order-bias] [--chat-prompt] [--direct-risk-prompting] [--reuse-few-shot-examples] [--use-feature-subset [USE_FEATURE_SUBSET ...]]
                         [--use-population-filter [USE_POPULATION_FILTER ...]] [--logger-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}]

Run an LLM as a classifier experiment.

options:
  -h, --help            show this help message and exit
  --model MODEL         [str] Model name or path to model saved on disk
  --task-name TASK_NAME
                        [str] Name of the ACS task to run the experiment on
  --results-dir RESULTS_DIR
                        [str] Directory under which this experiment's results will be saved
  --data-dir DATA_DIR   [str] Root folder to find datasets on
  --few-shot FEW_SHOT   [int] Use few-shot prompting with the given number of shots
  --batch-size BATCH_SIZE
                        [int] The batch size to use for inference
  --context-size CONTEXT_SIZE
                        [int] The maximum context size when prompting the LLM
  --fit-threshold FIT_THRESHOLD
                        [int] Whether to fit the prediction threshold, and on how many samples
  --subsampling SUBSAMPLING
                        [float] Which fraction of the dataset to use (if omitted will use all data)
  --seed SEED           [int] Random seed -- to set for reproducibility
  --dont-correct-order-bias
                        [bool] Whether to avoid correcting ordering bias, by default will correct it
  --chat-prompt         [bool] Whether to use chat-based prompting (for instruct models)
  --direct-risk-prompting
                        [bool] Whether to directly prompt for risk-estimates instead of multiple-choice Q&A
  --reuse-few-shot-examples
                        [bool] Whether to reuse the same samples for few-shot prompting (or sample new ones every time)
  --use-feature-subset [USE_FEATURE_SUBSET ...]
                        [str] Optional subset of features to use for prediction
  --use-population-filter [USE_POPULATION_FILTER ...]
                        [str] Optional population filter for this benchmark; must follow the format 'column_name=value' to filter the dataset by a specific value.
  --logger-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
                        [str] The logging level to use for the experiment
```
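For example, a run combining several of the flags above might look like this (the flag values here are illustrative, not recommendations):

```
run_acs_benchmark \
    --model models/google--gemma-2b \
    --task-name "ACSIncome" \
    --results-dir results \
    --data-dir data \
    --few-shot 5 \
    --subsampling 0.1 \
    --seed 42
```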


## License and terms of use

Code licensed under the [MIT license](LICENSE).

The American Community Survey (ACS) Public Use Microdata Sample (PUMS) is
governed by the U.S. Census Bureau [terms of service](https://www.census.gov/data/developers/about/terms-of-service.html).

            
