folktexts 0.0.23

- **Summary:** Use LLMs to get classification risk scores on tabular tasks.
- **Authors:** Andre Cruz, Ricardo Dominguez-Olmedo, Celestine Mendler-Dunner, Moritz Hardt
- **License:** MIT
- **Requires Python:** >=3.8
- **Keywords:** language-model, risk-estimation, benchmark, machine-learning
- **Released:** 2024-10-31

# :book: folktexts   <!-- omit in toc -->

![Tests status](https://github.com/socialfoundations/folktexts/actions/workflows/python-tests.yml/badge.svg)
![PyPI status](https://github.com/socialfoundations/folktexts/actions/workflows/python-publish.yml/badge.svg)
![Documentation status](https://github.com/socialfoundations/folktexts/actions/workflows/python-docs.yml/badge.svg)
![PyPI version](https://badgen.net/pypi/v/folktexts)
![PyPI - License](https://img.shields.io/pypi/l/folktexts)
![Python compatibility](https://badgen.net/pypi/python/folktexts)

> This package is accompanied by a paper titled ["Evaluating language models as risk scores"](https://arxiv.org/abs/2407.14614)

Folktexts is a Python package for evaluating the statistical properties of LLMs as classifiers.
It enables computing and evaluating classification _risk scores_ for tabular prediction tasks using LLMs.

Several benchmark tasks are provided, based on data from the American Community Survey (ACS):
each prediction task from the popular
[folktables](https://github.com/socialfoundations/folktables) package is made available
as a natural-language prompting task.

Package documentation can be found [here](https://socialfoundations.github.io/folktexts/).

**Table of contents:**
- [Installing](#installing)
- [Basic setup](#basic-setup)
- [Example usage](#example-usage)
- [Benchmark features and options](#benchmark-features-and-options)
- [Evaluating feature importance](#evaluating-feature-importance)
- [FAQ](#faq)
- [Citation](#citation)
- [License and terms of use](#license-and-terms-of-use)


## Installing

Install package from [PyPI](https://pypi.org/project/folktexts/):

```
pip install folktexts
```

## Basic setup
> You'll need to go through these steps to run the benchmark tasks.

1. Create conda environment

```
conda create -n folktexts python=3.11
conda activate folktexts
```

2. Install folktexts package

```
pip install folktexts
```

3. Create models, data, and results folders

```
mkdir results
mkdir models
mkdir data
```

4. Download transformers model and tokenizer

```
download_models --model 'google/gemma-2b' --save-dir models
```

5. Run benchmark on a given task

```
run_acs_benchmark --results-dir results --data-dir data --task 'ACSIncome' --model models/google--gemma-2b
```

Run `run_acs_benchmark --help` to get a list of all available benchmark flags.
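
Optionally, you can sanity-check the download from Python before running the full benchmark. A minimal sketch, assuming `load_model_tokenizer` also accepts a local path in addition to a hub name (as the `--model` CLI flag does; verify against the docs):

```py
# Sanity check: load the model and tokenizer downloaded in step 4.
# NOTE (assumption): passing a local path here mirrors the `--model` CLI flag.
from folktexts.llm_utils import load_model_tokenizer

model, tokenizer = load_model_tokenizer("models/google--gemma-2b")
print(type(model).__name__, type(tokenizer).__name__)
```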


## Example usage

```py
# Load transformers model
from folktexts.llm_utils import load_model_tokenizer
model, tokenizer = load_model_tokenizer("gpt2")   # using a tiny model as an example

from folktexts.acs import ACSDataset
acs_task_name = "ACSIncome"     # Name of the benchmark ACS task to use

# Create an object that classifies data using an LLM
from folktexts import TransformersLLMClassifier
clf = TransformersLLMClassifier(
    model=model,
    tokenizer=tokenizer,
    task=acs_task_name,
)
# NOTE: You can also use a web-hosted model like GPT-4 via the `WebAPILLMClassifier` class

# Use a dataset or feed in your own data
dataset = ACSDataset.make_from_task(acs_task_name)   # use `.subsample(0.01)` to get faster approximate results

# You can compute risk score predictions using an sklearn-style interface
X_test, y_test = dataset.get_test()
test_scores = clf.predict_proba(X_test)

# Optionally, you can fit the threshold based on a few samples
clf.fit(*dataset[0:100])    # (`dataset[...]` will access training data)

# ...in order to get more accurate binary predictions with `.predict`
test_preds = clf.predict(X_test)

# If you only care about the overall metrics and not individual predictions,
# you can simply run the following code:
from folktexts.benchmark import Benchmark, BenchmarkConfig
bench = Benchmark.make_benchmark(
    task=acs_task_name, dataset=dataset,
    model=model, tokenizer=tokenizer,
    numeric_risk_prompting=True,    # See the full list of configs below in the README
)
bench_results = bench.run(results_root_dir="results")
```
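
If you want to inspect the scores yourself, here's a minimal sketch using standard scikit-learn metrics (assuming `test_scores` holds positive-class probabilities for `y_test`):

```py
# Evaluate the risk scores from the snippet above with sklearn metrics.
import numpy as np
from sklearn.metrics import brier_score_loss, roc_auc_score

scores = np.asarray(test_scores)
if scores.ndim == 2:        # handle the usual (n_samples, 2) predict_proba shape
    scores = scores[:, -1]  # keep the positive-class column

print(f"ROC AUC:     {roc_auc_score(y_test, scores):.3f}")
print(f"Brier score: {brier_score_loss(y_test, scores):.3f}")
```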


## Benchmark features and options

Here's a summary of the most important benchmark options/flags, used either with
the `run_acs_benchmark` command-line script or with the `Benchmark` class.

| Option | Description | Examples |
|:---|:---|:---:|
| `--model` | Name of the model on huggingface transformers, or local path to folder with pretrained model and tokenizer. Can also use web-hosted models with `"[provider]/[model-name]"`. | `meta-llama/Meta-Llama-3-8B`, `openai/gpt-4o-mini` |
| `--task` | Name of the ACS task to run benchmark on. | `ACSIncome`, `ACSEmployment`  |
| `--results-dir` | Path to directory under which benchmark results will be saved. | `results` |
| `--data-dir` | Root folder to find datasets in (or download ACS data to). | `~/data` |
| `--numeric-risk-prompting` | Whether to use verbalized numeric risk prompting, i.e., directly query model for a probability estimate. **By default** will use standard multiple-choice Q&A, and extract risk scores from internal token probabilities. | Boolean flag (`True` if present, `False` otherwise) |
| `--use-web-api-model` | Whether the given `--model` name corresponds to a web-hosted model or not. **By default** this is False (assumes a huggingface transformers model). If this flag is provided, `--model` must contain a [litellm](https://docs.litellm.ai) model identifier ([examples here](https://docs.litellm.ai/docs/providers/openai#openai-chat-completion-models)). | Boolean flag (`True` if present, `False` otherwise) |
| `--subsampling` | Which fraction of the dataset to use for the benchmark. **By default** will use the whole test set. | `0.01` |
| `--fit-threshold` | Whether to use the given number of samples to fit the binarization threshold. **By default** will use a fixed $t=0.5$ threshold instead of fitting on data. | `100` |
| `--batch-size` | The number of samples to process in each inference batch. Choose according to your available VRAM. | `10`, `32` |
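
For reference, here's a sketch of how these options map onto the programmatic `Benchmark` interface, reusing `dataset`, `model`, and `tokenizer` from the example above. Only `numeric_risk_prompting` is confirmed by that example; the other keyword name is an assumption that mirrors the CLI flag, so verify it against `BenchmarkConfig`:

```py
# Programmatic counterpart to the CLI flags above (a sketch; kwarg names other
# than `numeric_risk_prompting` are assumed to mirror the CLI flags).
from folktexts.benchmark import Benchmark

bench = Benchmark.make_benchmark(
    task="ACSIncome", dataset=dataset,
    model=model, tokenizer=tokenizer,
    numeric_risk_prompting=True,  # confirmed kwarg (see "Example usage")
    fit_threshold=100,            # hypothetical kwarg, mirrors --fit-threshold
)
bench_results = bench.run(results_root_dir="results")
```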


Full list of options:

```
usage: run_acs_benchmark [-h] --model MODEL --results-dir RESULTS_DIR --data-dir DATA_DIR [--task TASK] [--few-shot FEW_SHOT] [--batch-size BATCH_SIZE] [--context-size CONTEXT_SIZE] [--fit-threshold FIT_THRESHOLD] [--subsampling SUBSAMPLING] [--seed SEED] [--use-web-api-model] [--dont-correct-order-bias] [--numeric-risk-prompting] [--reuse-few-shot-examples] [--use-feature-subset USE_FEATURE_SUBSET]
                         [--use-population-filter USE_POPULATION_FILTER] [--logger-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}]

Benchmark risk scores produced by a language model on ACS data.

options:
  -h, --help            show this help message and exit
  --model MODEL         [str] Model name or path to model saved on disk
  --results-dir RESULTS_DIR
                        [str] Directory under which this experiment's results will be saved
  --data-dir DATA_DIR   [str] Root folder to find datasets on
  --task TASK           [str] Name of the ACS task to run the experiment on
  --few-shot FEW_SHOT   [int] Use few-shot prompting with the given number of shots
  --batch-size BATCH_SIZE
                        [int] The batch size to use for inference
  --context-size CONTEXT_SIZE
                        [int] The maximum context size when prompting the LLM
  --fit-threshold FIT_THRESHOLD
                        [int] Whether to fit the prediction threshold, and on how many samples
  --subsampling SUBSAMPLING
                        [float] Which fraction of the dataset to use (if omitted will use all data)
  --seed SEED           [int] Random seed -- to set for reproducibility
  --use-web-api-model   [bool] Whether use a model hosted on a web API (instead of a local model)
  --dont-correct-order-bias
                        [bool] Whether to avoid correcting ordering bias, by default will correct it
  --numeric-risk-prompting
                        [bool] Whether to prompt for numeric risk-estimates instead of multiple-choice Q&A
  --reuse-few-shot-examples
                        [bool] Whether to reuse the same samples for few-shot prompting (or sample new ones every time)
  --use-feature-subset USE_FEATURE_SUBSET
                        [str] Optional subset of features to use for prediction, comma separated
  --use-population-filter USE_POPULATION_FILTER
                        [str] Optional population filter for this benchmark; must follow the format 'column_name=value' to filter the dataset by a specific value.
  --logger-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
                        [str] The logging level to use for the experiment
```
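
For instance, a run combining several of the flags above:

```
run_acs_benchmark \
    --model 'meta-llama/Meta-Llama-3-8B' \
    --task ACSIncome \
    --results-dir results --data-dir data \
    --subsampling 0.1 --fit-threshold 100 \
    --numeric-risk-prompting --seed 42
```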



## Evaluating feature importance

By evaluating LLMs on tabular classification tasks, we can use standard feature importance methods to assess which features the model uses to compute risk scores.

You can do so yourself by calling `folktexts.cli.eval_feature_importance` (add `--help` for a full list of options).

Here's an example for the Llama3-70B-Instruct model on the ACSIncome task (*warning: takes 24h on an Nvidia H100*):
```
python -m folktexts.cli.eval_feature_importance --model 'meta-llama/Meta-Llama-3-70B-Instruct' --task ACSIncome --subsampling 0.1
```
<div style="text-align: center;">
<img src="docs/_static/feat-imp_meta-llama--Meta-Llama-3-70B-Instruct.png" alt="feature importance on llama3 70b it" width="50%">
</div>

This script uses sklearn's [`permutation_importance`](https://scikit-learn.org/stable/modules/generated/sklearn.inspection.permutation_importance.html#sklearn.inspection.permutation_importance) to assess which features contribute the most to the ROC AUC metric (other metrics can be assessed using the `--scorer [scorer]` parameter).
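
Here's a minimal sketch of the same analysis in Python, reusing the fitted `clf`, `X_test`, and `y_test` from the "Example usage" section (and assuming `X_test` is a pandas DataFrame):

```py
# Permutation feature importance with the sklearn-style LLM classifier.
from sklearn.inspection import permutation_importance

result = permutation_importance(
    clf, X_test, y_test,
    scoring="roc_auc",  # the metric assessed by the CLI script
    n_repeats=5,
    random_state=42,
)

# Print features sorted by mean importance (most important first).
ranked = sorted(zip(X_test.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, importance in ranked:
    print(f"{name}: {importance:.4f}")
```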


## FAQ

1.
    **Q:** Can I use `folktexts` with a different dataset?

    **A:** **Yes!** Folktexts provides the whole ML pipeline needed to produce risk scores using LLMs, together with a few example ACS datasets. You can easily apply these same utilities to a different dataset following the [example jupyter notebook](notebooks/custom-dataset-example.ipynb).


2.
    **Q:** How do I create a custom prediction task based on American Community Survey data?

    **A:** Simply create a new `TaskMetadata` object with the parameters you want. Follow the [example jupyter notebook](notebooks/custom-acs-task-example.ipynb) for more details.


3.
    **Q:** Can I use `folktexts` with closed-source models?

    **A:** **Yes!** We provide compatibility with local LLMs via [🤗 transformers](https://github.com/huggingface/transformers), and with web-hosted LLMs via [litellm](https://github.com/BerriAI/litellm). For example, you can pass `--model='gpt-4o' --use-web-api-model` to the `run_acs_benchmark` script to use GPT-4o. [Here's a complete list](https://docs.litellm.ai/docs/providers/openai#openai-chat-completion-models) of compatible OpenAI models. Note that some models are not compatible, as they don't expose token log-probabilities.
    Using models through a web API requires installing extra optional dependencies with `pip install 'folktexts[apis]'`.
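
    As a programmatic sketch of the same route (the constructor signature below is an assumption, mirroring `TransformersLLMClassifier`; verify it against the package docs):

    ```py
    # Hypothetical usage sketch: the constructor kwargs are assumed, not
    # confirmed -- check the folktexts API reference.
    from folktexts import WebAPILLMClassifier  # needs `pip install 'folktexts[apis]'`

    clf = WebAPILLMClassifier(model="gpt-4o", task="ACSIncome")
    scores = clf.predict_proba(X_test)   # same sklearn-style interface as above
    ```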


4.
    **Q:** Can I use `folktexts` to fine-tune LLMs on survey prediction tasks?

    **A:** The package does not feature specific fine-tuning functionality, but you can use the data and Q&A prompts generated by `folktexts` to fine-tune an LLM for a specific prediction task.




## Citation

```bib
@inproceedings{cruz2024evaluating,
    title={Evaluating language models as risk scores},
    author={Andr\'{e} F. Cruz and Moritz Hardt and Celestine Mendler-D\"{u}nner},
    booktitle={The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
    year={2024},
    url={https://openreview.net/forum?id=qrZxL3Bto9}
}
```


## License and terms of use

Code licensed under the [MIT license](LICENSE).

The American Community Survey (ACS) Public Use Microdata Sample (PUMS) is
governed by the U.S. Census Bureau [terms of service](https://www.census.gov/data/developers/about/terms-of-service.html).

            
