outlines

Name: outlines
Version: 0.0.8
Summary: Probabilistic Generative Model Programming
Upload time: 2023-08-14 18:46:01
Requires Python: >=3.7
Keywords: normal computing, machine learning, deep learning, language models, diffusion models
            <div align="center">

<img src="./docs/source/_static/logo.png" alt="Outlines Logo" width=300></img>

# Outlines 〰️

Fast and reliable neural text generation.

[Install](#installation) •
[Guided generation](#guided-generation) •
[Prompting primitives](#prompting) •
[Examples](#examples) •
[Stay tuned](#stay-tuned-for)

</div>

**Outlines** 〰 is a library for neural text generation. You can think of it as a
more flexible replacement for the `generate` method in the
[transformers](https://github.com/huggingface/transformers) library.

**Outlines** 〰 helps developers *guide text generation* to build robust
interfaces with external systems. It provides generation methods that
guarantee that the output will match a regular expression or follow
a JSON schema.

**Outlines** 〰 provides *robust prompting primitives* that separate prompting
from the execution logic, leading to simple implementations of few-shot
generation, ReAct, meta-prompting, agents, etc.

**Outlines** 〰 is designed as a *library* that is meant to be compatible with the
broader ecosystem, not to replace it. We use as few abstractions as possible,
and generation can be interleaved with control flow, conditionals, custom Python
functions and calls to other libraries.

**Outlines** 〰 is *compatible with all models*. It only interfaces with models
via the next-token logits. It can be used with API-based models as well.
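
Guidance through next-token logits amounts to masking out, at each step, every token that would violate the constraint before sampling. A minimal toy sketch of that idea (not Outlines' actual implementation, which compiles constraints ahead of time):

``` python
import math

def mask_logits(logits: dict, allowed: set) -> dict:
    """Set the logit of every disallowed token to -inf so it can never be sampled."""
    return {tok: (score if tok in allowed else -math.inf)
            for tok, score in logits.items()}

def greedy_pick(logits: dict) -> str:
    """Pick the highest-scoring token (greedy decoding)."""
    return max(logits, key=logits.get)

# Toy next-token scores a model might return
logits = {"Yes": 1.2, "No": 0.8, "Maybe": 2.5, "42": 0.1}

# Only "Yes"/"No" are valid continuations under the constraint
constrained = mask_logits(logits, allowed={"Yes", "No"})
print(greedy_pick(constrained))  # "Yes" (unconstrained pick would be "Maybe")
```

Because only the logits are needed, the same masking works for any model that exposes them, local or API-based.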

## Features

- [x] 🖍️ Simple and powerful prompting primitives based on the [Jinja templating engine](https://jinja.palletsprojects.com/)
- [x] 🚄 Guided generation, including multiple choice, type constraints and dynamic stopping
- [x] ⚡ Fast [regex-guided generation](#efficient-regex-guided-generation)
- [x] 🔥 Fast [JSON generation](#efficient-json-generation-following-a-pydantic-model) following a JSON schema or a Pydantic model
- [x] 🐍 Interleave completions with loops, conditionals, and custom Python functions
- [x] 💾 Caching of generations
- [x] 🤗 Integration with HuggingFace's `transformers` models
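
The interleaving of completions with plain Python control flow can be sketched with a stand-in generator (`fake_choice` is a hypothetical stub for illustration; a real program would call `generate.choice`, which constrains the model's output):

``` python
# Hypothetical stub standing in for an Outlines generator like generate.choice;
# a real generator would constrain the model's next-token logits instead.
def fake_choice(options):
    def generator(prompt):
        # Pick the first option mentioned in the prompt, else the last one.
        for option in options:
            if option in prompt.lower():
                return option
        return options[-1]
    return generator

def triage(ticket: str) -> str:
    """Interleave 'generation' with ordinary Python conditionals."""
    label = fake_choice(["bug", "feature"])(f"Classify this ticket: {ticket}")
    if label == "bug":
        severity = fake_choice(["high", "low"])(f"Severity: {ticket}")
        return f"bug/{severity}"
    return "feature-request"

print(triage("Bug: app crashes, high CPU"))  # "bug/high"
print(triage("Feature request: dark mode"))  # "feature-request"
```

Because generators are plain callables, nothing special is needed to branch on their results or mix them with calls to other libraries.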

Outlines 〰 has new releases and features coming every week! Make sure to ⭐ star and 👀 watch this repository to stay up to date.

## Stay tuned for

- Context-Free Grammar guided generation ([#178](https://github.com/normal-computing/outlines/pull/178));
- Prompt-token alignment so you don't have to think about tokenization details ([#201](https://github.com/normal-computing/outlines/pull/201))
- An infilling DSL ([#182](https://github.com/normal-computing/outlines/issues/182))

You can follow [@NormalComputing](https://twitter.com/NormalComputing), [@remilouf](https://twitter.com/remilouf) or [@BrandonTWillard](https://twitter.com/BrandonTWillard) for regular updates!


## Installation

**Outlines** is available on PyPI:

``` bash
pip install outlines
```


## Guided generation

The first step towards reliability of systems that include large language models
is to ensure that there is a well-defined interface between their output and
user-defined code. **Outlines** provides ways to control the generation of
language models to make their output more predictable.

### Early stopping

You can stop the generation after a given sequence has been found:

``` python
import outlines.text.generate as generate
import outlines.models as models

model = models.transformers("gpt2")
answer = generate.continuation(model, stop=["."])("Tell me a one-sentence joke.")
```

### Multiple choices

You can reduce the completion to a choice between multiple possibilities:

``` python
import outlines.text.generate as generate
import outlines.models as models

model = models.transformers("gpt2")

prompt = "Is the following review Positive or Negative: 'Just awesome'?"
answer = generate.choice(model, ["Positive", "Negative"])(prompt)
```

### Type constraint

You can instruct the model to only return integers or floats:


``` python
import outlines.text.generate as generate
import outlines.models as models

model = models.transformers("gpt2")

prompt = "1+1="
answer = generate.integer(model)(prompt)

prompt = "sqrt(2)="
answer = generate.float(model)(prompt)
```

### Efficient regex-guided generation

Outlines also comes with fast regex-guided generation. In fact, the `choice`,
`integer` and `float` functions above all use regex-guided generation under the
hood:

``` python
import outlines.models as models
import outlines.text.generate as generate


model = models.transformers("gpt2-medium")

prompt = "Is 1+1=2? "
unguided = generate.continuation(model, max_tokens=30)(prompt)
guided = generate.regex(model, r"\s*([Yy]es|[Nn]o|[Nn]ever|[Aa]lways)", max_tokens=30)(
    prompt
)

print(unguided)
# Is 1+1=2?
#
# This is probably the most perplexing question.
# As I said in one of my articles describing how
# I call 2 and 1, there isn't

print(guided)
# Is 1+1=2? Always
```

``` python
import outlines.models as models
import outlines.text.generate as generate


model = models.transformers("gpt2-medium")

prompt = "What is the IP address of the Google DNS servers? "
unguided = generate.continuation(model, max_tokens=30)(prompt)
guided = generate.regex(
    model,
    r"((25[0-5]|2[0-4]\d|[01]?\d\d?)\.){3}(25[0-5]|2[0-4]\d|[01]?\d\d?)",
    max_tokens=30,
)(prompt)

print(unguided)
# What is the IP address of the Google DNS servers?
#
# Passive DNS servers are at DNS servers that are private.
# In other words, both IP servers are private. The database
# does not contain Chelsea Manning

print(guided)
# What is the IP address of the Google DNS servers?
# 2.2.6.1
```

Unlike other libraries, regex-guided generation in Outlines is almost as fast
as non-guided generation.
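
The speed comes from knowing, at every step, exactly which continuations keep the output a valid (prefix of a) match. A toy character-level version of that check (brute force here; Outlines compiles the regex to an automaton instead):

``` python
import re

INT_PATTERN = re.compile(r"[+-]?\d+")

def allowed_next_chars(prefix: str, alphabet: str) -> set:
    """Characters c such that prefix + c can still lead to a full match.
    The `candidate + "1"` probe is a toy heuristic that happens to work
    for this integer pattern; it is not a general viability test."""
    ok = set()
    for c in alphabet:
        candidate = prefix + c
        if INT_PATTERN.fullmatch(candidate) or INT_PATTERN.fullmatch(candidate + "1"):
            ok.add(c)
    return ok

print(sorted(allowed_next_chars("", "+-.a0129")))    # ['+', '-', '0', '1', '2', '9']
print(sorted(allowed_next_chars("12", "+-.a0129")))  # ['0', '1', '2', '9']
```

A real implementation performs this test over the model's token vocabulary rather than single characters, and precomputes the allowed sets so generation stays fast.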

### Efficient JSON generation following a Pydantic model

Outlines 〰 allows you to guide the generation process so that the output is *guaranteed* to follow a [JSON schema](https://json-schema.org/) or [Pydantic model](https://docs.pydantic.dev/latest/):

```python
from typing import List
from enum import Enum
from pydantic import BaseModel, constr

import outlines.models as models
import outlines.text.generate as generate


class Weapon(str, Enum):
    sword = "sword"
    axe = "axe"
    mace = "mace"
    spear = "spear"
    bow = "bow"
    crossbow = "crossbow"


class Armor(str, Enum):
    leather = "leather"
    chainmail = "chainmail"
    plate = "plate"


class Character(BaseModel):
    name: constr(max_length=10)
    age: int
    armor: Armor
    weapon: Weapon
    strength: int


model = models.transformers("gpt2")
sequence = generate.json(model, Character)("Give me a character description")
print(sequence)
# {
#   "name": "ranbelt",
#   "age": 26,
#   "armor": "chainmail",
#   "weapon": "bow",
#   "strength": 5
# }

parsed = Character.model_validate_json(sequence)
print(parsed)
# name='ranbelt' age=26 armor=<Armor.chainmail: 'chainmail'> weapon=<Weapon.bow: 'bow'> strength=5
```

The method works with union types, optional types, arrays, nested schemas, etc. Some field constraints are [not supported yet](https://github.com/normal-computing/outlines/issues/215), but everything else should work.
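
A model mixing those features might look like the following (`Person` and `Address` are hypothetical models for illustration; parsing a hand-written JSON string stands in for the constrained output `generate.json` would produce):

``` python
from typing import List, Optional, Union
from pydantic import BaseModel


class Address(BaseModel):
    city: str
    country: str


class Person(BaseModel):
    name: str
    nickname: Optional[str] = None  # optional type
    age: Union[int, str]            # union type
    addresses: List[Address]        # array of nested schemas


# Shaped like the output generate.json(model, Person) would be constrained to
raw = '{"name": "Ada", "age": 36, "addresses": [{"city": "London", "country": "UK"}]}'
person = Person.model_validate_json(raw)
print(person.addresses[0].city)  # "London"
```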

## Prompting

Writing prompts by concatenating strings in pure Python quickly becomes
cumbersome: the prompt-building logic gets entangled with the rest of the
program, and the structure of the rendered prompt is obfuscated. **Outlines**
makes it easier to write and manage prompts by encapsulating templates inside
"template functions".

These functions make it possible to neatly separate the prompt logic from the
general program logic; they can be imported from other modules and libraries.

Template functions require no superfluous abstraction: they use the Jinja2
templating engine to help build complex prompts in a concise manner:

``` python
import outlines.text as text
import outlines.models as models


examples = [
    ("The food was disgusting", "Negative"),
    ("We had a fantastic night", "Positive"),
    ("Recommended", "Positive"),
    ("The waiter was rude", "Negative")
]

@text.prompt
def labelling(to_label, examples):
    """You are a sentiment-labelling assistant.

    {% for example in examples %}
    {{ example[0] }} // {{ example[1] }}
    {% endfor %}
    {{ to_label }} //
    """

model = models.transformers("gpt2")
prompt = labelling("Just awesome", examples)
answer = text.generate.continuation(model, max_tokens=100)(prompt)
```

### Tools

We can teach language models to call external functions to get additional
information or perform tasks by encoding the functions' descriptions in the
prompt. To avoid duplicating information between the function definition and the
description passed to the prompt, we define custom Jinja filters that can
extract the function's name, description, signature and source:


``` python
from typing import Callable, List
import outlines.text as text


def google_search(query: str):
    """Google Search"""
    pass


def wikipedia_search(query: str):
    """Wikipedia Search"""
    pass


@text.prompt
def agent(tools: List[Callable]):
    """AVAILABLE COMMANDS:

    {% for tool in tools %}
    TOOL
    {{ tool | name }}, {{ tool | description }}, args: {{ tool | signature }}
    {{ tool | source }}
    {% endfor %}
    """


prompt = agent([google_search, wikipedia_search])
```

### Response models

We can instruct models to return their output in a pre-defined format, often
JSON. To avoid duplicating information between the response model's definition
and the description passed to the prompt, we define a custom Jinja filter that
can extract the expected response's schema:

``` python
from pydantic import BaseModel
import outlines.text as text


class Joke(BaseModel):
    joke: str
    explanation: str


@text.prompt
def joke_ppt(response_model):
    """Tell a joke and explain why the joke is funny.

    RESPONSE FORMAT:
    {{ response_model | schema }}
    """


joke_ppt(Joke)
# Tell a joke and explain why the joke is funny.
#
# RESPONSE FORMAT:
# {
#    "joke": "The joke"
#    "explanation": "The explanation of why the joke is funny"
#  }
```

With these prompting primitives **Outlines** makes building agents like
[AutoGPT](https://github.com/Significant-Gravitas/Auto-GPT),
[BabyAGI](https://github.com/yoheinakajima/babyagi),
[ViperGPT](https://viper.cs.columbia.edu/) or [Transformers
Agent](https://huggingface.co/docs/transformers/transformers_agents) easier by
removing boilerplate prompting code.

## Contributing

### What contributions?

We currently only accept bug fixes and documentation contributions. If you have a
feature request, please start a new
[discussion](https://github.com/normal-computing/outlines/discussions). The
issue tracker is only intended for actionable items.

### How to contribute?

Run `pip install -e .[test]` or `conda env create -f environment.yml`. To build the documentation you will also need to run `pip install -r requirements-doc.txt`.

Before pushing your code to the repository, please run `pre-commit run --all-files` and `pytest` to make sure that the code is formatted correctly and that the tests pass.

Do not hesitate to open a draft PR before your contribution is ready, especially if you have questions and/or need feedback.

## Examples

- [Pick the odd one out](https://github.com/normal-computing/outlines/blob/main/examples/pick_odd_one_out.py)
- [Meta prompting](https://github.com/normal-computing/outlines/blob/main/examples/meta_prompting.py)
- [ReAct](https://github.com/normal-computing/outlines/blob/main/examples/meta_prompting.py)
- [Generate code to solve math problems](https://github.com/normal-computing/outlines/blob/main/examples/dust/math-generate-code.py)
- [BabyAGI](https://github.com/normal-computing/outlines/blob/main/examples/babyagi.py)
- [Uncertainty](https://github.com/normal-computing/outlines/blob/main/examples/sampling.ipynb)
- [Simulation-based inference](https://github.com/normal-computing/outlines/blob/main/examples/simulation_based_inference.ipynb)


## Cite Outlines

``` bibtex
@article{willard2023efficient,
  title={Efficient Guided Generation for LLMs},
  author={Willard, Brandon T and Louf, R{\'e}mi},
  journal={arXiv preprint arXiv:2307.09702},
  year={2023}
}
```

## License

Outlines is open-source and licensed under the [Apache License 2.0](LICENSE).

            
