formatron

Name: formatron
Version: 0.4.10
Summary: Formatron empowers everyone to control the output format of language models with minimal overhead.
Upload time: 2024-12-21 05:23:20
Requires Python: >=3.10
License: MIT
Keywords: deep learning, language model, guided generation, structured generation, constrained decoding
            <p align='center'>
<img src="logo.svg">
</p>

[![PyPI](https://img.shields.io/pypi/v/formatron.svg)](https://pypi.python.org/pypi/formatron)
![PyPI Downloads](https://static.pepy.tech/badge/formatron)

Formatron allows users to control the output format of language models
with minimal overhead. It is lightweight, user-friendly,
and seamlessly integrates into existing codebases and frameworks.

## Installation

`pip install formatron`

## Features

- **🔗 Popular Library Integrations**: Supports transformers, exllamav2, vllm, and RWKV.
- **🔌 Plugins, not wrappers**:
Instead of wrapping third-party libraries in large, cumbersome classes,
Formatron offers convenient, clean plugins for different libraries.
- **💡 Library, not framework**:
Instead of unifying everything into a bulky framework,
Formatron is a flexible library that can be embedded anywhere.
- **✍️ Fluent Formatting**: Describe your format as easily as writing natural language.
- **📜 Regex and CFG Support**:
Effortlessly interleave regular expressions and context-free grammars (CFG) in formats.
- **⚙️ Efficient JSON Generation**: Feature-complete JSON generation based on Pydantic models or JSON schemas.
- **📤 Batched Inference**:
Freely specify different formats for each sequence in one batch!
- **🚀 Minimal Runtime Overhead**:
With Leo optimization, a specialized compacting algorithm,
and CFG caches across generations, the Earley algorithm implemented in Rust is
asymptotically and practically the fastest algorithm.
- **🔧 Customizable**: Everything is configurable, including schema generation,
grammar generation, and post-generation processing (such as function calls).

## Comparison to other libraries

| Capability                                   | Formatron                          | [LM Format Enforcer](https://github.com/noamgat/lm-format-enforcer)                           | [Guidance](https://github.com/guidance-ai/guidance) | [Outlines](https://github.com/outlines-dev/outlines)                                    | [LMQL](https://github.com/eth-sri/lmql)                                                         |
|:---------------------------------------------|------------------------------------|:----------------------------------------------------------------------------------------------|:----------------------------------------------------|:----------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|
| Regular Expressions                          | ✅                                  | ✅                                                                                             | ✅                                                   | ✅                                                                                       | 🟡([preview feature](https://lmql.ai/docs/language/constraints.html#regex-constraints-preview)) |
| Efficient Regex-constrained Generation       | ✅                                  | 🟡([performance issues still exist](https://github.com/noamgat/lm-format-enforcer/issues/36)) | ❌                                                   | 🟡([scalability currently suffers](https://github.com/outlines-dev/outlines/issues/680)) | ❌                                                                                               |
| Context-Free Grammars (CFG)                  | ✅                                  | ❌                                                                                             | ✅                                                   | 🟡([some bugs exist](https://github.com/outlines-dev/outlines/issues/959))              | ❌                                                                                               |
| Efficient CFG-constrained Generation         | ✅                                  | ❌                                                                                             | ❌                                                   | ❌                                                                                       | ❌                                                                                               |
| Custom Format Extractor                      | 🟡([some limitations exist](#ast)) | ❌                                                                                             | ✅                                                   | ✅                                                                                       | ✅                                                                                               |
| JSON Schema                                  | ✅([indirectly](#json-schema))      | ✅                                                                                             | ✅                                                   | ✅                                                                                       | ❌                                                                                               |
| Function Call From Callable                  | ✅                                  | ❌                                                                                             | ✅                                                   | ✅                                                                                       | ✅                                                                                               |
| Interleave Python control flow in generation | ❌                                  | ❌                                                                                             | ✅                                                   | ❌                                                                                       | ✅                                                                                               |
| Batched Generation                           | ✅                                  | ✅                                                                                             | ❌                                                   | ✅                                                                                       | ❌                                                                                               |
| Beam Search                                  | ❌                                  | ✅                                                                                             | ❌                                                   | ✅                                                                                       | ✅                                                                                               |
| Integrates into existing pipelines           | ✅                                  | ✅                                                                                             | ❌                                                   | 🟡([some integrations crash](https://github.com/outlines-dev/outlines/issues/1115))     | ❌                                                                                               |
| Optional JSON Fields                         | ✅                                  | ✅                                                                                             | ❌                                                   | ❌                                                                                       | ❌                                                                                               |
| LLM Controls JSON field whitespaces          | ✅                                  | ✅                                                                                             | ❌                                                   | ✅                                                                                       | ❌                                                                                               |
| LLM Controls JSON field orderings            | ❌                                  | ✅                                                                                             | ❌                                                   | ❌                                                                                       | ❌                                                                                               |
| JSON Schema with recursive classes           | ✅                                  | ✅                                                                                             | ❌                                                   | ❌                                                                                       | ❌                                                                                               |
| Extractive generation (substringOf)          | ✅                                  | ❌                                                                                             | ✅                                                   | ❌                                                                                       | ❌                                                                                               |

Feel free to open up an [issue](https://github.com/Dan-wanna-M/formatron/issues) if something is missing or incorrect!

## Examples

### Regex-constrained Generation

```python
import torch
from formatron.integrations.transformers import create_formatter_logits_processor_list
from formatron.formatter import FormatterBuilder
from transformers import AutoModelForCausalLM
import transformers
torch.manual_seed(514)
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-128k-instruct",
                                                device_map="cuda",
                                                torch_dtype=torch.float16)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct")

f = FormatterBuilder()
digit = f.regex('([1-9][0-9]*)', capture_name='digit')
f.append_line(f"My favorite integer is {digit}.")
f.append_str(f"I think integer {digit} is also very interesting.")
logits_processor = create_formatter_logits_processor_list(tokenizer, f)
inputs = tokenizer(["""<|system|>
You are a helpful assistant.<|end|>
<|user|>Which integer is your favourite?<|end|>
<|assistant|>"""], return_tensors="pt").to("cuda")
print(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,
                                            max_new_tokens=100, logits_processor=logits_processor)))
print(logits_processor[0].formatters_captures)
# possible output:
# [{'digit': [<re.Match object; span=(0, 2), match='42'>, <re.Match object; span=(0, 2), match='42'>]}]
```

Note that only
[Rust regex's syntax](https://docs.rs/regex/latest/regex/#syntax) is supported, which notably
does not include arbitrary lookaheads.
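
For instance, a pattern that relies on a lookahead, such as `[0-9]+(?=px)`, must be rewritten to match its context explicitly. A minimal, hypothetical sketch (the pattern and the post-processing step are illustrative, not part of Formatron's API):

```python
from formatron.formatter import FormatterBuilder

# Rust regex rejects lookaheads like r"[0-9]+(?=px)"; match the suffix
# explicitly instead and strip it from the capture afterwards.
f = FormatterBuilder()
size = f.regex(r'[0-9]+px', capture_name='size')
f.append_line(f"The box is {size} wide.")
# The captured re.Match objects end in "px"; slice off the last two
# characters in post-processing if only the digits are needed.
```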

### JSON Generation

#### JSON Example

```python
import torch
from formatron.integrations.transformers import create_formatter_logits_processor_list
from formatron.formatter import FormatterBuilder
from transformers import AutoModelForCausalLM
import transformers
from formatron.schemas.dict_inference import infer_mapping
torch.manual_seed(520)
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-128k-instruct",
                                                device_map="cuda",
                                                torch_dtype=torch.float16)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct")

f = FormatterBuilder()
schema = infer_mapping({"name": "foo", "age": 28})
f.append_line(f"{f.json(schema, capture_name='json')}")
logits_processor = create_formatter_logits_processor_list(tokenizer, f)
inputs = tokenizer(["""<|system|>
You are a helpful assistant.<|end|>
<|user|>I am 周明瑞. My age is 24. Extract information from this sentence into json.<|end|>
<|assistant|>"""], return_tensors="pt").to("cuda")
print(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,
                                            max_new_tokens=100, logits_processor=logits_processor)))
print(logits_processor[0].formatters_captures)
# possible output:
# [{'json': {'name': '周明瑞', 'age': 34}}]
```

#### Pydantic Model

```python
from formatron.schemas.pydantic import ClassSchema
from formatron.integrations.transformers import create_formatter_logits_processor_list
from formatron.formatter import FormatterBuilder
from transformers import AutoModelForCausalLM
import transformers
import torch

class Goods(ClassSchema):
    name: str
    price: float
    remaining: int

torch.manual_seed(520)
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-128k-instruct",
                                                device_map="cuda",
                                                torch_dtype=torch.float16)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct")

f = FormatterBuilder()
schema = Goods
f.append_line(f"{f.json(schema, capture_name='json')}")
logits_processor = create_formatter_logits_processor_list(tokenizer, f)
inputs = tokenizer(["""<|system|>
You are a helpful assistant.<|end|>
<|user|>We have 14 apples left with each price 14.4$. Extract information from this sentence into json.<|end|>
<|assistant|>"""], return_tensors="pt").to("cuda")
print(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,
                                            max_new_tokens=100, logits_processor=logits_processor)))
print(logits_processor[0].formatters_captures)
# possible output:
# [{'json': Goods(name='apples', price=14.4, remaining=14)}]
```

### Batched Inference

```python
import transformers
from transformers import GPT2LMHeadModel

from formatron.formatter import FormatterBuilder
from formatron.integrations.transformers import create_formatter_logits_processor_list
f = FormatterBuilder()
f.append_line(f"Hello, Huggingface!")
f3 = FormatterBuilder()
f3.append_line("Hello, Huggingface! Hello, Huggingface!")
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")
tokenizer = transformers.AutoTokenizer.from_pretrained("openai-community/gpt2",
                                                       padding_side='left')
tokenizer.pad_token = tokenizer.eos_token  # Needed for padding
model.generation_config.pad_token_id = tokenizer.pad_token_id
logits_processor = create_formatter_logits_processor_list(tokenizer, [f, f, f3])
inputs = tokenizer(["I am GPT2. ", "I am another GPT2. ", "I am yet another GPT2. "], return_tensors="pt",
                   padding=True)
print(tokenizer.batch_decode(model.generate(**inputs,
                                            max_new_tokens=100,
                                            logits_processor=logits_processor),
                             skip_special_tokens=True))
```

### Function Calls

```python
import torch
from formatron import schemas
from formatron.formatter import FormatterBuilder
from transformers import AutoModelForCausalLM
import transformers
from formatron.integrations.transformers import create_formatter_logits_processor_list

@schemas.pydantic.callable_schema
def add(a: int, b: int, /, *, c: int):
    return a + b + c

model = AutoModelForCausalLM.from_pretrained("NurtureAI/Meta-Llama-3-8B-Instruct-32k",
                                                device_map="cuda",
                                                torch_dtype=torch.float16)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "NurtureAI/Meta-Llama-3-8B-Instruct-32k")
inputs = tokenizer(["""<|system|>
You are a helpful assistant.<|end|>
<|user|>a is 1, b is 6 and c is 7. Generate a json containing them.<|end|>
<|assistant|>"""], return_tensors="pt").to("cuda")
f = FormatterBuilder()
f.append_line(f"{f.json(add, capture_name='json')}")
logits_processor = create_formatter_logits_processor_list(tokenizer, f)
print(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,
                                            max_new_tokens=100, logits_processor=logits_processor)))
print(logits_processor[0].formatters_captures)
# possible output:
# [{'json': 14}]
```

### CFG-constrained Generation

Context-free grammars use [kbnf's syntax](https://docs.rs/kbnf/latest/kbnf/#kbnf-grammar), which is a variant of EBNF.
Since Formatron uses [kbnf](https://github.com/Dan-wanna-M/kbnf?tab=readme-ov-file#features) under the hood, all of kbnf's performance claims apply.

```python
import torch
from formatron.formatter import FormatterBuilder
from transformers import AutoModelForCausalLM
import transformers
from formatron.integrations.transformers import create_formatter_logits_processor_list
from formatron.extractor import NonterminalExtractor
import typing

class ArithmeticExpressionExtractor(NonterminalExtractor):
    def __init__(self, nonterminal: str, capture_name: typing.Optional[str] = None):
        super().__init__(nonterminal, capture_name)

    def extract(self, input_str: str) -> typing.Optional[tuple[str, typing.Any]]:
        # Consume the longest prefix that forms a balanced arithmetic
        # expression; return (remaining_string, extracted_expression).
        i = 0
        left_bracket = 0
        while i < len(input_str):
            if input_str[i].isdigit() or input_str[i] in "+-*/.":
                i += 1
                continue
            if input_str[i] == "(":
                i += 1
                left_bracket += 1
                continue
            if input_str[i] == ")":
                i += 1
                left_bracket -= 1
                continue
            break
        if left_bracket != 0:
            return None
        return input_str[i:], input_str[:i]

    @property
    def kbnf_definition(self) -> str:
        return  """
expression ::=  term { ("+" | "-") term };
term       ::= factor { ("*" | "/") factor };
factor     ::= number | "(" expression ")";
number     ::= #"[0-9]+(\\\\.[0-9]+)?";
""".replace("expression", self.nonterminal)

model = AutoModelForCausalLM.from_pretrained("NurtureAI/Meta-Llama-3-8B-Instruct-32k",
                                                device_map="cuda",
                                                torch_dtype=torch.float16)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "NurtureAI/Meta-Llama-3-8B-Instruct-32k")
inputs = tokenizer(["""<|system|>
    You are a helpful assistant.<|end|>
    <|user|>Repeat it: ((32+43)*114)<|end|>
    <|assistant|>((32+43)*114)<|end|>
    <|user|>Repeat it: ((32+43)*(114-514))<|end|>
    <|assistant|>"""], return_tensors="pt").to("cuda")
f = FormatterBuilder()
f.append_line(
    f"{f.extractor(lambda nonterminal: ArithmeticExpressionExtractor(nonterminal, 'json'))}")
logits_processor = create_formatter_logits_processor_list(tokenizer, f)
print(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,
                                            max_new_tokens=100, logits_processor=logits_processor)))
print(logits_processor[0].formatters_captures)
# possible output: [{'json': '(((32+43)*(114-514)))*1.5'}]
```

### JSON Schema

Formatron natively supports a subset of JSON schema that covers the most useful features.

```python
from formatron.schemas import json_schema
from formatron.integrations.transformers import create_formatter_logits_processor_list
from formatron.formatter import FormatterBuilder
from transformers import AutoModelForCausalLM
import transformers
import torch

schema = {
    "$id": "https://example.com/person.json",
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "properties": {
        "name": {
            "type": "string"
        },
        "age": {
            "type": "integer"
        }
    },
    "required": ["name", "age"]
}
schema = json_schema.create_schema(schema)
torch.manual_seed(520)
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-128k-instruct",
                                                device_map="cuda",
                                                torch_dtype=torch.float16)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct")

f = FormatterBuilder()
f.append_line(f"{f.json(schema, capture_name='json')}")
logits_processor = create_formatter_logits_processor_list(tokenizer, f)
inputs = tokenizer(["""<|system|>
You are a helpful assistant.<|end|>
<|user|>Extract information from this sentence into json: my name is Genov and I am 28 years old.<|end|>
<|assistant|>```"""], return_tensors="pt").to("cuda")
print(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,
                                            max_new_tokens=100, logits_processor=logits_processor)))
print(logits_processor[0].formatters_captures)
# possible output:
# [{'json': {'name': 'Genov', 'age': 28}}]
```

### Extractive Generation

Starting from `v0.4.7`, extractive generation is supported via suffix automata. This means you can constrain the output to be a substring of a given input.

```python
from formatron.integrations.transformers import create_formatter_logits_processor_list
from formatron.formatter import FormatterBuilder
from transformers import AutoModelForCausalLM
import transformers
import torch

torch.manual_seed(520)
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-128k-instruct",
                                                device_map="cuda",
                                                torch_dtype=torch.float16)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct")

f = FormatterBuilder()
f.append_line(f"{f.substr('The quick brown fox jumps over the lazy dog.', capture_name='animal')}")
logits_processor = create_formatter_logits_processor_list(tokenizer, f)
inputs = tokenizer(["""<|system|>
You are a helpful assistant.<|end|>
<|user|>What animal is mentioned in the phrase "The quick brown fox jumps over the lazy dog"?<|end|>
<|assistant|>The animal mentioned in the phrase is the """], return_tensors="pt").to("cuda")
output = tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,
                                                max_new_tokens=100, logits_processor=logits_processor))
print(output)
print(logits_processor[0].formatters_captures)
# possible output:
# [{'animal': 'fox'}]
```

You can also embed fields that require extractive generation in Pydantic models or JSON schemas.

```python
from formatron.schemas.pydantic import ClassSchema
from formatron.integrations.transformers import create_formatter_logits_processor_list
from formatron.schemas.schema import SubstringOf
from formatron.formatter import FormatterBuilder
from transformers import AutoModelForCausalLM
import transformers
import torch
import typing
from pydantic import Field

class Person(ClassSchema):
    name: typing.Annotated[str, Field(..., substring_of="Alice Bob Charlie David Eve"), SubstringOf("Alice Bob Charlie David Eve")]
    age: int

torch.manual_seed(520)
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-128k-instruct",
                                                device_map="cuda",
                                                torch_dtype=torch.float16)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct")

f = FormatterBuilder()
f.append_line(f"{f.json(Person, capture_name='json')}")
logits_processor = create_formatter_logits_processor_list(tokenizer, f)
inputs = tokenizer(["""<|system|>
You are a helpful assistant.<|end|>
<|user|>Extract information from this sentence into json: Bob is 32 years old.<|end|>
<|assistant|>```"""], return_tensors="pt").to("cuda")
print(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,
                                            max_new_tokens=100, logits_processor=logits_processor)))
print(logits_processor[0].formatters_captures)
# possible output:
# [{'json': {'name': 'Bob', 'age': 32}}]
```

```python
from formatron.schemas import json_schema
from formatron.integrations.transformers import create_formatter_logits_processor_list
from formatron.formatter import FormatterBuilder
from transformers import AutoModelForCausalLM
import transformers
import torch

schema = {
    "$id": "https://example.com/animal.json",
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "properties": {
        "animal": {
            "type": "string",
            "substring_of": "The quick brown fox jumps over the lazy dog."
        }
    },
    "required": ["animal"]
}
schema = json_schema.create_schema(schema)

torch.manual_seed(520)
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-128k-instruct",
                                                device_map="cuda",
                                                torch_dtype=torch.float16)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct")

f = FormatterBuilder()
f.append_line(f"{f.json(schema, capture_name='json')}")
logits_processor = create_formatter_logits_processor_list(tokenizer, f)
inputs = tokenizer(["""<|system|>
You are a helpful assistant.<|end|>
<|user|>What animal is mentioned in the phrase "The quick brown fox jumps over the lazy dog"?<|end|>
<|assistant|>The animal mentioned in the phrase is the """], return_tensors="pt").to("cuda")
output = tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,
                                                max_new_tokens=100, logits_processor=logits_processor))
print(output)
print(logits_processor[0].formatters_captures)
# possible output:
# [{'json': {'animal': 'fox'}}]
```

### Integrations

Check out integration examples in the [tests](https://github.com/Dan-wanna-M/formatron/tree/master/tests) directory.
You may also want to check the minimum compatible version in [pyproject.toml](https://github.com/Dan-wanna-M/formatron/blob/master/pyproject.toml).

## API Reference

Check out the API reference [here](https://dan-wanna-m.github.io/formatron/).

## Benchmark

Check out the benchmark [here](benchmarks/readme.md).

## What Formatron Won't Do

### Implement an End-to-End Inference Pipeline

Every library related to large language models (LLMs) must account for the fact that LLMs
are rapidly evolving. Many libraries, such as Guidance, Outlines, and LMQL,
address this by offering their own end-to-end inference pipelines,
which are constantly updated to incorporate the latest techniques.

Formatron, however, takes a different approach.
Rather than providing a full-fledged inference pipeline,
Formatron focuses on being modular and easily embeddable into existing
and future pipelines.
While this may require users to write a bit more code initially,
it makes maintaining and updating the pipeline painless in the long run.

## What Formatron Can't Do Now

### Support OpenAI and Other API-based LLM Solutions

These APIs don't expose efficient per-token logits masking, which nullifies most of the benefits
of constrained decoding.

### Semantic Validation

Although constrained decoding can enforce certain formats
in generated text, it cannot guarantee that the output aligns
with the user's intention. In other words, if the model is inadequate
or the prompt is poorly written, it is still possible to generate well-formatted
but meaningless output.

### Context-Sensitive Validation

Unfortunately, many formats require context-sensitive validation.
For example, two keys in a JSON object must not be equal to each other.
Unlike with CFGs, there is no efficient, generic algorithm for validating
such constraints. However, for a specific format, it is possible to validate
them efficiently with a specialized algorithm. In a future release,
Formatron will support context-sensitive validation for popular formats like JSON.
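
Until then, context-sensitive rules can be checked after generation and the output regenerated on failure. A minimal sketch of the duplicate-key example in plain Python (post-processing only, not a Formatron API):

```python
import json

def has_duplicate_keys(raw: str) -> bool:
    """Return True if any object in `raw` repeats a key.

    json.loads silently keeps only the last duplicate, so inspect the
    raw key-value pairs via object_pairs_hook before they are merged.
    """
    found = False
    def check(pairs):
        nonlocal found
        keys = [k for k, _ in pairs]
        if len(keys) != len(set(keys)):
            found = True
        return dict(pairs)
    json.loads(raw, object_pairs_hook=check)
    return found

assert has_duplicate_keys('{"a": 1, "a": 2}')
assert not has_duplicate_keys('{"a": 1, "b": 2}')
```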

### Abstract Syntax Tree (AST) Construction<a id='ast'></a>

Formatron uses an Earley recognizer rather than a parser under the hood.
This approach allows for more efficient generation and validation
but also means that the AST of a given format is not available.
In most cases, this is not a problem,
as it is usually possible to extract the format from the generated string
using simple algorithms and then parse it with an existing parser.
However, in some cases, obtaining the AST might be necessary.
In a future release, Formatron will support AST construction.
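
For instance, the arithmetic capture from the CFG example above happens to be valid Python syntax, so the standard library can rebuild its AST after generation. A minimal sketch:

```python
import ast

# The CFG guarantees the capture is well-formed arithmetic, so Python's
# own parser can reconstruct the AST once generation has finished.
captured = "((32+43)*(114-514))"  # e.g. taken from formatters_captures
tree = ast.parse(captured, mode="eval")
print(ast.dump(tree.body))
# BinOp(left=BinOp(...), op=Mult(), right=BinOp(...))
```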

### Process batch logits in parallel

While it is *technically possible* to process batch logits in parallel CPU threads
(Formatron's Rust core makes this feasible), most frameworks call
Formatron's plugin sequentially for each sequence's logits in a batch. Altering
this behaviour would require either a breaking change to the frameworks' APIs or letting
Formatron take over the control flow. Both options imply
substantial work.
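
Purely as an illustration of what a batch-level hook could look like if a framework exposed one (`mask_logits` is a hypothetical method name, not Formatron's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: a framework that hands over the whole batch at
# once could fan per-sequence masking out to threads; a Rust core can
# release the GIL while it works. `mask_logits` is an invented name.
def mask_batch_in_parallel(formatters, batch_logits):
    with ThreadPoolExecutor() as pool:
        return list(pool.map(
            lambda pair: pair[0].mask_logits(pair[1]),
            zip(formatters, batch_logits)))
```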

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "formatron",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.10",
    "maintainer_email": null,
    "keywords": "deep learning, language model, guided generation, structured generation, constrained decoding",
    "author": null,
    "author_email": "Xintong Sun <xs28@rice.edu>",
    "download_url": "https://files.pythonhosted.org/packages/1d/cb/49cc57dc579eda120ffa175a5fb24c0bacfff4e684ede70220604d9a1f28/formatron-0.4.10.tar.gz",
    "platform": null,
    "description": "<p align='center'>\n<image src=\"logo.svg\">\n</p>\n\n[![PyPI](https://img.shields.io/pypi/v/formatron.svg)](https://pypi.python.org/pypi/formatron)\n![PyPI Downloads](https://static.pepy.tech/badge/formatron)\n\nFormatron allows users to control the output format of language models\nwith minimal overhead. It is lightweight, user-friendly,\nand seamlessly integrates into existing codebases and frameworks.\n\n## Installation\n\n`pip install formatron`\n\n## Features\n\n- **\ud83d\udd17 Popular Library Integrations**: Supports transformers, exllamav2, vllm and RWKV.\n- **\ud83d\udd0c Plugins, not wrappers**:\nInstead of wrapping third-party libraries in large, cumbersome classes,\nFormatron offers convenient, clean plugins for different libraries.\n- **\ud83d\udca1 Library, not framework**:\nInstead of unifying everything into a bulky framework,\nFormatron is a flexible library that can be embedded anywhere.\n- **\u270d\ufe0f Fluent Formatting**: Describe your format as easily as writing natural language.\n- **\ud83d\udcdc Regex and CFG Support**:\nEffortlessly interleave regular expressions and context-free grammars (CFG) in formats.\n- **\u2699\ufe0f Efficient JSON Generation**: Feature-complete JSON generation based on Pydantic models or json schemas.\n- **\ud83d\udce4 Batched Inference**:\nFreely specify different formats for each sequence in one batch!\n- **\ud83d\ude80 Minimal Runtime Overhead**:\nWith Leo optimization, a specialized compacting algorithm,\nand CFG caches across generations, Earley algorithm implemented in Rust is\naymptotically and practically the fastest algorithm.\n- **\ud83d\udd27 Customizable**: Everything is configurable, including schema generation,\ngrammar generation, and post-generation processing (such as function calls).\n\n## Comparison to other libraries\n\n| Capability                                   | Formatron                          | [LM Format Enforcer](https://github.com/noamgat/lm-format-enforcer)                           | [Guidance](https://github.com/guidance-ai/guidance) | [Outlines](https://github.com/outlines-dev/outlines)                                    | [LMQL](https://github.com/eth-sri/lmql)                                                         |\n|:---------------------------------------------|------------------------------------|:----------------------------------------------------------------------------------------------|:----------------------------------------------------|:----------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|\n| Regular Expressions                          | \u2705                                  | \u2705                                                                                             | \u2705                                                   | \u2705                                                                                       | \ud83d\udfe1([preview feature](https://lmql.ai/docs/language/constraints.html#regex-constraints-preview)) |\n| Efficient Regex-constrained Generation       | \u2705                                  | \ud83d\udfe1([performance issues still exist](https://github.com/noamgat/lm-format-enforcer/issues/36)) | \u274c                                                   | \ud83d\udfe1([scalablity currently suffers](https://github.com/outlines-dev/outlines/issues/680)) | \u274c                                              
                                                 |\n| Context Free Grammars(CFG)                   | \u2705                                  | \u274c                                                                                             | \u2705                                                   | \ud83d\udfe1([some bugs exist](https://github.com/outlines-dev/outlines/issues/959))              | \u274c                                                                                               |\n| Efficient CFG-constrained Generation         | \u2705                                  | \u274c                                                                                             | \u274c                                                   | \u274c                                                                                       | \u274c                                                                                               |\n| Custom Format Extractor                      | \ud83d\udfe1([some limitations exist](#ast)) | \u274c                                                                                             | \u2705                                                   | \u2705                                                                                       | \u2705                                                                                               |\n| JSON Schema                                  | \u2705([indirectly](#json-schema))      | \u2705                                                                                             | \u2705                                                   | \u2705                                                                                       | \u274c                                                                                               |\n| Function Call From Callable                  | \u2705                                  | \u274c                                                                                             | \u2705                                                   | \u2705                                                                                       | \u2705                                                                                               |\n| Interleave Python control flow in generation | \u274c                                  | \u274c                                                                                             | \u2705                                                   | \u274c                                                                                       | \u2705                                                                                               |\n| Batched Generation                           | \u2705                                  | \u2705                                                                                             | \u274c                                                   | \u2705                                                                                       | \u274c                                                                                               |\n| Beam Search                                  | \u274c                                  | \u2705                                                                                             | \u274c                                                   | \u2705                                                                                      
 | \u2705                                                                                               |\n| Integrates into existing pipelines           | \u2705                                  | \u2705                                                                                             | \u274c                                                   | \ud83d\udfe1([some integrations crash](https://github.com/outlines-dev/outlines/issues/1115))     | \u274c                                                                                               |\n| Optional JSON Fields                         | \u2705                                  | \u2705                                                                                             | \u274c                                                   | \u274c                                                                                       | \u274c                                                                                               |\n| LLM Controls JSON field whitespaces          | \u2705                                  | \u2705                                                                                             | \u274c                                                   | \u2705                                                                                       | \u274c                                                                                               |\n| LLM Controls JSON field orderings            | \u274c                                  | \u2705                                                                                             | \u274c                                                   | \u274c                                                                                       | \u274c                                                                                               |\n| JSON Schema with recursive classes           | \u2705                                  | \u2705                                                                                             | \u274c                                                   | \u274c                                                                                       | \u274c                                                                                               |\n|Extractive generation(substringOf)           | \u2705                                  | \u274c                                                                                             | \u2705                                                   | \u274c                                                                                       | \u274c                                                                                               |\n\nFeel free to open up an [issue](https://github.com/Dan-wanna-M/formatron/issues) if something is missing or incorrect!\n\n## Examples\n\n### Regex-constrained Generation\n\n```python\nimport torch\nfrom formatron.integrations.transformers import create_formatter_logits_processor_list\nfrom formatron.formatter import FormatterBuilder\nfrom transformers import AutoModelForCausalLM\nimport transformers\ntorch.manual_seed(514)\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/Phi-3-mini-128k-instruct\",\n                                                device_map=\"cuda\",\n                                                torch_dtype=torch.float16)\ntokenizer = transformers.AutoTokenizer.from_pretrained(\n    
\"microsoft/Phi-3-mini-128k-instruct\")\n\nf = FormatterBuilder()\ndigit = f.regex('([1-9][0-9]*)', capture_name='digit')\nf.append_line(f\"My favorite integer is {digit}.\")\nf.append_str(f\"I think integer {digit} is also very interesting.\")\nlogits_processor = create_formatter_logits_processor_list(tokenizer, f)\ninputs = tokenizer([\"\"\"<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>Which integer is your favourite?<|end|>\n<|assistant|>\"\"\"], return_tensors=\"pt\").to(\"cuda\")\nprint(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,\n                                            max_new_tokens=100, logits_processor=logits_processor)))\nprint(logits_processor[0].formatters_captures)\n# possible output:\n# [{'digit': [<re.Match object; span=(0, 2), match='42'>, <re.Match object; span=(0, 2), match='42'>]}]\n```\n\nNote that only\n[Rust regex's syntax](https://docs.rs/regex/latest/regex/#syntax) is supported, which notably\ndoes not include arbitrary lookaheads.\n\n### Json Generation\n\n#### Pydantic Model\n\n```python\nimport torch\nfrom formatron.integrations.transformers import create_formatter_logits_processor_list\nfrom formatron.formatter import FormatterBuilder\nfrom transformers import AutoModelForCausalLM\nimport transformers\nfrom formatron.schemas.dict_inference import infer_mapping\ntorch.manual_seed(520)\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/Phi-3-mini-128k-instruct\",\n                                                device_map=\"cuda\",\n                                                torch_dtype=torch.float16)\ntokenizer = transformers.AutoTokenizer.from_pretrained(\n    \"microsoft/Phi-3-mini-128k-instruct\")\n\nf = FormatterBuilder()\nschema = infer_mapping({\"name\": \"foo\", \"age\": 28})\nf.append_line(f\"{f.json(schema, capture_name='json')}\")\nlogits_processor = create_formatter_logits_processor_list(tokenizer, f)\ninputs = tokenizer([\"\"\"<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>I am \u5468\u660e\u745e. My age is 24. Extract information from this sentence into json.<|end|>\n<|assistant|>\"\"\"], return_tensors=\"pt\").to(\"cuda\")\nprint(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,\n                                            max_new_tokens=100, logits_processor=logits_processor)))\nprint(logits_processor[0].formatters_captures)\n# possible output:\n# [{'json': {'name': '\u5468\u660e\u745e', 'age': 34}}]\n```\n\n#### Json Example\n\n```python\nfrom formatron.schemas.pydantic import ClassSchema\nfrom formatron.integrations.transformers import create_formatter_logits_processor_list\nfrom formatron.formatter import FormatterBuilder\nfrom transformers import AutoModelForCausalLM\nimport transformers\nimport torch\n\nclass Goods(ClassSchema):\n    name: str\n    price: float\n    remaining: int\n\ntorch.manual_seed(520)\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/Phi-3-mini-128k-instruct\",\n                                                device_map=\"cuda\",\n                                                torch_dtype=torch.float16)\ntokenizer = transformers.AutoTokenizer.from_pretrained(\n    \"microsoft/Phi-3-mini-128k-instruct\")\n\nf = FormatterBuilder()\nschema = Goods\nf.append_line(f\"{f.json(schema, capture_name='json')}\")\nlogits_processor = create_formatter_logits_processor_list(tokenizer, f)\ninputs = tokenizer([\"\"\"<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>We have 14 apples left with each price 14.4$. 
Extract information from this sentence into json.<|end|>\n<|assistant|>\"\"\"], return_tensors=\"pt\").to(\"cuda\")\nprint(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,\n                                            max_new_tokens=100, logits_processor=logits_processor)))\nprint(logits_processor[0].formatters_captures)\n# possible output:\n# [{'json': Goods(name='apples', price=14.4, remaining=14)}]\n```\n\n### Batched Inference\n\n```python\nimport transformers\nfrom transformers import GPT2LMHeadModel\n\nfrom formatron.formatter import FormatterBuilder\nfrom formatron.integrations.transformers import create_formatter_logits_processor_list\nf = FormatterBuilder()\nf.append_line(f\"Hello, Huggingface!\")\nf3 = FormatterBuilder()\nf3.append_line(\"Hello, Huggingface! Hello, Huggingface!\")\nmodel = GPT2LMHeadModel.from_pretrained(\"openai-community/gpt2\")\ntokenizer = transformers.AutoTokenizer.from_pretrained(\"openai-community/gpt2\",\n                                                       padding_side='left')\ntokenizer.pad_token = tokenizer.eos_token  # Needed for padding\nmodel.generation_config.pad_token_id = tokenizer.pad_token_id\nlogits_processor = create_formatter_logits_processor_list(tokenizer, [f, f, f3])\ninputs = tokenizer([\"I am GPT2. \", \"I am another GPT2. \", \"I am yet another GPT2. \"], return_tensors=\"pt\",\n                   padding=True)\nprint(tokenizer.batch_decode(model.generate(**inputs,\n                                            max_new_tokens=100,\n                                            logits_processor=logits_processor),\n                             skip_special_tokens=True))\n```\n\n### Function Calls\n\n```python\nimport torch\nfrom formatron import schemas\nfrom formatron.formatter import FormatterBuilder\nfrom transformers import AutoModelForCausalLM\nimport transformers\nfrom formatron.integrations.transformers import create_formatter_logits_processor_list\n\n@schemas.pydantic.callable_schema\ndef add(a: int, b: int, /, *, c: int):\n    return a + b + c\n\nmodel = AutoModelForCausalLM.from_pretrained(\"NurtureAI/Meta-Llama-3-8B-Instruct-32k\",\n                                                device_map=\"cuda\",\n                                                torch_dtype=torch.float16)\ntokenizer = transformers.AutoTokenizer.from_pretrained(\n    \"NurtureAI/Meta-Llama-3-8B-Instruct-32k\")\ninputs = tokenizer([\"\"\"<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>a is 1, b is 6 and c is 7. 
Generate a json containing them.<|end|>\n<|assistant|>\"\"\"], return_tensors=\"pt\").to(\"cuda\")\nf = FormatterBuilder()\nf.append_line(f\"{f.json(add, capture_name='json')}\")\nlogits_processor = create_formatter_logits_processor_list(tokenizer, f)\nprint(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,\n                                            max_new_tokens=100, logits_processor=logits_processor)))\nprint(logits_processor[0].formatters_captures)\n# possible output:\n# [{'json': 14}]\n```\n\n### CFG-Constrained generation\n\nContext free grammars use [kbnf's syntax](https://docs.rs/kbnf/latest/kbnf/#kbnf-grammar) which is a variant of EBNF.\nSince formatron uses [kbnf](https://github.com/Dan-wanna-M/kbnf?tab=readme-ov-file#features) under the hood, all kbnf's claims on performance hold.\n\n```python\nimport torch\nfrom formatron.formatter import FormatterBuilder\nfrom transformers import AutoModelForCausalLM\nimport transformers\nfrom formatron.integrations.transformers import create_formatter_logits_processor_list\nfrom formatron.extractor import NonterminalExtractor\nimport typing\n\nclass ArithmeticExpressionExtractor(NonterminalExtractor):\n    def __init__(self, nonterminal: str, capture_name: typing.Optional[str] = None):\n        super().__init__(nonterminal, capture_name)\n\n    def extract(self, input_str: str) -> typing.Optional[tuple[str, typing.Any]]:\n        i = 0\n        left_bracket = 0\n        while i < len(input_str):\n            if input_str[i].isdigit() or input_str[i] in \"+-*/.\":\n                i += 1\n                continue\n            if input_str[i] == \"(\":\n                i += 1\n                left_bracket += 1\n                continue\n            if input_str[i] == \")\":\n                i += 1\n                left_bracket -= 1\n                continue\n            else:\n                break\n        if left_bracket != 0:\n            return None\n        return input_str[i:], input_str[:i]\n\n    @property\n    def kbnf_definition(self) -> str:\n        return  \"\"\"\nexpression ::=  term { (\"+\" | \"-\") term };\nterm       ::= factor { (\"*\" | \"/\") factor };\nfactor     ::= number | \"(\" expression \")\";\nnumber     ::= #\"[0-9]+(\\\\\\\\.[0-9]+)?\";\n\"\"\".replace(\"expression\", self.nonterminal)\n\nmodel = AutoModelForCausalLM.from_pretrained(\"NurtureAI/Meta-Llama-3-8B-Instruct-32k\",\n                                                device_map=\"cuda\",\n                                                torch_dtype=torch.float16)\ntokenizer = transformers.AutoTokenizer.from_pretrained(\n    \"NurtureAI/Meta-Llama-3-8B-Instruct-32k\")\ninputs = tokenizer([\"\"\"<|system|>\n    You are a helpful assistant.<|end|>\n    <|user|>Repeat it: ((32+43)*114)<|end|>\n    <|assistant|>((32+43)*114)<|end|>\n    <|user|>Repeat it: ((32+43)*(114-514))<|end|>\n    <|assistant|>\"\"\"], return_tensors=\"pt\").to(\"cuda\")\nf = FormatterBuilder()\nf.append_line(\n    f\"{f.extractor(lambda nonterminal: ArithmeticExpressionExtractor(nonterminal, 'json'))}\")\nlogits_processor = create_formatter_logits_processor_list(tokenizer, f)\nprint(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,\n                                            max_new_tokens=100, logits_processor=logits_processor)))\nprint(logits_processor[0].formatters_captures)\n# possible output: [{'json': '(((32+43)*(114-514)))*1.5'}]\n```\n\n### Json Schema\n\nFormatron supports a subset of json schemas that cover most useful features 
natively.\n\n```python\nfrom formatron.schemas import json_schema\nfrom formatron.integrations.transformers import create_formatter_logits_processor_list\nfrom formatron.formatter import FormatterBuilder\nfrom transformers import AutoModelForCausalLM\nimport transformers\nimport torch\n\nschema = {\n    \"$id\": \"https://example.com/person.json\",\n    \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n    \"type\": \"object\",\n    \"properties\": {\n        \"name\": {\n            \"type\": \"string\"\n        },\n        \"age\": {\n            \"type\": \"integer\"\n        }\n    },\n    \"required\": [\"name\", \"age\"]\n}\nschema = json_schema.create_schema(schema)\ntorch.manual_seed(520)\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/Phi-3-mini-128k-instruct\",\n                                                device_map=\"cuda\",\n                                                torch_dtype=torch.float16)\ntokenizer = transformers.AutoTokenizer.from_pretrained(\n    \"microsoft/Phi-3-mini-128k-instruct\")\n\nf = FormatterBuilder()\nf.append_line(f\"{f.json(schema, capture_name='json')}\")\nlogits_processor = create_formatter_logits_processor_list(tokenizer, f)\ninputs = tokenizer([\"\"\"<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>Extract information from this sentence into json: my name is Genov and I am 28 years old.<|end|>\n<|assistant|>```\"\"\"], return_tensors=\"pt\").to(\"cuda\")\nprint(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,\n                                            max_new_tokens=100, logits_processor=logits_processor)))\nprint(logits_processor[0].formatters_captures)\n# possible output:\n# [{'json': {'name': 'Genov', 'age': 28}}]\n```\n\n### Extractive generation\n\nStarting from `v0.4.7`, extractive generation is supported with suffix automata. 
This means that you can constrain the output to be a substring of a given input.\n\n```python\nfrom formatron.integrations.transformers import create_formatter_logits_processor_list\nfrom formatron.formatter import FormatterBuilder\nfrom transformers import AutoModelForCausalLM\nimport transformers\nimport torch\n\ntorch.manual_seed(520)\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/Phi-3-mini-128k-instruct\",\n                                                device_map=\"cuda\",\n                                                torch_dtype=torch.float16)\ntokenizer = transformers.AutoTokenizer.from_pretrained(\n    \"microsoft/Phi-3-mini-128k-instruct\")\n\nf = FormatterBuilder()\nf.append_line(f\"{f.substr('The quick brown fox jumps over the lazy dog.', capture_name='animal')}\")\nlogits_processor = create_formatter_logits_processor_list(tokenizer, f)\ninputs = tokenizer([\"\"\"<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>What animal is mentioned in the phrase \"The quick brown fox jumps over the lazy dog\"?<|end|>\n<|assistant|>The animal mentioned in the phrase is the \"\"\"], return_tensors=\"pt\").to(\"cuda\")\noutput = tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,\n                                                max_new_tokens=100, logits_processor=logits_processor))\nprint(output)\nprint(logits_processor[0].formatters_captures)\n# possible output:\n# [{'animal': 'fox'}]\n```\n\nWhat's more, you can embed fields that need extractive generation into pydantic models or json schemas.\n\n```python\nfrom formatron.schemas.pydantic import ClassSchema\nfrom formatron.integrations.transformers import create_formatter_logits_processor_list\nfrom formatron.schemas.schema import SubstringOf\nfrom formatron.formatter import FormatterBuilder\nfrom transformers import AutoModelForCausalLM\nimport transformers\nimport torch\nimport typing\nfrom pydantic import Field\n\nclass Person(ClassSchema):\n    name: typing.Annotated[str, Field(..., substring_of=\"Alice Bob Charlie David Eve\"), SubstringOf(\"Alice Bob Charlie David Eve\")]\n    age: int\n\ntorch.manual_seed(520)\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/Phi-3-mini-128k-instruct\",\n                                                device_map=\"cuda\",\n                                                torch_dtype=torch.float16)\ntokenizer = transformers.AutoTokenizer.from_pretrained(\n    \"microsoft/Phi-3-mini-128k-instruct\")\n\nf = FormatterBuilder()\nf.append_line(f\"{f.json(Person, capture_name='json')}\")\nlogits_processor = create_formatter_logits_processor_list(tokenizer, f)\ninputs = tokenizer([\"\"\"<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>Extract information from this sentence into json: Bob is 32 years old.<|end|>\n<|assistant|>```\"\"\"], return_tensors=\"pt\").to(\"cuda\")\nprint(tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,\n                                            max_new_tokens=100, logits_processor=logits_processor)))\nprint(logits_processor[0].formatters_captures)\n# possible output:\n# [{'json': {'name': 'Bob', 'age': 32}}]\n```\n\n```python\nfrom formatron.schemas import json_schema\nfrom formatron.integrations.transformers import create_formatter_logits_processor_list\nfrom formatron.formatter import FormatterBuilder\nfrom transformers import AutoModelForCausalLM\nimport transformers\nimport torch\n\nschema = {\n    \"$id\": \"https://example.com/animal.json\",\n    \"$schema\": 
\"https://json-schema.org/draft/2020-12/schema\",\n    \"type\": \"object\",\n    \"properties\": {\n        \"animal\": {\n            \"type\": \"string\",\n            \"substring_of\": \"The quick brown fox jumps over the lazy dog.\"\n        }\n    },\n    \"required\": [\"animal\"]\n}\nschema = json_schema.create_schema(schema)\n\ntorch.manual_seed(520)\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/Phi-3-mini-128k-instruct\",\n                                                device_map=\"cuda\",\n                                                torch_dtype=torch.float16)\ntokenizer = transformers.AutoTokenizer.from_pretrained(\n    \"microsoft/Phi-3-mini-128k-instruct\")\n\nf = FormatterBuilder()\nf.append_line(f\"{f.json(schema, capture_name='json')}\")\nlogits_processor = create_formatter_logits_processor_list(tokenizer, f)\ninputs = tokenizer([\"\"\"<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>What animal is mentioned in the phrase \"The quick brown fox jumps over the lazy dog\"?<|end|>\n<|assistant|>The animal mentioned in the phrase is the \"\"\"], return_tensors=\"pt\").to(\"cuda\")\noutput = tokenizer.batch_decode(model.generate(**inputs, top_p=0.5, temperature=1,\n                                                max_new_tokens=100, logits_processor=logits_processor))\nprint(output)\nprint(logits_processor[0].formatters_captures)\n# possible output:\n# [{'json': {'animal': 'fox'}}]\n```\n\n### Integrations\n\nCheck out integration examples in the [tests](https://github.com/Dan-wanna-M/formatron/tree/master/tests) directory.\nYou may also want to check the minimum compatible version in [pyproject.toml](https://github.com/Dan-wanna-M/formatron/blob/master/pyproject.toml).\n\n## API Reference\n\nCheck out the API reference [here](https://dan-wanna-m.github.io/formatron/).\n\n## Benchmark\n\nCheck out the benchmark [here](benchmarks/readme.md).\n\n## What Formatron Won't Do\n\n### Implement an End-to-End Inference Pipeline\n\nEvery library related to large language models(LLM) must consider that LLMs\nare rapidly evolving. Many libraries, such as Guidance, Outlines, and LMQL,\naddress this by offering their own end-to-end inference pipelines,\nwhich are constantly updated to incorporate the latest techniques.\n\nFormatron, however, takes a different approach.\nRather than providing a full-fledged inference pipeline,\nFormatron focuses on being modular and easily embeddable into existing\nand future pipelines.\nWhile this may require users to write a bit more code initially,\nit makes maintaining and updating the pipeline painless in the long run.\n\n## What Formatron Can't Do Now\n\n### Support OpenAI or in general API-based LLM solutions\n\nThey don't support efficient logits masking per token, nullifying most benefits\nof constrained decoding.\n\n### Semantic Validation\n\nAlthough constrained decoding can enforce certain formats\nin generated text, they cannot guarantee that the output aligns\nwith the users' intention. In other words, if the model is inadequate\nor the prompt is poorly written, it's possible to generate well-formatted\nbut meaningless output.\n\n### Context-Sensitive Validation\n\nUnfortunately, many formats require context-sensitive validation.\nFor example, two keys in a JSON object must not be equal to each other.\nUnlike CFGs, there is no efficient, generic algorithm to validate\nsuch constraints. However, for a specific format, it is possible to validate\nthem efficiently with a specialized algorithm. 
### Abstract Syntax Tree (AST) Construction<a id='ast'></a>

Formatron uses an Earley recognizer rather than a parser under the hood.
This approach allows for more efficient generation and validation,
but it also means that the AST of a given format is not available.
In most cases this is not a problem,
as it is usually possible to extract the format from the generated string
using simple algorithms and then parse it with an existing parser
(a short sketch follows at the end of this section).
However, in some cases, obtaining the AST might be necessary.
In a future release, Formatron will support AST construction.

### Process batch logits in parallel

While it is *technically possible* to process batch logits in parallel CPU threads
(since Formatron uses Rust internally), most frameworks sequentially
call Formatron's plugin for each sequence's logits in a batch. Altering
this behaviour requires either a breaking change to the frameworks' APIs or letting
Formatron take over the control flow; both options imply
substantial work.
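To make the extract-then-parse workaround from the AST section concrete, here is a minimal sketch that hands a generated string to Python's built-in `ast` parser to recover a tree after the fact; the expression value is hypothetical, for illustration only:

```python
import ast

# Hypothetical output of a generation constrained to Python arithmetic
# expressions (illustrative value, not produced by Formatron here).
generated = "(1 + 2) * 3"

# Formatron itself exposes no parse tree, but an existing parser can
# rebuild one from the generated string.
tree = ast.parse(generated, mode="eval")
print(ast.dump(tree.body))
# BinOp(left=BinOp(left=Constant(value=1), op=Add(), right=Constant(value=2)),
#       op=Mult(), right=Constant(value=3))
```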
    "bugtrack_url": null,
    "license": "MIT License  Copyright (c) 2023 Huanghe  Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:  The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.  THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ",
    "summary": "Formatron empowers everyone to control the output format of language models with minimal overhead.",
    "version": "0.4.10",
    "project_urls": {
        "Repository": "https://github.com/Dan-wanna-M/formatron"
    },
    "split_keywords": [
        "deep learning",
        " language model",
        " guided generation",
        " structured generation",
        " constrained decoding"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "7c061b57644c55a216594cd2d832f2519b69c1fb05d417ff09f27584d4549120",
                "md5": "affb252351725247c44ebf1746db81c4",
                "sha256": "6e31b5610a7407960490e8ae1b326f45bb2ccc76ce9ee7eb18a7067d4ac7f60f"
            },
            "downloads": -1,
            "filename": "formatron-0.4.10-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "affb252351725247c44ebf1746db81c4",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.10",
            "size": 40887,
            "upload_time": "2024-12-21T05:23:16",
            "upload_time_iso_8601": "2024-12-21T05:23:16.865007Z",
            "url": "https://files.pythonhosted.org/packages/7c/06/1b57644c55a216594cd2d832f2519b69c1fb05d417ff09f27584d4549120/formatron-0.4.10-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "1dcb49cc57dc579eda120ffa175a5fb24c0bacfff4e684ede70220604d9a1f28",
                "md5": "eeb6b0cc7a5fbf6cb81af2e7417f47ff",
                "sha256": "0a49d3bcbcdd9389b72e84460612a4dcf858f5dd31896d5a65f2704441e092b9"
            },
            "downloads": -1,
            "filename": "formatron-0.4.10.tar.gz",
            "has_sig": false,
            "md5_digest": "eeb6b0cc7a5fbf6cb81af2e7417f47ff",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.10",
            "size": 46266,
            "upload_time": "2024-12-21T05:23:20",
            "upload_time_iso_8601": "2024-12-21T05:23:20.932527Z",
            "url": "https://files.pythonhosted.org/packages/1d/cb/49cc57dc579eda120ffa175a5fb24c0bacfff4e684ede70220604d9a1f28/formatron-0.4.10.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-12-21 05:23:20",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "Dan-wanna-M",
    "github_project": "formatron",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "formatron"
}
        
Elapsed time: 0.56385s