| Field | Value |
| --- | --- |
| Name | llmformat |
| Version | 0.0.2.1 |
| Summary | Format LLM language by using LALR(1) grammar. Supports JSON, XML, etc. |
| upload_time | 2023-12-28 03:03:41 |
| maintainer | |
| docs_url | None |
| author | Qiuling Xu |
| requires_python | >=3.8 |
| license | |
| keywords | python, llm, format, json, regex |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# LLM Format
![](./assets/llmlogo.jpg)
## Introduction
LLM Format is a large language model formatter that constrains the output of a language model to follow certain rules. It is designed to be flexible and efficient, so that it can adapt to different use cases.
We currently support **vllm** and any **LALR(1) grammar**, including the **JSON** format.
Unlike other packages such as lm-format-enforcer, jsonformer, and guidance, this package ensures that the text generated by the LLM is both sound and complete.
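To illustrate what "sound and complete" means for constrained decoding (all names below are illustrative, not llmformat's API): at each step, every token that would break the grammar is masked out (soundness), while every token that keeps the partial output a valid prefix stays available (completeness). A toy sketch over the language of balanced parentheses:

```python
import math

# Toy illustration (not llmformat's implementation): mask logits so that
# only tokens keeping the output a valid *prefix* of the target language
# remain sampleable. Here the language is balanced parentheses.

def is_valid_prefix(text: str) -> bool:
    depth = 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # closed more than opened: no completion can fix this
                return False
        else:
            return False  # character outside the language
    return True

def mask_logits(prefix: str, vocab: list[str], logits: list[float]) -> list[float]:
    # Soundness: invalid continuations get -inf, so they are never sampled.
    # Completeness: every continuation that stays a valid prefix is kept.
    return [
        logit if is_valid_prefix(prefix + tok) else -math.inf
        for tok, logit in zip(vocab, logits)
    ]

masked = mask_logits("((", ["(", ")", "x"], [0.5, 0.2, 0.9])
```

Here `"x"` is masked because no completion of `"((x"` is a balanced-parentheses string, while both `"("` and `")"` survive.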
## Tutorial
### Installation
`pip install llmformat`
### Usage
To enforce a new type of grammar, you need a grammar file written in EBNF. A JSON example is provided [here](https://github.com/qiulingxu/llmformat/blob/main/llmformat/grammar_files/json_min.bnf).
Once the grammar file is written, only a one-line code change is needed to enforce the generation.
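For orientation, a minimal JSON-style grammar in EBNF might look like the following (a sketch for illustration only, not the contents of `json_min.bnf`):

```
value  : object | array | STRING | NUMBER | "true" | "false" | "null"
object : "{" [pair ("," pair)*] "}"
pair   : STRING ":" value
array  : "[" [value ("," value)*] "]"
```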
In vllm, add this option to `sampling_param`:

```python
sampling_param.logits_processors = [
    llmformat.llminterface.build_vllm_logits_processor(
        model, "/root/llmformat/llmformat/json_min.bnf"
    )
]
```
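For context, vllm treats each entry of `logits_processors` as a callable that receives the token ids generated so far plus the logits for the next step, and returns adjusted logits. A minimal pure-Python stand-in (illustrative only; llmformat's processor does the grammar checking internally) might look like:

```python
import math

# Illustrative stand-in for a vllm-style logits processor (not llmformat's
# implementation): given the token ids generated so far and the raw logits
# for the next token, return logits with disallowed tokens masked out.

def make_ban_processor(banned_token_ids):
    banned = set(banned_token_ids)

    def processor(generated_token_ids, logits):
        # Assign -inf to banned token ids so they can never be sampled.
        return [
            -math.inf if i in banned else logit
            for i, logit in enumerate(logits)
        ]

    return processor

proc = make_ban_processor([1])
new_logits = proc([], [0.1, 0.7, 0.2])
```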
### Example
An example of using Llama2 with vllm can be found [here](https://github.com/qiulingxu/llmformat/blob/main/examples/vllm_llama2.ipynb). Note that you may need to change the path to the grammar file.
Planned improvements:
- Add support for customized JSON
- Add more integrations
## Known Issues
We use a cache to accelerate grammar parsing, so generation becomes faster the longer it runs.
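The caching idea can be sketched with simple memoization (illustrative only, not llmformat's internals): once a prefix has been checked against the grammar, the verdict is reused on later decoding steps instead of being recomputed.

```python
from functools import lru_cache

# Illustrative memoization (not llmformat's internals): the first check of
# a prefix is computed; repeated checks of the same prefix hit the cache.

@lru_cache(maxsize=None)
def prefix_is_valid(prefix: str) -> bool:
    depth = 0
    for ch in prefix:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return True  # valid as a prefix: depth never went negative

prefix_is_valid("(()")  # computed on first call, cached afterwards
```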