| Field | Value |
| --- | --- |
| Name | gguf |
| Version | 0.11.0 |
| Summary | Read and write ML models in GGUF for GGML |
| Home page | https://ggml.ai |
| Author | GGML |
| Maintainer | None |
| License | None |
| Requires Python | >=3.8 |
| Keywords | ggml, gguf, llama.cpp |
| Upload time | 2024-12-12 16:23:36 |
| Docs URL | None |
| Requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |
## gguf
This is a Python package for writing binary files in the [GGUF](https://github.com/ggerganov/ggml/pull/302)
(GGML Universal File) format.
See [convert_hf_to_gguf.py](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py)
as an example for its usage.
## Installation
```sh
pip install gguf
```
## API Examples/Simple Tools
[examples/writer.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/examples/writer.py) — Generates `example.gguf` in the current directory to demonstrate writing a GGUF file. Note that the generated file is not a usable model.
[scripts/gguf_dump.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf_dump.py) — Dumps a GGUF file's metadata to the console.
[scripts/gguf_set_metadata.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf_set_metadata.py) — Allows changing simple metadata values in a GGUF file by key.
[scripts/gguf_convert_endian.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf_convert_endian.py) — Allows converting the endianness of GGUF files.
[scripts/gguf_new_metadata.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf_new_metadata.py) — Copies a GGUF file with added/modified/removed metadata values.
## Development
If you are developing this package, install it in editable mode:
```sh
cd /path/to/llama.cpp/gguf-py
pip install --editable .
```
**Note**: This may require upgrading pip, indicated by an error message saying that editable installation currently requires `setup.py`.
In that case, upgrade pip to the latest version:
```sh
pip install --upgrade pip
```
## Automatic publishing with CI
A GitHub workflow publishes a release automatically when a tag in the expected format is created.
1. Bump the version in `pyproject.toml`.
2. Create a tag named `gguf-vx.x.x` where `x.x.x` is the semantic version number.
```sh
git tag -a gguf-v1.0.0 -m "Version 1.0 release"
```
3. Push the tags.
```sh
git push origin --tags
```
## Manual publishing
If you want to publish the package manually for any reason, you need to have `twine` and `build` installed:
```sh
pip install build twine
```
Then, follow these steps to release a new version:
1. Bump the version in `pyproject.toml`.
2. Build the package:
```sh
python -m build
```
3. Upload the generated distribution archives:
```sh
python -m twine upload dist/*
```
## Run Unit Tests
From the root of this repository, run all the unit tests with:
```bash
python -m unittest discover ./gguf-py -v
```
## TODO
- [ ] Include conversion scripts as command line entry points in this package.
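For the TODO above, one plausible shape (a hypothetical `pyproject.toml` fragment, not the package's actual configuration — the entry-point names and module paths are made up) would be:

```toml
[project.scripts]
# Hypothetical: expose a conversion script as a console command.
gguf-convert-hf = "gguf.scripts.convert_hf_to_gguf:main"
```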