# gguf 0.14.0 (PyPI)

- Name: gguf
- Version: 0.14.0
- Summary: Read and write ML models in GGUF for GGML
- Home page: https://ggml.ai
- Author: GGML
- Requires Python: >=3.8
- Keywords: ggml, gguf, llama.cpp
- Upload time: 2025-01-08 19:19:02
## gguf

This is a Python package for writing binary files in the [GGUF](https://github.com/ggerganov/ggml/pull/302)
(GGML Universal File) format.

See [convert_hf_to_gguf.py](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py)
as an example for its usage.
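For quick orientation, here is a minimal sketch of writing a small GGUF file with `gguf.GGUFWriter`, modeled loosely on the `examples/writer.py` script referenced below. The key names, tensor contents, and output path are placeholders, and the resulting file is not a usable model.

```python
# Minimal sketch (not an official example): write a tiny GGUF file.
# Key names, tensor data, and the output path are illustrative only.
import numpy as np
from gguf import GGUFWriter

writer = GGUFWriter("example.gguf", "llama")   # output path and architecture are placeholders
writer.add_block_count(12)                     # example model metadata
writer.add_uint32("answer", 42)                # arbitrary custom key/value pair
writer.add_tensor("tensor1", np.ones((32,), dtype=np.float32))

# A GGUF file is written in three parts: header, key/value metadata, tensor data.
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```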

## Installation
```sh
pip install gguf
```

## API Examples/Simple Tools

[examples/writer.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/examples/writer.py) — Generates `example.gguf` in the current directory to demonstrate generating a GGUF file. Note that this file cannot be used as a model.

[gguf/scripts/gguf_dump.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/gguf/scripts/gguf_dump.py) — Dumps a GGUF file's metadata to the console.

[gguf/scripts/gguf_set_metadata.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/gguf/scripts/gguf_set_metadata.py) — Allows changing simple metadata values in a GGUF file by key.

[gguf/scripts/gguf_convert_endian.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/gguf/scripts/gguf_convert_endian.py) — Allows converting the endianness of GGUF files.

[gguf/scripts/gguf_new_metadata.py](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/gguf/scripts/gguf_new_metadata.py) — Copies a GGUF file with added/modified/removed metadata values.
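The dump and metadata scripts above are built on the package's `GGUFReader`. The sketch below lists a file's metadata keys and tensor records; the file path is a placeholder, and the attribute names (`fields`, `tensors`, `tensor.name`, `tensor.shape`, `tensor.tensor_type`) are assumed from the gguf-py reader API.

```python
# Sketch: inspect a GGUF file's metadata keys and tensor records with GGUFReader.
# The path is a placeholder; attribute names follow the gguf-py reader API.
from gguf import GGUFReader

reader = GGUFReader("example.gguf")

# Metadata key/value pairs are exposed through reader.fields (keyed by name).
for name in reader.fields:
    print("field:", name)

# Tensor records carry the tensor name, shape, and quantization type.
for tensor in reader.tensors:
    print("tensor:", tensor.name, tensor.shape, tensor.tensor_type)
```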

## Development
Maintainers who participate in development of this package are advised to install it in editable mode:

```sh
cd /path/to/llama.cpp/gguf-py

pip install --editable .
```

**Note**: The editable install may fail with a message saying that editable installation currently requires `setup.py`; this indicates your pip is too old. In that case, upgrade pip to the latest version:

```sh
pip install --upgrade pip
```

## Automatic publishing with CI

A GitHub workflow publishes a release automatically when a tag in the format described below is created.

1. Bump the version in `pyproject.toml`.
2. Create a tag named `gguf-vx.x.x` where `x.x.x` is the semantic version number.

```sh
git tag -a gguf-v1.0.0 -m "Version 1.0 release"
```

3. Push the tags.

```sh
git push origin --tags
```

## Manual publishing
If you want to publish the package manually for any reason, you need to have `twine` and `build` installed:

```sh
pip install build twine
```

Then, follow these steps to release a new version:

1. Bump the version in `pyproject.toml`.
2. Build the package:

```sh
python -m build
```

3. Upload the generated distribution archives:

```sh
python -m twine upload dist/*
```

## Run Unit Tests

From the root of this repository, run the following command to execute all of the unit tests:

```bash
python -m unittest discover ./gguf-py -v
```
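As an illustration of the test style, a write/read round-trip test might look like the sketch below; the test class, file name, and tensor name are hypothetical and not part of the existing suite.

```python
# Hypothetical round-trip test sketch: write a tiny GGUF file, read it back,
# and check that the tensor record survived. Names here are illustrative only.
import tempfile
import unittest
from pathlib import Path

import numpy as np
from gguf import GGUFReader, GGUFWriter


class TestGGUFRoundTrip(unittest.TestCase):
    def test_write_then_read(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            path = str(Path(tmp) / "roundtrip.gguf")

            # Write a minimal file with a single float32 tensor.
            writer = GGUFWriter(path, "llama")
            writer.add_tensor("t0", np.arange(8, dtype=np.float32))
            writer.write_header_to_file()
            writer.write_kv_data_to_file()
            writer.write_tensors_to_file()
            writer.close()

            # Read it back and confirm the tensor is present by name.
            reader = GGUFReader(path)
            self.assertIn("t0", [t.name for t in reader.tensors])


if __name__ == "__main__":
    unittest.main()
```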

## TODO
- [ ] Include conversion scripts as command line entry points in this package.


            
