gguf-connector

Name: gguf-connector
Version: 2.6.4
Summary: GGUF connector(s) with GUI
Upload time: 2025-08-25 06:49:32
Requirements: none recorded
## GGUF connector

GGUF (GPT-Generated Unified Format) is the successor to GGML (GPT-Generated Model Language); it was released on August 21, 2023. GPT, incidentally, stands for Generative Pre-trained Transformer.

[<img src="https://raw.githubusercontent.com/calcuis/gguf-connector/master/gguf.gif" width="128" height="128">](https://github.com/calcuis/gguf-connector)
[![Static Badge](https://img.shields.io/badge/version-2.6.4-green?logo=github)](https://github.com/calcuis/gguf-connector/releases)
[![Static Badge](https://badgen.net/badge/pack/0.1.3/green?icon=windows)](https://github.com/calcuis/chatgpt-model-selector/releases)

This package is a simple graphical user interface (GUI) application that uses ctransformers or llama.cpp to interact with a chat model and generate responses.

Install the connector via pip (once only):
```
pip install gguf-connector
```
Update the connector (if a previous version is installed) by:
```
pip install gguf-connector --upgrade
```
With this version, you can interact directly with the GGUF file(s) in the current directory via a simple command.
### Graphical User Interface (GUI)
Select model(s) with ctransformers (optional: need ctransformers to work; pip install ctransformers):
```
ggc c
```
Select model(s) with llama.cpp connector (optional: need llama-cpp-python to work; get it [here](https://github.com/abetlen/llama-cpp-python/releases) or nightly [here](https://github.com/calcuis/llama-cpp-python/releases)):
```
ggc cpp
```
[<img src="https://raw.githubusercontent.com/calcuis/chatgpt-model-selector/master/demo.gif" width="350" height="280">](https://github.com/calcuis/chatgpt-model-selector/blob/main/demo.gif)
[<img src="https://raw.githubusercontent.com/calcuis/chatgpt-model-selector/master/demo1.gif" width="350" height="280">](https://github.com/calcuis/chatgpt-model-selector/blob/main/demo1.gif)

### Command Line Interface (CLI)
Select model(s) with ctransformers:
```
ggc g
```
Select model(s) with llama.cpp connector:
```
ggc gpp
```
Select model(s) with vision connector:
```
ggc v
```
Opt for a CLIP handler, then a vision model; enter a picture link to process (see example [here](https://huggingface.co/calcuis/llava-gguf))
#### Metadata reader (CLI only)
Select model(s) with metadata reader:
```
ggc r
```
Select model(s) with metadata fast reader:
```
ggc r2
```
Select model(s) with tensor reader (optional: need torch to work; pip install torch):
```
ggc r3
```
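The readers above walk the GGUF file header. For reference, a GGUF file begins with the magic bytes `GGUF`, a version, a tensor count, and a metadata key-value count, all little-endian. A minimal stdlib sketch of parsing that fixed-size header (illustrative only, not the connector's own reader):

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed GGUF header: magic (4s), version (u32), tensor count (u64), KV count (u64)."""
    magic, version = struct.unpack_from("<4sI", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    tensor_count, kv_count = struct.unpack_from("<QQ", data, 8)
    return {"version": version, "tensors": tensor_count, "kv_pairs": kv_count}

# Build a fake header for demonstration: version 3, 2 tensors, 5 KV pairs.
sample = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
hdr = read_gguf_header(sample)
```

The real metadata readers go on to decode each key-value pair after this header; the sketch stops at the fixed-size prefix.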
#### PDF analyzer (currently a beta CLI feature)
Load PDF(s) into a model with ctransformers:
```
ggc cp
```
Load PDF(s) into a model with llama.cpp connector:
```
ggc pp
```
Optional: requires pypdf; pip install pypdf
#### Speech recognizer (beta; currently accepts WAV format only)
Prompt WAV(s) into a model with ctransformers:
```
ggc cs
```
Prompt WAV(s) into a model with llama.cpp connector:
```
ggc ps
```
Optional: requires speechrecognition and pocketsphinx; pip install speechrecognition pocketsphinx
#### Speech recognizer (via Google API; online)
Prompt WAV(s) into a model with ctransformers:
```
ggc cg
```
Prompt WAV(s) into a model with llama.cpp connector:
```
ggc pg
```
#### Container
Launch the page/container:
```
ggc w
```
#### Divider
Divide gguf into different part(s) with a cutoff point (size):
```
ggc d2
```
#### Merger
Merge all gguf into one:
```
ggc m2
```
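The divider and merger cut a GGUF into parts at a size cutoff and reassemble them. The underlying idea can be illustrated with a short stdlib sketch operating on a stand-in byte blob (the actual commands also handle GGUF part-naming conventions):

```python
def split_bytes(data: bytes, cutoff: int) -> list:
    """Split data into chunks of at most `cutoff` bytes (the size cutoff point)."""
    return [data[i:i + cutoff] for i in range(0, len(data), cutoff)]

def merge_bytes(parts: list) -> bytes:
    """Concatenate the parts back into a single blob."""
    return b"".join(parts)

blob = bytes(range(10)) * 100        # 1000-byte stand-in for a model file
parts = split_bytes(blob, 256)       # 256-byte cutoff -> 4 parts
restored = merge_bytes(parts)        # round-trips to the original bytes
```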
#### Merger (safetensors)
Merge all safetensors into one (optional: need torch to work; pip install torch):
```
ggc ma
```
#### Splitter (checkpoint)
Split checkpoint into components (optional: need torch to work; pip install torch):
```
ggc s
```
#### Quantizor
Quantize safetensors to fp8 (downscale; optional: need torch to work; pip install torch):
```
ggc q
```
Quantize safetensors to fp32 (upscale; optional: need torch to work; pip install torch):
```
ggc q2
```
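Downscaling to fp8 trades precision for size, and upscaling to fp32 widens storage without recovering lost bits. The effect of such a round-trip can be demonstrated with the stdlib's IEEE-754 half float (fp16 here, since `struct` has no fp8 format code; the connector's fp8 path relies on torch):

```python
import struct

def roundtrip_half(x: float) -> float:
    """Pack a float into IEEE-754 half precision and back, losing precision."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

value = 3.141592653589793
low = roundtrip_half(value)          # nearest representable half-precision value
error = abs(value - low)             # small but nonzero rounding error
```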
#### Convertor
Convert safetensors to gguf (auto; optional: need torch to work; pip install torch):
```
ggc t
```
#### Convertor (alpha)
Convert safetensors to gguf (meta; optional: need torch to work; pip install torch):
```
ggc t1
```
#### Convertor (beta)
Convert safetensors to gguf (unlimited; optional: need torch to work; pip install torch):
```
ggc t2
```
#### Convertor (gamma)
Convert gguf to safetensors (reversible; optional: need torch to work; pip install torch):
```
ggc t3
```
#### Swapper (lora)
Rename lora tensor (base/unet swappable; optional: need torch to work; pip install torch):
```
ggc la
```
#### Remover
Tensor remover:
```
ggc rm
```
#### Renamer
Tensor renamer:
```
ggc rn
```
#### Extractor
Tensor extractor:
```
ggc ex
```
#### Cutter
Get the cutter for bf16/f16 to q2-q8 quantization (see user guide [here](https://pypi.org/project/gguf-cutter)) by:
```
ggc u
```
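Q2-Q8 quantization stores weights as small integers plus a per-block scale factor. A minimal pure-Python sketch of symmetric 8-bit quantization conveys the idea (illustrative only; llama.cpp's actual Q formats use fixed block layouts, often with extra offsets):

```python
def quantize_q8(values: list):
    """Symmetric 8-bit quantization: map floats to ints in [-127, 127] plus one scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize_q8(q: list, scale: float) -> list:
    """Recover approximate floats from the quantized ints and the shared scale."""
    return [x * scale for x in q]

weights = [0.12, -0.5, 0.33, 1.0, -0.75]
q, scale = quantize_q8(weights)
restored = dequantize_q8(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Each reconstructed weight lands within one quantization step of the original; smaller bit-widths (q2-q7) simply use fewer integer levels per block.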
#### Comfy
Download comfy pack (see user guide [here](https://pypi.org/project/gguf-comfy)) by:
```
ggc y
```
#### Node
Clone node (see user/setup guide [here](https://pypi.org/project/gguf-node)) by:
```
ggc n
```
#### Pack
Take pack (see user guide [here](https://pypi.org/project/gguf-pack)) by:
```
ggc p
```
#### PackPack
Take packpack (see user guide [here](https://pypi.org/project/framepack)) by:
```
ggc p2
```
#### FramePack (1-click long video generation)
Take framepack (portable packpack) by:
```
ggc p1
```
Run framepack, ggc edition, by (optional: need framepack to work; pip install framepack):
```
ggc f2
```
#### Smart contract generator (solidity)
Activate backend and frontend by (optional: need transformers to work; pip install transformers):
```
ggc g1
```
#### Video 1 (image to video)
Activate backend and frontend by (optional: need torch, diffusers to work; pip install torch diffusers):
```
ggc v1
```
#### Video 2 (text to video)
Activate backend and frontend by (optional: need torch, diffusers to work; pip install torch diffusers):
```
ggc v2
```
#### Image 2 (text to image)
Activate backend and frontend by (optional: need torch, diffusers to work; pip install torch diffusers):
```
ggc i2
```
#### Kontext 2 (image editor)
Activate backend and frontend by (optional: need torch, diffusers to work; pip install torch diffusers):
```
ggc k2
```
With lora selection:
```
ggc k1
```
With memory-economy mode (for low/no VRAM, or without a GPU):
```
ggc k3
```
#### Krea 4 (image generator)
Activate backend and frontend by (optional: need torch, diffusers to work; pip install torch diffusers):
```
ggc k4
```
#### Note 2 (OCR)
Activate backend and frontend by (optional: need transformers to work; pip install transformers):
```
ggc n2
```
#### Speech 2 (text to speech)
Activate backend and frontend by (optional: need diao to work; pip install diao):
```
ggc s2
```
#### Higgs 2 (text to audio)
Activate backend and frontend by (optional: need higgs to work; pip install higgs):
```
ggc h2
```
Multilingual support, e.g., Spanish, German, Korean, etc.
#### Bagel 2 (any to any)
Activate backend and frontend by (optional: need bagel2 to work; pip install bagel2):
```
ggc b2
```
Opt for a VAE, then a model file (see example [here](https://huggingface.co/calcuis/bagel-gguf))
#### Voice 2 (text to voice)
Activate backend and frontend by (optional: need chichat to work; pip install chichat):
```
ggc c2
```
Opt for a VAE, a CLIP, and a model file (see example [here](https://huggingface.co/calcuis/chatterbox-gguf))
#### Audio 2 (text to audio)
Activate backend and frontend by (optional: need fishaudio to work; pip install fishaudio):
```
ggc o2
```
Opt for a codec, then a model file (see example [here](https://huggingface.co/calcuis/openaudio-gguf))
#### Krea 7 (image generator)
Activate backend and frontend by (optional: need dequantor to work; pip install dequantor):
```
ggc k7
```
Opt for a model file in the current directory (see example [here](https://huggingface.co/calcuis/krea-gguf))
#### Kontext 8 (image editor)
Activate backend and frontend by (optional: need dequantor to work; pip install dequantor):
```
ggc k8
```
Opt for a model file in the current directory (see example [here](https://huggingface.co/calcuis/kontext-gguf))
#### Flux connector (all-in-one)
Select flux image model(s) with k connector:
```
ggc k
```
#### Qwen image connector
Select qwen image model(s) with q5 connector:
```
ggc q5
```
#### Qwen image edit connector
Select image edit model(s) with q6 connector:
```
ggc q6
```
#### Lumina image connector
Select lumina image model(s) with l2 connector:
```
ggc l2
```
#### Wan video connector
Select wan video model(s) with w2 connector:
```
ggc w2
```
#### Ltxv connector
Select ltxv model(s) with x2 connector:
```
ggc x2
```
#### Mochi connector
Select mochi model(s) with m1 connector:
```
ggc m1
```
#### Kx-lite connector
Select kontext model(s) with k0 connector:
```
ggc k0
```
Opt for a model file to interact with (see example [here](https://huggingface.co/calcuis/kontext-gguf))
#### SD-lite connector
Select sd3.5 model(s) with s3 connector:
```
ggc s3
```
Opt for a model file to interact with (see example [here](https://huggingface.co/calcuis/sd3.5-lite-gguf))
#### Gudio 2 (text to speech)
Activate backend and frontend by (optional: need gudio to work; pip install gudio):
```
ggc g2
```
Opt for a model, then a CLIP (see example [here](https://huggingface.co/gguf-org/gudio))
### Menu
Enter the main menu for selecting a connector or getting pre-trained trial model(s).
```
ggc m
```
[<img src="https://raw.githubusercontent.com/calcuis/gguf-connector/master/demo1.gif" width="350" height="300">](https://github.com/calcuis/gguf-connector/blob/main/demo1.gif)

#### Import as a module
Include the connector selection menu in your code by:
```
from gguf_connector import menu
```
[<img src="https://raw.githubusercontent.com/calcuis/gguf-connector/master/demo.gif" width="350" height="200">](https://github.com/calcuis/gguf-connector/blob/main/demo.gif)

For the standalone version, please refer to the repository in the reference list below.
#### References
[model selector](https://github.com/calcuis/chatgpt-model-selector) (standalone version: [installable package](https://github.com/calcuis/chatgpt-model-selector/releases))

[cgg](https://github.com/calcuis/cgg) (cmd-based tool)
#### Resources
[ctransformers](https://github.com/marella/ctransformers)
[llama.cpp](https://github.com/ggerganov/llama.cpp)
#### Article
[understanding gguf and the gguf-connector](https://medium.com/@whiteblanksheet/understanding-gguf-and-the-gguf-connector-a-comprehensive-guide-3b1fc0f938ba)
#### Website
[gguf.org](https://gguf.org) (you can download the frontend from [github](https://github.com/gguf-org/gguf-org.github.io) and host it locally; the backend is the Ethereum blockchain)

[gguf.io](https://gguf.io) (a mirror of gguf.us; note: this web3 domain may be inaccessible in some regions or via some service providers; if so, visit the site above/below instead, which is identical)

[gguf.us](https://gguf.us)

            
