gguf-llama


Name: gguf-llama
Version: 0.0.18
Home page: https://github.com/laelhalawani/gguf_llama
Summary: Wrapper for simplified use of Llama2 GGUF quantized models.
Upload time: 2024-01-13 18:59:58
Author: Łael Al-Halawani
Keywords: llama, gguf, quantized models, llama gguf, cpu inference
Requirements: No requirements were recorded.
# gguf_llama

Provides a `LlamaAI` class with a Python interface for generating text using Llama GGUF models.

## Features

- Load Llama models and tokenizers automatically from a GGUF file
- Generate text completions for prompts
- Automatically adjust the model size to fit longer prompts, up to a specified limit
- Convenient methods for tokenizing and untokenizing text
- Fix text formatting issues before generation

## Usage

```python
from llama_ai import LlamaAI

ai = LlamaAI("my_model.gguf", max_tokens=500, max_input_tokens=100)
```
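Here `max_tokens` presumably bounds the model's context and `max_input_tokens` caps the prompt length; this README does not spell out the exact semantics, so treat that reading as an assumption.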
Generate text by calling infer():
```python
text = ai.infer("Once upon a time")
print(text)
```
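The feature list also mentions helpers for tokenizing and untokenizing text. A minimal sketch of how that might look, assuming the helpers are named `tokenize` and `untokenize` (these names are inferred from the feature list, not confirmed by this README):

```python
from llama_ai import LlamaAI

ai = LlamaAI("my_model.gguf", max_tokens=500, max_input_tokens=100)

# Hypothetical round-trip through the tokenizer helpers (method names assumed).
tokens = ai.tokenize("Once upon a time")  # text -> token ids
print(len(tokens))                        # prompt length in tokens
print(ai.untokenize(tokens))              # token ids -> text
```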
## Installation

```bash
pip install gguf_llama
```

## Documentation

See the [API documentation](https://laelhalawani.github.io/gguf_llama) for full details on classes and methods. 

## Contributing

Contributions are welcome! Open an issue or PR to improve gguf_llama.

            
