zerogpt


Name: zerogpt
Version: 1.2.0
Home page: https://github.com/RedPiarOfficial/ZeroGPT
Summary: Python client for interacting with the ZeroGPT API and generating images.
Upload time: 2025-07-11 09:54:53
Maintainer: Redpiar
Docs URL: None
Author: Redpiar
Requires Python: >=3.8
License: MIT
Keywords: ai, zerogpt, arting, text, image, free, api, uncensured, gpt, deepseek, chatgpt
Requirements: No requirements were recorded.
# ZeroGPT

[![Version](https://img.shields.io/badge/version-1.2.0-blue.svg)](https://github.com/RedPiarOfficial/ZeroGPT/releases)
[![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)
![Status](https://img.shields.io/badge/status-active-success.svg)
![Stage](https://img.shields.io/badge/stage-alpha-red.svg)

[Documentation](https://red-3.gitbook.io/zerogpt/)

**ZeroGPT** is a Python library for interacting with AI APIs, providing capabilities for text and image generation.

## Features

* Text generation using various models
* Image creation based on textual descriptions
* Support for an "uncensored" mode that gives less restricted responses
* Optimized memory and data handling
* Streamed data support
* Secure request authentication

## Installation

```bash
pip install zerogpt
```

## Usage

### Client Initialization

```python
from zerogpt import Client

client = Client()
```

### Text Generation

```python
# Simple request
response = client.send_message("Hi, how are you?")

# Request with instruction
response = client.send_message(
    "Tell me about space",
    instruction="You are an astronomy expert"
)

# Using "uncensored" mode
response = client.send_message(
    "Explain a complex topic",
    uncensored=True
)

# Using "think" mode (deeper reasoning)
response = client.send_message(
    "Solve a difficult math problem",
    think=True
)

# With context
messages = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"}
]
response = client.send_message(
    messages,
    think=True
)
```

### Image Generation

```python
# Create image
result = client.create_image(
    prompt="anime neko girl",
    samples=1,
    resolution=(768, 512),
    seed=-1,
    steps=50
)

# Get generated image
image = client.get_image(result['data']['request_id'])

# Save image
image.download(['path/to/save/image.png'])

# View image
image.open()
```

### Image to Prompt

```python
from zerogpt.utils.tools import image_to_prompt

resp = image_to_prompt('path/to/image.png')
```

### Working with Dummy Context[^1]

```python
from zerogpt.utils.prompt import Dummy

# Create context
dummy = Dummy()
dummy.create(messages=[
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"}
])

# Also possible for image generation
dummy = Dummy()
dummy.create(prompt='neko girl', steps=100)

# Save context
dummy.save("context.bin")

# Load context
dummy.load("context.bin")

# Use instead of messages:
# client.send_message(dummy)
# or
# client.create_image(dummy)
```


## Parameters

### send_message

* `input` (str or list): Text prompt or list of messages
* `instruction` (str, optional): System instruction
* `think` (bool, optional): Use model with deeper reasoning
* `uncensored` (bool, optional): Use unrestricted mode
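
A minimal sketch combining these parameters in one call. Whether every keyword argument shown in the sections above can be combined like this, and the exact shape of the returned `response`, are assumptions here:

```python
from zerogpt import Client

client = Client()

# Message history plus a system instruction and "think" mode.
messages = [
    {"role": "user", "content": "What is a black hole?"},
    {"role": "assistant", "content": "A region of spacetime with extremely strong gravity."},
    {"role": "user", "content": "Can anything escape one?"}
]
response = client.send_message(
    messages,
    instruction="You are an astronomy expert",
    think=True
)
print(response)  # the response format is not documented above, so it is simply printed
```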

### create_image

* `prompt` (str): Description of the desired image
* `samples` (int, optional): Number of samples
* `resolution` (tuple, optional): Image resolution (width, height)
* `seed` (int, optional): Seed for reproducibility
* `steps` (int, optional): Number of generation steps
* `negative_prompt` (str, optional): Description of undesired elements
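
The earlier example does not use `negative_prompt`; the sketch below adds it, using only the parameters listed above (accepted value ranges are assumptions):

```python
# Reuse the client from the earlier examples.
result = client.create_image(
    prompt="anime neko girl, detailed background",
    negative_prompt="blurry, low quality, extra fingers",  # elements to avoid
    samples=1,
    resolution=(768, 512),
    seed=42,        # fixed seed so the result is reproducible
    steps=50
)

image = client.get_image(result['data']['request_id'])
image.download(['path/to/save/image.png'])
```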

## Security

The library uses HMAC-SHA256 to sign requests and ensure secure data transmission. All requests are authenticated using timestamps to prevent replay attacks.
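
As an illustration of the general idea only (not ZeroGPT's actual wire format; the header names, payload shape, and secret handling below are hypothetical), an HMAC-SHA256 signature bound to a timestamp can be produced like this:

```python
import hashlib
import hmac
import time

def sign_request(secret: bytes, body: str) -> dict:
    """Return headers carrying a timestamp and an HMAC-SHA256 signature."""
    timestamp = str(int(time.time()))           # fresh timestamp per request
    message = f"{timestamp}.{body}".encode()    # bind the body to the timestamp
    signature = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

# The receiving side recomputes the HMAC with the shared secret and rejects
# requests whose timestamp is too old, which is what defeats replay attacks.
headers = sign_request(b"shared-secret", '{"prompt": "Hi"}')
```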

## Requirements

* Python 3.8+

## License

MIT License
Copyright (c) 2025 RedPiar

## Author

[RedPiar](https://t.me/RedPiar)

[^1]: Dummy compresses conversation context and request data, which is useful on systems with little RAM. A Dummy can also be saved to disk for even greater memory savings.

            
