# 🔌 pyllmlib

[![PyPI Downloads](https://static.pepy.tech/badge/pyllmlib)](https://pepy.tech/projects/pyllmlib)
[![PyPI Version](https://img.shields.io/pypi/v/pyllmlib.svg)](https://pypi.org/project/pyllmlib/)
![Python Version](https://img.shields.io/pypi/pyversions/pyllmlib.svg)
[![License](https://img.shields.io/github/license/yazirofi/pyllmlib)](LICENSE)
[![GitHub Stars](https://img.shields.io/github/stars/yazirofi/pyllmlib?style=social)](https://github.com/yazirofi/pyllmlib)

---

<div style="
  width:200px; 
  height:100px; 
  overflow:hidden; 
  display:flex; 
  justify-content:center; 
  align-items:center; 
  margin:auto; 
  position:relative; 
  top:0; 
  bottom:0; 
  left:0; 
  right:0; 
  border-radius:10px; 
  box-shadow:0 4px 8px rgba(0,0,0,0.2);
">
  <img src="https://raw.githubusercontent.com/yazirofi/cdn/main/yazirofi.jpg" 
       style="width:100%; height:100%; object-fit:cover;" 
       alt="Cropped Image">
</div>

---

**pyllmlib** is a lightweight, provider-agnostic Python package that gives you **one simple interface** to work with multiple Large Language Model (LLM) APIs.

Stop juggling different SDKs and client libraries — whether it's **OpenAI GPT**, **Google Gemini**, **Mistral AI**, or **Groq**, you write your code once and switch providers in seconds.

🎯 **Ideal for** developers who want to:

* Experiment with different LLMs quickly
* Build multi-provider AI applications
* Avoid vendor lock-in with a consistent API

---

## ✨ Features

* 🔌 **Unified API** — A single `generate()` function for all providers
* 🌐 **Multi-Provider Support** — OpenAI, Gemini, Mistral, Groq (with more coming soon)
* 🧠 **Consistent Message Format** — Same request style across providers
* 🔐 **Flexible Config** — Use env vars, inline setup, or config files
* 📦 **Minimal Dependencies** — Only needs `requests`
* 🔄 **Quick Provider Switching** — Change models with one line
* 🛡️ **Automatic Token Handling** — Prevents overflows & context errors
* 📜 **Role-Based Conversations** — System, user, assistant messages
* 🔧 **Extensible** — Add your own providers with minimal code
* 🚀 **No Vendor Lock-In** — Swap providers without rewriting logic

---

## 📦 Installation

From PyPI (recommended):

```bash
pip install pyllmlib
```

From GitHub:

```bash
# Latest release
pip install git+https://github.com/yazirofi/pyllmlib.git

# Development version
git clone https://github.com/yazirofi/pyllmlib.git
cd pyllmlib
pip install -e .
```

**Requirements**:

* Python 3.7+
* `requests` (installed automatically)

---

## 🚀 Quick Start

```python
from pyllmlib import config, generate

# Configure your preferred LLM
config(
    provider="openai",
    api_key="your-openai-api-key",
    model="gpt-4"
)

# Generate text
response = generate("Explain quantum computing in simple terms")
print(response)
```

✅ Same code works with **any provider** — just change the config.
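
For example, the sketch below runs the same prompt against two of the providers listed later in this README, changing only the `config()` call (API keys are placeholders):

```python
from pyllmlib import config, generate

prompt = "Explain quantum computing in simple terms"

# Provider A: OpenAI
config(provider="openai", api_key="sk-...", model="gpt-4")
print(generate(prompt))

# Provider B: Groq -- the generation code itself is untouched
config(provider="groq", api_key="gsk_...", model="mixtral-8x7b-32768")
print(generate(prompt))
```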

---

## ⚙️ Configuration

### 1. Direct in Code

```python
from pyllmlib import config

# OpenAI
config(provider="openai", api_key="sk-...", model="gpt-4")

# Google Gemini
config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

# Mistral
config(provider="mistral", api_key="...", model="mistral-large-latest")

# Groq
config(provider="groq", api_key="gsk_...", model="mixtral-8x7b-32768")
```

### 2. Environment Variables

```bash
LLM_PROVIDER=openai
LLM_API_KEY=sk-your-openai-key
LLM_MODEL=gpt-4
LLM_BASE_URL=https://api.openai.com/v1  # Optional
```

```python
from pyllmlib import config, generate

config()  # Loads from env
print(generate("What is LLM?"))
```
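
If you keep these variables in a `.env` file instead of exporting them in your shell, one option is to load them with the third-party `python-dotenv` package before calling `config()`. A minimal sketch, assuming `python-dotenv` is installed separately (it is not a pyllmlib dependency):

```python
from dotenv import load_dotenv  # third-party: pip install python-dotenv
from pyllmlib import config, generate

load_dotenv()  # reads .env and populates os.environ (LLM_PROVIDER, LLM_API_KEY, ...)
config()       # pyllmlib then picks the values up from the environment
print(generate("What is LLM?"))
```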

---

## 💬 Usage Examples

### Text Generation

```python
from pyllmlib import config, generate

config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

print(generate("What is the capital of France?"))

prompt = """
Write a Python function to calculate factorial with error handling and docstring.
"""
print(generate(prompt))
```

### Interactive Chat

```python
from pyllmlib import config, chat, reset_chat

config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

while q := input("Ask: "):
    print(chat(q))

reset_chat()
```
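
Unlike `generate()`, `chat()` keeps the conversation history between calls; that history is what `reset_chat()` clears. A short sketch of the behavior:

```python
from pyllmlib import config, chat, reset_chat

config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

print(chat("My name is Ada."))
print(chat("What is my name?"))   # history is kept, so the model can answer "Ada"

reset_chat()                      # wipe the history
print(chat("What is my name?"))   # fresh conversation; the model no longer knows
```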
---

## Styling the Generated Output

Use `style()` in place of `print()` to render responses with formatting:

```python
from pyllmlib import config, chat, reset_chat, style, generate

config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")

style(generate("Write a friendly wish in one line"))  # style() instead of print()

while q := input("Ask: "):
    style(chat(q))  # style() instead of print()
    
reset_chat()
```

---

## 🌐 Supported Providers

### ✅ Currently Supported

* **OpenAI** — `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`
* **Google Gemini** — `gemini-2.5-flash`, `gemini-1.5-flash`
* **Mistral AI** — `mistral-large-latest`, `mistral-small-latest`
* **Groq** — `mixtral-8x7b-32768`, `llama2-70b-4096`, `gemma-7b-it`

### 🔜 Coming Soon

* Anthropic Claude
* Cohere
* Ollama & LM Studio (local hosting)
* Hugging Face models

---

## πŸ› Troubleshooting

* **Auth Errors** → Check that the API key is valid and uses the provider's expected format (e.g. `sk-...` for OpenAI, `gsk_...` for Groq)
* **Model Not Found** → Verify the model name is correct for the configured provider (a defensive pattern is sketched below)
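
For either case, it helps to surface the failure instead of letting it crash a longer run. A defensive sketch, assuming provider errors are raised as ordinary Python exceptions (the library's exact exception types are not documented here):

```python
from pyllmlib import config, generate

config(provider="openai", api_key="sk-...", model="gpt-4")

try:
    print(generate("ping"))
except Exception as exc:  # exact exception types depend on the provider backend
    print(f"Request failed: {exc}")  # e.g. invalid key or unknown model name
```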

---

## 📊 Best Practices

```python
# ✅ Reuse one config for many prompts
from pyllmlib import config, generate

config(provider="openai", api_key="sk-...", model="gpt-4")

prompts = ["What is an LLM?", "Name three LLM providers."]
for p in prompts:
    print(generate(p))

# ❌ Don't reconfigure on every request
```

💡 **Cost Optimization**: Use `gpt-3.5-turbo` for simple tasks, `gpt-4` for complex ones.
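
One way to act on this while still reusing configuration is to switch models only when the task class changes. A sketch built from the documented `config()` and `generate()` calls:

```python
from pyllmlib import config, generate

_current_model = None

def ask(prompt: str, complex_task: bool = False) -> str:
    """Route simple prompts to gpt-3.5-turbo and complex ones to gpt-4,
    reconfiguring only when the model actually changes."""
    global _current_model
    model = "gpt-4" if complex_task else "gpt-3.5-turbo"
    if model != _current_model:
        config(provider="openai", api_key="sk-...", model=model)
        _current_model = model
    return generate(prompt)

print(ask("What is 2 + 2?"))                                  # cheap model
print(ask("Design a database schema for a library system.",
          complex_task=True))                                 # stronger model
```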

---

## 📚 API Reference

* `config(**kwargs)` → Set provider, API key, model
* `generate(prompt, **kwargs)` → Single text output
* `generate_stream(prompt, **kwargs)` → Streaming output (sketched below)
* `chat(message)` → Conversational interface
* `reset_chat()` → Clear conversation history
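
`generate_stream()` is the only function above not demonstrated elsewhere in this README. A minimal sketch, assuming it yields partial text chunks as they arrive (check the library source for the actual return type):

```python
from pyllmlib import config, generate_stream

config(provider="openai", api_key="sk-...", model="gpt-4")

# Assumption: generate_stream() returns an iterator of partial text chunks.
for chunk in generate_stream("Tell a one-paragraph story about a robot."):
    print(chunk, end="", flush=True)  # print tokens as they stream in
print()
```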

---


## 📄 License

Licensed under the **MIT License** – see [LICENSE](LICENSE).

---

## 👨‍💻 Author

**Shay Yazirofi**

* GitHub: [@yazirofi](https://github.com/yazirofi)
* Email: [yazirofi@gmail.com](mailto:yazirofi@gmail.com)

---

⭐ If you find **pyllmlib** useful, please **star the repo** on GitHub!
📖 More docs & tutorials: [Wiki](https://github.com/yazirofi/pyllmlib/)

---


            
