# 🔌 plugllm
[Downloads](https://pepy.tech/projects/plugllm) · [PyPI](https://pypi.org/project/plugllm/) · [License](LICENSE) · [GitHub](https://github.com/firoziya/plugllm)
**plugllm** is a unified and provider-agnostic Python package that lets you interact with multiple Large Language Model (LLM) APIs using a single, consistent interface. Whether you're using OpenAI's GPT models, Google's Gemini, Mistral AI, Groq, or other providers, plugllm abstracts away the complexity of different SDKs and APIs, giving you one simple way to generate text from any LLM.
🎯 **Perfect for**: Developers who want to experiment with different LLMs, switch between providers easily, or build applications that support multiple AI models without vendor lock-in.
***
## ✨ Key Features
- 🔌 **Unified API Interface** — One `generate()` function works with all supported providers
- 🌐 **Multi-Provider Support** — OpenAI, Google Gemini, Mistral AI, Groq, and more
- 🧠 **Consistent Message Format** — Same request structure across all providers
- 🔐 **Flexible Configuration** — Environment variables, inline setup, or config files
- 📦 **Lightweight Dependencies** — Only requires the Python `requests` library
- 🔄 **Easy Provider Switching** — Change models with a single line of code
- 🛡️ **Context Management** — Automatic handling of token/character limits
- 📜 **Role-Based Conversations** — Support for system, user, and assistant message roles (see the sketch after this list)
- 🔧 **Extensible Architecture** — Add custom providers with minimal code
- 🚀 **No Vendor Lock-in** — Switch between providers without changing your application logic
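For example, the role-based format can be passed to `generate()` as a message list instead of a plain string. This is a minimal sketch assuming the common `{"role", "content"}` message shape implied by the API reference below:

```python
from plugllm import config, generate

config(provider="openai", api_key="sk-...", model="gpt-4")

# A role-based conversation passed as a message list rather than a string.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain vendor lock-in in one sentence."},
]
print(generate(messages))
```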
***
## 📦 Installation
### Install from PyPI (Recommended)
```bash
pip install plugllm
```
### Install from Source
```bash
# Latest stable release
pip install git+https://github.com/firoziya/plugllm.git
# Development version
git clone https://github.com/firoziya/plugllm.git
cd plugllm
pip install -e .
```
### Requirements
- Python 3.7+
- `requests` library (automatically installed)
***
## 🚀 Quick Start
```python
from plugllm import config, generate
# Configure your preferred LLM
config(
    provider="openai",
    api_key="your-openai-api-key",
    model="gpt-4"
)
# Generate text with a simple function call
response = generate("Explain quantum computing in simple terms")
print(response)
```
That's it! The same code works with any supported provider by just changing the configuration.
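For instance, pointing the same code at a different provider is just a new `config()` call (reusing the Groq settings shown in the Configuration section below):

```python
# Same application logic, different backend: only config() changes.
config(
    provider="groq",
    api_key="gsk_your-groq-key",
    model="deepseek-r1-distill-llama-70b"
)
response = generate("Explain quantum computing in simple terms")
print(response)
```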
***
## ⚙️ Configuration
### Method 1: Direct Configuration
Configure directly in your Python code:
```python
from plugllm import config
# OpenAI Configuration
config(
    provider="openai",
    api_key="sk-your-openai-key",
    model="gpt-4",
    base_url=None  # Optional: for custom endpoints
)

# Google Gemini Configuration
config(
    provider="gemini",
    api_key="your-gemini-api-key",
    model="gemini-2.5-flash"
)

# Mistral AI Configuration
config(
    provider="mistral",
    api_key="your-mistral-key",
    model="mistral-large-latest"
)

# Groq Configuration
config(
    provider="groq",
    api_key="gsk_your-groq-key",
    model="deepseek-r1-distill-llama-70b"
)
```
### Method 2: Environment Variables
For better security and easier deployment:
```bash
# Set environment variables (e.g., in your shell or a .env file)
LLM_PROVIDER=openai
LLM_API_KEY=sk-your-openai-key
LLM_MODEL=gpt-4
LLM_BASE_URL=https://api.openai.com/v1 # Optional
```
Then in your Python code:
```python
from plugllm import config, generate
# Configuration will be loaded from environment variables
config()
response = generate("What is an LLM?")
print(response)
```
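For quick experiments you can also set the variables in-process. This sketch assumes `config()` reads the `LLM_*` variables at call time, as the example above implies; in production prefer your shell, a .env loader, or a secret manager:

```python
import os

# For local testing only: set the variables before calling config().
os.environ["LLM_PROVIDER"] = "gemini"
os.environ["LLM_API_KEY"] = "your-gemini-api-key"
os.environ["LLM_MODEL"] = "gemini-2.5-flash"

from plugllm import config, generate

config()  # picks up the LLM_* variables set above
print(generate("Hello!"))
```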
***
## 💬 Usage Examples
### Basic Text Generation
```python
from plugllm import config, generate
config(provider="gemini", api_key="AIzaSyC5...", model="gemini-2.5-flash")
# Simple question
response = generate("What is the capital of France?")
print(response)
# Complex prompt
prompt = """
Write a Python function that calculates the factorial of a number.
Include error handling and documentation.
"""
code = generate(prompt)
print(code)
```
### Interactive Chat Application
```python
from plugllm import config, chat, reset_chat
config(provider="gemini", api_key="AIzaSyC5...", model="gemini-2.5-flash")
while (user_input := input("Ask: ")):
    print(chat(user_input))

reset_chat()  # Optional: clear the stored conversation history
```
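Unlike `generate()`, `chat()` keeps conversation history between calls, which is why `reset_chat()` exists. A short sketch of that behavior, assuming the history is held in module-level state:

```python
from plugllm import config, chat, reset_chat

config(provider="gemini", api_key="AIzaSyC5...", model="gemini-2.5-flash")

print(chat("My name is Ada."))
print(chat("What is my name?"))  # answers from the remembered context

reset_chat()  # start the next conversation with a clean history
```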
***
## 🌐 Supported Providers & Models
### OpenAI
- **Models**: `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`, `gpt-3.5-turbo-16k`
- **API Key**: Get from [OpenAI Platform](https://platform.openai.com/api-keys)
```python
config(provider="openai", api_key="sk-...", model="gpt-4")
```
### Google Gemini
- **Models**: `gemini-2.5-flash`, `gemini-1.5-flash`, `gemini-2.0-flash`
- **API Key**: Get from [Google AI Studio](https://makersuite.google.com/app/apikey)
```python
config(provider="gemini", api_key="AIza...", model="gemini-2.5-flash")
```
### Mistral AI
- **Models**: `mistral-large-latest`, `mistral-medium-latest`, `mistral-small-latest`
- **API Key**: Get from [Mistral Console](https://console.mistral.ai/)
```python
config(provider="mistral", api_key="your-key", model="mistral-large-latest")
```
### Groq
- **Models**: `mixtral-8x7b-32768`, `llama2-70b-4096`, `gemma-7b-it`
- **API Key**: Get from [Groq Console](https://console.groq.com/keys)
```python
config(provider="groq", api_key="gsk_...", model="mixtral-8x7b-32768")
```
### Coming Soon
- 🔜 **Anthropic Claude** - Constitutional AI models
- 🔜 **Cohere** - Command and Generate models
- 🔜 **Ollama** - Local model hosting
- 🔜 **LM Studio** - Local model interface
- 🔜 **Hugging Face** - Open source models
***
## 🐛 Troubleshooting
### Common Issues
**1. Authentication Errors**
```python
# ❌ Wrong API key format
config(provider="openai", api_key="wrong-key")
# ✅ Correct API key format
config(provider="openai", api_key="sk-proj-...")
```
**2. Model Not Found**
```python
# ❌ Invalid model name
config(provider="openai", model="gpt-6")
# ✅ Valid model name
config(provider="openai", model="gpt-4")
```
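**3. Transient Network or Rate-Limit Failures**

plugllm does not yet ship built-in retries (see the roadmap), but a thin wrapper is easy to add. The sketch below is hypothetical: it assumes `generate()` raises an ordinary exception on failure (the exact exception type is provider-dependent and not specified here):

```python
import time

from plugllm import generate

def generate_with_retry(prompt, attempts=3, backoff=2.0):
    """Retry generate() with linear backoff on any exception."""
    for attempt in range(1, attempts + 1):
        try:
            return generate(prompt)
        except Exception:
            if attempt == attempts:
                raise  # out of attempts: surface the original error
            time.sleep(backoff * attempt)  # wait longer after each failure
```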
***
## 📊 Performance & Best Practices
### Optimal Usage Patterns
```python
# ✅ Good: Reuse configuration
from plugllm import config, generate
config(provider="openai", api_key="sk-...", model="gpt-4")
# Multiple requests with same config
for prompt in prompts:
    response = generate(prompt)
    process_response(response)

# ❌ Avoid: Reconfiguring for each request
for prompt in prompts:
    config(provider="openai", api_key="sk-...", model="gpt-4")
    response = generate(prompt)
```
### Cost Optimization
```python
# Use appropriate models for different tasks
config(provider="openai", model="gpt-3.5-turbo") # Cheaper for simple tasks
simple_response = generate("What's 2+2?")
config(provider="openai", model="gpt-4") # More capable for complex tasks
complex_response = generate("Analyze this complex dataset...")
```
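Since reconfiguring on every request is discouraged above, a small router can combine both tips: pick the model per prompt, but only call `config()` when the model actually changes. `smart_generate` and its length threshold are illustrative, not part of plugllm:

```python
from plugllm import config, generate

_current_model = None  # remember the last configured model

def smart_generate(prompt, api_key):
    """Route short prompts to a cheaper model; reconfigure only on change."""
    global _current_model
    model = "gpt-3.5-turbo" if len(prompt) < 200 else "gpt-4"
    if model != _current_model:
        config(provider="openai", api_key=api_key, model=model)
        _current_model = model
    return generate(prompt)
```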
***
## 📚 API Reference
### Core Functions
#### `config(**kwargs)`
Configure the LLM provider and model.
**Parameters:**
- `provider` (str): Provider name ("openai", "gemini", "mistral", "groq")
- `api_key` (str): API key for the provider
- `model` (str): Model name to use
- `base_url` (str, optional): Custom API endpoint
#### `generate(prompt, **kwargs)`
Generate text using the configured LLM.
**Parameters:**
- `prompt` (str or list): Text prompt or conversation messages
- `**kwargs`: Additional parameters for the specific provider
**Returns:**
- `str`: Generated text response
#### `generate_stream(prompt, **kwargs)`
Generate streaming text response.
**Parameters:**
- Same as `generate()`
**Yields:**
- `str`: Text chunks as they're generated
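A minimal usage sketch for `generate_stream()`, assuming it yields plain text chunks as documented above:

```python
from plugllm import config, generate_stream

config(provider="openai", api_key="sk-...", model="gpt-4")

# Print chunks as they arrive; flush=True makes output appear incrementally.
for chunk in generate_stream("Tell me a short story"):
    print(chunk, end="", flush=True)
print()
```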
***
## 📄 License
This project is licensed under the **MIT License** - see the [LICENSE](LICENSE) file for details.
```
MIT License
Copyright (c) 2025 Yash Kumar Firoziya
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
***
## 👨‍💻 Author
**Yash Kumar Firoziya**
- GitHub: [@firoziya](https://github.com/firoziya)
- Email: [ykfiroziya@gmail.com](mailto:ykfiroziya@gmail.com)
***
## 🙏 Acknowledgments
- Thanks to all the LLM providers for their amazing APIs
- Inspired by the need for a unified interface across different AI models
- Built with ❤️ for the Python AI community
***
## 📈 Roadmap
### Version 0.2.0
- [ ] Support for Anthropic Claude
- [ ] Async/await support
- [ ] Better error handling and retries
- [ ] Configuration validation
### Version 0.3.0
- [ ] Local model support (Ollama, LM Studio)
- [ ] Conversation memory management
- [ ] Built-in prompt templates
- [ ] Cost tracking and usage analytics
### Version 1.0.0
- [ ] Production-ready stability
- [ ] Comprehensive documentation
- [ ] Performance optimizations
- [ ] Plugin system for extensions
***
⭐ **If you find plugllm useful, please give it a star on GitHub!**
📖 **For more examples and tutorials, visit our [documentation](https://github.com/firoziya/plugllm/wiki)**