| Field | Value |
|-------|-------|
| Name | nous-llm |
| Version | 0.1.5 |
| Summary | Intelligent No Frills LLM Router - A unified interface for multiple LLM providers |
| Upload time | 2025-08-24 01:00:06 |
| Requires Python | >=3.12 |
| License | MPL-2.0 |
| Keywords | ai, anthropic, gemini, llm, ml, openai, openrouter, xai |
# Nous LLM
> **Intelligent No Frills LLM Router** - A unified Python interface for multiple Large Language Model providers
[PyPI](https://badge.fury.io/py/nous-llm) · [Python 3.12+](https://www.python.org/downloads/) · [License: MPL-2.0](https://opensource.org/licenses/MPL-2.0) · [Ruff](https://github.com/astral-sh/ruff) · [Issues](https://github.com/amod-ml/nous-llm/issues)
## Why Nous LLM?
Switch between LLM providers with a single line of code. Build AI applications without vendor lock-in.
```python
# Same interface, different providers
config = ProviderConfig(provider="openai", model="gpt-4o") # OpenAI
config = ProviderConfig(provider="anthropic", model="claude-3-5-sonnet") # Anthropic
config = ProviderConfig(provider="gemini", model="gemini-2.5-pro") # Google
```
## ✨ Key Features
- **🔄 Unified Interface**: Single API for multiple LLM providers
- **⚡ Async Support**: Both synchronous and asynchronous interfaces
- **🛡️ Type Safety**: Full typing with Pydantic v2 validation
- **🔀 Provider Flexibility**: Easy switching between providers and models
- **☁️ Serverless Ready**: Optimized for AWS Lambda and Google Cloud Run
- **🚨 Error Handling**: Comprehensive error taxonomy with provider context
- **🔌 Extensible**: Plugin architecture for custom providers
## 🚀 Quick Start
### Install
```bash
pip install nous-llm
```
### Use in 3 Lines
```python
from nous_llm import generate, ProviderConfig, Prompt
config = ProviderConfig(provider="openai", model="gpt-4o")
response = generate(config, Prompt(input="What is the capital of France?"))
print(response.text) # "Paris is the capital of France."
```
## 📦 Supported Providers
| Provider | Popular Models | Latest Models |
|----------|---------------|---------------|
| **OpenAI** | GPT-4o, GPT-4-turbo, GPT-3.5-turbo | GPT-5, o3, o4-mini |
| **Anthropic** | Claude 3.5 Sonnet, Claude 3 Haiku | Claude Opus 4.1 |
| **Google** | Gemini 1.5 Pro, Gemini 1.5 Flash | Gemini 2.5 Pro |
| **xAI** | Grok Beta | Grok 4, Grok 4 Heavy |
| **OpenRouter** | Llama 3.3 70B, Mixtral | Llama 4 Maverick |
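Because the interface is uniform, comparing providers is a loop rather than a rewrite. A minimal sketch of such a comparison, assuming you have API keys for each provider (the provider/model pairs are illustrative, taken from the table above):
```python
from nous_llm import generate, ProviderConfig, Prompt

# Illustrative provider/model pairs from the table above; substitute
# models your API keys actually have access to.
candidates = [
    ("openai", "gpt-4o"),
    ("anthropic", "claude-3-5-sonnet-20241022"),
    ("gemini", "gemini-1.5-pro"),
]
prompt = Prompt(input="In one sentence, what is a large language model?")

for provider, model in candidates:
    config = ProviderConfig(provider=provider, model=model)
    response = generate(config, prompt)
    print(f"{provider}: {response.text}")
```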
## Installation
### Quick Install
```bash
# Using pip
pip install nous-llm
# Using uv (recommended)
uv add nous-llm
```
### Installation Options
```bash
# Install with specific provider support
pip install nous-llm[openai] # OpenAI only
pip install nous-llm[anthropic] # Anthropic only
pip install nous-llm[all] # All providers
# Development installation
pip install nous-llm[dev] # Includes testing tools
```
### Environment Setup
Set your API keys as environment variables:
```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="AIza..."
export XAI_API_KEY="xai-..."
export OPENROUTER_API_KEY="sk-or-..."
```
Or create a `.env` file:
```env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=AIza...
XAI_API_KEY=xai-...
OPENROUTER_API_KEY=sk-or-...
```
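If your runtime does not load `.env` files automatically, one common approach is to load them yourself before the first call. A minimal sketch, assuming the third-party `python-dotenv` package (not a nous-llm dependency):
```python
# Assumes python-dotenv is installed: pip install python-dotenv
from dotenv import load_dotenv

from nous_llm import generate, ProviderConfig, Prompt

load_dotenv()  # copies KEY=value pairs from .env into os.environ

# With OPENAI_API_KEY in the environment, no api_key argument is needed.
config = ProviderConfig(provider="openai", model="gpt-4o")
print(generate(config, Prompt(input="Hello!")).text)
```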
## Usage Examples
### 1. Basic Synchronous Usage
```python
from nous_llm import generate, ProviderConfig, Prompt
# Configure your provider
config = ProviderConfig(
    provider="openai",
    model="gpt-4o",
    api_key="your-api-key",  # or set OPENAI_API_KEY env var
)

# Create a prompt
prompt = Prompt(
    instructions="You are a helpful assistant.",
    input="What is the capital of France?",
)

# Generate response
response = generate(config, prompt)
print(response.text)  # "Paris is the capital of France."
```
### 2. Asynchronous Usage
```python
import asyncio
from nous_llm import agenenerate, ProviderConfig, Prompt
async def main():
    config = ProviderConfig(
        provider="anthropic",
        model="claude-3-5-sonnet-20241022",
    )

    prompt = Prompt(
        instructions="You are a creative writing assistant.",
        input="Write a haiku about coding.",
    )

    response = await agenenerate(config, prompt)
    print(response.text)

asyncio.run(main())
```
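Since `agenenerate` is an ordinary coroutine, several prompts can also be fanned out concurrently with standard `asyncio` tooling. A sketch (not from the library docs) using `asyncio.gather`:
```python
import asyncio

from nous_llm import agenenerate, ProviderConfig, Prompt

async def main():
    config = ProviderConfig(provider="openai", model="gpt-4o-mini")
    prompts = [
        Prompt(input="Name one prime number."),
        Prompt(input="Name one chemical element."),
        Prompt(input="Name one programming language."),
    ]
    # Issue all requests concurrently rather than awaiting them one by one.
    responses = await asyncio.gather(*(agenenerate(config, p) for p in prompts))
    for response in responses:
        print(response.text)

asyncio.run(main())
```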
### 3. Client-Based Approach (Recommended for Multiple Calls)
```python
from nous_llm import LLMClient, ProviderConfig, Prompt
# Create a reusable client
client = LLMClient(ProviderConfig(
    provider="gemini",
    model="gemini-1.5-pro",
))

# Generate multiple responses efficiently
prompts = [
    Prompt(instructions="You are helpful.", input="What is AI?"),
    Prompt(instructions="You are creative.", input="Write a poem."),
]

for prompt in prompts:
    response = client.generate(prompt)
    print(f"{response.provider}: {response.text}")
```
## Advanced Features
### 4. Provider-Specific Parameters
```python
from nous_llm import generate, ProviderConfig, Prompt, GenParams
# OpenAI GPT-5 with reasoning mode
config = ProviderConfig(provider="openai", model="gpt-5")
params = GenParams(
    max_tokens=1000,
    temperature=0.7,
    extra={"reasoning": True},  # OpenAI-specific
)

# OpenAI O-series reasoning model
config = ProviderConfig(provider="openai", model="o3-mini")
params = GenParams(
    max_tokens=1000,
    temperature=0.7,  # Will be automatically set to 1.0 with a warning
)

# Anthropic with thinking tokens
config = ProviderConfig(provider="anthropic", model="claude-3-5-sonnet-20241022")
params = GenParams(
    extra={"thinking": True},  # Anthropic-specific
)

response = generate(config, prompt, params)
```
> **Note for Developers**:
>
> **Parameter Changes in OpenAI's Latest Models:**
> - **Token Limits**: GPT-5 series and O-series models (o1, o3, o4-mini) use `max_completion_tokens` instead of `max_tokens`. The library automatically handles this with intelligent parameter mapping and fallback mechanisms.
> - **Temperature**: O-series reasoning models (o1, o3, o4-mini) and GPT-5 thinking/reasoning variants require `temperature=1.0`. The library automatically adjusts this and warns you if a different value is requested.
>
> You can continue using the standard parameters in `GenParams` - they will be automatically converted to the correct parameter for each model.
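Conceptually, the mapping behaves like the sketch below. This is an illustration of the idea only, not the library's actual internals; `map_openai_params` and `REASONING_PREFIXES` are hypothetical names:
```python
import warnings

# Hypothetical helper illustrating the parameter mapping described above.
REASONING_PREFIXES = ("o1", "o3", "o4", "gpt-5")

def map_openai_params(model: str, max_tokens: int, temperature: float) -> dict:
    payload: dict = {}
    if model.startswith(REASONING_PREFIXES):
        # Reasoning models take max_completion_tokens and a fixed temperature.
        payload["max_completion_tokens"] = max_tokens
        if temperature != 1.0:
            warnings.warn(f"{model} requires temperature=1.0; ignoring {temperature}")
        payload["temperature"] = 1.0
    else:
        payload["max_tokens"] = max_tokens
        payload["temperature"] = temperature
    return payload
```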
### 5. Custom Base URLs & Proxies
```python
# Use OpenRouter as a proxy for OpenAI models
config = ProviderConfig(
    provider="openrouter",
    model="openai/gpt-4o",
    base_url="https://openrouter.ai/api/v1",
    api_key="your-openrouter-key",
)
```
### 6. Error Handling
```python
from nous_llm import generate, AuthError, RateLimitError, ProviderError
try:
    response = generate(config, prompt)
except AuthError as e:
    print(f"Authentication failed: {e}")
except RateLimitError as e:
    print(f"Rate limit exceeded: {e}")
except ProviderError as e:
    print(f"Provider error: {e}")
```
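The taxonomy also makes simple retry policies easy to express. A minimal sketch, with arbitrary backoff values rather than a library recommendation:
```python
import time

from nous_llm import generate, RateLimitError

def generate_with_retry(config, prompt, attempts: int = 3):
    """Retry only on rate limits; other errors propagate immediately."""
    for attempt in range(attempts):
        try:
            return generate(config, prompt)
        except RateLimitError:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # arbitrary backoff: 1s, 2s, 4s, ...
```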
## Production Integration
### FastAPI Web Service
```python
from fastapi import FastAPI, HTTPException
from nous_llm import agenenerate, ProviderConfig, Prompt, AuthError
app = FastAPI(title="Nous LLM API")
@app.post("/generate")
async def generate_text(request: dict):
    try:
        config = ProviderConfig(**request["config"])
        prompt = Prompt(**request["prompt"])

        response = await agenenerate(config, prompt)
        return {
            "text": response.text,
            "usage": response.usage,
            "provider": response.provider,
        }
    except AuthError as e:
        raise HTTPException(status_code=401, detail=str(e))
```
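Calling the endpoint is then an ordinary JSON POST. An illustrative client call, assuming the third-party `requests` package and the service running on localhost:8000:
```python
import requests

payload = {
    "config": {"provider": "openai", "model": "gpt-4o-mini"},
    "prompt": {"instructions": "You are concise.", "input": "Define an LLM."},
}
resp = requests.post("http://localhost:8000/generate", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["text"])
```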
### AWS Lambda Function
```python
import json
from nous_llm import LLMClient, ProviderConfig, Prompt
# Global client for connection reuse across invocations
client = LLMClient(ProviderConfig(
    provider="openai",
    model="gpt-4o-mini",
))

def lambda_handler(event, context):
    try:
        prompt = Prompt(
            instructions=event["instructions"],
            input=event["input"],
        )

        response = client.generate(prompt)

        return {
            "statusCode": 200,
            "body": json.dumps({
                "text": response.text,
                "usage": response.usage.model_dump() if response.usage else None,
            }),
        }
    except Exception as e:
        return {
            "statusCode": 500,
            "body": json.dumps({"error": str(e)}),
        }
```
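For a quick local smoke test, the handler can be invoked directly with a hand-built event; no AWS runtime is required (the field names match the handler above, and `context` is unused here):
```python
# Local smoke test for the lambda_handler defined above.
event = {
    "instructions": "You are a helpful assistant.",
    "input": "What is the capital of France?",
}
result = lambda_handler(event, context=None)
print(result["statusCode"], result["body"])
```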
---
## Development
### Project Setup
```bash
# Clone the repository
git clone https://github.com/amod-ml/nous-llm.git
cd nous-llm
# Install with development dependencies
uv sync --group dev
# Install pre-commit hooks (includes GPG validation)
./scripts/setup-gpg-hook.sh
```
### Testing & Quality
```bash
# Run all tests
uv run pytest
# Run with coverage
uv run pytest --cov=nous_llm
# Format and lint code
uv run ruff format
uv run ruff check
# Type checking
uv run mypy src/nous_llm
```
### Adding a New Provider
1. Create adapter in `src/nous_llm/adapters/`
2. Implement the `AdapterProtocol` (a skeletal sketch follows this list)
3. Register in `src/nous_llm/core/adapters.py`
4. Add model patterns to `src/nous_llm/core/registry.py`
5. Add comprehensive tests in `tests/`
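A skeletal adapter for step 2 might look like the following. This is a hedged sketch: the exact method names and signatures of `AdapterProtocol` are assumptions for illustration, so check the protocol definition in `src/nous_llm/core` before copying it.
```python
# Hypothetical skeleton; consult AdapterProtocol in the source for the
# real method names and signatures.
from nous_llm import ProviderConfig, Prompt

class MyProviderAdapter:
    def __init__(self, config: ProviderConfig) -> None:
        self.config = config

    def generate(self, prompt: Prompt, params=None):
        # 1. Translate Prompt/params into the provider's request format.
        # 2. Call the provider's HTTP API.
        # 3. Normalize the reply into the library's response type.
        raise NotImplementedError("illustrative skeleton only")
```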
## Examples & Resources
### Complete Examples
- 📁 `examples/basic_usage.py` - Core functionality demos
- 📁 `examples/fastapi_service.py` - REST API service
- 📁 `examples/lambda_example.py` - AWS Lambda function
### Documentation & Support
- 📖 [Full Documentation](https://github.com/amod-ml/nous-llm#readme)
- 🐛 [Issue Tracker](https://github.com/amod-ml/nous-llm/issues)
- 💬 [Discussions](https://github.com/amod-ml/nous-llm/discussions)
## 🐛 Found an Issue?
We'd love to hear from you! Please [report any issues](https://github.com/amod-ml/nous-llm/issues/new) you encounter. When reporting issues, please include:
- Python version
- Nous LLM version (`pip show nous-llm`)
- Minimal code to reproduce the issue
- Full error traceback
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
### 🔒 Security Requirements for Contributors
**ALL commits to this repository MUST be GPG-signed.** This is automatically enforced by a pre-commit hook.
#### Why GPG Signing?
- **🔐 Authentication**: Every commit is cryptographically verified
- **🛡️ Integrity**: Commits cannot be tampered with after signing
- **📝 Non-repudiation**: Contributors cannot deny authorship of signed commits
- **🔗 Supply Chain Security**: Protection against commit spoofing attacks
#### Quick Setup for Contributors
**New to the project?**
```bash
# Automated setup - installs hook and guides through GPG configuration
./scripts/setup-gpg-hook.sh
```
**Already have GPG configured?**
```bash
# Enable GPG signing for this repository
git config commit.gpgsign true
git config user.signingkey YOUR_KEY_ID
```
#### Important Notes
- ❌ Unsigned commits will be automatically rejected
- ✅ The pre-commit hook validates your GPG setup before every commit
- 📋 You must add your GPG public key to your GitHub account
- 🚫 The hook cannot be bypassed with `--no-verify`
#### Need Help?
- 📖 **Full Setup Guide**: [GPG Signing Documentation](docs/GPG-SIGNING.md)
- 🔧 **Troubleshooting**: Run `./scripts/setup-gpg-hook.sh` for diagnostics
- 🧪 **Quick Test**: Try making a commit - the hook will guide you if anything's wrong
### Development Requirements
- ✅ Python 3.12+
- 🔐 All commits must be GPG-signed
- 🧪 Code must pass all tests and linting
- 📋 Follow established patterns and conventions
## 📄 License
This project is licensed under the **Mozilla Public License 2.0** - see the [LICENSE](LICENSE) file for details.
---
<p align="center">
<strong>Built with ❤️ for the AI community</strong><br>
<em>🔒 GPG signing ensures the authenticity and integrity of all code contributions</em>
</p>