| Name | nexgen-sdk |
| --- | --- |
| Version | 0.1.2 |
| download | https://files.pythonhosted.org/packages/1b/5a/2f51194baa6bc3f1e1d8dc025a314850efbe22211478572450702d0d6180/nexgen_sdk-0.1.2.tar.gz |
| home_page | None |
| Summary | A custom Python SDK for interacting with AI models, compatible with OpenAI's API format |
| upload_time | 2025-10-24 05:44:23 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | None |
| license | None |
| keywords | ai, openai, sdk, machine-learning, nlp, llm, chatbot |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# AI SDK Python
A comprehensive Python SDK for interacting with AI models, compatible with OpenAI's API format and supporting multiple providers.
## Features
- 🤖 **Multiple Provider Support**: OpenAI, LiteLLM, and VLLM providers
- 🔄 **Streaming Support**: Real-time streaming responses with event handling
- 🛡️ **Error Handling**: Comprehensive error handling with custom exceptions
- 🔧 **Flexible Configuration**: Environment variables, config files, and programmatic setup
- 📦 **Easy Integration**: Simple, intuitive API design
- 🎯 **Type Safety**: Full type hints support
## Installation
```bash
pip install nexgen-sdk
```
For development:
```bash
pip install "nexgen-sdk[dev]"
```
## Quick Start
### Basic Usage
```python
from ai_sdk import AISDKClient
# Initialize client
client = AISDKClient(
    api_key="your-api-key",
    provider="openai"  # or "vllm", "litellm"
)

# Create chat completion
response = client.chat().create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hello, world!"}
    ]
)

print(response['choices'][0]['message']['content'])
```
### Streaming
```python
# Stream responses in real-time
response = client.chat().create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in response:
    if chunk['choices'][0]['delta'].get('content'):
        print(chunk['choices'][0]['delta']['content'], end='', flush=True)
```
### Provider-Specific Examples
#### VLLM Provider (requires base_url)
```python
# Local vLLM deployment
client = AISDKClient(
    provider="vllm",
    base_url="http://localhost:8000",  # Required for vllm
    api_key="your-api-key"
)

# Custom vLLM endpoint
client = AISDKClient(
    provider="vllm",
    base_url="https://your-vllm-endpoint.com",  # Required
    api_key="your-api-key"
)

# New format supports complex user content objects
response = client.chat().create(
    system_prompt="You are a helpful AI assistant",
    messages=[
        {
            "role": "user",
            "content": {
                "query": "What is Python?",
                "detail_level": "intermediate",
                "include_examples": True
            }
        }
    ],
    model="your-model-name"
)
```
## Configuration
### Environment Variables
```bash
export AI_SDK_API_KEY="your-api-key"
export AI_SDK_BASE_URL="https://api.openai.com/v1"
export AI_SDK_TIMEOUT="30.0"
```
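If you prefer to wire these variables up explicitly, here is a minimal sketch; it assumes the constructor accepts `api_key`, `base_url`, and `timeout` keyword arguments, as the builder methods below suggest:

```python
import os

from ai_sdk import AISDKClient

# Read the AI_SDK_* variables exported above; the keyword arguments
# mirror the builder methods and are an assumption here, not confirmed API.
client = AISDKClient(
    api_key=os.environ["AI_SDK_API_KEY"],
    base_url=os.environ.get("AI_SDK_BASE_URL", "https://api.openai.com/v1"),
    timeout=float(os.environ.get("AI_SDK_TIMEOUT", "30.0")),
)
```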
### Builder Pattern
```python
client = (AISDKClient.builder()
          .with_provider("vllm")
          .with_api_key("your-key")
          .with_base_url("https://your-vllm-endpoint")
          .with_timeout(60.0)
          .build())
```
## Providers
### OpenAI Provider
- **Base URL**: `https://api.openai.com/v1`
- **Models**: GPT-3.5, GPT-4, and all OpenAI models
- **Features**: Chat, completions, embeddings
### VLLM Provider
- **Required Configuration**: `base_url` parameter is mandatory
- **Custom Endpoints**: Supports any VLLM deployment (local, cloud, custom)
- **New Format**: Enhanced message format with complex user content
- **Streaming**: Optimized real-time streaming
**Note**: When using `provider="vllm"`, you must specify `base_url` pointing to your vLLM deployment.
### LiteLLM Provider
- **Unified Interface**: Access 100+ models through LiteLLM (see the sketch below)
- **Model Support**: OpenAI, Anthropic, Cohere, and more
- **Fallback**: Automatic fallback between providers
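A minimal LiteLLM sketch, assuming the provider accepts the same `chat().create()` call shown in the Quick Start and routes the model name through LiteLLM:

```python
from ai_sdk import AISDKClient

# Select the LiteLLM provider; the call shape mirrors the Quick Start.
client = AISDKClient(
    provider="litellm",
    api_key="your-api-key"
)

# Any model LiteLLM can route should work here; "gpt-3.5-turbo" is used
# as a familiar example rather than a confirmed model list.
response = client.chat().create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}]
)
```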
## API Reference
### Client
- `AISDKClient()` - Main client class
- `chat()` - Access chat completions
- `completions()` - Access text completions
- `embeddings()` - Access embeddings (both sketched below)
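A sketch of the two non-chat accessors; the request and response shapes are assumptions based on OpenAI's API format, which this SDK mirrors:

```python
# Text completion via the completions() accessor; the model name and
# payload shape follow OpenAI conventions and are assumptions here.
completion = client.completions().create(
    model="gpt-3.5-turbo-instruct",
    prompt="Say hello"
)

# Embeddings via the embeddings() accessor; the response indexing
# assumes an OpenAI-style payload.
embedding = client.embeddings().create(
    model="text-embedding-ada-002",
    input="Hello, world!"
)
print(embedding['data'][0]['embedding'][:5])
```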
### Chat Completions
- `create()` - Create chat completion or stream
- Parameters: `messages`, `model`, `temperature`, `stream`, etc. (combined in the sketch below)
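For example, combining the documented parameters in one call; only `messages`, `model`, `temperature`, and `stream` are named in this reference, and the prompt text and values are illustrative:

```python
# temperature and stream come from the parameter list above.
response = client.chat().create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize vLLM in one line."}],
    temperature=0.2,
    stream=False
)
```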
### Error Handling
```python
from ai_sdk.exceptions import (
    AISDKException,
    APIException,
    AuthenticationException,
    RateLimitException
)

try:
    response = client.chat().create(...)
except AuthenticationException:
    print("Invalid API key")
except RateLimitException:
    print("Rate limit exceeded")
except AISDKException as e:
    print(f"SDK Error: {e}")
```
## Development
### Setup
```bash
git clone https://github.com/Dhruv1969Karnwal/ai-sdk-python
cd ai-sdk-python
pip install -e ".[dev]"
```
### Running Tests
```bash
pytest
```
### Code Formatting
```bash
black ai_sdk
flake8 ai_sdk
mypy ai_sdk
```
## Examples
See the `examples/` directory for comprehensive usage examples:
- `vllm_examples.py` - Complete VLLM provider examples with different deployments
- `new_format_example.py` - VLLM new format examples
- `debug_streaming.py` - Streaming debug examples
- `test_clean_streaming.py` - Clean streaming tests
### VLLM Quick Examples
```python
# Local vLLM deployment
client = AISDKClient(
    provider="vllm",
    base_url="http://localhost:8000",
    api_key="your-key"
)

# Custom endpoint
client = AISDKClient(
    provider="vllm",
    base_url="https://your-vllm-endpoint.com",
    api_key="your-key"
)

# Missing base_url will show a helpful error message
# client = AISDKClient(provider="vllm", api_key="your-key")  # Error!
```
## Requirements
- Python 3.7+
- httpx>=0.23.0
- typing_extensions>=3.7.4 (for Python < 3.8)
### Optional Dependencies
- litellm>=1.0.0 (for LiteLLM provider; availability check sketched below)
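To detect whether the optional dependency is installed before selecting the LiteLLM provider, a simple guard works (plain Python, nothing SDK-specific):

```python
# Guard against a missing optional dependency before choosing provider="litellm".
try:
    import litellm  # noqa: F401  # only needed for the LiteLLM provider
    HAS_LITELLM = True
except ImportError:
    HAS_LITELLM = False

provider = "litellm" if HAS_LITELLM else "openai"
```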
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Support
- 📖 [Documentation](https://ai-sdk.readthedocs.io)
- 🐛 [Issues](https://github.com/Dhruv1969Karnwal/ai-sdk-python/issues)
- 💬 [Discussions](https://github.com/Dhruv1969Karnwal/ai-sdk-python/discussions)
## Contributing
Contributions are welcome! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
## Changelog
See [CHANGELOG.md](CHANGELOG.md) for version history.