| Field | Value |
| --- | --- |
| Name | litegen |
| Version | 0.0.66 |
| Home page | https://github.com/santhosh/ |
| Summary | All popular Framework HF integration kit |
| Upload time | 2025-02-02 04:47:19 |
| Author | Kammari Santhosh |
| Requires Python | <4.0,>=3.10 |
| License | MIT |
| Requirements | None recorded |
# Litegen
Litegen is a lightweight Python wrapper for managing LLM interactions, supporting both local Ollama models and the OpenAI API. It provides a simple, unified interface for chat completions with streaming capabilities.
## Installation
```bash
pip install litegen
```
## Features
- 🚀 Simple unified interface for LLM interactions
- 🤖 Support for both local Ollama models and OpenAI
- 📡 Built-in streaming capabilities
- 🛠 Function calling support
- 🔄 Context management for conversations
- 🎯 GPU support for enhanced performance
## Quick Start
```python
from litegen import completion, pp_completion

# Simple completion
response = completion(
    model="mistral",  # or any Ollama/OpenAI model
    prompt="What is the capital of France?"
)
print(response.choices[0].message.content)

# Streaming completion with pretty print
pp_completion(
    model="llama2",
    prompt="Write a short story about a robot",
    temperature=0.7
)
```
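`completion()` also accepts a pre-built `messages` list in place of `prompt` (see the API reference below). A minimal sketch, assuming the same OpenAI-style response object as above:

```python
from litegen import completion

# Raw chat messages instead of a bare prompt string
messages = [
    {"role": "system", "content": "You are a concise assistant"},
    {"role": "user", "content": "Name three prime numbers"},
]
response = completion(model="mistral", messages=messages)
print(response.choices[0].message.content)
```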
## Advanced Usage
### System Prompts and Context
```python
response = completion(
    model="mistral",
    system_prompt="You are a helpful math tutor",
    prompt="Explain the Pythagorean theorem",
    context=[
        {"role": "user", "content": "Can you help me with math?"},
        {"role": "assistant", "content": "Of course! What would you like to know?"}
    ]
)
```
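To carry a conversation across multiple turns, one approach is to append each exchange back into `context` before the next call. A sketch, assuming the response shape shown in Quick Start:

```python
from litegen import completion

context = []
for user_msg in ["Can you help me with math?", "What is 7 squared?"]:
    response = completion(
        model="mistral",
        system_prompt="You are a helpful math tutor",
        prompt=user_msg,
        context=context,
    )
    reply = response.choices[0].message.content
    # Record both sides of the turn so the next call sees the full history
    context.append({"role": "user", "content": user_msg})
    context.append({"role": "assistant", "content": reply})
```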
### Function Calling
```python
def get_weather(location: str, unit: str = "celsius"):
    """Get weather for a location"""
    pass

response = completion(
    model="gpt-3.5-turbo",
    prompt="What's the weather in Paris?",
    tools=[get_weather]
)
```
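The shape of a tool-call response isn't documented here; if it follows the OpenAI chat-completions format (an assumption), dispatching the call back to the local function could look like this:

```python
import json

# `response` and `get_weather` are from the example above;
# `tool_calls` is assumed to follow the OpenAI response format
message = response.choices[0].message
if getattr(message, "tool_calls", None):
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)  # e.g. {"location": "Paris"}
    result = get_weather(**args)
    print(result)
```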
### GPU Support
```python
response = completion(
    model="mistral",
    prompt="Complex calculation task",
    gpu=True  # Enable GPU acceleration
)
```
## Configuration
The client can be configured with custom settings:
```python
from litegen import get_client
client = get_client(gpu=True)  # Enable GPU support
# Use the client directly for more control
```
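What "use the client directly" looks like depends on the object `get_client` returns. If it is an OpenAI-compatible client (an assumption this README does not confirm; check the litegen source), a direct call might be:

```python
from litegen import get_client

client = get_client(gpu=True)
# Assumes an OpenAI-compatible client interface (hypothetical)
response = client.chat.completions.create(
    model="mistral",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```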
### Environment Variables
- `OPENAI_API_KEY`: your OpenAI API key (needed only when targeting OpenAI models)
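For example, the key can be set in-process before the first OpenAI-backed call (placeholder value shown; load real keys from your environment or a secrets manager, never from source code):

```python
import os

# Placeholder key for illustration only
os.environ["OPENAI_API_KEY"] = "sk-..."
```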
## API Reference
### completion(...)
Main function for chat completions.
```python
completion(
    model: str,                                             # Model name
    messages: Optional[List[Dict[str, str]]] | str = None,  # Raw messages or a prompt string
    system_prompt: str = "You are helpful Assistant",       # System prompt
    prompt: str = "",                                       # User prompt
    context: Optional[List[Dict[str, str]]] = None,         # Conversation history
    temperature: Optional[float] = None,                    # Temperature for response randomness
    max_tokens: Optional[int] = None,                       # Max tokens in response
    stream: bool = False,                                   # Enable streaming
    stop: Optional[List[str]] = None,                       # Stop sequences
    tools: Optional[List] = None,                           # Function-calling tools
    gpu: bool = False,                                      # Enable GPU
    **kwargs                                                # Additional parameters
)
```
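With `stream=True`, the return value is presumably an iterator of chunks; assuming OpenAI-style streaming chunks (not confirmed by this README), consumption might look like:

```python
from litegen import completion

# Assumes each chunk exposes .choices[0].delta.content, as in the OpenAI SDK
for chunk in completion(
    model="mistral",
    prompt="Write a haiku about the sea",
    stream=True,
):
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```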
### pp_completion(...)
Streaming-enabled completion with pretty printing. Takes the same parameters as `completion()`.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
MIT License