<div align='center'>
<h1> ⚡ LitAI </h1>
**The easiest way to access any AI model from Python with a single subscription.**
 
</div>
Every AI model is better at some tasks than others, so developers end up switching between them. Going to each provider directly means paying for multiple LLM subscriptions. LitAI lets you use any LLM provider (both proprietary and open-source) under a single subscription.
Easily switch between AI models, save costs, and track usage through a unified dashboard.
 
<div align='center'>
<pre>
✅ Access any AI model ✅ Usage dashboard ✅ Single subscription
✅ Bring your own model ✅ Easily switch across LLMs ✅ 20+ public models
✅ Track LLM token usage ✅ Easy setup ✅ No MLOps glue code
</pre>
</div>
<div align='center'>
[Downloads](https://pepy.tech/projects/litai)
[Discord](https://discord.gg/WajDThKAur)

[Codecov](https://codecov.io/gh/Lightning-AI/litai)
[License](https://github.com/Lightning-AI/litai/blob/main/LICENSE)
</div>
<p align="center">
<a href="https://lightning.ai/">Lightning AI</a> •
<a href="https://lightning.ai/docs/litai">Docs</a> •
<a href="#quick-start">Quick start</a>
</p>
______________________________________________________________________
# Quick Start
Install LitAI via pip ([more options](https://lightning.ai/docs/litai/home/install)):
```bash
pip install litai
```
## Run on a Studio
When running inside Lightning Studio, you can use any available LLM out of the box — no extra setup required.
```python
from litai import LLM
llm = LLM(model="openai/gpt-4")
print(llm.chat("who are you?"))
# I'm an AI by OpenAI
```
## Run locally (outside Studio)
To use LitAI outside of Lightning Studio, you'll need to explicitly provide your teamspace name.
The teamspace format is `"owner-name/teamspace-name"` (e.g. `"username/my-team"` or `"org-name/team-name"`).
```python
from litai import LLM
llm = LLM(model="openai/gpt-4", teamspace="owner-name/teamspace-name")
print(llm.chat("who are you?"))
# I'm an AI by OpenAI
```
# Key benefits
- Supports 20+ public models
- Bring your own model
- Keeps chat logs
- Optional guardrails
- Usage dashboard
# Features
✅ [Concurrency with async](https://lightning.ai/docs/litai/features/async-litai/)\
✅ [Fallback and retry](https://lightning.ai/docs/litai/features/fallback-retry/)\
✅ [Switch models](https://lightning.ai/docs/litai/features/models/)\
✅ [Multi-turn conversation logs](https://lightning.ai/docs/litai/features/multi-turn-conversation/)\
✅ [Streaming](https://lightning.ai/docs/litai/features/streaming/)
# Advanced features
## Concurrency with async
LitAI supports asynchronous execution, allowing you to handle multiple requests concurrently without blocking. This is especially useful in high-throughput applications like chatbots, APIs, or agent loops.
To enable async behavior, set `enable_async=True` when initializing the `LLM` class. Then use `await llm.chat(...)` inside an `async` function.
```python
import asyncio
from litai import LLM

async def main():
    llm = LLM(model="openai/gpt-4", teamspace="lightning-ai/litai", enable_async=True)
    print(await llm.chat("who are you?"))


if __name__ == "__main__":
    asyncio.run(main())
```
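Where async really pays off is fanning out many requests at once with `asyncio.gather`. A minimal sketch under the same setup as above; the prompts are illustrative, and you'd add `teamspace=...` when running outside a Studio:
```python
import asyncio
from litai import LLM


async def main():
    # enable_async=True makes llm.chat awaitable, as shown above
    llm = LLM(model="openai/gpt-4", enable_async=True)
    prompts = ["who are you?", "what is Lightning AI?", "explain RAG in one line"]
    # Fire all requests concurrently and wait until every reply is in
    replies = await asyncio.gather(*(llm.chat(p) for p in prompts))
    for prompt, reply in zip(prompts, replies):
        print(f"{prompt!r} -> {reply}")


if __name__ == "__main__":
    asyncio.run(main())
```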
## Streaming
Stream the model response as it's being generated.
```python
from litai import LLM
llm = LLM(model="openai/gpt-4")
for chunk in llm.chat("hello", stream=True):
    print(chunk, end="", flush=True)
```
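You can also keep the full reply while displaying tokens as they arrive. A small sketch, assuming each chunk is a plain string as the example above suggests:
```python
from litai import LLM

llm = LLM(model="openai/gpt-4")

chunks = []
for chunk in llm.chat("hello", stream=True):
    chunks.append(chunk)  # keep every piece for later
    print(chunk, end="", flush=True)  # show tokens as they arrive

full_reply = "".join(chunks)  # the complete response text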
## Conversations
Keep chat history across multiple turns so the model remembers context.
This is useful for assistants, summarizers, or research tools that need multi-turn chat history.
Each conversation is identified by a unique name. LitAI stores conversation history separately for each name.
```python
from litai import LLM
llm = LLM(model="openai/gpt-4")
# Continue a conversation across multiple turns
llm.chat("What is Lightning AI?", conversation="intro")
llm.chat("What can it do?", conversation="intro")
print(llm.get_history("intro")) # View all messages from the 'intro' thread
llm.reset_conversation("intro") # Clear conversation history
```
Create multiple named conversations for different tasks.
```python
from litai import LLM
llm = LLM(model="openai/gpt-4")
llm.chat("Summarize this text", conversation="summarizer")
llm.chat("What's a RAG pipeline?", conversation="research")
print(llm.list_conversations())
```
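Named conversations also make a simple interactive assistant easy to build, since history accumulates under one thread between turns. A minimal sketch; the loop and the `"assistant"` thread name are illustrative:
```python
from litai import LLM

llm = LLM(model="openai/gpt-4")

# Each turn is appended to the "assistant" thread, so the model keeps context
while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    print("Assistant:", llm.chat(user_input, conversation="assistant"))
```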
## Switch models
Use the best model for each task.
LitAI lets you dynamically switch models at request time.
Set a default model when initializing `LLM` and override it with the `model` parameter only when needed.
```python
from litai import LLM
llm = LLM(model="openai/gpt-4")
# Uses the default model (openai/gpt-4)
print(llm.chat("Who created you?"))
# >> I am a large language model, trained by OpenAI.
# Override the default model for this request
print(llm.chat("Who created you?", model="google/gemini-2.5-flash"))
# >> I am a large language model, trained by Google.
# Uses the default model again
print(llm.chat("Who created you?"))
# >> I am a large language model, trained by OpenAI.
```
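One practical pattern built on the per-request `model` override is routing each task type to a suitable model. A hedged sketch; the task-to-model table is an illustrative choice, not a recommendation:
```python
from litai import LLM

# Illustrative routing table; pick whichever models suit your tasks
MODEL_BY_TASK = {
    "summarize": "openai/gpt-4",
    "quick_answer": "google/gemini-2.5-flash",
}

llm = LLM(model="openai/gpt-4")


def run(task: str, prompt: str) -> str:
    # Unmapped tasks fall back to the default model set above
    return llm.chat(prompt, model=MODEL_BY_TASK.get(task, "openai/gpt-4"))


print(run("quick_answer", "What is LitAI in one sentence?"))
```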
## Fallbacks and retries
Ensure reliable responses even if a model is unavailable.\
LitAI automatically retries requests and switches to fallback models in order.
- Fallback models are tried in the order provided.
- Each model gets up to `max_retries` attempts independently.
- The first successful response is returned immediately.
- If all models fail after their retry limits, LitAI raises an error.
```python
from litai import LLM
llm = LLM(
    model="openai/gpt-4",
    fallback_models=["google/gemini-2.5-flash", "anthropic/claude-3-5-sonnet-20240620"],
    max_retries=4,
)
print(llm.chat("How do I fine-tune an LLM?"))
```
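Since LitAI raises an error once every model has exhausted its retries, long-running services may want to catch it. A defensive sketch; the exact exception type isn't documented here, so this catches a broad `Exception`:
```python
from litai import LLM

llm = LLM(
    model="openai/gpt-4",
    fallback_models=["google/gemini-2.5-flash"],
    max_retries=2,
)

try:
    reply = llm.chat("How do I fine-tune an LLM?")
except Exception as err:  # exact exception type is not documented; stay broad
    reply = f"All models failed after retries: {err}"

print(reply)
```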