<h1 align="center">
🚅 LiteLLM
</h1>
<p align="center">
<p align="center">
<a href="https://render.com/deploy?repo=https://github.com/BerriAI/litellm" target="_blank" rel="nofollow"><img src="https://render.com/images/deploy-to-render-button.svg" alt="Deploy to Render"></a>
<a href="https://railway.app/template/HLP0Ub?referralCode=jch2ME">
<img src="https://railway.app/button.svg" alt="Deploy on Railway">
</a>
</p>
<p align="center">Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, Groq etc.]
<br>
</p>
<h4 align="center"><a href="https://docs.litellm.ai/docs/simple_proxy" target="_blank">LiteLLM Proxy Server (LLM Gateway)</a> | <a href="https://docs.litellm.ai/docs/hosted" target="_blank"> Hosted Proxy (Preview)</a> | <a href="https://docs.litellm.ai/docs/enterprise"target="_blank">Enterprise Tier</a></h4>
<h4 align="center">
<a href="https://pypi.org/project/litellm/" target="_blank">
<img src="https://img.shields.io/pypi/v/litellm.svg" alt="PyPI Version">
</a>
<a href="https://dl.circleci.com/status-badge/redirect/gh/BerriAI/litellm/tree/main" target="_blank">
<img src="https://dl.circleci.com/status-badge/img/gh/BerriAI/litellm/tree/main.svg?style=svg" alt="CircleCI">
</a>
<a href="https://www.ycombinator.com/companies/berriai">
<img src="https://img.shields.io/badge/Y%20Combinator-W23-orange?style=flat-square" alt="Y Combinator W23">
</a>
<a href="https://wa.link/huol9n">
<img src="https://img.shields.io/static/v1?label=Chat%20on&message=WhatsApp&color=success&logo=WhatsApp&style=flat-square" alt="Whatsapp">
</a>
<a href="https://discord.gg/wuPM9dRgDw">
<img src="https://img.shields.io/static/v1?label=Chat%20on&message=Discord&color=blue&logo=Discord&style=flat-square" alt="Discord">
</a>
</h4>
LiteLLM manages:
- Translating inputs to the provider's `completion`, `embedding`, and `image_generation` endpoints
- [Consistent output](https://docs.litellm.ai/docs/completion/output) - text responses are always available at `['choices'][0]['message']['content']`
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - [Router](https://docs.litellm.ai/docs/routing) (see the sketch below)
- Setting budgets & rate limits per project, API key, and model - [LiteLLM Proxy Server (LLM Gateway)](https://docs.litellm.ai/docs/simple_proxy)
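For example, a minimal Router sketch - assuming two deployments of the same model group; the Azure deployment name, endpoint, and keys below are placeholders:
```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",  # model group alias
            "litellm_params": {"model": "openai/gpt-4o"},
        },
        {
            "model_name": "gpt-4o",  # same alias -> second deployment for fallback/load balancing
            "litellm_params": {
                "model": "azure/my-gpt-4o-deployment",               # placeholder deployment name
                "api_base": "https://my-endpoint.openai.azure.com",  # placeholder endpoint
                "api_key": "your-azure-key",
            },
        },
    ],
    num_retries=2,  # retry failed calls before surfacing an error
)

response = router.completion(
    model="gpt-4o",  # routed across the deployments above
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response)
```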
[**Jump to LiteLLM Proxy (LLM Gateway) Docs**](https://github.com/BerriAI/litellm?tab=readme-ov-file#openai-proxy---docs) <br>
[**Jump to Supported LLM Providers**](https://github.com/BerriAI/litellm?tab=readme-ov-file#supported-providers-docs)
🚨 **Stable Release:** Use docker images with the `-stable` tag. These have undergone 12-hour load tests before being published.
Support for more providers is ongoing. Missing a provider or LLM platform? Raise a [feature request](https://github.com/BerriAI/litellm/issues/new?assignees=&labels=enhancement&projects=&template=feature_request.yml&title=%5BFeature%5D%3A+).
# Usage ([**Docs**](https://docs.litellm.ai/docs/))
> [!IMPORTANT]
> LiteLLM v1.0.0 now requires `openai>=1.0.0`. Migration guide [here](https://docs.litellm.ai/docs/migration).
> LiteLLM v1.40.14+ now requires `pydantic>=2.0.0`. No code changes are required.
<a target="_blank" href="https://colab.research.google.com/github/BerriAI/litellm/blob/main/cookbook/liteLLM_Getting_Started.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
```shell
pip install litellm
```
```python
from litellm import completion
import os
## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["ANTHROPIC_API_KEY"] = "your-cohere-key"
messages = [{ "content": "Hello, how are you?","role": "user"}]
# openai call
response = completion(model="openai/gpt-4o", messages=messages)
# anthropic call
response = completion(model="anthropic/claude-3-sonnet-20240229", messages=messages)
print(response)
```
### Response (OpenAI Format)
```json
{
"id": "chatcmpl-565d891b-a42e-4c39-8d14-82a1f5208885",
"created": 1734366691,
"model": "claude-3-sonnet-20240229",
"object": "chat.completion",
"system_fingerprint": null,
"choices": [
{
"finish_reason": "stop",
"index": 0,
"message": {
"content": "Hello! As an AI language model, I don't have feelings, but I'm operating properly and ready to assist you with any questions or tasks you may have. How can I help you today?",
"role": "assistant",
"tool_calls": null,
"function_call": null
}
}
],
"usage": {
"completion_tokens": 43,
"prompt_tokens": 13,
"total_tokens": 56,
"completion_tokens_details": null,
"prompt_tokens_details": {
"audio_tokens": null,
"cached_tokens": 0
},
"cache_creation_input_tokens": 0,
"cache_read_input_tokens": 0
}
}
```
Call any model supported by a provider with `model=<provider_name>/<model_name>`. There may be provider-specific details, so refer to the [provider docs](https://docs.litellm.ai/docs/providers) for more information.
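A short sketch (model names are illustrative; it assumes the matching provider keys are set as environment variables):
```python
from litellm import completion

messages = [{"role": "user", "content": "Hello, how are you?"}]

# the call shape stays the same - only the <provider>/<model> string changes
response = completion(model="groq/llama3-8b-8192", messages=messages)
response = completion(model="mistral/mistral-large-latest", messages=messages)
response = completion(model="ollama/llama2", messages=messages)  # assumes a local Ollama server
```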
## Async ([Docs](https://docs.litellm.ai/docs/completion/stream#async-completion))
```python
from litellm import acompletion
import asyncio
async def test_get_response():
user_message = "Hello, how are you?"
messages = [{"content": user_message, "role": "user"}]
response = await acompletion(model="openai/gpt-4o", messages=messages)
return response
response = asyncio.run(test_get_response())
print(response)
```
## Streaming ([Docs](https://docs.litellm.ai/docs/completion/stream))
LiteLLM supports streaming the model response back; pass `stream=True` to get a streaming iterator in the response.
Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.).
```python
from litellm import completion

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="openai/gpt-4o", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

# anthropic call - each part is a raw streaming chunk
response = completion(model="anthropic/claude-3-sonnet-20240229", messages=messages, stream=True)
for part in response:
    print(part)
```
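Async and streaming compose: passing `stream=True` to `acompletion` returns an async iterator (a sketch, following the async docs linked above):
```python
import asyncio
from litellm import acompletion

async def stream_response():
    response = await acompletion(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Hello, how are you?"}],
        stream=True,
    )
    # consume the async iterator of chunks
    async for part in response:
        print(part.choices[0].delta.content or "", end="")

asyncio.run(stream_response())
```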
### Response chunk (OpenAI Format)
```json
{
"id": "chatcmpl-2be06597-eb60-4c70-9ec5-8cd2ab1b4697",
"created": 1734366925,
"model": "claude-3-sonnet-20240229",
"object": "chat.completion.chunk",
"system_fingerprint": null,
"choices": [
{
"finish_reason": null,
"index": 0,
"delta": {
"content": "Hello",
"role": "assistant",
"function_call": null,
"tool_calls": null,
"audio": null
},
"logprobs": null
}
]
}
```
## Logging & Observability ([Docs](https://docs.litellm.ai/docs/observability/callbacks))
LiteLLM exposes predefined callbacks to send data to Lunary, Langfuse, DynamoDB, S3 buckets, Helicone, Promptlayer, Traceloop, Athina, Slack, and MLflow.
```python
import os

import litellm
from litellm import completion

## set env variables for logging tools
os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"
os.environ["HELICONE_API_KEY"] = "your-helicone-auth-key"
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["ATHINA_API_KEY"] = "your-athina-api-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"

# set callbacks
litellm.success_callback = ["lunary", "langfuse", "athina", "helicone"]  # log input/output to lunary, langfuse, athina, helicone

# anthropic call
response = completion(model="anthropic/claude-3-sonnet-20240229", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
```
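`success_callback` also accepts custom Python functions alongside the predefined integrations - a minimal sketch (the callback signature follows the linked docs; treat the exact kwargs contents as an assumption):
```python
import litellm
from litellm import completion

# minimal custom success callback - runs after each successful call
def track_cost(kwargs, completion_response, start_time, end_time):
    # litellm computes a cost for supported models (assumed available as kwargs["response_cost"])
    print("response_cost:", kwargs.get("response_cost"))

litellm.success_callback = [track_cost]

response = completion(model="openai/gpt-4o", messages=[{"role": "user", "content": "Hi"}])
```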
# LiteLLM Proxy Server (LLM Gateway) - ([Docs](https://docs.litellm.ai/docs/simple_proxy))
Track spend and load balance across multiple projects.
[Hosted Proxy (Preview)](https://docs.litellm.ai/docs/hosted)
The proxy provides:
1. [Hooks for auth](https://docs.litellm.ai/docs/proxy/virtual_keys#custom-auth)
2. [Hooks for logging](https://docs.litellm.ai/docs/proxy/logging#step-1---create-your-custom-litellm-callback-class)
3. [Cost tracking](https://docs.litellm.ai/docs/proxy/virtual_keys#tracking-spend)
4. [Rate Limiting](https://docs.litellm.ai/docs/proxy/users#set-rate-limits)
## 📖 Proxy Endpoints - [Swagger Docs](https://litellm-api.up.railway.app/)
## Quick Start Proxy - CLI
```shell
pip install 'litellm[proxy]'
```
### Step 1: Start litellm proxy
```shell
$ litellm --model huggingface/bigcode/starcoder
#INFO: Proxy running on http://0.0.0.0:4000
```
### Step 2: Make ChatCompletions Request to Proxy
> [!IMPORTANT]
> 💡 [Use LiteLLM Proxy with Langchain (Python, JS), OpenAI SDK (Python, JS), Anthropic SDK, Mistral SDK, LlamaIndex, Instructor, Curl](https://docs.litellm.ai/docs/proxy/user_keys)
```python
import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")  # point the client at the proxy

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])
print(response)
```
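The same request through Langchain is just a matter of pointing the client at the proxy - a sketch, assuming the `langchain-openai` package (parameter names can vary across versions):
```python
from langchain_openai import ChatOpenAI

# point Langchain's OpenAI chat client at the LiteLLM proxy
chat = ChatOpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000",
    model="gpt-3.5-turbo",
)

print(chat.invoke("this is a test request, write a short poem"))
```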
## Proxy Key Management ([Docs](https://docs.litellm.ai/docs/proxy/virtual_keys))
Connect the proxy with a Postgres DB to create proxy keys.
```bash
# Get the code
git clone https://github.com/BerriAI/litellm
# Go to folder
cd litellm
# Add the master key - you can change this after setup
echo 'LITELLM_MASTER_KEY="sk-1234"' > .env

# Append the litellm salt key - you cannot change this after adding a model
# It is used to encrypt / decrypt your LLM API Key credentials
# We recommend a password generator such as https://1password.com/password-generator/
# to get a random hash for the litellm salt key
echo 'LITELLM_SALT_KEY="sk-1234"' >> .env
source .env
# Start
docker-compose up
```
The UI is available at `/ui` on your proxy server.
![ui_3](https://github.com/BerriAI/litellm/assets/29436595/47c97d5e-b9be-4839-b28c-43d7f4f10033)
Set budgets and rate limits across multiple projects with `POST /key/generate`.
### Request
```shell
curl 'http://0.0.0.0:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20m","metadata": {"user": "ishaan@berri.ai", "team": "core-infra"}}'
```
### Expected Response
```shell
{
"key": "sk-kdEXbIqZRwEeEiHwdg7sFA", # Bearer token
"expires": "2023-11-19T01:38:25.838000+00:00" # datetime object
}
```
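The returned `key` works like any OpenAI API key against the proxy - for example:
```python
import openai

# use the virtual key returned by /key/generate
client = openai.OpenAI(
    api_key="sk-kdEXbIqZRwEeEiHwdg7sFA",
    base_url="http://0.0.0.0:4000",
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```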
## Supported Providers ([Docs](https://docs.litellm.ai/docs/providers))
| Provider | [Completion](https://docs.litellm.ai/docs/#basic-usage) | [Streaming](https://docs.litellm.ai/docs/completion/stream#streaming-responses) | [Async Completion](https://docs.litellm.ai/docs/completion/stream#async-completion) | [Async Streaming](https://docs.litellm.ai/docs/completion/stream#async-streaming) | [Async Embedding](https://docs.litellm.ai/docs/embedding/supported_embedding) | [Async Image Generation](https://docs.litellm.ai/docs/image_generation) |
|-------------------------------------------------------------------------------------|---------------------------------------------------------|---------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------|-------------------------------------------------------------------------|
| [openai](https://docs.litellm.ai/docs/providers/openai) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| [azure](https://docs.litellm.ai/docs/providers/azure) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| [aws - sagemaker](https://docs.litellm.ai/docs/providers/aws_sagemaker) | ✅ | ✅ | ✅ | ✅ | ✅ | |
| [aws - bedrock](https://docs.litellm.ai/docs/providers/bedrock) | ✅ | ✅ | ✅ | ✅ | ✅ | |
| [google - vertex_ai](https://docs.litellm.ai/docs/providers/vertex) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| [google - palm](https://docs.litellm.ai/docs/providers/palm) | ✅ | ✅ | ✅ | ✅ | | |
| [google AI Studio - gemini](https://docs.litellm.ai/docs/providers/gemini) | ✅ | ✅ | ✅ | ✅ | | |
| [mistral ai api](https://docs.litellm.ai/docs/providers/mistral) | ✅ | ✅ | ✅ | ✅ | ✅ | |
| [cloudflare AI Workers](https://docs.litellm.ai/docs/providers/cloudflare_workers) | ✅ | ✅ | ✅ | ✅ | | |
| [cohere](https://docs.litellm.ai/docs/providers/cohere) | ✅ | ✅ | ✅ | ✅ | ✅ | |
| [anthropic](https://docs.litellm.ai/docs/providers/anthropic) | ✅ | ✅ | ✅ | ✅ | | |
| [empower](https://docs.litellm.ai/docs/providers/empower) | ✅ | ✅ | ✅ | ✅ | | |
| [huggingface](https://docs.litellm.ai/docs/providers/huggingface) | ✅ | ✅ | ✅ | ✅ | ✅ | |
| [replicate](https://docs.litellm.ai/docs/providers/replicate) | ✅ | ✅ | ✅ | ✅ | | |
| [together_ai](https://docs.litellm.ai/docs/providers/togetherai) | ✅ | ✅ | ✅ | ✅ | | |
| [openrouter](https://docs.litellm.ai/docs/providers/openrouter) | ✅ | ✅ | ✅ | ✅ | | |
| [ai21](https://docs.litellm.ai/docs/providers/ai21) | ✅ | ✅ | ✅ | ✅ | | |
| [baseten](https://docs.litellm.ai/docs/providers/baseten) | ✅ | ✅ | ✅ | ✅ | | |
| [vllm](https://docs.litellm.ai/docs/providers/vllm) | ✅ | ✅ | ✅ | ✅ | | |
| [nlp_cloud](https://docs.litellm.ai/docs/providers/nlp_cloud) | ✅ | ✅ | ✅ | ✅ | | |
| [aleph alpha](https://docs.litellm.ai/docs/providers/aleph_alpha) | ✅ | ✅ | ✅ | ✅ | | |
| [petals](https://docs.litellm.ai/docs/providers/petals) | ✅ | ✅ | ✅ | ✅ | | |
| [ollama](https://docs.litellm.ai/docs/providers/ollama) | ✅ | ✅ | ✅ | ✅ | ✅ | |
| [deepinfra](https://docs.litellm.ai/docs/providers/deepinfra) | ✅ | ✅ | ✅ | ✅ | | |
| [perplexity-ai](https://docs.litellm.ai/docs/providers/perplexity) | ✅ | ✅ | ✅ | ✅ | | |
| [Groq AI](https://docs.litellm.ai/docs/providers/groq) | ✅ | ✅ | ✅ | ✅ | | |
| [Deepseek](https://docs.litellm.ai/docs/providers/deepseek) | ✅ | ✅ | ✅ | ✅ | | |
| [anyscale](https://docs.litellm.ai/docs/providers/anyscale) | ✅ | ✅ | ✅ | ✅ | | |
| [IBM - watsonx.ai](https://docs.litellm.ai/docs/providers/watsonx) | ✅ | ✅ | ✅ | ✅ | ✅ | |
| [voyage ai](https://docs.litellm.ai/docs/providers/voyage) | | | | | ✅ | |
| [xinference [Xorbits Inference]](https://docs.litellm.ai/docs/providers/xinference) | | | | | ✅ | |
| [FriendliAI](https://docs.litellm.ai/docs/providers/friendliai) | ✅ | ✅ | ✅ | ✅ | | |
| [Galadriel](https://docs.litellm.ai/docs/providers/galadriel) | ✅ | ✅ | ✅ | ✅ | | |
[**Read the Docs**](https://docs.litellm.ai/docs/)
## Contributing
To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.
Here's how to modify the repo locally:
Step 1: Clone the repo
```
git clone https://github.com/BerriAI/litellm.git
```
Step 2: Navigate into the project and install dependencies:
```
cd litellm
poetry install -E extra_proxy -E proxy
```
Step 3: Test your change:
```
cd litellm/tests # pwd: Documents/litellm/litellm/tests
poetry run flake8
poetry run pytest .
```
Step 4: Submit a PR with your changes! 🚀
- push your fork to your GitHub repo
- submit a PR from there
### Building LiteLLM Docker Image
Follow these instructions if you want to build / run the LiteLLM Docker Image yourself.
Step 1: Clone the repo
```
git clone https://github.com/BerriAI/litellm.git
```
Step 2: Build the Docker Image
Build using `Dockerfile.non_root`:
```
docker build -f docker/Dockerfile.non_root -t litellm_test_image .
```
Step 3: Run the Docker Image
Make sure your litellm proxy config file (here `proxy_config.yaml`) is present in the working directory; it is mounted into the container as `config.yaml`.
```
docker run \
-v $(pwd)/proxy_config.yaml:/app/config.yaml \
-e DATABASE_URL="postgresql://xxxxxxxx" \
-e LITELLM_MASTER_KEY="sk-1234" \
-p 4000:4000 \
litellm_test_image \
--config /app/config.yaml --detailed_debug
```
# Enterprise
For companies that need better security, user management, and professional support.
[Talk to founders](https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat)
This covers:
- ✅ **Features under the [LiteLLM Commercial License](https://docs.litellm.ai/docs/proxy/enterprise):**
- ✅ **Feature Prioritization**
- ✅ **Custom Integrations**
- ✅ **Professional Support - Dedicated discord + slack**
- ✅ **Custom SLAs**
- ✅ **Secure access with Single Sign-On**
# Code Quality / Linting
LiteLLM follows the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html).
We run:
- Ruff for [formatting and linting checks](https://github.com/BerriAI/litellm/blob/e19bb55e3b4c6a858b6e364302ebbf6633a51de5/.circleci/config.yml#L320)
- Mypy + Pyright for typing [1](https://github.com/BerriAI/litellm/blob/e19bb55e3b4c6a858b6e364302ebbf6633a51de5/.circleci/config.yml#L90), [2](https://github.com/BerriAI/litellm/blob/e19bb55e3b4c6a858b6e364302ebbf6633a51de5/.pre-commit-config.yaml#L4)
- Black for [formatting](https://github.com/BerriAI/litellm/blob/e19bb55e3b4c6a858b6e364302ebbf6633a51de5/.circleci/config.yml#L79)
- isort for [import sorting](https://github.com/BerriAI/litellm/blob/e19bb55e3b4c6a858b6e364302ebbf6633a51de5/.pre-commit-config.yaml#L10)
If you have suggestions on how to improve the code quality, feel free to open an issue or a PR.
# Support / talk with founders
- [Schedule Demo 👋](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version)
- [Community Discord 💭](https://discord.gg/wuPM9dRgDw)
- Our numbers 📞 +1 (770) 8783-106 / +1 (412) 618-6238
- Our emails ✉️ ishaan@berri.ai / krrish@berri.ai
# Why did we build this
- **Need for simplicity**: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.
# Contributors
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<!-- markdownlint-restore -->
<!-- prettier-ignore-end -->
<!-- ALL-CONTRIBUTORS-LIST:END -->
<a href="https://github.com/BerriAI/litellm/graphs/contributors">
<img src="https://contrib.rocks/image?repo=BerriAI/litellm" />
</a>