| Field | Value |
| --- | --- |
| Name | pipecat-ai |
| Version | 0.0.81 |
| Summary | An open source framework for voice (and multimodal) assistants |
| Upload time | 2025-08-25 16:29:54 |
| Home page | None |
| Author | None |
| Maintainer | None |
| License | None |
| Requires Python | >=3.10 |
| Keywords | webrtc, audio, video, ai |
<h1><div align="center">
<img alt="pipecat" width="300px" height="auto" src="https://raw.githubusercontent.com/pipecat-ai/pipecat/main/pipecat.png">
</div></h1>
[PyPI](https://pypi.org/project/pipecat-ai) · [Codecov](https://codecov.io/gh/pipecat-ai/pipecat) · [Docs](https://docs.pipecat.ai) · [Discord](https://discord.gg/pipecat)
# 🎙️ Pipecat: Real-Time Voice & Multimodal AI Agents
**Pipecat** is an open-source Python framework for building real-time voice and multimodal conversational agents. Orchestrate audio and video, AI services, different transports, and conversation pipelines effortlessly, so you can focus on what makes your agent unique.
> Want to dive right in? Try the [quickstart](https://docs.pipecat.ai/getting-started/quickstart).
## 🚀 What You Can Build

- **Voice Assistants** – natural, streaming conversations with AI
- **AI Companions** – coaches, meeting assistants, characters
- **Multimodal Interfaces** – voice, video, images, and more
- **Interactive Storytelling** – creative tools with generative media
- **Business Agents** – customer intake, support bots, guided flows
- **Complex Dialog Systems** – design logic with structured conversations

🧭 Looking to build structured conversations? Check out [Pipecat Flows](https://github.com/pipecat-ai/pipecat-flows) for managing complex conversational states and transitions.
## 🧠 Why Pipecat?

- **Voice-first**: Integrates speech recognition, text-to-speech, and conversation handling
- **Pluggable**: Supports many AI services and tools
- **Composable Pipelines**: Build complex behavior from modular components
- **Real-Time**: Ultra-low latency interaction with different transports (e.g. WebSockets or WebRTC)
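The "composable pipelines" idea can be illustrated with a plain-Python sketch. Note that this is **not** the actual Pipecat API; the `Frame`, `Pipeline`, and processor names below are hypothetical stand-ins showing the pattern of frames flowing through a chain of modular processors:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical stand-ins, for illustration only; Pipecat's real frame and
# processor classes live in the pipecat package and are far more featureful.
@dataclass
class Frame:
    kind: str   # e.g. "audio", "text"
    data: str

class Pipeline:
    """Run a frame through an ordered list of processor callables."""
    def __init__(self, processors: List[Callable[[Frame], Frame]]):
        self.processors = processors

    def process(self, frame: Frame) -> Frame:
        for proc in self.processors:
            frame = proc(frame)
        return frame

# Toy "services": STT turns audio into text, an "LLM" replies, TTS re-voices it.
def stt(frame: Frame) -> Frame:
    return Frame("text", f"transcript:{frame.data}") if frame.kind == "audio" else frame

def llm(frame: Frame) -> Frame:
    return Frame("text", f"reply-to({frame.data})") if frame.kind == "text" else frame

def tts(frame: Frame) -> Frame:
    return Frame("audio", f"speech:{frame.data}") if frame.kind == "text" else frame

pipeline = Pipeline([stt, llm, tts])
out = pipeline.process(Frame("audio", "hello"))
print(out.kind, out.data)  # audio speech:reply-to(transcript:hello)
```

Swapping one processor for another (a different STT vendor, say) leaves the rest of the chain untouched, which is the composability the bullet above refers to.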
## 🎬 See it in action
<p float="left">
<a href="https://github.com/pipecat-ai/pipecat-examples/tree/main/simple-chatbot"><img src="https://raw.githubusercontent.com/pipecat-ai/pipecat-examples/main/simple-chatbot/image.png" width="400" /></a>
<a href="https://github.com/pipecat-ai/pipecat-examples/tree/main/storytelling-chatbot"><img src="https://raw.githubusercontent.com/pipecat-ai/pipecat-examples/main/storytelling-chatbot/image.png" width="400" /></a>
<br/>
<a href="https://github.com/pipecat-ai/pipecat-examples/tree/main/translation-chatbot"><img src="https://raw.githubusercontent.com/pipecat-ai/pipecat-examples/main/translation-chatbot/image.png" width="400" /></a>
<a href="https://github.com/pipecat-ai/pipecat-examples/tree/main/moondream-chatbot"><img src="https://raw.githubusercontent.com/pipecat-ai/pipecat-examples/main/moondream-chatbot/image.png" width="400" /></a>
</p>
## 📱 Client SDKs
You can connect to Pipecat from any platform using our official SDKs:
| Platform | SDK Repo | Description |
| -------- | ------------------------------------------------------------------------------ | -------------------------------- |
| Web | [pipecat-client-web](https://github.com/pipecat-ai/pipecat-client-web) | JavaScript and React client SDKs |
| iOS | [pipecat-client-ios](https://github.com/pipecat-ai/pipecat-client-ios) | Swift SDK for iOS |
| Android | [pipecat-client-android](https://github.com/pipecat-ai/pipecat-client-android) | Kotlin SDK for Android |
| C++ | [pipecat-client-cxx](https://github.com/pipecat-ai/pipecat-client-cxx) | C++ client SDK |
## 🧩 Available services
| Category | Services |
| ------------------- | -------- |
| Speech-to-Text | [AssemblyAI](https://docs.pipecat.ai/server/services/stt/assemblyai), [AWS](https://docs.pipecat.ai/server/services/stt/aws), [Azure](https://docs.pipecat.ai/server/services/stt/azure), [Cartesia](https://docs.pipecat.ai/server/services/stt/cartesia), [Deepgram](https://docs.pipecat.ai/server/services/stt/deepgram), [Fal Wizper](https://docs.pipecat.ai/server/services/stt/fal), [Gladia](https://docs.pipecat.ai/server/services/stt/gladia), [Google](https://docs.pipecat.ai/server/services/stt/google), [Groq (Whisper)](https://docs.pipecat.ai/server/services/stt/groq), [NVIDIA Riva](https://docs.pipecat.ai/server/services/stt/riva), [OpenAI (Whisper)](https://docs.pipecat.ai/server/services/stt/openai), [SambaNova (Whisper)](https://docs.pipecat.ai/server/services/stt/sambanova), [Soniox](https://docs.pipecat.ai/server/services/stt/soniox), [Speechmatics](https://docs.pipecat.ai/server/services/stt/speechmatics), [Ultravox](https://docs.pipecat.ai/server/services/stt/ultravox), [Whisper](https://docs.pipecat.ai/server/services/stt/whisper) |
| LLMs | [Anthropic](https://docs.pipecat.ai/server/services/llm/anthropic), [AWS](https://docs.pipecat.ai/server/services/llm/aws), [Azure](https://docs.pipecat.ai/server/services/llm/azure), [Cerebras](https://docs.pipecat.ai/server/services/llm/cerebras), [DeepSeek](https://docs.pipecat.ai/server/services/llm/deepseek), [Fireworks AI](https://docs.pipecat.ai/server/services/llm/fireworks), [Gemini](https://docs.pipecat.ai/server/services/llm/gemini), [Grok](https://docs.pipecat.ai/server/services/llm/grok), [Groq](https://docs.pipecat.ai/server/services/llm/groq), [Mistral](https://docs.pipecat.ai/server/services/llm/mistral), [NVIDIA NIM](https://docs.pipecat.ai/server/services/llm/nim), [Ollama](https://docs.pipecat.ai/server/services/llm/ollama), [OpenAI](https://docs.pipecat.ai/server/services/llm/openai), [OpenRouter](https://docs.pipecat.ai/server/services/llm/openrouter), [Perplexity](https://docs.pipecat.ai/server/services/llm/perplexity), [Qwen](https://docs.pipecat.ai/server/services/llm/qwen), [SambaNova](https://docs.pipecat.ai/server/services/llm/sambanova), [Together AI](https://docs.pipecat.ai/server/services/llm/together) |
| Text-to-Speech | [Async](https://docs.pipecat.ai/server/services/tts/asyncai), [AWS](https://docs.pipecat.ai/server/services/tts/aws), [Azure](https://docs.pipecat.ai/server/services/tts/azure), [Cartesia](https://docs.pipecat.ai/server/services/tts/cartesia), [Deepgram](https://docs.pipecat.ai/server/services/tts/deepgram), [ElevenLabs](https://docs.pipecat.ai/server/services/tts/elevenlabs), [Fish](https://docs.pipecat.ai/server/services/tts/fish), [Google](https://docs.pipecat.ai/server/services/tts/google), [Groq](https://docs.pipecat.ai/server/services/tts/groq), [Inworld](https://docs.pipecat.ai/server/services/tts/inworld), [LMNT](https://docs.pipecat.ai/server/services/tts/lmnt), [MiniMax](https://docs.pipecat.ai/server/services/tts/minimax), [Neuphonic](https://docs.pipecat.ai/server/services/tts/neuphonic), [NVIDIA Riva](https://docs.pipecat.ai/server/services/tts/riva), [OpenAI](https://docs.pipecat.ai/server/services/tts/openai), [Piper](https://docs.pipecat.ai/server/services/tts/piper), [PlayHT](https://docs.pipecat.ai/server/services/tts/playht), [Rime](https://docs.pipecat.ai/server/services/tts/rime), [Sarvam](https://docs.pipecat.ai/server/services/tts/sarvam), [XTTS](https://docs.pipecat.ai/server/services/tts/xtts) |
| Speech-to-Speech | [AWS Nova Sonic](https://docs.pipecat.ai/server/services/s2s/aws), [Gemini Multimodal Live](https://docs.pipecat.ai/server/services/s2s/gemini), [OpenAI Realtime](https://docs.pipecat.ai/server/services/s2s/openai) |
| Transport | [Daily (WebRTC)](https://docs.pipecat.ai/server/services/transport/daily), [FastAPI Websocket](https://docs.pipecat.ai/server/services/transport/fastapi-websocket), [SmallWebRTCTransport](https://docs.pipecat.ai/server/services/transport/small-webrtc), [WebSocket Server](https://docs.pipecat.ai/server/services/transport/websocket-server), Local |
| Serializers | [Plivo](https://docs.pipecat.ai/server/utilities/serializers/plivo), [Twilio](https://docs.pipecat.ai/server/utilities/serializers/twilio), [Telnyx](https://docs.pipecat.ai/server/utilities/serializers/telnyx) |
| Video | [HeyGen](https://docs.pipecat.ai/server/services/video/heygen), [Tavus](https://docs.pipecat.ai/server/services/video/tavus), [Simli](https://docs.pipecat.ai/server/services/video/simli) |
| Memory | [mem0](https://docs.pipecat.ai/server/services/memory/mem0) |
| Vision & Image | [fal](https://docs.pipecat.ai/server/services/image-generation/fal), [Google Imagen](https://docs.pipecat.ai/server/services/image-generation/fal), [Moondream](https://docs.pipecat.ai/server/services/vision/moondream) |
| Audio Processing | [Silero VAD](https://docs.pipecat.ai/server/utilities/audio/silero-vad-analyzer), [Krisp](https://docs.pipecat.ai/server/utilities/audio/krisp-filter), [Koala](https://docs.pipecat.ai/server/utilities/audio/koala-filter), [Noisereduce](https://docs.pipecat.ai/server/utilities/audio/noisereduce-filter) |
| Analytics & Metrics | [OpenTelemetry](https://docs.pipecat.ai/server/utilities/opentelemetry), [Sentry](https://docs.pipecat.ai/server/services/analytics/sentry) |
📚 [View full services documentation →](https://docs.pipecat.ai/server/services/supported-services)
## ⚡ Getting started
You can get started with Pipecat running on your local machine, then move your agent processes to the cloud when you're ready.
1. Install uv

   ```bash
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

   > **Need help?** Refer to the [uv install documentation](https://docs.astral.sh/uv/getting-started/installation/).

2. Install the module

   ```bash
   # For new projects
   uv init my-pipecat-app
   cd my-pipecat-app
   uv add pipecat-ai

   # Or for existing projects
   uv add pipecat-ai
   ```

3. Set up your environment

   ```bash
   cp env.example .env
   ```

4. To keep things lightweight, only the core framework is included by default. If you need support for third-party AI services, you can add the necessary dependencies with:

   ```bash
   uv add "pipecat-ai[option,...]"
   ```

> **Using pip?** You can still use `pip install pipecat-ai` and `pip install "pipecat-ai[option,...]"` to get set up.
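The `.env` file from step 3 typically holds the API keys your chosen services need. As an illustrative sketch only (assuming plain `KEY=value` lines; real projects usually reach for a library such as python-dotenv instead), loading it into the process environment looks like this:

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> dict:
    """Minimal .env loader: parse KEY=value lines, skipping blanks and comments.

    Hedged sketch; it ignores quoting, export syntax, and multiline values.
    """
    values = {}
    env_file = Path(path)
    if not env_file.exists():
        return values
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
        # Don't clobber variables already set in the real environment.
        os.environ.setdefault(key.strip(), value.strip())
    return values

# Example usage with a throwaway file (the key name is made up):
Path("demo.env").write_text("OPENAI_API_KEY=sk-test\n# a comment\n")
print(load_env("demo.env"))  # {'OPENAI_API_KEY': 'sk-test'}
```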
## 🧪 Code examples
- [Foundational](https://github.com/pipecat-ai/pipecat/tree/main/examples/foundational) – small snippets that build on each other, introducing one or two concepts at a time
- [Example apps](https://github.com/pipecat-ai/pipecat-examples) – complete applications that you can use as starting points for development
## 🛠️ Contributing to the framework
### Prerequisites
- **Minimum Python Version:** 3.10
- **Recommended Python Version:** 3.12
### Setup Steps
1. Clone the repository and navigate to it:

   ```bash
   git clone https://github.com/pipecat-ai/pipecat.git
   cd pipecat
   ```

2. Install development and testing dependencies:

   ```bash
   uv sync --group dev --all-extras --no-extra gstreamer --no-extra krisp --no-extra local
   ```

3. Install the git pre-commit hooks:

   ```bash
   uv run pre-commit install
   ```
### Python 3.13+ Compatibility
Some features require PyTorch, which does not yet support Python 3.13+. On Python 3.13 or later, install with the affected extras excluded:
```bash
uv sync --group dev --all-extras \
  --no-extra gstreamer \
  --no-extra krisp \
  --no-extra local \
  --no-extra local-smart-turn \
  --no-extra mlx-whisper \
  --no-extra moondream \
  --no-extra ultravox
```
> **Tip:** For full compatibility, use Python 3.12: `uv python pin 3.12`
> **Note**: Some extras (local, gstreamer) require system dependencies. See documentation if you encounter build errors.
### Running tests
To run all tests, from the root directory:
```bash
uv run pytest
```
Run a specific test suite:
```bash
uv run pytest tests/test_name.py
```
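Tests follow the standard pytest layout: files named `tests/test_*.py` containing plain `test_*` functions that pytest collects automatically. A hypothetical minimal suite (the file name, `normalize` helper, and test names below are made up for illustration) looks like:

```python
# tests/test_example.py -- hypothetical file name, for illustration only

def normalize(text: str) -> str:
    """Toy function under test: collapse internal whitespace and lowercase."""
    return " ".join(text.split()).lower()

def test_normalize_collapses_whitespace():
    # pytest reports a failure whenever a bare assert is false
    assert normalize("Hello   World") == "hello world"

def test_normalize_strips_edges():
    assert normalize("  x ") == "x"
```

Running `uv run pytest tests/test_example.py -v` would then list each test function and its pass/fail status.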
### Setting up your editor
This project uses strict [PEP 8](https://peps.python.org/pep-0008/) formatting via [Ruff](https://github.com/astral-sh/ruff).
#### Emacs
You can use [use-package](https://github.com/jwiegley/use-package) to install the [emacs-lazy-ruff](https://github.com/christophermadsen/emacs-lazy-ruff) package and configure `ruff` arguments:
```elisp
(use-package lazy-ruff
  :ensure t
  :hook ((python-mode . lazy-ruff-mode))
  :config
  (setq lazy-ruff-format-command "ruff format")
  (setq lazy-ruff-check-command "ruff check --select I"))
```
`ruff` was installed in the `venv` environment described earlier, so you should be able to use [pyvenv-auto](https://github.com/ryotaro612/pyvenv-auto) to load that environment automatically inside Emacs.
```elisp
(use-package pyvenv-auto
  :ensure t
  :defer t
  :hook ((python-mode . pyvenv-auto-run)))
```
#### Visual Studio Code
Install the [Ruff](https://marketplace.visualstudio.com/items?itemName=charliermarsh.ruff) extension. Then edit the user settings (_Ctrl-Shift-P_ `Open User Settings (JSON)`), set it as the default Python formatter, and enable formatting on save:
```json
"[python]": {
  "editor.defaultFormatter": "charliermarsh.ruff",
  "editor.formatOnSave": true
}
```
#### PyCharm
`ruff` was installed in the `venv` environment described earlier. To enable autoformatting on save, go to `File` -> `Settings` -> `Tools` -> `File Watchers` and add a new watcher with the following settings:
1. **Name**: `Ruff formatter`
2. **File type**: `Python`
3. **Working directory**: `$ContentRoot$`
4. **Arguments**: `format $FilePath$`
5. **Program**: `$PyInterpreterDirectory$/ruff`
## 🤝 Contributing
We welcome contributions from the community! Whether you're fixing bugs, improving documentation, or adding new features, here's how you can help:
- **Found a bug?** Open an [issue](https://github.com/pipecat-ai/pipecat/issues)
- **Have a feature idea?** Start a [discussion](https://discord.gg/pipecat)
- **Want to contribute code?** Check our [CONTRIBUTING.md](CONTRIBUTING.md) guide
- **Documentation improvements?** [Docs](https://github.com/pipecat-ai/docs) PRs are always welcome
Before submitting a pull request, please check existing issues and PRs to avoid duplicates.
We aim to review all contributions promptly and provide constructive feedback to help get your changes merged.
## 🛟 Getting help

➡️ [Join our Discord](https://discord.gg/pipecat)

➡️ [Read the docs](https://docs.pipecat.ai)

➡️ [Reach us on X](https://x.com/pipecat_ai)
## Raw data
{
"_id": null,
"home_page": null,
"name": "pipecat-ai",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": null,
"keywords": "webrtc, audio, video, ai",
"author": null,
"author_email": null,
"download_url": "https://files.pythonhosted.org/packages/dc/d8/0bc2854b07562b921ce83a063d0c2d564c62b67d8c1ea5ae941f0b6a8c08/pipecat_ai-0.0.81.tar.gz",
"platform": null,
"bugtrack_url": null,
"license": null,
"summary": "An open source framework for voice (and multimodal) assistants",
"version": "0.0.81",
"project_urls": {
"Source": "https://github.com/pipecat-ai/pipecat",
"Website": "https://pipecat.ai"
},
"split_keywords": [
"webrtc",
" audio",
" video",
" ai"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "a0b4327bd98b5a37486b81abfdfcb3b815259cd3e4fc2e0b4a3d29860ccd543f",
"md5": "4bbff62552b9c07164a2eaa899769abb",
"sha256": "5f974b689b2ff2f471f91abe0c9fe624bc49573205d4aa72767fb6f4960d17d2"
},
"downloads": -1,
"filename": "pipecat_ai-0.0.81-py3-none-any.whl",
"has_sig": false,
"md5_digest": "4bbff62552b9c07164a2eaa899769abb",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 2641374,
"upload_time": "2025-08-25T16:29:51",
"upload_time_iso_8601": "2025-08-25T16:29:51.591982Z",
"url": "https://files.pythonhosted.org/packages/a0/b4/327bd98b5a37486b81abfdfcb3b815259cd3e4fc2e0b4a3d29860ccd543f/pipecat_ai-0.0.81-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "dcd80bc2854b07562b921ce83a063d0c2d564c62b67d8c1ea5ae941f0b6a8c08",
"md5": "388a8d57d145a06988330356aec12c56",
"sha256": "8e3fe130933de9884c6fcfedddeadc853b08ca931a896cf2a279281595fa08d6"
},
"downloads": -1,
"filename": "pipecat_ai-0.0.81.tar.gz",
"has_sig": false,
"md5_digest": "388a8d57d145a06988330356aec12c56",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 2967156,
"upload_time": "2025-08-25T16:29:54",
"upload_time_iso_8601": "2025-08-25T16:29:54.193216Z",
"url": "https://files.pythonhosted.org/packages/dc/d8/0bc2854b07562b921ce83a063d0c2d564c62b67d8c1ea5ae941f0b6a8c08/pipecat_ai-0.0.81.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-25 16:29:54",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "pipecat-ai",
"github_project": "pipecat",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "pipecat-ai"
}