| Name | dv-pipecat-ai |
| Version | 0.0.85.dev820 |
| download | |
| home_page | None |
| Summary | An open source framework for voice (and multimodal) assistants |
| upload_time | 2025-10-23 05:27:56 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.10 |
| license | None |
| keywords | webrtc, audio, video, ai |
| VCS | https://github.com/pipecat-ai/pipecat |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
<h1><div align="center">
<img alt="pipecat" width="300px" height="auto" src="https://raw.githubusercontent.com/pipecat-ai/pipecat/main/pipecat.png">
</div></h1>
[PyPI](https://pypi.org/project/pipecat-ai) | [Codecov](https://codecov.io/gh/pipecat-ai/pipecat) | [Docs](https://docs.pipecat.ai) | [Discord](https://discord.gg/pipecat) | [DeepWiki](https://deepwiki.com/pipecat-ai/pipecat) | [Manta](https://getmanta.ai/pipecat)
# 🎙️ Pipecat: Real-Time Voice & Multimodal AI Agents
**Pipecat** is an open-source Python framework for building real-time voice and multimodal conversational agents. Orchestrate audio and video, AI services, different transports, and conversation pipelines effortlessly, so you can focus on what makes your agent unique.
> Want to dive right in? Try the [quickstart](https://docs.pipecat.ai/getting-started/quickstart).
## 🚀 What You Can Build
- **Voice Assistants** – natural, streaming conversations with AI
- **AI Companions** – coaches, meeting assistants, characters
- **Multimodal Interfaces** – voice, video, images, and more
- **Interactive Storytelling** – creative tools with generative media
- **Business Agents** – customer intake, support bots, guided flows
- **Complex Dialog Systems** – design logic with structured conversations
## 🧠 Why Pipecat?
- **Voice-first**: Integrates speech recognition, text-to-speech, and conversation handling
- **Pluggable**: Supports many AI services and tools
- **Composable Pipelines**: Build complex behavior from modular components (see the sketch after this list)
- **Real-Time**: Ultra-low latency interaction with different transports (e.g. WebSockets or WebRTC)
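To make "composable pipelines" concrete, here is a minimal sketch of a voice agent: audio comes in over a transport, flows through speech-to-text, an LLM, and text-to-speech, and goes back out. The import paths, service classes, and constructor arguments below are assumptions that may differ across versions; treat the [quickstart](https://docs.pipecat.ai/getting-started/quickstart) as the canonical reference.

```python
# Minimal voice-agent sketch (assumed import paths and arguments; check the
# quickstart and the docs for your installed version before copying this).
import asyncio
import os

from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineTask
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.cartesia.tts import CartesiaTTSService
from pipecat.services.deepgram.stt import DeepgramSTTService
from pipecat.services.openai.llm import OpenAILLMService
from pipecat.transports.services.daily import DailyParams, DailyTransport


async def main():
    # A transport carries real-time audio in and out (WebRTC via Daily here).
    transport = DailyTransport(
        os.getenv("DAILY_ROOM_URL"),
        None,  # room token, if your room requires one
        "My Agent",
        DailyParams(audio_in_enabled=True, audio_out_enabled=True),
    )

    # Each stage is a swappable processor: speech-to-text, LLM, text-to-speech.
    stt = DeepgramSTTService(api_key=os.getenv("DEEPGRAM_API_KEY"))
    llm = OpenAILLMService(api_key=os.getenv("OPENAI_API_KEY"))
    tts = CartesiaTTSService(
        api_key=os.getenv("CARTESIA_API_KEY"),
        voice_id="...",  # placeholder: pick a voice from your Cartesia account
    )

    # The context aggregator keeps the running conversation for the LLM.
    context = OpenAILLMContext(
        [{"role": "system", "content": "You are a friendly voice assistant."}]
    )
    context_aggregator = llm.create_context_aggregator(context)

    # Frames flow left to right through the pipeline.
    pipeline = Pipeline(
        [
            transport.input(),
            stt,
            context_aggregator.user(),
            llm,
            tts,
            transport.output(),
            context_aggregator.assistant(),
        ]
    )

    await PipelineRunner().run(PipelineTask(pipeline))


if __name__ == "__main__":
    asyncio.run(main())
```

Because every stage is just another processor, swapping Deepgram for a different STT service, or Daily for a WebSocket transport, is a change to a single entry in the pipeline list.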
## 🌐 Pipecat Ecosystem
### 📱 Client SDKs
Building client applications? You can connect to Pipecat from any platform using our official SDKs:
<a href="https://docs.pipecat.ai/client/js/introduction">JavaScript</a> | <a href="https://docs.pipecat.ai/client/react/introduction">React</a> | <a href="https://docs.pipecat.ai/client/react-native/introduction">React Native</a> |
<a href="https://docs.pipecat.ai/client/ios/introduction">Swift</a> | <a href="https://docs.pipecat.ai/client/android/introduction">Kotlin</a> | <a href="https://docs.pipecat.ai/client/c++/introduction">C++</a> | <a href="https://github.com/pipecat-ai/pipecat-esp32">ESP32</a>
### 🧭 Structured conversations
Looking to build structured conversations? Check out [Pipecat Flows](https://github.com/pipecat-ai/pipecat-flows) for managing complex conversational states and transitions.
### 🪄 Beautiful UIs
Want to build beautiful and engaging experiences? Check out the [Voice UI Kit](https://github.com/pipecat-ai/voice-ui-kit), a collection of components, hooks, and templates for building voice AI applications quickly.
### 🛠️ Create and deploy projects
Create a new project in under a minute with the [Pipecat CLI](https://github.com/pipecat-ai/pipecat-cli). Then use the CLI to monitor and deploy your agent to production.
### 🔍 Debugging
Looking for help debugging your pipeline and processors? Check out [Whisker](https://github.com/pipecat-ai/whisker), a real-time Pipecat debugger.
### 🖥️ Terminal
Love terminal applications? Check out [Tail](https://github.com/pipecat-ai/tail), a terminal dashboard for Pipecat.
### 📺 Pipecat TV Channel
Catch new features, interviews, and how-tos on our [Pipecat TV](https://www.youtube.com/playlist?list=PLzU2zoMTQIHjqC3v4q2XVSR3hGSzwKFwH) channel.
## 🎬 See it in action
<p float="left">
<a href="https://github.com/pipecat-ai/pipecat-examples/tree/main/simple-chatbot"><img src="https://raw.githubusercontent.com/pipecat-ai/pipecat-examples/main/simple-chatbot/image.png" width="400" /></a>
<a href="https://github.com/pipecat-ai/pipecat-examples/tree/main/storytelling-chatbot"><img src="https://raw.githubusercontent.com/pipecat-ai/pipecat-examples/main/storytelling-chatbot/image.png" width="400" /></a>
<br/>
<a href="https://github.com/pipecat-ai/pipecat-examples/tree/main/translation-chatbot"><img src="https://raw.githubusercontent.com/pipecat-ai/pipecat-examples/main/translation-chatbot/image.png" width="400" /></a>
<a href="https://github.com/pipecat-ai/pipecat/blob/main/examples/foundational/12-describe-video.py"><img src="https://github.com/pipecat-ai/pipecat/blob/main/examples/foundational/assets/moondream.png" width="400" /></a>
</p>
## 🧩 Available services
| Category | Services |
| ------------------- | -------- |
| Speech-to-Text | [AssemblyAI](https://docs.pipecat.ai/server/services/stt/assemblyai), [AWS](https://docs.pipecat.ai/server/services/stt/aws), [Azure](https://docs.pipecat.ai/server/services/stt/azure), [Cartesia](https://docs.pipecat.ai/server/services/stt/cartesia), [Deepgram](https://docs.pipecat.ai/server/services/stt/deepgram), [ElevenLabs](https://docs.pipecat.ai/server/services/stt/elevenlabs), [Fal Wizper](https://docs.pipecat.ai/server/services/stt/fal), [Gladia](https://docs.pipecat.ai/server/services/stt/gladia), [Google](https://docs.pipecat.ai/server/services/stt/google), [Groq (Whisper)](https://docs.pipecat.ai/server/services/stt/groq), [NVIDIA Riva](https://docs.pipecat.ai/server/services/stt/riva), [OpenAI (Whisper)](https://docs.pipecat.ai/server/services/stt/openai), [SambaNova (Whisper)](https://docs.pipecat.ai/server/services/stt/sambanova), [Soniox](https://docs.pipecat.ai/server/services/stt/soniox), [Speechmatics](https://docs.pipecat.ai/server/services/stt/speechmatics), [Ultravox](https://docs.pipecat.ai/server/services/stt/ultravox), [Whisper](https://docs.pipecat.ai/server/services/stt/whisper) |
| LLMs | [Anthropic](https://docs.pipecat.ai/server/services/llm/anthropic), [AWS](https://docs.pipecat.ai/server/services/llm/aws), [Azure](https://docs.pipecat.ai/server/services/llm/azure), [Cerebras](https://docs.pipecat.ai/server/services/llm/cerebras), [DeepSeek](https://docs.pipecat.ai/server/services/llm/deepseek), [Fireworks AI](https://docs.pipecat.ai/server/services/llm/fireworks), [Gemini](https://docs.pipecat.ai/server/services/llm/gemini), [Grok](https://docs.pipecat.ai/server/services/llm/grok), [Groq](https://docs.pipecat.ai/server/services/llm/groq), [Mistral](https://docs.pipecat.ai/server/services/llm/mistral), [NVIDIA NIM](https://docs.pipecat.ai/server/services/llm/nim), [Ollama](https://docs.pipecat.ai/server/services/llm/ollama), [OpenAI](https://docs.pipecat.ai/server/services/llm/openai), [OpenRouter](https://docs.pipecat.ai/server/services/llm/openrouter), [Perplexity](https://docs.pipecat.ai/server/services/llm/perplexity), [Qwen](https://docs.pipecat.ai/server/services/llm/qwen), [SambaNova](https://docs.pipecat.ai/server/services/llm/sambanova), [Together AI](https://docs.pipecat.ai/server/services/llm/together) |
| Text-to-Speech | [Async](https://docs.pipecat.ai/server/services/tts/asyncai), [AWS](https://docs.pipecat.ai/server/services/tts/aws), [Azure](https://docs.pipecat.ai/server/services/tts/azure), [Cartesia](https://docs.pipecat.ai/server/services/tts/cartesia), [Deepgram](https://docs.pipecat.ai/server/services/tts/deepgram), [ElevenLabs](https://docs.pipecat.ai/server/services/tts/elevenlabs), [Fish](https://docs.pipecat.ai/server/services/tts/fish), [Google](https://docs.pipecat.ai/server/services/tts/google), [Groq](https://docs.pipecat.ai/server/services/tts/groq), [Hume](https://docs.pipecat.ai/server/services/tts/hume), [Inworld](https://docs.pipecat.ai/server/services/tts/inworld), [LMNT](https://docs.pipecat.ai/server/services/tts/lmnt), [MiniMax](https://docs.pipecat.ai/server/services/tts/minimax), [Neuphonic](https://docs.pipecat.ai/server/services/tts/neuphonic), [NVIDIA Riva](https://docs.pipecat.ai/server/services/tts/riva), [OpenAI](https://docs.pipecat.ai/server/services/tts/openai), [Piper](https://docs.pipecat.ai/server/services/tts/piper), [PlayHT](https://docs.pipecat.ai/server/services/tts/playht), [Rime](https://docs.pipecat.ai/server/services/tts/rime), [Sarvam](https://docs.pipecat.ai/server/services/tts/sarvam), [XTTS](https://docs.pipecat.ai/server/services/tts/xtts) |
| Speech-to-Speech | [AWS Nova Sonic](https://docs.pipecat.ai/server/services/s2s/aws), [Gemini Multimodal Live](https://docs.pipecat.ai/server/services/s2s/gemini), [OpenAI Realtime](https://docs.pipecat.ai/server/services/s2s/openai) |
| Transport | [Daily (WebRTC)](https://docs.pipecat.ai/server/services/transport/daily), [FastAPI Websocket](https://docs.pipecat.ai/server/services/transport/fastapi-websocket), [SmallWebRTCTransport](https://docs.pipecat.ai/server/services/transport/small-webrtc), [WebSocket Server](https://docs.pipecat.ai/server/services/transport/websocket-server), Local |
| Serializers | [Plivo](https://docs.pipecat.ai/server/utilities/serializers/plivo), [Twilio](https://docs.pipecat.ai/server/utilities/serializers/twilio), [Telnyx](https://docs.pipecat.ai/server/utilities/serializers/telnyx) |
| Video | [HeyGen](https://docs.pipecat.ai/server/services/video/heygen), [Tavus](https://docs.pipecat.ai/server/services/video/tavus), [Simli](https://docs.pipecat.ai/server/services/video/simli) |
| Memory | [mem0](https://docs.pipecat.ai/server/services/memory/mem0) |
| Vision & Image | [fal](https://docs.pipecat.ai/server/services/image-generation/fal), [Google Imagen](https://docs.pipecat.ai/server/services/image-generation/fal), [Moondream](https://docs.pipecat.ai/server/services/vision/moondream) |
| Audio Processing | [Silero VAD](https://docs.pipecat.ai/server/utilities/audio/silero-vad-analyzer), [Krisp](https://docs.pipecat.ai/server/utilities/audio/krisp-filter), [Koala](https://docs.pipecat.ai/server/utilities/audio/koala-filter), [ai-coustics](https://docs.pipecat.ai/server/utilities/audio/aic-filter) |
| Analytics & Metrics | [OpenTelemetry](https://docs.pipecat.ai/server/utilities/opentelemetry), [Sentry](https://docs.pipecat.ai/server/services/analytics/sentry) |
📚 [View full services documentation →](https://docs.pipecat.ai/server/services/supported-services)
## ⚡ Getting started
You can get started with Pipecat running on your local machine, then move your agent processes to the cloud when you're ready.
1. Install uv
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
> **Need help?** Refer to the [uv install documentation](https://docs.astral.sh/uv/getting-started/installation/).
2. Install the module
```bash
# For new projects
uv init my-pipecat-app
cd my-pipecat-app
uv add pipecat-ai
# Or for existing projects
uv add pipecat-ai
```
3. Set up your environment (the snippet after these steps shows how the example apps load it)
```bash
cp env.example .env
```
4. To keep things lightweight, only the core framework is included by default. If you need support for third-party AI services, you can add the necessary dependencies with:
```bash
uv add "pipecat-ai[option,...]"
```
> **Using pip?** You can still use `pip install pipecat-ai` and `pip install "pipecat-ai[option,...]"` to get set up.
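As a side note on step 3: the `.env` file holds your service API keys, and Pipecat's example apps typically load it at startup with python-dotenv. A minimal sketch of that pattern (python-dotenv is an assumed, separately installed dependency):

```python
# Load API keys from .env into the process environment
# (assumes python-dotenv is installed, e.g. `uv add python-dotenv`).
import os

from dotenv import load_dotenv

load_dotenv(override=True)
print("OpenAI key configured:", bool(os.getenv("OPENAI_API_KEY")))
```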
## 🧪 Code examples
- [Foundational](https://github.com/pipecat-ai/pipecat/tree/main/examples/foundational) – small snippets that build on each other, introducing one or two concepts at a time
- [Example apps](https://github.com/pipecat-ai/pipecat-examples) – complete applications that you can use as starting points for development
## 🛠️ Contributing to the framework
### Prerequisites
**Minimum Python Version:** 3.10
**Recommended Python Version:** 3.12
### Setup Steps
1. Clone the repository and navigate to it:
```bash
git clone https://github.com/pipecat-ai/pipecat.git
cd pipecat
```
2. Install development and testing dependencies:
```bash
uv sync --group dev --all-extras \
--no-extra gstreamer \
--no-extra krisp \
--no-extra local \
--no-extra ultravox # (ultravox not fully supported on macOS)
```
3. Install the git pre-commit hooks:
```bash
uv run pre-commit install
```
> **Note**: Some extras (local, gstreamer) require system dependencies. See documentation if you encounter build errors.
### Running tests
To run all tests, from the root directory:
```bash
uv run pytest
```
Run a specific test suite:
```bash
uv run pytest tests/test_name.py
```
## 🤝 Contributing
We welcome contributions from the community! Whether you're fixing bugs, improving documentation, or adding new features, here's how you can help:
- **Found a bug?** Open an [issue](https://github.com/pipecat-ai/pipecat/issues)
- **Have a feature idea?** Start a [discussion](https://discord.gg/pipecat)
- **Want to contribute code?** Check our [CONTRIBUTING.md](CONTRIBUTING.md) guide
- **Documentation improvements?** [Docs](https://github.com/pipecat-ai/docs) PRs are always welcome
Before submitting a pull request, please check existing issues and PRs to avoid duplicates.
We aim to review all contributions promptly and provide constructive feedback to help get your changes merged.
## 🛟 Getting help
➡️ [Join our Discord](https://discord.gg/pipecat)
➡️ [Read the docs](https://docs.pipecat.ai)
➡️ [Reach us on X](https://x.com/pipecat_ai)
Raw data
{
"_id": null,
"home_page": null,
"name": "dv-pipecat-ai",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": null,
"keywords": "webrtc, audio, video, ai",
"author": null,
"author_email": null,
"download_url": "https://files.pythonhosted.org/packages/f3/4e/96b526a766bce28c980ca0d214baa12a7ce705ef5de7e59c49b9026a47f0/dv_pipecat_ai-0.0.85.dev820.tar.gz",
"platform": null,
"description": "<h1><div align=\"center\">\n <img alt=\"pipecat\" width=\"300px\" height=\"auto\" src=\"https://raw.githubusercontent.com/pipecat-ai/pipecat/main/pipecat.png\">\n</div></h1>\n\n[](https://pypi.org/project/pipecat-ai)  [](https://codecov.io/gh/pipecat-ai/pipecat) [](https://docs.pipecat.ai) [](https://discord.gg/pipecat) [](https://deepwiki.com/pipecat-ai/pipecat)\n[](https://getmanta.ai/pipecat)\n\n# \ud83c\udf99\ufe0f Pipecat: Real-Time Voice & Multimodal AI Agents\n\n**Pipecat** is an open-source Python framework for building real-time voice and multimodal conversational agents. Orchestrate audio and video, AI services, different transports, and conversation pipelines effortlessly\u2014so you can focus on what makes your agent unique.\n\n> Want to dive right in? Try the [quickstart](https://docs.pipecat.ai/getting-started/quickstart).\n\n## \ud83d\ude80 What You Can Build\n\n- **Voice Assistants** \u2013 natural, streaming conversations with AI\n- **AI Companions** \u2013 coaches, meeting assistants, characters\n- **Multimodal Interfaces** \u2013 voice, video, images, and more\n- **Interactive Storytelling** \u2013 creative tools with generative media\n- **Business Agents** \u2013 customer intake, support bots, guided flows\n- **Complex Dialog Systems** \u2013 design logic with structured conversations\n\n## \ud83e\udde0 Why Pipecat?\n\n- **Voice-first**: Integrates speech recognition, text-to-speech, and conversation handling\n- **Pluggable**: Supports many AI services and tools\n- **Composable Pipelines**: Build complex behavior from modular components\n- **Real-Time**: Ultra-low latency interaction with different transports (e.g. WebSockets or WebRTC)\n\n## \ud83c\udf10 Pipecat Ecosystem\n\n### \ud83d\udcf1 Client SDKs\n\nBuilding client applications? You can connect to Pipecat from any platform using our official SDKs:\n\n<a href=\"https://docs.pipecat.ai/client/js/introduction\">JavaScript</a> | <a href=\"https://docs.pipecat.ai/client/react/introduction\">React</a> | <a href=\"https://docs.pipecat.ai/client/react-native/introduction\">React Native</a> |\n<a href=\"https://docs.pipecat.ai/client/ios/introduction\">Swift</a> | <a href=\"https://docs.pipecat.ai/client/android/introduction\">Kotlin</a> | <a href=\"https://docs.pipecat.ai/client/c++/introduction\">C++</a> | <a href=\"https://github.com/pipecat-ai/pipecat-esp32\">ESP32</a>\n\n### \ud83e\udded Structured conversations\n\nLooking to build structured conversations? Check out [Pipecat Flows](https://github.com/pipecat-ai/pipecat-flows) for managing complex conversational states and transitions.\n\n### \ud83e\ude84 Beautiful UIs\n\nWant to build beautiful and engaging experiences? Checkout the [Voice UI Kit](https://github.com/pipecat-ai/voice-ui-kit), a collection of components, hooks and templates for building voice AI applications quickly.\n\n### \ud83d\udee0\ufe0f Create and deploy projects\n\nCreate a new project in under a minute with the [Pipecat CLI](https://github.com/pipecat-ai/pipecat-cli). Then use the CLI to monitor and deploy your agent to production.\n\n### \ud83d\udd0d Debugging\n\nLooking for help debugging your pipeline and processors? Check out [Whisker](https://github.com/pipecat-ai/whisker), a real-time Pipecat debugger.\n\n### \ud83d\udda5\ufe0f Terminal\n\nLove terminal applications? 
Check out [Tail](https://github.com/pipecat-ai/tail), a terminal dashboard for Pipecat.\n\n### \ud83d\udcfa\ufe0f Pipecat TV Channel\n\nCatch new features, interviews, and how-tos on our [Pipecat TV](https://www.youtube.com/playlist?list=PLzU2zoMTQIHjqC3v4q2XVSR3hGSzwKFwH) channel.\n\n## \ud83c\udfac See it in action\n\n<p float=\"left\">\n <a href=\"https://github.com/pipecat-ai/pipecat-examples/tree/main/simple-chatbot\"><img src=\"https://raw.githubusercontent.com/pipecat-ai/pipecat-examples/main/simple-chatbot/image.png\" width=\"400\" /></a> \n <a href=\"https://github.com/pipecat-ai/pipecat-examples/tree/main/storytelling-chatbot\"><img src=\"https://raw.githubusercontent.com/pipecat-ai/pipecat-examples/main/storytelling-chatbot/image.png\" width=\"400\" /></a>\n <br/>\n <a href=\"https://github.com/pipecat-ai/pipecat-examples/tree/main/translation-chatbot\"><img src=\"https://raw.githubusercontent.com/pipecat-ai/pipecat-examples/main/translation-chatbot/image.png\" width=\"400\" /></a> \n <a href=\"https://github.com/pipecat-ai/pipecat/blob/main/examples/foundational/12-describe-video.py\"><img src=\"https://github.com/pipecat-ai/pipecat/blob/main/examples/foundational/assets/moondream.png\" width=\"400\" /></a>\n</p>\n\n## \ud83e\udde9 Available services\n\n| Category | Services |\n| ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Speech-to-Text | [AssemblyAI](https://docs.pipecat.ai/server/services/stt/assemblyai), [AWS](https://docs.pipecat.ai/server/services/stt/aws), [Azure](https://docs.pipecat.ai/server/services/stt/azure), [Cartesia](https://docs.pipecat.ai/server/services/stt/cartesia), [Deepgram](https://docs.pipecat.ai/server/services/stt/deepgram), [ElevenLabs](https://docs.pipecat.ai/server/services/stt/elevenlabs), [Fal Wizper](https://docs.pipecat.ai/server/services/stt/fal), [Gladia](https://docs.pipecat.ai/server/services/stt/gladia), [Google](https://docs.pipecat.ai/server/services/stt/google), [Groq (Whisper)](https://docs.pipecat.ai/server/services/stt/groq), [NVIDIA Riva](https://docs.pipecat.ai/server/services/stt/riva), [OpenAI (Whisper)](https://docs.pipecat.ai/server/services/stt/openai), [SambaNova (Whisper)](https://docs.pipecat.ai/server/services/stt/sambanova), 
[Soniox](https://docs.pipecat.ai/server/services/stt/soniox), [Speechmatics](https://docs.pipecat.ai/server/services/stt/speechmatics), [Ultravox](https://docs.pipecat.ai/server/services/stt/ultravox), [Whisper](https://docs.pipecat.ai/server/services/stt/whisper) |\n| LLMs | [Anthropic](https://docs.pipecat.ai/server/services/llm/anthropic), [AWS](https://docs.pipecat.ai/server/services/llm/aws), [Azure](https://docs.pipecat.ai/server/services/llm/azure), [Cerebras](https://docs.pipecat.ai/server/services/llm/cerebras), [DeepSeek](https://docs.pipecat.ai/server/services/llm/deepseek), [Fireworks AI](https://docs.pipecat.ai/server/services/llm/fireworks), [Gemini](https://docs.pipecat.ai/server/services/llm/gemini), [Grok](https://docs.pipecat.ai/server/services/llm/grok), [Groq](https://docs.pipecat.ai/server/services/llm/groq), [Mistral](https://docs.pipecat.ai/server/services/llm/mistral), [NVIDIA NIM](https://docs.pipecat.ai/server/services/llm/nim), [Ollama](https://docs.pipecat.ai/server/services/llm/ollama), [OpenAI](https://docs.pipecat.ai/server/services/llm/openai), [OpenRouter](https://docs.pipecat.ai/server/services/llm/openrouter), [Perplexity](https://docs.pipecat.ai/server/services/llm/perplexity), [Qwen](https://docs.pipecat.ai/server/services/llm/qwen), [SambaNova](https://docs.pipecat.ai/server/services/llm/sambanova) [Together AI](https://docs.pipecat.ai/server/services/llm/together) |\n| Text-to-Speech | [Async](https://docs.pipecat.ai/server/services/tts/asyncai), [AWS](https://docs.pipecat.ai/server/services/tts/aws), [Azure](https://docs.pipecat.ai/server/services/tts/azure), [Cartesia](https://docs.pipecat.ai/server/services/tts/cartesia), [Deepgram](https://docs.pipecat.ai/server/services/tts/deepgram), [ElevenLabs](https://docs.pipecat.ai/server/services/tts/elevenlabs), [Fish](https://docs.pipecat.ai/server/services/tts/fish), [Google](https://docs.pipecat.ai/server/services/tts/google), [Groq](https://docs.pipecat.ai/server/services/tts/groq), [Hume](https://docs.pipecat.ai/server/services/tts/hume), [Inworld](https://docs.pipecat.ai/server/services/tts/inworld), [LMNT](https://docs.pipecat.ai/server/services/tts/lmnt), [MiniMax](https://docs.pipecat.ai/server/services/tts/minimax), [Neuphonic](https://docs.pipecat.ai/server/services/tts/neuphonic), [NVIDIA Riva](https://docs.pipecat.ai/server/services/tts/riva), [OpenAI](https://docs.pipecat.ai/server/services/tts/openai), [Piper](https://docs.pipecat.ai/server/services/tts/piper), [PlayHT](https://docs.pipecat.ai/server/services/tts/playht), [Rime](https://docs.pipecat.ai/server/services/tts/rime), [Sarvam](https://docs.pipecat.ai/server/services/tts/sarvam), [XTTS](https://docs.pipecat.ai/server/services/tts/xtts) |\n| Speech-to-Speech | [AWS Nova Sonic](https://docs.pipecat.ai/server/services/s2s/aws), [Gemini Multimodal Live](https://docs.pipecat.ai/server/services/s2s/gemini), [OpenAI Realtime](https://docs.pipecat.ai/server/services/s2s/openai) |\n| Transport | [Daily (WebRTC)](https://docs.pipecat.ai/server/services/transport/daily), [FastAPI Websocket](https://docs.pipecat.ai/server/services/transport/fastapi-websocket), [SmallWebRTCTransport](https://docs.pipecat.ai/server/services/transport/small-webrtc), [WebSocket Server](https://docs.pipecat.ai/server/services/transport/websocket-server), Local |\n| Serializers | [Plivo](https://docs.pipecat.ai/server/utilities/serializers/plivo), [Twilio](https://docs.pipecat.ai/server/utilities/serializers/twilio), 
[Telnyx](https://docs.pipecat.ai/server/utilities/serializers/telnyx) |\n| Video | [HeyGen](https://docs.pipecat.ai/server/services/video/heygen), [Tavus](https://docs.pipecat.ai/server/services/video/tavus), [Simli](https://docs.pipecat.ai/server/services/video/simli) |\n| Memory | [mem0](https://docs.pipecat.ai/server/services/memory/mem0) |\n| Vision & Image | [fal](https://docs.pipecat.ai/server/services/image-generation/fal), [Google Imagen](https://docs.pipecat.ai/server/services/image-generation/fal), [Moondream](https://docs.pipecat.ai/server/services/vision/moondream) |\n| Audio Processing | [Silero VAD](https://docs.pipecat.ai/server/utilities/audio/silero-vad-analyzer), [Krisp](https://docs.pipecat.ai/server/utilities/audio/krisp-filter), [Koala](https://docs.pipecat.ai/server/utilities/audio/koala-filter), [ai-coustics](https://docs.pipecat.ai/server/utilities/audio/aic-filter) |\n| Analytics & Metrics | [OpenTelemetry](https://docs.pipecat.ai/server/utilities/opentelemetry), [Sentry](https://docs.pipecat.ai/server/services/analytics/sentry) |\n\n\ud83d\udcda [View full services documentation \u2192](https://docs.pipecat.ai/server/services/supported-services)\n\n## \u26a1 Getting started\n\nYou can get started with Pipecat running on your local machine, then move your agent processes to the cloud when you're ready.\n\n1. Install uv\n\n ```bash\n curl -LsSf https://astral.sh/uv/install.sh | sh\n ```\n\n > **Need help?** Refer to the [uv install documentation](https://docs.astral.sh/uv/getting-started/installation/).\n\n2. Install the module\n\n ```bash\n # For new projects\n uv init my-pipecat-app\n cd my-pipecat-app\n uv add pipecat-ai\n\n # Or for existing projects\n uv add pipecat-ai\n ```\n\n3. Set up your environment\n\n ```bash\n cp env.example .env\n ```\n\n4. To keep things lightweight, only the core framework is included by default. If you need support for third-party AI services, you can add the necessary dependencies with:\n\n ```bash\n uv add \"pipecat-ai[option,...]\"\n ```\n\n> **Using pip?** You can still use `pip install pipecat-ai` and `pip install \"pipecat-ai[option,...]\"` to get set up.\n\n## \ud83e\uddea Code examples\n\n- [Foundational](https://github.com/pipecat-ai/pipecat/tree/main/examples/foundational) \u2014 small snippets that build on each other, introducing one or two concepts at a time\n- [Example apps](https://github.com/pipecat-ai/pipecat-examples) \u2014 complete applications that you can use as starting points for development\n\n## \ud83d\udee0\ufe0f Contributing to the framework\n\n### Prerequisites\n\n**Minimum Python Version:** 3.10\n**Recommended Python Version:** 3.12\n\n### Setup Steps\n\n1. Clone the repository and navigate to it:\n\n ```bash\n git clone https://github.com/pipecat-ai/pipecat.git\n cd pipecat\n ```\n\n2. Install development and testing dependencies:\n\n ```bash\n uv sync --group dev --all-extras \\\n --no-extra gstreamer \\\n --no-extra krisp \\\n --no-extra local \\\n --no-extra ultravox # (ultravox not fully supported on macOS)\n ```\n\n3. Install the git pre-commit hooks:\n\n ```bash\n uv run pre-commit install\n ```\n\n> **Note**: Some extras (local, gstreamer) require system dependencies. See documentation if you encounter build errors.\n\n### Running tests\n\nTo run all tests, from the root directory:\n\n```bash\nuv run pytest\n```\n\nRun a specific test suite:\n\n```bash\nuv run pytest tests/test_name.py\n```\n\n## \ud83e\udd1d Contributing\n\nWe welcome contributions from the community! 
Whether you're fixing bugs, improving documentation, or adding new features, here's how you can help:\n\n- **Found a bug?** Open an [issue](https://github.com/pipecat-ai/pipecat/issues)\n- **Have a feature idea?** Start a [discussion](https://discord.gg/pipecat)\n- **Want to contribute code?** Check our [CONTRIBUTING.md](CONTRIBUTING.md) guide\n- **Documentation improvements?** [Docs](https://github.com/pipecat-ai/docs) PRs are always welcome\n\nBefore submitting a pull request, please check existing issues and PRs to avoid duplicates.\n\nWe aim to review all contributions promptly and provide constructive feedback to help get your changes merged.\n\n## \ud83d\udedf Getting help\n\n\u27a1\ufe0f [Join our Discord](https://discord.gg/pipecat)\n\n\u27a1\ufe0f [Read the docs](https://docs.pipecat.ai)\n\n\u27a1\ufe0f [Reach us on X](https://x.com/pipecat_ai)\n",
"bugtrack_url": null,
"license": null,
"summary": "An open source framework for voice (and multimodal) assistants",
"version": "0.0.85.dev820",
"project_urls": {
"Source": "https://github.com/pipecat-ai/pipecat",
"Website": "https://pipecat.ai"
},
"split_keywords": [
"webrtc",
" audio",
" video",
" ai"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "1f18c19ca6af3c2962bb79338c5376acf9e472c671b0c00ce7cde1bac47ba676",
"md5": "80c78d513d6dd8ff4c6e53f05c427a37",
"sha256": "3a07b57c4209c8ed12c5217c54fd40493ee641cd85b72167931b77705b7f49e3"
},
"downloads": -1,
"filename": "dv_pipecat_ai-0.0.85.dev820-py3-none-any.whl",
"has_sig": false,
"md5_digest": "80c78d513d6dd8ff4c6e53f05c427a37",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 12372180,
"upload_time": "2025-10-23T05:27:54",
"upload_time_iso_8601": "2025-10-23T05:27:54.411415Z",
"url": "https://files.pythonhosted.org/packages/1f/18/c19ca6af3c2962bb79338c5376acf9e472c671b0c00ce7cde1bac47ba676/dv_pipecat_ai-0.0.85.dev820-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "f34e96b526a766bce28c980ca0d214baa12a7ce705ef5de7e59c49b9026a47f0",
"md5": "a1cdbec020c80e0385a44313bc1b33e2",
"sha256": "e4db9615ce310b587f240e367f793db808fbc1f23532b695c001e1b09066fe48"
},
"downloads": -1,
"filename": "dv_pipecat_ai-0.0.85.dev820.tar.gz",
"has_sig": false,
"md5_digest": "a1cdbec020c80e0385a44313bc1b33e2",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 12711624,
"upload_time": "2025-10-23T05:27:56",
"upload_time_iso_8601": "2025-10-23T05:27:56.955443Z",
"url": "https://files.pythonhosted.org/packages/f3/4e/96b526a766bce28c980ca0d214baa12a7ce705ef5de7e59c49b9026a47f0/dv_pipecat_ai-0.0.85.dev820.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-10-23 05:27:56",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "pipecat-ai",
"github_project": "pipecat",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "dv-pipecat-ai"
}