| Field | Value |
| --------------- | -------------------------------------------------------------- |
| Name | dv-pipecat-ai |
| Version | 0.0.74.dev532 |
| Summary | An open source framework for voice (and multimodal) assistants |
| Upload time | 2025-07-08 18:31:15 |
| Requires Python | >=3.10 |
| Keywords | webrtc, audio, video, ai |
<h1><div align="center">
<img alt="pipecat" width="300px" height="auto" src="https://raw.githubusercontent.com/pipecat-ai/pipecat/main/pipecat.png">
</div></h1>
[PyPI](https://pypi.org/project/pipecat-ai) · [Codecov](https://codecov.io/gh/pipecat-ai/pipecat) · [Docs](https://docs.pipecat.ai) · [Discord](https://discord.gg/pipecat)
# 🎙️ Pipecat: Real-Time Voice & Multimodal AI Agents
**Pipecat** is an open-source Python framework for building real-time voice and multimodal conversational agents. Orchestrate audio and video, AI services, different transports, and conversation pipelines effortlessly—so you can focus on what makes your agent unique.
> Want to dive right in? [Install Pipecat](https://docs.pipecat.ai/getting-started/installation) then try the [quickstart](https://docs.pipecat.ai/getting-started/quickstart).
## 🚀 What You Can Build
- **Voice Assistants** – natural, streaming conversations with AI
- **AI Companions** – coaches, meeting assistants, characters
- **Multimodal Interfaces** – voice, video, images, and more
- **Interactive Storytelling** – creative tools with generative media
- **Business Agents** – customer intake, support bots, guided flows
- **Complex Dialog Systems** – design logic with structured conversations
🧭 Looking to build structured conversations? Check out [Pipecat Flows](https://github.com/pipecat-ai/pipecat-flows) for managing complex conversational states and transitions.
## 🧠 Why Pipecat?
- **Voice-first**: Integrates speech recognition, text-to-speech, and conversation handling
- **Pluggable**: Supports many AI services and tools
- **Composable Pipelines**: Build complex behavior from modular components
- **Real-Time**: Ultra-low latency interaction with different transports (e.g. WebSockets or WebRTC)
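The composable-pipeline idea can be illustrated without installing anything: frames flow through a chain of processors, and each processor is a small, swappable unit. The sketch below is purely illustrative; `TextFrame`, `FrameProcessor`, and `Pipeline` here are simplified stand-ins, not Pipecat's actual classes.

```python
from dataclasses import dataclass


@dataclass
class TextFrame:
    """A minimal frame carrying text through the pipeline."""
    text: str


class FrameProcessor:
    """Base processor: receives a frame, returns a (possibly transformed) frame."""

    def process(self, frame: TextFrame) -> TextFrame:
        return frame


class AddGreeting(FrameProcessor):
    def process(self, frame: TextFrame) -> TextFrame:
        return TextFrame(f"Hello, {frame.text}!")


class Uppercase(FrameProcessor):
    def process(self, frame: TextFrame) -> TextFrame:
        return TextFrame(frame.text.upper())


class Pipeline:
    """Pushes each frame through the processors in order."""

    def __init__(self, processors: list[FrameProcessor]):
        self.processors = processors

    def run(self, frame: TextFrame) -> TextFrame:
        for processor in self.processors:
            frame = processor.process(frame)
        return frame


pipeline = Pipeline([AddGreeting(), Uppercase()])
result = pipeline.run(TextFrame("world"))
print(result.text)  # HELLO, WORLD!
```

In Pipecat proper, the same shape appears with real components (transport input, STT, LLM, TTS, transport output) wired into a pipeline, so swapping one AI service for another means swapping one processor.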
## 🎬 See it in action
<p float="left">
<a href="https://github.com/pipecat-ai/pipecat/tree/main/examples/simple-chatbot"><img src="https://raw.githubusercontent.com/pipecat-ai/pipecat/main/examples/simple-chatbot/image.png" width="400" /></a>
<a href="https://github.com/pipecat-ai/pipecat/tree/main/examples/storytelling-chatbot"><img src="https://raw.githubusercontent.com/pipecat-ai/pipecat/main/examples/storytelling-chatbot/image.png" width="400" /></a>
<br/>
<a href="https://github.com/pipecat-ai/pipecat/tree/main/examples/translation-chatbot"><img src="https://raw.githubusercontent.com/pipecat-ai/pipecat/main/examples/translation-chatbot/image.png" width="400" /></a>
<a href="https://github.com/pipecat-ai/pipecat/tree/main/examples/moondream-chatbot"><img src="https://raw.githubusercontent.com/pipecat-ai/pipecat/main/examples/moondream-chatbot/image.png" width="400" /></a>
</p>
## 📱 Client SDKs
You can connect to Pipecat from any platform using our official SDKs:
| Platform | SDK Repo | Description |
| -------- | ------------------------------------------------------------------------------ | -------------------------------- |
| Web | [pipecat-client-web](https://github.com/pipecat-ai/pipecat-client-web) | JavaScript and React client SDKs |
| iOS | [pipecat-client-ios](https://github.com/pipecat-ai/pipecat-client-ios) | Swift SDK for iOS |
| Android | [pipecat-client-android](https://github.com/pipecat-ai/pipecat-client-android) | Kotlin SDK for Android |
| C++ | [pipecat-client-cxx](https://github.com/pipecat-ai/pipecat-client-cxx) | C++ client SDK |
## 🧩 Available services
| Category | Services |
| ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Speech-to-Text | [AssemblyAI](https://docs.pipecat.ai/server/services/stt/assemblyai), [AWS](https://docs.pipecat.ai/server/services/stt/aws), [Azure](https://docs.pipecat.ai/server/services/stt/azure), [Cartesia](https://docs.pipecat.ai/server/services/stt/cartesia), [Deepgram](https://docs.pipecat.ai/server/services/stt/deepgram), [Fal Wizper](https://docs.pipecat.ai/server/services/stt/fal), [Gladia](https://docs.pipecat.ai/server/services/stt/gladia), [Google](https://docs.pipecat.ai/server/services/stt/google), [Groq (Whisper)](https://docs.pipecat.ai/server/services/stt/groq), [OpenAI (Whisper)](https://docs.pipecat.ai/server/services/stt/openai), [Parakeet (NVIDIA)](https://docs.pipecat.ai/server/services/stt/parakeet), [SambaNova (Whisper)](https://docs.pipecat.ai/server/services/stt/sambanova), [Ultravox](https://docs.pipecat.ai/server/services/stt/ultravox), [Whisper](https://docs.pipecat.ai/server/services/stt/whisper) |
| LLMs | [Anthropic](https://docs.pipecat.ai/server/services/llm/anthropic), [AWS](https://docs.pipecat.ai/server/services/llm/aws), [Azure](https://docs.pipecat.ai/server/services/llm/azure), [Cerebras](https://docs.pipecat.ai/server/services/llm/cerebras), [DeepSeek](https://docs.pipecat.ai/server/services/llm/deepseek), [Fireworks AI](https://docs.pipecat.ai/server/services/llm/fireworks), [Gemini](https://docs.pipecat.ai/server/services/llm/gemini), [Grok](https://docs.pipecat.ai/server/services/llm/grok), [Groq](https://docs.pipecat.ai/server/services/llm/groq), [NVIDIA NIM](https://docs.pipecat.ai/server/services/llm/nim), [Ollama](https://docs.pipecat.ai/server/services/llm/ollama), [OpenAI](https://docs.pipecat.ai/server/services/llm/openai), [OpenRouter](https://docs.pipecat.ai/server/services/llm/openrouter), [Perplexity](https://docs.pipecat.ai/server/services/llm/perplexity), [Qwen](https://docs.pipecat.ai/server/services/llm/qwen), [SambaNova](https://docs.pipecat.ai/server/services/llm/sambanova), [Together AI](https://docs.pipecat.ai/server/services/llm/together) |
| Text-to-Speech | [AWS](https://docs.pipecat.ai/server/services/tts/aws), [Azure](https://docs.pipecat.ai/server/services/tts/azure), [Cartesia](https://docs.pipecat.ai/server/services/tts/cartesia), [Deepgram](https://docs.pipecat.ai/server/services/tts/deepgram), [ElevenLabs](https://docs.pipecat.ai/server/services/tts/elevenlabs), [FastPitch (NVIDIA)](https://docs.pipecat.ai/server/services/tts/fastpitch), [Fish](https://docs.pipecat.ai/server/services/tts/fish), [Google](https://docs.pipecat.ai/server/services/tts/google), [LMNT](https://docs.pipecat.ai/server/services/tts/lmnt), [MiniMax](https://docs.pipecat.ai/server/services/tts/minimax), [Neuphonic](https://docs.pipecat.ai/server/services/tts/neuphonic), [OpenAI](https://docs.pipecat.ai/server/services/tts/openai), [Piper](https://docs.pipecat.ai/server/services/tts/piper), [PlayHT](https://docs.pipecat.ai/server/services/tts/playht), [Rime](https://docs.pipecat.ai/server/services/tts/rime), [Sarvam](https://docs.pipecat.ai/server/services/tts/sarvam), [XTTS](https://docs.pipecat.ai/server/services/tts/xtts) |
| Speech-to-Speech | [AWS Nova Sonic](https://docs.pipecat.ai/server/services/s2s/aws), [Gemini Multimodal Live](https://docs.pipecat.ai/server/services/s2s/gemini), [OpenAI Realtime](https://docs.pipecat.ai/server/services/s2s/openai) |
| Transport | [Daily (WebRTC)](https://docs.pipecat.ai/server/services/transport/daily), [FastAPI Websocket](https://docs.pipecat.ai/server/services/transport/fastapi-websocket), [SmallWebRTCTransport](https://docs.pipecat.ai/server/services/transport/small-webrtc), [WebSocket Server](https://docs.pipecat.ai/server/services/transport/websocket-server), Local |
| Serializers | [Plivo](https://docs.pipecat.ai/server/utilities/serializers/plivo), [Twilio](https://docs.pipecat.ai/server/utilities/serializers/twilio), [Telnyx](https://docs.pipecat.ai/server/utilities/serializers/telnyx) |
| Video | [Tavus](https://docs.pipecat.ai/server/services/video/tavus), [Simli](https://docs.pipecat.ai/server/services/video/simli) |
| Memory | [mem0](https://docs.pipecat.ai/server/services/memory/mem0) |
| Vision & Image | [fal](https://docs.pipecat.ai/server/services/image-generation/fal), [Google Imagen](https://docs.pipecat.ai/server/services/image-generation/fal), [Moondream](https://docs.pipecat.ai/server/services/vision/moondream) |
| Audio Processing | [Silero VAD](https://docs.pipecat.ai/server/utilities/audio/silero-vad-analyzer), [Krisp](https://docs.pipecat.ai/server/utilities/audio/krisp-filter), [Koala](https://docs.pipecat.ai/server/utilities/audio/koala-filter), [Noisereduce](https://docs.pipecat.ai/server/utilities/audio/noisereduce-filter) |
| Analytics & Metrics | [OpenTelemetry](https://docs.pipecat.ai/server/utilities/opentelemetry), [Sentry](https://docs.pipecat.ai/server/services/analytics/sentry) |
📚 [View full services documentation →](https://docs.pipecat.ai/server/services/supported-services)
## ⚡ Getting started
You can get started with Pipecat running on your local machine, then move your agent processes to the cloud when you’re ready.
```shell
# Install the module
pip install pipecat-ai
# Set up your environment
cp dot-env.template .env
```
To keep the install lightweight, only the core framework is included by default. If you need support for third-party AI services, add the corresponding optional dependencies:
```shell
pip install "pipecat-ai[option,...]"
```
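To see which optional service dependencies are already importable in your environment, a small helper like the following can be handy. This sketch is not part of Pipecat itself, and the mapping from extra names to module names below is an assumption for illustration.

```python
from importlib.util import find_spec

# Assumed mapping: extra name -> module that the extra installs.
# Adjust to match the extras you actually use.
OPTIONAL_MODULES = {
    "openai": "openai",
    "deepgram": "deepgram",
    "silero": "torch",
}


def available_services(modules: dict[str, str] = OPTIONAL_MODULES) -> set[str]:
    """Return the subset of optional services whose module can be imported."""
    return {name for name, module in modules.items() if find_spec(module) is not None}


print(sorted(available_services()))
```

The output depends on what you have installed; a missing module simply drops out of the set rather than raising an import error.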
## 🧪 Code examples
- [Foundational](https://github.com/pipecat-ai/pipecat/tree/main/examples/foundational) — small snippets that build on each other, introducing one or two concepts at a time
- [Example apps](https://github.com/pipecat-ai/pipecat/tree/main/examples/) — complete applications that you can use as starting points for development
## 🛠️ Hacking on the framework itself
1. Set up a virtual environment before following these instructions. From the root of the repo:
```shell
python3 -m venv venv
source venv/bin/activate
```
2. Install the development dependencies:
```shell
pip install -r dev-requirements.txt
```
3. Install the git pre-commit hooks (these help ensure your code follows project rules):
```shell
pre-commit install
```
4. Install the `pipecat-ai` package locally in editable mode:
```shell
pip install -e .
```
> The `-e` or `--editable` option allows you to modify the code without reinstalling.
5. Include optional dependencies as needed. For example:
```shell
pip install -e ".[daily,deepgram,cartesia,openai,silero]"
```
6. (Optional) If you want to use this package from another directory:
```shell
pip install "path_to_this_repo[option,...]"
```
### Running tests
Install the test dependencies:
```shell
pip install -r test-requirements.txt
```
From the root directory, run:
```shell
pytest
```
### Setting up your editor
This project uses strict [PEP 8](https://peps.python.org/pep-0008/) formatting via [Ruff](https://github.com/astral-sh/ruff).
#### Emacs
You can use [use-package](https://github.com/jwiegley/use-package) to install the [emacs-lazy-ruff](https://github.com/christophermadsen/emacs-lazy-ruff) package and configure the `ruff` arguments:
```elisp
(use-package lazy-ruff
:ensure t
:hook ((python-mode . lazy-ruff-mode))
:config
(setq lazy-ruff-format-command "ruff format")
(setq lazy-ruff-check-command "ruff check --select I"))
```
Since `ruff` is installed in the `venv` created above, you can use [pyvenv-auto](https://github.com/ryotaro612/pyvenv-auto) to load that environment automatically inside Emacs:
```elisp
(use-package pyvenv-auto
:ensure t
:defer t
:hook ((python-mode . pyvenv-auto-run)))
```
#### Visual Studio Code
Install the [Ruff](https://marketplace.visualstudio.com/items?itemName=charliermarsh.ruff) extension. Then open the user settings (_Ctrl-Shift-P_ → `Open User Settings (JSON)`), set Ruff as the default Python formatter, and enable formatting on save:
```json
"[python]": {
"editor.defaultFormatter": "charliermarsh.ruff",
"editor.formatOnSave": true
}
```
#### PyCharm
`ruff` is installed in the `venv` described above. To enable autoformatting on save, go to `File` -> `Settings` -> `Tools` -> `File Watchers` and add a new watcher with the following settings:
1. **Name**: `Ruff formatter`
2. **File type**: `Python`
3. **Working directory**: `$ContentRoot$`
4. **Arguments**: `format $FilePath$`
5. **Program**: `$PyInterpreterDirectory$/ruff`
## 🤝 Contributing
We welcome contributions from the community! Whether you're fixing bugs, improving documentation, or adding new features, here's how you can help:
- **Found a bug?** Open an [issue](https://github.com/pipecat-ai/pipecat/issues)
- **Have a feature idea?** Start a [discussion](https://discord.gg/pipecat)
- **Want to contribute code?** Check our [CONTRIBUTING.md](CONTRIBUTING.md) guide
- **Documentation improvements?** [Docs](https://github.com/pipecat-ai/docs) PRs are always welcome
Before submitting a pull request, please check existing issues and PRs to avoid duplicates.
We aim to review all contributions promptly and provide constructive feedback to help get your changes merged.
## 🛟 Getting help
➡️ [Join our Discord](https://discord.gg/pipecat)
➡️ [Read the docs](https://docs.pipecat.ai)
➡️ [Reach us on X](https://x.com/pipecat_ai)