achatbot


Nameachatbot JSON
Version 0.0.8.4 PyPI version JSON
download
home_pageNone
SummaryAn open source chat bot for voice (and multimodal) assistants
upload_time2024-12-20 08:00:20
maintainerNone
docs_urlNone
authorNone
requires_pythonNone
licenseBSD 3-Clause License Copyright (c) 2024, weedge Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
keywords ai chat bot audio speech video image
VCS
bugtrack_url
requirements No requirements were recorded.
Travis-CI No Travis.
coveralls test coverage No coveralls.
            # achatbot
[![PyPI](https://img.shields.io/pypi/v/achatbot)](https://pypi.org/project/achatbot/)
<a href="https://app.commanddash.io/agent/github_ai-bot-pro_achatbot"><img src="https://img.shields.io/badge/AI-Code%20Agent-EB9FDA"></a>

achatbot factory, create chat bots with llm(tools), asr, tts, vad, ocr, detect object etc..

# Project Structure
![project-structure](https://github.com/user-attachments/assets/5bf7cebb-e590-4718-a78a-6b0c0b36ea28)

# Feature
- demo
  
  - [podcast](https://github.com/ai-bot-pro/achatbot/blob/main/demo/content_parser_tts.py)
  
    ```shell
    # need GOOGLE_API_KEY in environment variables
    # default use language English
    
    # websit
    python -m demo.content_parser_tts instruct-content-tts \
        "https://en.wikipedia.org/wiki/Large_language_model"
    
    python -m demo.content_parser_tts instruct-content-tts \
        --role-tts-voices zh-CN-YunjianNeural \
        --role-tts-voices zh-CN-XiaoxiaoNeural \
        --language zh \
        "https://en.wikipedia.org/wiki/Large_language_model"
    
    # pdf
    # https://www.apple.com/ios/ios-18/pdf/iOS_18_All_New_Features_Sept_2024.pdf
    python -m demo.content_parser_tts instruct-content-tts \
        "/Users/wuyong/Desktop/iOS_18_All_New_Features_Sept_2024.pdf"
    
    python -m demo.content_parser_tts instruct-content-tts \
        --role-tts-voices zh-CN-YunjianNeural \
        --role-tts-voices zh-CN-XiaoxiaoNeural \
        --language zh \
        "/Users/wuyong/Desktop/iOS_18_All_New_Features_Sept_2024.pdf"
    ```
  
- cmd chat bots:

  - [local-terminal-chat](https://github.com/ai-bot-pro/achatbot/tree/main/src/cmd/local-terminal-chat)(be/fe)
  - [remote-queue-chat](https://github.com/ai-bot-pro/achatbot/tree/main/src/cmd/remote-queue-chat)(be/fe)
  - [grpc-terminal-chat](https://github.com/ai-bot-pro/achatbot/tree/main/src/cmd/grpc/terminal-chat)(be/fe)
  - [grpc-speaker](https://github.com/ai-bot-pro/achatbot/tree/main/src/cmd/grpc/speaker)
  - [http fastapi_daily_bot_serve](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/http/server/fastapi_daily_bot_serve.py) (with chat bots pipeline)
  - [**bots with config**](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/main.py)  see notebooks:
    - [Run chat bots with colab notebook](https://github.com/ai-bot-pro/achatbot?tab=readme-ov-file#run-chat-bots-with-colab-notebook)  🏃

- support transport connector: 
  - [x] pipe(UNIX socket), 
  - [x] grpc, 
  - [x] queue (redis),
  - [ ] websocket
  - [ ] TCP/IP socket

- chat bot processors: 
  - aggreators(llm use, assistant message), 
  - ai_frameworks
    - [x] [langchain](https://www.langchain.com/): RAG
    - [ ] [llamaindex](https://www.llamaindex.ai/): RAG
    - [ ] [autoagen](https://github.com/microsoft/autogen): multi Agents
  - realtime voice inference(RTVI),
  - transport: 
    - webRTC: (daily,livekit KISS)
      - [x] **[daily](https://github.com/ai-bot-pro/achatbot/blob/main/src/transports/daily.py)**: audio, video(image)
      - [x] **[livekit](https://github.com/ai-bot-pro/achatbot/blob/main/src/transports/livekit.py)**: audio, video(image)
      - [x] **[agora](https://github.com/ai-bot-pro/achatbot/blob/main/src/transports/agora.py)**: audio, video(image)
    - [x] [Websocket server](https://github.com/ai-bot-pro/achatbot/blob/main/src/transports/websocket_server.py)
  - ai processor: llm, tts, asr etc..
    - llm_processor:
      - [x] [openai](https://github.com/ai-bot-pro/achatbot/blob/main/test/integration/processors/test_openai_llm_processor.py)(use openai sdk)
      - [x] [google gemini](https://github.com/ai-bot-pro/achatbot/blob/main/test/integration/processors/test_google_llm_processor.py)(use google-generativeai sdk)
      - [x] [litellm](https://github.com/ai-bot-pro/achatbot/blob/main/test/integration/processors/test_litellm_processor.py)(use openai input/output format proxy sdk) 

- core module:
  - local llm: 
    - [x] llama-cpp (support text,vision with function-call model)
    - [x] transformers(manual, pipeline) (support text,vision:🦙,Qwen2-vl,Molmo with function-call model)
    - [ ] mlx_lm 
  - remote api llm: personal-ai(like openai api, other ai provider)

- AI modules:
  - functions:
    - [x] search: search,search1,serper
    - [x] weather: openweathermap
  - speech:
    - [x] asr: sense_voice_asr, whisper_asr, whisper_timestamped_asr, whisper_faster_asr, whisper_transformers_asr, whisper_mlx_asr, lightning_whisper_mlx_asr(!TODO), whisper_groq_asr
    - [x] audio_stream: daily_room_audio_stream(in/out), pyaudio_stream(in/out)
    - [x] detector: porcupine_wakeword,pyannote_vad,webrtc_vad,silero_vad,webrtc_silero_vad
    - [x] player: stream_player
    - [x] recorder: rms_recorder, wakeword_rms_recorder, vad_recorder, wakeword_vad_recorder
    - [x] tts: tts_chat,tts_coqui,tts_cosy_voice,tts_edge,tts_g
    - [x] vad_analyzer: daily_webrtc_vad_analyzer,silero_vad_analyzer
  - vision
    - [x] OCR(*Optical Character Recognition*):
      - [ ] [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)
      - [x]  [GOT](https://github.com/Ucas-HaoranWei/GOT-OCR2.0)(*the General OCR Theory*)
    - [x] Detector:
      - [x] [YOLO](https://docs.ultralytics.com/) (*You Only Look Once*)
      - [ ] [RT-DETR v2](https://github.com/lyuwenyu/RT-DETR) (*RealTime End-to-End Object Detection with Transformers*)

- gen modules config(*.yaml, local/test/prod) from env with file: `.env`
   u also use HfArgumentParser this module's args to local cmd parse args

- deploy to cloud ☁️ serverless: 
  - vercel (frontend ui pages)
  - Cloudflare(frontend ui pages), personal ai workers 
  - [fastapi-daily-chat-bot](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/cerebrium/fastapi-daily-chat-bot) on cerebrium (provider aws)
  - [fastapi-daily-chat-bot](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/leptonai/fastapi-daily-chat-bot) on leptonai
  - aws lambda + api Gateway
  - docker -> k8s/k3s
  - etc...

# Service Deployment Architecture

## UI (easy to deploy with github like pages)
- [x] [ui/web-client-ui](https://github.com/ai-bot-pro/web-client-ui)
deploy it to cloudflare page with vite, access https://chat-client-weedge.pages.dev/
- [x] [ui/educator-client](https://github.com/ai-bot-pro/educator-client)
deploy it to cloudflare page with vite, access https://educator-client.pages.dev/
- [x] [chat-bot-rtvi-web-sandbox](https://github.com/ai-bot-pro/chat-bot-rtvi-client/tree/main/chat-bot-rtvi-web-sandbox)
use this web sandbox to test config, actions with [DailyRTVIGeneralBot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/rtvi/daily_rtvi_general_bot.py)
- [x] [vite-react-rtvi-web-voice](https://github.com/ai-bot-pro/vite-react-rtvi-web-voice) rtvi web voice chat bots, diff cctv roles etc, u can diy your own role by change the system prompt with [DailyRTVIGeneralBot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/rtvi/daily_rtvi_general_bot.py)
deploy it to cloudflare page with vite, access https://role-chat.pages.dev/
- [x] [vite-react-web-vision](https://github.com/ai-bot-pro/vite-react-web-vision) 
deploy it to cloudflare page with vite, access https://vision-weedge.pages.dev/
- [x] [nextjs-react-web-storytelling](https://github.com/ai-bot-pro/nextjs-react-web-storytelling) 
deploy it to cloudflare page worker with nextjs, access https://storytelling.pages.dev/ 
- [x] [websocket-demo](https://github.com/ai-bot-pro/achatbot/blob/main/ui/websocket/simple-demo): websocket audio chat bot demo


## Server Deploy (CD)
- [x] [deploy/modal](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/modal)(KISS) 👍🏻 
- [x] [deploy/leptonai](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/leptonai)(KISS)👍🏻
- [x] [deploy/cerebrium/fastapi-daily-chat-bot](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/cerebrium/fastapi-daily-chat-bot) :)
- [x] [deploy/aws/fastapi-daily-chat-bot](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/aws/fastapi-daily-chat-bot) :|
- [x] [deploy/docker/fastapi-daily-chat-bot](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/docker) 🏃


# Install
> [!NOTE]
> `python --version` >=3.10 with [asyncio-task](https://docs.python.org/3.10/library/asyncio-task.html)

> [!TIP]
> use [uv](https://github.com/astral-sh/uv) + pip to run, install the required dependencies fastly, e.g.:
> `uv pip install achatbot`
> `uv pip install "achatbot[fastapi_bot_server]"`

## pypi
```bash
python3 -m venv .venv_achatbot
source .venv_achatbot/bin/activate
pip install achatbot
# optional-dependencies e.g.
pip install "achatbot[fastapi_bot_server]"
```

## local
```bash
git clone --recursive https://github.com/ai-bot-pro/chat-bot.git
cd chat-bot
python3 -m venv .venv_achatbot
source .venv_achatbot/bin/activate
bash scripts/pypi_achatbot.sh dev
# optional-dependencies e.g.
pip install "dist/achatbot-{$version}-py3-none-any.whl[fastapi_bot_server]"
```

#  Run chat bots
## Run chat bots with colab notebook

|                           Chat Bot                           | optional-dependencies                                        | Colab                                                        | Device                                                       | Pipeline Desc                                                |
| :----------------------------------------------------------: | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| [daily_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/daily_bot.py)<br />[livekit_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/livekit_bot.py)<br />[agora_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/agora_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \| livekit_room_audio_stream,<br />sense_voice_asr,<br />groq \| together api llm(text), <br />tts_edge | <a href="https://github.com/weedge/doraemon-nb/blob/main/webrtc_audio_bot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | CPU (free, 2 cores)                                          | e.g.:<br />daily \| livekit room in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />-> groq \| together  (llm) <br />-> edge (tts)<br />-> daily \| livekit room out stream |
| [generate_audio2audio](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/remote-queue-chat/generate_audio2audio.py) | remote_queue_chat_bot_be_worker                              | <a href="https://github.com/weedge/doraemon-nb/blob/main/chat_bot_gpu_worker.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | T4(free)                                                     | e.g.:<br />pyaudio in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />-> qwen (llm) <br />-> cosy_voice (tts)<br />-> pyaudio out stream |
| [daily_describe_vision_tools_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_describe_vision_tools_bot.py)<br />[livekit_describe_vision_tools_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_describe_vision_tools_bot.py)<br />[agora_describe_vision_tools_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/agora_describe_vision_tools_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \|livekit_room_audio_stream<br />deepgram_asr,<br />goole_gemini,<br />tts_edge | <a href="https://github.com/weedge/doraemon-nb/blob/main/achatbot_describe_vision_tools_bot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | CPU(free, 2 cores)                                           | e.g.:<br />daily \|livekit room in stream<br />-> silero (vad)<br />-> deepgram (asr) <br />-> google gemini  <br />-> edge (tts)<br />-> daily \|livekit room out stream |
| [daily_describe_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_describe_vision_bot.py)<br />[livekit_describe_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_describe_vision_bot.py)<br />[agora_describe_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/agora_describe_vision_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \| livekit_room_audio_stream<br />sense_voice_asr,<br />llm_transformers_manual_vision_qwen,<br />tts_edge | <a href="https://github.com/weedge/doraemon-nb/blob/main/achatbot_vision_qwen_vl.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | - Qwen2-VL-2B-Instruct<br /> T4(free)<br />- Qwen2-VL-7B-Instruct<br />L4<br />- Llama-3.2-11B-Vision-Instruct<br />L4<br />- allenai/Molmo-7B-D-0924<br />A100 | e.g.:<br />daily \| livekit room in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />-> qwen-vl (llm) <br />-> edge (tts)<br />-> daily \| livekit room out stream |
| [daily_chat_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_chat_vision_bot.py)<br />[livekit_chat_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_chat_vision_bot.py)<br />[agora_chat_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/agora_chat_vision_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \|livekit_room_audio_stream<br />sense_voice_asr,<br />llm_transformers_manual_vision_qwen,<br />tts_edge | <a href="https://github.com/weedge/doraemon-nb/blob/main/achatbot_daily_chat_vision_bot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | - Qwen2-VL-2B-Instruct<br /> T4(free)<br />- Qwen2-VL-7B-Instruct<br />L4<br />- Ll<br/>ama-3.2-11B-Vision-Instruct<br />L4<br />- allenai/Molmo-7B-D-0924<br />A100 | e.g.:<br />daily \| livekit room in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />-> llm answer guide qwen-vl (llm) <br />-> edge (tts)<br />-> daily \| livekit room out stream |
| [daily_chat_tools_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_chat_tools_vision_bot.py)<br />[livekit_chat_tools_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_chat_tools_vision_bot.py)<br />[agora_chat_tools_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/agora_chat_tools_vision_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \| livekit_room_audio_stream<br />sense_voice_asr,<br />groq api llm(text), <br />tools:<br />- llm_transformers_manual_vision_qwen,<br />tts_edge | <a href="https://github.com/weedge/doraemon-nb/blob/main/achatbot_daily_chat_tools_vision_bot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | - Qwen2-VL-2B-Instruct<br<br/> /> T4(free)<br />- Qwen2-VL-7B-Instruct<br />L4<br />- Llama-3.2-11B-Vision-Instruct<br />L4 <br />- allenai/Molmo-7B-D-0924<br />A100 | e.g.:<br />daily \| livekit room in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />->llm with tools qwen-vl  <br />-> edge (tts)<br />-> daily \| livekit room out stream |
| [daily_annotate_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_annotate_vision_bot.py)<br />[livekit_annotate_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_annotate_vision_bot.py)<br />[agora_annotate_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/agora_annotate_vision_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \| livekit_room_audio_stream<br />vision_yolo_detector<br />tts_edge | <a href="https://github.com/weedge/doraemon-nb/blob/main/daily_annotate_vision_bot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | T4(free)                                                     | e.g.:<br />daily \| livekit room in stream<br />vision_yolo_detector<br />-> edge (tts)<br />-> daily \| livekit room out stream |
| [daily_detect_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_detect_vision_bot.py)<br />[livekit_detect_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_detect_vision_bot.py)<br />[agora_detect_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/agora_detect_vision_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \| livekit_room_audio_stream<br />vision_yolo_detector<br />tts_edge | <a href="https://github.com/weedge/doraemon-nb/blob/main/daily_detect_vision_bot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | T4(free)                                                     | e.g.:<br />daily \| livekit room in stream<br />vision_yolo_detector<br />-> edge (tts)<br />-> daily \| livekit room out stream |
| [daily_ocr_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_ocr_vision_bot.py)<br />[livekit_ocr_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_ocr_vision_bot.py)<br/>[agora_ocr_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/agora_ocr_vision_bot.py)<br/> | e.g.:<br />daily_room_audio_stream \| livekit_room_audio_stream<br />sense_voice_asr,<br />vision_transformers_got_ocr<br />tts_edge | <a href="https://github.com/weedge/doraemon-nb/blob/main/daily_ocr_vision_bot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | T4(free)                                                     | e.g.:<br />daily \| livekit room in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />vision_transformers_got_ocr<br />-> edge (tts)<br />-> daily \| livekit room out stream |
| [daily_month_narration_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/image/daily_month_narration_bot.py) | e.g.:<br />daily_room_audio_stream <br />groq \|together api llm(text),<br />hf_sd, together api (image)<br />tts_edge | <a href="https://github.com/weedge/doraemon-nb/blob/main/achatbot_daily_month_narration_bot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | when use sd model with diffusers<br />T4(free) cpu+cuda (slow)<br />L4 cpu+cuda<br/>A100 all cuda<br /> | e.g.:<br />daily room in stream<br />-> together  (llm) <br />-> hf sd gen image model<br />-> edge (tts)<br />-> daily  room out stream |
| [daily_storytelling_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/image/storytelling/daily_bot.py) | e.g.:<br />daily_room_audio_stream <br />groq \|together api llm(text),<br />hf_sd, together api (image)<br />tts_edge | <a href="https://github.com/weedge/doraemon-nb/blob/main/achatbot_daily_storytelling_bot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | cpu (2 cores)<br />when use sd model with diffusers<br />T4(free) cpu+cuda (slow)<br />L4 cpu+cuda<br/>A100 all cuda<br /> | e.g.:<br />daily room in stream<br />-> together  (llm) <br />-> hf sd gen image model<br />-> edge (tts)<br />-> daily  room out stream |
| [websocket_server_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/websocket_server_bot.py)<br />[fastapi_websocket_server_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/fastapi_websocket_server_bot.py) | e.g.:<br /> websocket_server<br />sense_voice_asr,<br />groq \|together api llm(text),<br />tts_edge | <a href="https://github.com/weedge/doraemon-nb/blob/main/achatbot_websocket_server_bot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | cpu(2 cores)                                                 | e.g.:<br />websocket protocol  in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />-> together  (llm) <br />-> edge (tts)<br />-> websocket protocol out stream |
| [daily_natural_conversation_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/nlp/daily_natural_conversation_bot.py) | e.g.:<br /> daily_room_audio_stream<br />sense_voice_asr,<br />groq \|together api llm(NLP task),<br />gemini-1.5-flash (chat)<br />tts_edge | <a href="https://github.com/weedge/doraemon-nb/blob/main/achat_natural_conversation_bot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | cpu(2 cores)                                                 | e.g.:<br />daily room in stream<br />-> together  (llm NLP task) <br />->  gemini-1.5-flash model (chat)<br />-> edge (tts)<br />-> daily  room out stream |
| [fastapi_websocket_moshi_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/voice/fastapi_websocket_moshi_bot.py) | e.g.:<br /> websocket_server<br />moshi opus stream voice llm<br /> | <a href="https://github.com/weedge/doraemon-nb/blob/main/achatbot_moshi_bot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | L4                                                           | websocket protocol  in stream<br />-> silero (vad)<br />-> moshi opus stream voice llm<br />-> websocket protocol out stream |
| [daily_asr_glm_voice_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/voice/daily_asr_glm_voice_bot.py)<br>[daily_glm_voice_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/voice/daily_glm_voice_bot.py) | e.g.:<br /> daily_room_audio_stream<br />glm voice llm<br /> | <a href="https://github.com/weedge/doraemon-nb/blob/main/achatbot_glm_voice_bot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> | T4/L4/A100                                                   | e.g.:<br />daily room in stream<br />->glm4-voice<br />-> daily  room out stream |
|                                                              |                                                              |                                                              |                                                              |                                                              |
|                                                              |                                                              |                                                              |                                                              |                                                              |



## Run local chat bots

> [!NOTE]
>
> - run src code, replace achatbot to src, don't need set `ACHATBOT_PKG=1` e.g.:
>   ```
>   TQDM_DISABLE=True \
>        python -m src.cmd.local-terminal-chat.generate_audio2audio > log/std_out.log
>   ```
> - PyAudio need install python3-pyaudio 
> e.g. ubuntu `apt-get install python3-pyaudio`, macos `brew install portaudio`
> see: https://pypi.org/project/PyAudio/
>
> - llm llama-cpp-python init use cpu Pre-built Wheel to install, 
> if want to use other lib(cuda), see: https://github.com/abetlen/llama-cpp-python#installation-configuration
>
> - install `pydub`  need install `ffmpeg` see: https://www.ffmpeg.org/download.html

1. run `pip install "achatbot[local_terminal_chat_bot]"` to install dependencies to run local terminal chat bot;
2. create achatbot data dir in `$HOME` dir `mkdir -p ~/.achatbot/{log,config,models,records,videos}`;
3. `cp .env.example .env`, and check `.env`, add key/value env params;
4. select a model ckpt to download:
    - vad model ckpt (default vad ckpt model use [silero vad](https://github.com/snakers4/silero-vad))
    ```
    # vad pyannote segmentation ckpt
    huggingface-cli download pyannote/segmentation-3.0  --local-dir ~/.achatbot/models/pyannote/segmentation-3.0 --local-dir-use-symlinks False
    ```
    - asr model ckpt (default whipser ckpt model use base size)
    ```
    # asr openai whisper ckpt
    wget https://openaipublic.azureedge.net/main/whisper/models/ed3a0b6b1c0edf879ad9b11b1af5a0e6ab5db9205f891f668f8b0e6c6326e34e/base.pt -O ~/.achatbot/models/base.pt
    
    # asr hf openai whisper ckpt for transformers pipeline to load
    huggingface-cli download openai/whisper-base  --local-dir ~/.achatbot/models/openai/whisper-base --local-dir-use-symlinks False
    
    # asr hf faster whisper (CTranslate2)
    huggingface-cli download Systran/faster-whisper-base  --local-dir ~/.achatbot/models/Systran/faster-whisper-base --local-dir-use-symlinks False
    
    # asr SenseVoice ckpt
    huggingface-cli download FunAudioLLM/SenseVoiceSmall  --local-dir ~/.achatbot/models/FunAudioLLM/SenseVoiceSmall --local-dir-use-symlinks False
    ```
    - llm model ckpt (default llamacpp ckpt(ggml) model use qwen-2 instruct 1.5B size)
    ```
    # llm llamacpp Qwen2-Instruct
    huggingface-cli download Qwen/Qwen2-1.5B-Instruct-GGUF qwen2-1_5b-instruct-q8_0.gguf  --local-dir ~/.achatbot/models --local-dir-use-symlinks False
    
    # llm llamacpp Qwen1.5-chat
    huggingface-cli download Qwen/Qwen1.5-7B-Chat-GGUF qwen1_5-7b-chat-q8_0.gguf  --local-dir ~/.achatbot/models --local-dir-use-symlinks False
    
    # llm llamacpp phi-3-mini-4k-instruct
    huggingface-cli download microsoft/Phi-3-mini-4k-instruct-gguf Phi-3-mini-4k-instruct-q4.gguf --local-dir ~/.achatbot/models --local-dir-use-symlinks False
    
    ```
    - tts model ckpt (default whipser ckpt model use base size)
    ```
    # tts chatTTS
    huggingface-cli download 2Noise/ChatTTS  --local-dir ~/.achatbot/models/2Noise/ChatTTS --local-dir-use-symlinks False
    
    # tts coquiTTS
    huggingface-cli download coqui/XTTS-v2  --local-dir ~/.achatbot/models/coqui/XTTS-v2 --local-dir-use-symlinks False
    
    # tts cosy voice
    git lfs install
    git clone https://www.modelscope.cn/iic/CosyVoice-300M.git ~/.achatbot/models/CosyVoice-300M
    git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git ~/.achatbot/models/CosyVoice-300M-SFT
    git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git ~/.achatbot/models/CosyVoice-300M-Instruct
    #git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git ~/.achatbot/models/CosyVoice-ttsfrd
    
    ```

5. run local terminal chat bot with env; e.g. 
    - use dufault env params to run local chat bot
    ```
    ACHATBOT_PKG=1 TQDM_DISABLE=True \
        python -m achatbot.cmd.local-terminal-chat.generate_audio2audio > ~/.achatbot/log/std_out.log
    ```

## Run remote http fastapi daily chat bots
1. run `pip install "achatbot[fastapi_daily_bot_server]"` to install dependencies to run http fastapi daily chat bot; 

2. run below cmd to start http server, see api docs: http://0.0.0.0:4321/docs
    ```
    ACHATBOT_PKG=1 python -m achatbot.cmd.http.server.fastapi_daily_bot_serve
    ```
3. run chat bot processor, e.g. 
   - run a daily langchain rag bot api, with ui/educator-client
    > [!NOTE]
    > need process youtube audio save to local file with `pytube`, run `pip install "achatbot[pytube,deep_translator]"` to install dependencies
    > and transcribe/translate to text, then chunks to vector store, and run langchain rag bot api;
    > run data process: 
    > ```
    > ACHATBOT_PKG=1 python -m achatbot.cmd.bots.rag.data_process.youtube_audio_transcribe_to_tidb
    > ```
    > or download processed data from hf dataset [weege007/youtube_videos](https://huggingface.co/datasets/weege007/youtube_videos/tree/main/videos), then chunks to vector store .
   ```
   curl -XPOST "http://0.0.0.0:4321/bot_join/chat-bot/DailyLangchainRAGBot" \
    -H "Content-Type: application/json" \
    -d $'{"config":{"llm":{"model":"llama-3.1-70b-versatile","messages":[{"role":"system","content":""}],"language":"zh"},"tts":{"tag":"cartesia_tts_processor","args":{"voice_id":"eda5bbff-1ff1-4886-8ef1-4e69a77640a0","language":"zh"}},"asr":{"tag":"deepgram_asr_processor","args":{"language":"zh","model":"nova-2"}}}}' | jq .
   ```
   - run a simple daily chat bot api, with ui/web-client-ui (default language: zh)
   ```
   curl -XPOST "http://0.0.0.0:4321/bot_join/DailyBot" \
    -H "Content-Type: application/json" \
    -d '{}' | jq .
   ```

## Run remote rpc chat bot worker
1. run `pip install "achatbot[remote_rpc_chat_bot_be_worker]"` to install dependencies to run rpc chat bot BE worker; e.g. :
   - use dufault env params to run rpc chat bot BE worker
```
ACHATBOT_PKG=1 RUN_OP=be TQDM_DISABLE=True \
    TTS_TAG=tts_edge \
    python -m achatbot.cmd.grpc.terminal-chat.generate_audio2audio > ~/.achatbot/log/be_std_out.log
```
2. run `pip install "achatbot[remote_rpc_chat_bot_fe]"` to install dependencies to run rpc chat bot FE; 
```
ACHATBOT_PKG=1 RUN_OP=fe \
    TTS_TAG=tts_edge \
    python -m achatbot.cmd.grpc.terminal-chat.generate_audio2audio > ~/.achatbot/log/fe_std_out.log
```

## Run remote queue chat bot worker
1. run `pip install "achatbot[remote_queue_chat_bot_be_worker]"` to install dependencies to run queue chat bot worker; e.g.:
   - use default env params to run 
    ```
    ACHATBOT_PKG=1 REDIS_PASSWORD=$redis_pwd RUN_OP=be TQDM_DISABLE=True \
        python -m achatbot.cmd.remote-queue-chat.generate_audio2audio > ~/.achatbot/log/be_std_out.log
    ```
   - sense_voice(asr) -> qwen (llm) -> cosy_voice (tts)
   u can login [redislabs](https://app.redislabs.com/#/) create 30M free databases; set `REDIS_HOST`,`REDIS_PORT` and `REDIS_PASSWORD` to run, e.g.:
   ```
    ACHATBOT_PKG=1 RUN_OP=be \
      TQDM_DISABLE=True \
      REDIS_PASSWORD=$redis_pwd \
      REDIS_HOST=redis-14241.c256.us-east-1-2.ec2.redns.redis-cloud.com \
      REDIS_PORT=14241 \
      ASR_TAG=sense_voice_asr \
      ASR_LANG=zn \
      ASR_MODEL_NAME_OR_PATH=~/.achatbot/models/FunAudioLLM/SenseVoiceSmall \
      N_GPU_LAYERS=33 FLASH_ATTN=1 \
      LLM_MODEL_NAME=qwen \
      LLM_MODEL_PATH=~/.achatbot/models/qwen1_5-7b-chat-q8_0.gguf \
      TTS_TAG=tts_cosy_voice \
      python -m achatbot.cmd.remote-queue-chat.generate_audio2audio > ~/.achatbot/log/be_std_out.log
   ```
2. run `pip install "achatbot[remote_queue_chat_bot_fe]"` to install the required packages to run quueue chat bot frontend; e.g.:
   - use default env params to run (default vad_recorder)
    ```
    ACHATBOT_PKG=1 RUN_OP=fe \
        REDIS_PASSWORD=$redis_pwd \
        REDIS_HOST=redis-14241.c256.us-east-1-2.ec2.redns.redis-cloud.com \
        REDIS_PORT=14241 \
        python -m achatbot.cmd.remote-queue-chat.generate_audio2audio > ~/.achatbot/log/fe_std_out.log
    ```
   - with wake word
    ```
    ACHATBOT_PKG=1 RUN_OP=fe \
        REDIS_PASSWORD=$redis_pwd \
        REDIS_HOST=redis-14241.c256.us-east-1-2.ec2.redns.redis-cloud.com \
        REDIS_PORT=14241 \
        RECORDER_TAG=wakeword_rms_recorder \
        python -m achatbot.cmd.remote-queue-chat.generate_audio2audio > ~/.achatbot/log/fe_std_out.log
    ```
   - default pyaudio player stream with tts tag out sample info(rate,channels..), e.g.: (be use tts_cosy_voice out stream info)
   ```
    ACHATBOT_PKG=1 RUN_OP=fe \
        REDIS_PASSWORD=$redis_pwd \
        REDIS_HOST=redis-14241.c256.us-east-1-2.ec2.redns.redis-cloud.com \
        REDIS_PORT=14241 \
        RUN_OP=fe \
        TTS_TAG=tts_cosy_voice \
        python -m achatbot.cmd.remote-queue-chat.generate_audio2audio > ~/.achatbot/log/fe_std_out.log
   ```
   remote_queue_chat_bot_be_worker in colab examples :
   <a href="https://colab.research.google.com/github/weedge/doraemon-nb/blob/main/chat_bot_gpu_worker.ipynb" target="_parent">
   <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
   
   - sense_voice(asr) -> qwen (llm) -> cosy_voice (tts): 

## Run remote grpc tts speaker bot
1. run `pip install "achatbot[remote_grpc_tts_server]"` to install dependencies to run grpc tts speaker bot server; 
```
ACHATBOT_PKG=1 python -m achatbot.cmd.grpc.speaker.server.serve
```
2. run `pip install "achatbot[remote_grpc_tts_client]"` to install dependencies to run grpc tts speaker bot client; 
```
ACHATBOT_PKG=1 TTS_TAG=tts_edge IS_RELOAD=1 python -m achatbot.cmd.grpc.speaker.client
ACHATBOT_PKG=1 TTS_TAG=tts_g IS_RELOAD=1 python -m achatbot.cmd.grpc.speaker.client
ACHATBOT_PKG=1 TTS_TAG=tts_coqui IS_RELOAD=1 python -m achatbot.cmd.grpc.speaker.client
ACHATBOT_PKG=1 TTS_TAG=tts_chat IS_RELOAD=1 python -m achatbot.cmd.grpc.speaker.client
ACHATBOT_PKG=1 TTS_TAG=tts_cosy_voice IS_RELOAD=1 python -m achatbot.cmd.grpc.speaker.client
```


# Multimodal Interaction
## audio (voice)
- stream-stt (realtime-recorder)
![audio-text](https://github.com/user-attachments/assets/44bcec7d-f0a1-47db-bd95-21feee43a361)

- audio-llm (multimode-chat)
![pipe](https://github.com/user-attachments/assets/9970cf18-9bbc-4109-a3c5-e3e3c88086af)
![queue](https://github.com/user-attachments/assets/30f2e880-f16d-4b62-8668-61bb97c57b2b)


- stream-tts (realtime-(clone)-speaker)
![text-audio](https://github.com/user-attachments/assets/676230a0-0a99-475b-9ef5-6afc95f044d8)
![audio-text text-audio](https://github.com/user-attachments/assets/cbcabf98-731e-4887-9f37-649ec81e37a0)


## vision (CV)
- stream-ocr (realtime-object-detection)

## more
- Embodied Intelligence: Robots that touch the world, perceive and move

# License

achatbot is released under the [BSD 3 license](LICENSE). (Additional code in this distribution is covered by the MIT and Apache Open Source
licenses.) However you may have other legal obligations that govern your use of content, such as the terms of service for third-party models.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "achatbot",
    "maintainer": null,
    "docs_url": null,
    "requires_python": null,
    "maintainer_email": "weedge <weege007@gmail.com>",
    "keywords": "ai, chat bot, audio, speech, video, image",
    "author": null,
    "author_email": "weedge <weege007@gmail.com>",
    "download_url": "https://files.pythonhosted.org/packages/f4/55/aec9e4bec1c2b4680e18890b232d25534ed15c18b27d7ad8750219976547/achatbot-0.0.8.4.tar.gz",
    "platform": null,
    "description": "# achatbot\n[![PyPI](https://img.shields.io/pypi/v/achatbot)](https://pypi.org/project/achatbot/)\n<a href=\"https://app.commanddash.io/agent/github_ai-bot-pro_achatbot\"><img src=\"https://img.shields.io/badge/AI-Code%20Agent-EB9FDA\"></a>\n\nachatbot factory, create chat bots with llm(tools), asr, tts, vad, ocr, detect object etc..\n\n# Project Structure\n![project-structure](https://github.com/user-attachments/assets/5bf7cebb-e590-4718-a78a-6b0c0b36ea28)\n\n# Feature\n- demo\n  \n  - [podcast](https://github.com/ai-bot-pro/achatbot/blob/main/demo/content_parser_tts.py)\n  \n    ```shell\n    # need GOOGLE_API_KEY in environment variables\n    # default use language English\n    \n    # websit\n    python -m demo.content_parser_tts instruct-content-tts \\\n        \"https://en.wikipedia.org/wiki/Large_language_model\"\n    \n    python -m demo.content_parser_tts instruct-content-tts \\\n        --role-tts-voices zh-CN-YunjianNeural \\\n        --role-tts-voices zh-CN-XiaoxiaoNeural \\\n        --language zh \\\n        \"https://en.wikipedia.org/wiki/Large_language_model\"\n    \n    # pdf\n    # https://www.apple.com/ios/ios-18/pdf/iOS_18_All_New_Features_Sept_2024.pdf\n    python -m demo.content_parser_tts instruct-content-tts \\\n        \"/Users/wuyong/Desktop/iOS_18_All_New_Features_Sept_2024.pdf\"\n    \n    python -m demo.content_parser_tts instruct-content-tts \\\n        --role-tts-voices zh-CN-YunjianNeural \\\n        --role-tts-voices zh-CN-XiaoxiaoNeural \\\n        --language zh \\\n        \"/Users/wuyong/Desktop/iOS_18_All_New_Features_Sept_2024.pdf\"\n    ```\n  \n- cmd chat bots:\n\n  - [local-terminal-chat](https://github.com/ai-bot-pro/achatbot/tree/main/src/cmd/local-terminal-chat)(be/fe)\n  - [remote-queue-chat](https://github.com/ai-bot-pro/achatbot/tree/main/src/cmd/remote-queue-chat)(be/fe)\n  - [grpc-terminal-chat](https://github.com/ai-bot-pro/achatbot/tree/main/src/cmd/grpc/terminal-chat)(be/fe)\n  - [grpc-speaker](https://github.com/ai-bot-pro/achatbot/tree/main/src/cmd/grpc/speaker)\n  - [http fastapi_daily_bot_serve](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/http/server/fastapi_daily_bot_serve.py) (with chat bots pipeline)\n  - [**bots with config**](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/main.py)  see notebooks:\n    - [Run chat bots with colab notebook](https://github.com/ai-bot-pro/achatbot?tab=readme-ov-file#run-chat-bots-with-colab-notebook)  \ud83c\udfc3\n\n- support transport connector: \n  - [x] pipe(UNIX socket), \n  - [x] grpc, \n  - [x] queue (redis),\n  - [ ] websocket\n  - [ ] TCP/IP socket\n\n- chat bot processors: \n  - aggreators(llm use, assistant message), \n  - ai_frameworks\n    - [x] [langchain](https://www.langchain.com/): RAG\n    - [ ] [llamaindex](https://www.llamaindex.ai/): RAG\n    - [ ] [autoagen](https://github.com/microsoft/autogen): multi Agents\n  - realtime voice inference(RTVI),\n  - transport: \n    - webRTC: (daily,livekit KISS)\n      - [x] **[daily](https://github.com/ai-bot-pro/achatbot/blob/main/src/transports/daily.py)**: audio, video(image)\n      - [x] **[livekit](https://github.com/ai-bot-pro/achatbot/blob/main/src/transports/livekit.py)**: audio, video(image)\n      - [x] **[agora](https://github.com/ai-bot-pro/achatbot/blob/main/src/transports/agora.py)**: audio, video(image)\n    - [x] [Websocket server](https://github.com/ai-bot-pro/achatbot/blob/main/src/transports/websocket_server.py)\n  - ai processor: llm, tts, asr etc..\n    - llm_processor:\n      - [x] [openai](https://github.com/ai-bot-pro/achatbot/blob/main/test/integration/processors/test_openai_llm_processor.py)(use openai sdk)\n      - [x] [google gemini](https://github.com/ai-bot-pro/achatbot/blob/main/test/integration/processors/test_google_llm_processor.py)(use google-generativeai sdk)\n      - [x] [litellm](https://github.com/ai-bot-pro/achatbot/blob/main/test/integration/processors/test_litellm_processor.py)(use openai input/output format proxy sdk) \n\n- core module:\n  - local llm: \n    - [x] llama-cpp (support text,vision with function-call model)\n    - [x] transformers(manual, pipeline) (support text,vision:\ud83e\udd99,Qwen2-vl,Molmo with function-call model)\n    - [ ] mlx_lm \n  - remote api llm: personal-ai(like openai api, other ai provider)\n\n- AI modules:\n  - functions:\n    - [x] search: search,search1,serper\n    - [x] weather: openweathermap\n  - speech:\n    - [x] asr: sense_voice_asr, whisper_asr, whisper_timestamped_asr, whisper_faster_asr, whisper_transformers_asr, whisper_mlx_asr, lightning_whisper_mlx_asr(!TODO), whisper_groq_asr\n    - [x] audio_stream: daily_room_audio_stream(in/out), pyaudio_stream(in/out)\n    - [x] detector: porcupine_wakeword,pyannote_vad,webrtc_vad,silero_vad,webrtc_silero_vad\n    - [x] player: stream_player\n    - [x] recorder: rms_recorder, wakeword_rms_recorder, vad_recorder, wakeword_vad_recorder\n    - [x] tts: tts_chat,tts_coqui,tts_cosy_voice,tts_edge,tts_g\n    - [x] vad_analyzer: daily_webrtc_vad_analyzer,silero_vad_analyzer\n  - vision\n    - [x] OCR(*Optical Character Recognition*):\n      - [ ] [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)\n      - [x]  [GOT](https://github.com/Ucas-HaoranWei/GOT-OCR2.0)(*the General OCR Theory*)\n    - [x] Detector:\n      - [x] [YOLO](https://docs.ultralytics.com/) (*You Only Look Once*)\n      - [ ] [RT-DETR v2](https://github.com/lyuwenyu/RT-DETR) (*RealTime End-to-End Object Detection with Transformers*)\n\n- gen modules config(*.yaml, local/test/prod) from env with file: `.env`\n   u also use HfArgumentParser this module's args to local cmd parse args\n\n- deploy to cloud \u2601\ufe0f serverless: \n  - vercel (frontend ui pages)\n  - Cloudflare(frontend ui pages), personal ai workers \n  - [fastapi-daily-chat-bot](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/cerebrium/fastapi-daily-chat-bot) on cerebrium (provider aws)\n  - [fastapi-daily-chat-bot](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/leptonai/fastapi-daily-chat-bot) on leptonai\n  - aws lambda + api Gateway\n  - docker -> k8s/k3s\n  - etc...\n\n# Service Deployment Architecture\n\n## UI (easy to deploy with github like pages)\n- [x] [ui/web-client-ui](https://github.com/ai-bot-pro/web-client-ui)\ndeploy it to cloudflare page with vite, access https://chat-client-weedge.pages.dev/\n- [x] [ui/educator-client](https://github.com/ai-bot-pro/educator-client)\ndeploy it to cloudflare page with vite, access https://educator-client.pages.dev/\n- [x] [chat-bot-rtvi-web-sandbox](https://github.com/ai-bot-pro/chat-bot-rtvi-client/tree/main/chat-bot-rtvi-web-sandbox)\nuse this web sandbox to test config, actions with [DailyRTVIGeneralBot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/rtvi/daily_rtvi_general_bot.py)\n- [x] [vite-react-rtvi-web-voice](https://github.com/ai-bot-pro/vite-react-rtvi-web-voice) rtvi web voice chat bots, diff cctv roles etc, u can diy your own role by change the system prompt with [DailyRTVIGeneralBot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/rtvi/daily_rtvi_general_bot.py)\ndeploy it to cloudflare page with vite, access https://role-chat.pages.dev/\n- [x] [vite-react-web-vision](https://github.com/ai-bot-pro/vite-react-web-vision) \ndeploy it to cloudflare page with vite, access https://vision-weedge.pages.dev/\n- [x] [nextjs-react-web-storytelling](https://github.com/ai-bot-pro/nextjs-react-web-storytelling) \ndeploy it to cloudflare page worker with nextjs, access https://storytelling.pages.dev/ \n- [x] [websocket-demo](https://github.com/ai-bot-pro/achatbot/blob/main/ui/websocket/simple-demo): websocket audio chat bot demo\n\n\n## Server Deploy (CD)\n- [x] [deploy/modal](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/modal)(KISS) \ud83d\udc4d\ud83c\udffb \n- [x] [deploy/leptonai](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/leptonai)(KISS)\ud83d\udc4d\ud83c\udffb\n- [x] [deploy/cerebrium/fastapi-daily-chat-bot](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/cerebrium/fastapi-daily-chat-bot) :)\n- [x] [deploy/aws/fastapi-daily-chat-bot](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/aws/fastapi-daily-chat-bot) :|\n- [x] [deploy/docker/fastapi-daily-chat-bot](https://github.com/ai-bot-pro/achatbot/tree/main/deploy/docker) \ud83c\udfc3\n\n\n# Install\n> [!NOTE]\n> `python --version` >=3.10 with [asyncio-task](https://docs.python.org/3.10/library/asyncio-task.html)\n\n> [!TIP]\n> use [uv](https://github.com/astral-sh/uv) + pip to run, install the required dependencies fastly, e.g.:\n> `uv pip install achatbot`\n> `uv pip install \"achatbot[fastapi_bot_server]\"`\n\n## pypi\n```bash\npython3 -m venv .venv_achatbot\nsource .venv_achatbot/bin/activate\npip install achatbot\n# optional-dependencies e.g.\npip install \"achatbot[fastapi_bot_server]\"\n```\n\n## local\n```bash\ngit clone --recursive https://github.com/ai-bot-pro/chat-bot.git\ncd chat-bot\npython3 -m venv .venv_achatbot\nsource .venv_achatbot/bin/activate\nbash scripts/pypi_achatbot.sh dev\n# optional-dependencies e.g.\npip install \"dist/achatbot-{$version}-py3-none-any.whl[fastapi_bot_server]\"\n```\n\n#  Run chat bots\n## Run chat bots with colab notebook\n\n|                           Chat Bot                           | optional-dependencies                                        | Colab                                                        | Device                                                       | Pipeline Desc                                                |\n| :----------------------------------------------------------: | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |\n| [daily_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/daily_bot.py)<br />[livekit_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/livekit_bot.py)<br />[agora_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/agora_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \\| livekit_room_audio_stream,<br />sense_voice_asr,<br />groq \\| together api llm(text), <br />tts_edge | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/webrtc_audio_bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | CPU (free, 2 cores)                                          | e.g.:<br />daily \\| livekit room in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />-> groq \\| together  (llm) <br />-> edge (tts)<br />-> daily \\| livekit room out stream |\n| [generate_audio2audio](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/remote-queue-chat/generate_audio2audio.py) | remote_queue_chat_bot_be_worker                              | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/chat_bot_gpu_worker.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | T4(free)                                                     | e.g.:<br />pyaudio in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />-> qwen (llm) <br />-> cosy_voice (tts)<br />-> pyaudio out stream |\n| [daily_describe_vision_tools_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_describe_vision_tools_bot.py)<br />[livekit_describe_vision_tools_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_describe_vision_tools_bot.py)<br />[agora_describe_vision_tools_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/agora_describe_vision_tools_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \\|livekit_room_audio_stream<br />deepgram_asr,<br />goole_gemini,<br />tts_edge | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/achatbot_describe_vision_tools_bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | CPU(free, 2 cores)                                           | e.g.:<br />daily \\|livekit room in stream<br />-> silero (vad)<br />-> deepgram (asr) <br />-> google gemini  <br />-> edge (tts)<br />-> daily \\|livekit room out stream |\n| [daily_describe_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_describe_vision_bot.py)<br />[livekit_describe_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_describe_vision_bot.py)<br />[agora_describe_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/agora_describe_vision_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \\| livekit_room_audio_stream<br />sense_voice_asr,<br />llm_transformers_manual_vision_qwen,<br />tts_edge | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/achatbot_vision_qwen_vl.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | - Qwen2-VL-2B-Instruct<br /> T4(free)<br />- Qwen2-VL-7B-Instruct<br />L4<br />- Llama-3.2-11B-Vision-Instruct<br />L4<br />- allenai/Molmo-7B-D-0924<br />A100 | e.g.:<br />daily \\| livekit room in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />-> qwen-vl (llm) <br />-> edge (tts)<br />-> daily \\| livekit room out stream |\n| [daily_chat_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_chat_vision_bot.py)<br />[livekit_chat_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_chat_vision_bot.py)<br />[agora_chat_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/agora_chat_vision_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \\|livekit_room_audio_stream<br />sense_voice_asr,<br />llm_transformers_manual_vision_qwen,<br />tts_edge | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/achatbot_daily_chat_vision_bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | - Qwen2-VL-2B-Instruct<br /> T4(free)<br />- Qwen2-VL-7B-Instruct<br />L4<br />- Ll<br/>ama-3.2-11B-Vision-Instruct<br />L4<br />- allenai/Molmo-7B-D-0924<br />A100 | e.g.:<br />daily \\| livekit room in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />-> llm answer guide qwen-vl (llm) <br />-> edge (tts)<br />-> daily \\| livekit room out stream |\n| [daily_chat_tools_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_chat_tools_vision_bot.py)<br />[livekit_chat_tools_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_chat_tools_vision_bot.py)<br />[agora_chat_tools_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/agora_chat_tools_vision_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \\| livekit_room_audio_stream<br />sense_voice_asr,<br />groq api llm(text), <br />tools:<br />- llm_transformers_manual_vision_qwen,<br />tts_edge | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/achatbot_daily_chat_tools_vision_bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | - Qwen2-VL-2B-Instruct<br<br/> /> T4(free)<br />- Qwen2-VL-7B-Instruct<br />L4<br />- Llama-3.2-11B-Vision-Instruct<br />L4 <br />- allenai/Molmo-7B-D-0924<br />A100 | e.g.:<br />daily \\| livekit room in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />->llm with tools qwen-vl  <br />-> edge (tts)<br />-> daily \\| livekit room out stream |\n| [daily_annotate_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_annotate_vision_bot.py)<br />[livekit_annotate_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_annotate_vision_bot.py)<br />[agora_annotate_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/agora_annotate_vision_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \\| livekit_room_audio_stream<br />vision_yolo_detector<br />tts_edge | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/daily_annotate_vision_bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | T4(free)                                                     | e.g.:<br />daily \\| livekit room in stream<br />vision_yolo_detector<br />-> edge (tts)<br />-> daily \\| livekit room out stream |\n| [daily_detect_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_detect_vision_bot.py)<br />[livekit_detect_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_detect_vision_bot.py)<br />[agora_detect_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/agora_detect_vision_bot.py)<br /> | e.g.:<br />daily_room_audio_stream \\| livekit_room_audio_stream<br />vision_yolo_detector<br />tts_edge | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/daily_detect_vision_bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | T4(free)                                                     | e.g.:<br />daily \\| livekit room in stream<br />vision_yolo_detector<br />-> edge (tts)<br />-> daily \\| livekit room out stream |\n| [daily_ocr_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/daily_ocr_vision_bot.py)<br />[livekit_ocr_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/livekit_ocr_vision_bot.py)<br/>[agora_ocr_vision_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/vision/agora_ocr_vision_bot.py)<br/> | e.g.:<br />daily_room_audio_stream \\| livekit_room_audio_stream<br />sense_voice_asr,<br />vision_transformers_got_ocr<br />tts_edge | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/daily_ocr_vision_bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | T4(free)                                                     | e.g.:<br />daily \\| livekit room in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />vision_transformers_got_ocr<br />-> edge (tts)<br />-> daily \\| livekit room out stream |\n| [daily_month_narration_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/image/daily_month_narration_bot.py) | e.g.:<br />daily_room_audio_stream <br />groq \\|together api llm(text),<br />hf_sd, together api (image)<br />tts_edge | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/achatbot_daily_month_narration_bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | when use sd model with diffusers<br />T4(free) cpu+cuda (slow)<br />L4 cpu+cuda<br/>A100 all cuda<br /> | e.g.:<br />daily room in stream<br />-> together  (llm) <br />-> hf sd gen image model<br />-> edge (tts)<br />-> daily  room out stream |\n| [daily_storytelling_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/image/storytelling/daily_bot.py) | e.g.:<br />daily_room_audio_stream <br />groq \\|together api llm(text),<br />hf_sd, together api (image)<br />tts_edge | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/achatbot_daily_storytelling_bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | cpu (2 cores)<br />when use sd model with diffusers<br />T4(free) cpu+cuda (slow)<br />L4 cpu+cuda<br/>A100 all cuda<br /> | e.g.:<br />daily room in stream<br />-> together  (llm) <br />-> hf sd gen image model<br />-> edge (tts)<br />-> daily  room out stream |\n| [websocket_server_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/websocket_server_bot.py)<br />[fastapi_websocket_server_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/fastapi_websocket_server_bot.py) | e.g.:<br /> websocket_server<br />sense_voice_asr,<br />groq \\|together api llm(text),<br />tts_edge | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/achatbot_websocket_server_bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | cpu(2 cores)                                                 | e.g.:<br />websocket protocol  in stream<br />-> silero (vad)<br />-> sense_voice (asr) <br />-> together  (llm) <br />-> edge (tts)<br />-> websocket protocol out stream |\n| [daily_natural_conversation_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/nlp/daily_natural_conversation_bot.py) | e.g.:<br /> daily_room_audio_stream<br />sense_voice_asr,<br />groq \\|together api llm(NLP task),<br />gemini-1.5-flash (chat)<br />tts_edge | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/achat_natural_conversation_bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | cpu(2 cores)                                                 | e.g.:<br />daily room in stream<br />-> together  (llm NLP task) <br />->  gemini-1.5-flash model (chat)<br />-> edge (tts)<br />-> daily  room out stream |\n| [fastapi_websocket_moshi_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/voice/fastapi_websocket_moshi_bot.py) | e.g.:<br /> websocket_server<br />moshi opus stream voice llm<br /> | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/achatbot_moshi_bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | L4                                                           | websocket protocol  in stream<br />-> silero (vad)<br />-> moshi opus stream voice llm<br />-> websocket protocol out stream |\n| [daily_asr_glm_voice_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/voice/daily_asr_glm_voice_bot.py)<br>[daily_glm_voice_bot](https://github.com/ai-bot-pro/achatbot/blob/main/src/cmd/bots/voice/daily_glm_voice_bot.py) | e.g.:<br /> daily_room_audio_stream<br />glm voice llm<br /> | <a href=\"https://github.com/weedge/doraemon-nb/blob/main/achatbot_glm_voice_bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> | T4/L4/A100                                                   | e.g.:<br />daily room in stream<br />->glm4-voice<br />-> daily  room out stream |\n|                                                              |                                                              |                                                              |                                                              |                                                              |\n|                                                              |                                                              |                                                              |                                                              |                                                              |\n\n\n\n## Run local chat bots\n\n> [!NOTE]\n>\n> - run src code, replace achatbot to src, don't need set `ACHATBOT_PKG=1` e.g.:\n>   ```\n>   TQDM_DISABLE=True \\\n>        python -m src.cmd.local-terminal-chat.generate_audio2audio > log/std_out.log\n>   ```\n> - PyAudio need install python3-pyaudio \n> e.g. ubuntu `apt-get install python3-pyaudio`, macos `brew install portaudio`\n> see: https://pypi.org/project/PyAudio/\n>\n> - llm llama-cpp-python init use cpu Pre-built Wheel to install, \n> if want to use other lib(cuda), see: https://github.com/abetlen/llama-cpp-python#installation-configuration\n>\n> - install `pydub`  need install `ffmpeg` see: https://www.ffmpeg.org/download.html\n\n1. run `pip install \"achatbot[local_terminal_chat_bot]\"` to install dependencies to run local terminal chat bot;\n2. create achatbot data dir in `$HOME` dir `mkdir -p ~/.achatbot/{log,config,models,records,videos}`;\n3. `cp .env.example .env`, and check `.env`, add key/value env params;\n4. select a model ckpt to download:\n    - vad model ckpt (default vad ckpt model use [silero vad](https://github.com/snakers4/silero-vad))\n    ```\n    # vad pyannote segmentation ckpt\n    huggingface-cli download pyannote/segmentation-3.0  --local-dir ~/.achatbot/models/pyannote/segmentation-3.0 --local-dir-use-symlinks False\n    ```\n    - asr model ckpt (default whipser ckpt model use base size)\n    ```\n    # asr openai whisper ckpt\n    wget https://openaipublic.azureedge.net/main/whisper/models/ed3a0b6b1c0edf879ad9b11b1af5a0e6ab5db9205f891f668f8b0e6c6326e34e/base.pt -O ~/.achatbot/models/base.pt\n    \n    # asr hf openai whisper ckpt for transformers pipeline to load\n    huggingface-cli download openai/whisper-base  --local-dir ~/.achatbot/models/openai/whisper-base --local-dir-use-symlinks False\n    \n    # asr hf faster whisper (CTranslate2)\n    huggingface-cli download Systran/faster-whisper-base  --local-dir ~/.achatbot/models/Systran/faster-whisper-base --local-dir-use-symlinks False\n    \n    # asr SenseVoice ckpt\n    huggingface-cli download FunAudioLLM/SenseVoiceSmall  --local-dir ~/.achatbot/models/FunAudioLLM/SenseVoiceSmall --local-dir-use-symlinks False\n    ```\n    - llm model ckpt (default llamacpp ckpt(ggml) model use qwen-2 instruct 1.5B size)\n    ```\n    # llm llamacpp Qwen2-Instruct\n    huggingface-cli download Qwen/Qwen2-1.5B-Instruct-GGUF qwen2-1_5b-instruct-q8_0.gguf  --local-dir ~/.achatbot/models --local-dir-use-symlinks False\n    \n    # llm llamacpp Qwen1.5-chat\n    huggingface-cli download Qwen/Qwen1.5-7B-Chat-GGUF qwen1_5-7b-chat-q8_0.gguf  --local-dir ~/.achatbot/models --local-dir-use-symlinks False\n    \n    # llm llamacpp phi-3-mini-4k-instruct\n    huggingface-cli download microsoft/Phi-3-mini-4k-instruct-gguf Phi-3-mini-4k-instruct-q4.gguf --local-dir ~/.achatbot/models --local-dir-use-symlinks False\n    \n    ```\n    - tts model ckpt (default whipser ckpt model use base size)\n    ```\n    # tts chatTTS\n    huggingface-cli download 2Noise/ChatTTS  --local-dir ~/.achatbot/models/2Noise/ChatTTS --local-dir-use-symlinks False\n    \n    # tts coquiTTS\n    huggingface-cli download coqui/XTTS-v2  --local-dir ~/.achatbot/models/coqui/XTTS-v2 --local-dir-use-symlinks False\n    \n    # tts cosy voice\n    git lfs install\n    git clone https://www.modelscope.cn/iic/CosyVoice-300M.git ~/.achatbot/models/CosyVoice-300M\n    git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git ~/.achatbot/models/CosyVoice-300M-SFT\n    git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git ~/.achatbot/models/CosyVoice-300M-Instruct\n    #git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git ~/.achatbot/models/CosyVoice-ttsfrd\n    \n    ```\n\n5. run local terminal chat bot with env; e.g. \n    - use dufault env params to run local chat bot\n    ```\n    ACHATBOT_PKG=1 TQDM_DISABLE=True \\\n        python -m achatbot.cmd.local-terminal-chat.generate_audio2audio > ~/.achatbot/log/std_out.log\n    ```\n\n## Run remote http fastapi daily chat bots\n1. run `pip install \"achatbot[fastapi_daily_bot_server]\"` to install dependencies to run http fastapi daily chat bot; \n\n2. run below cmd to start http server, see api docs: http://0.0.0.0:4321/docs\n    ```\n    ACHATBOT_PKG=1 python -m achatbot.cmd.http.server.fastapi_daily_bot_serve\n    ```\n3. run chat bot processor, e.g. \n   - run a daily langchain rag bot api, with ui/educator-client\n    > [!NOTE]\n    > need process youtube audio save to local file with `pytube`, run `pip install \"achatbot[pytube,deep_translator]\"` to install dependencies\n    > and transcribe/translate to text, then chunks to vector store, and run langchain rag bot api;\n    > run data process: \n    > ```\n    > ACHATBOT_PKG=1 python -m achatbot.cmd.bots.rag.data_process.youtube_audio_transcribe_to_tidb\n    > ```\n    > or download processed data from hf dataset [weege007/youtube_videos](https://huggingface.co/datasets/weege007/youtube_videos/tree/main/videos), then chunks to vector store .\n   ```\n   curl -XPOST \"http://0.0.0.0:4321/bot_join/chat-bot/DailyLangchainRAGBot\" \\\n    -H \"Content-Type: application/json\" \\\n    -d $'{\"config\":{\"llm\":{\"model\":\"llama-3.1-70b-versatile\",\"messages\":[{\"role\":\"system\",\"content\":\"\"}],\"language\":\"zh\"},\"tts\":{\"tag\":\"cartesia_tts_processor\",\"args\":{\"voice_id\":\"eda5bbff-1ff1-4886-8ef1-4e69a77640a0\",\"language\":\"zh\"}},\"asr\":{\"tag\":\"deepgram_asr_processor\",\"args\":{\"language\":\"zh\",\"model\":\"nova-2\"}}}}' | jq .\n   ```\n   - run a simple daily chat bot api, with ui/web-client-ui (default language: zh)\n   ```\n   curl -XPOST \"http://0.0.0.0:4321/bot_join/DailyBot\" \\\n    -H \"Content-Type: application/json\" \\\n    -d '{}' | jq .\n   ```\n\n## Run remote rpc chat bot worker\n1. run `pip install \"achatbot[remote_rpc_chat_bot_be_worker]\"` to install dependencies to run rpc chat bot BE worker; e.g. :\n   - use dufault env params to run rpc chat bot BE worker\n```\nACHATBOT_PKG=1 RUN_OP=be TQDM_DISABLE=True \\\n    TTS_TAG=tts_edge \\\n    python -m achatbot.cmd.grpc.terminal-chat.generate_audio2audio > ~/.achatbot/log/be_std_out.log\n```\n2. run `pip install \"achatbot[remote_rpc_chat_bot_fe]\"` to install dependencies to run rpc chat bot FE; \n```\nACHATBOT_PKG=1 RUN_OP=fe \\\n    TTS_TAG=tts_edge \\\n    python -m achatbot.cmd.grpc.terminal-chat.generate_audio2audio > ~/.achatbot/log/fe_std_out.log\n```\n\n## Run remote queue chat bot worker\n1. run `pip install \"achatbot[remote_queue_chat_bot_be_worker]\"` to install dependencies to run queue chat bot worker; e.g.:\n   - use default env params to run \n    ```\n    ACHATBOT_PKG=1 REDIS_PASSWORD=$redis_pwd RUN_OP=be TQDM_DISABLE=True \\\n        python -m achatbot.cmd.remote-queue-chat.generate_audio2audio > ~/.achatbot/log/be_std_out.log\n    ```\n   - sense_voice(asr) -> qwen (llm) -> cosy_voice (tts)\n   u can login [redislabs](https://app.redislabs.com/#/) create 30M free databases; set `REDIS_HOST`,`REDIS_PORT` and `REDIS_PASSWORD` to run, e.g.:\n   ```\n    ACHATBOT_PKG=1 RUN_OP=be \\\n      TQDM_DISABLE=True \\\n      REDIS_PASSWORD=$redis_pwd \\\n      REDIS_HOST=redis-14241.c256.us-east-1-2.ec2.redns.redis-cloud.com \\\n      REDIS_PORT=14241 \\\n      ASR_TAG=sense_voice_asr \\\n      ASR_LANG=zn \\\n      ASR_MODEL_NAME_OR_PATH=~/.achatbot/models/FunAudioLLM/SenseVoiceSmall \\\n      N_GPU_LAYERS=33 FLASH_ATTN=1 \\\n      LLM_MODEL_NAME=qwen \\\n      LLM_MODEL_PATH=~/.achatbot/models/qwen1_5-7b-chat-q8_0.gguf \\\n      TTS_TAG=tts_cosy_voice \\\n      python -m achatbot.cmd.remote-queue-chat.generate_audio2audio > ~/.achatbot/log/be_std_out.log\n   ```\n2. run `pip install \"achatbot[remote_queue_chat_bot_fe]\"` to install the required packages to run quueue chat bot frontend; e.g.:\n   - use default env params to run (default vad_recorder)\n    ```\n    ACHATBOT_PKG=1 RUN_OP=fe \\\n        REDIS_PASSWORD=$redis_pwd \\\n        REDIS_HOST=redis-14241.c256.us-east-1-2.ec2.redns.redis-cloud.com \\\n        REDIS_PORT=14241 \\\n        python -m achatbot.cmd.remote-queue-chat.generate_audio2audio > ~/.achatbot/log/fe_std_out.log\n    ```\n   - with wake word\n    ```\n    ACHATBOT_PKG=1 RUN_OP=fe \\\n        REDIS_PASSWORD=$redis_pwd \\\n        REDIS_HOST=redis-14241.c256.us-east-1-2.ec2.redns.redis-cloud.com \\\n        REDIS_PORT=14241 \\\n        RECORDER_TAG=wakeword_rms_recorder \\\n        python -m achatbot.cmd.remote-queue-chat.generate_audio2audio > ~/.achatbot/log/fe_std_out.log\n    ```\n   - default pyaudio player stream with tts tag out sample info(rate,channels..), e.g.: (be use tts_cosy_voice out stream info)\n   ```\n    ACHATBOT_PKG=1 RUN_OP=fe \\\n        REDIS_PASSWORD=$redis_pwd \\\n        REDIS_HOST=redis-14241.c256.us-east-1-2.ec2.redns.redis-cloud.com \\\n        REDIS_PORT=14241 \\\n        RUN_OP=fe \\\n        TTS_TAG=tts_cosy_voice \\\n        python -m achatbot.cmd.remote-queue-chat.generate_audio2audio > ~/.achatbot/log/fe_std_out.log\n   ```\n   remote_queue_chat_bot_be_worker in colab examples :\n   <a href=\"https://colab.research.google.com/github/weedge/doraemon-nb/blob/main/chat_bot_gpu_worker.ipynb\" target=\"_parent\">\n   <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n   \n   - sense_voice(asr) -> qwen (llm) -> cosy_voice (tts): \n\n## Run remote grpc tts speaker bot\n1. run `pip install \"achatbot[remote_grpc_tts_server]\"` to install dependencies to run grpc tts speaker bot server; \n```\nACHATBOT_PKG=1 python -m achatbot.cmd.grpc.speaker.server.serve\n```\n2. run `pip install \"achatbot[remote_grpc_tts_client]\"` to install dependencies to run grpc tts speaker bot client; \n```\nACHATBOT_PKG=1 TTS_TAG=tts_edge IS_RELOAD=1 python -m achatbot.cmd.grpc.speaker.client\nACHATBOT_PKG=1 TTS_TAG=tts_g IS_RELOAD=1 python -m achatbot.cmd.grpc.speaker.client\nACHATBOT_PKG=1 TTS_TAG=tts_coqui IS_RELOAD=1 python -m achatbot.cmd.grpc.speaker.client\nACHATBOT_PKG=1 TTS_TAG=tts_chat IS_RELOAD=1 python -m achatbot.cmd.grpc.speaker.client\nACHATBOT_PKG=1 TTS_TAG=tts_cosy_voice IS_RELOAD=1 python -m achatbot.cmd.grpc.speaker.client\n```\n\n\n# Multimodal Interaction\n## audio (voice)\n- stream-stt (realtime-recorder)\n![audio-text](https://github.com/user-attachments/assets/44bcec7d-f0a1-47db-bd95-21feee43a361)\n\n- audio-llm (multimode-chat)\n![pipe](https://github.com/user-attachments/assets/9970cf18-9bbc-4109-a3c5-e3e3c88086af)\n![queue](https://github.com/user-attachments/assets/30f2e880-f16d-4b62-8668-61bb97c57b2b)\n\n\n- stream-tts (realtime-(clone)-speaker)\n![text-audio](https://github.com/user-attachments/assets/676230a0-0a99-475b-9ef5-6afc95f044d8)\n![audio-text text-audio](https://github.com/user-attachments/assets/cbcabf98-731e-4887-9f37-649ec81e37a0)\n\n\n## vision (CV)\n- stream-ocr (realtime-object-detection)\n\n## more\n- Embodied Intelligence: Robots that touch the world, perceive and move\n\n# License\n\nachatbot is released under the [BSD 3 license](LICENSE). (Additional code in this distribution is covered by the MIT and Apache Open Source\nlicenses.) However you may have other legal obligations that govern your use of content, such as the terms of service for third-party models.\n",
    "bugtrack_url": null,
    "license": "BSD 3-Clause License  Copyright (c) 2024, weedge  Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.  3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ",
    "summary": "An open source chat bot for voice (and multimodal) assistants",
    "version": "0.0.8.4",
    "project_urls": {
        "Changelog": "https://github.com/ai-bot-pro/chat-bot/blob/main/CHANGELOG.md",
        "Documentation": "https://github.com/ai-bot-pro/chat-bot/blob/main/docs",
        "Homepage": "https://github.com/ai-bot-pro/chat-bot",
        "Issues": "https://github.com/ai-bot-pro/chat-bot/issues",
        "Repository": "https://github.com/ai-bot-pro/chat-bot.git"
    },
    "split_keywords": [
        "ai",
        " chat bot",
        " audio",
        " speech",
        " video",
        " image"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "1d2d24f13f51a07643782d229e81d9d724929868defc908d09bdc6a425ba3d70",
                "md5": "f85d2beb883c0f1d284509d9bc802c81",
                "sha256": "350f4807f11105f7caa9c379a870a56e856dfe8dc196dc3a6078e47513a19621"
            },
            "downloads": -1,
            "filename": "achatbot-0.0.8.4-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "f85d2beb883c0f1d284509d9bc802c81",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 1072841,
            "upload_time": "2024-12-20T08:00:16",
            "upload_time_iso_8601": "2024-12-20T08:00:16.698337Z",
            "url": "https://files.pythonhosted.org/packages/1d/2d/24f13f51a07643782d229e81d9d724929868defc908d09bdc6a425ba3d70/achatbot-0.0.8.4-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "f455aec9e4bec1c2b4680e18890b232d25534ed15c18b27d7ad8750219976547",
                "md5": "4d843a02f5ce663d1b79ec8193aac7ff",
                "sha256": "8d1020ec5f2f2adc01d6a34e8d6a5a90c982c584f5bd1e35c7a9e66b61bae1c8"
            },
            "downloads": -1,
            "filename": "achatbot-0.0.8.4.tar.gz",
            "has_sig": false,
            "md5_digest": "4d843a02f5ce663d1b79ec8193aac7ff",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 759837,
            "upload_time": "2024-12-20T08:00:20",
            "upload_time_iso_8601": "2024-12-20T08:00:20.609832Z",
            "url": "https://files.pythonhosted.org/packages/f4/55/aec9e4bec1c2b4680e18890b232d25534ed15c18b27d7ad8750219976547/achatbot-0.0.8.4.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-12-20 08:00:20",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "ai-bot-pro",
    "github_project": "chat-bot",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "achatbot"
}
        
Elapsed time: 0.42386s