# dailyai — an open source framework for real-time, multi-modal, conversational AI applications

Build things like this:

[![AI-powered voice patient intake for healthcare](https://img.youtube.com/vi/lDevgsp9vn0/0.jpg)](https://www.youtube.com/watch?v=lDevgsp9vn0)

**`dailyai` started as a toolkit for implementing generative AI voice bots.** Things like personal coaches, meeting assistants, story-telling toys for kids, customer support bots, and snarky social companions.

In 2023 a *lot* of us got excited about the possibility of having open-ended conversations with LLMs. It became clear pretty quickly that we were all solving the same [low-level problems](https://www.daily.co/blog/how-to-talk-to-an-llm-with-your-voice/):
- low-latency, reliable audio transport
- echo cancellation
- phrase endpointing (knowing when the bot should respond to human speech; a minimal sketch follows this list)
- interruptibility
- writing clean code to stream data through "pipelines" of speech-to-text, LLM inference, and text-to-speech models
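
To make phrase endpointing concrete, here is a minimal sketch (illustrative only, not the `dailyai` API, and the threshold value is an assumption): treat an utterance as finished once voice activity has been absent for a fixed silence window.

```python
# Illustrative phrase endpointing, not the dailyai API: the bot should
# respond once voice activity has been absent for a fixed silence window.
SILENCE_TIMEOUT_S = 0.8  # assumed threshold; real systems tune this per use case

def should_respond(voice_active: bool, last_voice_time: float, now: float) -> bool:
    """Return True once the human has been silent long enough to reply."""
    if voice_active:
        return False  # still talking; keep listening
    return (now - last_voice_time) >= SILENCE_TIMEOUT_S
```

In practice this interacts with interruptibility: if the human starts speaking again while the bot is responding, the pipeline has to cancel in-flight inference and playback.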

As our applications expanded to include additional things like image generation, function calling, and vision models, we started to think about what a complete framework for these kinds of apps could look like.

Today, `dailyai` is:

1. a set of code building blocks for interacting with generative AI services and creating low-latency, interruptible data pipelines that use multiple services (a conceptual sketch follows this list)
2. transport services that move audio, video, and events across the Internet
3. implementations of specific generative AI services
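
The pipeline is the core abstraction. As a rough conceptual sketch (hypothetical names and shapes, not the actual `dailyai` API), each stage consumes frames from a queue, transforms them through a service, and forwards the result downstream:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Frame:
    kind: str      # e.g. "audio" or "text"
    payload: str

async def run_stage(transform, in_q: asyncio.Queue, out_q: asyncio.Queue):
    """Consume frames from in_q, apply transform, forward results to out_q."""
    while True:
        frame = await in_q.get()
        await out_q.put(transform(frame))

# Placeholders standing in for real speech-to-text, LLM, and text-to-speech calls:
def stt(frame): return Frame("text", f"transcript({frame.payload})")
def llm(frame): return Frame("text", f"reply({frame.payload})")
def tts(frame): return Frame("audio", f"speech({frame.payload})")

async def main():
    q0, q1, q2, q3 = (asyncio.Queue() for _ in range(4))
    stages = [
        asyncio.create_task(run_stage(stt, q0, q1)),
        asyncio.create_task(run_stage(llm, q1, q2)),
        asyncio.create_task(run_stage(tts, q2, q3)),
    ]
    await q0.put(Frame("audio", "mic-chunk"))
    print(await q3.get())  # Frame(kind='audio', payload='speech(reply(transcript(mic-chunk)))')
    for task in stages:
        task.cancel()

asyncio.run(main())
```

Because every stage streams frames independently, later stages can start work (and be interrupted) before earlier ones finish, which is where the low latency comes from.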

Currently implemented services:

- Speech-to-text
  - Deepgram
  - Whisper
- LLMs
  - Azure
  - Fireworks
  - OpenAI
- Image generation
  - Azure
  - Fal
  - OpenAI
- Text-to-speech
  - Azure
  - Deepgram
  - ElevenLabs
- Transport
  - Daily
  - Local (in progress, intended as a quick start example service)
- Vision
  - Moondream

If you'd like to [implement a service](https://github.com/daily-co/daily-ai-sdk/tree/main/src/dailyai/services), we welcome PRs! Our goal is to support lots of services in all of the above categories, plus new categories (like real-time video) as they emerge.

## Getting started

Today, the easiest way to get started with `dailyai` is to use [Daily](https://www.daily.co/) as your transport service. This toolkit started life as an internal SDK at Daily and millions of minutes of AI conversation have been served using it and its earlier prototype incarnations. (The [transport base class](https://github.com/daily-co/daily-ai-sdk/blob/main/src/dailyai/transports/abstract_transport.py) is easy to extend, though, so feel free to submit PRs if you'd like to implement another transport service.)
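
If you do write one, the general shape is a class that implements the base transport's send and receive hooks. The sketch below is purely illustrative (hypothetical class and method names; consult `src/dailyai/transports/abstract_transport.py` for the real interface):

```python
from abc import ABC, abstractmethod
from typing import List

class BaseTransport(ABC):
    """Stand-in for the framework's abstract transport; not the real class."""

    @abstractmethod
    async def send_audio(self, chunk: bytes) -> None: ...

    @abstractmethod
    async def receive_audio(self) -> bytes: ...

class LoopbackTransport(BaseTransport):
    """Toy transport that echoes audio back locally (handy for tests)."""

    def __init__(self) -> None:
        self._buffer: List[bytes] = []

    async def send_audio(self, chunk: bytes) -> None:
        self._buffer.append(chunk)

    async def receive_audio(self) -> bytes:
        return self._buffer.pop(0) if self._buffer else b""
```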

```
# install the module
pip install dailyai

# set up an .env file with API keys
cp dot-env.template .env
```

By default, only the core framework functionality is installed, to keep dependencies to a minimum. Some third-party AI services require additional dependencies, which you can install with:

```
pip install "dailyai[option,...]"
```

Your project may not need all of these, so they are provided as optional extras:

- **AI services**: `anthropic`, `azure`, `fal`, `moondream`, `openai`, `playht`, `silero`, `whisper`
- **Transports**: `daily`, `local`, `websocket`
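
For example, to use the Daily transport together with the OpenAI services, you would run `pip install "dailyai[daily,openai]"`.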

## Code examples

There are two directories of examples:

- [foundational](https://github.com/daily-co/daily-ai-sdk/tree/main/examples/foundational) — demos that build on each other, introducing one or two concepts at a time
- [starter apps](https://github.com/daily-co/daily-ai-sdk/tree/main/examples/starter-apps) — complete applications that you can use as starting points for development

Before running the examples, install the example dependencies (this installs everything needed to run all of the examples):

```
pip install -r {env}-requirements.txt
```

To run the example below, sign up for a [free Daily account](https://dashboard.daily.co/u/signup) and create a Daily room (so you can hear the LLM talking). After that, open the room URL in a browser tab and run:

```
python examples/foundational/02-llm-say-one-thing.py
```

## Hacking on the framework itself

_Note: you may need to set up a virtual environment before following the instructions below. For instance, from the root of the repo:_

```
python3 -m venv venv
source venv/bin/activate
```

From the root of this repo, run the following:

```
pip install -r {env}-requirements.txt -r dev-requirements.txt
python -m build
```

This builds the package. To use the package locally (e.g., to run the example files), run:

```
pip install --editable .
```

If you want to use this package from another directory, you can run:

```
pip install path_to_this_repo
```

### Running tests

From the root directory, run:

```
pytest --doctest-modules --ignore-glob="*to_be_updated*" src tests
```

## Setting up your editor

This project uses strict [PEP 8](https://peps.python.org/pep-0008/) formatting.
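
Both editor setups below run `autopep8` with `-a -a --max-line-length=100`; you can apply the same formatting from the command line with `autopep8 -a -a --max-line-length=100 --in-place <file>`.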

### Emacs

You can use [use-package](https://github.com/jwiegley/use-package) to install the [py-autopep8](https://codeberg.org/ideasman42/emacs-py-autopep8) package and configure `autopep8` arguments:

```elisp
(use-package py-autopep8
  :ensure t
  :defer t
  :hook ((python-mode . py-autopep8-mode))
  :config
  (setq py-autopep8-options '("-a" "-a" "--max-line-length=100")))
```

Since `autopep8` is installed in the `venv` environment described above, you can use [pyvenv-auto](https://github.com/ryotaro612/pyvenv-auto) to load that environment automatically inside Emacs:

```elisp
(use-package pyvenv-auto
  :ensure t
  :defer t
  :hook ((python-mode . pyvenv-auto-run)))
```

### Visual Studio Code

Install the [autopep8](https://marketplace.visualstudio.com/items?itemName=ms-python.autopep8) extension. Then edit the user settings (_Ctrl-Shift-P_ `Open User Settings (JSON)`) to set it as the default Python formatter, enable formatting on save, and configure `autopep8` arguments:

```json
"[python]": {
    "editor.defaultFormatter": "ms-python.autopep8",
    "editor.formatOnSave": true
},
"autopep8.args": [
    "-a",
    "-a",
    "--max-line-length=100"
],
```
