livekit-plugins-nextevi 0.1.0

- Summary: LiveKit Agents plugin for the NextEVI voice AI platform with real-time speech-to-speech capabilities
- Uploaded: 2025-09-13 07:00:05
- Requires Python: >=3.9.0
- License: Apache-2.0
- Keywords: audio, livekit, nextevi, realtime, video, voice-ai, webrtc
# LiveKit NextEVI Plugin

A LiveKit Agents plugin for NextEVI's voice AI platform, providing real-time speech-to-speech capabilities with advanced voice AI features.

## Features

- **Real-time speech-to-speech** - Direct audio input to voice response
- **Voice interruption support** - Natural conversation with interruption handling  
- **Multiple TTS engines** - Orpheus, Ethos, Kokoro, and more
- **Emotion analysis** - Real-time emotion detection and adaptation
- **Voice cloning** - Custom voice synthesis capabilities
- **Knowledge base integration** - Contextual AI responses
- **LiveKit Agents integration** - Seamless integration with LiveKit's AgentSession

## Installation

```bash
pip install livekit-plugins-nextevi
```

## Quick Start

### Basic Usage

```python
import asyncio
from livekit import rtc
from livekit.agents import JobContext, WorkerOptions, cli
from livekit_nextevi import NextEVIRealtimeModel

async def agent_entrypoint(ctx: JobContext):
    # Connect to room
    await ctx.connect()
    
    # Create NextEVI model
    model = NextEVIRealtimeModel(
        api_key="your_nextevi_api_key",  # Set via environment: NEXTEVI_API_KEY
        config_id="your_config_id",     # Set via environment: NEXTEVI_CONFIG_ID
        project_id="your_project_id"    # Set via environment: NEXTEVI_PROJECT_ID (optional)
    )
    
    # Set up audio output
    audio_source = rtc.AudioSource(sample_rate=48000, num_channels=1)
    track = rtc.LocalAudioTrack.create_audio_track("nextevi-voice", audio_source)
    await ctx.room.local_participant.publish_track(track)
    
    # Configure model with audio source
    model.set_audio_source(audio_source)
    model.set_livekit_context(ctx)
    
    # Handle incoming audio
    @ctx.room.on("track_subscribed")
    def on_track_subscribed(track: rtc.Track, publication: rtc.TrackPublication, participant: rtc.RemoteParticipant):
        if track.kind == rtc.TrackKind.KIND_AUDIO:
            async def process_audio():
                audio_stream = rtc.AudioStream(track)
                async for event in audio_stream:
                    if isinstance(event, rtc.AudioFrameEvent):
                        # Send audio to NextEVI for processing
                        await model.push_audio(event.frame)
            
            asyncio.create_task(process_audio())
    
    # Stream NextEVI audio output to LiveKit
    async def stream_audio_output():
        audio_stream = model.audio_output_stream()
        async for audio_frame in audio_stream:
            await audio_source.capture_frame(audio_frame)
    
    asyncio.create_task(stream_audio_output())
    
    # Keep running
    await asyncio.Future()

# Run the agent
if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=agent_entrypoint))
```
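The example above publishes a 48 kHz mono audio source. When producing or inspecting frames yourself, it helps to size PCM buffers consistently; a small helper for that arithmetic (my own, not part of the plugin):

```python
def pcm_frame_bytes(sample_rate: int, channels: int,
                    frame_ms: int = 10, bytes_per_sample: int = 2) -> int:
    """Bytes of PCM audio in one frame of the given duration.

    Defaults assume 16-bit samples and 10 ms frames, a common
    choice for real-time audio pipelines.
    """
    samples_per_channel = sample_rate * frame_ms // 1000
    return samples_per_channel * channels * bytes_per_sample
```

For the 48 kHz mono source used here, each 10 ms frame carries 480 samples, i.e. 960 bytes of 16-bit PCM.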

### With AgentSession (Recommended)

For better integration with LiveKit's playground and transcription display:

```python
import asyncio
from livekit import rtc
from livekit.agents import JobContext, WorkerOptions, cli, AgentSession
from livekit_nextevi import NextEVIRealtimeModel, NextEVISTT

async def agent_entrypoint(ctx: JobContext):
    await ctx.connect()
    
    # Create custom STT for transcription forwarding
    nextevi_stt = NextEVISTT()
    
    # Create NextEVI model
    model = NextEVIRealtimeModel(
        api_key="your_nextevi_api_key",
        config_id="your_config_id",
        project_id="your_project_id"
    )
    
    # Bridge NextEVI transcriptions to STT
    def on_transcription(transcript: str, is_final: bool):
        nextevi_stt.forward_transcription(transcript, is_final)
    
    model.set_transcription_callback(on_transcription)
    
    # Create AgentSession (wires the STT bridge and NextEVI model together;
    # see the LiveKit Agents docs for starting the session with an Agent)
    session = AgentSession(
        stt=nextevi_stt,  # Custom STT for transcription forwarding
        llm=model,        # NextEVI handles LLM + TTS
    )
    
    # Set up audio output (same as basic usage)
    audio_source = rtc.AudioSource(sample_rate=48000, num_channels=1)
    track = rtc.LocalAudioTrack.create_audio_track("nextevi-voice", audio_source)
    await ctx.room.local_participant.publish_track(track)
    
    model.set_audio_source(audio_source)
    model.set_livekit_context(ctx)
    
    # Handle audio input (same as basic usage)
    @ctx.room.on("track_subscribed")
    def on_track_subscribed(track: rtc.Track, publication: rtc.TrackPublication, participant: rtc.RemoteParticipant):
        if track.kind == rtc.TrackKind.KIND_AUDIO:
            async def process_audio():
                audio_stream = rtc.AudioStream(track)
                async for event in audio_stream:
                    if isinstance(event, rtc.AudioFrameEvent):
                        await model.push_audio(event.frame)
            asyncio.create_task(process_audio())
    
    # Stream audio output (same as basic usage)
    async def stream_audio_output():
        audio_stream = model.audio_output_stream()
        async for audio_frame in audio_stream:
            await audio_source.capture_frame(audio_frame)
    
    asyncio.create_task(stream_audio_output())
    await asyncio.Future()

if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=agent_entrypoint))
```

## Configuration

### Environment Variables

Set these environment variables to configure the plugin:

```bash
export NEXTEVI_API_KEY="your_api_key_here"
export NEXTEVI_CONFIG_ID="your_config_id_here" 
export NEXTEVI_PROJECT_ID="your_project_id_here"  # Optional
```
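Reading these variables at startup keeps credentials out of source code. A minimal sketch (the helper name and error handling are my own; only the variable names come from this README):

```python
import os

def load_nextevi_config() -> dict:
    """Read NextEVI credentials from the environment.

    NEXTEVI_API_KEY and NEXTEVI_CONFIG_ID are required;
    NEXTEVI_PROJECT_ID is optional and may be absent.
    """
    config = {
        "api_key": os.getenv("NEXTEVI_API_KEY"),
        "config_id": os.getenv("NEXTEVI_CONFIG_ID"),
        "project_id": os.getenv("NEXTEVI_PROJECT_ID"),  # may be None
    }
    missing = [key for key, value in config.items()
               if value is None and key != "project_id"]
    if missing:
        raise RuntimeError(f"Missing NextEVI settings: {', '.join(missing)}")
    return config
```

The resulting dict can then be unpacked into the constructor, e.g. `NextEVIRealtimeModel(**load_nextevi_config())`.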

### NextEVIRealtimeModel Options

```python
model = NextEVIRealtimeModel(
    api_key="your_api_key",           # Required: NextEVI API key
    config_id="your_config_id",       # Required: NextEVI configuration ID
    project_id="your_project_id",     # Optional: NextEVI project ID
    tts_engine="orpheus",             # TTS engine: orpheus, ethos, kokoro
    voice_id="leo",                   # Voice ID for TTS
    llm_provider="anthropic",         # LLM provider: anthropic, openai
    temperature=0.8,                  # LLM temperature
    speech_speed=1.0,                 # TTS speed multiplier
    enable_emotion_analysis=True,     # Enable emotion analysis
    enable_knowledge_base=True,       # Enable knowledge base integration
    enable_interruption=True,         # Enable voice interruption
    recording_enabled=False,          # Enable session recording
)
```

## Key Methods

### NextEVIRealtimeModel

- `push_audio(audio_frame)` - Send audio frame for processing
- `audio_output_stream()` - Get stream of TTS audio output
- `set_audio_source(audio_source)` - Configure LiveKit audio source
- `set_transcription_callback(callback)` - Set transcription forwarding callback
- `commit_audio()` - Commit audio input (no-op for NextEVI)

### NextEVISTT (Internal)

- `forward_transcription(transcript, is_final)` - Forward transcription to AgentSession
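The `(transcript, is_final)` callback shape shown in the Quick Start lends itself to a small buffer that keeps only the latest interim text and appends finals to a history. A sketch under that assumption (this class is illustrative, not part of the plugin):

```python
class TranscriptBuffer:
    """Collects (transcript, is_final) callbacks into a running history."""

    def __init__(self):
        self.finals: list[str] = []   # committed utterances
        self.interim: str = ""        # latest in-progress text

    def on_transcription(self, transcript: str, is_final: bool) -> None:
        if is_final:
            self.finals.append(transcript)
            self.interim = ""         # the final replaces any interim text
        else:
            self.interim = transcript

    def current_text(self) -> str:
        parts = self.finals + ([self.interim] if self.interim else [])
        return " ".join(parts)
```

An instance's `on_transcription` could be passed to `model.set_transcription_callback(...)` alongside (or instead of) the `NextEVISTT` bridge, e.g. to log or display a running transcript.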

## Requirements

- Python 3.9+
- LiveKit Agents 1.2.8+
- NextEVI API account and credentials

## License

Apache 2.0

## Support

- [NextEVI Documentation](https://docs.nextevi.com)
- [LiveKit Agents Documentation](https://docs.livekit.io/agents/)
- [GitHub Issues](https://github.com/nextevi/livekit-nextevi/issues)
- [Source Repository](https://github.com/nextevi/livekit-nextevi)
            
