# 💻🎤🔊 localtalk
A privacy-first voice assistant that runs entirely offline on Apple Silicon, perfect for travelers, privacy-conscious users, and anyone who values their data sovereignty. No accounts, no cloud services, no tracking - just powerful AI that respects your privacy.
Currently, this library needs work in the following areas before I can recommend it for everyday use:
- Develop a "System Prompt" with various personas
- Augment with local system knowledge (date/time, username, etc.) - see the sketch below this list
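Below is a minimal sketch of what the second item could look like: folding local system facts into a persona prompt before it reaches the model. The `build_system_prompt` helper and the persona text are hypothetical illustrations, not localtalk's actual implementation.

```python
import datetime
import getpass
import platform


def build_system_prompt(persona: str) -> str:
    """Hypothetical helper: append local system facts to a persona prompt."""
    now = datetime.datetime.now()
    facts = [
        f"Current date/time: {now:%A, %B %d, %Y %H:%M}",
        f"Username: {getpass.getuser()}",
        f"Machine: {platform.node()} ({platform.machine()})",
    ]
    return persona + "\n\nLocal system knowledge:\n" + "\n".join(facts)


# The resulting string could then be passed via --system-prompt
print(build_system_prompt("You are a concise, friendly assistant."))
```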
## Why This Project Exists
1. **Technology preview** - While the tech isn't perfect yet, we can build something functional right now that respects your privacy and runs entirely offline.
2. **As a vibe check on offline-first AI** - How realistic is it to avoid cloud services like OpenAI and ElevenLabs? This project explores what's possible with local models and helps identify the gaps.
3. **Future-proofing for real-time local AI** - One day soon, these models and consumer computers will be capable of real-time TTS that rivals cloud services. When that day comes, this library will be ready to leverage those improvements immediately.
### Why Not Use Apple's Built-in "Say" Command?
We deliberately chose not to use macOS's built-in `say` command for text-to-speech. While it's readily available and requires no setup, the voice quality is too robotic to meet today's user expectations. After being exposed to natural-sounding AI voices from services like ElevenLabs and OpenAI, users expect conversational AI to sound human-like. The `say` command's 1990s-era voice synthesis would make the assistant feel outdated and diminish the user experience, so it wasn't worth implementing as an option.
Apple's newer [Speech Synthesis API](https://developer.apple.com/documentation/avfoundation/speech-synthesis) offers much higher quality voices that could be a great fit for this project. However, we're waiting for proper Python library support to integrate it. Once Python bindings become available, we'll add support for these modern Apple voices as another local TTS option.
Built with speech recognition (Whisper), language model processing (Gemma3/MLX), and text-to-speech synthesis (Kokoro/ChatterBox), LocalTalk gives you the convenience of modern AI assistants without sacrificing your privacy or requiring internet connectivity.
## Why "LocalTalk"?
The name "LocalTalk" is a playful homage to [Apple's classic LocalTalk networking protocol](https://en.wikipedia.org/wiki/LocalTalk) from the 1980s. Just as the original LocalTalk enabled local network communication between Apple devices without needing external infrastructure, our LocalTalk enables local AI conversations without needing external cloud services.
The name works on two levels:
- **Local**: Everything runs locally on your Mac - no internet required after initial setup
- **Talk**: It's literally a talking app that listens and responds with voice
It's the perfect name for an offline voice assistant that embodies Apple's tradition of making powerful technology accessible and self-contained.
## Features
- 🎤 **Speech Recognition**: Convert speech to text using OpenAI Whisper
- 🤖 **Native Audio Processing**: Gemma3 model with direct audio understanding
- 🚀 **Fast TTS**: MLX-Audio Kokoro for near real-time speech synthesis
- 🔊 **Multiple TTS Options**: Choose between fast Kokoro or high-quality ChatterBox
- 💬 **Dual Input Modes**: Type or speak your queries
- 🎭 **Voice Options**: Multiple voice personalities with Kokoro
- 💾 **Fully Offline**: No internet connection required after setup
- 🔒 **100% Private**: Your conversations never leave your device
## Requirements
- Python 3.11+
- macOS with Apple Silicon (M1/M2/M3)
- Microphone for voice input
- MLX framework (installed automatically)
**Platform Support:**
- macOS (Apple Silicon): ✅ Fully supported as the first-class platform.
- Linux / CUDA backend: 🚧 Planned (see roadmap below).
- Windows: 🤷🏼‍♂️ Would consider, but not seriously.
## Installation - with uv
Recommended: install the CLI as a [`uv tool`](https://docs.astral.sh/uv/concepts/tools/)
```bash
uv tool install localtalk
# uvx also works, nice demo one-liner
uvx localtalk
```
## Contributor/Developer Setup
1. **Clone the repository**:
```bash
git clone https://github.com/anthonywu/localtalk
cd localtalk
```
2. **Create a virtual environment** (using `uv` recommended):
```bash
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
```
3. **Install the package**:
```bash
uv pip install -e .
```
4. **Download NLTK data** (required for sentence tokenization):
```bash
python -c "import nltk; nltk.download('punkt')"
```
5. **MLX-VLM will automatically download models on first run**
- No additional setup required
- Models are cached locally for offline use (see the optional pre-download sketch below)
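If you want to guarantee offline operation later (say, before a flight), you can optionally pre-fetch the default model into the local Hugging Face cache. This sketch uses `huggingface_hub` directly rather than any localtalk command; the model ID matches the documented default.

```python
from huggingface_hub import snapshot_download

# Pre-fetch the default LLM into the local HF cache (~/.cache/huggingface)
# so the first localtalk run can work without a network connection
snapshot_download("mlx-community/gemma-3n-E2B-it-4bit")
```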
## Quick Start (Hello World)
### Basic Usage
Run the voice assistant with default settings:
```bash
localtalk
```
This will:
1. Start with fast Kokoro TTS (MLX-Audio)
2. Use the `mlx-community/gemma-3n-E2B-it-4bit` model
3. Enable dual-modal input (type or speak)
4. Use `base.en` Whisper model for speech recognition
### Complete Hello World Example
```bash
# 1. Run the voice assistant
localtalk
# 2. You'll see: "💬 Type your message or press Enter to record audio:"
# 3. Either:
# - Type "Hello, how are you?" and press Enter
# - OR press Enter, speak, then press Enter again
# 4. Listen to the AI's response with fast Kokoro TTS!
```
### Different TTS Backends
```bash
# Fast mode (default) - Kokoro TTS with audio output
localtalk
# Different Kokoro voices: American female "nova"
localtalk --kokoro-voice af_nova --kokoro-speed 1.2
# Different Kokoro voices: British female "bella"
localtalk --kokoro-voice bf_bella --kokoro-speed 1.2
# High-quality mode - ChatterBox TTS (experimental, slow)
localtalk --use-chatterbox
```
## Configuration Options
### Command-Line Arguments
**Primary AI Model Options:**
- `--model NAME`: MLX model from the Hugging Face Hub (default: mlx-community/gemma-3n-E2B-it-4bit)
- `--whisper-model SIZE`: Whisper model size (default: base.en)
- `--temperature FLOAT`: Temperature for text generation (default: 0.7)
- `--top-p FLOAT`: Top-p sampling parameter (default: 1.0)
- `--max-tokens INT`: Maximum tokens to generate (default: 100)
**TTS Options:**
- `--kokoro-model`: Choose Kokoro model (4bit/6bit/8bit/bf16, default: 4bit)
- `--kokoro-voice`: Voice personality (af_heart/af_nova/af_bella/bf_emma)
- `--kokoro-speed`: Speech speed 0.5-2.0 (default: 1.0)
- `--no-tts`: Disable TTS for text-only mode
- `--use-chatterbox`: Use experimental ChatterBox TTS (slow but high quality)
**ChatterBox Options (requires --use-chatterbox):**
- `--exaggeration FLOAT`: Emotion intensity (0.0-1.0, default: 0.5)
- `--cfg-weight FLOAT`: Pacing control (0.0-1.0, default: 0.5)
- `--tts-quality`: Use quality mode instead of fast mode
**Other Options:**
- `--save-voice`: Save generated audio responses
- `--system-prompt`: Custom system prompt for the LLM
### Example Configurations
**Calm, professional assistant (ChatterBox)**:
```bash
localtalk --use-chatterbox --exaggeration 0.3 --cfg-weight 0.7 --temperature 0.5
```
**Expressive, dynamic assistant (ChatterBox)**:
```bash
localtalk --use-chatterbox --exaggeration 0.8 --cfg-weight 0.3 --temperature 0.9
```
**Using a different model**:
```bash
localtalk --model mlx-community/Llama-3.2-3B-Instruct-4bit --whisper-model small.en
```
## Secrets and API Keys
**Good news!** This application requires **NO API keys or secrets** to run.
Everything runs locally on your Mac!
- ✅ **Whisper**: Runs locally, no API key needed
- ✅ **MLX-LM**: Runs locally on Apple Silicon, no API key needed
- ✅ **ChatterBox**: Runs locally, no API key needed
## Advanced Usage
### Programmatic Usage
You can also use the voice assistant programmatically:
```python
from localtalk import VoiceAssistant, AppConfig
# Create custom configuration
config = AppConfig()
config.mlx_lm.model = "mlx-community/Llama-3.2-3B-Instruct-4bit"
config.chatterbox.exaggeration = 0.7
# Create and run assistant
assistant = VoiceAssistant(config)
assistant.run()
```
### Custom System Prompts
```bash
localtalk --system-prompt "You are a pirate. Respond in pirate speak, matey!"
```
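If you prefer to set the prompt in code, the same idea should map onto `AppConfig`. This is a sketch only: the `system_prompt` field is assumed to mirror the CLI flag and may be named differently in the real config schema.

```python
from localtalk import VoiceAssistant, AppConfig

config = AppConfig()
# Assumed field mirroring --system-prompt; check AppConfig for the actual name
config.system_prompt = "You are a pirate. Respond in pirate speak, matey!"

VoiceAssistant(config).run()
```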
## Troubleshooting
### Common Issues
1. **"Model not found" error**:
- The model will be automatically downloaded on first use
- Ensure you have a stable internet connection for the initial download
- Check that you have sufficient disk space (~4-8GB per model)
2. **"No microphone found" error**:
- Check your system's audio permissions
- Ensure your microphone is properly connected
- Try specifying a different audio device (see the device-listing sketch after this list)
3. **"Out of memory" error**:
- MLX is optimized for Apple Silicon but large models may still require significant RAM
- Try using a smaller/quantized model
- Close other applications to free up memory
4. **Poor voice cloning quality**:
- Use a longer, clearer voice sample (10-30 seconds)
- Ensure the sample has minimal background noise
- Try adjusting exaggeration and cfg-weight parameters
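For microphone problems, it often helps to see which input devices the OS actually exposes. The sketch below uses the `sounddevice` package, a common choice for Python audio capture; whether localtalk uses it internally is an assumption.

```python
import sounddevice as sd

# Print every device that can record audio (max_input_channels > 0)
for index, device in enumerate(sd.query_devices()):
    if device["max_input_channels"] > 0:
        print(f"[{index}] {device['name']} ({device['max_input_channels']} ch)")
```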
## Development
### Running Tests
```bash
# Install dev dependencies
uv pip install -e ".[dev]"
# Run tests
pytest
# Run with coverage
pytest --cov
```
### Code Style
```bash
# Format code
ruff format
# Lint code
ruff check --fix
```
## License
MIT License - see LICENSE file for details.
## Acknowledgments
- Apple MLX team for the efficient ML framework for Apple Silicon
- MLX-LM community for providing quantized models
- OpenAI Whisper for speech recognition
- Resemble AI for ChatterBox TTS
## Future Plans & Roadmap
### Language Support
Currently, LocalTalk supports English (American and British accents). **Chinese language support is coming next**, with other major world languages to follow. The underlying models (Whisper, Gemma3, and Kokoro) already have multilingual capabilities - we just need to wire up the language detection and configuration.
**Contributors welcome!** If you'd like to help add support for your language, please check our [Issues](https://github.com/anthonywu/localtalk/issues) page or submit a PR. Language additions mainly involve:
- Configuring Whisper for the target language (see the sketch after this list)
- Testing Gemma3's response quality in that language
- Setting up Kokoro TTS with appropriate voice models
- Adding language-specific prompts and examples
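For the first item, pinning Whisper to a target language is a one-argument change in the `openai-whisper` API, shown here standalone; the file path is a placeholder and the exact wiring inside localtalk will differ.

```python
import whisper

# Use a multilingual model (not a .en variant) and pin the target language
model = whisper.load_model("base")
result = model.transcribe("recording.wav", language="zh")  # placeholder path
print(result["text"])
```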
### Offline Knowledge Base
We're planning to add support for **offline data sources** to augment the LLM's knowledge while maintaining complete privacy:
- **Offline Wikipedia**: Full-text search and retrieval from Wikipedia dumps
- **Personal Documents**: Index and query your own documents, notes, and PDFs
- **Technical Documentation**: Offline access to programming docs, manuals, and references
- **Custom Knowledge Bases**: Import and index any structured data source
This will enable LocalTalk to provide informed responses about current events, technical topics, and personal information - all while keeping everything local and private on your device. The RAG (Retrieval Augmented Generation) pipeline will seamlessly integrate with the voice interface.
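As a rough sketch of how such a retrieval step could sit in front of the LLM, here is a toy keyword retriever that prepends the best-matching local document to the prompt. A real pipeline would use embeddings and a proper index; none of these names come from localtalk.

```python
def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Toy retrieval: rank documents by word overlap with the query."""
    query_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )[:top_k]


def augment_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved local context so the LLM can ground its answer."""
    context = "\n".join(retrieve(query, documents))
    return f"Use this local context to answer:\n{context}\n\nQuestion: {query}"


docs = [
    "The Wikipedia dump was last refreshed before going offline.",
    "Kokoro provides American and British English voices.",
]
print(augment_prompt("Which English voices are available?", docs))
```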
### Other Planned Features
- **Real-time streaming**: Stream responses as they're generated
- **Multi-turn conversations**: Better context management for longer discussions
- **Voice activity detection**: Automatic recording start/stop
- **Custom wake words**: "Hey LocalTalk" activation
- **Model hot-swapping**: Switch between models without restarting
- **Voice profiles**: Save and switch between different voice configurations
- **Plugin system**: Extend functionality with custom modules
- **Platform support**: Linux support (P2), Windows consideration (P3)