# llama-assistant

- **Name:** llama-assistant
- **Version:** 0.1.37
- **Summary:** An AI assistant powered by Llama models
- **Home page:** https://github.com/vietanhdev/llama-assistant
- **Author:** Viet-Anh Nguyen
- **License:** MIT
- **Requires Python:** >=3.9
- **Keywords:** AI, assistant, Llama, PyQt5
- **Upload time:** 2024-10-07 03:02:12
- **Requirements:** PyQt5==5.15.6, SpeechRecognition==3.10.4, markdown==3.7, pynput==1.7.7, llama-cpp-python==0.3.1, huggingface_hub==0.25.1, openwakeword==0.6.0, pyinstaller==6.10.0, ffmpeg-python==0.2.0
            <p align="center">
  <img alt="Llama Assistant" style="width: 128px; max-width: 100%; height: auto;" src="https://raw.githubusercontent.com/vietanhdev/llama-assistant/refs/heads/main/logo.png"/>
  <h1 align="center">🌟 Llama Assistant 🌟</h1>
  <p align="center">Local AI Assistant That Respects Your Privacy! 🔒</p>
<p align="center"><b>Website:</b> <a href="https://llama-assistant.nrl.ai/" target="_blank">llama-assistant.nrl.ai</a></p>
</p>

[![Llama Assistant](https://user-images.githubusercontent.com/18329471/234640541-a6a65fbc-d7a5-4ec3-9b65-55305b01a7aa.png)](https://www.youtube.com/watch?v=kyRf8maKuDc)

![Python](https://img.shields.io/badge/python-3.9%2B-blue.svg)
![Llama 3](https://img.shields.io/badge/Llama-3-green.svg)
![License](https://img.shields.io/badge/license-MIT-orange.svg)
![Version](https://img.shields.io/badge/version-0.1.37-red.svg)
![Stars](https://img.shields.io/github/stars/vietanhdev/llama-assistant.svg)
![Forks](https://img.shields.io/github/forks/vietanhdev/llama-assistant.svg)
![Issues](https://img.shields.io/github/issues/vietanhdev/llama-assistant.svg)
[![Downloads](https://static.pepy.tech/badge/llama-assistant)](https://pepy.tech/project/llama-assistant)
[![Downloads](https://static.pepy.tech/badge/llama-assistant/month)](https://pepy.tech/project/llama-assistant)

<a href="https://www.producthunt.com/products/llama-assistant/reviews?utm_source=badge-product_review&utm_medium=badge&utm_souce=badge-llama&#0045;assistant" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/product_review.svg?product_id=610711&theme=light" alt="Llama&#0032;Assistant - Local&#0032;AI&#0032;Assistant&#0032;That&#0032;Respects&#0032;Your&#0032;Privacy&#0033;&#0032;🔒 | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /></a>

An AI assistant for your daily tasks, powered by Llama 3.2. It can recognize your voice, process natural language, and perform various actions based on your commands: summarizing text, rephrasing sentences, answering questions, writing emails, and more.

This assistant can run offline on your local machine, and it respects your privacy by not sending any data to external servers.

[![Screenshot](https://raw.githubusercontent.com/vietanhdev/llama-assistant/refs/heads/main/screenshot.png)](https://www.youtube.com/watch?v=kyRf8maKuDc)

https://github.com/user-attachments/assets/af2c544b-6d46-4c44-87d8-9a051ba213db

![Settings](https://raw.githubusercontent.com/vietanhdev/llama-assistant/refs/heads/main/docs/custom-models.png)

## Supported Models

- 📝 Text-only models:
  - [Llama 3.2](https://github.com/facebookresearch/llama) - 1B, 3B (4/8-bit quantized).
  - [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct-GGUF) (4-bit quantized).
  - [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct-GGUF) (4-bit quantized).
  - [gemma-2-2b-it](https://huggingface.co/lmstudio-community/gemma-2-2b-it-GGUF-Q4_K_M) (4-bit quantized).
  - Any other model supported by [llama.cpp](https://github.com/ggerganov/llama.cpp) can be added as a custom model ([see the list](https://github.com/ggerganov/llama.cpp)).

- 🖼️ Multimodal models:
  - [Moondream2](https://huggingface.co/vikhyatk/moondream2).
  - [MiniCPM-v2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf).
  - [LLaVA 1.5/1.6](https://llava-vl.github.io/).
  - Besides the models listed above, you can try other variants via custom models (a loading sketch is shown after this list).
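
The app downloads and manages these models itself, but if you want to see what loading one of the GGUF models above looks like at the `llama-cpp-python` level (the backend listed in the requirements), here is a minimal sketch. The repo ID is the Qwen2.5-0.5B-Instruct GGUF release linked above; the exact quantized filename is an assumption, so check the repository's file list.

```python
# Minimal sketch (not llama-assistant's internal code): download a 4-bit
# quantized GGUF model from Hugging Face and run one chat turn with
# llama-cpp-python. The filename below is an assumption; verify it against
# the files published in the repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Qwen/Qwen2.5-0.5B-Instruct-GGUF",
    filename="qwen2.5-0.5b-instruct-q4_k_m.gguf",  # assumed filename
)

llm = Llama(model_path=model_path, n_ctx=2048)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize: llamas are social animals that live in herds."}],
    max_tokens=64,
)
print(reply["choices"][0]["message"]["content"])
```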

## TODO

- [x] 🖼️ Support multimodal model: [moondream2](https://huggingface.co/vikhyatk/moondream2).
- [x] 🗣️ Add wake word detection: "Hey Llama!".
- [x] 🛠️ Custom models: Add support for custom models.
- [x] 📚 Support 5 other text models.
- [x] 🖼️ Support 5 other multimodal models.
- [x] ⚡ Streaming support for response.
- [x] 🎙️ Add offline STT support: WhisperCPP.
- [ ] 🧠 Knowledge database: LangChain or LlamaIndex?
- [ ] 🔌 Plugin system for extensibility.
- [ ] 📰 News and weather updates.
- [ ] 📧 Email integration with Gmail and Outlook.
- [ ] 📝 Note-taking and task management.
- [ ] 🎵 Music player and podcast integration.
- [ ] 🤖 Workflow with multiple agents.
- [ ] 🌐 Multi-language support: English, Spanish, French, German, etc.
- [ ] 📦 Package for Windows, Linux, and macOS.
- [ ] 🔄 Automated tests and CI/CD pipeline.

## Features

- 🎙️ Voice recognition for hands-free interaction.
- 💬 Natural language processing with Llama 3.2.
- 🖼️ Image analysis capabilities (TODO).
- ⚡ Global hotkey for quick access (Cmd+Shift+Space on macOS).
- 🎨 Customizable UI with adjustable transparency.

**Note:** This project is a work in progress, and new features are being added regularly.

## Technologies Used

- ![Python](https://img.shields.io/badge/Python-3.9%2B-blue?style=flat-square&logo=python&logoColor=white)
- ![Llama](https://img.shields.io/badge/Llama-3.2-yellow?style=flat-square&logo=meta&logoColor=white)
- ![SpeechRecognition](https://img.shields.io/badge/SpeechRecognition-3.8-green?style=flat-square&logo=google&logoColor=white)
- ![PyQt](https://img.shields.io/badge/PyQt-5-41CD52?style=flat-square&logo=qt&logoColor=white)

## Installation

**Recommended Python Version:** 3.10.

**Install PortAudio:**

<details>

Install [PortAudio](http://www.portaudio.com/). It is required by the [PyAudio](https://people.csail.mit.edu/hubert/pyaudio/) library to stream audio from your computer's microphone. PyAudio depends on PortAudio for cross-platform compatibility and is installed differently depending on the platform.

* For macOS, you can use [Homebrew](http://brew.sh):

  ```bash
  brew install portaudio
  ```

  **Note:** if you encounter an error when running `pip install` that indicates it can't find `portaudio.h`, try running `pip install` with the following flags:

  ```bash
  pip install --global-option='build_ext' \
      --global-option='-I/usr/local/include' \
      --global-option='-L/usr/local/lib' \
      pyaudio
  ```

* For Debian / Ubuntu Linux:

  ```bash
  apt-get install portaudio19-dev python3-all-dev
  ```

* Windows may work without installing PortAudio explicitly (it will get installed with PyAudio).

For more details, see the [PyAudio installation](https://people.csail.mit.edu/hubert/pyaudio/#downloads) page.

</details>
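
To confirm that PortAudio and PyAudio are wired up correctly before launching the assistant, a quick sanity check with the `SpeechRecognition` package (already one of the listed dependencies) can help. This is only a sketch and assumes a working microphone:

```python
# Minimal microphone sanity check using SpeechRecognition. If PyAudio and
# PortAudio are installed correctly, sr.Microphone() opens without errors.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:  # raises if PortAudio/PyAudio are missing
    recognizer.adjust_for_ambient_noise(source, duration=1)
    print("Microphone opened; say something...")
    audio = recognizer.listen(source, timeout=5)
print(f"Captured {len(audio.get_raw_data())} bytes of audio.")
```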

**On Windows: Installing the MinGW-w64 toolchain**

<details>

- Download and install MinGW-w64 following the instructions [here](https://code.visualstudio.com/docs/cpp/config-mingw).
- Direct download link: [MinGW-w64 (MSYS2 installer)](https://github.com/msys2/msys2-installer/releases/download/2024-01-13/msys2-x86_64-20240113.exe).

</details>

**Install from PyPI:**

```bash
pip install pyaudio
pip install git+https://github.com/stlukey/whispercpp.py
pip install llama-assistant
```

**Or install from source:**

<details>

1. Clone the repository:

```bash
git clone https://github.com/vietanhdev/llama-assistant.git
cd llama-assistant
```

2. Install the required dependencies and install the package:

```bash
pip install pyaudio
pip install git+https://github.com/stlukey/whispercpp.py
pip install -r requirements.txt
pip install .
```

</details>

**Speed Hack for Apple Silicon (M1, M2, M3) users:** 🔥🔥🔥

<details>

- Install Xcode:

```bash
# check the path of your xcode install
xcode-select -p

# xcode installed returns
# /Applications/Xcode-beta.app/Contents/Developer

# if xcode is missing then install it... it takes ages;
xcode-select --install
```

- Build `llama-cpp-python` with METAL support:

```bash
pip uninstall llama-cpp-python -y
CMAKE_ARGS="-DGGML_METAL=on" pip install -U llama-cpp-python --no-cache-dir

# llama-cpp-python should now be rebuilt with Metal support
# (requirements.txt pins llama-cpp-python==0.3.1)
```
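
To verify that the rebuild picked up Metal, a short check like the one below can be used. The model path is a placeholder; with a Metal-enabled build, the verbose load log should mention the Metal backend and report layers offloaded to the GPU.

```python
# Quick Metal check for the rebuilt llama-cpp-python (placeholder model path).
from llama_cpp import Llama

llm = Llama(
    model_path="/path/to/any-model.gguf",  # placeholder: any local GGUF file
    n_gpu_layers=-1,                        # offload all layers to the GPU
    verbose=True,                           # look for "Metal" in the load log
)
```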

</details>

## Usage

Run the assistant using the following command:

```bash
llama-assistant

# Or run it as a module:
python -m llama_assistant.main
```

Use the global hotkey (default: `Cmd+Shift+Space`) to quickly access the assistant from anywhere on your system.
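
The global hotkey itself is handled inside the app, but for illustration, this is roughly what a system-wide hotkey listener looks like with `pynput` (one of the listed dependencies). This is a standalone sketch, not llama-assistant's actual implementation:

```python
# Illustrative only: a standalone pynput listener for Cmd+Shift+Space.
from pynput import keyboard

def on_activate():
    print("Hotkey pressed - this is where the assistant window would open.")

with keyboard.GlobalHotKeys({"<cmd>+<shift>+<space>": on_activate}) as listener:
    listener.join()  # block and wait for hotkey presses
```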

## Configuration

The assistant's settings can be customized by editing the `settings.json` file located in your home directory: `~/llama_assistant/settings.json`.
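
The settings schema is defined by the application and is not documented here, so the sketch below only shows how to inspect the file and write a change back; any key you set is hypothetical:

```python
# Inspect (and optionally edit) the llama-assistant settings file.
import json
from pathlib import Path

settings_path = Path.home() / "llama_assistant" / "settings.json"
settings = json.loads(settings_path.read_text())
print(json.dumps(settings, indent=2))  # view the current configuration

# settings["some_key"] = "some_value"  # hypothetical key; check the real schema
# settings_path.write_text(json.dumps(settings, indent=2))
```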

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

This project is licensed under the GPLv3 License - see the [LICENSE](LICENSE) file for details.

## Acknowledgements

- This project uses [llama.cpp](https://github.com/ggerganov/llama.cpp), [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) for running large language models. The default model is [Llama 3.2](https://github.com/facebookresearch/llama) by Meta AI Research.
- Speech recognition is powered by [whisper.cpp](https://github.com/ggerganov/whisper.cpp) and [whispercpp.py](https://github.com/stlukey/whispercpp.py).
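
For reference, offline transcription with `whispercpp.py` follows this general shape (based on that project's documented usage; treat it as a sketch, and note that the audio path is a placeholder):

```python
# Offline speech-to-text sketch with whispercpp.py (not llama-assistant's
# internal pipeline). Downloads and uses the "tiny" Whisper model.
from whispercpp import Whisper

w = Whisper("tiny")
result = w.transcribe("recording.wav")  # placeholder audio file
print(w.extract_text(result))
```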

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=vietanhdev/llama-assistant&type=Date)](https://star-history.com/#vietanhdev/llama-assistant&Date)

## Contact

- Viet-Anh Nguyen - [vietanhdev](https://github.com/vietanhdev), [contact form](https://www.vietanh.dev/contact).
- Project Link: [https://github.com/vietanhdev/llama-assistant](https://github.com/vietanhdev/llama-assistant), [https://llama-assistant.nrl.ai/](https://llama-assistant.nrl.ai/).

            
