aishalib

Name: aishalib
Version: 0.0.24
Summary: AI Smart Human Assistant Library
Homepage: https://github.com/Equiron-AI/aishalib
Author email: Vladimir Petrukhin <man4j@ya.ru>
Requires Python: >=3.10
License: not specified
Upload time: 2024-08-10 11:49:41
Requirements: none recorded

# AISHA Lib: A High-Level Abstraction for Building AI Assistants
The **AISHA (AI Smart Human Assistant) Lib** provides a high-level abstraction for creating AI assistants. It supports a range of large language models (LLMs) and multiple LLM backends, giving developers a powerful and flexible toolset.

## Environment
To create the conda environment from the provided `environment.yml`, run:
```console
conda env create -f environment.yml
```
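The environment name is defined inside `environment.yml`; assuming it is named `aishalib` (a guess — check the file), activate it and verify the Python version, since the package requires Python >= 3.10:
```console
conda activate aishalib
python --version  # should report 3.10 or newer
```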

## Installation
```console
pip install aishalib
```
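This page documents version 0.0.24; to pin that exact release:
```console
pip install aishalib==0.0.24
```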

## Supported Models
The following LLM models are supported:
- microsoft/Phi-3-medium-128k-instruct
- CohereForAI/c4ai-command-r-v01
- google/gemma-2-27b-it
- Qwen/Qwen2-72B-Instruct

## LLM backends
The following LLM backends are supported:
- Llama.cpp Server API
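To show how a model and a backend fit together, here is a minimal, Telegram-free sketch built from the same calls used in the bot example below; the server URL, prompt file, and sampling values are copied from that example rather than being the only valid choices:
```python
from aishalib.aishalib import Aisha
from aishalib.llmbackend import LlamaCppBackend
from aishalib.utils import get_time_string

# Connect to a running llama.cpp server (see "Run the Llama.cpp server backend" below).
backend = LlamaCppBackend("http://127.0.0.1:8088/completion", max_predict=256)

# Pair the backend with one of the supported models and a system prompt file.
aisha = Aisha(backend, "google/gemma-2-27b-it",
              prompt_file="system_prompt_example.txt", max_context=8192)

# Append a user message and request a completion.
aisha.add_user_request("user: Hello, who are you?", meta_info=get_time_string())
print(aisha.completion(temp=0.7, top_p=0.9))
```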

## Telegram bot example
```python
import os

from aishalib.aishalib import Aisha
from aishalib.llmbackend import LlamaCppBackend
from aishalib.tools import parseToolResponse
from aishalib.utils import get_time_string
from aishalib.memory import SimpleMemory
from telegram import Update
from telegram.ext import Application, MessageHandler, ContextTypes, filters


BOT_NAME = os.environ['BOT_NAME']
TG_TOKEN = os.environ['TG_TOKEN']

# All conversation contexts and long-term memory are persisted under this directory.
PERSISTENCE_DIR = BOT_NAME + "/"

if not os.path.exists(PERSISTENCE_DIR):
    os.makedirs(PERSISTENCE_DIR)

# Simple key-value store used to remember user names across restarts.
memory = SimpleMemory(PERSISTENCE_DIR + "memory.json")


def get_aisha(aisha_context_key, tg_context):
    # Lazily create one Aisha instance per chat and cache it in the Telegram user data.
    if aisha_context_key not in tg_context.user_data:
        backend = LlamaCppBackend("http://127.0.0.1:8088/completion", max_predict=256)
        aisha = Aisha(backend, "google/gemma-2-27b-it",
                      prompt_file="system_prompt_example.txt", max_context=8192)
        tg_context.user_data[aisha_context_key] = aisha
    aisha = tg_context.user_data[aisha_context_key]
    # Restore the persisted conversation context for this chat.
    aisha.load_context(aisha_context_key)
    return aisha


async def process_message(update: Update, context: ContextTypes.DEFAULT_TYPE):
    chat_id = update.effective_chat.id
    user_id = str(update.message.from_user.id)
    # Use the remembered name if one exists, otherwise fall back to "id_<user_id>".
    user_name = memory.get_memory_value("names:" + user_id, "")
    computed_name = user_name if user_name else f"id_{user_id}"
    message = update.message.text

    aisha = get_aisha(PERSISTENCE_DIR + str(chat_id), context)
    aisha.add_user_request(f"{computed_name}: {message}", meta_info=get_time_string())
    tools_response = aisha.completion(temp=0.7, top_p=0.9)
    aisha.save_context(PERSISTENCE_DIR + str(chat_id))

    # Extract the tool calls emitted by the model: an answer, a name to remember, or a pass.
    tools = parseToolResponse(tools_response, ["directly_answer", "save_human_name", "pass"])

    if "save_human_name" in tools:
        # The tool value has the form "id_<user_id>:<name>"; strip the prefix and store the name.
        user_name = tools["save_human_name"]
        memory.save_memory_value("names:" + user_name.split(":")[0].replace("id_", ""),
                                 user_name.split(":")[1])

    if "pass" not in tools:
        # The model chose to answer rather than stay silent.
        await context.bot.send_message(chat_id=chat_id,
                                       text=tools["directly_answer"],
                                       reply_to_message_id=update.message.message_id)


# Wire the handler into a long-polling Telegram application.
application = Application.builder().token(TG_TOKEN).build()
application.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, process_message))
application.run_polling()
```
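To run the bot, both environment variables read at the top of the script must be set; assuming the example is saved as `bot.py` (a hypothetical filename):
```console
export BOT_NAME=aisha_bot           # also used as the persistence directory name
export TG_TOKEN=<your-telegram-bot-token>
python bot.py
```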

## Run the Llama.cpp server backend
The Telegram example above points at port 8088 and uses an 8192-token context, so start `llama-server` with matching values:
```console
llama.cpp/build/bin/llama-server -m model_q5_k_m.gguf -ngl 99 -fa -c 8192 --host 0.0.0.0 --port 8088
```
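Once the server is up, it can be sanity-checked against llama.cpp's `/completion` endpoint, which accepts a JSON body with a prompt and a token budget:
```console
curl -X POST http://127.0.0.1:8088/completion \
     -H "Content-Type: application/json" \
     -d '{"prompt": "Hello", "n_predict": 16}'
```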

## Install CUDA toolkit for Llama.cpp compilation
Note that the toolkit version must be compatible with the installed driver; the driver version can be checked with the `nvidia-smi` command.
To install the toolkit for CUDA 12.5, run:
```console
CUDA_TOOLKIT_VERSION=12-5
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt -y install cuda-toolkit-${CUDA_TOOLKIT_VERSION}
echo -e '
export CUDA_HOME=/usr/local/cuda
export PATH=${CUDA_HOME}/bin:${PATH}
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH
' >> ~/.bashrc
```
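With the toolkit on the PATH (open a new shell or `source ~/.bashrc`), llama.cpp can then be compiled with CUDA support. A typical CMake invocation for recent llama.cpp trees (the flag was named `LLAMA_CUBLAS` in older versions):
```console
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
```
This produces the `llama-server` binary referenced above under `llama.cpp/build/bin/`.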

            
