ALLMDEV

Name: ALLMDEV
Version: 1.3.7
Summary: A simple and efficient Python library for fast inference of GGUF Large Language Models.
Upload time: 2024-05-26 10:56:07
Maintainer: Soham Ghadge
Author: All Advance AI
Home page: None
Docs URL: None
Requires Python: None
License: None
Keywords: GGUF, GGUF Large Language Model, GGUF Large Language Models, GGUF Large Language Modeling, GGUF Large Language Modeling Library
            # ALLM

ALLM is a Python library designed for fast inference of GGUF Large Language Models (LLMs) on both CPU and GPU. (GGUF is the model file format used by llama.cpp and its ecosystem.) It provides a convenient interface for loading pre-trained GGUF models and running inference with them, and is aimed at applications where quick response times are crucial, such as chatbots and text generation.

## Features

- **Efficient Inference**: ALLM leverages the power of GGUF models to provide fast and accurate inference.
- **CPU and GPU Support**: The library is optimized for both CPU and GPU, allowing you to choose the best hardware for your application.
- **Simple Interface**: With straightforward command-line support, you can load a model and run inference with a single command.
- **Flexible Configuration**: Customize inference settings such as temperature and model path to suit your needs.

## Installation

You can install ALLM using pip:

```bash
pip install allm
```

## Usage

You can start inference with the simple `allm-run` command. It takes the model name or path as a required argument, along with optional temperature, max new tokens, and additional model kwargs.

```bash
allm-run --name model_name_or_path
```
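For example, a run with sampling options might look like the following. The flag spellings `--temperature` and `--max_new_tokens` are assumptions based on the argument list above, not confirmed from the package source; check `allm-run --help` for the authoritative names.

```bash
# Illustrative only: the optional flag names below are assumed, not confirmed.
allm-run --name mistral --temperature 0.7 --max_new_tokens 256
```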

## API

You can start the inference API with the `allm-serve` command. This launches the API server on the default host and port, 127.0.0.1:5000. If you prefer a different host or port, you can customize the apiconfig.txt file within your model directory.

```bash
allm-serve
```
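Once the server is running, you can query it over HTTP. The route and payload below are hypothetical, since the endpoint schema is not documented here; treat this as a sketch of the interaction rather than the actual API.

```bash
# Hypothetical request: the /chat route and JSON body are assumptions,
# not confirmed by the package documentation.
curl -X POST http://127.0.0.1:5000/chat \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, ALLM!"}'
```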


## ALLM AGENTS 

## Local Agent Inference

To create a local agent, begin by loading your knowledge documents into the database using the `allm-newagent` command and specifying the agent name:

```bash
allm-newagent --doc "document_path" --agent agent_name
```

or

```bash
allm-newagent --dir "directory containing files to be ingested" --agent agent_name
```
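For instance, to build an agent from a single document (the document path and agent name here are illustrative):

```bash
# Illustrative values: substitute your own document path and agent name.
allm-newagent --doc "./docs/product_manual.pdf" --agent support_bot
```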

After the agent has been created with your knowledge documents, you can start a local agent chat with the `allm-agentchat` command:


```bash
allm-agentchat --agent agent_name
```

Once your agents are created, you can also start an agent-specific API server using the `allm-agentapi` command:


```bash
allm-agentapi --agent agent_name
```

You can also add documents to an existing agent with the `allm-updateagent` command:

```bash
allm-updateagent --doc "document_path" --agent agent_name
```
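Putting the agent commands together, a typical lifecycle looks like this (the directory, document, and agent names are illustrative):

```bash
# Create an agent from a directory of documents.
allm-newagent --dir "./knowledge_base" --agent support_bot

# Chat with the agent locally.
allm-agentchat --agent support_bot

# Serve the agent over the API.
allm-agentapi --agent support_bot

# Later, add another document to the same agent.
allm-updateagent --doc "./docs/faq.pdf" --agent support_bot
```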

## Supported Cloud Models

ALLM supports generative LLMs on Vertex AI, including Gemini 1.5 Pro, as well as Azure OpenAI models. You can start local inference of cloud-based models using the following command:

```bash
allm-run-vertex --projectid Id_of_your_GCP_project --region location_of_your_cloud_server
```

or

```bash
allm-run-azure --key key --version version --endpoint https://{your_endpoint}.openai.azure.com --model model_name
```

ALLM also supports config-based local inference for the same Vertex AI and Azure OpenAI models. You can create the JSON config file manually, or let ALLM create one for you; once it exists, you can start local inference of cloud-based models with the shortened command:

```bash
allm-run-vertex
```
Note that for the above command to work, the config file needs to have all the necessary parameters set. The easiest way to achieve this is to run the full command, including CLI arguments, once, and then use the shortened command from then on.

The same procedure can be followed for Azure:

```bash
allm-run-azure
```
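Concretely, the two-step flow for Vertex AI looks like this (the project ID and region values are illustrative); the same pattern applies to `allm-run-azure`:

```bash
# First run: pass all arguments so ALLM can populate the config file.
allm-run-vertex --projectid my-gcp-project --region us-central1

# Subsequent runs: the saved config is picked up automatically.
allm-run-vertex
```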


You can also have a custom agent work with your cloud-deployed model using the following commands. Note that the agent must already have been created using the commands in the AGENTS section above.

```bash
allm-agentchat-vertex --projectid Id_of_your_GCP_project --region location_of_your_cloud_server --agent agent_name
```
or
```bash
allm-run-azure --key key --version version --endpoint https://{your_endpoint}.openai.azure.com --model model_name --agent agentname
```
model_name is an optional parameter for both Vertex and Azure; if it is not specified, inference defaults to gemini-1.0-pro-002 on Vertex and gpt-35-turbo on Azure OpenAI.

Alternatively, once the API config file is ready, the shortened commands can be used:

```bash
allm-agentchat-vertex --agent agent_name
```
and
```bash
allm-agentchat-azure --agent agent_name
```

ALLM also supports serving cloud-model-based agents over the API:

```bash
allm-agentapi-vertex --projectid Id_of_your_GCP_project --region location_of_your_cloud_server --agent agent_name
```
or
```bash
allm-agentapi-vertex --agent agent_name
```

For Azure,

```bash
allm-run-azure --key key --version version --endpoint https://{your_endpoint}.openai.azure.com --model model_name --agent agentname
```
or
```bash
allm-agentapi-azure --agent agent_name
```

## ALLM-Enterprise
You can launch the UI with the following command:
```bash
allm-launch
```

## Supported Model Names
Llama3, Llama2, llama, llama2_chat, Llama_chat, Mistral, Mistral_instruct


            
