hcai-lens

Name: hcai-lens
Version: 1.0.0
Summary: LENS is a lightweight webserver designed to use Large Language Models as a tool for data exploration in human interactions.
Author email: Dominik Schiller <dominik.schiller@uni-a.de>
Upload time: 2024-09-22 10:28:45
Requires Python: >=3.9
Keywords: LENS, DISCOVER, LLM, machine learning
Repository: https://github.com/hcmlab/lens
Requirements: flask~=2.3.3, waitress~=2.1.2, python-dotenv~=1.0.0, litellm==1.27.4, requests~=2.31.0, Flask-Caching~=2.3.0
# Description
LENS (Learning and Exploring through Natural language Systems) is a lightweight webserver designed to use Large Language Models as a tool for data exploration in human interactions.
LENS is best used together with [NOVA](https://github.com/hcmlab/nova) and [DISCOVER](https://github.com/hcmlab/nova-server).

# Usage

LENS currently supports the [OpenAI API](https://platform.openai.com/docs/overview) and the [OLLAMA API](https://github.com/ollama/ollama/blob/main/docs/api.md).
Before you start, make sure you either have access to an OpenAI API key or have set up a local OLLAMA server.
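If you take the OLLAMA route, you can check that the local server is reachable before starting LENS. A minimal sketch, assuming OLLAMA's default address and its `/api/tags` endpoint for listing installed models:

```python
import requests

# Ask the local OLLAMA server which models are installed.
# 127.0.0.1:11434 is OLLAMA's default address (see the example config below).
response = requests.get("http://127.0.0.1:11434/api/tags", timeout=5)
response.raise_for_status()
for model in response.json().get("models", []):
    print(model["name"])
```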

To install LENS, install Python 3.9 or later and run the following command in your terminal:

`pip install hcai-lens`

Create a file named `lens.env` at a suitable location.
Copy the contents from the [environment](#environment) section into the newly created environment file and adapt them accordingly.
Run LENS using the following command:

`lens --env /path/to/lens.env`
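Once the server is up, a quick liveness check against the `/models` endpoint confirms it is responding. Host and port here follow the example configuration below; adjust as needed:

```python
import requests

# Simple liveness check for a locally running LENS instance;
# host and port are taken from the example lens.env below.
response = requests.get("http://127.0.0.1:1337/models", timeout=5)
print(response.status_code)
```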

# Environment

Example of a `.env` file:
```
# server
LENS_HOST = 127.0.0.1
LENS_PORT = 1337
LENS_CACHE_DUR = 600 # results of /models are cached for the specified duration in seconds

# model
DEFAULT_MODEL = llama3.1

# API_BASES
API_BASE_OLLAMA = http://127.0.0.1:11434
API_BASE_OLLAMA_CHAT = http://127.0.0.1:11434

# api keys
OPENAI_API_KEY = <openai-api-key>
OLLAMA_API_KEY = None # API keys are required for each model. Set to None if the model doesn't need one.

# prompts
LENS_DEFAULT_MAX_NEW_TOKENS = 1024
LENS_DEFAULT_TEMPERATURE = 0.8
LENS_DEFAULT_TOP_K = 50
LENS_DEFAULT_TOP_P = 0.95
LENS_DEFAULT_SYSTEM_PROMPT = "Your name is Nova. You are a helpful assistant."
```
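LENS loads this file itself when started with `--env`. If you want to inspect a configuration programmatically, the same file can be parsed with python-dotenv, which is already among the dependencies. A minimal sketch, assuming the example file above:

```python
from dotenv import dotenv_values

# Parse lens.env into a plain dict without modifying os.environ.
config = dotenv_values("lens.env")
print(config["LENS_HOST"], config["LENS_PORT"])
print(config["DEFAULT_MODEL"])
```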


# API
LENS provides a REST API that can be called from any client.
Where applicable, an endpoint accepts a request body as a JSON-formatted dictionary.
The API provides the following endpoints:

<details>
 <summary><code>GET</code> <code><b>/models</b></code> <code>Retrieve a list of available models</code></summary>

##### Parameters

> None

##### Responses

> | http code | content-type              | example response                                                       |
> |-----------|---------------------------|------------------------------------------------------------------------|
> | `200`     | `application/json`        | `[{"id":"gpt-3.5-turbo-1106","max_tokens":16385,"provider":"openai"}]` |


</details>
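The `id`/`provider` pairs in this response are exactly what `/assist` expects, so a client can pick a model directly from it. A small sketch against the example server address:

```python
import requests

# List the available models, using the documented response
# fields id, provider, and max_tokens.
models = requests.get("http://127.0.0.1:1337/models", timeout=5).json()
for entry in models:
    print(f"{entry['provider']}: {entry['id']} (max_tokens={entry['max_tokens']})")
```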

---

<details>
 <summary><code>POST</code> <code><b>/assist</b></code> <code>application/json</code> <code>Send a request to an LLM and return the answer</code></summary>

##### Parameters

> | name           | type     | data type  | description                                                                     |
> |----------------|----------|------------|---------------------------------------------------------------------------------|
> | `model`        | required | str        | The id of the model as provided by `/models`                                    |
> | `provider`     | required | str        | The provider of the model as provided by `/models`                              |
> | `message`      | required | str        | The prompt that should be sent to the model                                     |
> | `history`      | optional | list[list] | A history of previous question-answer pairs in chronological order              |
> | `systemprompt` | optional | str        | Set of instructions that define the model behaviour                             |
> | `data_desc`    | optional | str        | An explanation of how context data should be interpreted by the model           |
> | `data`         | optional | str        | Additional context data for the LLM                                             |
> | `stream`       | optional | bool       | Whether the answer should be streamed                                           |
> | `top_k`        | optional | int        | Select among the k most probable next tokens                                    |
> | `temperature`  | optional | float      | Degree of randomness when selecting the next token among candidates             |
> | `api_base`     | optional | str        | Overwrites the api_base of the server for the given provider/model combination  |


##### Responses

> | http code | content-type | response                                           |
> |-----------|--------------|----------------------------------------------------|
> | `200`     | `bytestring` | `A bytestring containing the UTF-8 encoded answer` |
                           
</details>


# Requests
```python
import requests

api_base = "http://127.0.0.1:1337"

# Retrieve the list of available models.
with requests.get(api_base + '/models') as response:
    print(response.content)

# Request body for /assist; stream is set to True so the server streams the answer.
request = {
    'model': 'llama3.1',
    'provider': 'ollama_chat',
    'message': 'Add the cost of an apple to the last thing I asked you.',
    'system_prompt': 'Your name is LENS. You are a helpful shopping assistant.',
    'data_desc': 'The data is provided in the form of tuples where the first entry is the name of a fruit, and the second entry is the price of that fruit.',
    'data': '("apple", "0.50"), ("avocado", "1.0"), ("banana", "0.80")',
    'stream': True,
    'top_k': 50,
    'top_p': 0.95,
    'temperature': 0.8,
    'history': [
        ["How much does a banana cost?", "Hello there! As a helpful shopping assistant, I'd be happy to help you find the price of a banana. According to the data provided, the cost of a banana is $0.80. So, one banana costs $0.80."]
    ]
}

with requests.post(api_base + '/assist', json=request) as response:
    print(response.content)
```
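Because the request sets `stream` to `True`, the answer can also be consumed incrementally rather than waiting for the whole body. A sketch that reuses the `api_base` and `request` variables from above and assumes the server sends the UTF-8 encoded answer in chunks, as the `/assist` response description suggests:

```python
# Pass stream=True to requests as well so the body is not buffered;
# iter_content then yields the answer chunk by chunk.
with requests.post(api_base + '/assist', json=request, stream=True) as response:
    for chunk in response.iter_content(chunk_size=None):
        print(chunk.decode('utf-8'), end='', flush=True)
```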


            
