semantic-ai

Name: semantic-ai
Version: 0.0.5
Home page: https://github.com/decisionfacts/semantic-ai
Summary: Semantic AI RAG System
Upload time: 2024-02-15 12:16:29
Maintainer: DecisionFacts
Author: DecisionFacts
License: Apache License 2.0
Keywords: pdf, machine-learning, ocr, deep-neural-networks, openai, docx, approximate-nearest-neighbor-search, semantic-search, document-parser, rag, fastapi, vector-database, inference-api, openai-api, llm, retrieval-augmented-generation, llama2
            ![Semantic AI Logo](https://github.com/decisionfacts/semantic-ai/blob/master/docs/source/_static/images/createLLM.png?raw=True)
# Semantic AI Lib

[![Python version](https://img.shields.io/badge/python-3.10-green)](https://img.shields.io/badge/python-3.10-green)[![PyPI version](https://badge.fury.io/py/semantic-ai.svg)](https://badge.fury.io/py/semantic-ai)[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

An open-source framework for Retrieval-Augmented Generation (RAG) that uses semantic search to retrieve relevant results and generate human-readable conversational responses with the help of a Large Language Model (LLM).

**Semantic AI Library Documentation [Docs here](https://docs-semantic-ai.decisionfacts.ai/)**

## Requirements

Python 3.10+ (uses `asyncio`)

## Installation
```shell
# Using pip
$ python -m pip install semantic-ai

# Manual install
$ python -m pip install .
```
# Set the environment variable
Set the credentials in a `.env` file. Provide credentials for only one connector, one indexer, and one LLM model config; leave the other fields empty.
```shell
# Default
FILE_DOWNLOAD_DIR_PATH= # default directory name 'download_file_dir'
EXTRACTED_DIR_PATH= # default directory name 'extracted_dir'

# Connector (SharePoint, S3, GCP Bucket, GDrive, Confluence etc.,)
CONNECTOR_TYPE="connector_name" # sharepoint
SHAREPOINT_CLIENT_ID="client_id"
SHAREPOINT_CLIENT_SECRET="client_secret"
SHAREPOINT_TENANT_ID="tenant_id"
SHAREPOINT_HOST_NAME='<tenant_name>.sharepoint.com'
SHAREPOINT_SCOPE='https://graph.microsoft.com/.default'
SHAREPOINT_SITE_ID="site_id"
SHAREPOINT_DRIVE_ID="drive_id"
SHAREPOINT_FOLDER_URL="folder_url" # /My_folder/child_folder/

# Indexer
INDEXER_TYPE="vector_db_name" # elasticsearch, qdrant
ELASTICSEARCH_URL="elasticsearch_url" # give valid url
ELASTICSEARCH_USER="elasticsearch_user" # give valid user
ELASTICSEARCH_PASSWORD="elasticsearch_password" # give valid password
ELASTICSEARCH_INDEX_NAME="index_name"
ELASTICSEARCH_SSL_VERIFY="ssl_verify" # True or False

# Qdrant
QDRANT_URL="<qdrant_url>"
QDRANT_INDEX_NAME="<index_name>"
QDRANT_API_KEY="<apikey>"

# LLM
LLM_MODEL="<llm_model>" # llama, openai
LLM_MODEL_NAME_OR_PATH="" # model name
OPENAI_API_KEY="<openai_api_key>" # if using openai

# SQL
SQLITE_SQL_PATH="<database_path>" # sqlite db path

# MYSQL
MYSQL_HOST="<host_name>" # localhost or IP address
MYSQL_USER="<user_name>"
MYSQL_PASSWORD="<password>"
MYSQL_DATABASE="<database_name>"
MYSQL_PORT="<port>" # default port is 3306

```
Method 1: Load the `.env` file (the file must contain the credentials).
```shell
# In IPython / Jupyter, using the dotenv extension:
%load_ext dotenv
%dotenv
%dotenv relative/or/absolute/path/to/.env

# Or from the command line, using the dotenv CLI:
dotenv -f .env run -- python
```
Method 2:
```python
from semantic_ai.config import Settings
settings = Settings()
```
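As a rough illustration of what such settings loading amounts to (this is a minimal sketch, not the library's actual `Settings` class), configuration can be collected from environment variables, treating empty strings as "not configured" per the `.env` rule above:

```python
import os

# Minimal illustrative sketch, NOT semantic_ai.config.Settings:
# collect configuration from environment variables, skipping
# fields left empty (the "other fields put as empty" rule).
def load_config(env=os.environ):
    keys = (
        "CONNECTOR_TYPE", "INDEXER_TYPE", "LLM_MODEL",
        "FILE_DOWNLOAD_DIR_PATH", "EXTRACTED_DIR_PATH",
    )
    return {k: env[k] for k in keys if env.get(k)}

cfg = load_config({"CONNECTOR_TYPE": "sharepoint",
                   "INDEXER_TYPE": "qdrant",
                   "LLM_MODEL": ""})
```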

# Unstructured
### 1. Import the module
```python
import asyncio
import semantic_ai
```

### 2. Download files from the configured source, extract their content, and index the extracted data into the configured vector DB.
```python
await semantic_ai.download()
await semantic_ai.extract()
await semantic_ai.index()
```
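The `await` calls above assume a running event loop (e.g. IPython/Jupyter). In a plain script, the same three steps can be driven with `asyncio.run()`; the sketch below uses stub coroutines standing in for `semantic_ai.download()` and friends:

```python
import asyncio

# Stub coroutines standing in for semantic_ai.download(),
# semantic_ai.extract(), and semantic_ai.index().
async def download(): return "download"
async def extract(): return "extract"
async def index(): return "index"

async def pipeline():
    # The stages run sequentially: each one consumes the
    # previous stage's on-disk output.
    return [await step() for step in (download, extract, index)]

results = asyncio.run(pipeline())
```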
Once download, extraction, and indexing complete, answers can be generated from the indexed vector DB as follows.
### 3. Generate an answer from the indexed vector DB using the retrieval LLM.
```python
search_obj = await semantic_ai.search()
query = ""
search = await search_obj.generate(query)
```
If the job runs for a long time, you can monitor progress: the counts of processed and failed files, along with their filenames, are stored in text files under the 'EXTRACTED_DIR_PATH/meta' directory.
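A small monitoring sketch under stated assumptions (the meta-file names used here are hypothetical, not documented API): count the entries in the processed/failed lists, one filename per line.

```python
import tempfile
from pathlib import Path

# Illustrative only: "processed.txt" and "failed.txt" are assumed
# names for the lists written under EXTRACTED_DIR_PATH/meta.
def progress(meta_dir):
    counts = {}
    for name in ("processed.txt", "failed.txt"):  # hypothetical names
        path = Path(meta_dir) / name
        counts[name] = len(path.read_text().splitlines()) if path.exists() else 0
    return counts

with tempfile.TemporaryDirectory() as meta:
    (Path(meta) / "processed.txt").write_text("a.pdf\nb.docx\n")
    stats = progress(meta)
```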

### Example
The examples folder shows how to connect to a source and obtain a connection object. For example, the SharePoint connector:
```python
from semantic_ai.connectors import Sharepoint

CLIENT_ID = '<client_id>'  # sharepoint client id
CLIENT_SECRET = '<client_secret>'  # sharepoint client secret
TENANT_ID = '<tenant_id>'  # sharepoint tenant id
SCOPE = 'https://graph.microsoft.com/.default'  # scope
HOST_NAME = "<tenant_name>.sharepoint.com"  # for example 'contoso.sharepoint.com'

# Sharepoint object creation
connection = Sharepoint(
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
    tenant_id=TENANT_ID,
    host_name=HOST_NAME,
    scope=SCOPE
)
```
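For background (this is an illustrative sketch of the standard Microsoft identity flow, not semantic-ai's internal code), a connector like the one above typically obtains an app-only token via the OAuth2 client-credentials flow before calling Microsoft Graph:

```python
import urllib.parse
import urllib.request

# Build the standard client-credentials token request against the
# Microsoft identity platform. Sketch only; the library's actual
# auth code may differ.
def build_token_request(tenant_id, client_id, client_secret,
                        scope='https://graph.microsoft.com/.default'):
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }).encode()
    return urllib.request.Request(url, data=body, method="POST")

req = build_token_request("<tenant_id>", "<client_id>", "<client_secret>")
```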

# Structured

### 1. Import the module
```python
import asyncio
import semantic_ai
```

### 2. Connect to the database

#### Sqlite:
```python
from semantic_ai.connectors import Sqlite

file_path = '<database_file_path>'

sql = Sqlite(sql_path=file_path)
```
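At its core, a SQLite connection like the one above boils down to the standard library's `sqlite3` module (illustrative, not the library's internals):

```python
import sqlite3

# Open the database (":memory:" here for a self-contained demo;
# normally a file path) and run a query.
def run_query(db_path, sql):
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()

rows = run_query(":memory:", "SELECT 1 + 1")
```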

#### Mysql:
```python
from semantic_ai.connectors import Mysql

sql = Mysql(
    host='<host_name>',
    user='<user_name>',
    password='<password>',
    database='<database_name>',
    port=3306  # default MySQL port
)
```

### 3. Generate an answer from the database using the retrieval LLM.
```python
query = ""
search_obj = await semantic_ai.db_search(query=query)
```

## Run in the server
```shell
$ semantic_ai serve -f .env

INFO:     Loading environment from '.env'
INFO:     Started server process [43973]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
```
Open your browser at http://127.0.0.1:8000/semantic-ai

### Interactive API docs
Now go to http://127.0.0.1:8000/docs.
You will see the automatic interactive API documentation (provided by Swagger UI):
![Swagger UI](https://github.com/decisionfacts/semantic-ai/blob/master/docs/source/_static/images/img.png?raw=True)

            
