# NebulaGraph Query Engine Pack
This LlamaPack creates a NebulaGraph query engine and executes its `query` function. The pack can create several types of query engines, namely:
- Knowledge graph vector-based entity retrieval (default if no query engine type option is provided)
- Knowledge graph keyword-based entity retrieval
- Knowledge graph hybrid entity retrieval
- Raw vector index retrieval
- Custom combo query engine (vector similarity + KG entity retrieval)
- KnowledgeGraphQueryEngine
- KnowledgeGraphRAGRetriever
## CLI Usage
You can download LlamaPacks directly using `llamaindex-cli`, which is installed with the `llama-index` Python package:
```bash
llamaindex-cli download-llamapack NebulaGraphQueryEnginePack --download-dir ./nebulagraph_pack
```
You can then inspect the files at `./nebulagraph_pack` and use them as a template for your own project!
## Code Usage
You can download the pack to a `./nebulagraph_pack` directory:
```python
from llama_index.core.llama_pack import download_llama_pack
# download and install dependencies
NebulaGraphQueryEnginePack = download_llama_pack(
    "NebulaGraphQueryEnginePack", "./nebulagraph_pack"
)
```
From here, you can use the pack, or inspect and modify the pack in `./nebulagraph_pack`.
Then, you can set up the pack like so:
```python
import json

# Load the docs (example of the Paleo diet from Wikipedia)
from llama_index import download_loader

WikipediaReader = download_loader("WikipediaReader")
loader = WikipediaReader()
docs = loader.load_data(pages=["Paleolithic diet"], auto_suggest=False)
print(f"Loaded {len(docs)} documents")

# get NebulaGraph credentials (assumed to be stored in credentials.json)
with open("credentials.json") as f:
    nebulagraph_connection_params = json.load(f)
username = nebulagraph_connection_params["username"]
password = nebulagraph_connection_params["password"]
ip_and_port = nebulagraph_connection_params["ip_and_port"]

space_name = "paleo_diet"
edge_types, rel_prop_names = ["relationship"], ["relationship"]
tags = ["entity"]
max_triplets_per_chunk = 10

# create the pack
nebulagraph_pack = NebulaGraphQueryEnginePack(
    username=username,
    password=password,
    ip_and_port=ip_and_port,
    space_name=space_name,
    edge_types=edge_types,
    rel_prop_names=rel_prop_names,
    tags=tags,
    max_triplets_per_chunk=max_triplets_per_chunk,
    docs=docs,
)
```
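The setup above assumes a `credentials.json` file sits next to your script. A minimal sketch of its expected shape, written out with the standard library (the host, user, and password values below are placeholders, not real defaults):

```python
import json

# Write a sample credentials.json with the three keys the setup code reads.
# The values are placeholders -- replace them with your NebulaGraph deployment's details.
sample_credentials = {
    "username": "root",
    "password": "nebula",
    "ip_and_port": "127.0.0.1:9669",
}

with open("credentials.json", "w") as f:
    json.dump(sample_credentials, f, indent=2)

# Round-trip check: the setup code can now load the same three keys.
with open("credentials.json") as f:
    loaded = json.load(f)
print(loaded["ip_and_port"])  # 127.0.0.1:9669
```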
Optionally, you can pass a `query_engine_type` from `NebulaGraphQueryEngineType` when constructing `NebulaGraphQueryEnginePack`. If `query_engine_type` is not provided, it defaults to knowledge graph vector-based entity retrieval.
```python
from llama_index.packs.nebulagraph_query_engine.base import (
NebulaGraphQueryEngineType,
)
# create the pack
nebulagraph_pack = NebulaGraphQueryEnginePack(
    username=username,
    password=password,
    ip_and_port=ip_and_port,
    space_name=space_name,
    edge_types=edge_types,
    rel_prop_names=rel_prop_names,
    tags=tags,
    max_triplets_per_chunk=max_triplets_per_chunk,
    docs=docs,
    query_engine_type=NebulaGraphQueryEngineType.KG_HYBRID,
)
```
`NebulaGraphQueryEngineType` is an enum defined as follows:
```python
from enum import Enum


class NebulaGraphQueryEngineType(str, Enum):
    """NebulaGraph query engine type."""

    KG_KEYWORD = "keyword"
    KG_HYBRID = "hybrid"
    RAW_VECTOR = "vector"
    RAW_VECTOR_KG_COMBO = "vector_kg"
    KG_QE = "KnowledgeGraphQueryEngine"
    KG_RAG_RETRIEVER = "KnowledgeGraphRAGRetriever"
```
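Because the enum subclasses `str`, each member compares equal to its plain string value, so a string from a config file can be mapped straight to an engine type. A standalone sketch mirroring the definition above:

```python
from enum import Enum


class NebulaGraphQueryEngineType(str, Enum):
    """Mirror of the pack's enum, reproduced here so the example is self-contained."""

    KG_KEYWORD = "keyword"
    KG_HYBRID = "hybrid"
    RAW_VECTOR = "vector"
    RAW_VECTOR_KG_COMBO = "vector_kg"
    KG_QE = "KnowledgeGraphQueryEngine"
    KG_RAG_RETRIEVER = "KnowledgeGraphRAGRetriever"


# str-subclassing means members compare equal to plain strings...
assert NebulaGraphQueryEngineType.KG_HYBRID == "hybrid"

# ...and a config string can be converted back into a member by value.
engine_type = NebulaGraphQueryEngineType("vector_kg")
print(engine_type.name)  # RAW_VECTOR_KG_COMBO
```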
The `run()` function is a light wrapper around `query_engine.query()`; see a sample query below.
```python
response = nebulagraph_pack.run("Tell me about the benefits of paleo diet.")
```
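Conceptually, `run()` just forwards its arguments to the underlying query engine. A minimal sketch of that delegation pattern (`FakeQueryEngine` and `PackSketch` below are illustrative stand-ins, not the pack's actual classes):

```python
class FakeQueryEngine:
    """Stand-in for the real query engine, for illustration only."""

    def query(self, query_str: str) -> str:
        return f"answer to: {query_str}"


class PackSketch:
    """Sketch of the pack's run(): a thin pass-through to query_engine.query()."""

    def __init__(self) -> None:
        self.query_engine = FakeQueryEngine()

    def run(self, *args, **kwargs):
        # run() simply delegates to the underlying query engine
        return self.query_engine.query(*args, **kwargs)


pack = PackSketch()
print(pack.run("Tell me about the benefits of paleo diet."))
# answer to: Tell me about the benefits of paleo diet.
```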
You can also use modules individually.
```python
# call the query_engine.query()
query_engine = nebulagraph_pack.query_engine
response = query_engine.query("query_str")
```