duohub

- Name: duohub
- Version: 0.13.3
- Home page: https://github.com/duohub-ai/duohub-py
- Summary: duohub retriever package for querying memories
- Upload time: 2024-10-19 14:35:39
- Author: Oseh Mathias
- Requires Python: <4.0,>=3.12
- License: ISC
- Keywords: duohub, graphrag, voiceai, rag, retrieval, chatbot
            # duohub GraphRAG python client

![PyPI version](https://img.shields.io/pypi/v/duohub.svg)

This is a Python client for the Duohub API.

Duohub is a blazing fast graph RAG service designed for voice AI and other low-latency applications. It retrieves memory from your knowledge graph in under 50ms.

You will need an API key to use the client. You can get one by signing up on the [Duohub app](https://app.duohub.ai). For more information, visit our website: [duohub.ai](https://duohub.ai).

## Table of Contents

- [duohub GraphRAG python client](#duohub-graphrag-python-client)
  - [Table of Contents](#table-of-contents)
  - [Installation](#installation)
  - [Usage](#usage)
    - [Options](#options)
    - [Default Mode - Voice AI Compatible](#default-mode---voice-ai-compatible)
      - [Default Mode Response](#default-mode-response)
    - [Assisted Queries - Voice AI Compatible](#assisted-queries---voice-ai-compatible)
      - [Assisted Mode Results](#assisted-mode-results)
    - [Fact Queries](#fact-queries)
      - [Fact Query Response](#fact-query-response)
    - [Combining Options](#combining-options)
      - [Combining Options Response](#combining-options-response)
  - [Contributing](#contributing)

## Installation

```bash
pip install duohub
```

or 

```bash
poetry add duohub
```

## Usage

Basic usage is as follows:

```python
from duohub import Duohub
client = Duohub(api_key="your_api_key")
response = client.query(query="What is the capital of France?", memoryID="your_memory_id")
print(response)
```

Output schema is as follows:  

```json
{
  "payload": "string",
  "facts": [
    {
      "content": "string"
    }
  ],
  "token_count": 0
}
```

Token count is the number of tokens in the graph context. Regardless of mode, you will get the same token count for the same query and memory ID on a given graph.
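Assuming the client returns this schema as a plain Python dict, you might unpack it like so (a sketch; the field names follow the schema above, and the sample values are illustrative):

```python
# Sketch: unpacking a duohub response, assuming it arrives as a plain dict
# shaped like the schema above. The sample values here are illustrative.
response = {
    "payload": "subgraph_content",
    "facts": [{"content": "Paris is the capital of France."}],
    "token_count": 42,
}

context = response["payload"]                      # graph context string
facts = [f["content"] for f in response["facts"]]  # flat list of fact strings
budget_used = response["token_count"]              # tokens in the graph context
```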

### Options

- `facts`: Whether to return facts in the response. Defaults to `False`.
- `assisted`: Whether to return an answer in the response. Defaults to `False`.
- `query`: The query to search the graph with.
- `memoryID`: The memory ID to isolate your search results to.

### Default Mode - Voice AI Compatible

When you only pass a query and memory ID, you are using default mode. This is the fastest option, and most single sentence queries will get a response in under 50ms. 


```python
from duohub import Duohub

client = Duohub(api_key="your_api_key")

response = client.query(query="What is the capital of France?", memoryID="your_memory_id")

print(response)
```

#### Default Mode Response

Your response (located in `payload`) is a string representation of the subgraph relevant to your query. You can pass this to your model's context window using a system message and user message template.
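One way to wire the payload into a prompt is sketched below. The OpenAI-style `role`/`content` message shape and the `graph_context` sample string are assumptions for illustration, not part of the duohub API:

```python
# Sketch: injecting the default-mode payload into an LLM prompt.
# `graph_context` stands in for response["payload"]; the OpenAI-style
# chat message shape is an assumption here.
graph_context = "France -[capital]-> Paris"  # illustrative subgraph string

def build_messages(graph_context: str, user_query: str) -> list[dict]:
    """Build a system + user message pair carrying the graph context."""
    system = (
        "Answer using the knowledge-graph context below.\n"
        f"Context:\n{graph_context}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_query},
    ]

messages = build_messages(graph_context, "What is the capital of France?")
```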

### Assisted Queries - Voice AI Compatible

If you pass `assisted=True` to the client, the API adds a reasoning step to your query and uses the graph context to return an answer. Assisted mode adds some latency, though queries should still complete in under 250ms.

Using assisted mode can improve your chatbot's results: irrelevant information is eliminated before it reaches your context window, preventing your LLM from attending to noise in the graph results.

```python
from duohub import Duohub

client = Duohub(api_key="your_api_key")

response = client.query(query="What is the capital of France?", memoryID="your_memory_id", assisted=True)

print(response)
``` 

#### Assisted Mode Results

Assisted mode results will be a JSON object with the following structure:

```json
{
    "payload": "The capital of France is Paris.",
    "facts": [],
    "token_count": 100
}
```

### Fact Queries 

If you pass `facts=True` to the client, the API will return a list of facts that are relevant to your query. This is useful if you want to pass the results to another model for deeper reasoning.

Because fact queries have higher latency than default or assisted mode, we recommend against using them in voice AI or other low-latency applications.

They are better suited to chatbot workflows and other applications that do not require real-time responses.

```python
from duohub import Duohub

client = Duohub(api_key="your_api_key")

response = client.query(query="What is the capital of France?", memoryID="your_memory_id", facts=True)

print(response)
```

#### Fact Query Response

Your response (located in `facts`) will be a list of facts that are relevant to your query.

```json
{
  "payload": "subgraph_content",
  "facts": [
    {
      "content": "Paris is the capital of France."
    },
    {
      "content": "Paris is a city in France."
    },
    {
      "content": "France is a country in Europe."
    }
  ],
  "token_count": 100
}
```
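The `facts` list can be flattened into a context block for a downstream reasoning model. A minimal sketch, using illustrative response values shaped like the example above:

```python
# Sketch: turning a fact-query response into a bullet list for a
# downstream reasoning model. Response values are illustrative.
response = {
    "payload": "subgraph_content",
    "facts": [
        {"content": "Paris is the capital of France."},
        {"content": "Paris is a city in France."},
    ],
    "token_count": 100,
}

fact_block = "\n".join(f"- {f['content']}" for f in response["facts"])
print(fact_block)
```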

### Combining Options

You can combine options to tailor the response. For example, you can request both facts and an assisted answer:

```python
from duohub import Duohub

client = Duohub(api_key="your_api_key")

response = client.query(query="What is the capital of France?", memoryID="your_memory_id", facts=True, assisted=True)

print(response)
```

#### Combining Options Response

Your response will be a JSON object with the following structure:

```json
{
  "payload": "Paris is the capital of France.",
  "facts": [
    {
      "content": "Paris is the capital of France."
    },
    {
      "content": "Paris is a city in France."
    },
    {
      "content": "France is a country in Europe."
    }
  ],
  "token_count": 100
}
```



## Contributing

We welcome contributions to this client! Please feel free to submit a PR. If you encounter any issues, please open an issue.
            
