# indoxRagHelper

- **Name:** indoxRagHelper
- **Version:** 0.0.3
- **Summary:** Indox Retrieval Augmentation
- **Homepage:** https://github.com/osllmai/inDox/libs/indoxArcg
- **Author:** osllm.ai
- **Requires Python:** >=3.9
- **License:** AGPL-3.0
- **Keywords:** RAG, CAG, LLM, retrieval-augmented generation, machine learning, natural language processing, NLP, AI, deep learning, language models
- **Uploaded:** 2025-02-10 12:52:02

            <div style="text-align: center;">
    <h1>inDoxArcg</h1>
    <a href="https://github.com/osllmai/inDox/libs/IndoxArcg">
        <img src="https://readme-typing-svg.demolab.com?font=Georgia&size=16&duration=3000&pause=500&multiline=true&width=700&height=100&lines=inDoxArcg;Cache-Augmented+and+Retrieval-Augmented+Generative+%7C+Open+Source;Copyright+%C2%A9%EF%B8%8F+OSLLAM.ai" alt="Typing SVG" style="margin-top: 20px;"/>
    </a>
</div>

---

[![License](https://img.shields.io/github/license/osllmai/inDoxArcg)](https://github.com/osllmai/inDox/blob/master/LICENSE)
[![PyPI](https://badge.fury.io/py/IndoxArcg.svg)](https://pypi.org/project/IndoxArcg/0.0.3/)
[![Python](https://img.shields.io/pypi/pyversions/IndoxArcg.svg)](https://pypi.org/project/IndoxArcg/0.0.3/)
[![Downloads](https://static.pepy.tech/badge/IndoxArcg)](https://pepy.tech/project/IndoxArcg)

[![Discord](https://img.shields.io/discord/1223867382460579961?label=Discord&logo=Discord&style=social)](https://discord.com/invite/ossllmai)
[![GitHub stars](https://img.shields.io/github/stars/osllmai/inDoxArcg?style=social)](https://github.com/osllmai/inDoxArcg)

<p align="center">
  <a href="https://osllm.ai">Official Website</a> &bull; <a href="https://docs.osllm.ai/index.html">Documentation</a> &bull; <a href="https://discord.gg/qrCc56ZR">Discord</a>
</p>

<p align="center">
  <b>NEW:</b> <a href="https://docs.google.com/forms/d/1CQXJvxLUqLBSXnjqQmRpOyZqD6nrKubLz2WTcIJ37fU/prefill">Subscribe to our mailing list</a> for updates and news!
</p>

## Overview

**inDoxArcg** is a next-generation application designed for advanced document processing and retrieval augmentation. It offers two powerful pipelines:

1. **Cache-Augmented Generation (CAG)**: Enhances LLM responses by leveraging local caching, similarity search, and fallback mechanisms.
2. **Retrieval-Augmented Generation (RAG)**: Provides context-aware answers by retrieving relevant information from vector stores.

Key features include multi-query retrieval, smart validation, web search fallback, and customizable similarity search algorithms.

---

## Features

The **inDoxArcg** application offers two powerful pipelines designed to optimize the use of large language models (LLMs) and enhance document retrieval capabilities. These pipelines provide flexibility and adaptability to meet diverse use cases:

### Cache-Augmented Generation (CAG) Pipeline

- **Multi-query retrieval**: Expands single queries into multiple related queries.
- **Smart retrieval**: Validates retrieved context for relevance and screens for hallucinated content.
- **Web search fallback**: Uses DuckDuckGo when local cache is insufficient.
- **Customizable similarity search**: Supports TF-IDF, BM25, and Jaccard similarity algorithms.
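
To make these options concrete, below is a minimal, library-independent sketch of two of the supported measures, Jaccard and TF-IDF cosine similarity (BM25 follows a similar pattern). These are illustrative textbook implementations, not the ones shipped in indoxArcg.

```python
# Minimal sketches of two similarity measures named above.
# Illustrative only; not the indoxArcg implementations.
import math
from collections import Counter


def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over word sets: |A intersect B| / |A union B|."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    union = sa | sb
    return len(sa & sb) / len(union) if union else 0.0


def tfidf_cosine(query: str, docs: list[str]) -> list[float]:
    """Cosine similarity of the query against each doc under simple TF-IDF weights.

    Assumes non-empty texts.
    """
    corpus = [d.lower().split() for d in docs] + [query.lower().split()]
    n = len(corpus)
    df = Counter(t for toks in corpus for t in set(toks))  # document frequency

    def vec(toks: list[str]) -> dict[str, float]:
        tf = Counter(toks)
        return {t: (c / len(toks)) * math.log(n / df[t]) for t, c in tf.items()}

    qv = vec(corpus[-1])
    scores = []
    for toks in corpus[:-1]:
        dv = vec(toks)
        dot = sum(w * dv.get(t, 0.0) for t, w in qv.items())
        norm = math.hypot(*qv.values()) * math.hypot(*dv.values())
        scores.append(dot / norm if norm else 0.0)
    return scores
```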

### Retrieval-Augmented Generation (RAG) Pipeline

The **Retrieval-Augmented Generation (RAG) Pipeline** is designed to provide highly accurate and contextually aware answers by retrieving relevant documents from a vector store. For example, if you want to answer the question, "What are the health benefits of green tea?", the pipeline will:

1. Search for relevant articles or documents in the vector store.
2. Validate the retrieved context for relevance and accuracy.
3. Generate a detailed answer using the Large Language Model (LLM) based on the retrieved context.

This makes RAG particularly suitable for scenarios requiring:

- **Research and Academia:** Retrieving specific scientific studies or historical data.
- **Customer Support:** Answering customer queries by extracting relevant data from a knowledge base.
- **Legal and Compliance:** Providing precise answers using legal documents or compliance guidelines.

Key capabilities of the RAG pipeline:

- **Standard retrieval**: Uses vector similarity search.
- **Context clustering**: Organizes retrieved context for enhanced usability.
- **Advanced querying**: Offers options like multi-query expansion and smart validation.
- **Web fallback**: Ensures high-quality results with external web searches when needed.
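
To make the three steps above concrete, here is a minimal, library-independent sketch of the retrieve-validate-generate loop. The `search`, `grade_relevance`, and `generate` callables are hypothetical placeholders, not the indoxArcg API; the real API appears under Usage Examples below.

```python
# Schematic retrieve-validate-generate loop. `search`, `grade_relevance`,
# and `generate` are hypothetical callables, not indoxArcg functions.
from typing import Callable, List


def rag_answer(
    question: str,
    search: Callable[[str, int], List[str]],
    grade_relevance: Callable[[str, str], bool],
    generate: Callable[[str], str],
    top_k: int = 5,
) -> str:
    docs = search(question, top_k)                               # 1. retrieve candidates
    context = [d for d in docs if grade_relevance(question, d)]  # 2. validate relevance
    prompt = "Context:\n" + "\n\n".join(context) + f"\n\nQuestion: {question}"
    return generate(prompt)                                      # 3. generate grounded answer
```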

---

## Roadmap

| Feature               | Implemented | Description                                           |
| --------------------- | ----------- | ----------------------------------------------------- |
| **Model Support**     |             |                                                       |
| Ollama (e.g., Llama3) | ✅          | Local Embedding and LLM Models powered by Ollama      |
| HuggingFace           | ✅          | Local Embedding and LLM Models powered by HuggingFace |
| Google (e.g., Gemini) | ✅          | Embedding and Generation Models by Google             |
| OpenAI (e.g., GPT-4)  | ✅          | Embedding and Generation Models by OpenAI             |
| **API Model Support** |             |                                                       |
| OpenAI                | ✅          | Embedding and LLM Models from Indox API               |
| Mistral               | ✅          | Embedding and LLM Models from Indox API               |
| Anthropic             | ✅          | Embedding and LLM Models from Indox API               |

| Loader and Splitter      | Implemented | Description                                    |
| ------------------------ | ----------- | ---------------------------------------------- |
| Simple PDF               | ✅          | Import PDF files                               |
| UnstructuredIO           | ✅          | Import data through Unstructured               |
| Clustered Load And Split | ✅          | Adds a clustering layer to PDFs and text files |

| RAG Features          | Implemented | Description                                                  |
| --------------------- | ----------- | ------------------------------------------------------------ |
| Hybrid Search         | ✅          | Combines Semantic Search with Keyword Search                 |
| Semantic Caching      | ✅          | Saves and retrieves results based on semantic meaning        |
| Clustered Prompt      | ✅          | Retrieves smaller chunks and clusters for summarization      |
| Agentic RAG           | ✅          | Ranks context and performs web searches for reliable answers |
| Advanced Querying     | ✅          | Delegates tasks based on LLM evaluation                      |
| Reranking             | ✅          | Improves results by ranking based on context                 |
| Customizable Metadata | ❌          | Offers flexible control over metadata                        |

| Bonus Features        | Implemented | Description                 |
| --------------------- | ----------- | --------------------------- |
| Docker Support        | ❌          | Deployable via Docker       |
| Customizable Frontend | ❌          | Fully customizable frontend |

---

## Installation

Install the latest stable version:

```bash
pip install inDoxArcg
```

> **Note:** This package requires Python 3.9 or later. Please ensure you have the appropriate version installed before proceeding. Additionally, verify that you have `pip` updated to the latest version to avoid dependency issues.

### Setting Up Python Environment

1. Create a virtual environment:

   **Windows:**

   ```bash
   python -m venv indoxarcg_env
   ```

   **macOS/Linux:**

   ```bash
   python3 -m venv indoxarcg_env
   ```

2. Activate the virtual environment:

   **Windows:**

   ```bash
   indoxarcg_env\Scripts\activate
   ```

   **macOS/Linux:**

   ```bash
   source indoxarcg_env/bin/activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```
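
As an optional sanity check, you can confirm the installed distribution resolves inside the activated environment; the distribution name below is taken from the `pip install` command above.

```python
# Optional post-install check; run inside the activated virtual environment.
from importlib.metadata import version

print(version("inDoxArcg"))  # prints the installed version string
```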

---

## Usage Examples

### Cache-Augmented Generation (CAG)

The `indoxArcg` package emphasizes a modular design to provide flexibility and ease of use. The imports are structured to clearly separate functionalities such as LLMs, vector stores, data loaders, and pipelines. Below is an example of using the Cache-Augmented Generation (CAG) pipeline:

**Initialization:**

```python
from indoxArcg.llms import OpenAi
from indoxArcg.embeddings import OpenAiEmbedding
from indoxArcg.pipelines.cag import CAG, KVCache

# An LLM for generation, an embedding model for similarity search, and a
# key-value cache that holds the preloaded documents.
llm = OpenAi(api_key="your_openai_api_key")
embedding_model = OpenAiEmbedding(api_key="your_openai_api_key")
cache = KVCache()

cag = CAG(llm, embedding_model, cache)
```

**Preload Documents:**

```python
documents = ["Document 1 text...", "Document 2 text..."]
cag.preload_documents(documents, cache_key="my_cache")
```
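
Here `cache_key` names the preloaded document set; the same key is passed back at inference time, so multiple independently keyed caches can apparently be maintained side by side.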

**Inference:**

```python
query = "What is the capital of France?"
response = cag.infer(
    query=query,
    cache_key="my_cache",
    context_strategy="recent",
    context_turns=5,
    top_k=5,
    similarity_threshold=0.5,
    web_search=True,
    smart_retrieval=True,
)
print(response)
```
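
In this call, `top_k` caps how many cached chunks are considered and `similarity_threshold` filters out weak matches before generation; `web_search=True` enables the DuckDuckGo fallback described above, and `smart_retrieval=True` turns on context validation. The `context_strategy="recent"` and `context_turns=5` pair appears to limit conversational context to the last five turns, though exact semantics may vary by version.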

### Retrieval-Augmented Generation (RAG)

**Initialization:**

```python
from indoxArcg.pipelines.rag import RAG
from indoxArcg.llms import OpenAi
from indoxArcg.vector_stores import Chroma
from indoxArcg.embeddings import OpenAiEmbedding

llm = OpenAi(api_key="your_openai_api_key", model="gpt-3.5-turbo")
embedding = OpenAiEmbedding(api_key="your_openai_api_key")
# The keyword name for the embedding argument may differ by version.
db = Chroma(collection_name="your_collection_name", embedding=embedding)
rag = RAG(llm, db)
```

**Inference:**

```python
question = "What is the capital of France?"
response = rag.infer(
    question=question,
    top_k=5,
    use_clustering=True,
    use_multi_query=False,
    smart_retrieval=True,
)
print(response)
```
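
Here `use_clustering=True` applies the context-clustering feature to the retrieved chunks, `use_multi_query=False` disables multi-query expansion, and `smart_retrieval=True` enables validation of the retrieved context; see the Features section above for what each option does.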

---

## Contribution

We welcome contributions to improve inDoxArcg. Please refer to our [Contribution Guidelines](https://github.com/osllmai/inDox/blob/master/CONTRIBUTING.md) for detailed instructions on getting started, including setting up your development environment, submitting pull requests, and adhering to our code of conduct.

---

## License

This project is licensed under the AGPL-3.0 License. See the [LICENSE](https://github.com/osllmai/inDox/blob/master/LICENSE) file for details.

---

## Support

For questions, support, or feedback, join our [Discord](https://discord.com/invite/ossllmai) or contact us via [our website](https://osllm.ai).

            
