prebuilt-RAG-LU

- **Name**: prebuilt-RAG-LU
- **Version**: 1.0.2
- **Home page**: https://github.com/mehrdadalmasi2020/prebuilt_RAG_LU
- **Summary**: A library for building Retrieval-Augmented Generation (RAG) systems using ChromaDB and popular language models (LLMs).
- **Upload time**: 2024-10-25 16:05:31
- **Author**: Mehrdad ALMASI, Demival VASQUES FILHO
- **Requires Python**: >=3.6
- **Keywords**: RAG, Retrieval-Augmented Generation, transformers, ChromaDB, fine-tuning
- **Requirements**: No requirements were recorded.
# prebuilt_RAG_LU

## Overview
`prebuilt_RAG_LU` is a Python library designed to facilitate **Retrieval-Augmented Generation (RAG)** workflows. It provides a streamlined interface for embedding generation, vector-based document retrieval using **ChromaDB**, and integration with popular language models (LLMs) to generate human-readable responses based on retrieved documents.

### Features:
- **Embedding Generation**: Supports multiple pre-trained models from Hugging Face for generating vector embeddings, including multilingual and lightweight models.
- **Vector Database Management**: Store, query, and delete vector-based document embeddings using **ChromaDB**.
- **LLM Integration**: Generate text responses using state-of-the-art models like **GPT-2**, **T5**, and the high-performance **Mistral-7B-v0.3**.

## Installation
Install the package together with its core dependencies using the following command:

```bash
pip install prebuilt-RAG-LU transformers chromadb
```

## Usage

Here is how you can use `prebuilt_RAG_LU` for embedding generation, document retrieval, and final response generation using LLMs.

### 1. Embedding Generation
Use the `EmbeddingGenerator` class to generate vector embeddings for text. You can choose from several pre-trained models:

```python
from prebuilt_RAG_LU import EmbeddingGenerator, VectorDatabase, LLMGenerator, Config

# Initialize configuration
config = Config()

# Initialize the embedding generator (or choose another model from the list below)
embedding_generator = EmbeddingGenerator(model_name="xlm-roberta-large")
```
### Available Models for Embedding Generation

#### Lightweight Models:
- `distilbert-base-uncased`
- `microsoft/MiniLM-L12-H384-uncased`

#### Heavyweight Models:
- `bert-base-uncased`
- `sentence-transformers/all-MiniLM-L6-v2`

#### Multilingual Models:
- `xlm-roberta-large` (1024 dimensions)
- `bert-base-multilingual-cased`
- `sentence-transformers/LaBSE`
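Embedding dimensionality varies by model (for example, `xlm-roberta-large` produces 1024-dimensional vectors), and documents and queries must be embedded with the same model so that all vectors in a collection share one dimensionality. A minimal stdlib sketch of the kind of sanity check you might run before upserting (the helper name is illustrative, not part of the library):

```python
def check_embedding_dims(embeddings):
    """Raise if the embeddings do not all share a single dimensionality."""
    dims = {len(vec) for vec in embeddings}
    if len(dims) != 1:
        raise ValueError(f"Inconsistent embedding dimensions: {sorted(dims)}")
    return dims.pop()

# Example: three 4-dimensional vectors pass the check
dim = check_embedding_dims([
    [0.1, 0.2, 0.3, 0.4],
    [0.5, 0.6, 0.7, 0.8],
    [0.9, 1.0, 1.1, 1.2],
])
print(dim)  # 4
```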

---

### 2. Vector Database Management
Once you have the embeddings, you can store and manage them in ChromaDB using the `VectorDatabase` class:

```python
vector_db = VectorDatabase()
llm_generator = LLMGenerator(model_name=config.model_name, token=config.user_token)

# Insert documents into the vector database
documents = [
    {"id": "doc1", "text": "AI is revolutionizing the food industry by analyzing nutritional data and recommending healthier meal options tailored to individual dietary needs. This helps people make better food choices."},
    {"id": "doc2", "text": "Artificial Intelligence enables personalized nutrition plans based on health data such as age, weight, and medical history, ensuring individuals receive optimal nutrients for a balanced, healthy lifestyle."},
    {"id": "doc3", "text": "AI applications in healthcare extend to monitoring dietary habits and suggesting improvements, helping people adopt healthier eating patterns, reduce stress, and improve overall quality of life."},
    {"id": "doc4", "text": "With AI-driven apps, users can track their calorie intake, monitor nutrient levels, and receive real-time suggestions for healthier food alternatives, helping them achieve their fitness and health goals."},
    {"id": "doc5", "text": "AI-powered health assistants can provide instant feedback on meal choices and suggest healthier recipes, helping individuals maintain a balanced diet while avoiding unhealthy foods."},
    {"id": "doc6", "text": "AI in food science is being used to develop plant-based food alternatives that mimic meat but offer healthier options for consumers, reducing reliance on animal-based products and promoting sustainability."},
    {"id": "doc7", "text": "Through AI, food companies are able to optimize their product lines by analyzing consumer data and creating healthier snack and meal options that cater to different dietary preferences and health concerns."},
    {"id": "doc8", "text": "AI-enabled wearable devices can track a person's activity levels, heart rate, and caloric burn, then adjust food recommendations accordingly, ensuring the body receives the energy it needs to function optimally."},
    {"id": "doc9", "text": "AI helps detect harmful ingredients in processed foods by analyzing chemical compositions and flagging unhealthy additives, empowering consumers to make more informed food choices."},
    {"id": "doc10", "text": "AI can predict future health risks by analyzing a person's diet and lifestyle patterns, then recommend proactive dietary changes that reduce the risk of diseases such as diabetes and heart disease."}
]


# Generate embeddings for the documents and upsert them into the vector database
doc_ids = [doc["id"] for doc in documents]
doc_embeddings = [embedding_generator.get_embedding(doc["text"]) for doc in documents]
doc_metadatas = [{"content": doc["text"]} for doc in documents]

vector_db.upsert_documents(ids=doc_ids, embeddings=doc_embeddings, metadatas=doc_metadatas)

# Embedding Generation for the Query
query_text = "How does AI contribute to healthy food choices and living a better life?"
query_embedding = embedding_generator.get_embedding(query_text)

# Document Retrieval based on the Query
retrieved_docs = vector_db.query_documents(query_embedding, n_results=3)
context = " ".join([doc["metadata"]["content"] for doc in retrieved_docs])

```
### Key Methods

- `upsert_documents(ids, embeddings, metadatas)`: Inserts or updates documents in the vector database.

- `query_documents(query_embedding, n_results=2)`: Retrieves the most similar documents based on the input embedding.

- `delete_document(doc_id)`: Deletes a document from the vector database.
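Under the hood, `query_documents` delegates similarity search to ChromaDB. As a rough illustration of the idea (a stdlib sketch, not the library's actual implementation), here is how ranking stored vectors by cosine similarity works:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def query(store, query_embedding, n_results=2):
    """store: dict of id -> embedding; returns the n_results most similar ids."""
    ranked = sorted(store,
                    key=lambda doc_id: cosine_similarity(store[doc_id], query_embedding),
                    reverse=True)
    return ranked[:n_results]

store = {
    "doc1": [1.0, 0.0, 0.0],
    "doc2": [0.0, 1.0, 0.0],
    "doc3": [0.9, 0.1, 0.0],
}
print(query(store, [1.0, 0.0, 0.0], n_results=2))  # ['doc1', 'doc3']
```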

### 3. LLM-Based Result Generation
After retrieving the relevant documents, use an LLM to generate a final response. The `LLMGenerator` class integrates several popular language models, including `Mistral-7B-v0.3`.

```python
# LLM-Based Generation with Retrieved Context
final_prompt = f"Based on the following information: {context}. Answer the question: {query_text}"
generated_text = llm_generator.generate_text(final_prompt)

print("Generated Text:", generated_text)

```
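The prompt above simply concatenates the retrieved context with the question. If you assemble prompts in several places, the template can be factored into a small helper (a hypothetical convenience function, not part of the library's API):

```python
def build_rag_prompt(context: str, question: str) -> str:
    """Combine retrieved context and the user question into one prompt string."""
    return f"Based on the following information: {context}. Answer the question: {question}"

prompt = build_rag_prompt(
    "AI helps track calorie intake.",
    "How does AI contribute to healthy food choices?",
)
print(prompt)
```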

### Available LLM Models for Text Generation

- **GPT-2** (lightweight): `gpt2`
- **EleutherAI GPT-Neo 1.3B** (medium scale): `EleutherAI/gpt-neo-1.3B`
- **Meta OPT 1.3B** (medium scale): `facebook/opt-1.3b`
- **Google Flan-T5 Base** (T5 variant): `google/flan-t5-base`
- **Mistral-7B-v0.3** (high-performance): `mistralai/Mistral-7B-v0.3`

---

## Full RAG Workflow Example
Here is how you can combine embedding generation, vector database management, and LLM-based result generation into a complete RAG workflow.

```python
from prebuilt_RAG_LU import EmbeddingGenerator, VectorDatabase, LLMGenerator, Config

# Initialize configuration
config = Config()

# Initialize components using values from the config instance
embedding_generator = EmbeddingGenerator(model_name="xlm-roberta-large")  # or another embedding model
vector_db = VectorDatabase()
llm_generator = LLMGenerator(model_name=config.model_name, token=config.user_token)

# 1. Insert Documents into the Vector Database
# Example documents to insert
documents = [
    {"id": "doc1", "text": "AI is revolutionizing the food industry by analyzing nutritional data and recommending healthier meal options tailored to individual dietary needs. This helps people make better food choices."},
    {"id": "doc2", "text": "Artificial Intelligence enables personalized nutrition plans based on health data such as age, weight, and medical history, ensuring individuals receive optimal nutrients for a balanced, healthy lifestyle."},
    {"id": "doc3", "text": "AI applications in healthcare extend to monitoring dietary habits and suggesting improvements, helping people adopt healthier eating patterns, reduce stress, and improve overall quality of life."},
    {"id": "doc4", "text": "With AI-driven apps, users can track their calorie intake, monitor nutrient levels, and receive real-time suggestions for healthier food alternatives, helping them achieve their fitness and health goals."},
    {"id": "doc5", "text": "AI-powered health assistants can provide instant feedback on meal choices and suggest healthier recipes, helping individuals maintain a balanced diet while avoiding unhealthy foods."},
    {"id": "doc6", "text": "AI in food science is being used to develop plant-based food alternatives that mimic meat but offer healthier options for consumers, reducing reliance on animal-based products and promoting sustainability."},
    {"id": "doc7", "text": "Through AI, food companies are able to optimize their product lines by analyzing consumer data and creating healthier snack and meal options that cater to different dietary preferences and health concerns."},
    {"id": "doc8", "text": "AI-enabled wearable devices can track a person���s activity levels, heart rate, and caloric burn, then adjust food recommendations accordingly, ensuring the body receives the energy it needs to function optimally."},
    {"id": "doc9", "text": "AI helps detect harmful ingredients in processed foods by analyzing chemical compositions and flagging unhealthy additives, empowering consumers to make more informed food choices."},
    {"id": "doc10", "text": "AI can predict future health risks by analyzing a person's diet and lifestyle patterns, then recommend proactive dietary changes that reduce the risk of diseases such as diabetes and heart disease."}
]

# Generate embeddings for the documents and upsert them into the vector database
doc_ids = [doc["id"] for doc in documents]
doc_embeddings = [embedding_generator.get_embedding(doc["text"]) for doc in documents]
doc_metadatas = [{"content": doc["text"]} for doc in documents]

vector_db.upsert_documents(ids=doc_ids, embeddings=doc_embeddings, metadatas=doc_metadatas)

# 2. Embedding Generation for the Query
query_text = "How does AI contribute to healthy food choices and living a better life?"
query_embedding = embedding_generator.get_embedding(query_text)

# 3. Document Retrieval based on the Query
retrieved_docs = vector_db.query_documents(query_embedding, n_results=3)
context = " ".join([doc["metadata"]["content"] for doc in retrieved_docs])

# 4. LLM-Based Generation with Retrieved Context
final_prompt = f"Based on the following information: {context}. Answer the question: {query_text}"
generated_text = llm_generator.generate_text(final_prompt)

print("Generated Text:", generated_text)

```
## Configuration

The library allows flexible configuration of models for embedding generation and text generation. You can modify the model selection based on your specific needs.

### Embedding Models:

- **Lightweight**:
  - `distilbert-base-uncased`
  - `microsoft/MiniLM-L12-H384-uncased`

- **Heavyweight**:
  - `bert-base-uncased`
  - `sentence-transformers/all-MiniLM-L6-v2`

- **Multilingual**:
  - `xlm-roberta-large`
  - `bert-base-multilingual-cased`
  - `sentence-transformers/LaBSE`

### LLM Models:

- **GPT-2**: `gpt2`
- **GPT-Neo 1.3B**: `EleutherAI/gpt-neo-1.3B`
- **OPT 1.3B**: `facebook/opt-1.3b`
- **Flan-T5 Base**: `google/flan-t5-base`
- **Mistral-7B-v0.3**: `mistralai/Mistral-7B-v0.3`
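The examples read `config.model_name` and `config.user_token` from a `Config` instance. For readers who want to keep such settings in one explicit place, a settings holder along these lines could be used (a hypothetical sketch mirroring the fields the examples read; the library's actual `Config` internals may differ):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RAGSettings:
    """Hypothetical settings holder mirroring the fields the examples read."""
    embedding_model: str = "xlm-roberta-large"
    llm_model: str = "gpt2"
    user_token: Optional[str] = None  # Hugging Face token, if the model requires one

# Override only the field you need; the rest keep their defaults
settings = RAGSettings(llm_model="google/flan-t5-base")
print(settings.llm_model)  # google/flan-t5-base
```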

## License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
