# Redis Ingestion Pipeline Pack
This LlamaPack creates an [ingestion pipeline](https://docs.llamaindex.ai/en/stable/module_guides/loading/ingestion_pipeline/root.html) with both a cache and a vector store backed by Redis.
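The pack assumes a Redis instance is reachable (by default at `localhost:6379`). For local development, one way to get a suitable instance is Docker; the command below is a suggested setup rather than an official requirement, using Redis Stack since the Redis vector store relies on its search module:

```bash
# Start a local Redis Stack container (bundles the RediSearch module
# used by the Redis vector store); adjust ports/names for your setup.
docker run -d --name redis-stack -p 6379:6379 redis/redis-stack:latest
```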
## CLI Usage
You can download LlamaPacks directly using `llamaindex-cli`, which comes installed with the `llama-index` Python package:
```bash
llamaindex-cli download-llamapack RedisIngestionPipelinePack --download-dir ./redis_ingestion_pack
```
You can then inspect the files at `./redis_ingestion_pack` and use them as a template for your own project!
## Code Usage
You can download the pack to a `./redis_ingestion_pack` directory:
```python
from llama_index.core.llama_pack import download_llama_pack
# download and install dependencies
RedisIngestionPipelinePack = download_llama_pack(
    "RedisIngestionPipelinePack", "./redis_ingestion_pack"
)
```
From here, you can use the pack directly, or inspect and modify its source in `./redis_ingestion_pack`.
Then, you can set up the pack like so:
```python
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.openai import OpenAIEmbedding
transformations = [SentenceSplitter(), OpenAIEmbedding()]
# create the pack
ingest_pack = RedisIngestionPipelinePack(
    transformations,
    hostname="localhost",
    port=6379,
    cache_collection_name="ingest_cache",
    vector_collection_name="vector_store",
)
```
The `run()` function is a light wrapper around `pipeline.run()`.
You can use this to ingest data and then create an index from the vector store.
```python
from llama_index.core import Document, VectorStoreIndex

documents = [Document(text="hello world")]  # replace with your own data

ingest_pack.run(documents)
index = VectorStoreIndex.from_vector_store(ingest_pack.vector_store)
```
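From there, the index behaves like any other LlamaIndex index, and repeated `run()` calls on the same documents should hit the Redis-backed cache rather than recomputing the transformations. As a quick sketch of querying the result (this assumes an `OPENAI_API_KEY` is set for the default LLM):

```python
# Query the index built on top of the Redis vector store.
query_engine = index.as_query_engine()
response = query_engine.query("What do my documents say?")
print(response)
```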
You can learn more about the ingestion pipeline at the [LlamaIndex documentation](https://docs.llamaindex.ai/en/stable/module_guides/loading/ingestion_pipeline/root.html).