llama-index-packs-redis-ingestion-pipeline


Name: llama-index-packs-redis-ingestion-pipeline
Version: 0.1.3
Summary: llama-index packs redis_ingestion_pipeline integration
Upload time: 2024-02-22 01:33:18
Maintainer: logan-markewich
Author: Your Name
Requires Python: >=3.8.1,<4.0
License: MIT
Keywords: index, ingestion, pipeline, redis
Requirements: none recorded
# Redis Ingestion Pipeline Pack

This LlamaPack creates an [ingestion pipeline](https://docs.llamaindex.ai/en/stable/module_guides/loading/ingestion_pipeline/root.html), with both its cache and its vector store backed by Redis.

## CLI Usage

You can download LlamaPacks directly using `llamaindex-cli`, which is installed with the `llama-index` Python package:

```bash
llamaindex-cli download-llamapack RedisIngestionPipelinePack --download-dir ./redis_ingestion_pack
```

You can then inspect the files at `./redis_ingestion_pack` and use them as a template for your own project!

## Code Usage

You can download the pack to a `./redis_ingestion_pack` directory:

```python
from llama_index.core.llama_pack import download_llama_pack

# download and install dependencies
RedisIngestionPipelinePack = download_llama_pack(
    "RedisIngestionPipelinePack", "./redis_ingestion_pack"
)
```

From here, you can use the pack, or inspect and modify the pack in `./redis_ingestion_pack`.
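
Because the pack is also published on PyPI (this package), you can alternatively install it with `pip install llama-index-packs-redis-ingestion-pipeline` and import the class directly instead of downloading the source. The import path below follows the usual `llama_index.packs.*` naming convention and should be treated as an assumption:

```python
# assumes `pip install llama-index-packs-redis-ingestion-pipeline` has been run;
# the module path follows the standard llama_index.packs.* convention
from llama_index.packs.redis_ingestion_pipeline import RedisIngestionPipelinePack
```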

Then, you can set up the pack like so:

```python
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.openai import OpenAIEmbedding  # needs llama-index-embeddings-openai

transformations = [SentenceSplitter(), OpenAIEmbedding()]

# create the pack
ingest_pack = RedisIngestionPipelinePack(
    transformations,
    hostname="localhost",
    port=6379,
    cache_collection_name="ingest_cache",
    vector_collection_name="vector_store",
)
```

The pack's `run()` function is a thin wrapper around `pipeline.run()`.

You can use it to ingest data and then create an index from the vector store:

```python
from llama_index.core import VectorStoreIndex

ingest_pack.run(documents)

index = VectorStoreIndex.from_vector_store(ingest_pack.vector_store)
```
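
For a more concrete sketch, the snippet below builds a couple of placeholder `Document` objects, ingests them through the pack, and queries the resulting index. It assumes a Redis server is reachable at the host and port configured above and that an LLM is configured for querying (OpenAI by default, via `OPENAI_API_KEY`); the sample texts and the question are illustrative only:

```python
from llama_index.core import Document

# placeholder documents; in practice these would come from a reader/loader
documents = [
    Document(text="Redis is an in-memory key-value store."),
    Document(text="LlamaIndex ingestion pipelines apply transformations to documents."),
]

# ingest through the pack (cached and stored in Redis)
ingest_pack.run(documents)

# query the index built on top of the Redis-backed vector store
query_engine = index.as_query_engine()
print(query_engine.query("What is Redis?"))
```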

You can learn more about the ingestion pipeline at the [LlamaIndex documentation](https://docs.llamaindex.ai/en/stable/module_guides/loading/ingestion_pipeline/root.html).

            
