llama-index-readers-lilac


Name: llama-index-readers-lilac
Version: 0.2.0
Home page: None
Summary: llama-index readers lilac integration
Upload time: 2024-08-22 06:27:52
Maintainer: nsthorat
Docs URL: None
Author: Your Name
Requires Python: <4.0,>=3.9
License: MIT
Keywords: None
Requirements: No requirements were recorded.
# Lilac reader

```bash
# The arXiv papers reader is only needed for the example below.
pip install llama-index-readers-papers

pip install llama-index-readers-lilac
```

[Lilac](https://lilacml.com/) is an open-source product that helps you analyze, enrich, and clean unstructured data with AI.

It can be used to analyze, clean, structure, and label data for downstream LlamaIndex and LangChain applications.

## Lilac projects

This assumes you've already run Lilac locally and have a project directory with a dataset. For more details on Lilac projects, see [Lilac Projects](https://lilacml.com/projects/projects.html).
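
If you don't have a project directory yet, the sketch below shows one way to create one before loading documents. It assumes Lilac is installed (e.g. via `pip install lilac`) and uses `./data` as an arbitrary local path:

```python
import lilac as ll

# Create a fresh Lilac project directory; the examples below assume
# this (or an equivalent existing project) is already in place.
ll.init(project_dir="./data")
```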

You can use any LlamaIndex loader to load data into Lilac, clean data, and then bring it back into LlamaIndex Documents.

## Usage

### LlamaIndex => Lilac

See [this notebook](https://github.com/lilacai/lilac/blob/main/notebooks/LlamaIndexLoader.ipynb) for getting data into Lilac from LlamaHub.

```python
import lilac as ll

# See: https://llamahub.ai/l/papers-arxiv
from llama_index.readers.papers import ArxivReader

loader = ArxivReader()
documents = loader.load_data(search_query="au:Karpathy")

# Set the project directory for Lilac.
ll.set_project_dir("./data")

# This assumes you already have a lilac project set up.
# If you don't, use ll.init(project_dir='./data')
ll.create_dataset(
    config=ll.DatasetConfig(
        namespace="local",
        name="arxiv-karpathy",
        source=ll.LlamaIndexDocsSource(
            # documents comes from the loader.load_data call in the previous cell.
            documents=documents
        ),
    )
)

# Start a Lilac server to explore and clean the dataset.
# Once you've cleaned it, you can bring it back into LlamaIndex (see below).
ll.start_server(project_dir="./data")
```

### Lilac => LlamaIndex Documents

```python
from llama_index.core import VectorStoreIndex

from llama_index.readers.lilac import LilacReader

loader = LilacReader()
documents = loader.load_data(
    project_dir="~/my_project",
    # The name of your dataset in the project dir.
    dataset="local/arxiv-karpathy",
)

index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
query_engine.query("How are ImageNet labels validated?")
```

This loader is designed to load data into [LlamaIndex](https://github.com/run-llama/llama_index) and/or to be used subsequently in a [LangChain](https://github.com/langchain-ai/langchain) agent.
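
As a rough sketch of the LangChain direction (the tool name, description, and agent setup below are illustrative assumptions rather than part of this package, and the import paths may differ across LangChain versions):

```python
from langchain.agents import AgentType, initialize_agent
from langchain.llms import OpenAI
from langchain.tools import Tool

# Wrap the query engine built from the Lilac-cleaned documents above
# as a tool the agent can call.
query_engine = index.as_query_engine()

arxiv_tool = Tool(
    name="arxiv_karpathy_papers",  # illustrative name
    func=lambda q: str(query_engine.query(q)),
    description="Answers questions about the cleaned arXiv dataset.",
)

agent = initialize_agent(
    tools=[arxiv_tool],
    llm=OpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

agent.run("How are ImageNet labels validated?")
```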

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "llama-index-readers-lilac",
    "maintainer": "nsthorat",
    "docs_url": null,
    "requires_python": "<4.0,>=3.9",
    "maintainer_email": null,
    "keywords": null,
    "author": "Your Name",
    "author_email": "you@example.com",
    "download_url": "https://files.pythonhosted.org/packages/9f/21/a1c691b20c2681053bb800bbef8274d267e513f06c23ad54eb15c40f84a7/llama_index_readers_lilac-0.2.0.tar.gz",
    "platform": null,
    "description": "# Lilac reader\n\n```bash\npip install llama-index-readers-papers\n\npip install llama-index-readers-lilac\n```\n\n[Lilac](https://lilacml.com/) is an open-source product that helps you analyze, enrich, and clean unstructured data with AI.\n\nIt can be used to analyze, clean, structure, and label data that can be used in downstream LlamaIndex and LangChain applications.\n\n## Lilac projects\n\nThis assumes you've already run Lilac locally, and have a project directory with a dataset. For more details on Lilac projects, see [Lilac Projects](https://lilacml.com/projects/projects.html)\n\nYou can use any LlamaIndex loader to load data into Lilac, clean data, and then bring it back into LlamaIndex Documents.\n\n## Usage\n\n### LlamaIndex => Lilac\n\nSee [this notebook](https://github.com/lilacai/lilac/blob/main/notebooks/LlamaIndexLoader.ipynb) for getting data into Lilac from LlamaHub.\n\n```python\nimport lilac as ll\n\n# See: https://llamahub.ai/l/papers-arxiv\nfrom llama_index.readers.papers import ArxivReader\n\nloader = ArxivReader()\ndocuments = loader.load_data(search_query=\"au:Karpathy\")\n\n# Set the project directory for Lilac.\nll.set_project_dir(\"./data\")\n\n# This assumes you already have a lilac project set up.\n# If you don't, use ll.init(project_dir='./data')\nll.create_dataset(\n    config=ll.DatasetConfig(\n        namespace=\"local\",\n        name=\"arxiv-karpathy\",\n        source=ll.LlamaIndexDocsSource(\n            # documents comes from the loader.load_data call in the previous cell.\n            documents=documents\n        ),\n    )\n)\n\n# You can start a lilac server with. Once you've cleaned the dataset, you can come back into GPTIndex.\nll.start_server(project_dir=\"./data\")\n```\n\n### Lilac => LlamaIndex Documents\n\n```python\nfrom llama_index.core import VectorStoreIndex, download_loader\n\nfrom llama_index.readers.lilac import LilacReader\n\nloader = LilacReader()\ndocuments = loader.load_data(\n    project_dir=\"~/my_project\",\n    # The name of your dataset in the project dir.\n    dataset=\"local/arxiv-karpathy\",\n)\n\nindex = VectorStoreIndex.from_documents(documents)\n\nindex.query(\"How are ImageNet labels validated?\")\n```\n\nThis loader is designed to be used as a way to load data into [GPT Index](https://github.com/run-llama/llama_index/tree/main/llama_index) and/or subsequently used in a [LangChain](https://github.com/hwchase17/langchain) Agent.\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "llama-index readers lilac integration",
    "version": "0.2.0",
    "project_urls": null,
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "7cd7e30af02838b5d07955306318c5082784d86175dd627eece9c8d6473cee43",
                "md5": "d0df36ba9b5bd706225a6dc5974f7bfb",
                "sha256": "f47b93e9b8085eef22691c25e39104d5e3742eb7b1609fc912d96f650badd34c"
            },
            "downloads": -1,
            "filename": "llama_index_readers_lilac-0.2.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "d0df36ba9b5bd706225a6dc5974f7bfb",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<4.0,>=3.9",
            "size": 3805,
            "upload_time": "2024-08-22T06:27:51",
            "upload_time_iso_8601": "2024-08-22T06:27:51.496029Z",
            "url": "https://files.pythonhosted.org/packages/7c/d7/e30af02838b5d07955306318c5082784d86175dd627eece9c8d6473cee43/llama_index_readers_lilac-0.2.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "9f21a1c691b20c2681053bb800bbef8274d267e513f06c23ad54eb15c40f84a7",
                "md5": "62893d9d041d97aa6057059f4cbd766c",
                "sha256": "3fd84c310f39fe494fc6a9a2032c00b7200ee0aa30b0df9e4ed599274da90af4"
            },
            "downloads": -1,
            "filename": "llama_index_readers_lilac-0.2.0.tar.gz",
            "has_sig": false,
            "md5_digest": "62893d9d041d97aa6057059f4cbd766c",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<4.0,>=3.9",
            "size": 3559,
            "upload_time": "2024-08-22T06:27:52",
            "upload_time_iso_8601": "2024-08-22T06:27:52.282155Z",
            "url": "https://files.pythonhosted.org/packages/9f/21/a1c691b20c2681053bb800bbef8274d267e513f06c23ad54eb15c40f84a7/llama_index_readers_lilac-0.2.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-08-22 06:27:52",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "llama-index-readers-lilac"
}
        