flying-delta-legacy

Name: flying-delta-legacy
Version: 0.9.42
Home page: https://llamaindex.ai
Summary: Interface between LLMs and your data
Upload time: 2024-02-02 17:30:21
Maintainer: Andrei Fajardo
Author: Jerry Liu
Requires Python: >=3.8.1,<4.0
License: MIT
Keywords: llm, nlp, rag, data, devtools, index, retrieval
Requirements: none recorded
# 🗂️ LlamaIndex 🦙

[![PyPI - Downloads](https://img.shields.io/pypi/dm/llama-index)](https://pypi.org/project/llama-index/)
[![GitHub contributors](https://img.shields.io/github/contributors/jerryjliu/llama_index)](https://github.com/jerryjliu/llama_index/graphs/contributors)
[![Discord](https://img.shields.io/discord/1059199217496772688)](https://discord.gg/dGcwcsnxhU)

LlamaIndex (GPT Index) is a data framework for your LLM application.

PyPI:

- LlamaIndex: https://pypi.org/project/llama-index/.
- GPT Index (duplicate): https://pypi.org/project/gpt-index/.

LlamaIndex.TS (TypeScript/JavaScript): https://github.com/run-llama/LlamaIndexTS.

Documentation: https://docs.llamaindex.ai/en/stable/.

Twitter: https://twitter.com/llama_index.

Discord: https://discord.gg/dGcwcsnxhU.

### Ecosystem

- LlamaHub (community library of data loaders): https://llamahub.ai.
- LlamaLab (cutting-edge AGI projects using LlamaIndex): https://github.com/run-llama/llama-lab.

## 🚀 Overview

**NOTE**: This README is not updated as frequently as the documentation. Please check out the documentation above for the latest updates!

### Context

- LLMs are a phenomenal piece of technology for knowledge generation and reasoning. They are pre-trained on large amounts of publicly available data.
- How do we best augment LLMs with our own private data?

We need a comprehensive toolkit to help perform this data augmentation for LLMs.

### Proposed Solution

That's where **LlamaIndex** comes in. LlamaIndex is a "data framework" to help you build LLM apps. It provides the following tools:

- Offers **data connectors** to ingest your existing data sources and data formats (APIs, PDFs, docs, SQL, etc.).
- Provides ways to **structure your data** (indices, graphs) so that this data can be easily used with LLMs.
- Provides an **advanced retrieval/query interface over your data**: Feed in any LLM input prompt, get back retrieved context and knowledge-augmented output.
- Allows easy integrations with your outer application framework (e.g. with LangChain, Flask, Docker, ChatGPT, anything else).

LlamaIndex provides tools for both beginner and advanced users. The high-level API lets beginners ingest and query their data in five lines of code, while the lower-level APIs let advanced users customize and extend any module (data connectors, indices, retrievers, query engines, reranking modules) to fit their needs.
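The retrieval/query interface described above follows a retrieve-then-synthesize pattern: embed the documents up front, embed the query, pull back the closest context, then hand that context plus the question to the LLM. A deliberately tiny, dependency-free sketch of that pattern (bag-of-words cosine similarity stands in for real embeddings; none of these names are LlamaIndex API):

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term frequencies.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


class ToyVectorIndex:
    """Minimal stand-in for a vector store index: embed documents at
    build time, embed the query at query time, return top-k by score."""

    def __init__(self, docs: list[str]):
        self.docs = docs
        self.vectors = [embed(d) for d in docs]

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        scored = sorted(
            zip(self.docs, self.vectors),
            key=lambda dv: cosine(q, dv[1]),
            reverse=True,
        )
        return [doc for doc, _ in scored[:k]]


index = ToyVectorIndex([
    "The company handbook covers vacation policy.",
    "The API reference documents the /users endpoint.",
])
context = index.retrieve("how much vacation do I get?")[0]
# A real framework would now feed `context` plus the question to an LLM:
prompt = f"Context: {context}\n\nQuestion: how much vacation do I get?"
print(context)
```

LlamaIndex automates every step of this loop with production-grade embeddings, storage backends, and response synthesis.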

## 💡 Contributing

Interested in contributing? See our [Contribution Guide](CONTRIBUTING.md) for more details.

## 📄 Documentation

Full documentation can be found here: https://docs.llamaindex.ai/en/latest/.

Please check it out for the most up-to-date tutorials, how-to guides, references, and other resources!

## 💻 Example Usage

```bash
pip install llama-index
```

Examples are in the `examples` folder. Indices are in the `indices` folder.

To build a simple vector store index using OpenAI:

```python
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

from llama_index.legacy import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()
index = VectorStoreIndex.from_documents(documents)
```

To build a simple vector store index using non-OpenAI LLMs, e.g. Llama 2 hosted on [Replicate](https://replicate.com/), where you can easily create a free trial API token:

```python
import os

os.environ["REPLICATE_API_TOKEN"] = "YOUR_REPLICATE_API_TOKEN"

from llama_index.legacy.llms import Replicate

llama2_7b_chat = "meta/llama-2-7b-chat:8e6975e5ed6174911a6ff3d60540dfd4844201974602551e10e9e87ab143d81e"
llm = Replicate(
    model=llama2_7b_chat,
    temperature=0.01,
    additional_kwargs={"top_p": 1, "max_new_tokens": 300},
)

# set tokenizer to match LLM
from llama_index.legacy import set_global_tokenizer
from transformers import AutoTokenizer

set_global_tokenizer(
    AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf").encode
)

from llama_index.legacy.embeddings import HuggingFaceEmbedding
from llama_index.legacy import ServiceContext

embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
service_context = ServiceContext.from_defaults(
    llm=llm, embed_model=embed_model
)

from llama_index.legacy import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()
index = VectorStoreIndex.from_documents(
    documents, service_context=service_context
)
```
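Judging by the example above, `set_global_tokenizer` accepts any callable from text to a list of tokens, which is why the Hugging Face tokenizer's bound `.encode` method can be passed in directly. A toy stand-in makes the expected shape clear (the whitespace tokenizer here is purely illustrative, not something to use for real token accounting):

```python
def whitespace_tokenize(text: str) -> list[str]:
    # Same call shape as AutoTokenizer(...).encode: text in, token list out.
    return text.split()


tokens = whitespace_tokenize("Llama 2 is hosted on Replicate")
print(len(tokens))
```

Keeping the registered tokenizer consistent with the LLM's actual tokenizer matters because token counts drive chunking and context-window budgeting.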

To query:

```python
query_engine = index.as_query_engine()
response = query_engine.query("YOUR_QUESTION")
print(response)
```

By default, data is stored in-memory.
To persist to disk (under `./storage`):

```python
index.storage_context.persist()
```

To reload from disk:

```python
from llama_index.legacy import StorageContext, load_index_from_storage

# rebuild storage context
storage_context = StorageContext.from_defaults(persist_dir="./storage")
# load index
index = load_index_from_storage(storage_context)
```
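Conceptually, `persist()` serializes the index's stores as JSON under the persist directory, and `StorageContext.from_defaults(persist_dir=...)` points the loader back at them. The round trip follows the same save/reload pattern as this dependency-free sketch (the file name and layout here are illustrative, not the exact files llama-index writes):

```python
import json
import tempfile
from pathlib import Path


def persist(store: dict, persist_dir: str) -> None:
    # Analogue of index.storage_context.persist(): dump state as JSON on disk.
    path = Path(persist_dir)
    path.mkdir(parents=True, exist_ok=True)
    (path / "docstore.json").write_text(json.dumps(store))


def load(persist_dir: str) -> dict:
    # Analogue of load_index_from_storage(): read the state back in.
    return json.loads((Path(persist_dir) / "docstore.json").read_text())


with tempfile.TemporaryDirectory() as d:
    persist({"doc-1": "hello"}, d)
    restored = load(d)

print(restored)
```

Because the stores are plain files, the persist directory can be committed, copied between machines, or mounted into a container, and the index reloads without re-embedding the source documents.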

## 🔧 Dependencies

The main third-party package requirements are `tiktoken`, `openai`, and `langchain`.

All requirements should be contained within the `setup.py` file.
To run the package locally without building the wheel, simply run:

```bash
pip install poetry
poetry install --with dev
```

## 📖 Citation

Reference to cite if you use LlamaIndex in a paper:

```bibtex
@software{Liu_LlamaIndex_2022,
author = {Liu, Jerry},
doi = {10.5281/zenodo.1234},
month = {11},
title = {{LlamaIndex}},
url = {https://github.com/jerryjliu/llama_index},
year = {2022}
}
```
