llama-hub

Name: llama-hub
Version: 0.0.79.post1
Home page: https://llamahub.ai
Summary: A library of community-driven data loaders for LLMs. Use with LlamaIndex and/or LangChain.
Upload time: 2024-02-13 01:32:31
Author: Jerry Liu
Requires Python: >=3.8.1,<3.12
License: MIT
Keywords: llama-index, llama-hub, llama
# LlamaHub 🦙

> [!CAUTION]
> With the launch of LlamaIndex v0.10, we are deprecating this `llama_hub` repo - all integrations (data loaders, tools) and packs are now in the core [`llama-index` Python repository](https://github.com/run-llama/llama_index). 
> LlamaHub will continue to exist. We are revamping [llamahub.ai](https://llamahub.ai/) to point to all integrations/packs/datasets available in the `llama-index` repo.

**Original creator**: Jesse Zhang (GH: [emptycrown](https://github.com/emptycrown), Twitter: [@thejessezhang](https://twitter.com/thejessezhang)), who courteously donated the repo to LlamaIndex!

> 👥 **Contributing**
> 
> Interested in contributing? Skip over to our [Contribution Section](https://github.com/run-llama/llama-hub#how-to-add-a-loadertoolllama-pack) below for more details.

This is a simple library of all the data loaders / readers / tools / llama-packs / llama-datasets that have been created by the community. The goal is to make it extremely easy to connect large language models to a large variety of knowledge sources. These are general-purpose utilities that are meant to be used in [LlamaIndex](https://github.com/run-llama/llama_index), [LangChain](https://github.com/hwchase17/langchain), and more!

Loaders and readers allow you to easily ingest data for search and retrieval by a large language model, while tools allow the models to both read and write to third-party data services and sources. Ultimately, this allows you to create your own customized data agent to intelligently work with you and your data, unlocking the full capability of next-level large language models.

For a variety of examples of data agents, see the [notebooks directory](https://github.com/emptycrown/llama-hub/tree/main/llama_hub/tools/notebooks). You can find example Jupyter notebooks for creating data agents that load and parse data from Google Docs, SQL databases, Notion, and Slack; manage your Google Calendar and Gmail inbox; or read and use OpenAPI specs.

For an easier way to browse the integrations available, check out the website here: https://llamahub.ai/.

<img width="1465" alt="Screenshot 2023-07-17 at 6 12 32 PM" src="https://github.com/ajhofmann/llama-hub/assets/10040285/5e344de4-4aca-4f6c-9944-46c00baa5eb2">

## Usage (Use `llama-hub` as PyPI package)
These general-purpose loaders are designed to be used as a way to load data into [LlamaIndex](https://github.com/jerryjliu/llama_index) and/or subsequently used in [LangChain](https://github.com/hwchase17/langchain). 

### Installation
```bash
pip install llama-hub
```

### LlamaIndex

```python
from llama_index import VectorStoreIndex
from llama_hub.google_docs import GoogleDocsReader

gdoc_ids = ['1wf-y2pd9C878Oh-FmLH7Q_BQkljdm6TQal-c1pUfrec']
loader = GoogleDocsReader()
documents = loader.load_data(document_ids=gdoc_ids)
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
query_engine.query('Where did the author go to school?')
```

### LlamaIndex Data Agent

```python
from llama_index.agent import OpenAIAgent
import openai
openai.api_key = 'sk-api-key'

from llama_hub.tools.google_calendar import GoogleCalendarToolSpec
tool_spec = GoogleCalendarToolSpec()

agent = OpenAIAgent.from_tools(tool_spec.to_tool_list())
agent.chat('what is the first thing on my calendar today')
agent.chat("Please create an event for tomorrow at 4pm to review pull requests")
```

For a variety of examples of creating and using data agents, see the [notebooks directory](https://github.com/emptycrown/llama-hub/tree/main/llama_hub/tools/notebooks).

### LangChain

Note: Make sure you change the description of the `Tool` to match your use case.

```python
from llama_index import VectorStoreIndex
from llama_hub.google_docs import GoogleDocsReader
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain

# load documents
gdoc_ids = ['1wf-y2pd9C878Oh-FmLH7Q_BQkljdm6TQal-c1pUfrec']
loader = GoogleDocsReader()
documents = loader.load_data(document_ids=gdoc_ids)
langchain_documents = [d.to_langchain_format() for d in documents]

# initialize sample QA chain
llm = OpenAI(temperature=0)
qa_chain = load_qa_chain(llm)
question="<query here>"
answer = qa_chain.run(input_documents=langchain_documents, question=question)

```

## Loader Usage (Use `download_loader` from LlamaIndex)

You can also use the loaders with `download_loader` from LlamaIndex in a single line of code.

For example, see the code snippets below using the Google Docs Loader.

```python
from llama_index import VectorStoreIndex, download_loader

GoogleDocsReader = download_loader('GoogleDocsReader')

gdoc_ids = ['1wf-y2pd9C878Oh-FmLH7Q_BQkljdm6TQal-c1pUfrec']
loader = GoogleDocsReader()
documents = loader.load_data(document_ids=gdoc_ids)
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
query_engine.query('Where did the author go to school?')

```

## Llama-Pack Usage

Llama-packs can be downloaded using the `llamaindex-cli` tool that comes with `llama-index`:

```bash
llamaindex-cli download-llamapack ZephyrQueryEnginePack --download-dir ./zephyr_pack
```

Or with the `download_llama_pack` function directly:

```python
from llama_index.llama_pack import download_llama_pack

# download and install dependencies
LlavaCompletionPack = download_llama_pack(
  "LlavaCompletionPack", "./llava_pack"
)
```

## Llama-Dataset Usage

(NOTE: in what follows we present the pattern for producing a RAG benchmark with
the `RagEvaluatorPack` over a `LabelledRagDataset`. However, there are also other
types of llama-datasets, such as the `LabelledEvaluatorDataset`, with corresponding llama-packs
for producing benchmarks on their respective tasks. They all follow a similar
usage pattern. Please refer to the READMEs to learn more about each type of
llama-dataset.)

The primary use of a llama-dataset is evaluating the performance of a RAG system.
In particular, it serves as a new test set (in traditional machine learning speak)
for one to build a RAG system over, predict on, and subsequently perform evaluations
comparing the predicted response versus the reference response. To perform the
evaluation, the recommended usage pattern involves applying the
`RagEvaluatorPack`. We recommend reading the [docs](https://docs.llamaindex.ai/en/stable/module_guides/evaluating/root.html) for the "Evaluation" module for
more information on all of our llama-datasets.

```python
from llama_index.llama_dataset import download_llama_dataset
from llama_index.llama_pack import download_llama_pack
from llama_index import VectorStoreIndex

# download and install dependencies for benchmark dataset
rag_dataset, documents = download_llama_dataset(
  "PaulGrahamEssayDataset", "./data"
)

# build basic RAG system
index = VectorStoreIndex.from_documents(documents=documents)
query_engine = index.as_query_engine()

# evaluate using the RagEvaluatorPack
RagEvaluatorPack = download_llama_pack(
  "RagEvaluatorPack", "./rag_evaluator_pack"
)
rag_evaluator_pack = RagEvaluatorPack(
    rag_dataset=rag_dataset,
    query_engine=query_engine
)
benchmark_df = rag_evaluator_pack.run()  # async arun() supported as well
```

Llama-datasets can also be downloaded directly using `llamaindex-cli`, which comes installed with the `llama-index` python package:

```bash
llamaindex-cli download-llamadataset PaulGrahamEssayDataset --download-dir ./data
```

After downloading them with `llamaindex-cli`, you can inspect the dataset and
its source files (stored in a `/source_files` directory) and then load them into Python:

```python
from llama_index import SimpleDirectoryReader
from llama_index.llama_dataset import LabelledRagDataset

rag_dataset = LabelledRagDataset.from_json("./data/rag_dataset.json")
documents = SimpleDirectoryReader(
    input_dir="./data/source_files"
).load_data()
```

## How to add a loader/tool/llama-pack

Adding a loader/tool/llama-pack simply requires forking this repo and making a Pull Request. The Llama Hub website will update automatically when a new `llama-hub` release is made. However, please keep in mind the following guidelines when making your PR.

### Step 0: Setup virtual environment, install Poetry and dependencies

Create a new Python virtual environment. The command below creates an environment in `.venv`,
and activates it:
```bash
python -m venv .venv
source .venv/bin/activate
```

If you are on Windows, use the following to activate your virtual environment:

```bash
.venv\Scripts\activate
```

Install Poetry:

```bash
pip install poetry
```

Install the required dependencies (this will also install `llama_index`):

```bash
poetry install
```

This will create an editable install of `llama-hub` in your venv.


### Step 1: Create a new directory

For loaders, create a new directory in `llama_hub`; for tools, create a directory in `llama_hub/tools`; and for llama-packs, create a directory in `llama_hub/llama_packs`. It can be nested within another directory, but name it something unique, because the name of the directory will become the identifier for your loader (e.g. `google_docs`). Inside your new directory, create an `__init__.py` file specifying the module's public interface with `__all__`, a `base.py` file which will contain your loader implementation, and, if needed, a `requirements.txt` file to list the package dependencies of your loader. Those packages will automatically be installed when your loader is used, so there is no need to worry about that!

If you'd like, you can create the new directory and files by running the following script in the `llama_hub` directory. Just remember to put your dependencies into a `requirements.txt` file.

```bash
./add_loader.sh [NAME_OF_NEW_DIRECTORY]
```
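To make the expected layout concrete, here is a minimal sketch of what `base.py` and `__init__.py` might contain for a hypothetical `simple_text` loader. The `SimpleTextReader` name and module path are illustrative assumptions, not an existing loader; a real loader would subclass `BaseReader` from `llama_index` and return `Document` objects, which is omitted here to keep the sketch self-contained.

```python
# Hypothetical llama_hub/simple_text/base.py -- the name and behavior are
# illustrative only. A real loader would subclass llama_index's BaseReader
# and return Document objects instead of plain strings.
class SimpleTextReader:
    """Toy loader that reads plain-text files from disk."""

    def load_data(self, file_paths):
        """Return the contents of each file as one document."""
        documents = []
        for path in file_paths:
            with open(path, encoding="utf-8") as f:
                documents.append(f.read())
        return documents


# Hypothetical llama_hub/simple_text/__init__.py -- expose the public
# interface with __all__, as described above.
__all__ = ["SimpleTextReader"]
```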

### Step 2: Write your README

Inside your new directory, create a `README.md` that mirrors that of the existing ones. It should have a summary of what your loader or tool does, its inputs, and how it is used in the context of LlamaIndex and LangChain.

### Step 3: Add your loader to the library.json file

Finally, add your loader to the `llama_hub/library.json` file (or for the equivalent `library.json` under `tools/` or `llama-packs/`) so that it may be used by others. As is exemplified by the current file, add the class name of your loader or tool, along with its ID, author, etc. This file is referenced by the Llama Hub website and the download function within LlamaIndex.
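For illustration, a new entry might look like the following sketch. The `SimpleTextReader` class name, `id`, and keywords are placeholders; model your actual entry on the existing ones in the file.

```json
{
  "SimpleTextReader": {
    "id": "simple_text",
    "author": "your-github-username",
    "keywords": ["text", "files"]
  }
}
```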

### Step 4: Make a Pull Request!

Create a PR against the main branch. We typically review PRs within a day. To help expedite the process, it may be helpful to provide screenshots (either in the PR or in
the README directly) showing your data loader or tool in action!

## How to add a llama-dataset

Similar to the process of adding a tool / loader / llama-pack, adding a llama-dataset
also requires forking this repo and making a Pull Request. However, for a
llama-dataset, only its metadata is checked into this repo. The actual dataset
and its source files are instead checked into a separate GitHub repo, the
[llama-datasets repository](https://github.com/run-llama/llama-datasets). You will need to fork and clone that repo in addition to forking and cloning this one.

Please ensure that when you clone the llama-datasets repository, you set
the environment variable `GIT_LFS_SKIP_SMUDGE` before calling the `git clone`
command:

```bash
# for bash
GIT_LFS_SKIP_SMUDGE=1 git clone git@github.com:<your-github-user-name>/llama-datasets.git  # for ssh
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/<your-github-user-name>/llama-datasets.git  # for https

# on Windows, it's done in two commands
set GIT_LFS_SKIP_SMUDGE=1  
git clone git@github.com:<your-github-user-name>/llama-datasets.git  # for ssh

set GIT_LFS_SKIP_SMUDGE=1  
git clone https://github.com/<your-github-user-name>/llama-datasets.git  # for https
```

The high-level steps for adding a llama-dataset are as follows:

1. Create a `LabelledRagDataset` (the initial class of llama-dataset made available on llama-hub)
2. Generate a baseline result with a RAG system of your own choosing on the
`LabelledRagDataset`
3. Prepare the dataset's metadata (`card.json` and `README.md`)
4. Submit a Pull Request to this repo to check in the metadata
5. Submit a Pull Request to the [llama-datasets repository](https://github.com/run-llama/llama-datasets) to check in the `LabelledRagDataset` and the source files

To assist with the submission process, we have prepared a [submission template
notebook](https://github.com/run-llama/llama_index/blob/main/docs/examples/llama_dataset/ragdataset_submission_template.ipynb) that walks you through the above-listed steps. We highly recommend
that you use this template notebook.

(NOTE: you can use the above process for submitting any of our other supported
types of llama-datasets such as the `LabelledEvaluatorDataset`.)

## Running tests

```shell
python3.9 -m venv .venv
source .venv/bin/activate 
pip3 install -r test_requirements.txt

poetry run make test
```

## Changelog

If you want to track the latest version updates / see which loaders are added to each release, take a look at our [full changelog here](https://github.com/emptycrown/llama-hub/blob/main/CHANGELOG.md)! 

## FAQ

### How do I test my loader before it's merged?

There is an argument called `loader_hub_url` in [`download_loader`](https://github.com/jerryjliu/llama_index/blob/main/llama_index/readers/download.py) that defaults to the main branch of this repo. You can set it to your branch or fork to test your new loader.

### Should I create a PR against LlamaHub or the LlamaIndex repo directly?

If you have a data loader PR, by default please create it against LlamaHub! We will make exceptions in certain cases
(for instance, if we think the data loader should be core to the LlamaIndex repo).

For all other PRs relevant to LlamaIndex, please create them directly against the [LlamaIndex repo](https://github.com/jerryjliu/llama_index).

### How can I get a verified badge on LlamaHub? 
We have just started offering badges to our contributors. At the moment, we're focused on our early adopters and official partners, but we're gradually opening up badge consideration to all submissions. If you're interested in being considered, please review the criteria below and if everything aligns, feel free to contact us via [community Discord](https://discord.gg/dGcwcsnxhU).

We are still refining our criteria but here are some aspects we consider:

**Quality**
- Code quality, illustrated by the use of coding standards and style guidelines.
- Code readability and proper documentation.

**Usability**
- A self-contained module with no external links or libraries that is easy to run.
- The module should not break any existing unit tests.

**Safety**
- Safety considerations, such as proper input validation, avoiding SQL injection, and secure handling of user data.

**Community Engagement & Feedback**
- The module's usefulness to the library's users as gauged by the number of likes, downloads, etc.
- Positive feedback from module users.
 
Note:
* We may decide to award a badge to only a subset of your submissions, based on the above criteria.
* Being a regular contributor doesn't guarantee a badge; we will still look at each submission individually.

### Other questions?

Feel free to hop into the [community Discord](https://discord.gg/dGcwcsnxhU) or tag the official [Twitter account](https://twitter.com/llama_index)!

            
