# LlamaParse
LlamaParse is an API created by LlamaIndex to efficiently parse and represent files for retrieval and context augmentation with LlamaIndex frameworks.
LlamaParse directly integrates with [LlamaIndex](https://github.com/run-llama/llama_index).
Currently available for **free**. Try it out today!
**NOTE:** Currently, only PDF files are supported.
## Getting Started
First, log in and get an API key at `https://cloud.llamaindex.ai`.
Then, make sure you have the latest LlamaIndex version installed.
**NOTE:** If you are upgrading from v0.9.X, we recommend following our [migration guide](https://pretty-sodium-5e0.notion.site/v0-10-0-Migration-Guide-6ede431dcb8841b09ea171e7f133bd77), as well as uninstalling your previous version first.
```shell
pip uninstall llama-index  # run this if upgrading from v0.9.x or older
pip install -U llama-index --no-cache-dir --force-reinstall
```
Lastly, install the package:
`pip install llama-parse`
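Rather than hard-coding the key, you can export it as `LLAMA_CLOUD_API_KEY`, the environment variable the parser reads (per the comment in the snippets that follow). Replace the placeholder with your own key:

```shell
# Export the key so LlamaParse can pick it up from the environment
# instead of reading it from source code. Replace the placeholder value.
export LLAMA_CLOUD_API_KEY="llx-..."
```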
Now you can run the following to parse your first PDF file:
```python
import nest_asyncio

nest_asyncio.apply()

from llama_parse import LlamaParse

parser = LlamaParse(
    api_key="llx-...",  # can also be set in your env as LLAMA_CLOUD_API_KEY
    result_type="markdown",  # "markdown" and "text" are available
    verbose=True,
)

# sync
documents = parser.load_data("./my_file.pdf")

# sync batch
documents = parser.load_data(["./my_file1.pdf", "./my_file2.pdf"])

# async
documents = await parser.aload_data("./my_file.pdf")

# async batch
documents = await parser.aload_data(["./my_file1.pdf", "./my_file2.pdf"])
```
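Each call above returns a list of LlamaIndex `Document` objects whose parsed content lives in the `text` attribute. A minimal sketch of how downstream code typically consumes that list, using a hypothetical stand-in class so the snippet runs without an API key:

```python
# Hypothetical stand-in for llama_index's Document, used here only so the
# snippet runs offline; real documents come back from load_data().
class Doc:
    def __init__(self, text):
        self.text = text


documents = [Doc("# Page 1\n\nIntro text"), Doc("# Page 2\n\nMore text")]

# Join the per-document markdown into one string, e.g. to write it to disk.
full_markdown = "\n\n".join(doc.text for doc in documents)
```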
## Using with `SimpleDirectoryReader`
You can also integrate the parser as the default PDF loader in `SimpleDirectoryReader`:
```python
import nest_asyncio

nest_asyncio.apply()

from llama_parse import LlamaParse
from llama_index.core import SimpleDirectoryReader

parser = LlamaParse(
    api_key="llx-...",  # can also be set in your env as LLAMA_CLOUD_API_KEY
    result_type="markdown",  # "markdown" and "text" are available
    verbose=True,
)

file_extractor = {".pdf": parser}
documents = SimpleDirectoryReader(
    "./data", file_extractor=file_extractor
).load_data()
```
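The `file_extractor` mapping is keyed by file extension: `SimpleDirectoryReader` looks up each file's suffix there and falls back to its default readers when no entry matches. A rough sketch of that dispatch, with a hypothetical stub parser so it runs without an API key:

```python
from pathlib import Path


# Hypothetical stub standing in for LlamaParse so the routing runs offline.
class StubParser:
    def load_data(self, path):
        return [f"parsed:{path}"]


file_extractor = {".pdf": StubParser()}


def pick_reader(path, extractors, default=None):
    # Extension-based dispatch: return the mapped reader when the file
    # suffix has an entry, otherwise fall back to the default reader.
    return extractors.get(Path(path).suffix, default)


reader = pick_reader("./data/report.pdf", file_extractor)
documents = reader.load_data("./data/report.pdf")
```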
Full documentation for `SimpleDirectoryReader` can be found on the [LlamaIndex Documentation](https://docs.llamaindex.ai/en/stable/module_guides/loading/simpledirectoryreader.html).
## Examples
Several end-to-end indexing examples can be found in the examples folder:
- [Getting Started](https://github.com/run-llama/llama_parse/blob/main/examples/demo_basic.ipynb)
- [Advanced RAG Example](https://github.com/run-llama/llama_parse/blob/main/examples/demo_advanced.ipynb)
- [Raw API Usage](https://github.com/run-llama/llama_parse/blob/main/examples/demo_api.ipynb)
## Terms of Service
See the [Terms of Service Here](https://github.com/run-llama/llama_parse/blob/main/TOS.pdf).