<p align="center">
<img src="https://raw.githubusercontent.com/neuml/annotateai/master/logo.png"/>
</p>
<p align="center">
<b>Automatically annotate papers using LLMs</b>
</p>
<p align="center">
<a href="https://github.com/neuml/annotateai/releases">
<img src="https://img.shields.io/github/release/neuml/annotateai.svg?style=flat&color=success" alt="Version"/>
</a>
<a href="https://github.com/neuml/annotateai/releases">
<img src="https://img.shields.io/github/release-date/neuml/annotateai.svg?style=flat&color=blue" alt="GitHub Release Date"/>
</a>
<a href="https://github.com/neuml/annotateai/issues">
<img src="https://img.shields.io/github/issues/neuml/annotateai.svg?style=flat&color=success" alt="GitHub issues"/>
</a>
<a href="https://github.com/neuml/annotateai">
<img src="https://img.shields.io/github/last-commit/neuml/annotateai.svg?style=flat&color=blue" alt="GitHub last commit"/>
</a>
</p>
-------------------------------------------------------------------------------------------------------------------------------------------------------
![demo](https://raw.githubusercontent.com/neuml/annotateai/master/demo.png)
`annotateai` automatically annotates papers using Large Language Models (LLMs). While LLMs can summarize, search, and generate text about papers, this project focuses on giving human readers context as they read.
## Architecture
![architecture](https://raw.githubusercontent.com/neuml/annotateai/master/images/architecture.png#gh-light-mode-only)
A one-line call does the following:
- Reads the paper
- Finds the title and important key concepts
- Goes through each page and finds the sections that best emphasize the key concepts
- Reads each of those sections and builds a concise topic
- Annotates the paper and highlights those sections
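The steps above can be sketched in Python. This is an illustrative outline only, not the actual annotateai internals; the function names and stand-in logic below are ours.

```python
# Hypothetical sketch of the annotation pipeline. In the real project,
# the concept extraction and topic building are LLM calls and the
# highlights are drawn into the PDF.

def find_concepts(pages):
    """Stand-in for the LLM call that extracts key concepts."""
    return ["retrieval", "generation"]

def find_sections(page, concepts):
    """Stand-in for matching a page's sections against the concepts."""
    return [s for s in page.split("\n") if any(c in s.lower() for c in concepts)]

def annotate_paper(pages):
    """Walk each page and collect the sections to highlight."""
    concepts = find_concepts(pages)
    highlights = []
    for page in pages:
        for section in find_sections(page, concepts):
            # A real implementation would also ask the LLM for a
            # concise topic label for this section here.
            highlights.append(section)
    return highlights
```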
## Installation
The easiest way to install is via pip and PyPI.
```
pip install annotateai
```
Python 3.9+ is supported. Using a Python [virtual environment](https://docs.python.org/3/library/venv.html) is recommended.
`annotateai` can also be installed directly from GitHub to access the latest, unreleased features.
```
pip install git+https://github.com/neuml/annotateai
```
## Examples
`annotateai` can annotate any PDF but it works especially well for medical and scientific papers. The following shows a series of examples using papers from [arXiv](https://arxiv.org/).
This project also works well with papers from [PubMed](https://pubmed.ncbi.nlm.nih.gov/), [bioRxiv](https://www.biorxiv.org/) and [medRxiv](https://www.medrxiv.org/)!
### Setup
Install the following.
```bash
# Change autoawq[kernels] to "autoawq autoawq-kernels" if a flash-attn error is raised
pip install annotateai autoawq[kernels]
# macOS users should run this instead
pip install annotateai llama-cpp-python
```
The primary input parameter is the path to the LLM. This project is backed by [txtai](https://github.com/neuml/txtai) and it supports any [txtai-supported LLM](https://neuml.github.io/txtai/pipeline/text/llm/).
```python
from annotateai import Annotate
# This model works well with medical and scientific literature
annotate = Annotate("NeuML/Llama-3.1_OpenScholar-8B-AWQ")
# macOS users should run this instead
annotate = Annotate(
"bartowski/Llama-3.1_OpenScholar-8B-GGUF/Llama-3.1_OpenScholar-8B-Q4_K_M.gguf"
)
```
### Annotate paper "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks"
This paper proposed RAG before most of us knew we needed it.
```python
annotate("https://arxiv.org/pdf/2005.11401")
```
![rag](https://raw.githubusercontent.com/neuml/annotateai/master/images/rag.png)
_Source: https://arxiv.org/pdf/2005.11401_
### Annotate paper "HunyuanVideo: A Systematic Framework For Large Video Generative Models"
This paper introduces the largest open-source video generation model. It's trending on [Papers With Code](https://paperswithcode.com/) as of Dec 2024.
```python
annotate("https://arxiv.org/pdf/2412.03603v2")
```
![video](https://raw.githubusercontent.com/neuml/annotateai/master/images/video.png)
_Source: https://arxiv.org/pdf/2412.03603v2_
### Annotate paper "OpenDebateEvidence: A Massive-Scale Argument Mining and Summarization Dataset"
This paper was presented at the `38th Conference on Neural Information Processing Systems (NeurIPS 2024) Track on Datasets and Benchmarks`.
```python
annotate("https://arxiv.org/pdf/2406.14657")
```
![debate](https://raw.githubusercontent.com/neuml/annotateai/master/images/debate.png)
_Source: https://arxiv.org/pdf/2406.14657_
### LLM APIs and llama.cpp
As mentioned earlier, this project supports any [txtai-supported LLM](https://neuml.github.io/txtai/pipeline/text/llm/). Some examples are shown below.
```
pip install txtai[pipeline-llm]
```
```python
# LLM API services
annotate = Annotate("gpt-4o")
annotate = Annotate("claude-3-5-sonnet-20240620")
# Ollama endpoint
annotate = Annotate("ollama/llama3.1")
# llama.cpp GGUF from Hugging Face Hub
annotate = Annotate(
"bartowski/Llama-3.1_OpenScholar-8B-GGUF/Llama-3.1_OpenScholar-8B-Q4_K_M.gguf"
)
```
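For the hosted API services, credentials are typically supplied through environment variables before constructing the instance. The variable names below follow the usual provider conventions and are an assumption, not annotateai configuration.

```python
import os

# Provider API keys are read from the environment. Replace the
# placeholder values with real keys; the variable names follow
# common provider conventions (an assumption, not annotateai config).
os.environ["OPENAI_API_KEY"] = "sk-..."          # for gpt-4o
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."   # for claude models
```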
### Additional parameters
The default mode for an `annotate` instance is to automatically generate the key concepts to search for. But these concepts can be provided via the `keywords` parameter.
```python
annotate("https://arxiv.org/pdf/2005.11401", keywords=["hallucinations", "llm"])
```
This is useful when annotating a large batch of papers against a specific set of concepts, for example during a literature review.
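For a batch run, the call can be wrapped in a small helper. The `annotate_batch` name is ours, not part of the annotateai API; it collects whatever each call returns.

```python
def annotate_batch(annotate, papers, keywords):
    """Run one Annotate instance over many papers with shared keywords.

    `annotate` is a callable like the Annotate instance built above;
    passing it in keeps this sketch testable without loading a model.
    """
    results = []
    for paper in papers:
        results.append(annotate(paper, keywords=keywords, progress=False))
    return results
```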
The progress bar can be disabled as follows:
```python
annotate("https://arxiv.org/pdf/2005.11401", progress=False)
```
## Docker Web Application
![app](https://raw.githubusercontent.com/neuml/annotateai/master/images/app.gif)
[neuml/annotateai](https://hub.docker.com/r/neuml/annotateai) is a web application available on Docker Hub.
This can be run with the default settings as follows.
```
docker run -d --gpus=all -it -p 8501:8501 neuml/annotateai
```
The LLM can also be set via environment variables.
```
docker run -d --gpus=all -it -p 8501:8501 -e LLM=bartowski/Llama-3.2-1B-Instruct-GGUF/Llama-3.2-1B-Instruct-Q4_K_M.gguf neuml/annotateai
```
The code for this application can be found in the [app folder](https://github.com/neuml/annotateai/tree/master/app).