Name | paper-qa |
Version | 5.23.0 |
home_page | None |
Summary | LLM Chain for answering questions from docs |
upload_time | 2025-07-10 02:22:15 |
maintainer | None |
docs_url | None |
author | None |
requires_python | >=3.11 |
license | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2024 FutureHouse
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
keywords | question answering |
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
# PaperQA2
[GitHub](https://github.com/Future-House/paper-qa)
[PyPI](https://badge.fury.io/py/paper-qa)

PaperQA2 is a package for doing high-accuracy retrieval augmented generation (RAG) on PDFs or text files,
with a focus on the scientific literature.
See our [2024 paper](https://paper.wikicrow.ai)
for examples of PaperQA2's superhuman performance on scientific tasks like
question answering, summarization, and contradiction detection.
<!--TOC-->
- [Quickstart](#quickstart)
- [Example Output](#example-output)
- [What is PaperQA2](#what-is-paperqa2)
- [PaperQA2 vs PaperQA](#paperqa2-vs-paperqa)
- [What's New in Version 5 (aka PaperQA2)?](#whats-new-in-version-5-aka-paperqa2)
- [PaperQA2 Algorithm](#paperqa2-algorithm)
- [Installation](#installation)
- [CLI Usage](#cli-usage)
- [Bundled Settings](#bundled-settings)
- [Rate Limits](#rate-limits)
- [Library Usage](#library-usage)
- [Agentic Adding/Querying Documents](#agentic-addingquerying-documents)
- [Manual (No Agent) Adding/Querying Documents](#manual-no-agent-addingquerying-documents)
- [Async](#async)
- [Choosing Model](#choosing-model)
- [Locally Hosted](#locally-hosted)
- [Embedding Model](#embedding-model)
- [Specifying the Embedding Model](#specifying-the-embedding-model)
- [Local Embedding Models (Sentence Transformers)](#local-embedding-models-sentence-transformers)
- [Adjusting number of sources](#adjusting-number-of-sources)
- [Using Code or HTML](#using-code-or-html)
- [Using External DB/Vector DB and Caching](#using-external-dbvector-db-and-caching)
- [Creating Index](#creating-index)
- [Manifest Files](#manifest-files)
- [Reusing Index](#reusing-index)
- [Using Clients Directly](#using-clients-directly)
- [Settings Cheatsheet](#settings-cheatsheet)
- [Where do I get papers?](#where-do-i-get-papers)
- [Callbacks](#callbacks)
- [Caching Embeddings](#caching-embeddings)
- [Customizing Prompts](#customizing-prompts)
- [Pre and Post Prompts](#pre-and-post-prompts)
- [FAQ](#faq)
- [How come I get different results than your papers?](#how-come-i-get-different-results-than-your-papers)
- [How is this different from LlamaIndex or LangChain?](#how-is-this-different-from-llamaindex-or-langchain)
- [Can I save or load?](#can-i-save-or-load)
- [Reproduction](#reproduction)
- [Citation](#citation)
<!--TOC-->
## Quickstart
In this example we take a folder of research paper PDFs,
magically get their metadata - including citation counts with a retraction check -
then parse and cache the PDFs into a full-text search index,
and finally answer the user's question with an LLM agent.
```bash
pip install paper-qa
mkdir my_papers
curl -o my_papers/PaperQA2.pdf https://arxiv.org/pdf/2409.13740
cd my_papers
pqa ask 'What is PaperQA2?'
```
### Example Output
Question: Has anyone designed neural networks that compute with proteins or DNA?
> The claim that neural networks have been designed to compute with DNA is supported by multiple sources.
> The work by Qian, Winfree, and Bruck demonstrates the use of DNA strand displacement cascades
> to construct neural network components, such as artificial neurons and associative memories,
> using a DNA-based system (Qian2011Neural pages 1-2, Qian2011Neural pages 15-16, Qian2011Neural pages 54-56).
> This research includes the implementation of a 3-bit XOR gate and a four-neuron Hopfield associative memory,
> showcasing the potential of DNA for neural network computation.
> Additionally, the application of deep learning techniques to genomics,
> which involves computing with DNA sequences, is well-documented.
> Studies have applied convolutional neural networks (CNNs) to predict genomic features such as
> transcription factor binding and DNA accessibility (Eraslan2019Deep pages 4-5, Eraslan2019Deep pages 5-6).
> These models leverage DNA sequences as input data,
> effectively using neural networks to compute with DNA.
> While the provided excerpts do not explicitly mention protein-based neural network computation,
> they do highlight the use of neural networks in tasks related to protein sequences,
> such as predicting DNA-protein binding (Zeng2016Convolutional pages 1-2).
> However, the primary focus remains on DNA-based computation.
## What is PaperQA2
PaperQA2 is engineered to be the best agentic RAG model for working with scientific papers.
Here are some features:
- A simple interface to get good answers with grounded responses containing in-text citations.
- State-of-the-art implementation including document metadata-awareness
in embeddings and LLM-based re-ranking and contextual summarization (RCS).
- Support for agentic RAG, where a language agent can iteratively refine queries and answers.
- Automatic redundant fetching of paper metadata,
including citation and journal quality data from multiple providers.
- A usable full-text search engine for a local repository of PDF/text files.
- A robust interface for customization, with default support for all [LiteLLM][LiteLLM providers] models.
[LiteLLM providers]: https://docs.litellm.ai/docs/providers
[LiteLLM general docs]: https://docs.litellm.ai/docs/
By default, it uses [OpenAI embeddings](https://platform.openai.com/docs/guides/embeddings)
and [models](https://platform.openai.com/docs/models) with a Numpy vector DB to embed and search documents.
However, you can easily use other closed-source or open-source models and embeddings (see details below).
PaperQA2 depends on some awesome libraries/APIs that make our repo possible.
Here are some in no particular order:
1. [Semantic Scholar](https://www.semanticscholar.org/)
2. [Crossref](https://www.crossref.org/)
3. [Unpaywall](https://unpaywall.org/)
4. [Pydantic](https://docs.pydantic.dev/latest/)
5. [tantivy](https://github.com/quickwit-oss/tantivy)
6. [LiteLLM][LiteLLM general docs]
7. [pybtex](https://pybtex.org/)
8. [PyMuPDF](https://pymupdf.readthedocs.io/en/latest/)
### PaperQA2 vs PaperQA
We've been working hard on fundamental upgrades for a while and have mostly followed [SemVer](https://semver.org/),
meaning we've incremented the major version number on each breaking change.
This brings us to the current major version, v5.
So why is the repo now called PaperQA2?
We wanted to remark on the fact that we've
exceeded human performance on [many important metrics](https://paper.wikicrow.ai).
So we arbitrarily call version 5 and onward PaperQA2,
and versions before it PaperQA1, to denote the significant change in performance.
We recognize that we are challenged at naming and counting at FutureHouse,
so we reserve the right at any time to arbitrarily change the name to PaperCrow.
### What's New in Version 5 (aka PaperQA2)?
Version 5 added:
- A CLI `pqa`
- Agentic workflows invoking tools for
paper search, gathering evidence, and generating an answer
- Removed much of the statefulness from the `Docs` object
- A migration to LiteLLM for compatibility with many LLM providers
as well as centralized rate limits and cost tracking
- A bundled set of configurations (read [this section here](#bundled-settings))
  containing known-good hyperparameters
Note that `Docs` objects pickled from prior versions of `PaperQA` are incompatible with version 5,
and will need to be rebuilt.
Also, our minimum Python version was increased to Python 3.11.
### PaperQA2 Algorithm
To understand PaperQA2, let's start with the pieces of the underlying algorithm.
The default workflow of PaperQA2 is as follows:
| Phase | PaperQA2 Actions |
| ---------------------- | ------------------------------------------------------------------------- |
| **1. Paper Search** | - Get candidate papers from LLM-generated keyword query |
| | - Chunk, embed, and add candidate papers to state |
| **2. Gather Evidence** | - Embed query into vector |
| | - Rank top _k_ document chunks in current state |
| | - Create scored summary of each chunk in the context of the current query |
| | - Use LLM to re-score and select most relevant summaries |
| **3. Generate Answer** | - Put best summaries into prompt with context |
| | - Generate answer with prompt |
The tools can be invoked in any order by a language agent.
For example, an LLM agent might do both a narrow and a broad search,
or use different phrasing for the gather evidence step than for the generate answer step.
## Installation
For a non-development setup,
install PaperQA2 (aka version 5) from [PyPI](https://pypi.org/project/paper-qa/).
Note version 5 requires Python 3.11+.
```bash
pip install "paper-qa>=5"
```
For development setup,
please refer to the [CONTRIBUTING.md](CONTRIBUTING.md) file.
PaperQA2 uses an LLM to operate,
so you'll need to either set an appropriate [API key environment variable][LiteLLM providers]
(e.g. `export OPENAI_API_KEY=sk-...`)
or set up an open-source LLM server (e.g. using [llamafile](https://github.com/Mozilla-Ocho/llamafile)).
Any LiteLLM-compatible model can be configured for use with PaperQA2.
If you need to index a large set of papers (100+),
you will likely want an API key for both
[Crossref](https://www.crossref.org/documentation/metadata-plus/metadata-plus-keys/)
and [Semantic Scholar](https://www.semanticscholar.org/product/api#api-key),
which will allow you to avoid hitting public rate limits when using these metadata services.
Those can be exported as `CROSSREF_API_KEY` and `SEMANTIC_SCHOLAR_API_KEY` variables.
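For example, in your shell (the key values below are placeholders for your own keys):

```bash
export CROSSREF_API_KEY=...
export SEMANTIC_SCHOLAR_API_KEY=...
```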
## CLI Usage
The fastest way to test PaperQA2 is via the CLI. First navigate to a directory with some papers and use the `pqa` CLI:
```bash
pqa ask 'What is PaperQA2?'
```
You will see PaperQA2 index your local PDF files,
gathering the necessary metadata for each of them
(using [Crossref](https://www.crossref.org/) and [Semantic Scholar](https://www.semanticscholar.org/)),
search over that index, then break the files into chunked evidence contexts,
rank them, and ultimately generate an answer.
The next time this directory is queried,
your index will already be built (save for any differences detected, like newly added papers),
so it will skip the indexing and chunking steps.
All prior answers will be indexed and stored;
you can view them by querying via the `search` subcommand,
or access them yourself in your `PQA_HOME` directory,
which defaults to `~/.pqa/`.
```bash
pqa -i 'answers' search 'ranking and contextual summarization'
```
PaperQA2 is highly configurable; when running from the command line,
`pqa --help` shows all options with short descriptions.
For example, to run with a higher temperature:
```bash
pqa --temperature 0.5 ask 'What is PaperQA2?'
```
You can view all settings with `pqa view`.
Another useful feature is switching to other templated settings - for example,
`fast` is a setting that answers more quickly,
and you can see its configuration with `pqa -s fast view`.
Maybe you have some new settings you want to save? You can do that with
```bash
pqa -s my_new_settings --temperature 0.5 --llm foo-bar-5 save
```
and then you can use it with
```bash
pqa -s my_new_settings ask 'What is PaperQA2?'
```
If you run `pqa` with a command that requires new indexing,
say if you change the default `chunk_size`,
a new index will automatically be created for you.
```bash
pqa --parsing.chunk_size 5000 ask 'What is PaperQA2?'
```
You can also use `pqa` to do plain full-text search, without using LLMs, via the `search` command.
For example, let's build the index from a directory and give it a name:
```bash
pqa -i nanomaterials index
```
Now I can search for papers about thermoelectrics:
```bash
pqa -i nanomaterials search thermoelectrics
```
or I can use the normal `ask`:
```bash
pqa -i nanomaterials ask 'Are there nm scale features in thermoelectric materials?'
```
Both the CLI and module have pre-configured settings based on prior performance and our publications;
they can be invoked as follows:
```bash
pqa --settings <setting name> \
ask 'Are there nm scale features in thermoelectric materials?'
```
### Bundled Settings
Inside [`paperqa/configs`](paperqa/configs) we bundle known useful settings:
| Setting Name | Description |
| ------------ | ---------------------------------------------------------------------------------------------------------------------------- |
| high_quality | Highly performant, relatively expensive (due to having `evidence_k` = 15) query using a `ToolSelector` agent. |
| fast | Setting to get answers cheaply and quickly. |
| wikicrow | Setting to emulate the Wikipedia article writing used in our WikiCrow publication. |
| contracrow | Setting to find contradictions in papers; your query should be a claim that needs to be flagged as a contradiction (or not). |
| debug | Setting useful solely for debugging, but not in any actual application beyond debugging. |
| tier1_limits | Settings that match OpenAI rate limits for each tier; you can use `tier<1-5>_limits` to specify the tier. |
### Rate Limits
If you are hitting rate limits, say with the OpenAI Tier 1 plan, you can add them into PaperQA2.
For each OpenAI tier, a pre-built setting exists to limit usage.
```bash
pqa --settings 'tier1_limits' ask 'What is PaperQA2?'
```
This will limit your system to the [tier1_limits](paperqa/configs/tier1_limits.json),
and slow down your queries to stay within them.
You can also specify them manually with any rate limit string that matches the specification in
the [limits](https://limits.readthedocs.io/en/stable/quickstart.html#rate-limit-string-notation) module:
```bash
pqa --summary_llm_config '{"rate_limit": {"gpt-4o-2024-11-20": "30000 per 1 minute"}}' \
ask 'What is PaperQA2?'
```
Or by adding into a `Settings` object, if calling imperatively:
```python
from paperqa import Settings, ask

answer_response = ask(
    "What is PaperQA2?",
    settings=Settings(
        llm_config={"rate_limit": {"gpt-4o-2024-11-20": "30000 per 1 minute"}},
        summary_llm_config={"rate_limit": {"gpt-4o-2024-11-20": "30000 per 1 minute"}},
    ),
)
```
## Library Usage
PaperQA2's full workflow can be accessed via Python directly:
```python
from paperqa import Settings, ask

answer_response = ask(
    "What is PaperQA2?",
    settings=Settings(temperature=0.5, paper_directory="my_papers"),
)
```
Please see our [installation docs](#installation) for how to install the package from PyPI.
### Agentic Adding/Querying Documents
The answer object has the following attributes:
`formatted_answer`, `answer` (answer alone), `question`, and `context` (the summaries of passages found for the answer).
`ask` will use the `SearchPapers` tool, which will query a local index of files;
you can specify this location via the `Settings` object:
```python
from paperqa import Settings, ask

answer_response = ask(
    "What is PaperQA2?",
    settings=Settings(temperature=0.5, paper_directory="my_papers"),
)
```
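As a sketch, assuming the response returned by `ask` exposes the underlying session object (the exact attribute path may vary by version), the attributes above can then be inspected directly:

```python
session = answer_response.session  # assumed: the session object carrying the answer
print(session.question)
print(session.answer)  # the answer text alone
print(session.formatted_answer)  # answer formatted with citations
print(session.context)  # the summaries of passages found for the answer
```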
`ask` is just a convenience wrapper around the real entrypoint,
which can be accessed if you'd like to run concurrent asynchronous workloads:
```python
from paperqa import Settings, agent_query

answer_response = await agent_query(
    query="What is PaperQA2?",
    settings=Settings(temperature=0.5, paper_directory="my_papers"),
)
```
The default agent is an LLM-based agent,
but you can also specify a `"fake"` agent to use a hard-coded call path of
search -> gather evidence -> answer to reduce token usage.
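For example, a minimal sketch selecting the `"fake"` agent via the bundled `AgentSettings`:

```python
from paperqa import Settings, ask
from paperqa.settings import AgentSettings

answer_response = ask(
    "What is PaperQA2?",
    settings=Settings(agent=AgentSettings(agent_type="fake")),
)
```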
### Manual (No Agent) Adding/Querying Documents
Normally via agent execution, the agent invokes the search tool,
which adds documents to the `Docs` object for you behind the scenes.
However, if you prefer fine-grained control,
you can directly interact with the `Docs` object.
Note that manually adding and querying `Docs` does not impact performance;
it just removes the automation associated with an agent picking the documents to add.
```python
from paperqa import Docs, Settings

# valid extensions include .pdf, .txt, .md, and .html
doc_paths = ("myfile.pdf", "myotherfile.pdf")

# Prepare the Docs object by adding a bunch of documents
docs = Docs()
for doc_path in doc_paths:
    await docs.aadd(doc_path)

# Set up how we want to query the Docs object
settings = Settings()
settings.llm = "claude-3-5-sonnet-20240620"
settings.answer.answer_max_sources = 3

# Query the Docs object to get an answer
session = await docs.aquery("What is PaperQA2?", settings=settings)
print(session)
```
### Async
PaperQA2 is written to be used asynchronously.
The synchronous API is just a wrapper around the async.
Here are the methods and their `async` equivalents:
| Sync | Async |
| ------------------- | -------------------- |
| `Docs.add` | `Docs.aadd` |
| `Docs.add_file` | `Docs.aadd_file` |
| `Docs.add_url` | `Docs.aadd_url` |
| `Docs.get_evidence` | `Docs.aget_evidence` |
| `Docs.query` | `Docs.aquery` |
The synchronous version just calls the async version in a loop.
Most modern Python environments support `async` natively (including Jupyter notebooks!).
So you can do this in a Jupyter Notebook:
```python
import asyncio

from paperqa import Docs


async def main() -> None:
    docs = Docs()
    # valid extensions include .pdf, .txt, .md, and .html
    for doc in ("myfile.pdf", "myotherfile.pdf"):
        await docs.aadd(doc)
    session = await docs.aquery("What is PaperQA2?")
    print(session)


asyncio.run(main())
```
### Choosing Model
By default, PaperQA2 uses OpenAI's `gpt-4o-2024-11-20` model for the
`summary_llm`, `llm`, and `agent_llm`.
Please see the [Settings Cheatsheet](#settings-cheatsheet)
for more information on these settings.
PaperQA2 also defaults to using OpenAI's `text-embedding-3-small` model for the `embedding` setting.
If you don't have an OpenAI API key, you can use a different embedding model.
More information about embedding models can be found [in the "Embedding Model" section](#embedding-model).
We use the [`lmi`](https://github.com/Future-House/ldp/tree/main/packages/lmi) package for our LLM interface,
which in turn uses `litellm` to support many LLM providers.
You can adjust this easily to use any model supported by `litellm`:
```python
from paperqa import Settings, ask

answer_response = ask(
    "What is PaperQA2?",
    settings=Settings(
        llm="gpt-4o-mini", summary_llm="gpt-4o-mini", paper_directory="my_papers"
    ),
)
```
To use Claude, make sure you set the `ANTHROPIC_API_KEY` environment variable.
In this example, we also use a different embedding model.
Please make sure to `pip install paper-qa[local]` to use a local embedding model.
```python
from paperqa import Settings, ask
from paperqa.settings import AgentSettings

answer_response = ask(
    "What is PaperQA2?",
    settings=Settings(
        llm="claude-3-5-sonnet-20240620",
        summary_llm="claude-3-5-sonnet-20240620",
        agent=AgentSettings(agent_llm="claude-3-5-sonnet-20240620"),
        # SEE: https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1
        embedding="st-multi-qa-MiniLM-L6-cos-v1",
    ),
)
```
Or Gemini, by setting the `GEMINI_API_KEY` obtained from Google AI Studio:
```python
from paperqa import Settings, ask
from paperqa.settings import AgentSettings

answer_response = ask(
    "What is PaperQA2?",
    settings=Settings(
        llm="gemini/gemini-2.0-flash",
        summary_llm="gemini/gemini-2.0-flash",
        agent=AgentSettings(agent_llm="gemini/gemini-2.0-flash"),
        embedding="gemini/text-embedding-004",
    ),
)
```
#### Locally Hosted
You can use llama.cpp as the LLM.
Note that you should be using relatively large models,
because PaperQA2 requires following a lot of instructions.
You won't get good performance with 7B models.
The easiest way to get set up is to download a [llamafile](https://github.com/Mozilla-Ocho/llamafile)
and execute it with `-cb -np 4 -a my-llm-model --embedding`,
which will enable continuous batching and embeddings.
```python
from paperqa import Settings, ask

local_llm_config = dict(
    model_list=[
        dict(
            model_name="my_llm_model",
            litellm_params=dict(
                model="my-llm-model",
                api_base="http://localhost:8080/v1",
                api_key="sk-no-key-required",
                temperature=0.1,
                frequency_penalty=1.5,
                max_tokens=512,
            ),
        )
    ]
)

answer_response = ask(
    "What is PaperQA2?",
    settings=Settings(
        llm="my-llm-model",
        llm_config=local_llm_config,
        summary_llm="my-llm-model",
        summary_llm_config=local_llm_config,
    ),
)
```
Models hosted with `ollama` are also supported.
To run the example below, make sure you have downloaded `llama3.2` and `mxbai-embed-large` via ollama.
```python
from paperqa import Settings, ask

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3.2",
            "litellm_params": {
                "model": "ollama/llama3.2",
                "api_base": "http://localhost:11434",
            },
        }
    ]
}

answer_response = ask(
    "What is PaperQA2?",
    settings=Settings(
        llm="ollama/llama3.2",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3.2",
        summary_llm_config=local_llm_config,
        embedding="ollama/mxbai-embed-large",
    ),
)
```
### Embedding Model
Embeddings are used to retrieve k texts (where k is specified via `Settings.answer.evidence_k`)
for re-ranking and contextual summarization.
If you don't want to use embeddings, but instead just fetch all chunks,
disable "evidence retrieval" via the `Settings.answer.evidence_retrieval` setting.
PaperQA2 defaults to using OpenAI (`text-embedding-3-small`) embeddings,
but has flexible options for both vector stores and embedding choices.
#### Specifying the Embedding Model
The simplest way to specify the embedding model is via `Settings.embedding`:
```python
from paperqa import Settings, ask

answer_response = ask(
    "What is PaperQA2?",
    settings=Settings(embedding="text-embedding-3-large"),
)
```
`embedding` accepts any embedding model name supported by litellm.
PaperQA2 also supports an embedding input of `"hybrid-<model_name>"`,
e.g. `"hybrid-text-embedding-3-small"`, to use a hybrid of a sparse keyword embedding (based on a token modulo embedding)
and a dense vector embedding, where any litellm model can be used as the dense model name.
`"sparse"` can be used to use a sparse keyword embedding only.
Embedding models are used to create PaperQA2's index of the full-text embedding vectors (`texts_index` argument).
The embedding model can be specified as a setting when you are adding new papers to the `Docs` object:
```python
from paperqa import Docs, Settings

docs = Docs()
for doc in ("myfile.pdf", "myotherfile.pdf"):
    await docs.aadd(doc, settings=Settings(embedding="text-embedding-3-large"))
```
Note that PaperQA2 uses Numpy as a dense vector store.
Its design of using an initial keyword search reduces the number of chunks
needed for each answer to a relatively small number (< 1k).
Therefore, `NumpyVectorStore` is a good place to start; it's a simple in-memory store, without an index.
However, if a larger-than-memory vector store is needed,
you can use an external vector database like [Qdrant](https://qdrant.tech/) via the `QdrantVectorStore` class.
The hybrid embeddings can be customized:
```python
from paperqa import (
    Docs,
    HybridEmbeddingModel,
    SparseEmbeddingModel,
    LiteLLMEmbeddingModel,
)

model = HybridEmbeddingModel(
    models=[LiteLLMEmbeddingModel(), SparseEmbeddingModel(ndim=1024)]
)
docs = Docs()
for doc in ("myfile.pdf", "myotherfile.pdf"):
    await docs.aadd(doc, embedding_model=model)
```
The sparse embedding (keyword) models default to having 256 dimensions,
but this can be specified via the `ndim` argument.
#### Local Embedding Models (Sentence Transformers)
You can use a `SentenceTransformerEmbeddingModel` if you install `sentence-transformers`,
which is [a local embedding library](https://sbert.net/) with support for HuggingFace models and more.
You can install it by adding the `local` extras.
```sh
pip install paper-qa[local]
```
and then prefix embedding model names with `st-`:
```python
from paperqa import Settings, ask

answer_response = ask(
    "What is PaperQA2?",
    settings=Settings(embedding="st-multi-qa-MiniLM-L6-cos-v1"),
)
```
or with a hybrid model:
```python
from paperqa import Settings, ask

answer_response = ask(
    "What is PaperQA2?",
    settings=Settings(embedding="hybrid-st-multi-qa-MiniLM-L6-cos-v1"),
)
```
### Adjusting number of sources
You can adjust the number of sources (passages of text) to reduce token usage or add more context.
`k` refers to the top k most relevant and diverse (possibly from different sources) passages.
Each passage is sent to the LLM to summarize, or to determine if it is irrelevant.
After this step, a limit of `max_sources` is applied so that the final answer can fit into the LLM context window.
Thus, `k` > `max_sources`, and `max_sources` is the number of sources used in the final answer.
```python
from paperqa import Settings

settings = Settings()
settings.answer.answer_max_sources = 3
settings.answer.evidence_k = 5

await docs.aquery(
    "What is PaperQA2?",
    settings=settings,
)
```
### Using Code or HTML
You do not need to use papers -- you can use code or raw HTML.
Note that this tool is focused on answering questions,
so it won't do well at writing code.
One note is that the tool cannot infer citations from code,
so you will need to provide them yourself.
```python
import glob
import os

from paperqa import Docs

source_files = glob.glob("**/*.js", recursive=True)

docs = Docs()
for f in source_files:
    # this assumes the file names are unique in code
    await docs.aadd(
        f, citation="File " + os.path.basename(f), docname=os.path.basename(f)
    )
session = await docs.aquery("Where is the search bar in the header defined?")
print(session)
```
### Using External DB/Vector DB and Caching
You may want to cache parsed texts and embeddings in an external database or file.
You can then build a Docs object from those directly:
```python
from paperqa import Docs, Doc, Text

docs = Docs()
for ... in my_docs:
    doc = Doc(docname=..., citation=..., dockey=...)
    texts = [Text(text=..., name=..., doc=doc) for ... in my_texts]
    docs.add_texts(texts, doc)
```
### Creating Index
Indexes will be placed in the [home directory][home dir] by default.
This can be controlled via the `PQA_HOME` environment variable.
Indexes are made by reading files in the `Settings.paper_directory`.
By default, we recursively read from subdirectories of the paper directory,
unless disabled using `Settings.index_recursively`.
The paper directory is not modified in any way; it's just read from.
[home dir]: https://docs.python.org/3/library/pathlib.html#pathlib.Path.home
#### Manifest Files
The indexing process attempts to infer paper metadata like title and DOI
using LLM-powered text processing.
You can avoid this point of uncertainty using a "manifest" file,
which is a CSV containing three columns (order doesn't matter):
- `file_location`: relative path to the paper's PDF within the index directory
- `doi`: DOI of the paper
- `title`: title of the paper
By providing this information,
we ensure queries to metadata providers like Crossref are accurate.
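A minimal example manifest (the file names, DOIs, and titles below are placeholders, aside from the PaperQA2 paper itself):

```csv
file_location,doi,title
paperqa2.pdf,10.48550/arXiv.2409.13740,Language agents achieve superhuman synthesis of scientific knowledge
subfolder/other_paper.pdf,10.1234/example.5678,An Example Paper Title
```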
### Reusing Index
The local search indexes are built based on a hash of the current `Settings` object.
So make sure you properly specify `paper_directory` in your `Settings` object.
In general, it's advisable to:
1. Pre-build an index given a folder of papers (can take several minutes)
2. Reuse the index to perform many queries
```python
import os

from paperqa import Settings
from paperqa.agents.main import agent_query
from paperqa.agents.search import get_directory_index


async def amain(folder_of_papers: str | os.PathLike) -> None:
    settings = Settings(paper_directory=folder_of_papers)

    # 1. Build the index. Note an index name is autogenerated when unspecified
    built_index = await get_directory_index(settings=settings)
    print(settings.get_index_name())  # Display the autogenerated index name
    print(await built_index.index_files)  # Display the index contents

    # 2. Use the settings as many times as you want with ask
    answer_response_1 = await agent_query(
        query="What is a cool retrieval augmented generation technique?",
        settings=settings,
    )
    answer_response_2 = await agent_query(
        query="What is PaperQA2?",
        settings=settings,
    )
```
### Using Clients Directly
One of the most powerful features of PaperQA2 is its ability to combine data from multiple metadata sources.
For example, [Unpaywall](https://unpaywall.org/) can provide open access status/direct links to PDFs,
[Crossref](https://www.crossref.org/) can provide bibtex,
and [Semantic Scholar](https://www.semanticscholar.org/) can provide citation licenses.
Here's a short demo of how to do this:
```python
from paperqa.clients import DocMetadataClient, ALL_CLIENTS
client = DocMetadataClient(clients=ALL_CLIENTS)
details = await client.query(title="Augmenting language models with chemistry tools")
print(details.formatted_citation)
# Andres M. Bran, Sam Cox, Oliver Schilter, Carlo Baldassari,
# Andrew D. White, and Philippe Schwaller.
# Augmenting large language models with chemistry tools. Nature Machine Intelligence,
# 6:525-535, May 2024. URL: https://doi.org/10.1038/s42256-024-00832-8,
# doi:10.1038/s42256-024-00832-8.
# This article has 243 citations and is from a domain leading peer-reviewed journal.
print(details.citation_count)
# 243
print(details.license)
# cc-by
print(details.pdf_url)
# https://www.nature.com/articles/s42256-024-00832-8.pdf
```
The `client.query` is meant to check for exact title matches.
It's a bit robust (e.g. to casing or a missing word).
There are duplicate titles out there, though - so you can also add authors to disambiguate,
or you can provide a DOI directly: `client.query(doi="10.1038/s42256-024-00832-8")`.
If you're doing this at a large scale,
you may not want to use `ALL_CLIENTS` (just omit the argument),
and you can specify which specific fields you want in order to speed up queries.
For example:
```python
details = await client.query(
    title="Augmenting large language models with chemistry tools",
    authors=["Andres M. Bran", "Sam Cox"],
    fields=["title", "doi"],
)
```
This will return much faster than the first query, and we'll be certain the authors match.
## Settings Cheatsheet
| Setting | Default | Description |
| -------------------------------------------- | -------------------------------------- | ------------------------------------------------------------------------------------------------------- |
| `llm` | `"gpt-4o-2024-11-20"` | Default LLM for most things, including answers. Should be 'best' LLM. |
| `llm_config` | `None` | Optional configuration for `llm`. |
| `summary_llm` | `"gpt-4o-2024-11-20"` | Default LLM for summaries and parsing citations. |
| `summary_llm_config` | `None` | Optional configuration for `summary_llm`. |
| `embedding` | `"text-embedding-3-small"` | Default embedding model for texts. |
| `embedding_config` | `None` | Optional configuration for `embedding`. |
| `temperature` | `0.0` | Temperature for LLMs. |
| `batch_size` | `1` | Batch size for calling LLMs. |
| `texts_index_mmr_lambda` | `1.0` | Lambda for MMR in text index. |
| `verbosity` | `0` | Integer verbosity level for logging (0-3). 3 = all LLM/Embeddings calls logged. |
| `answer.evidence_k` | `10` | Number of evidence pieces to retrieve. |
| `answer.evidence_detailed_citations` | `True` | Include detailed citations in summaries. |
| `answer.evidence_retrieval` | `True` | Use retrieval vs processing all docs. |
| `answer.evidence_summary_length` | `"about 100 words"` | Length of evidence summary. |
| `answer.evidence_skip_summary` | `False` | Whether to skip summarization. |
| `answer.answer_max_sources` | `5` | Max number of sources for an answer. |
| `answer.max_answer_attempts` | `None` | Max attempts to generate an answer. |
| `answer.answer_length` | `"about 200 words, but can be longer"` | Length of final answer. |
| `answer.max_concurrent_requests` | `4` | Max concurrent requests to LLMs. |
| `answer.answer_filter_extra_background` | `False` | Whether to cite background info from model. |
| `answer.get_evidence_if_no_contexts` | `True` | Allow lazy evidence gathering. |
| `parsing.chunk_size` | `5000` | Characters per chunk (0 for no chunking). |
| `parsing.page_size_limit` | `1,280,000` | Character limit per page. |
| `parsing.pdfs_use_block_parsing` | `False` | Opt-in flag for block-based PDF parsing over text-based PDF parsing. |
| `parsing.use_doc_details` | `True` | Whether to get metadata details for docs. |
| `parsing.overlap` | `250` | Characters to overlap chunks. |
| `parsing.defer_embedding` | `False` | Whether to defer embedding until summarization. |
| `parsing.parse_pdf` | `parse_pdf_to_pages` | Function to parse PDF files. |
| `parsing.configure_pdf_parser` | `setup_pymupdf_python_logging` | Callable to configure the PDF parser within `parse_pdf`, useful for behaviors such as enabling logging. |
| `parsing.chunking_algorithm` | `ChunkingOptions.SIMPLE_OVERLAP` | Algorithm for chunking. |
| `parsing.doc_filters` | `None` | Optional filters for allowed documents. |
| `parsing.use_human_readable_clinical_trials` | `False` | Parse clinical trial JSONs into readable text. |
| `prompt.summary` | `summary_prompt` | Template for summarizing text, must contain variables matching `summary_prompt`. |
| `prompt.qa` | `qa_prompt` | Template for QA, must contain variables matching `qa_prompt`. |
| `prompt.select` | `select_paper_prompt` | Template for selecting papers, must contain variables matching `select_paper_prompt`. |
| `prompt.pre` | `None` | Optional pre-prompt templated with just the original question to append information before a qa prompt. |
| `prompt.post` | `None` | Optional post-processing prompt that can access PQASession fields. |
| `prompt.system` | `default_system_prompt` | System prompt for the model. |
| `prompt.use_json` | `True` | Whether to use JSON formatting. |
| `prompt.summary_json` | `summary_json_prompt` | JSON-specific summary prompt. |
| `prompt.summary_json_system` | `summary_json_system_prompt` | System prompt for JSON summaries. |
| `prompt.context_outer` | `CONTEXT_OUTER_PROMPT` | Prompt for how to format all contexts in generate answer. |
| `prompt.context_inner` | `CONTEXT_INNER_PROMPT` | Prompt for how to format a single context in generate answer. Must contain 'name' and 'text' variables. |
| `agent.agent_llm` | `"gpt-4o-2024-11-20"` | Model to use for agent making tool selections. |
| `agent.agent_llm_config` | `None` | Optional configuration for `agent_llm`. |
| `agent.agent_type` | `"ToolSelector"` | Type of agent to use. |
| `agent.agent_config` | `None` | Optional kwarg for AGENT constructor. |
| `agent.agent_system_prompt` | `env_system_prompt` | Optional system prompt message. |
| `agent.agent_prompt` | `env_reset_prompt` | Agent prompt. |
| `agent.return_paper_metadata` | `False` | Whether to include paper title/year in search tool results. |
| `agent.search_count` | `8` | Search count. |
| `agent.timeout` | `500.0` | Timeout on agent execution (seconds). |
| `agent.should_pre_search` | `False` | Whether to run search tool before invoking agent. |
| `agent.tool_names` | `None` | Optional override on tools to provide the agent. |
| `agent.max_timesteps` | `None` | Optional upper limit on environment steps. |
| `agent.index.name` | `None` | Optional name of the index. |
| `agent.index.paper_directory` | `Current working directory` | Directory containing papers to be indexed. |
| `agent.index.manifest_file` | `None` | Path to manifest CSV with document attributes. |
| `agent.index.index_directory` | `pqa_directory("indexes")` | Directory to store PQA indexes. |
| `agent.index.use_absolute_paper_directory` | `False` | Whether to use absolute paper directory path. |
| `agent.index.recurse_subdirectories` | `True` | Whether to recurse into subdirectories when indexing. |
| `agent.index.concurrency` | `5` | Number of concurrent filesystem reads. |
| `agent.index.sync_with_paper_directory` | `True` | Whether to sync index with paper directory on load. |
| `agent.index.files_filter` | `lambda f: f.suffix in {...}` | Filter function to mark files in the paper directory to index. |
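The dotted names above map onto nested settings objects. As a sketch (this assumes `AnswerSettings` and `ParsingSettings` live in `paperqa.settings`, alongside the `AgentSettings` used in the examples above):

```python
from paperqa import Settings
from paperqa.settings import AgentSettings, AnswerSettings, ParsingSettings

settings = Settings(
    llm="gpt-4o-2024-11-20",
    temperature=0.0,
    answer=AnswerSettings(evidence_k=15, answer_max_sources=5),
    parsing=ParsingSettings(chunk_size=5000, overlap=250),
    agent=AgentSettings(agent_llm="gpt-4o-2024-11-20"),
)
```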
## Where do I get papers?
Well that's a really good question!
It's probably best to just download PDFs of papers you think will help answer your question and start from there.
See the detailed docs [about Zotero, OpenReview, and parsing](docs/tutorials/where_do_I_get_papers.md).
## Callbacks
To execute a function on each chunk of LLM completions,
pass one or more callback functions that will be called with each chunk.
For example, to get a typewriter view of the completions, you can do:
```python
from paperqa import Docs


def typewriter(chunk: str) -> None:
    print(chunk, end="")


docs = Docs()
# add some docs...
await docs.aquery("What is PaperQA2?", callbacks=[typewriter])
```
### Caching Embeddings
In general, embeddings are cached when you pickle a `Docs` regardless of what vector store you use.
So as long as you save your underlying `Docs` object,
you should be able to avoid re-embedding your documents.
## Customizing Prompts
You can customize any of the prompts using settings.
```python
from paperqa import Docs, Settings

my_qa_prompt = (
    "Answer the question '{question}'\n"
    "Use the context below if helpful. "
    "You can cite the context using the key like (pqac-abcd1234). "
    "If there is insufficient context, write a poem "
    "about how you cannot answer.\n\n"
    "Context: {context}"
)

docs = Docs()
settings = Settings()
settings.prompts.qa = my_qa_prompt
await docs.aquery("What is PaperQA2?", settings=settings)
```
### Pre and Post Prompts
Following the syntax above, you can also include prompts that
are executed before and after the query.
For example, you can use a post prompt to critique the answer.
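As a sketch, here is a post prompt that critiques the answer (this assumes the post prompt can reference session fields such as `{question}` and `{answer}`, per `prompt.post` in the cheatsheet above):

```python
from paperqa import Docs, Settings

settings = Settings()
settings.prompts.post = (
    "We just answered '{question}' with:\n\n{answer}\n\n"
    "Critique this answer and note any unsupported claims."
)

docs = Docs()
# add some docs...
await docs.aquery("What is PaperQA2?", settings=settings)
```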
## FAQ
### How come I get different results than your papers?
Internally at FutureHouse, we have a slightly different set of tools.
We're trying to get some of them, like citation traversal, into this repo.
However, we have APIs and licenses to access research papers that we cannot share openly.
Similarly, in our research papers' results we do not start with the known relevant PDFs.
Our agent has to identify them using keyword search over all papers, rather than just a subset.
We're gradually aligning these two versions of PaperQA,
but until there is an open-source way to freely access papers (even just open source papers)
you will need to provide PDFs yourself.
### How is this different from LlamaIndex or LangChain?
[LangChain](https://github.com/langchain-ai/langchain)
and [LlamaIndex](https://github.com/run-llama/llama_index)
are both frameworks for working with LLM applications,
with abstractions made for agentic workflows and retrieval augmented generation.
Over time, the PaperQA team chose to become framework-agnostic,
instead outsourcing LLM drivers to [LiteLLM][LiteLLM general docs]
and using no framework besides Pydantic for its tools.
PaperQA focuses on scientific papers and their metadata.
PaperQA can be reimplemented using either LlamaIndex or LangChain.
For example, our `GatherEvidence` tool can be reimplemented
as a retriever with an LLM-based re-ranking and contextual summary.
There is similar work with the tree response method in LlamaIndex.
### Can I save or load?
The `Docs` class can be pickled and unpickled.
This is useful if you want to save the embeddings of the documents and then load them later.
```python
import pickle

# save
with open("my_docs.pkl", "wb") as f:
    pickle.dump(docs, f)

# load
with open("my_docs.pkl", "rb") as f:
    docs = pickle.load(f)
```
## Reproduction
Contained in [docs/2024-10-16_litqa2-splits.json5](docs/2024-10-16_litqa2-splits.json5)
are the question IDs used in train, evaluation, and test splits,
as well as paper DOIs used to build the splits' indexes.
- Train and eval splits: question IDs come from
[LAB-Bench's LitQA2 question IDs](https://github.com/Future-House/LAB-Bench/blob/main/LitQA2/litqa-v2-public.jsonl).
- Test split: question IDs come from
[aviary-paper-data's LitQA2 question IDs](https://huggingface.co/datasets/futurehouse/aviary-paper-data).
There are multiple papers slowly building PaperQA, shown below in [Citation](#citation).
To reproduce:
- `skarlinski2024language`: train and eval splits are applicable.
The test split remains held out.
- `narayanan2024aviarytraininglanguageagents`: train, eval, and test splits are applicable.
Example on how to use LitQA for evaluation can be found in
[aviary.litqa](https://github.com/Future-House/aviary/tree/main/packages/litqa#running-litqa).
## Citation
Please read and cite the following papers if you use this software:
```bibtex
@article{narayanan2024aviarytraininglanguageagents,
title = {Aviary: training language agents on challenging scientific tasks},
author = {
Siddharth Narayanan and
James D. Braza and
Ryan-Rhys Griffiths and
Manu Ponnapati and
Albert Bou and
Jon Laurent and
Ori Kabeli and
Geemi Wellawatte and
Sam Cox and
Samuel G. Rodriques and
Andrew D. White},
journal = {arXiv preprint arXiv:2412.21154},
year = {2024},
url = {https://doi.org/10.48550/arXiv.2412.21154},
}
```
```bibtex
@article{skarlinski2024language,
title = {Language agents achieve superhuman synthesis of scientific knowledge},
author = {
Michael D. Skarlinski and
Sam Cox and
Jon M. Laurent and
James D. Braza and
Michaela Hinks and
Michael J. Hammerling and
Manvitha Ponnapati and
Samuel G. Rodriques and
Andrew D. White},
journal = {arXiv preprint arXiv:2409.13740}
year = {2024},
url = {https://doi.org/10.48550/arXiv.2409.13740}
}
```
```bibtex
@article{lala2023paperqa,
title = {PaperQA: Retrieval-Augmented Generative Agent for Scientific Research},
author = {
Jakub Lála and
Odhran O'Donoghue and
Aleksandar Shtedritski and
Sam Cox and
Samuel G. Rodriques and
Andrew D. White},
journal = {arXiv preprint arXiv:2312.07559},
year = {2023},
url = {https://doi.org/10.48550/arXiv.2312.07559}
}
```
Raw data
{
"_id": null,
"home_page": null,
"name": "paper-qa",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.11",
"maintainer_email": "James Braza <jamesbraza@gmail.com>, Michael Skarlinski <michael.skarlinski@gmail.com>, Andrew White <white.d.andrew@gmail.com>",
"keywords": "question answering",
"author": null,
"author_email": "FutureHouse technical staff <hello@futurehouse.org>",
"download_url": "https://files.pythonhosted.org/packages/58/b6/e15588f342f18ecaf4bcf07145fb4a5c1ad54112cfe236ccdf04e7d3ec4d/paper_qa-5.23.0.tar.gz",
"platform": null,
"description": "# PaperQA2\n\n[](https://github.com/Future-House/paper-qa)\n[](https://badge.fury.io/py/paper-qa)\n[](https://github.com/Future-House/paper-qa)\n\n\n\nPaperQA2 is a package for doing high-accuracy retrieval augmented generation (RAG) on PDFs or text files,\nwith a focus on the scientific literature.\nSee our [recent 2024 paper](https://paper.wikicrow.ai)\nto see examples of PaperQA2's superhuman performance in scientific tasks like\nquestion answering, summarization, and contradiction detection.\n\n<!--TOC-->\n\n- [Quickstart](#quickstart)\n - [Example Output](#example-output)\n- [What is PaperQA2](#what-is-paperqa2)\n - [PaperQA2 vs PaperQA](#paperqa2-vs-paperqa)\n - [What's New in Version 5 (aka PaperQA2)?](#whats-new-in-version-5-aka-paperqa2)\n - [PaperQA2 Algorithm](#paperqa2-algorithm)\n- [Installation](#installation)\n- [CLI Usage](#cli-usage)\n - [Bundled Settings](#bundled-settings)\n - [Rate Limits](#rate-limits)\n- [Library Usage](#library-usage)\n - [Agentic Adding/Querying Documents](#agentic-addingquerying-documents)\n - [Manual (No Agent) Adding/Querying Documents](#manual-no-agent-addingquerying-documents)\n - [Async](#async)\n - [Choosing Model](#choosing-model)\n - [Locally Hosted](#locally-hosted)\n - [Embedding Model](#embedding-model)\n - [Specifying the Embedding Model](#specifying-the-embedding-model)\n - [Local Embedding Models (Sentence Transformers)](#local-embedding-models-sentence-transformers)\n - [Adjusting number of sources](#adjusting-number-of-sources)\n - [Using Code or HTML](#using-code-or-html)\n - [Using External DB/Vector DB and Caching](#using-external-dbvector-db-and-caching)\n - [Creating Index](#creating-index)\n - [Manifest Files](#manifest-files)\n - [Reusing Index](#reusing-index)\n - [Using Clients Directly](#using-clients-directly)\n- [Settings Cheatsheet](#settings-cheatsheet)\n- [Where do I get papers?](#where-do-i-get-papers)\n- [Callbacks](#callbacks)\n - [Caching Embeddings](#caching-embeddings)\n- [Customizing Prompts](#customizing-prompts)\n - [Pre and Post Prompts](#pre-and-post-prompts)\n- [FAQ](#faq)\n - [How come I get different results than your papers?](#how-come-i-get-different-results-than-your-papers)\n - [How is this different from LlamaIndex or LangChain?](#how-is-this-different-from-llamaindex-or-langchain)\n - [Can I save or load?](#can-i-save-or-load)\n- [Reproduction](#reproduction)\n- [Citation](#citation)\n\n<!--TOC-->\n\n## Quickstart\n\nIn this example we take a folder of research paper PDFs,\nmagically get their metadata - including citation counts with a retraction check,\nthen parse and cache PDFs into a full-text search index,\nand finally answer the user question with an LLM agent.\n\n```bash\npip install paper-qa\nmkdir my_papers\ncurl -o my_papers/PaperQA2.pdf https://arxiv.org/pdf/2409.13740\ncd my_papers\npqa ask 'What is PaperQA2?'\n```\n\n### Example Output\n\nQuestion: Has anyone designed neural networks that compute with proteins or DNA?\n\n> The claim that neural networks have been designed to compute with DNA is supported by multiple sources.\n> The work by Qian, Winfree, and Bruck demonstrates the use of DNA strand displacement cascades\n> to construct neural network components, such as artificial neurons and associative memories,\n> using a DNA-based system (Qian2011Neural pages 1-2, Qian2011Neural pages 15-16, Qian2011Neural pages 54-56).\n> This research includes the implementation of a 3-bit XOR gate and a four-neuron Hopfield associative memory,\n> showcasing the potential of DNA 
for neural network computation.\n> Additionally, the application of deep learning techniques to genomics,\n> which involves computing with DNA sequences, is well-documented.\n> Studies have applied convolutional neural networks (CNNs) to predict genomic features such as\n> transcription factor binding and DNA accessibility (Eraslan2019Deep pages 4-5, Eraslan2019Deep pages 5-6).\n> These models leverage DNA sequences as input data,\n> effectively using neural networks to compute with DNA.\n> While the provided excerpts do not explicitly mention protein-based neural network computation,\n> they do highlight the use of neural networks in tasks related to protein sequences,\n> such as predicting DNA-protein binding (Zeng2016Convolutional pages 1-2).\n> However, the primary focus remains on DNA-based computation.\n\n## What is PaperQA2\n\nPaperQA2 is engineered to be the best agentic RAG model for working with scientific papers.\nHere are some features:\n\n- A simple interface to get good answers with grounded responses containing in-text citations.\n- State-of-the-art implementation including document metadata-awareness\n in embeddings and LLM-based re-ranking and contextual summarization (RCS).\n- Support for agentic RAG, where a language agent can iteratively refine queries and answers.\n- Automatic redundant fetching of paper metadata,\n including citation and journal quality data from multiple providers.\n- A usable full-text search engine for a local repository of PDF/text files.\n- A robust interface for customization, with default support for all [LiteLLM][LiteLLM providers] models.\n\n[LiteLLM providers]: https://docs.litellm.ai/docs/providers\n[LiteLLM general docs]: https://docs.litellm.ai/docs/\n\nBy default, it uses [OpenAI embeddings](https://platform.openai.com/docs/guides/embeddings)\nand [models](https://platform.openai.com/docs/models) with a Numpy vector DB to embed and search documents.\nHowever, you can easily use other closed-source, open-source models or embeddings (see details below).\n\nPaperQA2 depends on some awesome libraries/APIs that make our repo possible.\nHere are some in no particular order:\n\n1. [Semantic Scholar](https://www.semanticscholar.org/)\n2. [Crossref](https://www.crossref.org/)\n3. [Unpaywall](https://unpaywall.org/)\n4. [Pydantic](https://docs.pydantic.dev/latest/)\n5. [tantivy](https://github.com/quickwit-oss/tantivy)\n6. [LiteLLM][LiteLLM general docs]\n7. [pybtex](https://pybtex.org/)\n8. 
[PyMuPDF](https://pymupdf.readthedocs.io/en/latest/)\n\n### PaperQA2 vs PaperQA\n\nWe've been working hard on fundamental upgrades for a while and have mostly followed [SemVer](https://semver.org/),\nmeaning we've incremented the major version number on each breaking change.\nThis brings us to the current major version number v5.\nSo why is the repo now called PaperQA2?\nWe wanted to mark the fact that we've\nexceeded human performance on [many important metrics](https://paper.wikicrow.ai).\nSo we call version 5 and onward PaperQA2,\nand versions before it PaperQA1, to denote the significant change in performance.\nWe recognize that we are challenged at naming and counting at FutureHouse,\nso we reserve the right at any time to arbitrarily change the name to PaperCrow.\n\n### What's New in Version 5 (aka PaperQA2)?\n\nVersion 5 added:\n\n- A CLI `pqa`\n- Agentic workflows invoking tools for\n paper search, gathering evidence, and generating an answer\n- Removal of much of the statefulness from the `Docs` object\n- A migration to LiteLLM for compatibility with many LLM providers\n as well as centralized rate limits and cost tracking\n- A bundled set of configurations (read [this section here](#bundled-settings))\n containing known-good hyperparameters\n\nNote that `Docs` objects pickled from prior versions of `PaperQA` are incompatible with version 5,\nand will need to be rebuilt.\nAlso, our minimum Python version was increased to Python 3.11.\n\n### PaperQA2 Algorithm\n\nTo understand PaperQA2, let's start with the pieces of the underlying algorithm.\nThe default workflow of PaperQA2 is as follows:\n\n| Phase | PaperQA2 Actions |\n| ---------------------- | ------------------------------------------------------------------------- |\n| **1. Paper Search** | - Get candidate papers from LLM-generated keyword query |\n| | - Chunk, embed, and add candidate papers to state |\n| **2. Gather Evidence** | - Embed query into vector |\n| | - Rank top _k_ document chunks in current state |\n| | - Create scored summary of each chunk in the context of the current query |\n| | - Use LLM to re-score and select most relevant summaries |\n| **3. Generate Answer** | - Put best summaries into prompt with context |\n| | - Generate answer with prompt |\n\nThe tools can be invoked in any order by a language agent.\nFor example, an LLM agent might do both a narrow and a broad search,\nor use different phrasing for the gather evidence step than for the generate answer step.\n\n## Installation\n\nFor a non-development setup,\ninstall PaperQA2 (aka version 5) from [PyPI](https://pypi.org/project/paper-qa/).\nNote version 5 requires Python 3.11+.\n\n```bash\npip install paper-qa>=5\n```\n\nFor a development setup,\nplease refer to the [CONTRIBUTING.md](CONTRIBUTING.md) file.\n\nPaperQA2 uses an LLM to operate,\nso you'll need to either set an appropriate [API key environment variable][LiteLLM providers]\n(e.g. `export OPENAI_API_KEY=sk-...`)\nor set up an open source LLM server (e.g. 
using [llamafile](https://github.com/Mozilla-Ocho/llamafile)).\nAny LiteLLM-compatible model can be configured for use with PaperQA2.\n\nIf you need to index a large set of papers (100+),\nyou will likely want an API key for both\n[Crossref](https://www.crossref.org/documentation/metadata-plus/metadata-plus-keys/)\nand [Semantic Scholar](https://www.semanticscholar.org/product/api#api-key),\nwhich will allow you to avoid hitting public rate limits on these metadata services.\nThose can be exported as `CROSSREF_API_KEY` and `SEMANTIC_SCHOLAR_API_KEY` variables.\n\n## CLI Usage\n\nThe fastest way to test PaperQA2 is via the CLI. First navigate to a directory with some papers and use the `pqa` CLI:\n\n```bash\npqa ask 'What is PaperQA2?'\n```\n\nYou will see PaperQA2 index your local PDF files,\ngathering the necessary metadata for each of them\n(using [Crossref](https://www.crossref.org/) and [Semantic Scholar](https://www.semanticscholar.org/)),\nsearch over that index, then break the files into chunked evidence contexts,\nrank them, and ultimately generate an answer.\nThe next time this directory is queried,\nyour index will already be built (save for any differences detected, like newly added papers),\nso it will skip the indexing and chunking steps.\n\nAll prior answers will be indexed and stored;\nyou can view them by querying via the `search` subcommand,\nor access them yourself in your `PQA_HOME` directory,\nwhich defaults to `~/.pqa/`.\n\n```bash\npqa -i 'answers' search 'ranking and contextual summarization'\n```\n\nPaperQA2 is highly configurable. When running from the command line,\n`pqa --help` shows all options and short descriptions.\nFor example, to run with a higher temperature:\n\n```bash\npqa --temperature 0.5 ask 'What is PaperQA2?'\n```\n\nYou can view all settings with `pqa view`.\nAnother useful option is switching to other bundled settings - for example,\n`fast` is a setting that answers more quickly,\nand you can view it with `pqa -s fast view`.\n
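\nFor instance, to answer a question using the `fast` setting (the same `-s` flag used above with `view`):\n\n```bash\npqa -s fast ask 'What is PaperQA2?'\n```\n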
\nMaybe you have some new settings you want to save? You can do that with:\n\n```bash\npqa -s my_new_settings --temperature 0.5 --llm foo-bar-5 save\n```\n\nand then you can use it with:\n\n```bash\npqa -s my_new_settings ask 'What is PaperQA2?'\n```\n\nIf you run `pqa` with a command that requires new indexing,\nsay if you change the default `chunk_size`,\na new index will automatically be created for you.\n\n```bash\npqa --parsing.chunk_size 5000 ask 'What is PaperQA2?'\n```\n\nYou can also use `pqa` to do full-text search, with use of LLMs, via the `search` command.\nFor example, let's save the index from a directory and give it a name:\n\n```bash\npqa -i nanomaterials index\n```\n\nNow you can search for papers about thermoelectrics:\n\n```bash\npqa -i nanomaterials search thermoelectrics\n```\n\nor use the normal `ask`:\n\n```bash\npqa -i nanomaterials ask 'Are there nm scale features in thermoelectric materials?'\n```\n\nBoth the CLI and module have pre-configured settings based on prior performance and our publications;\nthey can be invoked as follows:\n\n```bash\npqa --settings <setting name> \\\n ask 'Are there nm scale features in thermoelectric materials?'\n```\n\n### Bundled Settings\n\nInside [`paperqa/configs`](paperqa/configs) we bundle known useful settings:\n\n| Setting Name | Description |\n| ------------ | ----------- |\n| high_quality | Highly performant, relatively expensive (due to having `evidence_k` = 15) query using a `ToolSelector` agent. |\n| fast | Setting to get answers cheaply and quickly. |\n| wikicrow | Setting to emulate the Wikipedia article writing used in our WikiCrow publication. |\n| contracrow | Setting to find contradictions in papers; your query should be a claim that needs to be flagged as a contradiction (or not). |\n| debug | Setting useful solely for debugging, but not in any actual application beyond debugging. |\n| tier1_limits | Settings that match OpenAI rate limits for each tier; you can use `tier<1-5>_limits` to specify the tier. |
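\n\nThese bundled configurations can also be used when calling PaperQA2 from Python. Below is a minimal sketch; it assumes your installed version exposes the `Settings.from_name` helper for loading a bundled configuration by name (if it does not, construct an equivalent `Settings(...)` manually):\n\n```python\nfrom paperqa import Settings, ask\n\n# Load a bundled configuration by name (assumed helper;\n# see paperqa/configs for the available names)\nsettings = Settings.from_name(\"fast\")\n\nanswer_response = ask(\"What is PaperQA2?\", settings=settings)\n```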
\n\n### Rate Limits\n\nIf you are hitting rate limits, say with the OpenAI Tier 1 plan, you can add them into PaperQA2.\nFor each OpenAI tier, a pre-built setting exists to limit usage.\n\n```bash\npqa --settings 'tier1_limits' ask 'What is PaperQA2?'\n```\n\nThis will restrict your system to the [tier1_limits](paperqa/configs/tier1_limits.json),\nand slow down your queries to accommodate.\n\nYou can also specify them manually with any rate limit string that matches the specification in\nthe [limits](https://limits.readthedocs.io/en/stable/quickstart.html#rate-limit-string-notation) module:\n\n```bash\npqa --summary_llm_config '{\"rate_limit\": {\"gpt-4o-2024-11-20\": \"30000 per 1 minute\"}}' \\\n ask 'What is PaperQA2?'\n```\n\nOr by adding them into a `Settings` object, if calling imperatively:\n\n```python\nfrom paperqa import Settings, ask\n\nanswer_response = ask(\n \"What is PaperQA2?\",\n settings=Settings(\n llm_config={\"rate_limit\": {\"gpt-4o-2024-11-20\": \"30000 per 1 minute\"}},\n summary_llm_config={\"rate_limit\": {\"gpt-4o-2024-11-20\": \"30000 per 1 minute\"}},\n ),\n)\n```\n\n## Library Usage\n\nPaperQA2's full workflow can be accessed via Python directly:\n\n```python\nfrom paperqa import Settings, ask\n\nanswer_response = ask(\n \"What is PaperQA2?\",\n settings=Settings(temperature=0.5, paper_directory=\"my_papers\"),\n)\n```\n\nPlease see our [installation docs](#installation) for how to install the package from PyPI.\n\n### Agentic Adding/Querying Documents\n\nThe answer object has the following attributes:\n`formatted_answer`, `answer` (answer alone), `question`, and `context` (the summaries of passages found for the answer).\n`ask` will use the `SearchPapers` tool, which will query a local index of files;\nyou can specify this location via the `Settings` object:\n\n```python\nfrom paperqa import Settings, ask\n\nanswer_response = ask(\n \"What is PaperQA2?\",\n settings=Settings(temperature=0.5, paper_directory=\"my_papers\"),\n)\n```\n\n`ask` is just a convenience wrapper around the real entrypoint,\nwhich can be accessed if you'd like to run concurrent asynchronous workloads:\n\n```python\nfrom paperqa import Settings, agent_query\n\nanswer_response = await agent_query(\n query=\"What is PaperQA2?\",\n settings=Settings(temperature=0.5, paper_directory=\"my_papers\"),\n)\n```\n\nThe default is an LLM-based agent,\nbut you can also specify a `\"fake\"` agent to use a hard-coded call path of\nsearch -> gather evidence -> answer to reduce token usage.\n\n### Manual (No Agent) Adding/Querying Documents\n\nNormally, during agent execution, the agent invokes the search tool,\nwhich adds documents to the `Docs` object for you behind the scenes.\nHowever, if you prefer fine-grained control,\nyou can directly interact with the `Docs` object.\n\nNote that manually adding and querying `Docs` does not impact performance.\nIt just removes the automation associated with an agent picking the documents to add.\n\n```python\nfrom paperqa import Docs, Settings\n\n# valid extensions include .pdf, .txt, .md, and .html\ndoc_paths = (\"myfile.pdf\", \"myotherfile.pdf\")\n\n# Prepare the Docs object by adding a bunch of documents\ndocs = Docs()\nfor doc_path in doc_paths:\n await docs.aadd(doc_path)\n\n# Set up how we want to query the Docs object\nsettings = Settings()\nsettings.llm = \"claude-3-5-sonnet-20240620\"\nsettings.answer.answer_max_sources = 3\n\n# Query the Docs object to get an answer\nsession = await docs.aquery(\"What is PaperQA2?\", 
settings=settings)\nprint(session)\n```\n\n### Async\n\nPaperQA2 is written to be used asynchronously.\nThe synchronous API is just a wrapper around the async.\nHere are the methods and their `async` equivalents:\n\n| Sync | Async |\n| ------------------- | -------------------- |\n| `Docs.add` | `Docs.aadd` |\n| `Docs.add_file` | `Docs.aadd_file` |\n| `Docs.add_url` | `Docs.aadd_url` |\n| `Docs.get_evidence` | `Docs.aget_evidence` |\n| `Docs.query` | `Docs.aquery` |\n\nThe synchronous version just calls the async version in an event loop.\nMost modern Python environments support `async` natively (including Jupyter notebooks!).\nSo you can do this in a Jupyter Notebook:\n\n```python\nimport asyncio\nfrom paperqa import Docs\n\n\nasync def main() -> None:\n docs = Docs()\n # valid extensions include .pdf, .txt, .md, and .html\n for doc in (\"myfile.pdf\", \"myotherfile.pdf\"):\n await docs.aadd(doc)\n\n session = await docs.aquery(\"What is PaperQA2?\")\n print(session)\n\n\nasyncio.run(main())\n```\n\n### Choosing Model\n\nBy default, PaperQA2 uses OpenAI's `gpt-4o-2024-11-20` model for the\n`summary_llm`, `llm`, and `agent_llm`.\nPlease see the [Settings Cheatsheet](#settings-cheatsheet)\nfor more information on these settings.\nPaperQA2 also defaults to using OpenAI's `text-embedding-3-small` model for the `embedding` setting.\nIf you don't have an OpenAI API key, you can use a different embedding model.\nMore information about embedding models can be found [in the \"Embedding Model\" section](#embedding-model).\n\nWe use the [`lmi`](https://github.com/Future-House/ldp/tree/main/packages/lmi) package for our LLM interface,\nwhich in turn uses `litellm` to support many LLM providers.\nYou can adjust this easily to use any model supported by `litellm`:\n\n```python\nfrom paperqa import Settings, ask\n\nanswer_response = ask(\n \"What is PaperQA2?\",\n settings=Settings(\n llm=\"gpt-4o-mini\", summary_llm=\"gpt-4o-mini\", paper_directory=\"my_papers\"\n ),\n)\n```\n\nTo use Claude, make sure you set the `ANTHROPIC_API_KEY` environment variable.\nIn this example, we also use a different embedding model.\nPlease make sure to `pip install paper-qa[local]` to use a local embedding model.\n\n```python\nfrom paperqa import Settings, ask\nfrom paperqa.settings import AgentSettings\n\nanswer_response = ask(\n \"What is PaperQA2?\",\n settings=Settings(\n llm=\"claude-3-5-sonnet-20240620\",\n summary_llm=\"claude-3-5-sonnet-20240620\",\n agent=AgentSettings(agent_llm=\"claude-3-5-sonnet-20240620\"),\n # SEE: https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1\n embedding=\"st-multi-qa-MiniLM-L6-cos-v1\",\n ),\n)\n```\n\nOr Gemini, by setting the `GEMINI_API_KEY` from Google AI Studio:\n\n```python\nfrom paperqa import Settings, ask\nfrom paperqa.settings import AgentSettings\n\nanswer_response = ask(\n \"What is PaperQA2?\",\n settings=Settings(\n llm=\"gemini/gemini-2.0-flash\",\n summary_llm=\"gemini/gemini-2.0-flash\",\n agent=AgentSettings(agent_llm=\"gemini/gemini-2.0-flash\"),\n embedding=\"gemini/text-embedding-004\",\n ),\n)\n```\n\n#### Locally Hosted\n\nYou can use llama.cpp as the LLM.\nNote that you should be using relatively large models,\nbecause PaperQA2 requires the model to follow a lot of instructions.\nYou won't get good performance with 7B models.\n\nThe easiest way to get set up is to download a [llamafile](https://github.com/Mozilla-Ocho/llamafile)\nand execute it with `-cb -np 4 -a my-llm-model --embedding`,\nwhich will enable continuous batching and 
embeddings.\n\n```python\nfrom paperqa import Settings, ask\n\nlocal_llm_config = dict(\n model_list=[\n dict(\n model_name=\"my_llm_model\",\n litellm_params=dict(\n model=\"my-llm-model\",\n api_base=\"http://localhost:8080/v1\",\n api_key=\"sk-no-key-required\",\n temperature=0.1,\n frequency_penalty=1.5,\n max_tokens=512,\n ),\n )\n ]\n)\n\nanswer_response = ask(\n \"What is PaperQA2?\",\n settings=Settings(\n llm=\"my-llm-model\",\n llm_config=local_llm_config,\n summary_llm=\"my-llm-model\",\n summary_llm_config=local_llm_config,\n ),\n)\n```\n\nModels hosted with `ollama` are also supported.\nTo run the example below make sure you have downloaded llama3.2 and mxbai-embed-large via ollama.\n\n```python\nfrom paperqa import Settings, ask\n\nlocal_llm_config = {\n \"model_list\": [\n {\n \"model_name\": \"ollama/llama3.2\",\n \"litellm_params\": {\n \"model\": \"ollama/llama3.2\",\n \"api_base\": \"http://localhost:11434\",\n },\n }\n ]\n}\n\nanswer_response = ask(\n \"What is PaperQA2?\",\n settings=Settings(\n llm=\"ollama/llama3.2\",\n llm_config=local_llm_config,\n summary_llm=\"ollama/llama3.2\",\n summary_llm_config=local_llm_config,\n embedding=\"ollama/mxbai-embed-large\",\n ),\n)\n```\n\n### Embedding Model\n\nEmbeddings are used to retrieve k texts (where k is specified via `Settings.answer.evidence_k`)\nfor re-ranking and contextual summarization.\nIf you don't want to use embeddings, but instead just fetch all chunks,\ndisable \"evidence retrieval\" via the `Settings.answer.evidence_retrieval` setting.\n\nPaperQA2 defaults to using OpenAI (`text-embedding-3-small`) embeddings,\nbut has flexible options for both vector stores and embedding choices.\n\n#### Specifying the Embedding Model\n\nThe simplest way to specify the embedding model is via `Settings.embedding`:\n\n```python\nfrom paperqa import Settings, ask\n\nanswer_response = ask(\n \"What is PaperQA2?\",\n settings=Settings(embedding=\"text-embedding-3-large\"),\n)\n```\n\n`embedding` accepts any embedding model name supported by litellm.\nPaperQA2 also supports an embedding input of `\"hybrid-<model_name>\"`\ni.e. 
`\"hybrid-text-embedding-3-small\"` to use a hybrid sparse keyword (based on a token modulo embedding)\nand dense vector embedding, where any litellm model can be used in the dense model name.\n`\"sparse\"` can be used to use a sparse keyword embedding only.\n\nEmbedding models are used to create PaperQA2's index of the full-text embedding vectors (`texts_index` argument).\nThe embedding model can be specified as a setting when you are adding new papers to the `Docs` object:\n\n```python\nfrom paperqa import Docs, Settings\n\ndocs = Docs()\nfor doc in (\"myfile.pdf\", \"myotherfile.pdf\"):\n await docs.aadd(doc, settings=Settings(embedding=\"text-embedding-large-3\"))\n```\n\nNote that PaperQA2 uses Numpy as a dense vector store.\nIts design of using a keyword search initially reduces the number of chunks\nneeded for each answer to a relatively small number < 1k.\nTherefore, `NumpyVectorStore` is a good place to start, it's a simple in-memory store, without an index.\nHowever, if a larger-than-memory vector store is needed,\nyou can an external vector database like [Qdrant](https://qdrant.tech/) via the `QdrantVectorStore` class.\n\nThe hybrid embeddings can be customized:\n\n```python\nfrom paperqa import (\n Docs,\n HybridEmbeddingModel,\n SparseEmbeddingModel,\n LiteLLMEmbeddingModel,\n)\n\n\nmodel = HybridEmbeddingModel(\n models=[LiteLLMEmbeddingModel(), SparseEmbeddingModel(ndim=1024)]\n)\ndocs = Docs()\nfor doc in (\"myfile.pdf\", \"myotherfile.pdf\"):\n await docs.aadd(doc, embedding_model=model)\n```\n\nThe sparse embedding (keyword) models default to having 256 dimensions,\nbut this can be specified via the `ndim` argument.\n\n#### Local Embedding Models (Sentence Transformers)\n\nYou can use a `SentenceTransformerEmbeddingModel` model if you install `sentence-transformers`,\nwhich is [a local embedding library](https://sbert.net/) with support for HuggingFace models and more.\nYou can install it by adding the `local` extras.\n\n```sh\npip install paper-qa[local]\n```\n\nand then prefix embedding model names with `st-`:\n\n```python\nfrom paperqa import Settings, ask\n\nanswer_response = ask(\n \"What is PaperQA2?\",\n settings=Settings(embedding=\"st-multi-qa-MiniLM-L6-cos-v1\"),\n)\n```\n\nor with a hybrid model\n\n```python\nfrom paperqa import Settings, ask\n\nanswer_response = ask(\n \"What is PaperQA2?\",\n settings=Settings(embedding=\"hybrid-st-multi-qa-MiniLM-L6-cos-v1\"),\n)\n```\n\n### Adjusting number of sources\n\nYou can adjust the numbers of sources (passages of text) to reduce token usage or add more context.\n`k` refers to the top k most relevant and diverse (may from different sources) passages.\nEach passage is sent to the LLM to summarize, or determine if it is irrelevant.\nAfter this step, a limit of `max_sources` is applied so that the final answer can fit into the LLM context window.\nThus, `k` > `max_sources` and `max_sources` is the number of sources used in the final answer.\n\n```python\nfrom paperqa import Settings\n\nsettings = Settings()\nsettings.answer.answer_max_sources = 3\nsettings.answer.evidence_k = 5\n\nawait docs.aquery(\n \"What is PaperQA2?\",\n settings=settings,\n)\n```\n\n### Using Code or HTML\n\nYou do not need to use papers -- you can use code or raw HTML.\nNote that this tool is focused on answering questions,\nso it won't do well at writing code.\nOne note is that the tool cannot infer citations from code,\nso you will need to provide them yourself.\n\n```python\nimport glob\nimport os\nfrom paperqa import Docs\n\nsource_files = 
glob.glob(\"**/*.js\")\n\ndocs = Docs()\nfor f in source_files:\n # this assumes the file names are unique in code\n await docs.aadd(\n f, citation=\"File \" + os.path.basename(f), docname=os.path.basename(f)\n )\nsession = await docs.aquery(\"Where is the search bar in the header defined?\")\nprint(session)\n```\n\n### Using External DB/Vector DB and Caching\n\nYou may want to cache parsed texts and embeddings in an external database or file.\nYou can then build a Docs object from those directly:\n\n```python\nfrom paperqa import Docs, Doc, Text\n\ndocs = Docs()\n\nfor ... in my_docs:\n doc = Doc(docname=..., citation=..., dockey=..., citation=...)\n texts = [Text(text=..., name=..., doc=doc) for ... in my_texts]\n docs.add_texts(texts, doc)\n```\n\n### Creating Index\n\nIndexes will be placed in the [home directory][home dir] by default.\nThis can be controlled via the `PQA_HOME` environment variable.\n\nIndexes are made by reading files in the `Settings.paper_directory`.\nBy default, we recursively read from subdirectories of the paper directory,\nunless disabled using `Settings.index_recursively`.\nThe paper directory is not modified in any way, it's just read from.\n\n[home dir]: https://docs.python.org/3/library/pathlib.html#pathlib.Path.home\n\n#### Manifest Files\n\nThe indexing process attempts to infer paper metadata like title and DOI\nusing LLM-powered text processing.\nYou can avoid this point of uncertainty using a \"manifest\" file,\nwhich is a CSV containing three columns (order doesn't matter):\n\n- `file_location`: relative path to the paper's PDF within the index directory\n- `doi`: DOI of the paper\n- `title`: title of the paper\n\nBy providing this information,\nwe ensure queries to metadata providers like Crossref are accurate.\n\n### Reusing Index\n\nThe local search indexes are built based on a hash of the current `Settings` object.\nSo make sure you properly specify the `paper_directory` to your `Settings` object.\nIn general, it's advisable to:\n\n1. Pre-build an index given a folder of papers (can take several minutes)\n2. Reuse the index to perform many queries\n\n```python\nimport os\n\nfrom paperqa import Settings\nfrom paperqa.agents.main import agent_query\nfrom paperqa.agents.search import get_directory_index\n\n\nasync def amain(folder_of_papers: str | os.PathLike) -> None:\n settings = Settings(paper_directory=folder_of_papers)\n\n # 1. Build the index. Note an index name is autogenerated when unspecified\n built_index = await get_directory_index(settings=settings)\n print(settings.get_index_name()) # Display the autogenerated index name\n print(await built_index.index_files) # Display the index contents\n\n # 2. 
Use the settings as many times as you want with ask\n answer_response_1 = await agent_query(\n query=\"What is a cool retrieval augmented generation technique?\",\n settings=settings,\n )\n answer_response_2 = await agent_query(\n query=\"What is PaperQA2?\",\n settings=settings,\n )\n```\n\n### Using Clients Directly\n\nOne of the most powerful features of PaperQA2 is its ability to combine data from multiple metadata sources.\nFor example, [Unpaywall](https://unpaywall.org/) can provide open access status/direct links to PDFs,\n[Crossref](https://www.crossref.org/) can provide bibtex,\nand [Semantic Scholar](https://www.semanticscholar.org/) can provide citation licenses.\nHere's a short demo of how to do this:\n\n```python\nfrom paperqa.clients import DocMetadataClient, ALL_CLIENTS\n\nclient = DocMetadataClient(clients=ALL_CLIENTS)\ndetails = await client.query(title=\"Augmenting language models with chemistry tools\")\n\nprint(details.formatted_citation)\n# Andres M. Bran, Sam Cox, Oliver Schilter, Carlo Baldassari,\n# Andrew D. White, and Philippe Schwaller.\n# Augmenting large language models with chemistry tools. Nature Machine Intelligence,\n# 6:525-535, May 2024. URL: https://doi.org/10.1038/s42256-024-00832-8,\n# doi:10.1038/s42256-024-00832-8.\n# This article has 243 citations and is from a domain leading peer-reviewed journal.\n\nprint(details.citation_count)\n# 243\n\nprint(details.license)\n# cc-by\n\nprint(details.pdf_url)\n# https://www.nature.com/articles/s42256-024-00832-8.pdf\n```\n\nthe `client.query` is meant to check for exact matches of title.\nIt's a bit robust (like to casing, missing a word).\nThere are duplicates for titles though - so you can also add authors to disambiguate.\nOr you can provide a doi directly `client.query(doi=\"10.1038/s42256-024-00832-8\")`.\n\nIf you're doing this at a large scale,\nyou may not want to use `ALL_CLIENTS` (just omit the argument)\nand you can specify which specific fields you want to speed up queries.\nFor example:\n\n```python\ndetails = await client.query(\n title=\"Augmenting large language models with chemistry tools\",\n authors=[\"Andres M. Bran\", \"Sam Cox\"],\n fields=[\"title\", \"doi\"],\n)\n```\n\nwill return much faster than the first query and we'll be certain the authors match.\n\n## Settings Cheatsheet\n\n| Setting | Default | Description |\n| -------------------------------------------- | -------------------------------------- | ------------------------------------------------------------------------------------------------------- |\n| `llm` | `\"gpt-4o-2024-11-20\"` | Default LLM for most things, including answers. Should be 'best' LLM. |\n| `llm_config` | `None` | Optional configuration for `llm`. |\n| `summary_llm` | `\"gpt-4o-2024-11-20\"` | Default LLM for summaries and parsing citations. |\n| `summary_llm_config` | `None` | Optional configuration for `summary_llm`. |\n| `embedding` | `\"text-embedding-3-small\"` | Default embedding model for texts. |\n| `embedding_config` | `None` | Optional configuration for `embedding`. |\n| `temperature` | `0.0` | Temperature for LLMs. |\n| `batch_size` | `1` | Batch size for calling LLMs. |\n| `texts_index_mmr_lambda` | `1.0` | Lambda for MMR in text index. |\n| `verbosity` | `0` | Integer verbosity level for logging (0-3). 3 = all LLM/Embeddings calls logged. |\n| `answer.evidence_k` | `10` | Number of evidence pieces to retrieve. |\n| `answer.evidence_detailed_citations` | `True` | Include detailed citations in summaries. 
|\n| `answer.evidence_retrieval` | `True` | Use retrieval vs processing all docs. |\n| `answer.evidence_summary_length` | `\"about 100 words\"` | Length of evidence summary. |\n| `answer.evidence_skip_summary` | `False` | Whether to skip summarization. |\n| `answer.answer_max_sources` | `5` | Max number of sources for an answer. |\n| `answer.max_answer_attempts` | `None` | Max attempts to generate an answer. |\n| `answer.answer_length` | `\"about 200 words, but can be longer\"` | Length of final answer. |\n| `answer.max_concurrent_requests` | `4` | Max concurrent requests to LLMs. |\n| `answer.answer_filter_extra_background` | `False` | Whether to cite background info from model. |\n| `answer.get_evidence_if_no_contexts` | `True` | Allow lazy evidence gathering. |\n| `parsing.chunk_size` | `5000` | Characters per chunk (0 for no chunking). |\n| `parsing.page_size_limit` | `1,280,000` | Character limit per page. |\n| `parsing.pdfs_use_block_parsing` | `False` | Opt-in flag for block-based PDF parsing over text-based PDF parsing. |\n| `parsing.use_doc_details` | `True` | Whether to get metadata details for docs. |\n| `parsing.overlap` | `250` | Characters to overlap chunks. |\n| `parsing.defer_embedding` | `False` | Whether to defer embedding until summarization. |\n| `parsing.parse_pdf` | `parse_pdf_to_pages` | Function to parse PDF files. |\n| `parsing.configure_pdf_parser` | `setup_pymupdf_python_logging` | Callable to configure the PDF parser within `parse_pdf`, useful for behaviors such as enabling logging. |\n| `parsing.chunking_algorithm` | `ChunkingOptions.SIMPLE_OVERLAP` | Algorithm for chunking. |\n| `parsing.doc_filters` | `None` | Optional filters for allowed documents. |\n| `parsing.use_human_readable_clinical_trials` | `False` | Parse clinical trial JSONs into readable text. |\n| `prompt.summary` | `summary_prompt` | Template for summarizing text, must contain variables matching `summary_prompt`. |\n| `prompt.qa` | `qa_prompt` | Template for QA, must contain variables matching `qa_prompt`. |\n| `prompt.select` | `select_paper_prompt` | Template for selecting papers, must contain variables matching `select_paper_prompt`. |\n| `prompt.pre` | `None` | Optional pre-prompt templated with just the original question to append information before a qa prompt. |\n| `prompt.post` | `None` | Optional post-processing prompt that can access PQASession fields. |\n| `prompt.system` | `default_system_prompt` | System prompt for the model. |\n| `prompt.use_json` | `True` | Whether to use JSON formatting. |\n| `prompt.summary_json` | `summary_json_prompt` | JSON-specific summary prompt. |\n| `prompt.summary_json_system` | `summary_json_system_prompt` | System prompt for JSON summaries. |\n| `prompt.context_outer` | `CONTEXT_OUTER_PROMPT` | Prompt for how to format all contexts in generate answer. |\n| `prompt.context_inner` | `CONTEXT_INNER_PROMPT` | Prompt for how to format a single context in generate answer. Must contain 'name' and 'text' variables. |\n| `agent.agent_llm` | `\"gpt-4o-2024-11-20\"` | Model to use for agent making tool selections. |\n| `agent.agent_llm_config` | `None` | Optional configuration for `agent_llm`. |\n| `agent.agent_type` | `\"ToolSelector\"` | Type of agent to use. |\n| `agent.agent_config` | `None` | Optional kwarg for AGENT constructor. |\n| `agent.agent_system_prompt` | `env_system_prompt` | Optional system prompt message. |\n| `agent.agent_prompt` | `env_reset_prompt` | Agent prompt. 
|\n| `agent.return_paper_metadata` | `False` | Whether to include paper title/year in search tool results. |\n| `agent.search_count` | `8` | Search count. |\n| `agent.timeout` | `500.0` | Timeout on agent execution (seconds). |\n| `agent.should_pre_search` | `False` | Whether to run search tool before invoking agent. |\n| `agent.tool_names` | `None` | Optional override on tools to provide the agent. |\n| `agent.max_timesteps` | `None` | Optional upper limit on environment steps. |\n| `agent.index.name` | `None` | Optional name of the index. |\n| `agent.index.paper_directory` | `Current working directory` | Directory containing papers to be indexed. |\n| `agent.index.manifest_file` | `None` | Path to manifest CSV with document attributes. |\n| `agent.index.index_directory` | `pqa_directory(\"indexes\")` | Directory to store PQA indexes. |\n| `agent.index.use_absolute_paper_directory` | `False` | Whether to use absolute paper directory path. |\n| `agent.index.recurse_subdirectories` | `True` | Whether to recurse into subdirectories when indexing. |\n| `agent.index.concurrency` | `5` | Number of concurrent filesystem reads. |\n| `agent.index.sync_with_paper_directory` | `True` | Whether to sync index with paper directory on load. |\n| `agent.index.files_filter` | `lambda f: f.suffix in {...}` | Filter function to mark files in the paper directory to index. |\n\n## Where do I get papers?\n\nWell that's a really good question!\nIt's probably best to just download PDFs of papers you think will help answer your question and start from there.\n\nSee detailed docs [about zotero, openreview and parsing](docs/tutorials/where_do_I_get_papers.md)\n\n## Callbacks\n\nTo execute a function on each chunk of LLM completions,\nyou need to provide a function that can be executed on each chunk.\nFor example, to get a typewriter view of the completions, you can do:\n\n```python\nfrom paperqa import Docs\n\n\ndef typewriter(chunk: str) -> None:\n print(chunk, end=\"\")\n\n\ndocs = Docs()\n\n# add some docs...\n\nawait docs.aquery(\"What is PaperQA2?\", callbacks=[typewriter])\n```\n\n### Caching Embeddings\n\nIn general, embeddings are cached when you pickle a `Docs` regardless of what vector store you use.\nSo as long as you save your underlying `Docs` object,\nyou should be able to avoid re-embedding your documents.\n\n## Customizing Prompts\n\nYou can customize any of the prompts using settings.\n\n```python\nfrom paperqa import Docs, Settings\n\nmy_qa_prompt = (\n \"Answer the question '{question}'\\n\"\n \"Use the context below if helpful. \"\n \"You can cite the context using the key like (pqac-abcd1234). 
\"\n \"If there is insufficient context, write a poem \"\n \"about how you cannot answer.\\n\\n\"\n \"Context: {context}\"\n)\n\ndocs = Docs()\nsettings = Settings()\nsettings.prompts.qa = my_qa_prompt\nawait docs.aquery(\"What is PaperQA2?\", settings=settings)\n```\n\n### Pre and Post Prompts\n\nFollowing the syntax above, you can also include prompts that\nare executed after the query and before the query.\nFor example, you can use this to critique the answer.\n\n## FAQ\n\n### How come I get different results than your papers?\n\nInternally at FutureHouse, we have a slightly different set of tools.\nWe're trying to get some of them, like citation traversal, into this repo.\nHowever, we have APIs and licenses to access research papers that we cannot share openly.\nSimilarly, in our research papers' results we do not start with the known relevant PDFs.\nOur agent has to identify them using keyword search over all papers, rather than just a subset.\nWe're gradually aligning these two versions of PaperQA,\nbut until there is an open-source way to freely access papers (even just open source papers)\nyou will need to provide PDFs yourself.\n\n### How is this different from LlamaIndex or LangChain?\n\n[LangChain](https://github.com/langchain-ai/langchain)\nand [LlamaIndex](https://github.com/run-llama/llama_index)\nare both frameworks for working with LLM applications,\nwith abstractions made for agentic workflows and retrieval augmented generation.\n\nOver time, the PaperQA team over time chose to become framework-agnostic,\ninstead outsourcing LLM drivers to [LiteLLM][LiteLLM general docs]\nand no framework besides Pydantic for its tools.\nPaperQA focuses on scientific papers and their metadata.\n\nPaperQA can be reimplemented using either LlamaIndex or LangChain.\nFor example, our `GatherEvidence` tool can be reimplemented\nas a retriever with an LLM-based re-ranking and contextual summary.\nThere is similar work with the tree response method in LlamaIndex.\n\n### Can I save or load?\n\nThe `Docs` class can be pickled and unpickled.\nThis is useful if you want to save the embeddings of the documents and then load them later.\n\n```python\nimport pickle\n\n# save\nwith open(\"my_docs.pkl\", \"wb\") as f:\n pickle.dump(docs, f)\n\n# load\nwith open(\"my_docs.pkl\", \"rb\") as f:\n docs = pickle.load(f)\n```\n\n## Reproduction\n\nContained in [docs/2024-10-16_litqa2-splits.json5](docs/2024-10-16_litqa2-splits.json5)\nare the question IDs used in train, evaluation, and test splits,\nas well as paper DOIs used to build the splits' indexes.\n\n- Train and eval splits: question IDs come from\n [LAB-Bench's LitQA2 question IDs](https://github.com/Future-House/LAB-Bench/blob/main/LitQA2/litqa-v2-public.jsonl).\n- Test split: questions IDs come from\n [aviary-paper-data's LitQA2 question IDs](https://huggingface.co/datasets/futurehouse/aviary-paper-data).\n\nThere are multiple papers slowly building PaperQA, shown below in [Citation](#citation).\nTo reproduce:\n\n- `skarlinski2024language`: train and eval splits are applicable.\n The test split remains held out.\n- `narayanan2024aviarytraininglanguageagents`: train, eval, and test splits are applicable.\n\nExample on how to use LitQA for evaluation can be found in\n[aviary.litqa](https://github.com/Future-House/aviary/tree/main/packages/litqa#running-litqa).\n\n## Citation\n\nPlease read and cite the following papers if you use this software:\n\n```bibtex\n@article{narayanan2024aviarytraininglanguageagents,\n title = {Aviary: training language agents 
on challenging scientific tasks},\n author = {\n Siddharth Narayanan and\n James D. Braza and\n Ryan-Rhys Griffiths and\n Manu Ponnapati and\n Albert Bou and\n Jon Laurent and\n Ori Kabeli and\n Geemi Wellawatte and\n Sam Cox and\n Samuel G. Rodriques and\n Andrew D. White},\n journal = {arXiv preprint arXiv:2412.21154},\n year = {2024},\n url = {https://doi.org/10.48550/arXiv.2412.21154},\n}\n```\n\n```bibtex\n@article{skarlinski2024language,\n title = {Language agents achieve superhuman synthesis of scientific knowledge},\n author = {\n Michael D. Skarlinski and\n Sam Cox and\n Jon M. Laurent and\n James D. Braza and\n Michaela Hinks and\n Michael J. Hammerling and\n Manvitha Ponnapati and\n Samuel G. Rodriques and\n Andrew D. White},\n journal = {arXiv preprint arXiv:2409.13740},\n year = {2024},\n url = {https://doi.org/10.48550/arXiv.2409.13740}\n}\n```\n\n```bibtex\n@article{lala2023paperqa,\n title = {PaperQA: Retrieval-Augmented Generative Agent for Scientific Research},\n author = {\n Jakub L\u00e1la and\n Odhran O'Donoghue and\n Aleksandar Shtedritski and\n Sam Cox and\n Samuel G. Rodriques and\n Andrew D. White},\n journal = {arXiv preprint arXiv:2312.07559},\n year = {2023},\n url = {https://doi.org/10.48550/arXiv.2312.07559}\n}\n```\n",
"bugtrack_url": null,
"license": "Apache License\n Version 2.0, January 2004\n http://www.apache.org/licenses/\n \n TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n \n 1. Definitions.\n \n \"License\" shall mean the terms and conditions for use, reproduction,\n and distribution as defined by Sections 1 through 9 of this document.\n \n \"Licensor\" shall mean the copyright owner or entity authorized by\n the copyright owner that is granting the License.\n \n \"Legal Entity\" shall mean the union of the acting entity and all\n other entities that control, are controlled by, or are under common\n control with that entity. For the purposes of this definition,\n \"control\" means (i) the power, direct or indirect, to cause the\n direction or management of such entity, whether by contract or\n otherwise, or (ii) ownership of fifty percent (50%) or more of the\n outstanding shares, or (iii) beneficial ownership of such entity.\n \n \"You\" (or \"Your\") shall mean an individual or Legal Entity\n exercising permissions granted by this License.\n \n \"Source\" form shall mean the preferred form for making modifications,\n including but not limited to software source code, documentation\n source, and configuration files.\n \n \"Object\" form shall mean any form resulting from mechanical\n transformation or translation of a Source form, including but\n not limited to compiled object code, generated documentation,\n and conversions to other media types.\n \n \"Work\" shall mean the work of authorship, whether in Source or\n Object form, made available under the License, as indicated by a\n copyright notice that is included in or attached to the work\n (an example is provided in the Appendix below).\n \n \"Derivative Works\" shall mean any work, whether in Source or Object\n form, that is based on (or derived from) the Work and for which the\n editorial revisions, annotations, elaborations, or other modifications\n represent, as a whole, an original work of authorship. For the purposes\n of this License, Derivative Works shall not include works that remain\n separable from, or merely link (or bind by name) to the interfaces of,\n the Work and Derivative Works thereof.\n \n \"Contribution\" shall mean any work of authorship, including\n the original version of the Work and any modifications or additions\n to that Work or Derivative Works thereof, that is intentionally\n submitted to Licensor for inclusion in the Work by the copyright owner\n or by an individual or Legal Entity authorized to submit on behalf of\n the copyright owner. For the purposes of this definition, \"submitted\"\n means any form of electronic, verbal, or written communication sent\n to the Licensor or its representatives, including but not limited to\n communication on electronic mailing lists, source code control systems,\n and issue tracking systems that are managed by, or on behalf of, the\n Licensor for the purpose of discussing and improving the Work, but\n excluding communication that is conspicuously marked or otherwise\n designated in writing by the copyright owner as \"Not a Contribution.\"\n \n \"Contributor\" shall mean Licensor and any individual or Legal Entity\n on behalf of whom a Contribution has been received by Licensor and\n subsequently incorporated within the Work.\n \n 2. Grant of Copyright License. 
Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n copyright license to reproduce, prepare Derivative Works of,\n publicly display, publicly perform, sublicense, and distribute the\n Work and such Derivative Works in Source or Object form.\n \n 3. Grant of Patent License. Subject to the terms and conditions of\n this License, each Contributor hereby grants to You a perpetual,\n worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n (except as stated in this section) patent license to make, have made,\n use, offer to sell, sell, import, and otherwise transfer the Work,\n where such license applies only to those patent claims licensable\n by such Contributor that are necessarily infringed by their\n Contribution(s) alone or by combination of their Contribution(s)\n with the Work to which such Contribution(s) was submitted. If You\n institute patent litigation against any entity (including a\n cross-claim or counterclaim in a lawsuit) alleging that the Work\n or a Contribution incorporated within the Work constitutes direct\n or contributory patent infringement, then any patent licenses\n granted to You under this License for that Work shall terminate\n as of the date such litigation is filed.\n \n 4. Redistribution. You may reproduce and distribute copies of the\n Work or Derivative Works thereof in any medium, with or without\n modifications, and in Source or Object form, provided that You\n meet the following conditions:\n \n (a) You must give any other recipients of the Work or\n Derivative Works a copy of this License; and\n \n (b) You must cause any modified files to carry prominent notices\n stating that You changed the files; and\n \n (c) You must retain, in the Source form of any Derivative Works\n that You distribute, all copyright, patent, trademark, and\n attribution notices from the Source form of the Work,\n excluding those notices that do not pertain to any part of\n the Derivative Works; and\n \n (d) If the Work includes a \"NOTICE\" text file as part of its\n distribution, then any Derivative Works that You distribute must\n include a readable copy of the attribution notices contained\n within such NOTICE file, excluding those notices that do not\n pertain to any part of the Derivative Works, in at least one\n of the following places: within a NOTICE text file distributed\n as part of the Derivative Works; within the Source form or\n documentation, if provided along with the Derivative Works; or,\n within a display generated by the Derivative Works, if and\n wherever such third-party notices normally appear. The contents\n of the NOTICE file are for informational purposes only and\n do not modify the License. You may add Your own attribution\n notices within Derivative Works that You distribute, alongside\n or as an addendum to the NOTICE text from the Work, provided\n that such additional attribution notices cannot be construed\n as modifying the License.\n \n You may add Your own copyright statement to Your modifications and\n may provide additional or different license terms and conditions\n for use, reproduction, or distribution of Your modifications, or\n for any such Derivative Works as a whole, provided Your use,\n reproduction, and distribution of the Work otherwise complies with\n the conditions stated in this License.\n \n 5. Submission of Contributions. 
Unless You explicitly state otherwise,\n any Contribution intentionally submitted for inclusion in the Work\n by You to the Licensor shall be under the terms and conditions of\n this License, without any additional terms or conditions.\n Notwithstanding the above, nothing herein shall supersede or modify\n the terms of any separate license agreement you may have executed\n with Licensor regarding such Contributions.\n \n 6. Trademarks. This License does not grant permission to use the trade\n names, trademarks, service marks, or product names of the Licensor,\n except as required for reasonable and customary use in describing the\n origin of the Work and reproducing the content of the NOTICE file.\n \n 7. Disclaimer of Warranty. Unless required by applicable law or\n agreed to in writing, Licensor provides the Work (and each\n Contributor provides its Contributions) on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n implied, including, without limitation, any warranties or conditions\n of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n PARTICULAR PURPOSE. You are solely responsible for determining the\n appropriateness of using or redistributing the Work and assume any\n risks associated with Your exercise of permissions under this License.\n \n 8. Limitation of Liability. In no event and under no legal theory,\n whether in tort (including negligence), contract, or otherwise,\n unless required by applicable law (such as deliberate and grossly\n negligent acts) or agreed to in writing, shall any Contributor be\n liable to You for damages, including any direct, indirect, special,\n incidental, or consequential damages of any character arising as a\n result of this License or out of the use or inability to use the\n Work (including but not limited to damages for loss of goodwill,\n work stoppage, computer failure or malfunction, or any and all\n other commercial damages or losses), even if such Contributor\n has been advised of the possibility of such damages.\n \n 9. Accepting Warranty or Additional Liability. While redistributing\n the Work or Derivative Works thereof, You may choose to offer,\n and charge a fee for, acceptance of support, warranty, indemnity,\n or other liability obligations and/or rights consistent with this\n License. However, in accepting such obligations, You may act only\n on Your own behalf and on Your sole responsibility, not on behalf\n of any other Contributor, and only if You agree to indemnify,\n defend, and hold each Contributor harmless for any liability\n incurred by, or claims asserted against, such Contributor by reason\n of your accepting any such warranty or additional liability.\n \n END OF TERMS AND CONDITIONS\n \n APPENDIX: How to apply the Apache License to your work.\n \n To apply the Apache License to your work, attach the following\n boilerplate notice, with the fields enclosed by brackets \"[]\"\n replaced with your own identifying information. (Don't include\n the brackets!) The text should be enclosed in the appropriate\n comment syntax for the file format. 
We also recommend that a\n file or class name and description of purpose be included on the\n same \"printed page\" as the copyright notice for easier\n identification within third-party archives.\n \n Copyright 2024 FutureHouse\n \n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n \n http://www.apache.org/licenses/LICENSE-2.0\n \n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n ",
"summary": "LLM Chain for answering questions from docs",
"version": "5.23.0",
"project_urls": {
"issues": "https://github.com/Future-House/paper-qa/issues",
"repository": "https://github.com/Future-House/paper-qa"
},
"split_keywords": [
"question",
"answering"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "99f1e8f113c1aa0846e592c9d1099884925692a9b70717c4b5b809acf582cf68",
"md5": "1bded0a832868ec5dade15b2c512cc1c",
"sha256": "3b4c1b8be371767d9c06e5096cf26c909f3bc57730a638fca5fc76ba6984a1af"
},
"downloads": -1,
"filename": "paper_qa-5.23.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "1bded0a832868ec5dade15b2c512cc1c",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.11",
"size": 534338,
"upload_time": "2025-07-10T02:22:13",
"upload_time_iso_8601": "2025-07-10T02:22:13.446679Z",
"url": "https://files.pythonhosted.org/packages/99/f1/e8f113c1aa0846e592c9d1099884925692a9b70717c4b5b809acf582cf68/paper_qa-5.23.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "58b6e15588f342f18ecaf4bcf07145fb4a5c1ad54112cfe236ccdf04e7d3ec4d",
"md5": "e25e80a7e2767352a5c9bb8223d90ed7",
"sha256": "c658ad5cc8afd91c19babc3db14926fa055ea8b5914275422dbdc1c4aa49be4a"
},
"downloads": -1,
"filename": "paper_qa-5.23.0.tar.gz",
"has_sig": false,
"md5_digest": "e25e80a7e2767352a5c9bb8223d90ed7",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.11",
"size": 4289937,
"upload_time": "2025-07-10T02:22:15",
"upload_time_iso_8601": "2025-07-10T02:22:15.198885Z",
"url": "https://files.pythonhosted.org/packages/58/b6/e15588f342f18ecaf4bcf07145fb4a5c1ad54112cfe236ccdf04e7d3ec4d/paper_qa-5.23.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-10 02:22:15",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "Future-House",
"github_project": "paper-qa",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "paper-qa"
}