gpt-researcher

- Name: gpt-researcher
- Version: 0.4.5
- Home page: https://github.com/assafelovic/gpt-researcher
- Summary: GPT Researcher is an autonomous agent designed for comprehensive online research on a variety of tasks.
- Upload time: 2024-05-14 16:20:20
- Author: Assaf Elovic
- License: MIT
- Requirements: beautifulsoup4, colorama, duckduckgo_search, md2pdf, playwright, openai, python-dotenv, pyyaml, uvicorn, pydantic, fastapi, python-multipart, markdown, langchain, tavily-python, arxiv, PyMuPDF, requests, jinja2, aiofiles, newspaper3k, langchain_community, SQLAlchemy, langchain-openai, mistune, python-docx, htmldocx, langchain-google-genai, lxml, websockets
# πŸ”Ž GPT Researcher
[![Official Website](https://img.shields.io/badge/Official%20Website-gptr.dev-blue?style=for-the-badge&logo=world&logoColor=white)](https://gptr.dev)
[![Discord Follow](https://dcbadge.vercel.app/api/server/2pFkc83fRq?style=for-the-badge)](https://discord.gg/MN9M86kb)

[![GitHub Repo stars](https://img.shields.io/github/stars/assafelovic/gpt-researcher?style=social)](https://github.com/assafelovic/gpt-researcher)
[![Twitter Follow](https://img.shields.io/twitter/follow/assaf_elovic?style=social)](https://twitter.com/assaf_elovic)
[![PyPI version](https://badge.fury.io/py/gpt-researcher.svg)](https://badge.fury.io/py/gpt-researcher)

<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[contributors-shield]: https://img.shields.io/github/contributors/assafelovic/gpt-researcher?style=for-the-badge&color=orange

-  [English](https://github.com/assafelovic/gpt-researcher/blob/master/README.md)
-  [δΈ­ζ–‡](https://github.com/assafelovic/gpt-researcher/blob/master/README-zh_CN.md)

**GPT Researcher is an autonomous agent designed for comprehensive online research on a variety of tasks.** 

The agent can produce detailed, factual, and unbiased research reports, with customization options for focusing on relevant resources, outlines, and lessons. Inspired by the recent [Plan-and-Solve](https://arxiv.org/abs/2305.04091) and [RAG](https://arxiv.org/abs/2005.11401) papers, GPT Researcher addresses issues of speed, determinism, and reliability, offering more stable performance and increased speed through parallelized agent work, as opposed to synchronous operations.

**Our mission is to empower individuals and organizations with accurate, unbiased, and factual information by leveraging the power of AI.**

## Why GPT Researcher?

- Forming objective conclusions through manual research can take time, sometimes weeks, to find the right resources and information.
- Current LLMs are trained on past, outdated information and carry a heavy risk of hallucination, making them nearly unusable for research tasks.
- Services that enable web search (such as ChatGPT + Web Plugin) consider only a limited set of sources and content, which in some cases results in superficial and biased answers.
- Relying on only a narrow selection of web sources can bias the conclusions drawn for research tasks.

## Demo
https://github.com/assafelovic/gpt-researcher/assets/13554167/dd6cf08f-b31e-40c6-9907-1915f52a7110

## Architecture
The main idea is to run "planner" and "execution" agents, where the planner generates questions to research, and the execution agents seek the most relevant information for each generated research question. Finally, the planner filters and aggregates all related information and creates a research report. <br /> <br /> 
The agents leverage both `gpt-3.5-turbo` and `gpt-4o` (128K context) to complete a research task. We optimize for costs by using each only when necessary. **The average research task takes around 3 minutes to complete and costs ~$0.1.**

<div align="center">
<img align="center" height="500" src="https://cowriter-images.s3.amazonaws.com/architecture.png">
</div>


More specifically:
* Create a domain-specific agent based on the research query or task.
* Generate a set of research questions that together form an objective opinion on the given task.
* For each research question, trigger a crawler agent that scrapes online resources for information relevant to the given task.
* For each scraped resource, summarize the relevant information and keep track of its sources.
* Finally, filter and aggregate all summarized sources and generate a final research report.
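The planner/executor flow above can be sketched in a few lines. Note that every function here is a hypothetical stand-in (no LLM or web calls), not the library's actual API; it only illustrates how sub-questions fan out to executors and fold back into one report.

```python
# Minimal sketch of the planner/executor pattern described above.
# All functions are illustrative stubs, not GPT Researcher's real internals.

def plan_questions(task: str) -> list[str]:
    # A real planner would prompt an LLM; here we fake three research angles.
    return [f"{task}: background", f"{task}: current state", f"{task}: outlook"]

def execute_research(question: str) -> dict:
    # A real executor would search and scrape the web for this question.
    return {"question": question, "summary": f"Findings for '{question}'", "sources": []}

def aggregate(task: str, findings: list[dict]) -> str:
    # The planner filters and merges all summaries into one report.
    sections = "\n".join(f"- {f['summary']}" for f in findings)
    return f"# Report: {task}\n{sections}"

task = "impact of GPUs on AI"
findings = [execute_research(q) for q in plan_questions(task)]
report = aggregate(task, findings)
print(report.splitlines()[0])  # β†’ # Report: impact of GPUs on AI
```

In the real system each executor runs in parallel, which is where the speed-up over synchronous operation comes from.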

## Tutorials
 - [How it Works](https://docs.tavily.com/blog/building-gpt-researcher)
 - [How to Install](https://www.loom.com/share/04ebffb6ed2a4520a27c3e3addcdde20?sid=da1848e8-b1f1-42d1-93c3-5b0b9c3b24ea)
 - [Live Demo](https://www.loom.com/share/6a3385db4e8747a1913dd85a7834846f?sid=a740fd5b-2aa3-457e-8fb7-86976f59f9b8)

## Features
- πŸ“ Generate research, outlines, resources and lessons reports
- πŸ“œ Can generate long and detailed research reports (over 2K words)
- 🌐 Aggregates over 20 web sources per research to form objective and factual conclusions
- πŸ–₯️ Includes an easy-to-use web interface (HTML/CSS/JS)
- πŸ” Scrapes web sources with javascript support
- πŸ“‚ Keeps track and context of visited and used web sources
- πŸ“„ Export research reports to PDF, Word and more...

## πŸ“– Documentation

Please see [here](https://docs.tavily.com/docs/gpt-researcher/getting-started) for full documentation on:

- Getting started (installation, setting up the environment, simple examples)
- Customization and configuration
- How-To examples (demos, integrations, docker support)
- Reference (full API docs)

## βš™οΈ Getting Started
### Installation
> **Step 0** - Install Python 3.11 or later. [See here](https://www.tutorialsteacher.com/python/install-python) for a step-by-step guide.

> **Step 1** - Download the project and navigate to its directory

```bash
git clone https://github.com/assafelovic/gpt-researcher.git
cd gpt-researcher
```

> **Step 2** - Set up API keys using one of two methods: exporting them directly or storing them in a `.env` file.

For a temporary setup on Linux/macOS shells, use the export method:

```bash
export OPENAI_API_KEY={Your OpenAI API Key here}
export TAVILY_API_KEY={Your Tavily API Key here}
```

For a more permanent setup, create a `.env` file in the current `gpt-researcher` directory and input the env vars (without `export`).

- **For the LLM, we recommend [OpenAI GPT](https://platform.openai.com/docs/guides/gpt)**, but you can use any other LLM (including open-source models) supported by the [Langchain Adapter](https://python.langchain.com/docs/integrations/adapters/openai/); simply change the LLM model and provider in `config/config.py`.
- **For the web search API, we recommend the [Tavily Search API](https://app.tavily.com)**, but you can switch to another search API of your choice by changing the search provider in `config/config.py` to `"duckduckgo"`, `"googleAPI"`, `"bing"`, `"googleSerp"`, `"searx"`, and more. Then add the corresponding API key env var as seen in the `config.py` file.
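For reference, a `.env` file for the recommended OpenAI + Tavily setup would contain the same two variables as the export commands above, without the `export` keyword (the values shown are placeholders):

```bash
OPENAI_API_KEY={Your OpenAI API Key here}
TAVILY_API_KEY={Your Tavily API Key here}
```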

### Quickstart

> **Step 1** - Install dependencies

```bash
pip install -r requirements.txt
```

> **Step 2** - Run the agent with FastAPI

```bash
uvicorn main:app --reload
```

> **Step 3** - Go to http://localhost:8000 in any browser and enjoy researching!

<br />

**To learn how to get started with [Docker](https://docs.tavily.com/docs/gpt-researcher/getting-started#try-it-with-docker), [Poetry](https://docs.tavily.com/docs/gpt-researcher/getting-started#poetry) or a [virtual environment](https://docs.tavily.com/docs/gpt-researcher/getting-started#virtual-environment) check out the [documentation](https://docs.tavily.com/docs/gpt-researcher/getting-started) page.**

### Run as PIP package
```bash
pip install gpt-researcher
```

```python
import asyncio

from gpt_researcher import GPTResearcher

async def main():
    query = "why is Nvidia stock going up?"
    researcher = GPTResearcher(query=query, report_type="research_report")
    # Conduct research on the given query
    research_result = await researcher.conduct_research()
    # Write the report based on the gathered research
    report = await researcher.write_report()
    print(report)

# conduct_research() and write_report() are coroutines, so they must run
# inside an event loop rather than with a bare top-level await.
asyncio.run(main())
```

**For more examples and configurations, please refer to the [PIP documentation](https://docs.tavily.com/docs/gpt-researcher/pip-package) page.**

## πŸ‘ͺ Multi-Agent Assistant
As AI evolves from prompt engineering and RAG to multi-agent systems, we're excited to introduce our new multi-agent assistant built with [LangGraph](https://python.langchain.com/v0.1/docs/langgraph/).

By using LangGraph, the research process can be significantly improved in depth and quality by leveraging multiple agents with specialized skills. Inspired by the recent [STORM](https://arxiv.org/abs/2402.14207) paper, this project showcases how a team of AI agents can work together to conduct research on a given topic, from planning to publication.

An average run generates a 5-6 page research report in multiple formats such as PDF, Docx and Markdown.

Check it out [here](https://github.com/assafelovic/gpt-researcher/tree/master/multi_agents) or head over to our [documentation](https://docs.tavily.com/docs/gpt-researcher/agent_frameworks) for more information.

## πŸš€ Contributing
We highly welcome contributions! Please check out [contributing](https://github.com/assafelovic/gpt-researcher/blob/master/CONTRIBUTING.md) if you're interested.

Please check out our [roadmap](https://trello.com/b/3O7KBePw/gpt-researcher-roadmap) page and reach out to us via our [Discord community](https://discord.gg/2pFkc83fRq) if you're interested in joining our mission.

## βœ‰οΈ Support / Contact us
- [Community Discord](https://discord.gg/spBgZmm3Xe)
- Author Email: assaf.elovic@gmail.com

## πŸ›‘ Disclaimer

This project, GPT Researcher, is an experimental application and is provided "as-is" without any warranty, express or implied. We share the code for academic purposes under the MIT license. Nothing herein is academic advice, nor a recommendation for use in academic or research papers.

Our view on unbiased research claims:
1. The main goal of GPT Researcher is to reduce incorrect and biased facts. How? We assume that the more sites we scrape, the lower the chance of incorrect data. By scraping over 20 sites per research task and choosing the most frequent information, the chance that they are all wrong is extremely low.
2. We do not aim to eliminate biases; we aim to reduce them as much as possible. **We are here as a community to figure out the most effective human/LLM interactions.**
3. In research, people also tend toward biases, as most already have opinions on the topics they research. This tool scrapes many opinions and will evenly present diverse views that a biased person might never have read.

**Please note that the use of the GPT-4 language model can be expensive due to its token usage.** By utilizing this project, you acknowledge that you are responsible for monitoring and managing your own token usage and the associated costs. It is highly recommended to check your OpenAI API usage regularly and set up any necessary limits or alerts to prevent unexpected charges.
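To keep an eye on spend, a simple back-of-the-envelope estimate helps. The per-1K-token rates below are illustrative placeholders, not current OpenAI pricing; check the official pricing page and substitute real rates for your model.

```python
# Rough cost estimator for one batch of LLM calls.
# Prices are illustrative placeholders, NOT real OpenAI rates.

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_price_per_1k: float = 0.01,
                  completion_price_per_1k: float = 0.03) -> float:
    """Return the estimated cost in USD given token counts and per-1K rates."""
    return (prompt_tokens / 1000) * prompt_price_per_1k + \
           (completion_tokens / 1000) * completion_price_per_1k

# e.g. a run that consumed 60K prompt tokens and 8K completion tokens:
print(round(estimate_cost(60_000, 8_000), 2))  # β†’ 0.84
```

Pairing an estimate like this with the usage limits in your OpenAI dashboard is the easiest way to avoid surprise charges.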

---

<p align="center">
<a href="https://star-history.com/#assafelovic/gpt-researcher">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=assafelovic/gpt-researcher&type=Date&theme=dark" />
    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=assafelovic/gpt-researcher&type=Date" />
    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=assafelovic/gpt-researcher&type=Date" />
  </picture>
</a>
</p>
