| Field | Value |
| --- | --- |
| Name | data-to-paper |
| Version | 1.1.7 |
| home_page | None |
| Summary | data-to-paper: Backward-traceable AI-driven scientific research |
| upload_time | 2024-12-21 21:01:34 |
| maintainer | None |
| docs_url | None |
| author | Kishony lab, Technion Israel Institute of Technology |
| requires_python | >=3.9 |
| license | MIT License Copyright (c) 2024 Technion-Kishony-lab Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
| keywords | data-to-paper, research, ai-driven-research, backward-traceable, ai, agents, autonomous-agents, scientific-research, interactive-machine-learning, llm |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
## Backward-traceable AI-driven Research
<picture>
<img src="https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/data_to_paper_icon.gif" width="350"
align="right">
</picture>
[![License: MIT](https://img.shields.io/badge/License-MIT-brightgreen.svg)](https://opensource.org/licenses/MIT) [![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2FTechnion-Kishony-lab%2Fdata-to-paper&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=hits&edge_flat=false)](https://hits.seeyoufarm.com)
**data-to-paper** is an automation framework that systematically navigates interacting AI agents through **complete end-to-end scientific research**,
starting from *raw data* alone and concluding with *transparent, backward-traceable,
human-verifiable scientific papers*
(<a href="https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/ExampleManuscriptFigures.pdf" target="_blank">Example AI-created paper</a>,
[Copilot App DEMO](https://youtu.be/vrsxgX67n6I)).
This repository is the code implementation for the paper ["Autonomous LLM-Driven Research — from Data to Human-Verifiable Research Papers"](https://doi.org/10.1056/AIoa2400555).
<picture>
<img src="https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/AI-Human-agents.png" width="300"
align="left">
</picture>
### Try it out
```commandline
pip install data-to-paper
```
then run: `data-to-paper`
See [INSTALL](INSTALL.md) for dependencies.
<br clear="left"/>
<picture>
<img src="https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/page_flipping.gif" width="400" align="right">
</picture>
### Key features
* **End-to-end field-agnostic research.** The process navigates the entire scientific path,
from data exploration, literature search, and ideation, through data analysis and interpretation,
to the step-by-step writing of a complete research paper.
* **Traceable "data-chained" manuscripts.** Tracing information flow, *data-to-paper* creates backward-traceable and verifiable manuscripts,
where any numeric value can be click-traced all the way back to the specific code lines that created it
([data-chaining DEMO](https://youtu.be/HUkJcMXd9x0)).
* **Autopilot or Copilot.** The platform can run fully autonomously, or can be human-guided through the [Copilot App](https://youtu.be/vrsxgX67n6I), allowing users to:
 - Oversee, inspect, and guide the research
 - Set research goals, or let the AI autonomously raise and test hypotheses
 - Provide reviews, or invoke on-demand AI reviews
- Rewind the process to prior steps
- Record and replay runs
- Track API costs
* **Coding guardrails.** Standard statistical packages are overridden with multiple guardrails
to minimize common LLM coding errors (see the illustrative sketch below).
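To make the "guardrails" idea concrete, here is a minimal, purely illustrative Python sketch of wrapping a standard statistical routine so that common mistakes fail loudly. It is not data-to-paper's actual implementation; the wrapper name and the specific checks are hypothetical examples of the kind of errors such overrides can catch.

```python
# Illustrative only -- NOT data-to-paper's actual code.
# Idea: wrap a standard statistical routine so that common LLM coding mistakes
# (too few observations, NaNs silently propagated) raise clear errors instead
# of producing a misleading result.
import functools

import numpy as np
from scipy import stats


def with_guardrails(test_func):
    """Return a wrapped version of `test_func` that validates its sample inputs."""
    @functools.wraps(test_func)
    def guarded(*samples, **kwargs):
        for i, sample in enumerate(samples):
            arr = np.asarray(sample, dtype=float)
            if arr.size < 2:
                raise ValueError(f"sample {i} has fewer than 2 observations")
            if np.isnan(arr).any():
                raise ValueError(f"sample {i} contains NaNs; handle missing data explicitly")
        return test_func(*samples, **kwargs)
    return guarded


guarded_ttest_ind = with_guardrails(stats.ttest_ind)

# A valid call passes through unchanged:
print(guarded_ttest_ind([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))
# whereas guarded_ttest_ind([1.0, float("nan")], [2.0, 3.0]) raises a
# descriptive ValueError instead of silently yielding a NaN p-value.
```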
<picture>
<img src="https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/research-steps-horizontal.png" width=100%
align="left">
</picture>
<br><br>
https://github.com/Technion-Kishony-lab/data-to-paper/assets/31969897/0f3acf7a-a775-43bd-a79c-6877f780f2d4
### Motivation: Building a new standard for Transparent, Traceable, and Verifiable AI-driven Research
The *data-to-paper* framework was created as a research project to understand the
capacities and limitations of LLM-driven scientific research, and to develop ways of harnessing LLMs to accelerate
research while maintaining, and even enhancing, key scientific values such as transparency, traceability, and verifiability,
and while allowing scientists to oversee and direct the process
(see also: [living guidelines](https://www.nature.com/articles/d41586-023-03266-1)).
### Implementation
Towards this goal, *data-to-paper* systematically guides **interacting LLM and rule-based agents**
through the conventional scientific path, from annotated data, through creating
research hypotheses, conducting literature search, writing and debugging data analysis code,
interpreting the results, and ultimately the step-by-step writing of a complete research paper.
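As a rough mental model of this flow, the sketch below pairs a "performer" agent with a reviewer at each step and iterates until the step's product is accepted, feeding accepted products into later steps. It is schematic and hypothetical: the names (`Step`, `run_pipeline`) and the structure are illustrative and do not reflect data-to-paper's actual classes or API.

```python
# Schematic illustration of a performer/reviewer step pipeline -- hypothetical,
# not data-to-paper's actual implementation.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Step:
    name: str
    perform: Callable[[dict], str]   # e.g., an LLM-backed agent producing a draft product
    review: Callable[[str], bool]    # e.g., a rule-based or LLM reviewer accepting/rejecting it
    max_rounds: int = 3


def run_pipeline(steps: list[Step], context: dict) -> dict:
    """Run each research step, retrying until its reviewer accepts the product."""
    for step in steps:
        for _ in range(step.max_rounds):
            product = step.perform(context)
            if step.review(product):
                context[step.name] = product  # accepted products feed later steps
                break
        else:
            raise RuntimeError(f"step '{step.name}' was not accepted after {step.max_rounds} rounds")
    return context


# Toy usage with stand-in agents (real agents would call an LLM or run analysis code):
steps = [
    Step("hypothesis", perform=lambda ctx: "H: X is associated with Y",
         review=lambda p: p.startswith("H:")),
    Step("analysis", perform=lambda ctx: "p = 0.03",
         review=lambda p: "p =" in p),
]
print(run_pipeline(steps, {"data_description": "annotated dataset"}))
```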
### Reference
The **data-to-paper** framework is described in the following NEJM AI paper:
- Tal Ifargan, Lukas Hafner, Maor Kern, Ori Alcalay and Roy Kishony,
"Autonomous LLM-Driven Research — from Data to Human-Verifiable Research Papers"
[10.1056/AIoa2400555](https://doi.org/10.1056/AIoa2400555)
and in the following pre-print:
- Tal Ifargan, Lukas Hafner, Maor Kern, Ori Alcalay and Roy Kishony,
"Autonomous LLM-driven research from data to human-verifiable research papers",
[arXiv:2404.17605](https://doi.org/10.48550/arXiv.2404.17605)
### Examples
We ran **data-to-paper** on the following test cases:
* **Health Indicators (open goal).** A clean, unweighted subset of
the CDC's Behavioral Risk Factor Surveillance System (BRFSS) 2015 annual dataset
([Kaggle](https://www.kaggle.com/datasets/alexteboul/diabetes-health-indicators-dataset)). Here is an [example Paper](https://github.com/rkishony/data-to-paper-supplementary/blob/3704b0508192ff1f68b33be2ef282249f10f1254/Supplementary%20Data-chained%20Manuscripts/Supplementary%20Data-chained%20Manuscript%20A.pdf) created by *data-to-paper*.
Try out:
```shell
data-to-paper diabetes
```
* **Social Network (open goal).** A directed graph of Twitter interactions among members of the 117th US Congress
([Fink et al](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10493874/)). Here is an [example Paper](https://github.com/rkishony/data-to-paper-supplementary/blob/3704b0508192ff1f68b33be2ef282249f10f1254/Supplementary%20Data-chained%20Manuscripts/Supplementary%20Data-chained%20Manuscript%20B.pdf) created by *data-to-paper*.
Try out:
```shell
data-to-paper social_network
```
* **Treatment Policy (fixed-goal).** A dataset on the treatment and outcomes of non-vigorous infants admitted to the Neonatal Intensive Care Unit (NICU), before and after a change to treatment guidelines was implemented
([Saint-Fleur et al](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0289945)). Here is an [example Paper](https://github.com/rkishony/data-to-paper-supplementary/blob/3704b0508192ff1f68b33be2ef282249f10f1254/Supplementary%20Data-chained%20Manuscripts/Supplementary%20Data-chained%20Manuscript%20C.pdf) created by *data-to-paper*.
Try out:
```shell
data-to-paper npr_nicu
```
* **Treatment Optimization (fixed-goal).** A dataset of pediatric patients who received mechanical ventilation after undergoing surgery, including an X-ray-based determination of the optimal tracheal tube intubation depth and a set of personalized patient attributes to be used in machine-learning and formula-based models to predict this optimal depth
([Shim et al](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0257069)). Here is an [example Paper](https://github.com/rkishony/data-to-paper-supplementary/blob/3704b0508192ff1f68b33be2ef282249f10f1254/Supplementary%20Data-chained%20Manuscripts/Supplementary%20Data-chained%20Manuscript%20D.pdf) created by *data-to-paper*.
We defined three levels of difficulty for the research question for this test case.
1. **easy**: Compare two ML methods for predicting optimal intubation depth
Try out:
```shell
data-to-paper ML_easy
```
2. **medium**: Compare one ML method and one formula-based method for predicting optimal intubation depth
Try out:
```shell
data-to-paper ML_medium
```
3. **hard**: Compare four ML methods with three formula-based methods for predicting optimal intubation depth
Try out:
```shell
data-to-paper ML_hard
```
### Contributing
We invite people to try out **data-to-paper** with their own data and are eager **for feedback and suggestions**.
It is currently designed for relatively simple research goals and datasets, where
the aim is to raise and test a statistical hypothesis.
We also invite people to help develop and extend the **data-to-paper** framework in science or other fields.
### Important notes
**Disclaimer.** By using this software, you agree to assume all risks associated with its use, including but not limited
to data loss, system failure, or any other issues that may arise, especially, but not limited to, the
consequences of running LLM-created code on your local machine. The developers of this project
do not accept any responsibility or liability for any losses, damages, or other consequences that may occur as
a result of using this software.
**Accountability.** You are solely responsible for the entire content of
created manuscripts, including their rigour, quality, ethics, and any other aspect.
The process should be overseen and directed by a human-in-the-loop, and created manuscripts should be carefully vetted
by a domain expert.
The process is NOT error-proof, and human intervention is _necessary_ to ensure the accuracy and quality of the results.
**Compliance.** It is your responsibility to ensure that any actions or decisions made based on the output of this
software comply with all applicable laws, regulations, and ethical standards.
The developers and contributors of this project shall not be held responsible for any consequences arising from
using this software. Further, data-to-paper manuscripts are watermarked as AI-created for transparency;
users should not remove this watermark.
**Token Usage.** Please note that using most language models through external APIs, especially GPT-4,
can be expensive due to token usage. By utilizing this project, you acknowledge that you are
responsible for monitoring and managing your own token usage and the associated costs.
It is highly recommended to check your API usage regularly and set up any necessary limits or alerts to
prevent unexpected charges.
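As a rough way to keep an eye on spending, the short sketch below turns token counts into a dollar estimate. The per-1k-token rates are placeholders chosen only for illustration; check your provider's current pricing and your usage dashboard for real numbers.

```python
# Back-of-the-envelope LLM cost estimate; the rates below are hypothetical
# placeholders, not actual prices.
PRICE_PER_1K_INPUT = 0.0025   # USD per 1k prompt tokens (assumed)
PRICE_PER_1K_OUTPUT = 0.01    # USD per 1k completion tokens (assumed)


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost from token counts and the assumed per-1k-token rates."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT


# e.g., a run consuming ~2M prompt tokens and ~400k completion tokens:
print(f"~${estimate_cost(2_000_000, 400_000):.2f}")  # -> ~$9.00 at the assumed rates
```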
### Related projects
Here are some other cool multi-agent related projects:
- [SakanaAI](https://github.com/SakanaAI/AI-Scientist)
- [PaperQA](https://github.com/Future-House/paper-qa)
- [LangChain](https://github.com/langchain-ai/langchain)
- [AutoGen](https://microsoft.github.io/autogen/)
- [AutoGPT](https://github.com/Significant-Gravitas/AutoGPT)
- [MetaGPT](https://github.com/geekan/MetaGPT)
And also this curated list of [awesome-agents](https://github.com/kyrolabs/awesome-agents).
Raw data
{
"_id": null,
"home_page": null,
"name": "data-to-paper",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": null,
"keywords": "data-to-paper, research, AI-driven-research, backward-traceable, ai, agents, autonomous-agents, scientific-research, interactive-machine-learning, llm",
"author": "Kishony lab, Technion Israel Institute of Technology",
"author_email": "Roy Kishony <rkishony@technion.ac.il>, Tal Ifargan <talifargan@campus.technion.ac.il>, Lukas Hafner <lukashafner@campus.technion.ac.il>",
"download_url": "https://files.pythonhosted.org/packages/f4/e7/8e9255f587a3af8f04aedb2d12211056257654b3de1572dd2d5fd7d9cfee/data_to_paper-1.1.7.tar.gz",
"platform": null,
"description": "## Backward-traceable AI-driven Research\n\n<picture>\n<img src=\"https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/data_to_paper_icon.gif\" width=\"350\" \nalign=\"right\">\n</picture>\n\n[![License: MIT](https://img.shields.io/badge/License-MIT-brightgreen.svg)](https://opensource.org/licenses/MIT) [![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2FTechnion-Kishony-lab%2Fdata-to-paper&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=hits&edge_flat=false)](https://hits.seeyoufarm.com)\n\n**data-to-paper** is an automation framework that systematically navigates interacting AI agents through a **complete end-to-end scientific research**, \nstarting from *raw data* alone and concluding with *transparent, backward-traceable, \nhuman-verifiable scientific papers* \n(<a href=\"https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/ExampleManuscriptFigures.pdf\" target=\"_blank\">Example AI-created paper</a>, \n[Copilot App DEMO](https://youtu.be/vrsxgX67n6I)).\nThis repository is the code implementation for the paper [\"Autonomous LLM-Driven Research \u2014 from Data to Human-Verifiable Research Papers\"](https://doi.org/10.1056/AIoa2400555).\n\n<picture>\n<img src=\"https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/AI-Human-agents.png\" width=\"300\" \nalign=\"left\">\n</picture>\n\n### Try it out\n\n```commandline\npip install data-to-paper\n```\nthen run: `data-to-paper`\n\nSee [INSTALL](INSTALL.md) for dependencies.\n<br clear=\"left\"/>\n\n<picture>\n<img src=\"https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/page_flipping.gif\" width=\"400\" align=\"right\">\n</picture>\n\n### Key features\n\n* **End-to-end field-agnostic research.** The process navigates through the entire scientific path, \nfrom data exploration, literature search and ideation, through data analysis and interpretation, \nto the step-by-step writing of a complete research papers.\n* **Traceable \"data-chained\" manuscripts**. 
Tracing informtion flow, *data-to-paper* creates backward-traceable and verifiable manuscripts,\nwhere any numeric values can be click-traced all the way up to the specific code lines that created them\n([data-chaining DEMO](https://youtu.be/HUkJcMXd9x0)).\n\n* **Autopilot or Copilot.** The platform can run fully autonomously, or can be human-guided through the [Copilot App](https://youtu.be/vrsxgX67n6I), allowing users to:\n\n - Oversee, Inspect and Guide the research\n\n - Set research goals, or let the AI autonomously raise and test hypotheses\n\n - Provide review, or invoke on-demand AI-reviews\n\n - Rewind the process to prior steps\n\n - Record and replay runs\n\n -\tTrack API costs\n* **Coding guardrails.** Standard statistical packages are overridden with multiple guardrails \nto minimize common LLM coding errors.\n\n\n\n<picture>\n<img src=\"https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/assets/research-steps-horizontal.png\" width=100% \nalign=\"left\">\n</picture>\n<br><br>\n\nhttps://github.com/Technion-Kishony-lab/data-to-paper/assets/31969897/0f3acf7a-a775-43bd-a79c-6877f780f2d4\n\n \n### Motivation: Building a new standard for Transparent, Traceable, and Verifiable AI-driven Research\nThe *data-to-paper* framework is created as a research project to understand the \ncapacities and limitations of LLM-driven scientific research, and to develop ways of harnessing LLM to accelerate \nresearch while maintaining, and even enhancing, the key scientific values, such as transparency, traceability and verifiability, \nand while allowing scientist to oversee and direct the process\n(see also: [living guidelines](https://www.nature.com/articles/d41586-023-03266-1)).\n\n### Implementation\nTowards this goal, *data-to-paper* systematically guides **interacting LLM and rule-based agents** \nthrough the conventional scientific path, from annotated data, through creating \nresearch hypotheses, conducting literature search, writing and debugging data analysis code, \ninterpreting the results, and ultimately the step-by-step writing of a complete research paper.\n\n\n### Reference\nThe **data-to-paper** framework is described in the following NEJM AI paper:\n- Tal Ifargan, Lukas Hafner, Maor Kern, Ori Alcalay and Roy Kishony,\n\"Autonomous LLM-Driven Research \u2014 from Data to Human-Verifiable Research Papers\"\n[10.1056/AIoa2400555](https://doi.org/10.1056/AIoa2400555)\n\nand in the following pre-print:\n - Tal Ifargan, Lukas Hafner, Maor Kern, Ori Alcalay and Roy Kishony, \n\"Autonomous LLM-driven research from data to human-verifiable research papers\", \n[arXiv:2404.17605](\nhttps://doi.org/10.48550/arXiv.2404.17605)\n\n\n### Examples\n\nWe ran **data-to-paper** on the following test cases:\n\n* **Health Indicators (open goal).** A clean unweighted subset of \nCDC\u2019s Behavioral Risk Factor Surveillance System (BRFSS) 2015 annual dataset \n ([Kaggle](https://www.kaggle.com/datasets/alexteboul/diabetes-health-indicators-dataset)). Here is an [example Paper](https://github.com/rkishony/data-to-paper-supplementary/blob/3704b0508192ff1f68b33be2ef282249f10f1254/Supplementary%20Data-chained%20Manuscripts/Supplementary%20Data-chained%20Manuscript%20A.pdf) created by data-to paper.\n\nTry out: \n```shell\ndata-to-paper diabetes\n```\n\n\n* **Social Network (open goal).** A directed graph of Twitter interactions among the 117th Congress members\n ([Fink et al](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10493874/)). 
Here is an [example Paper](https://github.com/rkishony/data-to-paper-supplementary/blob/3704b0508192ff1f68b33be2ef282249f10f1254/Supplementary%20Data-chained%20Manuscripts/Supplementary%20Data-chained%20Manuscript%20B.pdf) created by data-to paper.\n\nTry out:\n```shell\ndata-to-paper social_network\n```\n\n* **Treatment Policy (fixed-goal).** A dataset on treatment and outcomes of non-vigorous infants admitted to the Neonatal Intensive Care Unit (NICU), before and after a change to treatment guidelines was implemented\n ([Saint-Fleur et al](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0289945)). Here is an [example Paper](https://github.com/rkishony/data-to-paper-supplementary/blob/3704b0508192ff1f68b33be2ef282249f10f1254/Supplementary%20Data-chained%20Manuscripts/Supplementary%20Data-chained%20Manuscript%20C.pdf) created by data-to paper.\n\nTry out: \n```shell\ndata-to-paper npr_nicu\n```\n* **Treatment Optimization (fixed-goal).** A dataset of pediatric patients, which received mechanical ventilation after undergoing surgery, including an x-ray-based determination of the optimal tracheal tube intubation depth and a set of personalized patient attributes to be used in machine learning and formula-based models to predict this optimal depth\n ([Shim et al](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0257069)). Here is an [example Paper](https://github.com/rkishony/data-to-paper-supplementary/blob/3704b0508192ff1f68b33be2ef282249f10f1254/Supplementary%20Data-chained%20Manuscripts/Supplementary%20Data-chained%20Manuscript%20D.pdf) created by data-to paper.\n\nWe defined three levels of difficulty for the research question for this paper. \n1. **easy**: Compare two ML methods for predicting optimal intubation depth \nTry out: \n```shell\ndata-to-paper ML_easy\n``` \n \n2. **medium**: Compare one ML method and one formula-based method for predicting optimal intubation depth \nTry out: \n```shell\ndata-to-paper ML_medium\n``` \n \n3. **hard**: Compare 4 ML methods with 3 formula-based methods for predicting optimal intubation depth \nTry out:\n```shell\ndata-to-paper ML_hard\n```\n\n### Contributing\nWe invite people to try out **data-to-paper** with their own data and are eager **for feedback and suggestions**.\nIt is currently designed for relatively simple research goals and simple datasets, where \nwe want to raise and test a statistical hypothesis.\n\nWe also invite people to help develop and extend the **data-to-paper** framework in science or other fields.\n\n\n### Important notes\n\n**Disclaimer.** By using this software, you agree to assume all risks associated with its use, including but not limited \nto data loss, system failure, or any other issues that may arise, especially, but not limited to, the\nconsequences of running of LLM created code on your local machine. The developers of this project \ndo not accept any responsibility or liability for any losses, damages, or other consequences that may occur as \na result of using this software. \n\n**Accountability.** You are solely responsible for the entire content of \ncreated manuscripts including their rigour, quality, ethics and any other aspect. \nThe process should be overseen and directed by a human-in-the-loop and created manuscripts should be carefully vetted \nby a domain expert. \nThe process is NOT error-proof and human intervention is _necessary_ to ensure accuracy and the quality of the results. 
\n\n**Compliance.** It is your responsibility to ensure that any actions or decisions made based on the output of this \nsoftware comply with all applicable laws, regulations, and ethical standards. \nThe developers and contributors of this project shall not be held responsible for any consequences arising from \nusing this software. Further, data-to-paper manuscripts are watermarked for transparency as AI-created. \nUsers should not remove this watermark.\n\n**Token Usage.** Please note that the use of most language models through external APIs, especially GPT4, \ncan be expensive due to its token usage. By utilizing this project, you acknowledge that you are \nresponsible for monitoring and managing your own token usage and the associated costs. \nIt is highly recommended to check your API usage regularly and set up any necessary limits or alerts to \nprevent unexpected charges.\n\n### Related projects\n\nHere are some other cool multi-agent related projects:\n- [SakanaAI](https://github.com/SakanaAI/AI-Scientist)\n- [PaperQ2A](https://github.com/Future-House/paper-qa)\n- [LangChain](https://github.com/langchain-ai/langchain)\n- [AutoGen](https://microsoft.github.io/autogen/)\n- [AutoGPT](https://github.com/Significant-Gravitas/AutoGPT)\n- [MetaGPT](https://github.com/geekan/MetaGPT)\n\nAnd also this curated list of [awesome-agents](https://github.com/kyrolabs/awesome-agents).\n",
"bugtrack_url": null,
"license": "MIT License Copyright (c) 2024 Technion-Kishony-lab Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ",
"summary": "data-to-paper: Backward-traceable AI-driven scientific research",
"version": "1.1.7",
"project_urls": {
"Repository": "https://github.com/Technion-Kishony-lab/data-to-paper"
},
"split_keywords": [
"data-to-paper",
" research",
" ai-driven-research",
" backward-traceable",
" ai",
" agents",
" autonomous-agents",
" scientific-research",
" interactive-machine-learning",
" llm"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "6a3c2d14171305017fce5a6bef443995759445e09b552ed26e8bdec4c1d11f98",
"md5": "4bf820d2f1bc77a67066a260c19a5e42",
"sha256": "bc67d1c5121a81ed11c26ef38647cdf003f38ea60eecea54fb37d3566f260f35"
},
"downloads": -1,
"filename": "data_to_paper-1.1.7-py3-none-any.whl",
"has_sig": false,
"md5_digest": "4bf820d2f1bc77a67066a260c19a5e42",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 2643828,
"upload_time": "2024-12-21T21:01:21",
"upload_time_iso_8601": "2024-12-21T21:01:21.944839Z",
"url": "https://files.pythonhosted.org/packages/6a/3c/2d14171305017fce5a6bef443995759445e09b552ed26e8bdec4c1d11f98/data_to_paper-1.1.7-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "f4e78e9255f587a3af8f04aedb2d12211056257654b3de1572dd2d5fd7d9cfee",
"md5": "5b319ee928b1b48f43b332a57c094334",
"sha256": "8fc69f89ef55332afd2d9aa33bbf094e9509fd8e8c9265dd8950b003a7441488"
},
"downloads": -1,
"filename": "data_to_paper-1.1.7.tar.gz",
"has_sig": false,
"md5_digest": "5b319ee928b1b48f43b332a57c094334",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 2558822,
"upload_time": "2024-12-21T21:01:34",
"upload_time_iso_8601": "2024-12-21T21:01:34.127457Z",
"url": "https://files.pythonhosted.org/packages/f4/e7/8e9255f587a3af8f04aedb2d12211056257654b3de1572dd2d5fd7d9cfee/data_to_paper-1.1.7.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-12-21 21:01:34",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "Technion-Kishony-lab",
"github_project": "data-to-paper",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "data-to-paper"
}