tril

Name: tril
Version: 0.2.1
Summary: Transformers Reinforcement and Imitation Learning Library
Homepage: https://github.com/Cornell-RL/tril
Author email: Jonathan Chang <jdc396@cornell.edu>, Kiante Brantley <kdb82@cornell.edu>
Upload time: 2023-11-13 19:12:44
Requires Python: >=3.10
Keywords: reinforcement learning, imitation learning, machine learning, transformers
License: MIT License. Copyright (c) 2023 The Reinforcement Learning, AI, and Decision Making Lab at Cornell. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
            <h1 align="center"> <p>TRIL</p></h1>
<h3 align="center">
    <p>Transformers Reinforcement and Imitation Learning Library</p>
</h3>

`TRIL` is a modular library for Reinforcement Learning (RL) and Imitation Learning (IL) algorithm development with transformers. We build directly on top of the [`transformers`](https://github.com/huggingface/transformers), [`accelerate`](https://huggingface.co/docs/accelerate/index), and [`peft`](https://huggingface.co/docs/peft/index) libraries by 🤗 Hugging Face, so TRIL supports open-source pretrained models, distributed computing, and parameter-efficient training. Note that we currently support most decoder and encoder-decoder architectures available in `transformers`.
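
As a point of reference, here is a small, illustrative sketch (not TRIL code) of the two Hugging Face model families TRIL builds on; the checkpoints `gpt2` and `t5-small` are placeholders and not necessarily what TRIL's configs use.

```python
# Illustrative only: TRIL builds on standard Hugging Face model classes, so both
# decoder-only and encoder-decoder checkpoints can serve as language model policies.
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM

decoder_only = AutoModelForCausalLM.from_pretrained("gpt2")           # decoder architecture
encoder_decoder = AutoModelForSeq2SeqLM.from_pretrained("t5-small")   # encoder-decoder architecture
```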

**Supported Algorithms:**

- Behavior Cloning (i.e. Supervised Fine Tuning)
- Proximal Policy Optimization (PPO) (https://arxiv.org/abs/1707.06347)
- Generative Adversarial Imitation Learning (GAIL) (https://arxiv.org/abs/1606.03476)
- PPO++ (https://arxiv.org/pdf/2306.11816)
- AggreVaTeD (https://arxiv.org/pdf/2306.11816)
- Locally Optimal Learning to Search (LOLS) (https://arxiv.org/pdf/2306.11816)
- Direct and Differentiable Locally Optimal Learning to Search (D2LOLS) (https://arxiv.org/pdf/2306.11816)

**Supported Tasks:**
- IMDB Positive Sentiment (https://arxiv.org/abs/2210.01241)
- CommonGen: Common Sense Generation (https://arxiv.org/abs/1911.03705)
- TL;DR Summarization (https://arxiv.org/pdf/2203.02155.pdf)

---

**Planned Algorithms:**
- Direct Preference Optimization (DPO) (https://arxiv.org/pdf/2305.18290.pdf)
- Statistical Rejection Sampling Optimization (RSO) (https://arxiv.org/pdf/2309.06657.pdf)
- Phasic Policy Gradient (PPG) (https://arxiv.org/abs/2009.04416)
- Pairwise Proximal Policy Optimization (P3O) (https://arxiv.org/pdf/2310.00212.pdf)
- Advantage-Induced Policy Alignment (APA) (https://arxiv.org/pdf/2306.02231.pdf)
- Advantage-Leftover Lunch RL (A-LoL) (https://arxiv.org/abs/2305.14718)

**Planned Tasks:**
- Helpfulness and Harmlessness (https://arxiv.org/pdf/2204.05862.pdf)


## Installation
To install `tril`, run:
```
pip install tril
```
For run scripts and usage examples, please see the repository.

To set up a development environment, we use `conda` for environment management. To install TRIL from source, follow these steps:
```
conda create -n tril python=3.10
conda activate tril
pip install -e .
```

Optionally, for `caption_metrics` such as CIDEr-D and SPICE, install these additional dependencies:
```
# Spacy model install
python -m spacy download en_core_web_sm

# CoreNLP library install
cd src/tril/metrics/caption_metrics/spice && bash get_stanford_models.sh
```

## Example Scripts
The `examples` directory contains example scripts that run TRIL algorithms on `IMDB` positive sentiment generation using PyTorch `Fully Sharded Data Parallel (FSDP)` and on `TL;DR` summarization using `deepspeed`. Each script is named in the format `<task>_<alg>.yaml`. Run an experiment as follows:
```
./examples/<task>/<script>
```

Within each script, the command has the form
```
accelerate launch --config_file <accelerate config> [accelerate args] main.py task=<task config> alg=<alg config> [hydra CLI config specification]
```

Please see the [`accelerate` launch tutorial](https://huggingface.co/docs/accelerate/basic_tutorials/launch) for how to launch jobs with `accelerate`. We provide examples of different `accelerate` configs in the `accelerate_cfgs` directory. For more details on Hydra CLI and config usage, please see this [tutorial](https://hydra.cc/docs/tutorials/basic/your_first_app/simple_cli/).
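
As a sketch of how these pieces fit together, the same Hydra composition that `main.py` performs can be reproduced with Hydra's compose API; the overrides `task=imdb` and `alg=ppo` below are illustrative placeholders, and the actual config group names live under `cfgs/`.

```python
# Minimal sketch (not part of TRIL): Hydra composes cfgs/config.yaml with the
# task=... and alg=... overrides that the launch command passes to main.py.
from hydra import compose, initialize

with initialize(version_base=None, config_path="cfgs"):
    cfg = compose(config_name="config", overrides=["task=imdb", "alg=ppo"])

print(cfg.alg)  # the composed algorithm config group
```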

## Usage Example
Here is a minimal example of running PPO with TRIL:
```python
import logging

import hydra
from accelerate import Accelerator
from omegaconf import OmegaConf

from tril import tril_run
from tril.logging import Tracker
from tril.algorithms import PPO

@hydra.main(version_base=None, config_path="cfgs", config_name="config") # Hydra Decorator for Config
@tril_run # TRIL decorator for hydra config processing
def run_ppo(cfg):
    # Initialize accelerator for distributed computing
    accelerator = Accelerator()

    # Grab experiment save directory from Hydra
    save_path = hydra.core.hydra_config.HydraConfig.get().runtime.output_dir

    # Instantiate TRIL logger for WandB and CLI logging/saving
    tracker = Tracker(
        save_path,
        OmegaConf.to_container(cfg, resolve=True),
        cfg.project_name,
        cfg.experiment_name,
        cfg.entity_name,
        cfg.log_to_wandb,
        log_level=logging.INFO,
        is_main_process=accelerator.is_main_process,
    )

    # Instantiate Algorithm
    ppo = PPO(cfg, accelerator, tracker)

    # Start learn to train LLM
    ppo.learn()

if __name__ == '__main__':
    run_ppo()
```

`TRIL` also provides an [`AlgorithmRegistry`](https://github.com/Cornell-RL/tril/blob/main/src/tril/algorithms/__init__.py) to instantiate algorithms. Please see `main.py` for how our scripts instantiate algorithms through it. The list of available algorithms can be found in the configs under `cfgs/alg`.
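
As a rough sketch of what name-based instantiation could look like (the `AlgorithmRegistry.get` lookup below is an assumption; `main.py` shows the actual usage), the registry lets a script choose the algorithm from the config instead of importing a class directly:

```python
# Hypothetical sketch -- the exact registry API may differ from what main.py uses.
from tril.algorithms import AlgorithmRegistry

alg_cls = AlgorithmRegistry.get("ppo")     # assumed: look up the algorithm class by name
alg = alg_cls(cfg, accelerator, tracker)   # same constructor signature as the PPO example above
alg.learn()
```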

## Current Task/Algorithm Support Matrix

| Algorithm  | IMDB | CommonGen | TL;DR |
|------------| ---- | ---- | ---- |
| PPO        | ✅ | ✅ | ✅ |
| PPO++      | ✅ | ✅ | ✅ |
| AggreVaTeD | ✅ | ✅ | ✅ |
| LOLS       | ✅ | ✅ | ✅ |
| D2LOLS     | ✅ | ✅ | ✅ |
| BC         | ✅ | ✅ | ✅ |
| GAIL       | ✅ |  |  |

## Code Structure
The directory structure of the configs, run script, and TRIL components looks like this.

```
├── cfgs                    <- Hydra configs
│   ├── alg                 <- Algorithm configs (e.g. PPO)
│   ├── task                <- Task configs (e.g. TL;DR summarization)
│   ├── logging             <- Logging configs (e.g. WandB)
│   │
│   └── config.yaml         <- Main config for training
│
├── accelerate_cfgs         <- Accelerate configs
│
├── main.py                 <- TRIL main function
│
├── tril                    <- TRIL src
│   ├── algorithms          <- Algorithm implementations
│   ├── buffers             <- Data Buffer (e.g. OnlineBuffer, PromptBuffer)
│   ├── metrics             <- Evaluation Metrics
│   ├── policies            <- Language Model Policies (e.g. Actor, ActorCritic)
│   ├── rewards             <- Reward Functions
│   ├── tasks               <- Supported Tasks
│   ├── utils               <- Helper functions for TRIL
│   │
│   ├── agent.py            <- Agent contains all torch.nn Modules (i.e. Policy and Reward)
│   ├── base_algorithm.py   <- Algorithm abstract class
│   ├── base_metric.py      <- Metric abstract class
│   ├── base_reward.py      <- Reward abstract class
│   ├── base_task.py        <- Task abstract class
│   └── logging.py          <- TRIL Logger
```

In each directory's `__init__.py`, there is a registry that registers all supported `algorithms`, `metrics`, `rewards`, and `tasks`. When extending `TRIL`, please register your addition in the appropriate registry, as sketched below.
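
For example, here is a rough sketch of adding a custom reward. The base-class name, the `__call__` signature, and the registration call are all assumptions made for illustration; please check `tril/base_reward.py` and the registry in `tril/rewards/__init__.py` for the actual interfaces.

```python
# Hypothetical sketch of extending TRIL with a custom reward.
# BaseReward, the __call__ signature, and RewardRegistry.register are assumed names.
from tril.base_reward import BaseReward
from tril.rewards import RewardRegistry

class LengthPenaltyReward(BaseReward):
    """Toy reward that penalizes overly long generations."""

    def __call__(self, prompts, generations):
        # One scalar reward per generated sequence.
        return [-float(len(g)) for g in generations]

RewardRegistry.register("length_penalty", LengthPenaltyReward)
```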

## Logging
TRIL supports Weights & Biases (WandB) logging. Please enter your `wandb` details, such as `entity_name` and `project_name`, into `cfgs/logging/wandb.yaml`. If you do not want to log to `wandb`, set `log_to_wandb=False`.

By default, we save training and evaluation information in `outputs/<experiment_name>/<datetime>`. You can define `experiment_name` in `cfgs/config.yaml` or through the Hydra CLI, e.g. `main.py experiment_name=<name>`.


## Example WandB Reports
Here are example WandB reports showing what the logging looks like when running multiple different algorithms:

* [CommonGen Report](https://api.wandb.ai/links/coactivelearning/hfocjp17).
* [TL;DR PPO Report](https://api.wandb.ai/links/coactivelearning/ga4r1uqd).

## Citing TRIL
If you use TRIL in your publication, please cite it with the following BibTeX entry.
```bibtex
@misc{TRIL,
      title={TRIL: Transformers Reinforcement and Imitation Learning Library},
      author={Jonathan D Chang and Kiante Brantley and Rajkumar Ramamurthy and Dipendra Misra and Wen Sun},
      howpublished={\url{https://github.com/Cornell-RL/tril}},
      year={2023}
}
```

Here is the citation for the accompanying paper covering many of the supported algorithms.
```bibtex
@misc{chang2023learning,
      title={Learning to Generate Better Than Your LLM}, 
      author={Jonathan D. Chang and Kiante Brantley and Rajkumar Ramamurthy and Dipendra Misra and Wen Sun},
      year={2023},
      eprint={2306.11816},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```

## Acknowledgements
We would like to acknowledge [RL4LMs](https://github.com/allenai/RL4LMs), [TRL](https://github.com/huggingface/trl), and [TRLx](https://github.com/CarperAI/trlx) for being inspirations for this library.

            
