# Extraction of actions from experimental procedures

This repository contains the code for [Automated Extraction of Chemical Synthesis Actions from Experimental Procedures](https://doi.org/10.1038/s41467-020-17266-6).

- [Overview](#overview)
- [System Requirements](#system-requirements)
- [Installation Guide](#installation-guide)
- [Training the transformer model for action extraction](#training-the-transformer-model-for-action-extraction)
- [Data augmentation example](#data-augmentation)
- [Action post-processing example](#action-post-processing)

# Overview

This repository contains code to extract actions from experimental procedures. In particular, it contains the following:
* Definition and handling of synthesis actions
* Code for data augmentation
* Training and usage of a transformer-based model

A trained model can be freely used online at https://rxn.res.ibm.com or with the Python wrapper available [here](https://github.com/rxn4chemistry/rxn4chemistry).
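
As an illustration, the wrapper can be queried from Python roughly as follows. This is a minimal sketch assuming a valid RXN API key; the exact call names may differ between versions of `rxn4chemistry`, so refer to its documentation.
```python
# Minimal sketch, assuming a valid RXN API key; see the rxn4chemistry
# documentation for the authoritative interface.
from rxn4chemistry import RXN4ChemistryWrapper

wrapper = RXN4ChemistryWrapper(api_key="YOUR_API_KEY")
result = wrapper.paragraph_to_actions(
    "The reaction mixture is allowed to warm to room temperature and stirred overnight."
)
print(result["actions"])
```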

Links:
* [GitHub repository](https://github.com/rxn4chemistry/paragraph2actions)
* [PyPI package](https://pypi.org/project/paragraph2actions/)

# System Requirements

## Hardware requirements
The code can run on any standard computer.
It is recommended to run the training scripts in a GPU-enabled environment.

## Software requirements
### OS Requirements
This package is supported on *macOS* and *Linux*. It has been tested on the following systems:
+ macOS: Catalina (10.15.4)
+ Linux: Ubuntu 16.04.3

### Python
Python 3.6 or greater is required.
The Python package dependencies are listed in [`setup.cfg`](./setup.cfg).

# Installation guide

To use the package, we recommend creating a dedicated `conda` or `venv` environment:
```bash
# Conda
conda create -n p2a python=3.8
conda activate p2a

# venv
python3.8 -m venv myenv
source myenv/bin/activate
```

The package can then be installed from PyPI:
```bash
pip install paragraph2actions
```

For local development, the package can be installed with:
```bash
pip install -e ".[dev]"
```
The installation should not take more than a few minutes.
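
The installation can be verified, for example, with a quick import check:
```python
# Quick sanity check: import the package and print the installed version
# (importlib.metadata is in the standard library from Python 3.8 onwards).
from importlib.metadata import version

import paragraph2actions  # imported only to check that it resolves

print(version("paragraph2actions"))
```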

# Training the transformer model for action extraction

This section explains how to train the translation model for action extraction.

## General setup

For simplicity, set the following environment variable:
```bash
export DATA_DIR="$(pwd)/test_data"
```
`DATA_DIR` can be changed to any other location containing the data to train on.
We assume that `DATA_DIR` contains the following files:
```bash
src-test.txt    src-train.txt   src-valid.txt   tgt-test.txt    tgt-train.txt   tgt-valid.txt
```
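
The `src-*` files contain one procedure sentence per line, and the `tgt-*` files contain the corresponding action sequence on the same line. A quick way to check this alignment, for instance:
```python
# Minimal sketch: print the first line-aligned sentence/action pair
with open("test_data/src-train.txt") as src, open("test_data/tgt-train.txt") as tgt:
    print("src:", src.readline().strip())
    print("tgt:", tgt.readline().strip())
```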

## Subword tokenization

We train a SentencePiece tokenizer on the train split:
```bash
export VOCAB_SIZE=200  # for the production model, a size of 16000 is used
paragraph2actions-create-tokenizer -i $DATA_DIR/src-train.txt -i $DATA_DIR/tgt-train.txt -m $DATA_DIR/sp_model -v $VOCAB_SIZE
```

We then tokenize the data:
```bash
paragraph2actions-tokenize -m $DATA_DIR/sp_model.model -i $DATA_DIR/src-train.txt -o $DATA_DIR/tok-src-train.txt
paragraph2actions-tokenize -m $DATA_DIR/sp_model.model -i $DATA_DIR/src-valid.txt -o $DATA_DIR/tok-src-valid.txt
paragraph2actions-tokenize -m $DATA_DIR/sp_model.model -i $DATA_DIR/tgt-train.txt -o $DATA_DIR/tok-tgt-train.txt
paragraph2actions-tokenize -m $DATA_DIR/sp_model.model -i $DATA_DIR/tgt-valid.txt -o $DATA_DIR/tok-tgt-valid.txt
```
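
To see what the subword tokenization does to a single sentence, the trained model can also be loaded directly with the `sentencepiece` library (a minimal sketch; the example sentence is made up):
```python
# Minimal sketch using the sentencepiece library directly on the model
# trained above; the CLI commands above do the equivalent on whole files.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="test_data/sp_model.model")
pieces = sp.encode("The mixture was stirred at 25 °C overnight.", out_type=str)
print(pieces)  # list of subword pieces, e.g. ['▁The', '▁mixture', ...]
```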

## Training

Convert the data to the format required by OpenNMT:
```bash
onmt_preprocess \
  -train_src $DATA_DIR/tok-src-train.txt -train_tgt $DATA_DIR/tok-tgt-train.txt \
  -valid_src $DATA_DIR/tok-src-valid.txt -valid_tgt $DATA_DIR/tok-tgt-valid.txt \
  -save_data $DATA_DIR/preprocessed -src_seq_length 300 -tgt_seq_length 300 \
  -src_vocab_size $VOCAB_SIZE -tgt_vocab_size $VOCAB_SIZE -share_vocab
```

The transformer model can then be trained with OpenNMT:
```bash
onmt_train \
  -data $DATA_DIR/preprocessed  -save_model  $DATA_DIR/models/model  \
  -seed 42 -save_checkpoint_steps 10000 -keep_checkpoint 5 \
  -train_steps 500000 -param_init 0  -param_init_glorot -max_generator_batches 32 \
  -batch_size 4096 -batch_type tokens -normalization tokens -max_grad_norm 0  -accum_count 4 \
  -optim adam -adam_beta1 0.9 -adam_beta2 0.998 -decay_method noam -warmup_steps 8000  \
  -learning_rate 2 -label_smoothing 0.0 -report_every 1000  -valid_batch_size 32 \
  -layers 4 -rnn_size 256 -word_vec_size 256 -encoder_type transformer -decoder_type transformer \
  -dropout 0.1 -position_encoding -share_embeddings -valid_steps 20000 \
  -global_attention general -global_attention_function softmax -self_attn_type scaled-dot \
  -heads 8 -transformer_ff 2048
```
Training the model can take up to a few days in a GPU-enabled environment.
For testing purposes in a CPU-only environment, the same command with `-save_checkpoint_steps 10` and `-train_steps 10` will take only a few minutes.

## Finetuning

For finetuning, we first generate appropriate data in OpenNMT format by following the steps described above.
We assume that the preprocessed data is then available as `$DATA_DIR/preprocessed_finetuning`.

We then use the same training command with slightly different parameters:
```bash
onmt_train \
  -data $DATA_DIR/preprocessed_finetuning  \
  -train_from $DATA_DIR/models/model_step_500000.pt \
  -save_model  $DATA_DIR/models/model  \
  -seed 42 -save_checkpoint_steps 1000 -keep_checkpoint 40 \
  -train_steps 530000 -param_init 0  -param_init_glorot -max_generator_batches 32 \
  -batch_size 4096 -batch_type tokens -normalization tokens -max_grad_norm 0  -accum_count 4 \
  -optim adam -adam_beta1 0.9 -adam_beta2 0.998 -decay_method noam -warmup_steps 8000  \
  -learning_rate 2 -label_smoothing 0.0 -report_every 200  -valid_batch_size 512 \
  -layers 4 -rnn_size 256 -word_vec_size 256 -encoder_type transformer -decoder_type transformer \
  -dropout 0.1 -position_encoding -share_embeddings -valid_steps 200 \
  -global_attention general -global_attention_function softmax -self_attn_type scaled-dot \
  -heads 8 -transformer_ff 2048
```

## Extraction of actions with the transformer model

Experimental procedure sentences can then be translated to action sequences with the following:
```bash
# Update the path to the OpenNMT model as required
export MODEL="$DATA_DIR/models/model_step_520000.pt"

paragraph2actions-translate -t $MODEL -p $DATA_DIR/sp_model.model -s $DATA_DIR/src-test.txt -o $DATA_DIR/pred.txt
```

## Evaluation

To print the metrics on the predictions, the following command can be used:
```bash
paragraph2actions-calculate-metrics -g $DATA_DIR/tgt-test.txt -p $DATA_DIR/pred.txt
```
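
Independently of the metrics computed by that command, a rough full-sequence accuracy can be computed with plain Python (an illustrative stand-in, not the package's metric implementation):
```python
# Minimal sketch: fraction of predictions that match the ground truth exactly.
with open("test_data/tgt-test.txt") as gt, open("test_data/pred.txt") as pred:
    pairs = [(g.strip(), p.strip()) for g, p in zip(gt, pred)]

matches = sum(g == p for g, p in pairs)
print(f"Full-sequence accuracy: {matches / len(pairs):.2%}")
```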


# Data augmentation

The following code illustrates how to augment existing sentences and their associated action sequences.

```python
from paragraph2actions.augmentation.compound_name_augmenter import CompoundNameAugmenter
from paragraph2actions.augmentation.compound_quantity_augmenter import CompoundQuantityAugmenter
from paragraph2actions.augmentation.duration_augmenter import DurationAugmenter
from paragraph2actions.augmentation.temperature_augmenter import TemperatureAugmenter
from paragraph2actions.misc import load_samples, TextWithActions
from paragraph2actions.readable_converter import ReadableConverter

converter = ReadableConverter()
samples = load_samples('test_data/src-test.txt', 'test_data/tgt-test.txt', converter)

cna = CompoundNameAugmenter(0.5, ['NaH', 'hydrogen', 'C2H6', 'water'])
cqa = CompoundQuantityAugmenter(0.5, ['5.0 g', '8 mL', '3 mmol'])
da = DurationAugmenter(0.5, ['overnight', '15 minutes', '6 h'])
ta = TemperatureAugmenter(0.5, ['room temperature', '30 °C', '-5 °C'])


def augment(sample: TextWithActions) -> TextWithActions:
    sample = cna.augment(sample)
    sample = cqa.augment(sample)
    sample = da.augment(sample)
    sample = ta.augment(sample)
    return sample


for sample in samples:
    print('Original:')
    print(sample.text)
    print(converter.actions_to_string(sample.actions))
    for _ in range(5):
        augmented = augment(sample)
        print('  Augmented:')
        print(' ', augmented.text)
        print(' ', converter.actions_to_string(augmented.actions))
    print()
```
This script can produce the following output:
```
Original:
The reaction mixture is allowed to warm to room temperature and stirred overnight.
STIR for overnight at room temperature.
  Augmented:
  The reaction mixture is allowed to warm to -5 °C and stirred overnight.
  STIR for overnight at -5 °C.
  Augmented:
  The reaction mixture is allowed to warm to room temperature and stirred 15 minutes.
  STIR for 15 minutes at room temperature.
[...]
```

# Action post-processing

The following code illustrates the post-processing of actions.

```python
from paragraph2actions.postprocessing.filter_postprocessor import FilterPostprocessor
from paragraph2actions.postprocessing.noaction_postprocessor import NoActionPostprocessor
from paragraph2actions.postprocessing.postprocessor_combiner import PostprocessorCombiner
from paragraph2actions.postprocessing.wait_postprocessor import WaitPostprocessor
from paragraph2actions.readable_converter import ReadableConverter

converter = ReadableConverter()
postprocessor = PostprocessorCombiner([
    FilterPostprocessor(),
    NoActionPostprocessor(),
    WaitPostprocessor(),
])

original_action_string = 'NOACTION; STIR at 5 °C; WAIT for 10 minutes; FILTER; DRYSOLUTION over sodium sulfate.'
original_actions = converter.string_to_actions(original_action_string)

postprocessed_actions = postprocessor.postprocess(original_actions)
postprocessed_action_string = converter.actions_to_string(postprocessed_actions)

print('Original actions     :', original_action_string)
print('Postprocessed actions:', postprocessed_action_string)
```

The output of this code will be the following:
```
Original actions     : NOACTION; STIR at 5 °C; WAIT for 10 minutes; FILTER; DRYSOLUTION over sodium sulfate.
Postprocessed actions: STIR for 10 minutes at 5 °C; FILTER keep filtrate; DRYSOLUTION over sodium sulfate.
```
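
Custom post-processing steps can plausibly be combined with the ones above, as long as they expose the same `postprocess` interface that `PostprocessorCombiner` relies on. A hypothetical sketch (not part of the package):
```python
from typing import List


class DeduplicatePostprocessor:
    """Hypothetical example: drop consecutive duplicate actions.

    This only assumes the postprocess(actions) -> actions interface used
    by PostprocessorCombiner above; it is not part of paragraph2actions.
    """

    def postprocess(self, actions: List) -> List:
        deduplicated: List = []
        for action in actions:
            # relies on the action classes defining equality; adapt if needed
            if not deduplicated or action != deduplicated[-1]:
                deduplicated.append(action)
        return deduplicated
```
An instance of such a class could then be appended to the list passed to `PostprocessorCombiner`.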

            
