sacreeos

Name: sacreeos
Version: 1.0.2
Home page: https://github.com/jchenghu/sacreeos
Summary: SacreEOS is a signature generator and implementation helper for the Self Critical Sequence Training
Upload time: 2023-05-24 23:17:10
Author: Jia Cheng Hu
Requires Python: >=3.7
License: Apache License 2.0
Keywords: self-critical sequence training, computer vision, image captioning, evaluation

### SacreEOS

[SacreEOS](https://arxiv.org/abs/2305.12254) is a signature generator and 
implementation helper for [Self-Critical Sequence Training](https://arxiv.org/abs/1612.00563). <br>

### Motivation

SacreEOS's main goal is to spread awareness about easy-to-overlook aspects
of Self-Critical Sequence Training (SCST), in particular the `End-of-Sequence (EOS)`
token. Currently, most image captioning projects follow one of two approaches, which we call, for simplicity, 
Standard and No&lt;Eos&gt;. The two versions differ in a few subtle implementation details that significantly 
impact the outputs and performance, ultimately posing an obstacle to the evaluation and comparison of models.


The generation of shareable signatures serves the purpose of spreading awareness about the issue,
increasing transparency over previous works, and guiding future works toward an informed implementation.
To this end, the package also provides useful classes, in the hope of saving time and preventing headaches 
for those implementing SCST for the first time.


### Features

##### Main feature

<ul>
    <li> <b>SacreEOS signature generation. </b>
    The package provides an easy interface for the generation of signatures
    that document the key aspects of an SCST implementation. This can be done
    either manually or automatically through the implementation helper.<br><br>
    </li>
</ul>

##### Optional features

The adoption of SacreEOS as an implementation helper is completely optional.
Established projects are simply invited to generate the SacreEOS signature manually.


<ul>
    <li> <b>Mode and metric selection. </b>
    The user can customize the target SCST configuration according to their needs. The package currently 
    supports 2 SCST modes, 4 metrics and 2 bases. <br><br>
    </li>
    <li> <b>Implementation helper. </b>
    Through a collection of exceptions and input checks, the user is guided toward an informed SCST implementation. 
    <br><br></li>
    <li> <b>Efficiency. </b>
    For each metric, originally proposed in Python, an additional, optional C implementation 
    is provided. It can be toggled through one of the loss computation arguments. In case of 
    platform incompatibilities, portability is still preserved thanks to the Python implementation. <br><br>
    </li>
</ul>


- - -

### Installation

Requirements:
<ul>
    <li> torch, numpy, typing.
    </li>
    <li> Python >= 3.7
    </li>
</ul>


You can install the package using pip:
```
python -m pip install sacreeos
```
or clone the repository:
```
git clone https://github.com/jchenghu/sacreeos
```

- - -

### Usage


#### Signature generation

##### Manual 

The SacreEOS signature can be generated in two ways.
If you installed the package with `pip`, simply run: 
```
$ python3.7 -m sacreeos
```
otherwise, if you cloned the repository:
```
$ cd sacreeos
$ python sacreeos
```

It will ask a few questions regarding the SCST implementation. In most cases a default
answer is provided, based on the most popular configurations.

Examples of the two most common signatures: <br>
`STANDARD_wInit+Cider-D[n4,s6.0]+average[nspi5]+1.0.0` <br>
`NO<EOS>_wInit+Cider-D[n4,s6.0]+average[nspi5]+1.0.0` <br>
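
Informally, and matching the `Scst` constructor arguments described below: the first field is the SCST mode,
`Cider-D[n4,s6.0]` is the reward metric with its arguments (max n-gram length 4, sigma 6.0),
`average[nspi5]` is the base reward computed over 5 samples per input, and the final field is the SacreEOS version.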



##### Automatic

The signature is generated automatically when the package is adopted as an 
implementation helper. Once the `Scst` class is constructed, the 
signature can be obtained through the method `get_signature()`. See the sections below.

#### Implementation helper

SacreEOS provides the `Scst` class, which computes the SCST loss over the inputs using the
selected metric and performs a series of assertions and input checks based on the selected
configuration.

`Scst` Constructor: 
```
class: Scst(scst_class, metric_class, base_class, eos_token, corpus_refss=None,
            verbose=True, metric_args=None, base_args=None):
--------------------------------------------------------------------------------------
Arguments:

    <> scst_class: scst mode class
              - Choices: Scst.SCST_CONFIG_STANDARD, Scst.SCST_CONFIG_NO_EOS                

    <> metric_class: metric class to be selected 
              - Choices: Scst.METRIC_BLEU, Scst.METRIC_CIDER, Scst.METRIC_CIDER_D, Scst.METRIC_CIDER_R.
    
    <> base_class: base reward type 
              - Choices: Scst.BASE_AVERAGE, Scst.BASE_GREEDY.
    
    <> eos_token: str : end of sequence token
    
    <> corpus_refss: list of lists of sentences. 
                     The document frequencies computation is not affected by how sentences 
                     are grouped inside the list of lists; the format is required only
                     for consistency with the one expected by the loss computation.

    <> verbose: bool
    
    Further customizations: 
        leave the following arguments as None to keep the standard configurations.

    <> metric_args: dictionary with metric arg names as keys and custom arg values as values
            - Scst.METRIC_BLEU: requires no args
            - Scst.METRIC_CIDER:  {'n': int}                # max n-gram length
            - Scst.METRIC_CIDER_D : {'n': int,  
                                     'sigma': float}        # gaussian length-penalty factor
            - Scst.METRIC_CIDER_R: {'n': int,               
                                    'repeat_coeff': float,  # repetition and length penalty weights,
                                    'length_coeff': float,  #   the two must sum to 1
                                    'alpha': float}

    <> base_args: dictionary with base arg names as keys and custom arg values as values
            - {'nspi': int}  # number of samples per input/image.



```
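
For instance, a minimal construction sketch. The import path, the `<eos>` token and the toy corpus below are
illustrative assumptions; the argument values spell out the defaults implied by the example signatures above:

```
# A minimal sketch, assuming `Scst` is importable from the package root.
from sacreeos import Scst

# Toy corpus: one list of reference sentences per image (hypothetical data).
corpus_refss = [
    ['a man is riding a horse <eos>', 'a person rides a horse <eos>'],
    ['a cat sleeps on a couch <eos>', 'a cat is lying on a sofa <eos>'],
]

scst = Scst(scst_class=Scst.SCST_CONFIG_STANDARD,
            metric_class=Scst.METRIC_CIDER_D,
            base_class=Scst.BASE_AVERAGE,
            eos_token='<eos>',
            corpus_refss=corpus_refss,
            metric_args={'n': 4, 'sigma': 6.0},  # CIDEr-D defaults, written out
            base_args={'nspi': 5})

# Automatic signature generation:
print(scst.get_signature())
# -> STANDARD_wInit+Cider-D[n4,s6.0]+average[nspi5]+1.0.0
```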

Scst loss computation method:
```
method compute_scst_loss(sampled_preds: List[List[str]], sampled_logprobs: torch.tensor,
                         refss: List[List[str]], base_preds=None, eos_pos_upthresh=None,
                         reduction='mean', get_stat_data=False):
--------------------------------------------------------------------------------------
ARGUMENTS:

    <> sampled_preds: List[List[str]] of shape (batch_size, nspi): is the list of predicted sequences.    

    <> sampled_logprobs: tensor of shape (batch_size, nspi, seq_len): the log-probabilities of the predicted sequences.           
            -  The function does not perform padding on `sampled_logprobs`; the value of padding elements must be zero.
               Because of the popular practice of sub-word tokenization (such as BPE), this method cannot safely perform
               padding by itself by aligning the elements of `sampled_logprobs` and `sampled_preds`.

    <> refss: List[List[str]] of shape (batch_size, *): the list of reference lists.
            - refss[i] are expected to be the references of sampled_preds[i]
            - * can be any number of references

    <> base_preds: None or List[List[str]] of shape (batch_size, nspi): if not None, it contains the base sequences.
    
    <> eos_pos_upthresh: (end-of-sequence position upper threshold) defines the length up to which the
       method will ensure `eos_token` termination. Since SCST is a learning process, in some cases,
       especially in the early iterations, sampled sequences may not end with the `eos_token` because they
       reached the maximum sequence length defined by the model. If `None`, it is set to the last dimension
       of the `sampled_logprobs` argument.
            - It should preferably be set to the model's max_len if no sub-word techniques are applied, but
              None (hence the `sampled_logprobs` last dimension) should ensure a wide enough error-catching net.

    <> reduction: str: reduction method (None, 'sum', 'mean').
    <> get_stat_data: bool: if True, returns not only the loss but also the reward and the base reward.
    
RETURN:
      (r-b) * (-sampled_logprobs.sum(dim=-1))
    Where:
        <> r represents the reward of `sampled_preds`
        <> b is the reward calculated on base sentences.
```
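
Continuing from the `scst` instance above, a rough call sketch. The sentences and tensors are dummies; in
practice `sampled_preds` and `sampled_logprobs` come from the model's sampling pass, and the log-probabilities
carry gradients:

```
import torch

batch_size, nspi, seq_len = 2, 5, 20

# nspi sampled sequences per image, each terminated by the eos token.
sampled_preds = [['a man riding a horse <eos>'] * nspi,
                 ['a cat on a couch <eos>'] * nspi]

# refss[i] are the references of sampled_preds[i].
refss = [['a person rides a horse <eos>', 'a man is riding a horse <eos>'],
         ['a cat sleeps on a couch <eos>']]

# Dummy log-probabilities; padded positions must be zero (see above).
sampled_logprobs = torch.zeros(batch_size, nspi, seq_len, requires_grad=True)

# With BASE_AVERAGE the base is presumably derived from the samples themselves,
# so no base_preds are passed here (assumption).
loss = scst.compute_scst_loss(sampled_preds=sampled_preds,
                              sampled_logprobs=sampled_logprobs,
                              refss=refss,
                              base_preds=None,
                              reduction='mean')
loss.backward()  # standard PyTorch autograd step
```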

For automatic signature generation, call `get_signature()` on the class instance.

An example of usage can be found in the demo.


- - - 

#### Demo

The demo sub-package provides a usage example of the SacreEOS helper functionality. <br>
It implements a small image captioning system trained with SCST.

##### Requirements

The demo requires a pre-trained model and data sampled from COCO. They
can be found in this [drive](https://drive.google.com/drive/folders/1dCFLY0zBRKlV3QQlv6AiadzKaatnQHG8?usp=share_link)
(~170MB); all contents are expected to be placed in `/demo/data/`. <br>

The directories and files in `/demo/` should look like this:
```
demo
├── data
│   ├── demo_data.pickle            -> model pre-trained on XE and training samples
│   ├── coco_train_refss.pickle     -> COCO corpus references
│   ├── dataset.py                  
│   ├── sample_coco_refss.py        -> additional samples for testing
│   └── sample_coco_test.py
├── demo_launcher.py                -> training script
├── __init__.py
└── net
    ├── layers.py                   -> model sub-layers
    ├── model.py                    -> model definition and sampling procedures
    └── utils.py                    -> naive utility functions

```

The training script does not require arguments and can be launched as follows:
```
python demo_launcher.py
```

This is the expected result, although it may differ because of stochastic factors and
the small dataset sample:

```
Standard Scst training started...
Standard Scst signature: STANDARD_wInit+Cider-D[n4,s6.0]+average[nspi5]+1.0.0
Training epoch 49: 100%|████████████████████████| 50/50 [45:20<00:00, 54.42s/it]
No artifacts.
Note: because of the poor train set, the standard case may present few artifacts as well
NoEos Scst training started...
NoEos scst signature: NO<EOS>_wInit+Cider-D[n4,s6.0]+average[nspi5]+1.0.0
Training epoch 49: 100%|████████████████████████| 50/50 [42:36<00:00, 51.12s/it]
There are 531 artifacts.
End.
```


- - -

#### Credits

This project is based on the work of [Rennie et al., 2017](https://arxiv.org/abs/1612.00563)
and inspired by [SacreBLEU (Matt Post, 2018)](https://arxiv.org/abs/1804.08771).

Reference:
```
@article{hu2023request,
  title={A request for clarity over the End of Sequence token in the Self-Critical Sequence Training},
  author={Hu, Jia Cheng and Cavicchioli, Roberto and Capotondi, Alessandro},
  journal={arXiv preprint arXiv:2305.12254},
  year={2023}
}
```



            
