roiloc
======

:Version: 0.4.0
:Summary: A simple package to center and crop T1w & T2w MRIs around a given region of interest by its name.
:Author: Clément POIRET
:Upload time: 2024-10-29 08:57:41
:Requires Python: >=3.9
:License: MIT License
:Keywords: mri, brain, t1w, t2w, registration
:Homepage: https://hippomnesis.dev
:Repository: https://github.com/clementpoiret/ROILoc
=================
Welcome to ROILoc
=================

ROILoc is a registration-based ROI locator built on the MNI152 09c Sym template and the CerebrA atlas. It centers and crops T1w or T2w MRIs around a given ROI.
Results are saved in "LPI-" (or "RAS+") orientation.

.. image:: https://raw.githubusercontent.com/clementpoiret/ROILoc/main/example.png
  :width: 800
  :alt: Example: using ROILoc for Hippocampus
  
If the results aren't correct, consider performing BET/skull stripping on your subject's MRI beforehand, then pass the ``-b`` flag.
You can use FSL or ANTs to perform BET. I personally also had great and fast results with `deepbrain <https://github.com/iitzco/deepbrain>`_, which depends on TensorFlow 1.X.

It requires the following packages:

- ANTs (either a system or Anaconda installation),
- ANTsPyX,
- Rich.


CLI
***

usage::

    roiloc [-h] -p PATH -i INPUTPATTERN [-r ROI [ROI ...]] -c CONTRAST [-b]
           [-t TRANSFORM] [-m MARGIN [MARGIN ...]] [--rightoffset RIGHTOFFSET [RIGHTOFFSET ...]]
           [--leftoffset LEFTOFFSET [LEFTOFFSET ...]] [--mask MASK]
           [--extracrops EXTRACROPS [EXTRACROPS ...]] [--savesteps]

arguments::

  -h, --help            show this help message and exit
  -p PATH, --path PATH  <Required> Input images path.
  -i INPUTPATTERN, --inputpattern INPUTPATTERN
                        <Required> Pattern to find input images in input path
                        (e.g.: `**/*t1*.nii.gz`).
  -r ROI [ROI ...], --roi ROI [ROI ...]
                        ROI included in CerebrA. See
                        `roiloc/MNI/cerebra/CerebrA_LabelDetails.csv` for more
                        details. Default: 'Hippocampus'.
  -c CONTRAST, --contrast CONTRAST
                        <Required> Contrast of the input MRI. Can be `t1` or
                        `t2`.
  -b, --bet             Flag to use the BET version of the MNI152 template.
  -t TRANSFORM, --transform TRANSFORM
                        Type of registration. See `https://antspy.readthedocs.
                        io/en/latest/registration.html` for the complete list
                        of options. Default: `AffineFast`
  -m MARGIN [MARGIN ...], --margin MARGIN [MARGIN ...]
                        Margin to add around the bounding box, in voxels. It
                        has to be a list of 3 integers controlling the margin
                        along the three axes (0: left/right, 1: post/ant,
                        2: inf/sup). Default: [8,8,8]
  --rightoffset RIGHTOFFSET [RIGHTOFFSET ...]
                        Offset to add to the bounding box of the right ROI, in
                        voxels. It has to be a list of 3 integers controlling
                        the offset along the three axes (0: left to right,
                        1: post to ant, 2: inf to sup). Default: [0,0,0]
  --leftoffset LEFTOFFSET [LEFTOFFSET ...]
                        Offset to add to the bounding box of the left ROI, in
                        voxels. It has to be a list of 3 integers controlling
                        the offset along the three axes (0: left to right,
                        1: post to ant, 2: inf to sup). Default: [0,0,0]
  --mask MASK           Pattern for brain tissue mask to improve registration
                        (e.g.: `sub_*bet_mask.nii.gz`). If providing a BET
                        mask, please also pass `-b` to use a BET MNI template.
  --extracrops EXTRACROPS [EXTRACROPS ...]
                        Pattern for other files to crop (e.g. manual
                        segmentation: '*manual_segmentation_left*.nii.gz').
  --savesteps           Flag to save intermediate files (e.g. registered
                        atlas).
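To make the margin and offset semantics concrete, here is a minimal, self-contained sketch in plain Python (illustrative only, not roiloc's internal code): a per-axis margin widens the bounding box on both sides, while a per-axis offset shifts the whole box.

.. code-block:: python

    def expand_bbox(lower, upper, margin=(8, 8, 8), offset=(0, 0, 0)):
        """Illustrative only: widen a voxel bounding box by `margin` on each
        side of every axis, then shift it by `offset` (axis 0: left/right,
        axis 1: post/ant, axis 2: inf/sup). Mirrors the CLI semantics but is
        not roiloc's actual implementation."""
        new_lower = [lo - m + o for lo, m, o in zip(lower, margin, offset)]
        new_upper = [up + m + o for up, m, o in zip(upper, margin, offset)]
        return new_lower, new_upper

    # A hypothetical hippocampus bounding box in voxel coordinates:
    lower, upper = expand_bbox([60, 80, 40], [90, 120, 70], margin=(8, 8, 8))
    print(lower, upper)  # -> [52, 72, 32] [98, 128, 78]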


Python API
**********

Although the CLI is the main use case, a Python API has also been available since v0.2.0.

The API follows scikit-learn's conventions: a ``RoiLocator`` class exposes ``fit``, ``transform``, ``fit_transform`` and ``inverse_transform`` methods, as shown below.

.. code-block:: python

    import ants
    from roiloc.locator import RoiLocator

    image = ants.image_read("./sub00_t2w.nii.gz",
                            reorient="LPI")

    locator = RoiLocator(contrast="t2", roi="hippocampus", bet=False)

    # Fit the locator and get the transformed MRIs
    right, left = locator.fit_transform(image)
    # Coordinates can be retrieved with `get_coords()`
    print(locator.get_coords())

    # Let 'model' be a segmentation model of the hippocampus
    right_seg = model(right)
    left_seg = model(left)

    # Transform the segmentation back to the original image
    right_seg = locator.inverse_transform(right_seg)
    left_seg = locator.inverse_transform(left_seg)

    # Save the resulting segmentations in the original space
    ants.image_write(right_seg, "./sub00_hippocampus_right.nii.gz")
    ants.image_write(left_seg, "./sub00_hippocampus_left.nii.gz")
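The fit/transform/inverse_transform contract can be illustrated with a toy 1-D analogue in plain Python (no ANTs required): ``fit`` records the crop coordinates, ``transform`` extracts the sub-array, and ``inverse_transform`` writes a result back into the original space. This only mirrors the scikit-learn-style pattern; it is not ``RoiLocator``'s implementation.

.. code-block:: python

    class ToyLocator:
        """Toy 1-D analogue of the fit/transform/inverse_transform pattern;
        illustrative only, not roiloc's RoiLocator."""

        def fit(self, data):
            # Pretend the "ROI" is wherever the signal is non-zero.
            idx = [i for i, v in enumerate(data) if v != 0]
            self.start, self.stop = idx[0], idx[-1] + 1
            self.size = len(data)
            return self

        def transform(self, data):
            return data[self.start:self.stop]

        def fit_transform(self, data):
            return self.fit(data).transform(data)

        def inverse_transform(self, cropped):
            # Place the cropped result back at the recorded coordinates.
            out = [0] * self.size
            out[self.start:self.stop] = cropped
            return out

    signal = [0, 0, 3, 5, 2, 0, 0]
    loc = ToyLocator()
    crop = loc.fit_transform(signal)        # [3, 5, 2]
    restored = loc.inverse_transform(crop)  # back in the original space
    assert restored == signal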

Development Environment
***********************

ROILoc relies on Nix_ and Devenv_.

.. _Nix: https://nixos.org/download/
.. _Devenv: https://devenv.sh

**Step 1**: Install Nix_:
::

    sh <(curl -L https://nixos.org/nix/install) --daemon

**Step 2**: Install Devenv_:
::

    nix-env -iA devenv -f https://github.com/NixOS/nixpkgs/tarball/nixpkgs-unstable

**Step 3**: Enter the development shell:
::

    devenv shell

That's it :)

If you want something even easier, install direnv_ and
allow it to automatically activate the current env (``direnv allow``).

.. _direnv: https://direnv.net/


Installation
************

1/ Be sure to have a working ANTs installation (`see on GitHub <https://github.com/ANTsX/ANTs>`_).

2/ Run ``pip install roiloc`` (requires Python >= 3.9).


Example
********

Let's say I have a main database folder containing one subfolder per subject. Each subject folder has a T2w MRI called ``tse.nii.gz`` and a brain mask called ``brain_mask.nii``.

Therefore, to extract both left and right hippocampi (``Hippocampus``), I can run: 

``roiloc -p "~/Datasets/MemoDev/ManualSegmentation/" -i "**/tse.nii.gz" -r "hippocampus" -c "t2" -b -t "AffineFast" -m 16 2 16 --mask "*brain_mask.nii"``


Supported Registrations
***********************

(Taken from ANTsPyX's doc)

- ``Translation``: Translation transformation.
- ``Rigid``: Rigid transformation: Only rotation and translation.
- ``Similarity``: Similarity transformation: scaling, rotation and translation.
- ``QuickRigid``: Rigid transformation: Only rotation and translation. May be useful for quick visualization fixes.
- ``DenseRigid``: Rigid transformation: Only rotation and translation. Employs dense sampling during metric estimation.
- ``BOLDRigid``: Rigid transformation: Parameters typical for BOLD to BOLD intrasubject registration.
- ``Affine``: Affine transformation: Rigid + scaling.
- ``AffineFast``: Fast version of Affine.
- ``BOLDAffine``: Affine transformation: Parameters typical for BOLD to BOLD intrasubject registration.
- ``TRSAA``: Translation, rigid, similarity, affine (twice). Please set ``regIterations`` if using this option. This would be used in cases where you want a really high-quality affine mapping (perhaps with a mask).
- ``ElasticSyN``: Symmetric normalization: Affine + deformable transformation, with mutual information as optimization metric and elastic regularization.
- ``SyN``: Symmetric normalization: Affine + deformable transformation, with mutual information as optimization metric.
- ``SyNRA``: Symmetric normalization: Rigid + Affine + deformable transformation, with mutual information as optimization metric.
- ``SyNOnly``: Symmetric normalization: no initial transformation, with mutual information as optimization metric. Assumes images are aligned by an initial transformation. Can be useful if you want to run an unmasked affine followed by masked deformable registration.
- ``SyNCC``: SyN, but with cross-correlation as the metric.
- ``SyNabp``: SyN optimized for abpBrainExtraction.
- ``SyNBold``: SyN, but optimized for registrations between BOLD and T1 images.
- ``SyNBoldAff``: SyN, but optimized for registrations between BOLD and T1 images, with additional affine step.
- ``SyNAggro``: SyN, but with more aggressive registration (fine-scale matching and more deformation). Takes more time than SyN.
- ``TVMSQ``: Time-varying diffeomorphism with a mean-square metric.
- ``TVMSQC``: Time-varying diffeomorphism with a mean-square metric, for very large deformations.


Supported ROIs
**************

- Caudal Anterior Cingulate,
- Caudal Middle Frontal,
- Cuneus,
- Entorhinal,
- Fusiform,
- Inferior Parietal,
- Inferior Temporal,
- Isthmus Cingulate,
- Lateral Occipital,
- Lateral Orbitofrontal,
- Lingual,
- Medial Orbitofrontal,
- Middle Temporal,
- Parahippocampal,
- Paracentral,
- Pars Opercularis,
- Pars Orbitalis,
- Pars Triangularis,
- Pericalcarine,
- Postcentral,
- Posterior Cingulate,
- Precentral,
- Precuneus,
- Rostral Anterior Cingulate,
- Rostral Middle Frontal,
- Superior Frontal,
- Superior Parietal,
- Superior Temporal,
- Supramarginal,
- Transverse Temporal,
- Insula,
- Brainstem,
- Third Ventricle,
- Fourth Ventricle,
- Optic Chiasm,
- Lateral Ventricle,
- Inferior Lateral Ventricle,
- Cerebellum Gray Matter, 
- Cerebellum White Matter,
- Thalamus,
- Caudate,
- Putamen,
- Pallidum,
- Hippocampus,
- Amygdala,
- Accumbens Area,
- Ventral Diencephalon,
- Basal Forebrain,
- Vermal lobules I-V,
- Vermal lobules VI-VII,
- Vermal lobules VIII-X.


Cite this work
**************

If you use this software, please cite it as below::

    authors:
      - family-names: Poiret
        given-names: Clément
        orcid: https://orcid.org/0000-0002-1571-2161
    title: "clementpoiret/ROILoc: Zenodo Release"
    version: v0.2.4
    date-released: 2021-09-14

Example: 

``Clément POIRET. (2021). clementpoiret/ROILoc: Zenodo Release (v0.2.4). Zenodo. https://doi.org/10.5281/zenodo.5506959``


            
