# SoaM

[![pipeline status](https://gitlab.com/mutt_data/soam/badges/master/pipeline.svg)](https://gitlab.com/mutt_data/soam/-/commits/master) [![coverage report](https://gitlab.com/mutt_data/soam/badges/master/coverage.svg)](https://gitlab.com/mutt_data/soam/-/commits/master) [![pypi version](https://img.shields.io/pypi/v/soam?color=blue)](https://pypi.org/project/soam/)

SoaM is a [Prefect](https://docs.prefect.io/)-based library created by [Mutt Data](https://muttdata.ai/).
Its goal is to provide a forecasting framework, built from the combined experience of previous
projects. Hence the name: Son of a Mutt = SoaM.

## SoaM pipeline

<!-- gfmd-start -->
![Mermaid diagram](https://kroki.io/mermaid/svg/eNpljbEOgjAURXe_4o0wEBQQnJzUxEQSIm6EocITmwAlrx1MCP-uloIkdnu959xbEeuecLmu4PN4uc6sA1PsziTC2c4dZ89LL3NTwWK48QYhReIo4fhSxAolyM2NuVmas2pCT1_-VESslQ9BDf50XxOBIU6CsGBSLYBt30-_vK0gFiXWwzBaBgn0FWaW7kgIS14oLlppTyWhJqK_FUhqoZZrI7gz4BU7Qd_ZOY_G_A05RWCs)

<details>
<summary><sup><sub>Diagram source code</sub></sup></summary>

```mermaid
graph LR
    id0[(Database I)]-->id2[/SoaM Time Series Extractor/]
    id1[(Database II)]-->id2
    id2-->id3[/SoaM Transformer/]
    id3-->id4[/SoaM Forecaster/]
    id5{{Forecasting Model}}-->id4
    id4-->id6[(SoaM Predictions)]
    id6-->id7[/SoaM Forecaster Plotter/]
    id6-->id8[/SoaM Reporting/]
    id7-->id8
```
</details>
<!-- gfmd-end -->
This library's pipeline supports any data source.
The process is structured in different stages:
* Extraction: manages the granularity and aggregation of the input data.
* Preprocessing: provides out-of-the-box tools for standard tasks such as normalization or filling NaN values.
* Forecasting: fits a model and predicts results.
* Postprocessing: adjusts the results based on business/real-world information or builds analyses on the predicted values,
such as anomaly detection.

## Overview of the Steps Run in SoaM

### Extraction
This stage extracts data from the needed sources to build the condensed dataset for the next steps. This tends to be
project dependent. It then converts the full dataset to the desired time granularity and aggregation level by some categorical attribute(s), as sketched below.
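
For illustration, a minimal sketch of this step in plain pandas (the column names and frequencies are hypothetical; SoaM's own extractor encapsulates this):

```python
import pandas as pd

# Hypothetical raw data: one row per event, with a categorical attribute.
raw = pd.DataFrame(
    {
        "ts": pd.date_range("2023-01-01", periods=96, freq="h"),
        "country": ["AR", "BR"] * 48,
        "value": range(96),
    }
)

# Condense to daily granularity, aggregated per categorical attribute.
daily = (
    raw.groupby(["country", pd.Grouper(key="ts", freq="D")])["value"]
    .sum()
    .reset_index()
)
```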

### Preprocessing
This step implements functions to further clean up and prepare the data for the following steps (see the pandas sketch after this list), such as:
* Add features/transformations
* Fill NaN values
* Apply value normalizations
* Shift values
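
As a rough illustration of these tasks in plain pandas (the column names are made up; SoaM wraps such operations as reusable steps):

```python
import pandas as pd

df = pd.DataFrame({"y": [1.0, None, 3.0, 4.0]})

df["y"] = df["y"].fillna(df["y"].mean())                   # fill NaN values
df["y_norm"] = (df["y"] - df["y"].mean()) / df["y"].std()  # normalization
df["y_lag1"] = df["y"].shift(1)                            # shift values
df["is_weekend"] = (
    pd.date_range("2023-01-01", periods=len(df)).dayofweek >= 5
)                                                          # added feature
```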

### Forecasting
This stage receives the clean data, performs the forecast and stores the predicted values in the defined storages.
Currently there are implementations to store in CSV files and SQL databases.
A variety of models are currently supported to fit and predict data; they can be extended to create custom ones. A standalone fitting sketch follows the list.
* [Exponential Smoothing](https://www.statsmodels.org/stable/examples/notebooks/generated/exponential_smoothing.html)
* [Orbit DLT Full](https://orbit-ml.readthedocs.io/en/latest/tutorials/dlt.html)
* [Prophet](https://pypi.org/project/prophet)
* [SARIMAX](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html)
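
For example, fitting one of these model families directly through statsmodels looks roughly like this (a self-contained sketch with synthetic data, not SoaM's wrapper API):

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic daily series with a weekly pattern.
idx = pd.date_range("2023-01-01", periods=60, freq="D")
y = pd.Series(range(60), index=idx, dtype=float)
y += 10 * (idx.dayofweek >= 5)

model = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=7)
fitted = model.fit()
forecast = fitted.forecast(14)  # predict the next 14 days
```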

### Backtesting
#### Window policies
To do backtesting the data is split into training and validation sets; there are two splitting methods (both sketched below):
- Sliding: creates a fixed-size window for the training data that ends at the beginning of the validation data.
- Expanding: builds the training data from all available data since the start of the series up to the validation data.

For more information review this document: [backtesting at scale](https://eng.uber.com/backtesting-at-scale/)
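
Index-wise, the two policies can be sketched as plain Python generators (hypothetical helper names, shown only to make the window shapes concrete):

```python
def sliding_splits(n, train_size, val_size):
    """Fixed-size training window ending where the validation window starts."""
    start = 0
    while start + train_size + val_size <= n:
        train = list(range(start, start + train_size))
        val = list(range(start + train_size, start + train_size + val_size))
        yield train, val
        start += val_size


def expanding_splits(n, min_train_size, val_size):
    """Training window growing from the start of the series."""
    end = min_train_size
    while end + val_size <= n:
        yield list(range(end)), list(range(end, end + val_size))
        end += val_size
```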

### Postprocessing
This last stage is prepared to work on the forecasts generated by the pipeline, for example (a toy sketch follows the list):
* Clip/Cleanup the predictions.
* Perform further analyses (such as anomaly detection).
* Export reports.
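
A toy pandas illustration of the first two items (SoaM ships its own postprocessing steps; this only fixes ideas):

```python
import pandas as pd

forecast = pd.Series([120.0, -3.0, 95.0, 410.0])

# Clip: e.g. negative demand makes no business sense.
clipped = forecast.clip(lower=0)

# Naive anomaly flag: values far from the mean (illustrative only).
z = (clipped - clipped.mean()) / clipped.std()
anomalies = clipped[z.abs() > 2]
```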

## Table of Contents
- [SoaM](#soam)
  - [SoaM pipeline](#soam-pipeline)
  - [Overview of the Steps Run in SoaM](#overview-of-the-steps-run-in-soam)
    - [Extraction](#extraction)
    - [Preprocessing](#preprocessing)
    - [Forecasting](#forecasting)
    - [Backtesting](#backtesting)
      - [Window policies](#window-policies)
    - [Postprocessing](#postprocessing)
  - [Table of Contents](#table-of-contents)
  - [Installation](#installation)
    - [Install extras](#install-extras)
  - [Quick start](#quick-start)
  - [Usage](#usage)
  - [Database management](#database-management)
    - [Alembic](#alembic)
  - [Developers guide](#developers-guide)
  - [Testing](#testing)
    - [Testing data extraction](#testing-data-extraction)
    - [Testing plots](#testing-plots)
  - [Contributing](#contributing)
  - [CI](#ci)
  - [Rules of Thumb](#rules-of-thumb)
  - [Credits](#credits)
  - [License](#license)

## Installation
Install the base lib from [PyPI](https://pypi.org/project/soam/) by executing:
```bash
pip install soam
```
Or clone this repository:
```bash
git clone [soam-repo]
```
And then run:
```bash
pip install .
# or, for an editable (development) install:
pip install -e .
```

### Install extras
The project contains some extra dependencies that are not included in the default installation to keep it lightweight. If you want to install the extras, use:
```bash
pip install -e ".[slack]"
pip install -e ".[prophet]"
pip install -e ".[pdf_report]"
pip install -e ".[gsheets_report]"
pip install -e ".[report]" # slack and *_report extras
pip install -e ".[all]" # all previous
```

*Note*: The `pdf_report` extra might need the following command to be run before installation ([more info](https://nbconvert.readthedocs.io/en/latest/install.html#installing-tex)):

$ `apt-get install texlive-xetex texlive-fonts-recommended libpoppler-cpp-dev`

## Quick start
[Here](https://gitlab.com/mutt_data/soam/-/blob/master/notebook/examples/quickstart.ipynb) is an example for a quick start into SoaM. In it, a time series with AAPL stock prices is loaded, processed and forecasted. There's also [another example](https://gitlab.com/mutt_data/soam/-/blob/master/notebook/examples/soamflowrun.ipynb) with the same steps, but exploiting the power of flows.

## Usage
For further info check our [end to end](https://mutt_data.gitlab.io/soam/end2end.html) example, where we explain how SoaM interacts with Airflow and Cookiecutter on a generic project.

## Database management
For database storage there are complementary tools:
* [Decouple](https://github.com/henriquebastos/python-decouple) keeps the database information in a separate file: a `settings.ini` file stores the database credentials and general configurations. When modifying it, don't change the key names (see the snippet after this list).
* [Alembic](https://alembic.sqlalchemy.org/en/latest/) creates the database migrations. A brief description is below.
* [SQLAlchemy](https://docs.sqlalchemy.org/en/) works as an ORM; the table schemas are defined in [data_models](https://gitlab.com/mutt_data/soam/-/blob/master/soam/data_models.py).
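
For reference, this is how python-decouple reads values from `settings.ini` (the key names below are placeholders; keep the ones SoaM's file already defines):

```python
from decouple import config

# Hypothetical keys; use the names already present in settings.ini.
db_user = config("DB_USER")
db_password = config("DB_PASSWORD")
conn_str = f"postgresql://{db_user}:{db_password}@localhost/soam"
```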

### Alembic
This package uses alembic and expects you to use it!

Alembic is a database migration tool for usage with SQLAlchemy.
After the schemas are defined with SQLAlchemy, Alembic keeps track of database modifications such as adding new
columns, modifying a schema or adding new tables.

Alembic is set up to use the credentials from the `settings.ini` file and get the defined models from `data_models`.
Be aware that alembic needs this package installed to run!

Whenever you change the data models, you need those changes to reach the database. To do so, run:

```bash
alembic revision --autogenerate
alembic upgrade head
```

The first command will check the latest version of the database and will
[autogenerate](https://alembic.sqlalchemy.org/en/latest/autogenerate.html#what-does-autogenerate-detect-and-what-does-it-not-detect)
the Python file with the necessary changes.
It is always necessary to manually review and correct the candidate migrations that autogenerate produces.

The second command will use this file to apply the changes to the database.
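
An autogenerated revision is just a Python file with `upgrade`/`downgrade` functions; a hypothetical example of what you would review (the table, column and revision names are made up):

```python
"""add run timestamp column (hypothetical example)"""
from alembic import op
import sqlalchemy as sa

# Revision identifiers, filled in by Alembic on generation.
revision = "abc123"
down_revision = "def456"


def upgrade():
    op.add_column("soam_runs", sa.Column("run_ts", sa.DateTime(), nullable=True))


def downgrade():
    op.drop_column("soam_runs", "run_ts")
```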

For more Alembic commands visit the [documentation](https://alembic.sqlalchemy.org/en/latest/).

## Developers guide
If you are going to develop SoaM, you should check out the documentation directory before adding code.
You can start with the [project structure document](https://mutt_data.gitlab.io/soam/project_structure.html).

## Testing
To run the default test suite, run:
```bash
pytest
```
To run the tests with nox:
```bash
nox --session tests
```

### Testing data extraction
The tests for the extractor currently depend on having a local Postgres database and
the variable `TEST_DB_CONNSTR` set with its connection string.

The easiest way to do this is as follows:
```bash
docker run --network=host \
    -e "POSTGRES_USER=soam" \
    -e "POSTGRES_PASSWORD=soam" \
    -e "POSTGRES_DB=soam" \
    --rm postgres

TEST_DB_CONNSTR="postgresql://soam:soam@localhost/soam" pytest
```

To run a specific test file:
```bash
TEST_DB_CONNSTR="postgresql://soam:soam@localhost/soam" pytest -v tests/test_file.py
```

Note that even though the example specifies a DB name, a new database is created and dropped during the tests to ensure that no state is maintained between runs.

### Testing plots
To generate images for testing we use [pytest-mpl](https://github.com/matplotlib/pytest-mpl) as follows:

```bash
pytest --mpl-generate-path=tests/plotting/baseline
```

To run the image based tests:
```bash
pytest --mpl
```
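
An image-based test is a regular pytest function that returns a matplotlib figure; a minimal hypothetical example:

```python
import matplotlib.pyplot as plt
import pytest


@pytest.mark.mpl_image_compare  # compared against the stored baseline image
def test_forecast_plot():
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3], [4, 5, 6])
    return fig
```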

## Contributing
We appreciate you considering helping to maintain this project. If you'd like to contribute, please read our [contributing guidelines](https://mutt_data.gitlab.io/soam/CONTRIBUTING.html).

## CI

To run the CI jobs locally, run them with [nox](https://nox.thea.codes/en/stable/).
In the project root directory there is a `noxfile.py` file defining all the jobs; these jobs are executed by the CI and can also be called locally.

You can run all the jobs with the command `nox` from the project root directory, or run just one job with, for example, `nox --session test`.

The `.gitlab-ci.yml` file configures the GitLab CI to run nox.
Nox lets us execute tests and checks before making the commit.
We are using:
* Linting job:
  * [isort](https://pycqa.github.io/isort/) to reorder imports
  * [pylint](https://github.com/PyCQA/pylint) to check PEP 8 compliance
  * [black](https://github.com/psf/black) to enforce code formatting conventions
  * [mypy](http://mypy-lang.org/) for static type checking
* [bandit](https://bandit.readthedocs.io/en/latest/) for security checks
* [pytest](https://docs.pytest.org/) to run all the tests in the test folder.
* [pyreverse](https://pythonhosted.org/theape/documentation/developer/explorations/explore_graphs/explore_pyreverse.html) to create diagrams of the project

These jobs run on a GitLab machine after every commit.

We are caching the environments for each job on each branch.
On the first commit of a branch, and also whenever you add dependencies or a new package to the project, you will have to change the cache policy.
GitLab cache policies:
* `pull`: pull the cached files from the cloud.
* `push`: push the created files to the cloud.
* `pull-push`: pull the cached files and push the newly created files.

## Rules of Thumb
This section contains some recommendations when working with SoaM to avoid common mistakes:

* When possible, reuse objects to preserve their configuration,
e.g. transformations, forecasters, etc.
* Use the same train-test windows when backtesting, when training the model to deploy, and on later usage.

## Credits
Alejandro Rusi <br>
Diego Leguizamón <br>
Diego Lizondo <br>
Eugenio Scafati <br>
Fabian Wolfmann <br>
Federico Font <br>
Francisco Lopez Destain <br>
Guido Trucco <br>
Hugo Daniel Viotti <br>
Juan Martin Pampliega <br>
Pablo Andres Lorenzatto <br>
Wenceslao Villegas

## License
`soam` is licensed under the [Apache License 2.0](https://gitlab.com/mutt_data/muttlib/-/blob/master/LICENCE).