viadot

Name: viadot
Version: 0.4.23
Home page: https://github.com/dyvenia/viadot
Summary: A simple data ingestion library to guide data flows from some places to other places
Upload time: 2023-12-07 12:03:26
Author: Alessio Civitillo
Requirements: azure-core, azure-storage-blob, click, black, pandas, prefect, pyarrow, pyodbc, pytest, pytest-cov, requests, openpyxl, jupyterlab, azure-keyvault, azure-identity, great-expectations, matplotlib, adlfs, PyGithub, Shapely, imagehash, visions, sharepy, simple_salesforce, sql-metadata, duckdb, google-auth, sendgrid, pandas-gbq, pydantic, PyMySQL, paramiko, sshtunnel, databricks-connect, tzlocal, O365, aiohttp, aiolimiter, protobuf, avro-python3, pygit2, dbt-core, dbt-sqlserver, lumaCLI, Office365-REST-Python-Client, TM1py, nltk, scikit-learn
# Viadot
[![build status](https://github.com/dyvenia/viadot/actions/workflows/build.yml/badge.svg)](https://github.com/dyvenia/viadot/actions/workflows/build.yml)
[![formatting](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![codecov](https://codecov.io/gh/Trymzet/dyvenia/branch/main/graph/badge.svg?token=k40ALkXbNq)](https://codecov.io/gh/Trymzet/dyvenia)
---

**Documentation**: <a href="https://dyvenia.github.io/viadot/" target="_blank">https://dyvenia.github.io/viadot/</a>

**Source Code**: <a href="https://github.com/dyvenia/viadot" target="_blank">https://github.com/dyvenia/viadot</a>

---

A simple data ingestion library to guide data flows from some places to other places.

## Getting Data from a Source

Viadot supports several API and RDBMS sources, both private and public. The examples below use the public UK Carbon Intensity API.

```python
from viadot.sources.uk_carbon_intensity import UKCarbonIntensity
ukci = UKCarbonIntensity()
ukci.query("/intensity")
df = ukci.to_df()
df
```

**Output:**
|    | from              | to                |   forecast |   actual | index    |
|---:|:------------------|:------------------|-----------:|---------:|:---------|
|  0 | 2021-08-10T11:00Z | 2021-08-10T11:30Z |        211 |      216 | moderate |

The `df` above is a pandas `DataFrame` object containing the data viadot downloaded from the UK Carbon Intensity API.
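
Since the result is a plain `DataFrame`, all standard pandas operations apply. As a quick illustration (recreating the example output above with hypothetical data rather than a live API call):

```python
import pandas as pd

# Recreate the example output above (hypothetical data, no live API call)
df = pd.DataFrame(
    {
        "from": ["2021-08-10T11:00Z"],
        "to": ["2021-08-10T11:30Z"],
        "forecast": [211],
        "actual": [216],
        "index": ["moderate"],
    }
)

# Standard pandas operations apply: parse the timestamps...
df["from"] = pd.to_datetime(df["from"])
# ...or compute the forecast error
df["error"] = df["actual"] - df["forecast"]
print(df[["forecast", "actual", "error"]])
```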

## Loading Data to a Source
Depending on the source, viadot provides different methods of uploading data. For instance, for SQL sources, this would be bulk inserts. For data lake sources, it would be a file upload. We also provide ready-made pipelines including data validation steps using Great Expectations.

An example of loading data into SQLite from a pandas `DataFrame` using the `SQLiteInsert` Prefect task:

```python
from viadot.tasks import SQLiteInsert

insert_task = SQLiteInsert()
insert_task.run(table_name=TABLE_NAME, dtypes=dtypes, db_path=database_path, df=df, if_exists="replace")
```
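
The names `TABLE_NAME`, `dtypes`, `database_path`, and `df` above are placeholders you define yourself. As a rough sketch of what such an insert amounts to, using only the standard library's `sqlite3` module and hypothetical example values (this is not viadot's actual implementation):

```python
import sqlite3

# Hypothetical example values standing in for TABLE_NAME, dtypes, etc.
table_name = "carbon_intensity"
dtypes = {"from_ts": "TEXT", "forecast": "INTEGER", "actual": "INTEGER"}
rows = [("2021-08-10T11:00Z", 211, 216)]

conn = sqlite3.connect(":memory:")  # database_path would go here
cols = ", ".join(f"{name} {dtype}" for name, dtype in dtypes.items())
conn.execute(f"CREATE TABLE IF NOT EXISTS {table_name} ({cols})")

# if_exists="replace" roughly corresponds to clearing the table first
conn.execute(f"DELETE FROM {table_name}")
placeholders = ", ".join("?" for _ in dtypes)
conn.executemany(f"INSERT INTO {table_name} VALUES ({placeholders})", rows)
conn.commit()

print(conn.execute(f"SELECT COUNT(*) FROM {table_name}").fetchone()[0])
```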

## Set up

__Note__: If you're running on Unix, after cloning the repo, you may need to grant executable privileges to the `update.sh` and `run.sh` scripts: 
```
sudo chmod +x viadot/docker/update.sh && \
sudo chmod +x viadot/docker/run.sh
```

### a) user
Clone the `main` branch, enter the `docker` folder, and set up the environment:
```
git clone https://github.com/dyvenia/viadot.git && \
cd viadot/docker && \
./update.sh
```

Run the environment:
```
./run.sh
```

### b) developer
Clone the `dev` branch, enter the `docker` folder, and set up the environment:
```
git clone -b dev https://github.com/dyvenia/viadot.git && \
cd viadot/docker && \
./update.sh -t dev
```

Run the environment:
```
./run.sh -t dev
```

Install the library in development mode (repeat for the `viadot_jupyter_lab` container if needed):
```
docker exec -it viadot_testing pip install -e . --user
```

## Running tests

To run tests, log into the container and run pytest:
```
docker exec -it viadot_testing bash
pytest
```

## Running flows locally

You can run the example flows from the terminal:
```
docker exec -it viadot_testing bash
FLOW_NAME=hello_world; python -m viadot.examples.$FLOW_NAME
```

However, when developing, the easiest way is to use the provided Jupyter Lab container available in the browser at `http://localhost:9000/`.

## Executing Spark jobs
### Setting up
To begin using Spark, first make the following environment variables available; they are read in Python as follows:
```
DATABRICKS_HOST = os.getenv("DATABRICKS_HOST")
DATABRICKS_API_TOKEN = os.getenv("DATABRICKS_API_TOKEN")
DATABRICKS_ORG_ID = os.getenv("DATABRICKS_ORG_ID")
DATABRICKS_PORT = os.getenv("DATABRICKS_PORT")
DATABRICKS_CLUSTER_ID = os.getenv("DATABRICKS_CLUSTER_ID")
```

Alternatively, you can create a file called `.databricks-connect` in the root directory of viadot and add the required variables there, using the following format:
```
{
  "host": "",
  "token": "",
  "cluster_id": "",
  "org_id": "",
  "port": ""
}
```
To retrieve the values, follow step 2 of the [Databricks Connect documentation](https://docs.microsoft.com/en-us/azure/databricks/dev-tools/databricks-connect).
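
A minimal sketch of reading such a configuration, checking environment variables first and falling back to the JSON file (the helper name `load_databricks_config` and the precedence order are illustrative, not viadot's actual behavior):

```python
import json
import os


def load_databricks_config(path=".databricks-connect"):
    """Read Databricks Connect settings from env vars, falling back to a JSON file."""
    keys = {
        "host": "DATABRICKS_HOST",
        "token": "DATABRICKS_API_TOKEN",
        "org_id": "DATABRICKS_ORG_ID",
        "port": "DATABRICKS_PORT",
        "cluster_id": "DATABRICKS_CLUSTER_ID",
    }
    config = {field: os.getenv(env_var) for field, env_var in keys.items()}
    missing = [field for field, value in config.items() if not value]
    if missing and os.path.exists(path):
        with open(path) as f:
            file_config = json.load(f)
        # Environment variables take precedence over file values
        config = {field: config[field] or file_config.get(field, "") for field in keys}
    return config


# Illustrative only: populate one variable in-process and read the config back
os.environ.setdefault("DATABRICKS_HOST", "adb-example.azuredatabricks.net")
config = load_databricks_config()
print(config["host"])
```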

### Executing Spark functions
To begin using Spark, you must first create a Spark Session: `spark = SparkSession.builder.appName('session_name').getOrCreate()`. `spark` will be used to access all the Spark methods. Here is a list of commonly used Spark methods (WIP):
* `spark.createDataFrame(df)`: creates a Spark DataFrame from a pandas DataFrame
* `sparkdf.write.saveAsTable("schema.table")`: takes a Spark DataFrame and saves it as a table in Databricks.
    * Be sure to use the correct schema; it should be created and specified by the administrator.
* `table = spark.sql("select * from schema.table")`: an example of a simple query run from Python


## How to contribute

1. Fork the repository if you do not have write access
2. Set up locally
3. Test your changes with `pytest`
4. Submit a PR. The PR should contain the following:
    - new/changed functionality
    - tests for the changes
    - changes added to `CHANGELOG.md`
    - any other relevant resources updated (esp. `viadot/docs`)

The general workflow when working from a fork:
1. Pull before making any changes
2. Create a new branch with 
```
git checkout -b <name>
```
3. Make your changes to the repository
4. Stage changes with 
```
git add <files>
```
5. Commit the changes with 
```
git commit -m <message>
``` 
__Note__: See our Style Guidelines below for more information about commit messages and PR names.

6. Fetch any upstream changes made while you were working:
```
git fetch <remote> <branch>
git checkout <remote>/<branch>
```
7. Push your changes to the repository:
```
git push origin <name>
```
8. Merge your branch into the target branch:
```
git checkout <where_merging_to>
git merge <branch_to_merge>
```

Please follow the standards and best practices used within the library (e.g. when adding tasks, see how other tasks are constructed). For any questions, please reach out to us here on GitHub.


### Style guidelines
- the code should be formatted with Black using default settings (the easiest way is to use the VSCode extension)
- commit messages should:
    - begin with an emoji
    - start with one of the following verbs, capitalized, immediately after the summary emoji: "Added", "Updated", "Removed", "Fixed", "Renamed", or, occasionally, another verb such as "Upgraded" or "Downgraded" when more appropriate
    - contain a useful description of what the commit is doing

## Set up Black for development in VSCode
Code you contribute should be formatted with Black. To set it up in Visual Studio Code, follow the instructions below.
1. Install `black` in your environment by running in the terminal:
```
pip install black
```
2. Open the settings via the gear icon in the bottom-left corner and select `Settings`, or press `Ctrl` + `,`.
3. Find the `Format On Save` setting and check the box.
4. Find the `Python Formatting Provider` setting and select "black" in the drop-down list.
5. Your code should now auto-format on save.



            
