azuremlconstructor

Name: azuremlconstructor
Version: 0.0.4
Summary: AML Pipeline Constructor
Upload time: 2023-08-30 18:30:58
Requires Python: >=3.9,<3.11
Keywords: azure, machine learning, aml, pipeline
# aml-constructor

## Azure Machine Learning Pipeline Constructor

`aml-constructor`, or `azuremlconstructor` for short, allows you to create an Azure Machine Learning (AML) [Pipeline](https://learn.microsoft.com/en-us/azure/machine-learning/concept-ml-pipelines?view=azureml-api-2). `azuremlconstructor` is based on the [Azure Machine Learning SDK](https://learn.microsoft.com/en-us/azure/machine-learning/v1/how-to-create-machine-learning-pipelines?view=azureml-api-1&preserve-view=true) and implements the main operations of pipeline creation. You can create pipelines with AML steps, which can take DataInputs.
In `azuremlconstructor`, pipeline creation consists of three steps:

### 0. Preparation

It's highly recommended to create a separate folder for your pipeline projects, as well as a virtual environment (venv) - see this [article on RealPython](https://realpython.com/python-virtual-environments-a-primer/). You can create a separate venv for future AML projects. This is especially useful if you work with different kinds of libraries: data-science oriented, web, and so on.

### 1. Pipeline initialisation

This is similar to project initialisation. You choose the pipeline name, the directory and the credential `.env` file. For storing credentials, `azuremlconstructor` has a dotenv storage - **EnvBank**. Initialise a pipeline as:

```bash
python -m azuremlconstructor init [path] -n myfirstpipe -e denv_name
```

Here `-n` sets the pipeline name, `path` is the directory in which the pipeline will be created (`.` by default), and `-e` is the dotenv name. Denvs are covered a little later. After this, a directory named after the pipeline will be created inside the passed directory:

```directory
myfirstpipe
---|settings/
------|settings.py
------|.amlignore
------|.env
------|conda_dependencies.yml
```

Inside it there is a `settings` directory which contains the `settings.py`, `.amlignore`, `.env` and `conda_dependencies.yml` files. `conda_dependencies.yml` will be used for environment creation on the AML side. `.amlignore` is something like `.gitignore`, but for AML. `.env` is the file form of our EnvBank instance. `-e` is optional; if it's skipped, a `.env` template with the necessary fields will be created, which you have to fill in before *running* the pipeline.

**`settings.py`**:

This module contains all the necessary configuration:

 ```python
from azuremlconstructor.input import FileInputSchema, PathInputSchema
from azuremlconstructor.core import StepSchema

# --------------------------| Module Names |----------------------------
AML_MODULE_NAME: str =       'aml'
SCRIPT_MODULE_NAME: str =    'script'
DATALOADER_MODULE_NAME: str = 'data_loader'



# ---------------------------| General |---------------------------------

NAME = "{{pipe_name}}"
DESCRIPTION = "Your pipeline description"


# ---------------------------| DataInputs |-------------------------------

file = FileInputSchema(
                        name='name', 
                        datastore_name='datastore', 
                        path_on_datastore='', 
                        files = ['file.ext'], 
                        data_reference_name = ''
    )

path = PathInputSchema(
                        name='name', 
                        datastore_name='datastore', 
                        path_on_datastore='',
                        data_reference_name=''
    )
# ---------------------------| Steps |---------------------------------
step1 = StepSchema(
                        name='step_name', 
                        compute_target='compute_name', 
                        input_data=[file, path], 
                        allow_reuse=False
            )
STEPS = [step1, ]

# ---------------------------| extra |---------------------------------

# 'submit' option will apply if set `is_active = True`

EXTRA = {
            'continue_on_step_failure': False,
            'submit': {'is_active': False, 'experiment_name': 'DebugPipeline', 'job_name': NAME, 'tags': None, 'kwargs': None}
}
 ```

Let's look at the variables we have here.

`AML_MODULE_NAME` - initially, a pipeline project has three main scripts: `dataloader.py`, which loads all the DataInputs into the pipeline; `aml.py`, the main script of the pipeline, into which the loaded data inputs are imported automatically; and `script.py`, an empty script where you implement your own logic. You are free to remove this module or add as many as you need; however, the entry point of the project is `aml.py`. `AML_MODULE_NAME` is the name of the `aml.py` module, and the same applies to `DATALOADER_MODULE_NAME` and `SCRIPT_MODULE_NAME`.
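
For illustration only, here is a hypothetical split between `script.py` and `aml.py`; the generated wiring and the names used in your project may differ:

```python
# script.py - a hypothetical example of custom logic kept outside the entry point
import pandas as pd


def add_total_column(df: pd.DataFrame) -> pd.DataFrame:
    """Toy transformation: derive a 'total' column from 'price' and 'qty'."""
    out = df.copy()
    out["total"] = out["price"] * out["qty"]
    return out


# aml.py (the entry point) would then import and call it, roughly:
#     from script import add_total_column
#     result = add_total_column(some_loaded_input)
if __name__ == "__main__":
    sample = pd.DataFrame({"price": [10.0, 2.5], "qty": [3, 4]})
    print(add_total_column(sample))
```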

`NAME` - name of your pipeline.

`DESCRIPTION` - description of the pipeline.

`PathInputSchema` and `FileInputSchema` describe the DataInputs of your pipeline. You create instances of these classes and pass them into the `StepSchema` class. Each `StepSchema` is an abstraction over [`PythonScriptStep`](https://learn.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep?view=azure-ml-py). All steps must be listed in the `STEPS` list.
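
As an illustration (all names and values below are placeholders, not generated by the tool), a `settings.py` with two steps, each with its own inputs, might look like this and would yield two step directories similar to the tree shown in the next section:

```python
# hypothetical settings.py fragment with two steps; names and datastores are placeholders
from azuremlconstructor.input import FileInputSchema, PathInputSchema
from azuremlconstructor.core import StepSchema

raw_sales = FileInputSchema(
    name='raw_sales',
    datastore_name='workspaceblobstore',
    path_on_datastore='sales/2023',
    files=['sales.csv'],
    data_reference_name='',
)

model_dir = PathInputSchema(
    name='model_dir',
    datastore_name='workspaceblobstore',
    path_on_datastore='models/latest',
    data_reference_name='',
)

prepare = StepSchema(
    name='prepare_data',
    compute_target='cpu-cluster',
    input_data=[raw_sales],
    allow_reuse=False,
)

score = StepSchema(
    name='score',
    compute_target='cpu-cluster',
    input_data=[model_dir],
    allow_reuse=False,
)

STEPS = [prepare, score]
```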

#### `EXTRA` options

These are additional options that can be helpful.

- `continue_on_step_failure` - indicates whether to continue executing the other steps in the PipelineRun if a step fails; the default is `False`. If `True`, only steps that have no dependency on the output of the failed step will continue execution.

- `submit` - submission options. The pipeline will be submitted if `is_active` is `True` (see the sketch below).
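
A minimal sketch of `EXTRA` with submission enabled, based on the template above (the tag value is just an illustration):

```python
# sketch: EXTRA configured to submit the pipeline after publishing
NAME = "myfirstpipe"  # normally defined earlier in settings.py

EXTRA = {
    'continue_on_step_failure': False,
    'submit': {
        'is_active': True,                    # enable submission
        'experiment_name': 'DebugPipeline',   # experiment to submit under
        'job_name': NAME,
        'tags': {'stage': 'debug'},           # hypothetical tag
        'kwargs': None,
    },
}
```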

After filling in the settings, you have to apply them.

### 2. **Apply** Settings

```bash
python -m azuremlconstructor apply <path_to_pipeline>
```

Applying a pipeline means creating the project structure based on the `settings.py` module. A directory will be created inside the pipeline directory for each step, and each step directory will contain `aml.py`, `dataloader.py` and `script.py`.

After applying, your project structure will be like this:

```directory
myfirstpipe
---|settings/
------| settings.py
------| .amlignore
------| .env
------| conda_dependencies.yml
---| step_name/
------| dataloader.py
------| aml.py
------| script.py
---| step2_name/
------| dataloader.py
------| aml.py
------| script.py
```

**Note**: the names of the modules are set in the `settings.py` module.

### 3. **Run** Pipeline

```bash
python -m azuremlconstructor run <path_to_pipeline>
```

This command will publish your pipeline to your AML workspace. Additionally, it can submit the pipeline, according to the `EXTRA.submit` option.

## EnvBank

To work with an AML pipeline you have to use your credentials: `workspace_name`, `resource_group`, `subscription_id`, `build_id`, `environment_name` and `tenant_id`. In `azuremlconstructor` these variables are stored as instances of `EnvBank`, an encrypted JSON-like file. You can create, retrieve or remove `EnvBank` instances (referred to below as `denv`). For this purpose, use the `denv` command.

### **Create denv**

You can create a denv in two ways: by passing the path of an existing `.env` file, or in interactive mode via the terminal. In the first case:

```bash
python -m azuremlconstructor denv create -p <path_to_.env file> -n <new_name>
```

Then you'll type a new password twice for encryption. After that, the denv is saved into local storage and you will be able to use it for future pipeline creation.

To create a denv in interactive mode, pass the `-i` or `--interactive` argument:

```bash
python -m azuremlconstructor denv create -i
```

After that you have to fill in each requested field and set a password.

### Get denv

To retrieve a denv, use:

```bash
python -m azuremlconstructor denv get -n <name_of_denv>
```

To list all existing denv names, add the `--all` argument:

```bash
python -m azuremlconstructor denv get --all
```

**Note**: *to view a denv, you have to type its password*.

### Remove denv

To remove a denv:

```bash
python -m azuremlconstructor denv rm -n <name_of_denv>
```

## DataInputs

DataInputs can be files or paths from an [AML Datastore](https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py). Behind the scenes, the whole process creates a [DataReference](https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.data.data_reference.datareference?view=azure-ml-py) object. All inputs are loaded in `dataloader.py` and imported into the `aml.py` module. Let's look at the `azuremlconstructor` DataInputs.

### PathInputSchema

Allows you to create a data reference link to any directory inside the datastore. The class looks like this:

```python
class PathInputSchema:
    name: str
    datastore_name: str
    path_on_datastore: str
    data_reference_name: str
```

Where `name` is the name of your PathInput; this name will be used as the variable name for importing. `datastore_name` is the Datastore name, and `path_on_datastore` is the target path relative to the Datastore. `data_reference_name` is the data reference name for the `DataReference` class; it is optional, and if empty, `name` will be used.
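
For example, a filled-in `PathInputSchema` might look like this (datastore and path values are placeholders); with an empty `data_reference_name`, `name` would be used:

```python
from azuremlconstructor.input import PathInputSchema

# hypothetical values: a reference to a folder on the default blob datastore
reports_dir = PathInputSchema(
    name='reports_dir',                         # also the variable name used for importing
    datastore_name='workspaceblobstore',
    path_on_datastore='analytics/reports/2023',
    data_reference_name='',                     # empty: 'reports_dir' will be used
)
```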

### FileInputSchema

Allows you to mount files from a Datastore. Behind the scenes it is very similar to PathInput, but with *file-oriented* additions.

```python
class FileInputSchema:
    name: str
    datastore_name: str
    path_on_datastore: str
    data_reference_name: str
    files: List[str]
```

The first four fields are the same as above. `files` - the file or files to be mounted from the Datastore. If you want a single file, pass it as a string; for several files, pass a list of strings. File inputs are assigned to variable names generated from the file names themselves. You can also use *dict notation* for `FileInputSchema.files`, which allows you to pass `{'file_name.extension': 'variable_name', 'file_name2.extension': 'variable_name2', ...}` to map files to the variable names to use. Remember that variable names must be unique within the scope of a step. When you pass multiple filenames, they must be on the same path.
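
For instance, the dict notation might be used like this (file and variable names are hypothetical; both files must live on the same `path_on_datastore`):

```python
from azuremlconstructor.input import FileInputSchema

# hypothetical example: two files on the same datastore path, mapped to explicit variable names
sales_inputs = FileInputSchema(
    name='sales_inputs',
    datastore_name='workspaceblobstore',
    path_on_datastore='sales/2023',
    data_reference_name='',
    files={
        'orders_2023.csv': 'orders',        # imported as variable `orders`
        'returns_2023.csv': 'returns_df',   # imported as variable `returns_df`
    },
)
```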

**Supported file types**: `azuremlconstructor` uses the `pandas.read_...` methods to read the mounted files. At the moment, the supported file types are:

```directory
csv, parquet, excel sheet, json
```

Slugified file names will be used as variable names for importing files.
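
A rough illustration of the idea, not the library's actual code: the file extension would select a `pandas.read_...` function, and a slugified file name would become the import variable name:

```python
import re
import pandas as pd

# assumed mapping from extension to pandas reader; the real tool may differ
READERS = {
    '.csv': pd.read_csv,
    '.parquet': pd.read_parquet,
    '.xlsx': pd.read_excel,
    '.json': pd.read_json,
}


def slugify(filename: str) -> str:
    """Turn 'Sales Report-2023.csv' into 'sales_report_2023' (illustrative only)."""
    stem = filename.rsplit('.', 1)[0]
    return re.sub(r'[^0-9a-zA-Z]+', '_', stem).strip('_').lower()


print(slugify('Sales Report-2023.csv'))  # -> sales_report_2023
```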

## Other commands

### Update

You can update the project according to `settings.py`. Updates will affect the whole project if `--overwrite` is passed. Otherwise, the user has to choose what to do with each already-existing module: `overwrite`, `skip` or `cancel` the update. This is useful when you have made some changes to `settings.py` and don't want to rebuild the whole pipeline structure from scratch; in this case you can use `update`:

```bash
python -m azuremlconstructor update <path_to_pipe> --overwrite [Optional]
```

### Rename

```bash
python -m azuremlconstructor rename <path_to_pipe> -n <new_name>
```

Renames the pipeline to `new_name`. Renaming a pipeline means renaming the pipeline project directory, changing the `NAME` variable in `settings.py` and editing `ENVIRONMENT_FILE` in the `.env` file.

### Some useful utils

The `azuremlconstructor.utils` module has a bunch of useful helpers (a short usage sketch follows the list):

- `utils.upload_data(datastore_name: str, files: List[str], target_path: str=".")` - uploads file(s) to blob storage;
- *recursive read_concat* functions: `utils.read_concat_csvfiles(files: List[str], return_types: bool=False, sep: str = ',')`, `utils.read_concat_parquet(files: List[str], return_types: bool=False, engine: Literal['fastparquet', 'pyarrow'] = 'fastparquet')` and `utils.recursive_glob_list(folders: List[str], file_ext: str='parquet')`. Each function has a docstring.
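
A short usage sketch of these helpers, based only on the signatures listed above (datastore names and paths are placeholders):

```python
from azuremlconstructor import utils

# upload two local files to a datastore path (names are placeholders)
utils.upload_data(
    datastore_name='workspaceblobstore',
    files=['data/orders.csv', 'data/returns.csv'],
    target_path='sales/2023',
)

# collect parquet files recursively and read them into one DataFrame
parquet_files = utils.recursive_glob_list(folders=['exports/2023'], file_ext='parquet')
df = utils.read_concat_parquet(parquet_files, engine='fastparquet')
```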

            
