blackdynamite 1.0.8

- Summary: Scientific Parametric Study Tool
- Author: Guillaume Anciaux
- Homepage: https://gitlab.com/ganciaux/blackdynamite
- License: GPL
- Requires Python: >=3.8
- Released: 2024-04-12
            <img width="30%" style="display: block; margin-left: auto; margin-right: auto;" src=https://www.nicepng.com/png/detail/180-1803537_177kib-900x900-black-dynamite-black-dynamite.png>

# Quick start

## Documentation

The complete documentation of the project is available on [readthedocs](https://blackdynamite.readthedocs.io/en/latest/).

## Installation

The easiest way to install is through pip:

```bash
pip install blackdynamite
```

For a user scope installation (recommended):

```bash
pip install --user blackdynamite
```

Or directly from the GitLab repository:

```bash
pip install git+https://gitlab.com/ganciaux/blackdynamite.git
```


## Getting the sources

You can clone the Git repository:

```bash
git clone https://gitlab.com/ganciaux/blackdynamite.git
```
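
If you plan to work on the cloned sources, a local (optionally editable) installation is a common approach; this is plain pip usage, not specific to BlackDynamite:

```bash
cd blackdynamite
pip install --user -e .
```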

## Installing completion

To benefit from autocompletion for **BlackDynamite**, you need to
activate global completion as described on the argcomplete website:
[Howto activate global completion](https://kislyuk.github.io/argcomplete/#activating-global-completion).
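
With recent argcomplete versions, global activation typically boils down to running the following command (see the argcomplete documentation linked above for the authoritative instructions):

```bash
activate-global-python-argcomplete
```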

## Introduction and philosophy

**BlackDynamite** is a tool to help
manage parametric studies. In detail, it comprises:

1) Launching a program repeatedly with varying parameters, to explore the
  chosen parametric space.

2) Collecting and sorting results of **small size**, benefiting
  from the power of modern databases.

3) Analyzing the results by making requests to the associated database.


**Launching** is made simple by allowing any executable
to be launched. The set of directories is generated and managed
by BlackDynamite to prevent errors. Requests of any kind are then
made to the underlying database through friendly BlackDynamite commands.

**Collecting** the results is possible thanks to the BlackDynamite C/C++ and Python
APIs, which let you send results directly to the database and thus have them automatically sorted. This is extremely useful. However, heavy data such as Paraview files, or any other large output, should not be pushed to the database, for obvious performance reasons.

**Analysis** of the results is made easy by BlackDynamite, which
can retrieve data in the form of NumPy arrays to be used, analyzed or plotted
with the powerful and vast Python libraries such as Matplotlib and SciPy.

The construction of a **BlackDynamite** parametric study follows these steps (a command-line summary is given right after this list):

- Describing the parametric space
- Creating jobs (specific points in the parametric space)
- Creating runs (instances of the jobs)
- Launching runs
- Instrumenting the simulation to send results
- Analyzing the results
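
In terms of the command-line tool presented in the next sections, these steps translate into a sequence roughly like the following (every command is detailed later; the `ekin` quantity is the one used in the plotting example below):

```bash
canYouDigIt init --truerun            # create the database for the study
canYouDigIt jobs create --truerun     # insert the jobs (points of the parametric space)
canYouDigIt runs create --truerun     # attach runs to the jobs
canYouDigIt runs launch --truerun     # execute the runs
canYouDigIt runs info                 # check their state
canYouDigIt runs plot --quantity ekin # explore the collected results
```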

# Setting up a parametric study

The parametrization of a study is done in a YAML file, named
[bd.yaml](https://gitlab.com/ganciaux/blackdynamite/-/blob/master/example/bd.yaml).
It contains the description of the parametric space, the exploration to span, and the
configuration of your simulations. An example of a working study is provided in
the [example](https://gitlab.com/ganciaux/blackdynamite/-/tree/master/example) directory.

A study description starts with a name, given in YAML format:

```yaml
---

study: bd_study
```

## Choose the parameters of the study

### Job description

The first thing to do is to list all the parameters characterizing
a specific case. These parameters can
be of simple scalar types (e.g. strings, integers, floats); no
vectorial quantity can be used as an input parameter.
Together they describe the `Job` pattern of the study.
This must be defined in a section of the
[bd.yaml](https://gitlab.com/ganciaux/blackdynamite/-/blob/master/example/bd.yaml) file.
For instance, a three-parameter space can be declared as:

```yaml
job:
  param1: float
  param2: float
  param3: str

```

By default, one more entry is added to every job: its unique `id`.


### Run description

Aside from the jobs, a run represents a particular realization (computation)
of a job. For instance, the run will contain information on the machine
it was run on, the executable version, or the number of processors employed.
The run pattern can be created with:

```yaml
run:
  compiler: str
```

By default, the following entries are created for the user:

- id: the id of the run
- machine_name: the name of the machine where the run must be executed
- nproc: the number of processors used to perform the computation (default: 1)
- run_path: the directory where the run will be created and launched
- job_id (integer): the id of the job being run
- state (str): the current state of the run (`CREATED`, `FINISHED`, `ERROR`)
- run_name (str): the name of the run (usually a name is given to a collection of runs at creation)
- start_time (datetime): the time when the run started
- last_step_time (datetime): the last time a quantity was pushed to the database


## Create the database

Then you have to request the creation of the database, which can be done
with a simple command:
```bash
canYouDigIt init --truerun
```

As mentioned, all BlackDynamite
scripts share the same parsing system, so whenever you need to launch
one of these commands you can ask for the list of valid keywords:
```bash
canYouDigIt init --help

usage: canYouDigIt [--study STUDY] [--host HOST] [--port PORT] [--user USER] [--password PASSWORD] [--bdconf BDCONF] [--truerun] [--constraints CONSTRAINTS]
                   [--binary_operator BINARY_OPERATOR] [--list_parameters] [--yes] [--logging] [--help]

createDB

General:
  --logging             Activate the file logging system (default: False)
  --help                Activate the file logging system (default: False)

BDParser:
  --study STUDY         Specify the study from the BlackDynamite database. This refers to the schemas in PostgreSQL language (default: None)
  --host HOST           Specify data base server address (default: None)
  --port PORT           Specify data base server port (default: None)
  --user USER           Specify user name to connect to data base server (default: tarantino)
  --password PASSWORD   Provides the password (default: None)
  --bdconf BDCONF       Path to a BlackDynamite file (*.bd) configuring current options (default: None)
  --truerun             Set this flag if you want to truly perform the action on base. If not set all action are mainly dryrun (default: False)
  --constraints CONSTRAINTS
                        This allows to constraint run/job selections by properties (default: None)
  --binary_operator BINARY_OPERATOR
                        Set the default binary operator to make requests to database (default: and)
  --list_parameters     Request to list the possible job/run parameters (default: False)
  --yes                 Answer all questions to yes (default: False)
```

An important point is that most of the actions are only applied
when the `--truerun` flag is set.

## Creating the jobs

The goal of a parametric study is to explore a subpart
of the parametric space. We need to create jobs, which are
the points to explore.

The desired set of jobs is described in the YAML file describing the study,
under the section `job_space`. For instance:

```yaml
job_space:
  param1: 10
  param2: [3.14, 1., 2.]
  param3: 'toto'
```

The actual insertion of jobs can be done with the command:

```bash
canYouDigIt jobs create --truerun
```

You can control the created jobs with:

```bash
canYouDigIt jobs info
```

In the case of our [example](https://gitlab.com/ganciaux/blackdynamite/-/tree/master/example), 3 jobs should be created,
since a list of values was provided for the second parameter.
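
The number of jobs follows from the Cartesian product of the values given in `job_space`. The following standalone Python snippet is only a conceptual illustration of that expansion (it is not BlackDynamite internals):

```python
import itertools

# The job_space from bd.yaml; scalars count as a single value.
job_space = {
    "param1": [10],
    "param2": [3.14, 1., 2.],
    "param3": ["toto"],
}

# Cartesian product of all parameter values -> one dict per job.
jobs = [dict(zip(job_space, values))
        for values in itertools.product(*job_space.values())]

print(len(jobs))  # 3
for job in jobs:
    print(job)
```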


## Creating the runs

At this point, the jobs are in the database. You now need to create runs,
which specify the conditions under which the jobs will be realized,
by giving the values of the run space.


This is specified in the YAML file under the section `run_space`. For instance:

```yaml
run_space:
  compiler: 'gcc'
```

The default run parameters (e.g. `state`) are then automatically
added for the entries not provided.

A run now specifies what action to perform to realize the job.
Therefore, one script must be provided as the entry point of each run execution.
This is given in the YAML file as the `exec_file`. For instance, in the
[example](https://gitlab.com/ganciaux/blackdynamite/-/tree/master/example)
a bash script is the entry point, provided as follows:

```yaml
exec_file: launch.sh
```

Usually, an end-user has scripts and configuration files
that they wish to attach to the run.
This can be done with:

```yaml
config_files:
  - config.txt
  - script.py
```

Finally, we have to create Run objects and attach them to jobs,
which is done with the command:

```bash
canYouDigIt runs create --truerun
```

After that, all created runs should be present in the database in the state
`CREATED`, ready to be launched. This can be checked with the command:

```bash
canYouDigIt runs info
```
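
At this point, all the pieces of the study description have been introduced. Assembled into a single `bd.yaml`, they would look like the sketch below (field names are taken from the sections above; refer to the [example](https://gitlab.com/ganciaux/blackdynamite/-/tree/master/example) directory for the authoritative file):

```yaml
---
study: bd_study

job:
  param1: float
  param2: float
  param3: str

run:
  compiler: str

job_space:
  param1: 10
  param2: [3.14, 1., 2.]
  param3: 'toto'

run_space:
  compiler: 'gcc'

exec_file: launch.sh

config_files:
  - config.txt
  - script.py
```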

## Instrumenting *Text* simulation files (e.g. a bash script)

`BlackDynamite` will replace specific text marks in the registered files
with the values of the particular job and run. A precise syntax is
expected for `BlackDynamite` to recognize a replacement to be performed.

For instance:

```
echo __BLACKDYNAMITE__param1__
```

will be replaced by the value of the `param1` parameter at run creation.

As an additional example, the script `launch.sh` taken 
from the [example](https://gitlab.com/ganciaux/blackdynamite/-/tree/master/example) has lines such as:

```
echo 'here is the job'
echo __BLACKDYNAMITE__id__
echo __BLACKDYNAMITE__param1__
echo __BLACKDYNAMITE__param2__
echo __BLACKDYNAMITE__param3__
```
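
For illustration, assuming a job with `id` 1 and the values `param1 = 10`, `param2 = 3.14`, `param3 = 'toto'` from the job space above, the instrumented script would expand to something like (exact value formatting may differ):

```
echo 'here is the job'
echo 1
echo 10
echo 3.14
echo toto
```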

## Instrumenting a *Python* simulation

In a Python program, one can use the `Python` API to
get handle objects on the current job and run.
This also allows pushing produced data to the database.
This is done with the simplified commands:

```python
# import BlackDynamite
import BlackDynamite as BD
# get the run from the current scope
myrun, myjob = BD.getRunFromScript()
```

In order to have time entries for runs, the functions `start` and `finish`
need to be called:

```python
myrun.start()
...
# your code
...
myrun.finish()
```

Finally, to push data directly to the database, one can use
`pushVectorQuantity` and/or `pushScalarQuantity`, attached to
measurable `quantities`:

```python
# pushing vector types (numpy)
myrun.pushVectorQuantity(vector, step, "quantity_id")
# pushing scalar types 
myrun.pushScalarQuantity(scalar, step, "quantity_id")
```
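
Putting the pieces together, a minimal instrumented simulation could look like the sketch below. Only the BlackDynamite calls shown above are used; the time loop, the computed values and the quantity names `ekin` and `field` are illustrative assumptions:

```python
import numpy as np
import BlackDynamite as BD

# Get handles on the current run and job.
myrun, myjob = BD.getRunFromScript()

myrun.start()                    # record the start time of the run
for step in range(10):           # illustrative time loop
    # ... your actual computation for this step ...
    ekin = 0.5 * step ** 2                   # hypothetical scalar result
    field = step * np.linspace(0., 1., 5)    # hypothetical vector result
    myrun.pushScalarQuantity(ekin, step, "ekin")
    myrun.pushVectorQuantity(field, step, "field")
myrun.finish()                   # record the end of the run
```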

## Executing the runs

Once the runs are created, they can be launched with a command like:

```bash
canYouDigIt runs launch --truerun
```

During and after the runs, the status can be checked, once again, with:

```bash
canYouDigIt runs info
```

For detailed information on a specific run:

```bash
canYouDigIt runs info --run_id RUN_ID_NUMBER
```

To be placed in the context of a specific run:

```bash
canYouDigIt runs info --run_id RUN_ID_NUMBER --enter
```

To execute a specific command:

```bash
canYouDigIt runs info --exec COMMAND
```

And to apply it to a specific run:

```bash
canYouDigIt runs info --run_id RUN_ID_NUMBER --exec COMMAND
```

# Manipulating the database
## Selecting jobs and runs

All the previous commands may be applied to a subset of runs/jobs.
To select them, one places constraints via the option `--constraint`.
For instance, listing the runs while constraining the parameters
labeled `param1` and `param2` can be done with:

```bash
canYouDigIt runs info --constraint 'param1 > 1, param2 = 2'
```

In the exceptional case where job and run parameters bear the same name
(you should avoid doing that), one can disambiguate with:


```bash
canYouDigIt runs info --constraint 'jobs.id > 1, runs.id = 2'
```

## Cleaning Runs

Sometimes it is necessary to re-launch a set of runs, or to delete some.
To reset runs, making them ready to be relaunched,
one should use the following:

```bash
canYouDigIt runs clean --constraint 'jobs.id > 1, runs.id = 2' --truerun
```

To completely delete them:

```bash
canYouDigIt runs clean --constraint 'jobs.id > 1, runs.id = 2' --delete --truerun
```

## Altering runs

Occasionally it can be necessary to manually alter a set of runs. For instance, changing the state of a set of runs can be done with:

```bash
canYouDigIt runs update --truerun state = toto
```

## Plotting results

To start exploring the collected data, and thus produce graphs,
the `plot` command can be employed. While tunable, it cannot produce every type of graph; however, for quick exploration of the data, one could do:

```bash
canYouDigIt runs plot --quantity ekin --legend "%r.id" --marker o
```

## Exporting the results (to zip file)

Under Construction

## Fetching the results

Under construction...




            
