nemo-bo

Name: nemo-bo
Version: 0.1.16
Summary: Multi-objective optimization of chemical processes with automated machine learning workflows
Home page: https://github.com/sustainable-processes/nemo
Author: Simon Sung
Requires Python: >=3.9.0,<=3.9.13
License: MIT
Keywords: machine-learning, bayesian-optimization, multi-objective-optimization
Upload time: 2023-04-24 14:45:40

# Nomadic Exploratory Multi-objective Optimisation (NEMO)

Meet NEMO - our ‘Nomadic Explorer’. NEMO is quite the connoisseur when it comes to machine learning optimisation - only the best model types and model parameters will suffice. In the case of a single dataset, NEMO will scour the lands for the optimal model type and parameters to fit a given dataset. A range of outputs will then be generated for you to assess, interpret and utilise your newly created model.

If you decide to take your analyses a step further into the realms of multi-objective Bayesian optimisation, then our Nomadic Explorer will tirelessly search for the best model type and parameters at each iteration of the optimisation. At each stage, the optimal set of conditions will be provided to aid your pursuit of the elusive multi-dimensional Pareto front.

NEMO is prepared for the journey with a cavernous bag of tools. However, if your aspirations are more exotic, then NEMO supports the inclusion of custom models, samplers and functions.

Check out the examples to see NEMO in action and get started with your own ML workflows.

## What is NEMO?

NEMO is a package designed for Bayesian optimisation of one or multiple objectives simultaneously, with a focus on application to chemical processes.

## Installation
NEMO requires Python >=3.9.0,<=3.9.13. To install NEMO via pip:

`pip install nemo-bo`


## How does NEMO work?

First, the parameters (variables) and targets (objectives) of a chemical process are provided to the algorithm. Given
a dataset from prior experiments, NEMO identifies the relationship between the parameters and targets
and then suggests the best parameters to use for the next optimisation iteration.

In contrast to other open-source optimisation libraries, NEMO automatically optimises the hyperparameters of
various machine learning models and selects the one with the best predictive accuracy for a given objective. This
ensures that the model is continuously improved over the course of an optimisation campaign. Furthermore, NEMO
natively supports calculable objectives, where the exact relationship between the parameters and the target
(e.g. materials cost) is known.

## What features are in NEMO?
NEMO includes many machine learning models, acquisition functions, constraints, and sample generators, and the
base classes for all of these are also included, so they can be used as templates for adding your own custom solutions.

The features natively found in NEMO are the following:

### Resuming an optimisation
At every iteration, optimisation progress and ML model information are saved at two points:
1. First, when candidates have been suggested
2. Second, after the new results have been entered, at the end of the iteration

This allows users to resume optimisation runs from either of these two convenient positions.

### Machine learning models available
1. Gaussian processes (GPs) using the [BoTorch](https://github.com/pytorch/botorch) library

2. Various neural networks from the [Deeply Uncertain code repository](https://github.com/deepskies/DeeplyUncertain-Public):
   1. Bayesian neural networks
   2. Concrete dropout
   3. Deep ensembles

3. Various decision-tree based models:
   1. [XGBoost Distribution](https://github.com/CDonnerer/xgboost-distribution)
   2. [NGBoost](https://github.com/stanfordmlgroup/ngboost)
   3. Random Forest using the [forest-confidence-interval](https://github.com/scikit-learn-contrib/forest-confidence-interval) library

### Variable types available
1. Continuous variables (`ContinuousVariable`)
2. Categorical variables with discrete values (`CategoricalVariableDiscreteValues`)
3. Categorical variables with descriptors (`CategoricalVariableWithDescriptors`)

Categorical variables without any descriptors (e.g. plain one-hot encoding) are not currently supported.
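
For illustration, defining a mix of variable types might look like the sketch below. The `ContinuousVariable` usage matches the Getting started example later in this README; the `CategoricalVariableWithDescriptors` import path (assumed to be the same module) and its argument names are assumptions for illustration, not the confirmed API.

```python
from nemo_bo.opt.variables import (
    ContinuousVariable,
    CategoricalVariableWithDescriptors,  # module path assumed
    VariablesList,
)

# Continuous variable, using the constructor shown in the Getting started example
temperature = ContinuousVariable(name="temperature", lower_bound=30.0, upper_bound=70.0)

# Hypothetical categorical variable with descriptors: the argument names below
# (levels, descriptor_names, descriptor_values) are illustrative assumptions
solvent = CategoricalVariableWithDescriptors(
    name="solvent",
    levels=["MeOH", "EtOH", "IPA"],
    descriptor_names=["dielectric_constant", "boiling_point"],
    descriptor_values=[[32.7, 64.7], [24.5, 78.4], [17.9, 82.6]],
)

var_list = VariablesList([temperature, solvent])
```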

### Objective types available
1. Objectives modelled using machine learning models (`RegressionObjective`)
2. Calculated objectives using a user-provided function (`CalculableObjective`)

Classification objectives are not currently supported.
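
As a rough sketch, pairing a modelled objective with a calculable one might look like this. The `RegressionObjective` constructor and the `nemo_bo.opt.objectives` module match the Getting started example below; the `CalculableObjective` import path and its `function` keyword (and the call signature of that function) are assumptions for illustration.

```python
import numpy as np
from nemo_bo.opt.objectives import RegressionObjective, ObjectivesList
from nemo_bo.opt.objectives import CalculableObjective  # module path assumed

# Modelled objective, using the constructor shown in the Getting started example
yield_obj = RegressionObjective(
    name="yield",
    obj_max_bool=True,
    lower_bound=0.0,
    upper_bound=100.0,
    predictor_type=["gp", "xgb"],
)

# Hypothetical calculable objective: the cost is known exactly from the inputs.
# The `function` keyword and the X -> cost call signature are assumptions
def materials_cost(X: np.ndarray) -> np.ndarray:
    return 1.75 * X[:, 0]  # e.g. cost proportional to the amount of reagent 1

cost_obj = CalculableObjective(
    name="materials_cost",
    obj_max_bool=False,
    function=materials_cost,
)

obj_list = ObjectivesList([yield_obj, cost_obj])
```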

### User-selectable acquisition functions available
1. Expected improvement based methods (`ExpectedImprovement`)
   1. A modified single-objective expected improvement algorithm that is better at exploration than the standard analytical method
   2. A modified multi-objective expected hypervolume improvement algorithm that is better at exploration than the standard analytical method
   3. [qNEI](https://botorch.org/api/_modules/botorch/acquisition/monte_carlo.html#qNoisyExpectedImprovement) and [qNEHVI](https://botorch.org/api/_modules/botorch/acquisition/multi_objective/monte_carlo.html#qNoisyExpectedHypervolumeImprovement) BoTorch methods (only compatible with GP models)

2. A method based on the unified evolutionary optimisation algorithm [U-NSGA-III](https://pymoo.org/algorithms/moo/unsga3.html) that derives uncertainty in the inference by sampling from a distribution (`NSGAImprovement`)
3. A fully explorative method that identifies the candidates that have the highest uncertainty in the objective predictions (`HighestUncertainty`)
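
The `ExpectedImprovement` import path and constructor below are taken from the Getting started example later in this README; the import paths for `NSGAImprovement` and `HighestUncertainty` are not shown there, so any guess at them would be an assumption.

```python
from nemo_bo.acquisition_functions.expected_improvement.expected_improvement import ExpectedImprovement

# num_candidates sets how many parameter sets are suggested per iteration
acq_func = ExpectedImprovement(num_candidates=4)
```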

### Input constraints available
1. Linear equality and inequality constraints (`LinearConstraint`)
2. Basic non-linear equality and inequality constraints that incorporate an exponent for each input variable (`NonLinearPowerConstraint`)
3. Equality and inequality constraints that allow the user to pass a function to calculate the left-hand side of the constraint (`FunctionalConstraint`)
4. Stoichiometry constraints that force the ratio between two input variables to be equal to or greater than a specified value (`StoichiometricConstraint`)
5. A constraint type to limit the number of active variables (`MaxActiveFeaturesConstraint`)
6. A constraint type that prevents certain categorical variable values from being selected simultaneously (`CategoricalConstraint`)
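
As a hedged illustration only: a linear inequality such as variable1 + variable2 <= 12 might be expressed along the lines below. This README does not document the `LinearConstraint` signature, so the module path and every constructor argument shown here are assumptions.

```python
# Module path and all argument names below are assumptions for illustration
from nemo_bo.opt.constraints import LinearConstraint

# Hypothetical: enforce 1.0 * variable1 + 1.0 * variable2 <= 12.0
constraint = LinearConstraint(
    variables=["variable1", "variable2"],
    coefficients=[1.0, 1.0],
    rhs=12.0,
    constraint_type="inequality",
)
```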

### Benchmarking functionality available
Benchmark functions are typically used to simulate the outcomes of experiments in a closed-loop manner, so the
user is not prompted to input the actual output values of suggested candidates. They can therefore be helpful for
evaluating the quality of an optimisation (inferred from how effectively the utilised model(s) and/or acquisition
function identify the optimum).

1. Machine learning model based on a provided dataset (`ModelBenchmark`)
2. Single objective synthetic functions (`SingleObjectiveSyntheticBenchmark`)
3. Multi-objective synthetic functions (`MultiObjectiveSyntheticBenchmark`)

### Sample generators available
Methods for generating samples of parameter values during an optimisation. These can also be used independently, outside of an optimisation, by calling the `generate_samples` function (see the sketch after this list).

1. Latin hypercube sampling (with a mixed-integer implementation for efficient sampling of categorical variables) (`LatinHyperCubeSampling`)
2. Sobol sampling (`SobolSampling`)
3. Polytope sampling (`PolytopeSampling`)
4. Random sampling (`RandomSampling`)
5. Pool-based sampling using a user-defined set of data points. Typically used as an alternative to a machine learning model benchmark function (`PoolBased`)
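
A minimal sketch of standalone sampling: the `LatinHyperCubeSampling` import and the `generate_samples` name come from this README, but whether `generate_samples` is a method of the sampler and which parameters it takes are assumptions for illustration.

```python
from nemo_bo.opt.samplers import LatinHyperCubeSampling
from nemo_bo.opt.variables import ContinuousVariable, VariablesList

var_list = VariablesList(
    [
        ContinuousVariable(name="variable1", lower_bound=1.0, upper_bound=10.0),
        ContinuousVariable(name="variable2", lower_bound=0.02, upper_bound=0.2),
    ]
)

sampler = LatinHyperCubeSampling()
# `generate_samples` is named in this README; calling it as a sampler method
# with these parameters (number of points, variables list) is assumed here
samples = sampler.generate_samples(num_new_points=16, variables=var_list)
```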

### Other utilities/functions available
1. An included template for providing the dataset, with automated extraction
2. Scatter and bar chart plotting functionality for displaying model quality and optimisation progress


## Getting started
The following code demonstrates how to set up a simple Bayesian optimisation using a user-provided dataset containing four continuous variables (X) and two objectives (Y):

```python
# Import the variable, objectives, sampler, acquisition function, and the optimisation classes
import numpy as np
from nemo_bo.opt.variables import ContinuousVariable, VariablesList
from nemo_bo.opt.objectives import RegressionObjective, ObjectivesList
from nemo_bo.acquisition_functions.expected_improvement.expected_improvement import ExpectedImprovement
from nemo_bo.opt.samplers import LatinHyperCubeSampling
from nemo_bo.opt.optimisation import Optimisation

# Create the variable objects
var1 = ContinuousVariable(name="variable1", lower_bound=1.0, upper_bound=10.0)
var2 = ContinuousVariable(name="variable2", lower_bound=0.02, upper_bound=0.2)
var3 = ContinuousVariable(name="variable3", lower_bound=30.0, upper_bound=70.0)
var4 = ContinuousVariable(name="variable4", lower_bound=5.0, upper_bound=15.0)
var_list = VariablesList([var1, var2, var3, var4])

# Create the objective objects
obj1 = RegressionObjective(
    name="objective1",
    obj_max_bool=True,  # obj_max_bool=True means the objective is to be maximised
    lower_bound=0.0,
    upper_bound=100.0,
    predictor_type=["gp", "xgb"],
)
obj2 = RegressionObjective(
    name="objective2",
    obj_max_bool=False,  # obj_max_bool=False means the objective is to be minimised
    lower_bound=0.01,
    upper_bound=0.15,
    predictor_type=["gp", "xgb"],
)
obj_list = ObjectivesList([obj1, obj2])

# Instantiate the sampler
sampler = LatinHyperCubeSampling()

# Instantiate the acquisition function
acq_func = ExpectedImprovement(num_candidates=4) # num_candidates defines how many sets of parameters to return at each optimisation iteration

# Set up the optimisation instance
# opt_name is used to store the optimisation information in a sub-folder with this name
optimisation = Optimisation(var_list, obj_list, acq_func, sampler=sampler, opt_name="README optimisation")

# Start the optimisation using the convenient run function that will run for the specified number of iterations
# X and Y arrays represent an initial user-provided dataset
X = np.array(
    [
        [6.82, 0.16, 34, 6.2],
        [6.15, 0.08, 47, 8.5],
        [4.92, 0.05, 32, 11.1],
        [9.24, 0.15, 41, 12.1],
        [1.07, 0.12, 67, 8.2],
        [5.66, 0.09, 53, 12.7],
        [8.08, 0.19, 54, 5.4],
        [1.87, 0.11, 68, 9.2],
        [4.08, 0.13, 58, 10.4],
        [4.38, 0.18, 36, 14.6],
    ]
)
Y = np.array(
    [
        [33.31, 0.12],
        [41.89, 0.10],
        [36.87, 0.09],
        [46.32, 0.13],
        [0.00, 0.09],
        [36.52, 0.10],
        [45.77, 0.14],
        [0.00, 0.09],
        [30.95, 0.11],
        [34.89, 0.12],
    ]
)
optimisation_data = optimisation.run(X, Y, number_of_iterations=10)

# During the optimisation, after candidates have been suggested, the user will be prompted to enter the actual output
# values into the Python console. At this point, the model information, optimisation progress, and candidates have been
# saved, so the user can either leave the Python console open whilst they obtain the results, or they can
# stop the Python process and resume the optimisation to enter the values at a more convenient time later

# After the actual output values have been entered, the optimisation run is saved again, and the next
# iteration starts automatically

```

## More tutorials
We encourage you to look through the tutorials in the `tutorials` folder to see how to use some other
NEMO functions:
1. How to select specific machine learning model types for the objectives
2. Setting up a single objective optimisation
3. How to use calculable objectives
4. How to define transformers for variables and objectives
5. How to define categorical variables with descriptors
6. Utilising the machine learning model fitting in NEMO without Bayesian optimisation
7. How to create a closed-loop optimisation using a machine learning model as the benchmark function
8. How to create a closed-loop optimisation using a multi-objective synthetic function as the benchmark function
9. How to create a closed-loop optimisation using a single objective synthetic function as the benchmark function
10. How to create a closed-loop optimisation using a pool-based sampler as the benchmark
11. Setting up an optimisation with input constraints
12. Generating samples without needing to perform an optimisation
13. How to set up a manual optimisation
14. How to resume an optimisation run
15. How to use the BoTorch (quasi-) Monte-Carlo based acquisition functions in NEMO
16. How to set up an optimisation that uses U-NSGA-III as the acquisition function
17. Using the Excel input template file to import the variables and objectives data
18. How to set up an optimisation that uses the highest uncertainty acquisition function


## What to do if you find any issues?
Leave a message in the issues section and we will get back to you as soon as we can.


## Acknowledgements
Much of the functionality in NEMO is built on top of the work of the authors of the features we incorporate. We are grateful
to them for continuously supporting their libraries and establishing their platforms for optimisation work. We reference
their work throughout the .py files.


            
