# birm-nm-foo

**PyPI metadata**

- Name: birm-nm-foo
- Version: 1.5.0
- Summary: Neural Modules with Adaptive Nonlinear Constraints and Efficient Regularization
- Upload time: 2024-04-10 22:54:09
- Requires Python: >=3.9, <3.11
- License: BSD-2-Clause
- Keywords: deep learning, pytorch, linear models, dynamical systems, data-driven control
- Requirements: none recorded

<p align="center">
  <img src="figs/Neuromancer.png" width="250">  
</p>

# NeuroMANCER v1.5.0

**Neural Modules with Adaptive Nonlinear Constraints and Efficient Regularizations (NeuroMANCER)**
is an open-source differentiable programming (DP) library for solving parametric constrained optimization problems, 
physics-informed system identification, and parametric model-based optimal control.
NeuroMANCER is written in [PyTorch](https://pytorch.org/) and allows for systematic 
integration of machine learning with scientific computing for creating end-to-end 
differentiable models and algorithms embedded with prior knowledge and physics.

### ⭐ Now available on PyPI! ⭐
![Static Badge](https://img.shields.io/badge/pip_install-neuromancer-blue)
 ![PyPI - Version](https://img.shields.io/pypi/v/neuromancer)


### New in v1.5.0
![Lightning](https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white)
NeuroMANCER now supports integration with [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/), bringing:
* Simplified user workflows: zero boilerplate code and increased modularity
* The ability to easily define custom training logic
* Easy support for distributed GPU training
* Hyperparameter tuning with Weights & Biases

Please refer to the Lightning folder and its [README](examples/lightning_integration_examples/README.md).
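
The packaged integration and its trainer utilities live in the examples folder linked above. Purely to illustrate the idea, here is a minimal hand-rolled sketch that wraps a NeuroMANCER `Problem` in a generic `LightningModule`; the `problem` object and the `'train_loss'` output key are assumptions for illustration, not the library's documented API:

```python
# Illustrative sketch only: wrapping a NeuroMANCER Problem in PyTorch Lightning.
# The packaged integration lives in examples/lightning_integration_examples;
# `problem` and the 'train_loss' output key below are assumptions.
import torch
import lightning.pytorch as pl

class LitProblem(pl.LightningModule):
    def __init__(self, problem, lr=1e-3):
        super().__init__()
        self.problem = problem      # a neuromancer Problem (an nn.Module)
        self.lr = lr

    def training_step(self, batch, batch_idx):
        output = self.problem(batch)     # forward pass over a dictionary batch
        loss = output['train_loss']      # assumed aggregate loss key
        self.log('train_loss', loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.problem.parameters(), lr=self.lr)

# trainer = pl.Trainer(max_epochs=100, accelerator='auto')
# trainer.fit(LitProblem(problem), train_dataloaders=train_loader)
```

With such a module, distributed GPU training and logging become standard Lightning `Trainer` options.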

**New Colab Examples:**
> ⭐ [Various domain examples, such as system identification of building thermal dynamics, in NeuroMANCER](#domain-examples)

> ⭐ [PyTorch Lightning integration examples](#lightning-integration-examples)


## Features and Examples

An extensive set of tutorials can be found in the 
[examples](https://github.com/pnnl/neuromancer/tree/master/examples) folder.
Interactive notebook versions of examples are available on Google Colab!
Test out NeuroMANCER functionality before cloning the repository and setting up an
environment.

### Intro to NeuroMANCER

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/tutorials/part_1_linear_regression.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Part 1: Linear regression in PyTorch vs NeuroMANCER.  

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/tutorials/part_2_variable.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Part 2: NeuroMANCER syntax tutorial: variables, constraints, and objectives.  

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/tutorials/part_3_node.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Part 3: NeuroMANCER syntax tutorial: modules, Node, and System class.


### Parametric Programming

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/parametric_programming/Part_1_basics.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Part 1: Learning to solve a constrained optimization problem.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/parametric_programming/Part_2_pQP.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Part 2: Learning to solve a quadratically-constrained optimization problem.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/parametric_programming/Part_3_pNLP.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Part 3: Learning to solve a set of 2D constrained optimization problems.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/parametric_programming/Part_4_projectedGradient.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> 
Part 4: Learning to solve a constrained optimization problem with the projected gradient.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/parametric_programming/Part_5_cvxpy_layers.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> 
Part 5: Using Cvxpylayers for differentiable projection onto the polytopic feasible set.  


### Ordinary Differential Equations (ODEs)
+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/ODEs/Part_1_NODE.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 1: Neural Ordinary Differential Equations (NODEs)

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/ODEs/Part_2_param_estim_ODE.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 2: Parameter estimation of ODE system

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/ODEs/Part_3_UDE.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 3: Universal Differential Equations (UDEs)

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/ODEs/Part_4_nonauto_NODE.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 4: NODEs with exogenous inputs

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/ODEs/Part_5_nonauto_NSSM.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 5: Neural State Space Models (NSSMs) with exogenous inputs

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/ODEs/Part_6_NetworkODE.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 6: Data-driven modeling of resistance-capacitance (RC) network ODEs

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/ODEs/Part_7_DeepKoopman.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 7: Deep Koopman operator

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/ODEs/Part_8_nonauto_DeepKoopman.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 8: Control-oriented Deep Koopman operator


### Physics-Informed Neural Networks (PINNs) for Partial Differential Equations (PDEs)
+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/PDEs/Part_1_PINN_DiffusionEquation.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 1: Diffusion Equation
+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/PDEs/Part_2_PINN_BurgersEquation.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 2: Burgers' Equation
+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/PDEs/Part_3_PINN_BurgersEquation_inverse.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 3: Burgers' Equation w/ Parameter Estimation (Inverse Problem)

### Control

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/control/Part_1_stabilize_linear_system.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 1: Learning to stabilize a linear dynamical system.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/control/Part_2_stabilize_ODE.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 2: Learning to stabilize a nonlinear differential equation.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/control/Part_3_ref_tracking_ODE.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 3: Learning to control a nonlinear differential equation.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/control/Part_4_NODE_control.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 4: Learning neural ODE model and control policy for an unknown dynamical system.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/control/Part_5_neural_Lyapunov.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 5: Learning neural Lyapunov function for a nonlinear dynamical system.

### Domain Examples 

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/domain_examples/DPC_building_control.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 1: Learning to Control Indoor Air Temperature in Buildings.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/domain_examples/DPC_PSH.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 2: Learning to Control a Pumped-Hydroelectric Energy Storage System.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/domain_examples/NODE_building_dynamics.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 3: Learning Building Thermal Dynamics using Neural ODEs.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/domain_examples/NODE_rc_networks.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 4: Data-driven modeling of a Resistance-Capacitance network with Neural ODEs.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/domain_examples/NODE_swing_equation.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 5: Learning Swing Equation Dynamics using Neural ODEs.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/domain_examples/NSSM_building_dynamics.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 6: Learning Building Thermal Dynamics using Neural State Space Models.

### Lightning Integration Examples 

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/lightning_integration_examples/Part_1_lightning_basics_tutorial.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 1: Lightning Integration Basics.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/lightning_integration_examples/Part_2_lightning_advanced_and_gpu_tutorial.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 2: Lightning Advanced Features and Automatic GPU Support.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/lightning_integration_examples/Part_4_lightning_wanb_hyperparameter_tuning.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 3: Hyperparameter Tuning with Lightning & WandB.

+ <a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/lightning_integration_examples/other_examples/lightning_custom_training_example.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 4: Defining Custom Training Logic via Lightning Modularized Code.
 
## Documentation
The documentation for the library can be found [online](https://pnnl.github.io/neuromancer/). 
There is also an [introduction video](https://www.youtube.com/watch?v=YkFKz-DgC98) covering 
core features of the library. 


```python 
# Neuromancer syntax example for constrained optimization
import neuromancer as nm
import torch 

# define neural architecture 
func = nm.modules.blocks.MLP(insize=1, outsize=2, 
                             linear_map=nm.slim.maps['linear'], 
                             nonlin=torch.nn.ReLU, hsizes=[80] * 4)
# wrap neural net into symbolic representation via the Node class: map(p) -> x
map = nm.system.Node(func, ['p'], ['x'], name='map')
    
# define decision variables
x = nm.constraint.variable("x")[:, [0]]
y = nm.constraint.variable("x")[:, [1]]
# problem parameters sampled in the dataset
p = nm.constraint.variable('p')

# define objective function
f = (1-x)**2 + (y-x**2)**2
obj = f.minimize(weight=1.0)

# define constraints
con_1 = 100.*(x >= y)
con_2 = 100.*(x**2+y**2 <= p**2)

# create penalty method-based loss function
loss = nm.loss.PenaltyLoss(objectives=[obj], constraints=[con_1, con_2])
# construct differentiable constrained optimization problem
problem = nm.problem.Problem(nodes=[map], loss=loss)
```
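
Once `problem` has been trained (e.g., with the library's trainer utilities or the Lightning workflow above; training is not shown here), the learned solution map can be queried like any other `Node` on a dictionary of sampled parameters. A minimal sketch, assuming the `map` node from the snippet above and that a `Node` call returns a dictionary keyed by its output names:

```python
# Minimal evaluation sketch for the solution map defined above (assumes `map`
# from the previous snippet and that a Node returns a dict keyed by outputs).
import torch

# sample a batch of problem parameters p, shape (batch, 1) to match insize=1
# (the sampling range is chosen arbitrarily for illustration)
p_batch = {'p': 1.0 + 2.0 * torch.rand(64, 1)}

with torch.no_grad():
    out = map(p_batch)          # forward pass of the Node: map(p) -> x
x_batch = out['x']              # predicted decisions, shape (batch, 2)
print(x_batch.shape)
```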

![UML diagram](figs/class_diagram.png)
*UML diagram of NeuroMANCER classes.*


## Installation

### PIP Install (recommended)

Consider using a dedicated virtual environment (conda or otherwise) with Python 3.9 or 3.10 installed (this release requires Python >=3.9, <3.11). 

```bash
pip install neuromancer
```
Example usage: 

```python
import torch
from neuromancer.system import Node

fun_1 = lambda x1, x2: 2.*x1 - x2**2
node_3 = Node(fun_1, ['y1', 'y2'], ['y3'], name='quadratic')
# evaluate forward pass of the node with dictionary input dataset
print(node_3({'y1': torch.rand(2), 'y2': torch.rand(2)}))

```
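
Because a `Node` consumes and returns dictionaries keyed by variable names, nodes can be chained simply by matching keys. A small sketch extending the example above (the second node and its key names are made up for illustration):

```python
import torch
from neuromancer.system import Node

# first node: y3 = 2*y1 - y2**2 (same as the snippet above)
fun_1 = lambda x1, x2: 2.*x1 - x2**2
node_3 = Node(fun_1, ['y1', 'y2'], ['y3'], name='quadratic')

# second node consumes the first node's output key 'y3' (illustrative only)
node_4 = Node(torch.sigmoid, ['y3'], ['y4'], name='squash')

data = {'y1': torch.rand(2), 'y2': torch.rand(2)}
data.update(node_3(data))   # adds 'y3' to the dictionary
data.update(node_4(data))   # adds 'y4' computed from 'y3'
print(data['y4'])
```

For composing nodes into time-stepped simulations, see the `System` class covered in Part 3 of the tutorials above.
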
### Manual Install

First clone the neuromancer package.
A dedicated virtual environment (conda or otherwise) is recommended. 

Note: If you already have a neuromancer environment from a previous release, it is best to create a fresh environment by following the instructions below.

```bash
git clone -b master https://github.com/pnnl/neuromancer.git --single-branch
```

#### Create and activate virtual environment

``` bash
conda create -n neuromancer python=3.10.4
conda activate neuromancer
```

#### Install neuromancer and all dependencies.
From the top-level directory of the cloned repository, run:

```bash
pip install -e .[docs,tests,examples]
```

OR, for zsh users:
```zsh
pip install -e '.[docs,tests,examples]'
```

See the `pyproject.toml` file for reference.

``` toml
[project.optional-dependencies]
tests = ["pytest", "hypothesis"]
examples = ["casadi", "cvxpy", "imageio", "cvxpylayers"]
docs = ["sphinx", "sphinx-rtd-theme"]
```
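
To install only a subset of the optional dependencies, name just that extras group, e.g. the test dependencies:

```bash
pip install -e ".[tests]"
```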

#### Note on pip install with `examples` on MacOS (Apple M1)
Before CVXPY can be installed on Apple M1, you must install `cmake` via Homebrew:

```zsh
brew install cmake
```

See [CVXPY installation instructions](https://www.cvxpy.org/install/index.html) for more details.


### Conda install
Conda install is recommended for GPU acceleration. 

> ❗️Warning: `linux_env.yml`, `windows_env.yml`, and `osxarm64_env.yml` are out of date. Manual installation of dependencies is recommended for conda.


#### Create environment & install dependencies
##### Ubuntu

``` bash
conda env create -f linux_env.yml
conda activate neuromancer
```

##### Windows

``` bash
conda env create -f windows_env.yml
conda activate neuromancer
conda install -c defaults intel-openmp -f
```

##### MacOS (Apple M1)

``` bash
conda env create -f osxarm64_env.yml
conda activate neuromancer
```

##### Other (manually install all dependencies)

> ❗️Note: pay attention to the comments below for non-Linux operating systems.

``` bash
conda create -n neuromancer python=3.10.4
conda activate neuromancer
conda install pytorch pytorch-cuda=11.6 -c pytorch -c nvidia
## OR (for Mac): conda install pytorch -c pytorch
conda config --append channels conda-forge
conda install scipy numpy"<1.24.0" matplotlib scikit-learn pandas dill mlflow pydot=1.4.2 pyts numba
conda install networkx=3.0 plum-dispatch=1.7.3 
conda install -c anaconda pytest hypothesis
conda install cvxpy cvxopt casadi seaborn imageio
conda install tqdm torchdiffeq toml
## (for Windows): conda install -c defaults intel-openmp -f
```
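
As an optional sanity check that the CUDA-enabled PyTorch build was installed (not applicable on Mac):

```bash
python -c "import torch; print(torch.cuda.is_available())"
```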

#### Install NeuroMANCER package
From the top-level directory of the cloned repository
(in the activated environment where the dependencies have been installed), run:

```bash
pip install -e . --no-deps
```

### Test NeuroMANCER install
Run pytest on the [tests folder](https://github.com/pnnl/neuromancer/tree/master/tests). 
It should take about 2 minutes to run the tests on CPU. 
There will be a lot of warnings that you can safely ignore. These warnings will be cleaned 
up in a future release.
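
For example, from the top-level directory of the cloned repository (with pytest available, e.g. via the `tests` extras):

```bash
pytest tests
```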

## Community Information
We welcome contributions and feedback from the open-source community!  

### Contributions, Discussions, and Issues
Please read the [Community Development Guidelines](https://github.com/pnnl/neuromancer/blob/master/CONTRIBUTING.md) 
for further information on contributions, [discussions](https://github.com/pnnl/neuromancer/discussions), and [Issues](https://github.com/pnnl/neuromancer/issues).

###  Release notes
See the [Release notes](https://github.com/pnnl/neuromancer/blob/master/RELEASE_NOTES.md) documenting new features.

###  License
NeuroMANCER is released under a [BSD license](https://en.wikipedia.org/wiki/BSD_licenses).
See the [license](https://github.com/pnnl/neuromancer/blob/master/LICENSE.md) for further details. 


## Publications 
+ [James Koch, Zhao Chen, Aaron Tuor, Jan Drgona, Draguna Vrabie, Structural Inference of Networked Dynamical Systems with Universal Differential Equations, arXiv:2207.04962, (2022)](https://aps.arxiv.org/abs/2207.04962)
+ [Ján Drgoňa, Sayak Mukherjee, Aaron Tuor, Mahantesh Halappanavar, Draguna Vrabie, Learning Stochastic Parametric Differentiable Predictive Control Policies, IFAC ROCOND conference (2022)](https://www.sciencedirect.com/science/article/pii/S2405896322015877)
+ [Sayak Mukherjee, Ján Drgoňa, Aaron Tuor, Mahantesh Halappanavar, Draguna Vrabie, Neural Lyapunov Differentiable Predictive Control, IEEE Conference on Decision and Control Conference 2022](https://arxiv.org/abs/2205.10728)
+ [Wenceslao Shaw Cortez, Jan Drgona, Aaron Tuor, Mahantesh Halappanavar, Draguna Vrabie, Differentiable Predictive Control with Safety Guarantees: A Control Barrier Function Approach, IEEE Conference on Decision and Control Conference 2022](https://arxiv.org/abs/2208.02319)
+ [Ethan King, Jan Drgona, Aaron Tuor, Shrirang Abhyankar, Craig Bakker, Arnab Bhattacharya, Draguna Vrabie, Koopman-based Differentiable Predictive Control for the Dynamics-Aware Economic Dispatch Problem, 2022 American Control Conference (ACC)](https://ieeexplore.ieee.org/document/9867379)
+ [Drgoňa, J., Tuor, A. R., Chandan, V., & Vrabie, D. L., Physics-constrained deep learning of multi-zone building thermal dynamics. Energy and Buildings, 243, 110992, (2021)](https://www.sciencedirect.com/science/article/pii/S0378778821002760)
+ [E. Skomski, S. Vasisht, C. Wight, A. Tuor, J. Drgoňa and D. Vrabie, "Constrained Block Nonlinear Neural Dynamical Models," 2021 American Control Conference (ACC), 2021, pp. 3993-4000, doi: 10.23919/ACC50511.2021.9482930.](https://ieeexplore.ieee.org/document/9482930)
+ [Skomski, E., Drgoňa, J., & Tuor, A. (2021, May). Automating Discovery of Physics-Informed Neural State Space Models via Learning and Evolution. In Learning for Dynamics and Control (pp. 980-991). PMLR.](https://proceedings.mlr.press/v144/skomski21a.html)
+ [Drgoňa, J., Tuor, A., Skomski, E., Vasisht, S., & Vrabie, D. (2021). Deep Learning Explicit Differentiable Predictive Control Laws for Buildings. IFAC-PapersOnLine, 54(6), 14-19.](https://www.sciencedirect.com/science/article/pii/S2405896321012933)
+ [Tuor, A., Drgona, J., & Vrabie, D. (2020). Constrained neural ordinary differential equations with stability guarantees. arXiv preprint arXiv:2004.10883.](https://arxiv.org/abs/2004.10883)
+ [Drgona, Jan, et al. "Differentiable Predictive Control: An MPC Alternative for Unknown Nonlinear Systems using Constrained Deep Learning." Journal of Process Control Volume 116, August 2022, Pages 80-92](https://www.sciencedirect.com/science/article/pii/S0959152422000981)
+ [Drgona, J., Skomski, E., Vasisht, S., Tuor, A., & Vrabie, D. (2020). Dissipative Deep Neural Dynamical Systems, in IEEE Open Journal of Control Systems, vol. 1, pp. 100-112, 2022](https://ieeexplore.ieee.org/document/9809789)
+ [Drgona, J., Tuor, A., & Vrabie, D., Learning Constrained Adaptive Differentiable Predictive Control Policies With Guarantees, arXiv preprint arXiv:2004.11184, (2020)](https://arxiv.org/abs/2004.11184)


## Cite as
```bibtex
@article{Neuromancer2023,
  title={{NeuroMANCER: Neural Modules with Adaptive Nonlinear Constraints and Efficient Regularizations}},
  author={Drgona, Jan and Tuor, Aaron and Koch, James and Shapiro, Madelyn and Vrabie, Draguna},
  url={https://github.com/pnnl/neuromancer},
  year={2023}
}
```

## Development team

**Lead developers**: [Jan Drgona](https://drgona.github.io/), [Aaron Tuor](https://sw.cs.wwu.edu/~tuora/aarontuor/)   
**Active core developers**: Madelyn Shapiro, James Koch, Rahul Birmiwal  
**Scientific advisors**: Draguna Vrabie  
**Notable contributors**: Seth Briney, Bo Tang, Ethan King, Shrirang Abhyankar, 
Mia Skomski, Stefan Dernbach, Zhao Chen, Christian Møldrup Legaard

Open-source contributions made by:  
<a href="https://github.com/pnnl/neuromancer/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=pnnl/neuromancer" />
</a>

Made with [contrib.rocks](https://contrib.rocks).

## Acknowledgments
This research was partially supported by the Mathematics for Artificial Reasoning in Science (MARS) and Data Model Convergence (DMC) initiatives via the Laboratory Directed Research and Development (LDRD) investments at Pacific Northwest National Laboratory (PNNL), by the U.S. Department of Energy, through the Office of Advanced Scientific Computing Research's “Data-Driven Decision Control for Complex Systems (DnC2S)” project, and through the Energy Efficiency and Renewable Energy, Building Technologies Office under the “Dynamic decarbonization through autonomous physics-centric deep learning and optimization of building operations” and the “Advancing Market-Ready Building Energy Management by Cost-Effective Differentiable Predictive Control” projects. 
PNNL is a multi-program national laboratory operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute under Contract No. DE-AC05-76RL01830.


            
