Name | lume-model |
Version | 1.7.1 |
home_page | None |
Summary | Data structures used in the LUME modeling toolset. |
upload_time | 2024-11-06 22:58:53 |
maintainer | None |
docs_url | None |
author | SLAC National Accelerator Laboratory |
requires_python | >=3.9 |
license | Copyright (c) 2017-2020, The Board of Trustees of the Leland Stanford Junior University, through SLAC National Accelerator Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: (1) Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. (2) Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. (3) Neither the name of the Leland Stanford Junior University, SLAC National Accelerator Laboratory, U.S. Dept. of Energy nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER, THE UNITED STATES GOVERNMENT, OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
You are under no obligation whatsoever to provide any bug fixes, patches, or upgrades to the features, functionality or performance of the source code ("Enhancements") to anyone; however, if you choose to make your Enhancements available either publicly, or directly to SLAC National Accelerator Laboratory, without imposing a separate written license agreement for such Enhancements, then you hereby grant the following license: a non-exclusive, royalty-free perpetual license to install, use, modify, prepare derivative works, incorporate into other computer software, distribute, and sublicense such Enhancements or derivative works thereof, in binary and source code form. |
keywords | machine learning, accelerator physics |
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
# LUME-model
LUME-model holds data structures used in the LUME modeling toolset. Variables and models built using LUME-model will be compatible with other tools. LUME-model uses [pydantic](https://pydantic-docs.helpmanual.io/) models to enforce typed attributes upon instantiation.
## Requirements
* Python >= 3.9
* pydantic
* numpy
## Install
LUME-model can be installed with conda using the command:
```
conda install lume-model -c conda-forge
```
## Developer
A development environment may be created using the packaged `dev-environment.yml` file.
```
conda env create -f dev-environment.yml
```
## Variables
The lume-model variables are intended to enforce requirements for input and output variables by variable type. For now, only scalar variables (floats) are supported.
Minimal example of scalar input and output variables:
```python
from lume_model.variables import ScalarInputVariable, ScalarOutputVariable
input_variable = ScalarInputVariable(
    name="example_input",
    default=0.1,
    value_range=[0.0, 1.0],
)
output_variable = ScalarOutputVariable(name="example_output")
```
All input variables may be made into constants by passing the `is_constant=True` keyword argument. Value assignments to these constant variables will raise an error.
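As an illustrative sketch only (not the actual lume-model implementation, which builds on pydantic), the constant-variable behavior can be pictured as a guarded property:

```python
# Illustrative sketch of the constant-variable behavior described above.
# SketchVariable is a hypothetical stand-in, NOT lume-model's
# ScalarInputVariable; it only mirrors the assignment guard conceptually.
class SketchVariable:
    def __init__(self, name, default, is_constant=False):
        self.name = name
        self.is_constant = is_constant
        self._value = default

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        if self.is_constant:
            raise ValueError(f"Cannot assign to constant variable '{self.name}'.")
        self._value = new_value


constant = SketchVariable("example_input", default=0.1, is_constant=True)
# constant.value = 0.5  # would raise ValueError
```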
## Models
The lume-model base class `lume_model.base.LUMEBaseModel` is intended to guide user development while allowing for flexibility and customizability. It is used to enforce LUME tool compatible classes for the execution of trained models.
Requirements for model classes:
* input_variables: A list defining the input variables for the model. Variable names must be unique. Required for use with lume-epics tools.
* output_variables: A list defining the output variables for the model. Variable names must be unique. Required for use with lume-epics tools.
* evaluate: The evaluate method is called by the serving model. Subclasses must implement this method, accepting and returning a dictionary.
Example model implementation and instantiation:
```python
from lume_model.base import LUMEBaseModel
from lume_model.variables import ScalarInputVariable, ScalarOutputVariable
class ExampleModel(LUMEBaseModel):
    def evaluate(self, input_dict):
        output_dict = {
            "output1": input_dict[self.input_variables[0].name] ** 2,
            "output2": input_dict[self.input_variables[1].name] ** 2,
        }
        return output_dict


input_variables = [
    ScalarInputVariable(name="input1", default=0.1, value_range=[0.0, 1.0]),
    ScalarInputVariable(name="input2", default=0.2, value_range=[0.0, 1.0]),
]
output_variables = [
    ScalarOutputVariable(name="output1"),
    ScalarOutputVariable(name="output2"),
]

m = ExampleModel(input_variables=input_variables, output_variables=output_variables)
```
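Calling `evaluate` then maps an input dictionary to an output dictionary. For the model above, each output is the square of the corresponding input; a dependency-free sketch of that mapping:

```python
# Stand-in for ExampleModel.evaluate from the snippet above (a sketch;
# the real method is defined on the LUMEBaseModel subclass).
def evaluate(input_dict):
    return {
        "output1": input_dict["input1"] ** 2,
        "output2": input_dict["input2"] ** 2,
    }


result = evaluate({"input1": 0.1, "input2": 0.2})
```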
## Configuration files
Models and variables may be constructed using a YAML configuration file. The configuration file consists of three sections:
* model (optional, can alternatively pass a custom model class into the `model_from_yaml` method)
* input_variables
* output_variables
The model section is used for the initialization of model classes. The `model_class` entry is used to specify the model class to initialize. The `model_from_yaml` method will attempt to import the specified class. Additional model-specific requirements may be provided. These requirements will be checked before model construction. Model keyword arguments may be passed via the config file or with the function kwarg `model_kwargs`. All models are assumed to accept `input_variables` and `output_variables` as keyword arguments.
For example, `m.dump("example_model.yml")` writes the following to file:
```yaml
model_class: ExampleModel
input_variables:
  input1:
    variable_type: scalar
    default: 0.1
    is_constant: false
    value_range: [0.0, 1.0]
  input2:
    variable_type: scalar
    default: 0.2
    is_constant: false
    value_range: [0.0, 1.0]
output_variables:
  output1: {variable_type: scalar}
  output2: {variable_type: scalar}
```
and can be loaded by simply passing the file to the model constructor:
```python
from lume_model.base import LUMEBaseModel
class ExampleModel(LUMEBaseModel):
    def evaluate(self, input_dict):
        output_dict = {
            "output1": input_dict[self.input_variables[0].name] ** 2,
            "output2": input_dict[self.input_variables[1].name] ** 2,
        }
        return output_dict


m = ExampleModel("example_model.yml")
```
## PyTorch Toolkit
As with the KerasModel, a PyTorchModel can be loaded using the `lume_model.utils.model_from_yaml` method by specifying `PyTorchModel` in the `model_class` entry of the configuration file.
```yaml
model:
  kwargs:
    model_file: /path/to/california_regression.pt
  model_class: lume_model.torch.PyTorchModel
  model_info: path/to/model_info.json
  output_format:
    type: tensor
  requirements:
    torch: 1.12
```
In addition to the `model_class`, we also specify the path to the PyTorch model (saved using `torch.save()`) and additional information about the model through the `model_info.json` file, such as the order of the feature names and outputs of the model:
```json
{
  "train_input_mins": [
    0.4999000132083893,
    ...
    -124.3499984741211
  ],
  "train_input_maxs": [
    15.000100135803223,
    ...
    -114.30999755859375
  ],
  "model_in_list": [
    "MedInc",
    ...
    "Longitude"
  ],
  "model_out_list": [
    "MedHouseVal"
  ],
  "loc_in": {
    "MedInc": 0,
    ...
    "Longitude": 7
  },
  "loc_out": {
    "MedHouseVal": 0
  }
}
```
The `output_format` specification indicates which form the outputs of the model's `evaluate()` function should take, which may vary depending on the application. PyTorchModels working with the [LUME-EPICS](https://github.com/slaclab/lume-epics) service will require an `OutputVariable` type, while [Xopt](https://github.com/ChristopherMayes/Xopt) requires either a dictionary of float values or tensors as output.
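The `loc_in`/`loc_out` mappings in `model_info.json` tie positional model inputs and outputs to variable names. A minimal sketch (illustrative only; the values are made up) of how such a mapping turns a positional model output into a named dictionary:

```python
# Sketch: convert a positional model output into a named dictionary using
# a "loc_out"-style index mapping, as in model_info.json above.
loc_out = {"MedHouseVal": 0}
raw_output = [4.526]  # hypothetical positional output of the model
named_output = {name: raw_output[idx] for name, idx in loc_out.items()}
```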
It is important to note that currently the **transformers are not loaded** into the model when using the `model_from_yaml` method. These need to be created separately and added either:
* to the model's `kwargs` before instantiating
```python
import torch
import json
from botorch.models.transforms.input import AffineInputTransform  # assumed source of the transformer class
from lume_model.torch import PyTorchModel
from lume_model.utils import model_from_yaml

# load the model class and kwargs
with open("california_variables.yml", "r") as f:
    model_class, model_kwargs = model_from_yaml(f, load_model=False)

# construct the transformers
with open("normalization.json", "r") as f:
    normalizations = json.load(f)

input_transformer = AffineInputTransform(
    len(normalizations["x_mean"]),
    coefficient=torch.tensor(normalizations["x_scale"]),
    offset=torch.tensor(normalizations["x_mean"]),
)
output_transformer = AffineInputTransform(
    len(normalizations["y_mean"]),
    coefficient=torch.tensor(normalizations["y_scale"]),
    offset=torch.tensor(normalizations["y_mean"]),
)

model_kwargs["input_transformers"] = [input_transformer]
model_kwargs["output_transformers"] = [output_transformer]

model = PyTorchModel(**model_kwargs)
```
* using the setters for the transformer attributes in the model.
```python
import torch
import json
from botorch.models.transforms.input import AffineInputTransform  # assumed source of the transformer class
from lume_model.utils import model_from_yaml

# load the model
with open("california_variables.yml", "r") as f:
    model = model_from_yaml(f, load_model=True)

# construct the transformers
with open("normalization.json", "r") as f:
    normalizations = json.load(f)

input_transformer = AffineInputTransform(
    len(normalizations["x_mean"]),
    coefficient=torch.tensor(normalizations["x_scale"]),
    offset=torch.tensor(normalizations["x_mean"]),
)
output_transformer = AffineInputTransform(
    len(normalizations["y_mean"]),
    coefficient=torch.tensor(normalizations["y_scale"]),
    offset=torch.tensor(normalizations["y_mean"]),
)

# use the model's setters to add the transformers. Here we use a tuple
# to tell the setter where in the list the transformer should be
# inserted. In this case, because we only have one of each, we add them
# at the beginning of the lists.
model.input_transformers = (input_transformer, 0)
model.output_transformers = (output_transformer, 0)
```
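Conceptually, an affine input transform normalizes each feature with the means and scales stored in `normalization.json`. A dependency-free sketch of that normalization, assuming the common `(x - mean) / scale` convention (the exact behavior of `AffineInputTransform` may differ):

```python
def affine_normalize(x, mean, scale):
    # Elementwise (x - mean) / scale, mirroring the x_mean / x_scale
    # entries in normalization.json. A sketch only; see botorch's
    # AffineInputTransform for the actual transformer used above.
    return [(xi - mi) / si for xi, mi, si in zip(x, mean, scale)]


normalized = affine_normalize([3.0, 5.0], mean=[1.0, 1.0], scale=[2.0, 4.0])
```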