lume-model

Name: lume-model
Version: 2.0.1
Summary: Data structures used in the LUME modeling toolset.
Author: SLAC National Accelerator Laboratory
Requires Python: >=3.10
Upload time: 2025-10-20 18:29:08
Homepage: https://github.com/slaclab/lume-model
Documentation: https://slaclab.github.io/lume-model/
License:

Copyright (c) 2017-2020, The Board of Trustees of the Leland Stanford Junior University, through SLAC National Accelerator Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

(1) Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

(2) Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

(3) Neither the name of the Leland Stanford Junior University, SLAC National Accelerator Laboratory, U.S. Dept. of Energy nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER, THE UNITED STATES GOVERNMENT, OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

You are under no obligation whatsoever to provide any bug fixes, patches, or upgrades to the features, functionality or performance of the source code ("Enhancements") to anyone; however, if you choose to make your Enhancements available either publicly, or directly to SLAC National Accelerator Laboratory, without imposing a separate written license agreement for such Enhancements, then you hereby grant the following license: a non-exclusive, royalty-free perpetual license to install, use, modify, prepare derivative works, incorporate into other computer software, distribute, and sublicense such Enhancements or derivative works thereof, in binary and source code form.

Keywords: machine learning, accelerator physics
Requirements: pydantic, numpy, pyyaml, mlflow

# LUME-model

LUME-model holds data structures used in the LUME modeling toolset. Variables and models built using LUME-model will be compatible with other tools in the LUME ecosystem. LUME-model uses [pydantic](https://pydantic-docs.helpmanual.io/) models to enforce typed attributes upon instantiation.

## Requirements

* Python >= 3.10
* pydantic
* numpy
* pyyaml
* mlflow

## Install

LUME-model can be installed with conda using the command:

```
conda install lume-model -c conda-forge
```

or through pip:

```
pip install lume-model
```

## Developer

A development environment may be created using the packaged `dev-environment.yml` file.

```
conda env create -f dev-environment.yml
```

Install as editable:

```
conda activate lume-model-dev
pip install --no-dependencies -e .
```

Alternatively, create a fresh environment and install the package with its development dependencies:

```
pip install -e ".[dev]"
```

Note that this repository uses pre-commit hooks. To install these hooks, run:

```
pre-commit install
```

## Variables

The lume-model variables are intended to enforce requirements on input and output values according to their variable type. For now, only scalar variables (floats) are supported.

Minimal example of scalar input and output variables:

```python
from lume_model.variables import ScalarVariable

input_variable = ScalarVariable(
    name="example_input",
    default_value=0.1,
    value_range=[0.0, 1.0],
)
output_variable = ScalarVariable(name="example_output")
```

All input variables may be made into constants by passing the
`is_constant=True` keyword argument. Constant variables are always
set to their default value, and any attempt to assign another value
to them will raise an error.
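
For example, a constant input might look like the following (a minimal sketch using the same `ScalarVariable` fields shown above):

```python
from lume_model.variables import ScalarVariable

# Constant input: always evaluated at its default value; assigning a
# different value raises an error.
constant_input = ScalarVariable(
    name="example_constant",
    default_value=0.5,
    value_range=[0.0, 1.0],
    is_constant=True,
)
```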

## Models

The lume-model base class `lume_model.base.LUMEBaseModel` is intended to guide user development while allowing for flexibility and customizability. It enforces an interface that keeps model classes compatible with LUME tools for executing trained models.

Requirements for model classes:

* `input_variables`: A list defining the input variables for the model. Variable names must be unique. Required for use with lume-epics tools.
* `output_variables`: A list defining the output variables for the model. Variable names must be unique. Required for use with lume-epics tools.
* `_evaluate`: The evaluate method is called by the serving model.
  Subclasses must implement this method, accepting and returning a dictionary.

Example model implementation and instantiation:

```python
from lume_model.base import LUMEBaseModel
from lume_model.variables import ScalarVariable


class ExampleModel(LUMEBaseModel):
    def _evaluate(self, input_dict):
        output_dict = {
            "output1": input_dict[self.input_variables[0].name] ** 2,
            "output2": input_dict[self.input_variables[1].name] ** 2,
        }
        return output_dict


input_variables = [
    ScalarVariable(name="input1", default=0.1, value_range=[0.0, 1.0]),
    ScalarVariable(name="input2", default=0.2, value_range=[0.0, 1.0]),
]
output_variables = [
    ScalarVariable(name="output1"),
    ScalarVariable(name="output2"),
]

m = ExampleModel(input_variables=input_variables, output_variables=output_variables)
```
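
Evaluation then follows the dictionary-in/dictionary-out pattern described above; a sketch, assuming the public `evaluate` method dispatches to `_evaluate`:

```python
# Evaluate the example model with a dictionary keyed by input variable name.
result = m.evaluate({"input1": 0.3, "input2": 0.6})
print(result)  # expected to be roughly {"output1": 0.09, "output2": 0.36}
```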

## Configuration files

Models and variables may be constructed using a YAML configuration file. The configuration file consists of three sections:

* model (optional, can alternatively pass a custom model class into the `model_from_yaml` method)
* input_variables
* output_variables

The model section is used for the initialization of model classes. The `model_class` entry is used to specify the model class to initialize. The `model_from_yaml` method will attempt to import the specified class. Additional model-specific requirements may be provided. These requirements will be checked before model construction. Model keyword arguments may be passed via the config file or with the function kwarg `model_kwargs`. All models are assumed to accept `input_variables` and `output_variables` as keyword arguments.
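
As a rough sketch of that alternative path (assuming `model_from_yaml` is importable from `lume_model.utils` and accepts the custom class and `model_kwargs` as keyword arguments; adjust to the actual signature):

```python
from lume_model.utils import model_from_yaml  # assumed import location

# Hypothetical call: load a model from a config file, supplying the custom
# model class and extra keyword arguments forwarded to its constructor.
m = model_from_yaml(
    "example_model.yml",
    model_class=ExampleModel,  # assumed keyword name for the custom class
    model_kwargs={},           # extra constructor keyword arguments
)
```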

For example, `m.dump("example_model.yml")` writes the following to file:

```yaml
model_class: ExampleModel
input_variables:
  input1:
    variable_class: ScalarVariable
    default_value: 0.1
    is_constant: false
    value_range: [0.0, 1.0]
  input2:
    variable_class: ScalarVariable
    default_value: 0.2
    is_constant: false
    value_range: [0.0, 1.0]
output_variables:
  output1: {variable_class: ScalarVariable}
  output2: {variable_class: ScalarVariable}
```

and can be loaded by simply passing the file to the model constructor:

```python
from lume_model.base import LUMEBaseModel


class ExampleModel(LUMEBaseModel):
    def _evaluate(self, input_dict):
        output_dict = {
            "output1": input_dict[self.input_variables[0].name] ** 2,
            "output2": input_dict[self.input_variables[1].name] ** 2,
        }
        return output_dict


m = ExampleModel("example_model.yml")
```

## PyTorch Toolkit

A `TorchModel` can also be loaded from a YAML file by specifying `TorchModel`
as the `model_class` in the configuration file.

```yaml
model_class: TorchModel
model: model.pt
output_format: tensor
device: cpu
fixed_model: true
```

In addition to the `model_class`, we also specify the paths to the
`TorchModel`'s underlying PyTorch model and any transformers (saved using `torch.save()`).
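
A minimal sketch of producing such a `model.pt` file with `torch.save()` (the two-input, one-output linear layer is only a placeholder, not the actual network):

```python
import torch

# Placeholder network with two inputs and one output; any torch.nn.Module
# could be serialized to disk the same way.
model = torch.nn.Linear(2, 1)
torch.save(model, "model.pt")

# Transformers referenced in the configuration would be saved similarly, e.g.
# torch.save(input_transformer, "input_transformers_0.pt").
```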

The `output_format` specification indicates which form the outputs
of the model's `evaluate()` function should take, which may vary
depending on the application. `TorchModel` instances working with the
[LUME-EPICS](https://github.com/slaclab/lume-epics) service require an
`OutputVariable` type, while [Xopt](https://github.com/xopt-org/Xopt)
requires either a dictionary of float values or tensors as output.

The variables and any transformers can also be added to the YAML
configuration file:

```yaml
model_class: TorchModel
input_variables:
  input1:
    variable_class: ScalarVariable
    default_value: 0.1
    value_range: [0.0, 1.0]
    is_constant: false
  input2:
    variable_class: ScalarVariable
    default_value: 0.2
    value_range: [0.0, 1.0]
    is_constant: false
output_variables:
  output:
    variable_class: ScalarVariable
    value_range: [-.inf, .inf]
    is_constant: false
input_validation_config: null
output_validation_config: null
model: model.pt
input_transformers: [input_transformers_0.pt]
output_transformers: [output_transformers_0.pt]
output_format: tensor
device: cpu
fixed_model: true
precision: double
```

The TorchModel can then be loaded:

```python
from lume_model.models.torch_model import TorchModel

# Load the model from a YAML file
torch_model = TorchModel("path/to/model_config.yml")
```
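
Once loaded, evaluation follows the same dictionary-in/dictionary-out pattern as other LUME models; as a sketch (exact input types and key names depend on the configuration), with `output_format: tensor` the returned values would be tensors:

```python
# Hypothetical evaluation call for the configuration shown above.
result = torch_model.evaluate({"input1": 0.1, "input2": 0.2})
print(result)  # e.g. {"output": tensor(...)} when output_format is "tensor"
```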


## TorchModule Usage

The `TorchModule` wrapper around the `TorchModel` is used to provide
a consistent API with PyTorch, making it easier to integrate with
other PyTorch-based tools and workflows.

### Initialization

To initialize a `TorchModule`, provide either a `TorchModel` instance
or a YAML file containing the `TorchModule` configuration.

```python
from lume_model.models.torch_module import TorchModule

# Wrap an existing TorchModel (e.g. the torch_model loaded above) in a TorchModule
torch_module = TorchModule(model=torch_model)

# Or load the model configuration from a YAML file
torch_module = TorchModule("path/to/module_config.yml")
```

### Model Configuration

The YAML configuration file should specify the `TorchModule` class
as well as the `TorchModel` configuration:

```yaml
model_class: TorchModule
input_order: [input1, input2]
output_order: [output]
model:
  model_class: TorchModel
  input_variables:
    input1:
      variable_class: ScalarVariable
      default_value: 0.1
      value_range: [0.0, 1.0]
      is_constant: false
    input2:
      variable_class: ScalarVariable
      default_value: 0.2
      value_range: [0.0, 1.0]
      is_constant: false
  output_variables:
    output:
      variable_class: ScalarVariable
  model: model.pt
  output_format: tensor
  device: cpu
  fixed_model: true
  precision: double
```

### Using the Model

Once the `TorchModule` is initialized, you can use it just like a
regular PyTorch model. You can pass tensor-type inputs to the model and
get tensor-type outputs.

```python
from torch import tensor
from lume_model.models.torch_module import TorchModule


# Example input tensor
input_data = tensor([[0.1, 0.2]])

# Evaluate the model
output = torch_module(input_data)

# Output will be a tensor
print(output)
```

### Saving using TorchScript

The `TorchModule` class's `dump` method can additionally save the model as a scripted JIT model by passing `save_jit=True`. This writes a TorchScript version of the model, which can be loaded and used without the original model file.

Note that saving as JIT through scripting has only been evaluated for neural network models that do not depend on BoTorch modules.
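
As a sketch (the configuration file name is an assumption; other `dump` arguments follow the earlier examples):

```python
# Write the TorchModule configuration and additionally save a scripted
# TorchScript version of the wrapped model.
torch_module.dump("torch_module_config.yml", save_jit=True)
```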

            
