evopt 0.14.3 (PyPI)

- Summary: User Friendly Data-Driven Numerical Optimization
- Author: Roberto Hart-Villamil
- Home page: https://github.com/Robh96/Evopt
- Requires Python: >=3.10
- License: GNU v3.0
- Keywords: optimization, evolutionary, cmaes, calibration, simulation, fine-tuning, simple
- Requirements: numpy, pandas, cma, cloudpickle, scipy, plotly, matplotlib, pysr
- Upload time: 2025-03-18 21:03:10
# evopt
### User Friendly Black-Box Numerical Optimization
`evopt` is a package for efficient parameter optimization using the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) algorithm. It provides a user-friendly way to find the best set of parameters for a given problem, especially when the problem is complex, non-linear, and doesn't have easily calculable derivatives. It also includes PySR's symbolic regression engine to understand relationships between variables in the results.

<div align="center">
  <img src="https://raw.githubusercontent.com/robh96/evopt/main/images/cover_img.png" alt="Optimization of the two parameter Ackley function." width="800">
  <br>
  <em>Optimization of the two parameter Ackley function.</em>
</div>


## Documentation

Complete documentation is available at [evopt.readthedocs.io](https://evopt.readthedocs.io/en/latest/index.html).

## Scope

*   **Focus**: `evopt` provides a CMA-ES-based optimization routine that is easy to set up and use.
*   **Parameter Optimization**: The package is designed for problems where you need to find the optimal values for a set of parameters.
*   **Derivative-Free Optimization**: It is designed to work without needing derivative information.
*   **Directory Management**: The package includes robust directory management to organise results, checkpoints, and logs.
*   **Logging**: It provides logging capabilities to track the optimization process.
*   **Checkpointing**: It supports saving and loading checkpoints to resume interrupted optimization runs.
*   **CSV Output**: It writes results and epoch data to CSV files for easy analysis.
*   **Easy results plotting**: Simple pain-free methods to plot the results.
*   **High Performance Computing**: It can leverage HPC resources for increased performance.

## Key Advantages

*   **Ease of Use**: Simple API for defining parameters, evaluator, and optimization settings.
*   **Derivative-Free**: Works well for problems where derivatives are unavailable or difficult to compute.
*   **Robustness**: CMA-ES is a powerful optimization algorithm that can handle non-convex and noisy problems.
*   **Organization**: Automatic directory management and logging for easy tracking and analysis.

## Installation

You can install the package using `pip`:

```
pip install evopt
```

## Usage

Here is an example of how to use the `evopt` package to optimize the Rosenbrock function:

```python
import evopt

# Define your parameters, their bounds, and evaluator function
params = {
    'param1': (-5, 5),
    'param2': (-5, 5),
}
def evaluator(param_dict):
    # Your evaluation logic here, in this case the Rosenbrock function
    p1 = param_dict['param1']
    p2 = param_dict['param2']
    error = (1 - p1) ** 2 + 100*(p2 - p1 ** 2) ** 2
    return error

# Run the optimization using .optimize method
results = evopt.optimize(params, evaluator)
```
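The Rosenbrock function has a known global minimum of 0 at (1, 1), so the evaluator can be sanity-checked on its own, independently of the optimizer:

```python
# Standalone check of the Rosenbrock evaluator at known points.
def evaluator(param_dict):
    p1 = param_dict['param1']
    p2 = param_dict['param2']
    return (1 - p1) ** 2 + 100 * (p2 - p1 ** 2) ** 2

assert evaluator({'param1': 1, 'param2': 1}) == 0  # global minimum
assert evaluator({'param1': 0, 'param2': 0}) == 1  # (1-0)^2 + 100*(0-0)^2
```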

Here is the corresponding output:

```terminal
Starting new CMAES run in directory path\to\base\dir\evolve_0
Epoch 0 | (1/16) | Params: [1.477, -2.369] | Error: 2069.985
Epoch 0 | (2/16) | Params: [-2.644, -1.651] | Error: 7481.172
Epoch 0 | (3/16) | Params: [0.763, -4.475] | Error: 2557.411
Epoch 0 | (4/16) | Params: [4.269, -0.929] | Error: 36687.174
Epoch 0 | (5/16) | Params: [-1.879, -4.211] | Error: 5999.711
Epoch 0 | (6/16) | Params: [4.665, -2.186] | Error: 57374.982
Epoch 0 | (7/16) | Params: [-1.969, -2.326] | Error: 3856.201
Epoch 0 | (8/16) | Params: [-1.588, -3.167] | Error: 3244.840
Epoch 0 | (9/16) | Params: [-2.191, -2.107] | Error: 4780.562
Epoch 0 | (10/16) | Params: [2.632, -0.398] | Error: 5369.439
Epoch 0 | (11/16) | Params: [-2.525, -1.427] | Error: 6099.094
Epoch 0 | (12/16) | Params: [4.161, -2.418] | Error: 38955.920
Epoch 0 | (13/16) | Params: [-0.435, -1.422] | Error: 261.646
Epoch 0 | (14/16) | Params: [-0.008, -3.759] | Error: 1414.379
Epoch 0 | (15/16) | Params: [-4.243, -0.564] | Error: 34496.083
Epoch 0 | (16/16) | Params: [0.499, -3.170] | Error: 1169.217
Epoch 0 | Mean Error: 13238.614 | Sigma Error: 17251.295
Epoch 0 | Mean Parameters: [0.062, -2.286] | Sigma parameters: [2.663, 1.187]
Epoch 0 | Normalised Sigma parameters: [1.065, 0.475]
...
Epoch 21 | Mean Error: 2.315 | Sigma Error: 0.454
Epoch 21 | Mean Parameters: [-0.391, 0.192] | Sigma parameters: [0.140, 0.154]
Epoch 21 | Normalised Sigma parameters: [0.056, 0.062]
Terminating after meeting termination criteria at epoch 22.
```

```python
print(results.best_parameters)
```
```terminal
{'param1': -0.391, 'param2': 0.192}
```


## Multi-objective target optimization
Sometimes when using black-box functions such as simulations, your result may be a specific variable such as mean pressure, temperature, or velocity. With `evopt` you can specify a target value for the optimizer to reach, and where targets conflict, you can mark each target as `hard` or `soft` so that the optimizer can weight their priorities.

For example:
```python
import evopt

# example black-box function
def example_eval(param_dict):
    x1 = param_dict['x1']
    x2 = param_dict['x2']
    target1 = (1 - 2 * (x1 - 3))
    target2 = x1 ** 2 + 1 + x2
    return {'target1': target1, 'target2': target2}

# define objectives
target_dict = {
    "target1": {"value": 2.8, "hard": True},
    "target2": {"value": 2.9, "hard": False},
}

# define free parameters (evaluated by black-box function)
params = {
    "x1": (-5, 5),
    "x2": (-5, 5),
}

results = evopt.optimize(params, example_eval, target_dict=target_dict)
```

and corresponding output:
```terminal
Starting new CMAES run in directory path\to\base\dir\evolve_0
target1: 100% of values outside [2.66e+00, 2.94e+00]
target1: 16.10 | loss: 4.47e-01 | Hard: True | Constraint met: False
target2: 100% of values outside [2.75e+00, 3.04e+00]
target2: 23.90 | loss: 5.71e-01 | Hard: False | Constraint met: False
Epoch 0 | (1/64) | Params: [-4.551, 2.191] | Error: 0.472
target1: 100% of values outside [2.66e+00, 2.94e+00]
target1: 15.94 | loss: 4.43e-01 | Hard: True | Constraint met: False
target2: 100% of values outside [2.75e+00, 3.04e+00]
target2: 23.39 | loss: 5.64e-01 | Hard: False | Constraint met: False
Epoch 0 | (2/64) | Params: [-4.468, 2.431] | Error: 0.467
target1: 100% of values outside [2.66e+00, 2.94e+00]
target1: 15.39 | loss: 4.30e-01 | Hard: True | Constraint met: False
target2: 100% of values outside [2.75e+00, 3.04e+00]
target2: 21.51 | loss: 5.36e-01 | Hard: False | Constraint met: False
Epoch 0 | (3/64) | Params: [-4.196, 2.901] | Error: 0.452
...
Epoch 11 | Mean Error: 0.000 | Sigma Error: 0.000
Epoch 11 | Mean Parameters: [2.105, -2.501] | Sigma parameters: [0.039, 0.202]
Epoch 11 | Normalised Sigma parameters: [0.015, 0.081]
Terminating after meeting termination criteria at epoch 12.
```
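For this toy problem the solution can also be checked by hand, assuming both targets are met exactly: `target1 = 2.8` gives `1 - 2*(x1 - 3) = 2.8`, so `x1 = 2.1`, and `target2 = 2.9` then gives `x2 = 2.9 - 1 - 2.1**2 = -2.51`, in agreement with the converged mean parameters `[2.105, -2.501]` reported at epoch 11:

```python
# Analytic check of the two-target problem above.
x1 = 3 + (1 - 2.8) / 2    # solve target1 = 1 - 2*(x1 - 3) = 2.8
x2 = 2.9 - 1 - x1 ** 2    # solve target2 = x1**2 + 1 + x2 = 2.9

assert abs(x1 - 2.1) < 1e-9
assert abs(x2 - (-2.51)) < 1e-9
```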
Note that verbosity can be controlled with the `verbose: bool` option in `evopt.optimize()`.

## Keywords for `optimize()` Function

The `evopt.optimize()` function takes several keyword arguments to control the optimization process:

*   `params (dict)`: A dictionary defining the parameters to optimize. Keys are parameter names, and values are tuples of `(min, max)` bounds.
*   `evaluator (Callable)`: A callable (usually a function) that evaluates the parameters and returns an error value. This function is the core of your optimization problem.
*   `optimizer (str, optional)`: The optimization algorithm to use. Currently, only 'cmaes' (Covariance Matrix Adaptation Evolution Strategy) is supported. Defaults to `'cmaes'`.
*   `base_dir (str, optional)`: The base directory where the optimization results (checkpoints, logs, CSV files) will be stored. If not specified, it defaults to the current working directory.
*   `dir_id (int, optional)`: A specific directory ID for the optimization run. If provided, the results will be stored in base_dir/evolve_{dir_id}. If not provided, a new unique ID will be generated automatically.
*   `sigma_threshold (float, optional)`: The threshold for the sigma values (step size) of the CMA-ES algorithm. The optimization will terminate when all sigma values are below this threshold, indicating convergence. Defaults to `0.1`.
*   `batch_size (int, optional)`: The number of solutions to evaluate in each epoch (generation) of the CMA-ES algorithm. A larger batch size can speed up the optimization but may require more computational resources. Defaults to `16`.
*   `start_epoch (int, optional)`: The epoch number to start from. This is useful for resuming an interrupted optimization run from a checkpoint. Defaults to `None`.
*   `verbose (bool, optional)`: Whether to print detailed information about the optimization process to the console. If `True`, the optimization will print information about each epoch and solution. Defaults to `True`.
*   `num_epochs (int, optional)`: The maximum number of epochs to run the optimization for. If specified, the optimization will terminate after this number of epochs, even if the convergence criteria (`sigma_threshold`) has not been met. If None, the optimization will run until the convergence criteria is met. Defaults to `None`.
*   `max_workers (int, optional)`: The number of multiprocessing workers to run concurrently. Each worker runs on a separate processor. Defaults to `1`.
*   `rand_seed (int, optional)`: Random seed for a deterministic, reproducible run.
*   `hpc_cores_per_worker (int, optional)`: Number of CPU cores to allocate per HPC worker.
*   `hpc_memory_gb_per_worker (int, optional)`: Memory in GB to allocate per worker on the HPC.
*   `hpc_wall_time (str, optional)`: Wall-time limit for each HPC worker, in the format `"DD:HH:MM:SS"` or `"HH:MM:SS"`.
*   `hpc_qos (str, optional)`: Quality of Service for HPC jobs.
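The `hpc_wall_time` string must match one of the two layouts above. As an illustration of what those layouts mean (this helper is not part of the `evopt` API, just a sketch), the string maps to a duration like so:

```python
def wall_time_to_seconds(wall_time: str) -> int:
    """Convert a "DD:HH:MM:SS" or "HH:MM:SS" string to total seconds.

    Illustrative helper only -- not part of the evopt API.
    """
    parts = [int(p) for p in wall_time.split(":")]
    if len(parts) == 3:          # "HH:MM:SS" -> assume zero days
        parts = [0] + parts
    if len(parts) != 4:
        raise ValueError("expected 'DD:HH:MM:SS' or 'HH:MM:SS'")
    days, hours, minutes, seconds = parts
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds

wall_time_to_seconds("01:12:30:00")  # 1 day 12.5 hours -> 131400
wall_time_to_seconds("02:15:00")     # 2 h 15 min -> 8100
```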


## Plotting convergence

`Evopt` provides an overview of the convergence of each parameter over the epochs through the `evopt.Plotting.plot_epochs()` method.

```python
# path to your evolve folder that contains epochs.csv and results.csv
evolve_dir = r"path\to\base\dir\evolve_0" 
evopt.Plotting.plot_epochs(evolve_dir_path=evolve_dir)
```
**Output:**

<div align="center">
  <img src="https://raw.githubusercontent.com/robh96/evopt/main/images/convergence_plots.png" alt="Error convergence." width="800">
  <br>
  <em>Convergence plots displaying error, parameters, targets, and normalised standard-deviation of the solution (normalised sigma) as a function of the number of epochs.</em>
</div>
<br>

## Plotting variables

`Evopt` also supports hassle-free plotting of 1-D, 2-D, 3-D, and even 4-D results data using a single method: `evopt.Plotting.plot_vars()`. Simply specify the `evolve_{dir_id}` directory and the columns of the `results.csv` file you want to plot. By default, figures are saved to `evolve_{dir_id}\figures`.


### 2-D example (simple xy plot):

```python
evopt.Plotting.plot_vars(evolve_dir_path=evolve_dir, x="x1", y="error")
```
**Output:**

<div align="center">
  <img src="https://raw.githubusercontent.com/robh96/evopt/main/images/x1_vs_error.png" alt="Parameter versus error." width="400">
  <br>
  <em>Scatter plot showing parameter versus error. The axis handle is returned to the user for any modifications.</em>
</div>
<br>

### 2-D example (Voronoi plot):

```python
evopt.Plotting.plot_vars(evolve_dir_path=evolve_dir, x="x1", y="x2", cval="error")
```
**Output:**

<div align="center">
  <img src="https://raw.githubusercontent.com/robh96/evopt/main/images/x1_vs_x2_vs_error_Voronoi.png" alt="Parameters versus error Voronoi plot." width="400">
  <br>
  <em>2-D Voronoi plot illustrating parameters versus error. Each cell contains a single solution, with each cell boundary equidistant between the points on either side. In this sense the plot conveys the exploration/exploitation nature of the evolutionary algorithm as it hones in on the global optimum. The axis handle is returned to the user for any modifications.
  </em>
</div>
<br>


### 4-D example (interactive HTML surface plot with colour):
```python
evopt.Plotting.plot_vars(evolve_dir_path=evolve_dir, x="x1", y="x2", z="error", cval="epoch")
```
**Output:**

<div align="center">
  <img src="https://raw.githubusercontent.com/robh96/evopt/main/images/x1_vs_x2_vs_error_vs_epoch_surface.png" alt="Parameters versus error coloured by epoch 3-D surface plot." width="400">
  <br>
  <em>3-D surface plot of the parameters versus the error values, coloured by epoch. As is the nature of convergent optimization, the latest epochs show the lowest error values.
  </em>
</div>
<br>

## Directory Structure

When you run an optimization with `evopt`, it creates the following directory structure to organise the results:
Each evaluation function call operates in its respective solution directory. This means that files can be created locally without needing absolute paths.
For example: 
```python
def evaluator(param_dict: dict) -> float:
    ...
    # Paths are relative to this solution's own directory.
    with open("your_file.txt", 'a') as f:
        f.write(str(error))
    ...
    return error
```
This would result in a file `your_file.txt` being created in each solution folder:

```
base_directory/
└── evolve_{dir_id}/
    ├── epochs/
    │   ├── epoch0000/
    │   │   ├── solution0000/
    │   │   │   └── your_file.txt
    │   │   ├── solution0001/
    │   │   │   └── your_file.txt
    │   │   └── ...
    │   ├── epoch0001/
    │   │   └── ...
    │   └── ...
    ├── checkpoints/
    │   ├── checkpoint_epoch0000.pkl
    │   ├── checkpoint_epoch0001.pkl
    │   └── ...
    ├── logs/
    │   └── logfile.log
    ├── epochs.csv
    └── results.csv
```

*   `base_directory`: This is the base directory where the optimization runs are stored. If not specified, it defaults to the current working directory.
*   `evolve_{dir_id}`: Each optimization run gets its own directory named `evolve_{dir_id}`, where `dir_id` is a unique integer.
*   `epochs`: This directory contains subdirectories for each epoch of the optimization.
*   `epoch####`: Each epoch directory contains subdirectories for each solution evaluated in that epoch. Epoch folders are only produced if their solution folders contain files.
*   `solution####`: Each solution directory can contain files generated by the evaluator function for that specific solution. Solution folders are only produced if files are created during an evaluation.
*   `checkpoints`: This directory stores checkpoint files, allowing you to resume interrupted optimization runs.
*   `logs`: This directory contains the log file (`logfile.log`) which captures the output of the optimization process.
*   `epochs.csv`: This file contains summary statistics for each epoch, such as mean error, parameter values, and sigma values.
*   `results.csv`: This file contains the results for each solution evaluated during the optimization, including parameter values and the corresponding error.
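Because checkpoint files follow the zero-padded `checkpoint_epoch####.pkl` naming shown above, the most recent one can be located with a few lines of standard-library code (an illustrative helper, not part of the `evopt` API):

```python
from pathlib import Path

def latest_checkpoint(evolve_dir):
    """Return the highest-numbered checkpoint file in evolve_dir/checkpoints,
    or None if there are no checkpoints yet.

    Illustrative helper based on the naming scheme above -- not part of evopt.
    """
    # Zero-padded epoch numbers sort correctly as plain strings.
    ckpts = sorted(Path(evolve_dir, "checkpoints").glob("checkpoint_epoch*.pkl"))
    return ckpts[-1] if ckpts else None
```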

## Citing
If you publish research making use of this library, we encourage you to cite this repository:
> Hart-Villamil, R. (2024). Evopt, simple but powerful gradient-free numerical optimization.

This library makes fundamental use of the `pycma` implementation of the state-of-the-art CMA-ES algorithm.
Hence we kindly ask that research using this library cites:
> Nikolaus Hansen, Youhei Akimoto, and Petr Baudis. CMA-ES/pycma on Github. Zenodo, DOI:10.5281/zenodo.2559634, February 2019.

This work was also inspired by 'ACCES', a package for derivative-free numerical optimization designed for simulations.
> Nicusan, A., Werner, D., Sykes, J. A., Seville, J., & Windows-Yule, K. (2022). ACCES: Autonomous Characterisation and Calibration via Evolutionary Simulation (Version 0.2.0) [Computer software]

The symbolic regression functionality of this package is built upon PySR. We ask that research cites:
> Cranmer, M. (2023). Interpretable machine learning for science with PySR and SymbolicRegression.jl. arXiv preprint arXiv:2305.01582.

## License

This project is licensed under the GNU General Public License v3.0.

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/Robh96/Evopt",
    "name": "evopt",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.10",
    "maintainer_email": null,
    "keywords": "optimization, evolutionary, cmaes, calibration, simulation, fine-tuning, simple",
    "author": "Roberto Hart-Villamil",
    "author_email": "Roberto Hart-Villamil <rob.hartvillamil@gmail.com>",
    "download_url": "https://files.pythonhosted.org/packages/49/16/5a640ef2320f3a72dd7e364e3aa7b3b7a47e7ee3c3d903cbd6b1e0919179/evopt-0.14.3.tar.gz",
    "platform": null,
    "description": "# evopt\r\n### User Friendly Black-Box Numerical Optimization\r\n`evopt` is a package for efficient parameter optimization using the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) algorithm. It provides a user-friendly way to find the best set of parameters for a given problem, especially when the problem is complex, non-linear, and doesn't have easily calculable derivatives. It also includes PySR's symbolic regression engine to understand relationships between variables in the results.\r\n\r\n<div align=\"center\">\r\n  <img src=\"https://raw.githubusercontent.com/robh96/evopt/main/images/cover_img.png\" alt=\"Optimization of the two parameter Ackley function.\" width=\"800\">\r\n  <br>\r\n  <em>Optimization of the two parameter Ackley function.</em>\r\n</div>\r\n\r\n\r\n## Documentation\r\n\r\nComplete documentation is available at [evopt.readthedocs.io](https://evopt.readthedocs.io/en/latest/index.html).\r\n\r\n## Scope\r\n\r\n*   **Focus**: `evopt` provides a CMA-ES-based optimization routine that is easy to set up and use.\r\n*   **Parameter Optimization**: The package is designed for problems where you need to find the optimal values for a set of parameters.\r\n*   **Function-Value-Free Optimization**: It is designed to work without needing derivative information.\r\n*   **Directory Management**: The package includes robust directory management to organise results, checkpoints, and logs.\r\n*   **Logging**: It provides logging capabilities to track the optimization process.\r\n*   **Checkpointing**: It supports saving and loading checkpoints to resume interrupted optimization runs.\r\n*   **CSV Output**: It writes results and epoch data to CSV files for easy analysis.\r\n*   **Easy results plotting**: Simple pain-free methods to plot the results.\r\n*   **High Performance Computing**: It can leverage HPC resources for increased performance.\r\n\r\n## Key Advantages\r\n\r\n*   **Ease of Use**: Simple API for defining parameters, 
evaluator, and optimization settings.\r\n*   **Derivative-Free**: Works well for problems where derivatives are unavailable or difficult to compute.\r\n*   **Robustness**: CMA-ES is a powerful optimization algorithm that can handle non-convex and noisy problems.\r\n*   **Organization**: Automatic directory management and logging for easy tracking and analysis.\r\n\r\n## Installation\r\n\r\nYou can install the package using `pip`:\r\n\r\n```\r\npip install evopt\r\n```\r\n\r\n## Usage\r\n\r\nHere is an example of how to use the `evopt` package to optimize the Rosenbrock function:\r\n\r\n```python\r\nimport evopt\r\n\r\n# Define your parameters, their bounds, and evaluator function\r\nparams = {\r\n    'param1': (-5, 5),\r\n    'param2': (-5, 5),\r\n}\r\ndef evaluator(param_dict):\r\n    # Your evaluation logic here, in this case the Rosenbrock function\r\n    p1 = param_dict['param1']\r\n    p2 = param_dict['param2']\r\n    error = (1 - p1) ** 2 + 100*(p2 - p1 ** 2) ** 2\r\n    return error\r\n\r\n# Run the optimization using .optimize method\r\nresults = evopt.optimize(params, evaluator)\r\n```\r\n\r\nHere is the corresponding output:\r\n\r\n```terminal\r\nStarting new CMAES run in directory path\\to\\base\\dir\\evolve_0\r\nEpoch 0 | (1/16) | Params: [1.477, -2.369] | Error: 2069.985\r\nEpoch 0 | (2/16) | Params: [-2.644, -1.651] | Error: 7481.172\r\nEpoch 0 | (3/16) | Params: [0.763, -4.475] | Error: 2557.411\r\nEpoch 0 | (4/16) | Params: [4.269, -0.929] | Error: 36687.174\r\nEpoch 0 | (5/16) | Params: [-1.879, -4.211] | Error: 5999.711\r\nEpoch 0 | (6/16) | Params: [4.665, -2.186] | Error: 57374.982\r\nEpoch 0 | (7/16) | Params: [-1.969, -2.326] | Error: 3856.201\r\nEpoch 0 | (8/16) | Params: [-1.588, -3.167] | Error: 3244.840\r\nEpoch 0 | (9/16) | Params: [-2.191, -2.107] | Error: 4780.562\r\nEpoch 0 | (10/16) | Params: [2.632, -0.398] | Error: 5369.439\r\nEpoch 0 | (11/16) | Params: [-2.525, -1.427] | Error: 6099.094\r\nEpoch 0 | (12/16) | Params: [4.161, 
-2.418] | Error: 38955.920\r\nEpoch 0 | (13/16) | Params: [-0.435, -1.422] | Error: 261.646\r\nEpoch 0 | (14/16) | Params: [-0.008, -3.759] | Error: 1414.379\r\nEpoch 0 | (15/16) | Params: [-4.243, -0.564] | Error: 34496.083\r\nEpoch 0 | (16/16) | Params: [0.499, -3.170] | Error: 1169.217\r\nEpoch 0 | Mean Error: 13238.614 | Sigma Error: 17251.295\r\nEpoch 0 | Mean Parameters: [0.062, -2.286] | Sigma parameters: [2.663, 1.187]\r\nEpoch 0 | Normalised Sigma parameters: [1.065, 0.475]\r\n...\r\nEpoch 21 | Mean Error: 2.315 | Sigma Error: 0.454\r\nEpoch 21 | Mean Parameters: [-0.391, 0.192] | Sigma parameters: [0.140, 0.154]\r\nEpoch 21 | Normalised Sigma parameters: [0.056, 0.062]\r\nTerminating after meeting termination criteria at epoch 22.\r\n```\r\n\r\n```python\r\nprint(results.best_parameters)\r\n```\r\n```terminal\r\n{param1: -0.391, param2: 0.192}\r\n```\r\n\r\n\r\n## Multi-objective target optimization\r\nSometimes when using black-box functions like simulations, your result may be a specific variable such as mean pressure, temperature, or velocity. 
With `evopt` it is possible to specify a target value for the optimizer to reach, and in cases where targets are in conflict, you can specify `hard` or `soft` target preference such that the optimizer can weigh target priority.\r\n\r\nFor example:\r\n```python\r\nimport evopt\r\n\r\n# example black-box function\r\ndef example_eval(param_dict):\r\n    x1 = param_dict['x1']\r\n    x2 = param_dict['x2']\r\n    target1 = (1 - 2 * (x1 - 3))\r\n    target2 = x1 ** 2 + 1 + x2\r\n    return {'target1': target1, 'target2': target2}\r\n\r\n# define objectives\r\ntarget_dict={\r\n            \"target1\": {\"value\": (2.8), \"hard\": True},\r\n            \"target2\": {\"value\": (2.9), \"hard\": False},\r\n}\r\n\r\n# define free parameters (evaluated by black-box function)\r\nparams = {\r\n    \"x1\": (-5, 5),\r\n    \"x2\": (-5, 5),\r\n}\r\n\r\nresults = evopt.optimize(params, example_eval, target_dict=target_dict)\r\n```\r\n\r\nand corresponding output:\r\n```terminal\r\nStarting new CMAES run in directory path\\to\\base\\dir\\evolve_0\r\ntarget1: 100% of values outside [2.66e+00, 2.94e+00]\r\ntarget1: 16.10 | loss: 4.47e-01 | Hard: True | Constraint met: False\r\ntarget2: 100% of values outside [2.75e+00, 3.04e+00]\r\ntarget2: 23.90 | loss: 5.71e-01 | Hard: False | Constraint met: False\r\nEpoch 0 | (1/64) | Params: [-4.551, 2.191] | Error: 0.472\r\ntarget1: 100% of values outside [2.66e+00, 2.94e+00]\r\ntarget1: 15.94 | loss: 4.43e-01 | Hard: True | Constraint met: False\r\ntarget2: 100% of values outside [2.75e+00, 3.04e+00]\r\ntarget2: 23.39 | loss: 5.64e-01 | Hard: False | Constraint met: False\r\nEpoch 0 | (2/64) | Params: [-4.468, 2.431] | Error: 0.467\r\ntarget1: 100% of values outside [2.66e+00, 2.94e+00]\r\ntarget1: 15.39 | loss: 4.30e-01 | Hard: True | Constraint met: False\r\ntarget2: 100% of values outside [2.75e+00, 3.04e+00]\r\ntarget2: 21.51 | loss: 5.36e-01 | Hard: False | Constraint met: False\r\nEpoch 0 | (3/64) | Params: [-4.196, 2.901] | Error: 
0.452\r\n...\r\nEpoch 11 | Mean Error: 0.000 | Sigma Error: 0.000\r\nEpoch 11 | Mean Parameters: [2.105, -2.501] | Sigma parameters: [0.039, 0.202]\r\nEpoch 11 | Normalised Sigma parameters: [0.015, 0.081]\r\nTerminating after meeting termination criteria at epoch 12.\r\n```\r\nNote that verbosity can be controlled with verbose: bool option in evopt.optimize().\r\n\r\n## Keywords for `optimize()` Function\r\n\r\nThe `evopt.optimize()` function takes several keyword arguments to control the optimization process:\r\n\r\n*   `params (dict)`: A dictionary defining the parameters to optimize. Keys are parameter names, and values are tuples of `(min, max)` bounds.\r\n*   `evaluator (Callable)`: A callable (usually a function) that evaluates the parameters and returns an error value. This function is the core of your optimization problem.\r\n*   `optimizer (str, optional)`: The optimization algorithm to use. Currently, only 'cmaes' (Covariance Matrix Adaptation Evolution Strategy) is supported. Defaults to `'cmaes'`.\r\n*   `base_dir (str, optional)`: The base directory where the optimization results (checkpoints, logs, CSV files) will be stored. If not specified, it defaults to the current working directory.\r\n*   `dir_id (int, optional)`: A specific directory ID for the optimization run. If provided, the results will be stored in base_dir/evolve_{dir_id}. If not provided, a new unique ID will be generated automatically.\r\n*   `sigma_threshold (float, optional)`: The threshold for the sigma values (step size) of the CMA-ES algorithm. The optimization will terminate when all sigma values are below this threshold, indicating convergence. Defaults to `0.1`.\r\n*   `batch_size (int, optional)`: The number of solutions to evaluate in each epoch (generation) of the CMA-ES algorithm. A larger batch size can speed up the optimization but may require more computational resources. Defaults to `16`.\r\n*   `start_epoch (int, optional)`: The epoch number to start from. 
This is useful for resuming an interrupted optimization run from a checkpoint. Defaults to `None`.\r\n*   `verbose (bool, optional)`: Whether to print detailed information about the optimization process to the console. If `True`, the optimization will print information about each epoch and solution. Defaults to `True`.\r\n*   `num_epochs (int, optional)`: The maximum number of epochs to run the optimization for. If specified, the optimization will terminate after this number of epochs, even if the convergence criteria (`sigma_threshold`) has not been met. If None, the optimization will run until the convergence criteria is met. Defaults to `None`.\r\n*   `max_workers (int, optional)`: The number of multi-processing workers to operate concurrently. Defaults to 1. Each worker operates on a different processor.\r\n*   `rand_seed (int, optional)`: Specify the deterministic seed.\r\n*   `hpc_cores_per_worker (int, optional)`: Number of CPU cores to allocate per HPC worker.\r\n*   `hpc_memory_gb_per_worker (int)`: Memory in GB to allocate per worker on the HPC.\r\n*   `hpc_wall_time (str)`: Wall time limit for each HPC worker, must be in the format \"DD:HH:MM:SS\" or \"HH:MM:SS\".\r\n*   `hpc_qos (str)`: Quality of Service for HPC jobs.\r\n\r\n\r\n## Plotting convergence\r\n\r\n`Evopt` provides an overview of the convergence for each parameter over the epochs, through the `evopt.Plotting.plot_epochs()` method.\r\n\r\n```python\r\n# path to your evolve folder that contains epochs.csv and results.csv\r\nevolve_dir = r\"path\\to\\base\\dir\\evolve_0\" \r\nevopt.Plotting.plot_epochs(evolve_dir_path=evolve_dir)\r\n```\r\n**Output:**\r\n\r\n<div align=\"center\">\r\n  <img src=\"https://raw.githubusercontent.com/robh96/evopt/main/images/convergence_plots.png\" alt=\"Error convergence.\" width=\"800\">\r\n  <br>\r\n  <em>Convergence plots displaying error, parameters, targets, and normalised standard-deviation of the solution (normalised sigma) as a function of the number of 
epochs.</em>\r\n</div>\r\n<br>\r\n\r\n## Plotting variables\r\n\r\n`Evopt` also supports hassle free plotting of 1-D, 2-D, 3-D, and even 4-D results data using the same method: `evopt.Plotting.plot_vars()`. Simply specify the `Evolve_{dir_id}` file directory and the columns of the results.csv file you want to plot. By default the figures will save to `Evolve_{dir_id}\\figures`.\r\n\r\n\r\n### 2-D example (simple xy plot):\r\n\r\n```python\r\nevopt.Plotting.plot_vars(evolve_dir_path=evolve_dir, x=\"x1\", y=\"error\")\r\n```\r\n**Output:**\r\n\r\n<div align=\"center\">\r\n  <img src=\"https://raw.githubusercontent.com/robh96/evopt/main/images/x1_vs_error.png\" alt=\"Parameter versus error.\" width=\"400\">\r\n  <br>\r\n  <em>Scatter plot showing parameter versus error. The axis handle is returned to the user for any modifications.</em>\r\n</div>\r\n<br>\r\n\r\n### 2-D example (Voronoi plot):\r\n\r\n```python\r\nevopt.Plotting.plot_vars(evolve_dir_path=evolve_dir, x=\"x1\", y=\"x2\", cval=\"error\")\r\n```\r\n**Output:**\r\n\r\n<div align=\"center\">\r\n  <img src=\"https://raw.githubusercontent.com/robh96/evopt/main/images/x1_vs_x2_vs_error_Voronoi.png\" alt=\"Parameters versus error Voronoi plot.\" width=\"400\">\r\n  <br>\r\n  <em>2-D Voronoi plot illustrating parameters versus error. Each cell contains a single solution, with cell line is equidistant between points on either size. In this sense the plot conveys the exploration/explotation nature of the evolutionary algorithm as it hones in on the global optimum. 
The axis handle is returned to the user for further modification.
  </em>
</div>
<br>


### 4-D example (interactive HTML surface plot with colour)
```python
evopt.Plotting.plot_vars(evolve_dir_path=evolve_dir, x="x1", y="x2", z="error", cval="epoch")
```
**Output:**

<div align="center">
  <img src="https://raw.githubusercontent.com/robh96/evopt/main/images/x1_vs_x2_vs_error_vs_epoch_surface.png" alt="Parameters versus error coloured by epoch 3-D surface plot." width="400">
  <br>
  <em>3-D surface plot of the parameters versus the error values, coloured by epoch. As is the nature of convergent optimization, the latest epochs show the lowest error values.
  </em>
</div>
<br>

## Directory Structure

When you run an optimization with `evopt`, it creates the following directory structure to organise the results. Each call to the evaluation function operates in its own solution directory, so files can be created with relative paths rather than absolute ones. For example:
```python
def evaluator(dict_params: dict) -> float:
    ...
    # Relative paths resolve inside this solution's directory.
    with open("your_file.txt", "a") as f:
        f.write(str(error))
    ...
    return error
```
This would create a file `your_file.txt` in each solution folder:

```
base_directory/
└── evolve_{dir_id}/
    ├── epochs/
    │   ├── epoch0000/
    │   │   ├── solution0000/
    │   │   │   └── your_file.txt
    │   │   ├── solution0001/
    │   │   │   └── your_file.txt
    │   │   └── ...
    │   ├── epoch0001/
    │   │   └── ...
    │   └── ...
    ├── checkpoints/
    │   ├── checkpoint_epoch0000.pkl
    │   ├── checkpoint_epoch0001.pkl
    │   └── ...
    ├── logs/
    │   └── logfile.log
    ├── epochs.csv
    └── results.csv
```

*   `base_directory`: The base directory where the optimization runs are stored. If not specified, it defaults to the current working directory.
*   `evolve_{dir_id}`: Each optimization run gets its own directory named `evolve_{dir_id}`, where `dir_id` is a unique integer.
*   `epochs`: This directory contains subdirectories for each epoch of the optimization.
*   `epoch####`: Each epoch directory contains subdirectories for each solution evaluated in that epoch. Epoch folders are only created if their solution folders contain files.
*   `solution####`: Each solution directory can contain files generated by the evaluator function for that specific solution. Solution folders are only created if files are written during an evaluation.
*   `checkpoints`: This directory stores checkpoint files, allowing you to resume interrupted optimization runs.
*   `logs`: This directory contains the log file (`logfile.log`), which captures the output of the optimization process.
*   `epochs.csv`: This file contains summary statistics for each epoch, such as mean error, parameter values, and sigma values.
*   `results.csv`: This file contains the results for each solution evaluated during the optimization, including parameter values and the corresponding error.

## Citing
If you publish research making use of this library, we encourage you to cite this repository:
> Hart-Villamil, R. (2024). Evopt, simple but powerful gradient-free numerical optimization.

This library makes fundamental use of the `pycma` implementation of the state-of-the-art CMA-ES algorithm.
Hence we kindly ask that research using this library cites:
> Nikolaus Hansen, Youhei Akimoto, and Petr Baudis. CMA-ES/pycma on Github. Zenodo, DOI:10.5281/zenodo.2559634, February 2019.

This work was also inspired by ACCES, a package for derivative-free numerical optimization designed for simulations.
> Nicusan, A., Werner, D., Sykes, J. A., Seville, J., & Windows-Yule, K. (2022). ACCES: Autonomous Characterisation and Calibration via Evolutionary Simulation (Version 0.2.0) [Computer software]

The symbolic regression functionality of this package is built upon PySR. We ask that research cites:
> Cranmer, M. (2023). Interpretable machine learning for science with PySR and SymbolicRegression.jl. arXiv preprint arXiv:2305.01582.

## License

This project is licensed under the GNU General Public License v3.0.
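Because `epochs.csv` and `results.csv` are plain CSV files, they can be inspected with standard tools such as pandas after (or during) a run. The sketch below uses a small in-memory stand-in for `results.csv`; the column names (`x1`, `x2`, `error`) are assumptions chosen for illustration, not a documented schema — check the header of your own `results.csv` for the actual columns.

```python
import pandas as pd

# Toy stand-in for an evopt results.csv. Column names are assumptions:
# per-solution parameter values plus the corresponding "error" column.
results = pd.DataFrame({
    "epoch": [0, 0, 1, 1],
    "x1":    [1.2, 0.8, 0.5, 0.1],
    "x2":    [-0.3, 0.4, 0.2, 0.05],
    "error": [3.1, 2.4, 1.1, 0.2],
})

# The row with the lowest error is the best solution found so far.
best = results.loc[results["error"].idxmin()]
print(best["x1"], best["x2"], best["error"])
```

In a real run you would load the file directly, e.g. `pd.read_csv("evolve_0/results.csv")` (the `evolve_0` path here is illustrative; use your run's `evolve_{dir_id}` directory).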
    "bugtrack_url": null,
    "license": "GNU v3.0",
    "summary": "User Friendly Data-Driven Numerical Optimization",
    "version": "0.14.3",
    "project_urls": {
        "Homepage": "https://github.com/Robh96/Evopt"
    },
    "split_keywords": [
        "optimization",
        " evolutionary",
        " cmaes",
        " calibration",
        " simulation",
        " fine-tuning",
        " simple"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "456cf348194df6bdf38012f79d242ad0578e51f471f63808f1caa0f6c8ec669f",
                "md5": "ddddd0d1cbd521c114477e5008e69e3f",
                "sha256": "3bd2d3025c8ffae986c66616eb96743669d540a8c0b2c1b4ef8d21e91194bc28"
            },
            "downloads": -1,
            "filename": "evopt-0.14.3-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "ddddd0d1cbd521c114477e5008e69e3f",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.10",
            "size": 69742,
            "upload_time": "2025-03-18T21:03:08",
            "upload_time_iso_8601": "2025-03-18T21:03:08.467537Z",
            "url": "https://files.pythonhosted.org/packages/45/6c/f348194df6bdf38012f79d242ad0578e51f471f63808f1caa0f6c8ec669f/evopt-0.14.3-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "49165a640ef2320f3a72dd7e364e3aa7b3b7a47e7ee3c3d903cbd6b1e0919179",
                "md5": "1e73524b6fc50946b393fb7d125cfeec",
                "sha256": "3718fd12a2aa676357cd8cf790fbb35e789b73d8382874a39a1e9297269c2b44"
            },
            "downloads": -1,
            "filename": "evopt-0.14.3.tar.gz",
            "has_sig": false,
            "md5_digest": "1e73524b6fc50946b393fb7d125cfeec",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.10",
            "size": 1358038,
            "upload_time": "2025-03-18T21:03:10",
            "upload_time_iso_8601": "2025-03-18T21:03:10.534028Z",
            "url": "https://files.pythonhosted.org/packages/49/16/5a640ef2320f3a72dd7e364e3aa7b3b7a47e7ee3c3d903cbd6b1e0919179/evopt-0.14.3.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-03-18 21:03:10",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "Robh96",
    "github_project": "Evopt",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "requirements": [
        {
            "name": "numpy",
            "specs": [
                [
                    ">=",
                    "2.2.1"
                ]
            ]
        },
        {
            "name": "pandas",
            "specs": [
                [
                    ">=",
                    "2.2.3"
                ]
            ]
        },
        {
            "name": "cma",
            "specs": [
                [
                    ">=",
                    "4.0.0"
                ]
            ]
        },
        {
            "name": "cloudpickle",
            "specs": [
                [
                    ">=",
                    "3.1.1"
                ]
            ]
        },
        {
            "name": "scipy",
            "specs": [
                [
                    ">=",
                    "1.15.0"
                ]
            ]
        },
        {
            "name": "plotly",
            "specs": [
                [
                    ">=",
                    "5.24.1"
                ]
            ]
        },
        {
            "name": "matplotlib",
            "specs": [
                [
                    ">=",
                    "3.10.0"
                ]
            ]
        },
        {
            "name": "pysr",
            "specs": [
                [
                    ">=",
                    "1.5.2"
                ]
            ]
        }
    ],
    "lcname": "evopt"
}
        