<p align="center">
<br>
<a href="https://github.com/SimonBlanke/Gradient-Free-Optimizers"><img src="./docs/images/gradient_logo_ink.png" height="280"></a>
<br>
</p>
<br>
---
<h2 align="center">
Simple and reliable optimization with local, global, population-based and sequential techniques in numerical discrete search spaces.
</h2>
<br>
<table>
<tbody>
<tr align="left" valign="center">
<td>
<strong>Master status:</strong>
</td>
<td>
<a href="https://github.com/SimonBlanke/Gradient-Free-Optimizers/actions">
<img src="https://github.com/SimonBlanke/Gradient-Free-Optimizers/actions/workflows/tests.yml/badge.svg?branch=master" alt="img not loaded: try F5 :)">
</a>
<a href="https://app.codecov.io/gh/SimonBlanke/Gradient-Free-Optimizers">
<img src="https://img.shields.io/codecov/c/github/SimonBlanke/Gradient-Free-Optimizers/master" alt="img not loaded: try F5 :)">
</a>
</td>
</tr>
<tr/>
<tr align="left" valign="center">
<td>
<strong>Code quality:</strong>
</td>
<td>
<a href="https://codeclimate.com/github/SimonBlanke/Gradient-Free-Optimizers">
<img src="https://img.shields.io/codeclimate/maintainability/SimonBlanke/Gradient-Free-Optimizers?style=flat-square&logo=code-climate" alt="img not loaded: try F5 :)">
</a>
<a href="https://scrutinizer-ci.com/g/SimonBlanke/Gradient-Free-Optimizers/">
<img src="https://img.shields.io/scrutinizer/quality/g/SimonBlanke/Gradient-Free-Optimizers?style=flat-square&logo=scrutinizer-ci" alt="img not loaded: try F5 :)">
</a>
</td>
</tr>
<tr/> <tr align="left" valign="center">
<td>
<strong>Latest versions:</strong>
</td>
<td>
<a href="https://pypi.org/project/gradient_free_optimizers/">
<img src="https://img.shields.io/pypi/v/Gradient-Free-Optimizers?style=flat-square&logo=PyPi&logoColor=white&color=blue" alt="img not loaded: try F5 :)">
</a>
</td>
</tr>
</tbody>
</table>
<br>
## Introduction
Gradient-Free-Optimizers provides a collection of easy-to-use optimization techniques
whose objective functions only need to return an arbitrary score that gets maximized.
This makes gradient-free methods capable of solving various optimization problems, including:
- Optimizing arbitrary mathematical functions.
- Fitting multiple Gaussian distributions to data.
- Hyperparameter optimization of machine-learning methods.
Gradient-Free-Optimizers is the optimization backend of <a href="https://github.com/SimonBlanke/Hyperactive">Hyperactive</a> (in v3.0.0 and higher) but it can also be used by itself as a leaner and simpler optimization toolkit.
<br>
---
<div align="center"><a name="menu"></a>
<h3>
<a href="https://github.com/SimonBlanke/Gradient-Free-Optimizers#optimization-algorithms">Optimization algorithms</a> •
<a href="https://github.com/SimonBlanke/Gradient-Free-Optimizers#installation">Installation</a> •
<a href="https://github.com/SimonBlanke/Gradient-Free-Optimizers#examples">Examples</a> •
<a href="https://simonblanke.github.io/gradient-free-optimizers-documentation">API reference</a> •
<a href="https://github.com/SimonBlanke/Gradient-Free-Optimizers#roadmap">Roadmap</a>
</h3>
</div>
---
<br>
## Main features
- Easy to use:
<details>
<summary><b> Simple API-design</b></summary>
<br>
You can optimize anything that can be defined in a Python function, for example a simple parabola function:
```python
def objective_function(para):
    score = para["x"] * para["x"]
    return -score
```
Define where to search via numpy ranges:
```python
import numpy as np

search_space = {
    "x": np.arange(0, 5, 0.1),
}
```
That's all the information the algorithm needs to search for the maximum of the objective function:
```python
from gradient_free_optimizers import RandomSearchOptimizer
opt = RandomSearchOptimizer(search_space)
opt.search(objective_function, n_iter=100000)
```
</details>
<details>
<summary><b> Receive prepared information about ongoing and finished optimization runs</b></summary>
<br>
During the optimization you will receive ongoing information in a progress bar:
- current best score
- the position in the search space of the current best score
- the iteration when the current best score was found
- other information about the progress native to tqdm
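
After the run finishes, the same information can also be accessed directly on the optimizer object. A minimal sketch, assuming the attributes `best_para`, `best_score` and `search_data` (as exposed by recent versions; check the API reference for your version):

```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer


def objective_function(para):
    return -para["x"] * para["x"]


search_space = {"x": np.arange(0, 5, 0.1)}

opt = RandomSearchOptimizer(search_space)
opt.search(objective_function, n_iter=1000)

print(opt.best_para)    # position of the best score, e.g. {"x": 0.0}
print(opt.best_score)   # best score found during the run
print(opt.search_data)  # pandas DataFrame with one row per evaluation
```
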
</details>
- High performance:
<details>
<summary><b> Modern optimization techniques</b></summary>
<br>
Gradient-Free-Optimizers provides not just meta-heuristic optimization methods but also sequential model-based optimizers like Bayesian optimization, which deliver good results for expensive objective functions like deep-learning models.
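
As a usage sketch (the `BayesianOptimizer` class shares the same interface as the other optimizers; parameter defaults may differ between versions):

```python
import numpy as np
from gradient_free_optimizers import BayesianOptimizer


def expensive_objective(para):
    # stand-in for an expensive evaluation, e.g. training a model
    return -((para["x"] - 2) ** 2)


search_space = {"x": np.arange(-5, 5, 0.1)}

opt = BayesianOptimizer(search_space)
opt.search(expensive_objective, n_iter=30)  # few iterations, since each evaluation is assumed to be costly
```
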
</details>
<details>
<summary><b> Lightweight backend</b></summary>
<br>
Even for the very simple parabola function, the optimization time is about 60% of the entire iteration time when optimizing with random search. This shows that (despite all its features) Gradient-Free-Optimizers has an efficient optimization backend without any unnecessary slowdown.
</details>
<details>
<summary><b> Save time with memory dictionary</b></summary>
<br>
By default, Gradient-Free-Optimizers looks up the current position in a memory dictionary before evaluating the objective function.
- If the position is not in the dictionary, the objective function is evaluated and the position and score are saved in the dictionary.
- If a position is already saved in the dictionary, Gradient-Free-Optimizers just extracts the score from it instead of evaluating the objective function. This avoids re-evaluating computationally expensive objective functions (machine- or deep-learning models) and therefore saves time. A usage sketch follows below.
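
A minimal sketch of how the memory can be controlled, assuming the `memory` and `memory_warm_start` arguments of `search()` (names as in recent versions; check the API reference for your version):

```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer


def objective_function(para):
    return -para["x"] * para["x"]


search_space = {"x": np.arange(-10, 10, 0.1)}

opt = RandomSearchOptimizer(search_space)
# memory=True (the default) caches position -> score lookups during the run
opt.search(objective_function, n_iter=1000, memory=True)

# a second run can be warm-started with the collected search data,
# so positions that were already evaluated are not evaluated again
opt_warm = RandomSearchOptimizer(search_space)
opt_warm.search(objective_function, n_iter=1000, memory_warm_start=opt.search_data)
```
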
</details>
- High reliability:
<details>
<summary><b> Extensive testing</b></summary>
<br>
Gradient-Free-Optimizers is extensively tested with more than 400 tests in 2500 lines of test code. This includes the testing of:
- Each optimization algorithm
- Each optimization parameter
- All attributes that are part of the public API
</details>
<details>
<summary><b> Performance test for each optimizer</b></summary>
<br>
Each optimization algorithm must perform above a certain threshold to be included. Poorly performing algorithms are reworked or scrapped.
</details>
<br>
## Optimization algorithms
Gradient-Free-Optimizers supports a variety of optimization algorithms, which can make choosing the right algorithm a tedious endeavor. The gifs in this section give a visual representation of how the different optimization algorithms explore the search space and exploit the collected information about it for a convex and a non-convex objective function. More detailed explanations of all optimization algorithms can be found in the [official documentation](https://simonblanke.github.io/gradient-free-optimizers-documentation).
<br>
### Local Optimization
<details>
<summary><b>Hill Climbing</b></summary>
<br>
Evaluates the score of n neighbours in an epsilon environment and moves to the best one.
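
A usage sketch with the tuning parameters this description refers to (`epsilon` and `n_neighbours` are assumed parameter names; see the API reference for the exact signature):

```python
import numpy as np
from gradient_free_optimizers import HillClimbingOptimizer


def objective_function(para):
    return -para["x"] * para["x"]


search_space = {"x": np.arange(-10, 10, 0.1)}

opt = HillClimbingOptimizer(
    search_space,
    epsilon=0.05,    # size of the neighbourhood around the current position
    n_neighbours=3,  # number of neighbours evaluated per iteration
)
opt.search(objective_function, n_iter=100)
```
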
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/hill_climbing_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/hill_climbing_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Stochastic Hill Climbing</b></summary>
<br>
Adds a probability to the hill climbing to move to a worse position in the search-space to escape local optima.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/stochastic_hill_climbing_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/stochastic_hill_climbing_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Repulsing Hill Climbing</b></summary>
<br>
Hill climbing algorithm that increases epsilon by a factor if no better neighbour was found.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/repulsing_hill_climbing_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/repulsing_hill_climbing_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Simulated Annealing</b></summary>
<br>
Adds a probability to the hill climbing to move to a worse position in the search-space to escape local optima, with this probability decreasing over time.
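
The acceptance of worse positions typically follows the Metropolis criterion; a simplified sketch of the general mechanism (illustrative only, not Gradient-Free-Optimizers' exact implementation):

```python
import math
import random


def accept_move(score_new, score_current, temperature):
    # better positions are always accepted; worse positions are accepted
    # with a probability that shrinks as the temperature decreases
    if score_new >= score_current:
        return True
    return random.random() < math.exp((score_new - score_current) / temperature)

# the temperature is usually multiplied by an annealing rate each iteration
# (e.g. temperature *= 0.97), so the escape probability decreases over time
```
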
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/simulated_annealing_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/simulated_annealing_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Downhill Simplex Optimization</b></summary>
<br>
Constructs a simplex from multiple positions that moves through the search-space by reflecting, expanding, contracting or shrinking.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/downhill_simplex_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/downhill_simplex_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<br>
### Global Optimization
<details>
<summary><b>Random Search</b></summary>
<br>
Moves to random positions in each iteration.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/random_search_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/random_search_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Grid Search</b></summary>
<br>
Grid-search that moves diagonally through the search-space (with step-size=1), starting from a corner.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/grid_search_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/grid_search_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Random Restart Hill Climbing</b></summary>
<br>
Hill climbing that moves to a random position after n iterations.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/random_restart_hill_climbing_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/random_restart_hill_climbing_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Random Annealing</b></summary>
<br>
Hill climbing with a large epsilon at the start of the search that decreases over time.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/random_annealing_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/random_annealing_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Pattern Search</b></summary>
<br>
Creates a cross-shaped collection of positions that moves through the search-space, either moving as a whole towards optima or shrinking the cross.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/pattern_search_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/pattern_search_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Powell's Method</b></summary>
<br>
Optimizes one search-space dimension at a time with a hill-climbing algorithm.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/powells_method_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/powells_method_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<br>
### Population-Based Optimization
<details>
<summary><b>Parallel Tempering</b></summary>
<br>
Population of n simulated annealers, which occasionally swap transition probabilities.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/parallel_tempering_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/parallel_tempering_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Particle Swarm Optimization</b></summary>
<br>
Population of n particles attracting each other and moving towards the best particle.
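
A usage sketch showing typical swarm parameters (`population`, `inertia`, `cognitive_weight` and `social_weight` are assumed parameter names; check the API reference for the exact signature):

```python
import numpy as np
from gradient_free_optimizers import ParticleSwarmOptimizer


def objective_function(para):
    return -(para["x1"] ** 2 + para["x2"] ** 2)


search_space = {
    "x1": np.arange(-10, 10, 0.1),
    "x2": np.arange(-10, 10, 0.1),
}

opt = ParticleSwarmOptimizer(
    search_space,
    population=10,         # number of particles
    inertia=0.5,           # how much of its velocity each particle keeps
    cognitive_weight=0.5,  # attraction to the particle's own best position
    social_weight=0.5,     # attraction to the swarm's best position
)
opt.search(objective_function, n_iter=300)
```
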
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/particle_swarm_optimization_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/particle_swarm_optimization_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Spiral Optimization</b></summary>
<br>
Population of n particles moving in a spiral pattern around the best position.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/spiral_optimization_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/spiral_optimization_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Evolution Strategy</b></summary>
<br>
Population of n hill climbers occasionally mixing positional information and removing the worst positions from the population.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/evolution_strategy_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/evolution_strategy_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<br>
### Sequential Model-Based Optimization
<details>
<summary><b>Bayesian Optimization</b></summary>
<br>
Gaussian process fitting to explored positions and predicting promising new positions.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/bayesian_optimization_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/bayesian_optimization_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Lipschitz Optimization</b></summary>
<br>
Calculates an upper bound from the distances of the previously explored positions to find new promising positions.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/lipschitz_optimizer_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/lipschitz_optimizer_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>DIRECT algorithm</b></summary>
<br>
Separates the search space into subspaces. It evaluates the center position of each subspace to decide which subspace to separate further.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/direct_algorithm_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/direct_algorithm_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Tree of Parzen Estimators</b></summary>
<br>
Kernel density estimators fitting to good and bad explored positions and predicting promising new positions.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/tree_structured_parzen_estimators_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/tree_structured_parzen_estimators_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<details>
<summary><b>Forest Optimizer</b></summary>
<br>
Ensemble of decision trees fitting to explored positions and predicting promising new positions.
<br>
<table style="width:100%">
<tr>
<th> <b>Convex Function</b> </th>
<th> <b>Non-convex Function</b> </th>
</tr>
<tr>
<td> <img src="./docs/gifs/forest_optimization_sphere_function_.gif" width="100%"> </td>
<td> <img src="./docs/gifs/forest_optimization_ackley_function_.gif" width="100%"> </td>
</tr>
</table>
</details>
<br>
## Side Projects and Tools
The following packages are designed to support Gradient-Free-Optimizers and expand its use cases.
| Package | Description |
|-------------------------------------------------------------------------------|--------------------------------------------------------------------------------------|
| [Search-Data-Collector](https://github.com/SimonBlanke/search-data-collector) | Simple tool to save search-data during or after the optimization run into csv-files. |
| [Search-Data-Explorer](https://github.com/SimonBlanke/search-data-explorer) | Visualize search-data with plotly inside a streamlit dashboard. |
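
Even without these packages, the collected search data can be exported directly; a minimal sketch, assuming `opt.search_data` is a pandas DataFrame (as in recent versions):

```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

search_space = {"x": np.arange(-10, 10, 0.1)}

opt = RandomSearchOptimizer(search_space)
opt.search(lambda para: -para["x"] ** 2, n_iter=100)

# write every evaluated position and score to a csv file
opt.search_data.to_csv("search_data.csv", index=False)
```
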
If you want news about Gradient-Free-Optimizers and related projects, you can follow me on [twitter](https://twitter.com/blanke_simon).
<br>
## Installation
[![PyPI version](https://badge.fury.io/py/gradient-free-optimizers.svg)](https://badge.fury.io/py/gradient-free-optimizers)
The most recent version of Gradient-Free-Optimizers is available on PyPI:
```console
pip install gradient-free-optimizers
```
<br>
## Examples
<details>
<summary><b>Convex function</b></summary>
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer
def parabola_function(para):
    loss = para["x"] * para["x"]
    return -loss
search_space = {"x": np.arange(-10, 10, 0.1)}
opt = RandomSearchOptimizer(search_space)
opt.search(parabola_function, n_iter=100000)
```
</details>
<details>
<summary><b>Non-convex function</b></summary>
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer
def ackley_function(pos_new):
    x = pos_new["x1"]
    y = pos_new["x2"]

    a1 = -20 * np.exp(-0.2 * np.sqrt(0.5 * (x * x + y * y)))
    a2 = -np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
    score = a1 + a2 + 20
    return -score


search_space = {
    "x1": np.arange(-100, 101, 0.1),
    "x2": np.arange(-100, 101, 0.1),
}
opt = RandomSearchOptimizer(search_space)
opt.search(ackley_function, n_iter=30000)
```
</details>
<details>
<summary><b>Machine learning example</b></summary>
```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import load_wine
from gradient_free_optimizers import HillClimbingOptimizer
data = load_wine()
X, y = data.data, data.target
def model(para):
    gbc = GradientBoostingClassifier(
        n_estimators=para["n_estimators"],
        max_depth=para["max_depth"],
        min_samples_split=para["min_samples_split"],
        min_samples_leaf=para["min_samples_leaf"],
    )
    scores = cross_val_score(gbc, X, y, cv=3)
    return scores.mean()


search_space = {
    "n_estimators": np.arange(20, 120, 1),
    "max_depth": np.arange(2, 12, 1),
    "min_samples_split": np.arange(2, 12, 1),
    "min_samples_leaf": np.arange(1, 12, 1),
}
opt = HillClimbingOptimizer(search_space)
opt.search(model, n_iter=50)
```
</details>
<details>
<summary><b>Constrained Optimization example</b></summary>
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer
def convex_function(pos_new):
    score = -(pos_new["x1"] * pos_new["x1"] + pos_new["x2"] * pos_new["x2"])
    return score


search_space = {
    "x1": np.arange(-100, 101, 0.1),
    "x2": np.arange(-100, 101, 0.1),
}


def constraint_1(para):
    # only values in 'x1' higher than -5 are valid
    return para["x1"] > -5
# put one or more constraints inside a list
constraints_list = [constraint_1]
# pass list of constraints to the optimizer
opt = RandomSearchOptimizer(search_space, constraints=constraints_list)
opt.search(convex_function, n_iter=50)
search_data = opt.search_data
# the search-data does not contain any samples where x1 is equal to or below -5
print("\n search_data \n", search_data, "\n")
```
</details>
<br>
## Roadmap
<details>
<summary><b>v0.3.0</b> :heavy_check_mark:</summary>
- [x] add sampling parameter to Bayesian optimizer
- [x] add warnings parameter to Bayesian optimizer
- [x] improve access to parameters of optimizers within population-based-optimizers (e.g. annealing rate of simulated annealing population in parallel tempering)
</details>
<details>
<summary><b>v0.4.0</b> :heavy_check_mark:</summary>
- [x] add early stopping parameter
</details>
<details>
<summary><b>v0.5.0</b> :heavy_check_mark:</summary>
- [x] add grid-search to optimizers
- [x] improved performance testing for optimizers
</details>
<details>
<summary><b>v1.0.0</b> :heavy_check_mark:</summary>
- [x] Finalize API (1.0.0)
- [x] add Downhill-simplex algorithm to optimizers
- [x] add Pattern search to optimizers
- [x] add Powell's method to optimizers
- [x] add parallel random annealing to optimizers
- [x] add ensemble-optimizer to optimizers
</details>
<details>
<summary><b>v1.1.0</b> :heavy_check_mark:</summary>
- [x] add Spiral Optimization
- [x] add Lipschitz Optimizer
- [x] print the random seed for reproducibility
</details>
<details>
<summary><b>v1.2.0</b> :heavy_check_mark:</summary>
- [x] add DIRECT algorithm
- [x] automatically add random initial positions if necessary (often requested)
</details>
<details>
<summary><b>v1.3.0</b> :heavy_check_mark:</summary>
- [x] add support for constrained optimization
</details>
<details>
<summary><b>v1.4.0</b> :heavy_check_mark:</summary>
- [x] add Grid search parameter that changes direction of search
- [x] add SMBO parameter that enables sampling without replacement
</details>
<details>
<summary><b>Future releases</b> </summary>
- [ ] add Ant-colony optimization
- [ ] add API, testing and doc to (better) use GFO as backend-optimization package
- [ ] add Random search parameter that enables sampling without replacement
- [ ] add other acquisition functions to smbo (Probability of improvement, Entropy search, ...)
</details>
<br>
## Gradient Free Optimizers <=> Hyperactive
Gradient-Free-Optimizers was created as the optimization backend of the [Hyperactive package](https://github.com/SimonBlanke/Hyperactive). Therefore the algorithms are exactly the same in both packages and deliver the same results.
However, you can still use Gradient-Free-Optimizers as a standalone package.
The separation of Gradient-Free-Optimizers from Hyperactive brings multiple advantages:
- Even easier to use than Hyperactive
- Separate and more thorough testing
- Other developers can easily use GFO as an optimization backend if desired
- Better isolation from the complex information flow in Hyperactive. GFO only uses positions and scores in an N-dimensional search-space and returns only the new position after each iteration.
- A smaller and cleaner code base, if you want to explore my implementation of these optimization techniques.
While Gradient-Free-Optimizers is relatively simple, Hyperactive is a more complex project with additional features. The differences between Gradient-Free-Optimizers and Hyperactive are listed in the following table:
<table>
<tr>
<th> </th>
<th>Gradient-Free-Optimizers</th>
<th>Hyperactive</th>
</tr>
<tr>
<td> Search space composition </td>
<td> only numerical </td>
<td> numbers, strings and functions </td>
</tr>
<tr>
<td> Parallel Computing </td>
<td> not supported </td>
<td> yes, via multiprocessing or joblib </td>
</tr>
<tr>
<td> Distributed computing </td>
<td> not supported</td>
<td> yes, via data sharing at runtime</td>
</tr>
<tr>
<td> Visualization </td>
<td> via Search-Data-Explorer</td>
<td> via Search-Data-Explorer and Progress Board</td>
</tr>
</table>
<br>
## Citation
    @Misc{gfo2020,
      author       = {{Simon Blanke}},
      title        = {{Gradient-Free-Optimizers}: Simple and reliable optimization with local, global, population-based and sequential techniques in numerical search spaces.},
      howpublished = {\url{https://github.com/SimonBlanke}},
      year         = {since 2020}
    }
<br>
## License
Gradient-Free-Optimizers is licensed under the following license:
[![LICENSE](https://img.shields.io/github/license/SimonBlanke/Gradient-Free-Optimizers?style=for-the-badge)](https://github.com/SimonBlanke/Gradient-Free-Optimizers/blob/master/LICENSE)