# optimization-benchmarks
[PyPI](https://pypi.org/project/optimization-benchmarks/) · [Python 3.8+](https://www.python.org/downloads/) · [MIT License](https://opensource.org/licenses/MIT)
A comprehensive Python package providing 50+ classical mathematical benchmark functions for testing and evaluating optimization algorithms.
## 🎯 Features
- **50+ Standard Benchmark Functions**: Including Ackley, Rastrigin, Rosenbrock, Griewank, and many more
- **Vectorized NumPy Implementation**: Fast and efficient computation
- **Well-Documented**: Each function includes domain constraints and global minima
- **Type Hints**: Full type annotation support
- **Command-Line Interface**: Evaluate functions directly from the terminal
- **Minimal Dependencies**: Requires only NumPy
- **Academic Citations**: Properly cited mathematical formulations
## 📦 Installation
### From PyPI
```bash
pip install optimization-benchmarks
```
### From Source
```bash
git clone https://github.com/ak-rahul/optimization-benchmarks.git
cd optimization-benchmarks
pip install -e .
```
---
## 🚀 Quick Start
```python
import numpy as np
from optimization_benchmarks import ackley, rastrigin, rosenbrock

# Ackley attains its global minimum of 0 at the origin
x = np.zeros(5)
result = ackley(x)
print(f"Ackley(0) = {result}")  # Should be close to 0

# Rosenbrock attains its global minimum of 0 at (1, ..., 1)
x = np.ones(10)
result = rosenbrock(x)
print(f"Rosenbrock(1) = {result}")  # Should be 0

# Evaluate at a random point
x = np.random.randn(5)
result = rastrigin(x)
print(f"Rastrigin(x) = {result}")
```
---
## 📊 Usage Examples
### Benchmarking an Optimization Algorithm
```python
import numpy as np
from optimization_benchmarks import ackley, rastrigin, sphere

def my_optimizer(func, bounds, max_iter=1000):
    """Your optimization algorithm here (random search as a placeholder)."""
    best_x, best_f = None, float('inf')
    for _ in range(max_iter):
        x = np.array([np.random.uniform(low, high) for low, high in bounds])
        f = func(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

test_functions = {
    'Sphere': (sphere, [(-5.12, 5.12)] * 10),
    'Ackley': (ackley, [(-32, 32)] * 10),
    'Rastrigin': (rastrigin, [(-5.12, 5.12)] * 10),
}

for name, (func, bounds) in test_functions.items():
    best_x, best_f = my_optimizer(func, bounds)
    print(f"{name}: f(x*) = {best_f}")
```
---
## 🎯 Using Benchmark Metadata (New in v0.1.1)
Version 0.1.1 introduces comprehensive metadata for all 55 functions, eliminating the need to manually specify bounds and known minima:
```python
from optimization_benchmarks import BENCHMARK_SUITE, get_function_info
import numpy as np
```
### Get all available functions
```python
from optimization_benchmarks import get_all_functions

print(f"Total functions: {len(get_all_functions())}")  # 55
```
### Get metadata for a specific function
```python
info = get_function_info('ackley')
func = info['function']
# Replicate the single (min, max) pair across all coordinates (10D by default)
bounds = info['bounds'] * info['default_dim']
known_min = info['known_minimum']
```
### Test at known minimum
```python
x = np.zeros(info['default_dim'])
result = func(x)
print(f"Ackley(0) = {result:.6f}, Expected: {known_min}")
```
### Simple Benchmarking with Metadata
```python
from optimization_benchmarks import BENCHMARK_SUITE
import numpy as np

def simple_random_search(func, bounds, n_iter=1000):
    """Simple random search optimizer."""
    best_x = None
    best_cost = float('inf')
    for _ in range(n_iter):
        # Sample uniformly within each coordinate's (min, max) bounds
        x = np.array([np.random.uniform(low, high) for low, high in bounds])
        cost = func(x)
        if cost < best_cost:
            best_cost = cost
            best_x = x
    return best_x, best_cost
```
### Benchmark all functions - no manual bounds needed!
```python
for name, meta in BENCHMARK_SUITE.items():
    func = meta['function']
    bounds = meta['bounds'] * meta['default_dim']
    known_min = meta['known_minimum']

    best_x, best_cost = simple_random_search(func, bounds)
    error = abs(best_cost - known_min)

    print(f"{name:20s} | Found: {best_cost:12.6f} | "
          f"Expected: {known_min:12.6f} | Error: {error:10.6f}")
```
### Metadata Helper Functions
| Function | Description |
|----------|-------------|
| `BENCHMARK_SUITE` | Dictionary with all 55 functions and metadata |
| `get_all_functions()` | Returns list of all function names |
| `get_function_info(name)` | Returns metadata for specific function |
| `get_bounds(name, dim=None)` | Returns bounds for given dimension |
| `get_function_list()` | Returns formatted string with all functions |
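These helpers compose naturally. A minimal sketch of looking up a function together with its bounds (the return shape of `get_bounds` is an assumption based on the table above):
```python
import numpy as np
from optimization_benchmarks import get_bounds, get_function_info

info = get_function_info('rastrigin')
bounds = get_bounds('rastrigin', dim=5)  # assumed: list of 5 (min, max) pairs

# Evaluate at the midpoint of the search box as a quick smoke test
x0 = np.array([(low + high) / 2 for low, high in bounds])
print(f"f(midpoint) = {info['function'](x0)}")
```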
### Metadata Fields
Each entry in `BENCHMARK_SUITE` contains:
- **`function`**: The callable function
- **`bounds`**: List of (min, max) tuples for each dimension
- **`default_dim`**: Recommended test dimension
- **`known_minimum`**: Known global minimum value
- **`optimal_point`**: Location(s) of the global minimum
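For example, the `sphere` entry would look roughly like this (an illustrative sketch; the bounds match the usage example above, while `default_dim` and `optimal_point` are assumptions made to show the shape):
```python
from optimization_benchmarks import sphere

# Illustrative shape only -- consult BENCHMARK_SUITE for the actual values
example_entry = {
    'function': sphere,           # the callable itself
    'bounds': [(-5.12, 5.12)],    # one (min, max) pair, replicated per dimension
    'default_dim': 10,            # assumed recommended test dimension
    'known_minimum': 0.0,         # sphere's global minimum
    'optimal_point': [0.0] * 10,  # attained at the origin
}
```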
---
## 🎮 Command-Line Interface
The package includes a CLI for quick function evaluation:
### List all available functions
```bash
optbench --list
```
### Get function information
```bash
optbench --info ackley
```
### Evaluate a function
```bash
optbench --function rastrigin --values 0 0 0 0 0
```
### Batch evaluation from CSV
```bash
optbench --function sphere --input points.csv --output results.json
```
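The expected CSV layout is not specified here; as an assumption, a plausible `points.csv` with one point per row and one coordinate per column might look like:
```
0,0,0
1,1,1
-2.5,3.1,0.7
```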
---
## 📚 Available Functions
### Multimodal Functions
- `ackley` - Multiple local minima with deep global minimum
- `rastrigin` - Highly multimodal with regular structure
- `griewank` - Multimodal with product term
- `schwefel2_26` - Deceptive with distant global minimum
- `levy` - Multimodal with sharp global minimum
- `michalewicz` - Steep ridges and valleys
### Unimodal Functions
- `sphere` - Simple convex quadratic
- `rosenbrock` - Narrow curved valley
- `sum_squares` - Weighted sphere function
- `hyperellipsoid` - Axis-parallel ellipsoid
### 2D Test Functions
- `beale` - Narrow valley
- `booth` - Simple quadratic
- `matyas` - Plate-like surface
- `himmelblau` - Four identical local minima
- `goldstein_price` - Multiple local minima
- `easom` - Flat surface with narrow peak
### Special Functions
- `branin` - Three global minima
- `camel3` - Three-hump camel function
- `camel6` - Six-hump camel function
- `kowalik` - Parameter estimation problem
- `langerman` - Multimodal test function
**And 30+ more functions!**
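Many of the 2D functions above have well-known optima, which makes quick sanity checks easy; a short sketch (function names as listed above):
```python
import numpy as np
from optimization_benchmarks import himmelblau, booth, matyas

# Known global minima of classic 2D test functions (all equal to 0)
print(himmelblau(np.array([3.0, 2.0])))  # one of its four identical minima
print(booth(np.array([1.0, 3.0])))
print(matyas(np.array([0.0, 0.0])))
```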
## 🔬 Function Properties
Each function includes:
- **Domain**: Valid input ranges
- **Dimension**: Number of variables (n for arbitrary dimensions)
- **Global Minimum**: Known optimal value and location
- **Mathematical Formula**: Documented in docstrings
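Because these properties live in the docstrings, they can be read straight from an interactive session:
```python
from optimization_benchmarks import ackley

# Prints the documented domain, dimensionality, global minimum, and formula
help(ackley)
```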
## 🎓 Academic Use
This package is perfect for:
- **Algorithm Development**: Test new optimization algorithms
- **Comparative Studies**: Benchmark against existing methods
- **Academic Research**: Reproduce published results
- **Teaching**: Demonstrate optimization concepts
- **Thesis Projects**: Comprehensive evaluation suite
### Citing This Package
If you use this package in academic work, please cite:
```bibtex
@software{optimization_benchmarks,
  author    = {AK Rahul},
  title     = {optimization-benchmarks: Benchmark Functions for Optimization Algorithms},
  year      = {2025},
  publisher = {PyPI},
  url       = {https://github.com/ak-rahul/optimization-benchmarks}
}
```
### Mathematical Formulations Based On
[1] Adorio, E. P. (2005). MVF - Multivariate Test Functions Library in C.
[2] Surjanovic, S., & Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets.
[3] Jamil, M., & Yang, X. S. (2013). A literature survey of benchmark functions for global optimization problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4(2), 150-194.
## 🤝 Contributing
Contributions are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
### Quick Contribution Guide
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/new-function`)
3. Add your function to `functions.py`
4. Add tests to `tests/test_functions.py` (see the sketch below)
5. Run tests: `pytest`
6. Submit a pull request
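As a rough sketch, a test for a new function might check its value at the known global minimum (assuming the existing tests follow a similar pattern):
```python
# tests/test_functions.py -- hypothetical test for a newly added function
import numpy as np
from optimization_benchmarks import sphere  # swap in your new function

def test_sphere_global_minimum():
    # sphere attains its known minimum of 0 at the origin
    assert np.isclose(sphere(np.zeros(10)), 0.0)
```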
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Mathematical formulations based on the MVF C library by E.P. Adorio
- Function definitions from Virtual Library of Simulation Experiments
- Inspired by the optimization research community
## 📞 Support
- **Issues**: [GitHub Issues](https://github.com/ak-rahul/optimization-benchmarks/issues)
- **Discussions**: [GitHub Discussions](https://github.com/ak-rahul/optimization-benchmarks/discussions)
## 🔗 Related Projects
- [SciPy Optimize](https://docs.scipy.org/doc/scipy/reference/optimize.html) - Optimization algorithms
- [PyGMO](https://esa.github.io/pygmo2/) - Massively parallel optimization
- [DEAP](https://github.com/DEAP/deap) - Evolutionary algorithms
---
**Made with ❤️ for the optimization community**