| Field | Value |
| --- | --- |
| Name | adamz |
| Version | 0.1.0 |
| home_page | None |
| Summary | An enhanced Adam optimizer with overshoot and stagnation handling. |
| upload_time | 2024-11-27 01:58:39 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.9 |
| license | MIT |
| keywords | adam, deep learning, machine learning, optimizer |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# AdamZ
A PyTorch implementation of AdamZ, an enhanced variant of the widely used Adam optimizer. AdamZ is designed to improve on its predecessor by converging more efficiently and potentially generalizing better across a variety of neural network training tasks.
## Prerequisites
- Python 3.9 or later
- PyTorch 2.5.1
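## Installation
AdamZ is published on PyPI under the name `adamz`; assuming a standard pip-based environment, `pip install adamz` should install it (the package itself records no additional requirements).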
## Usage
Instantiate the AdamZ optimizer in the same way as other standard PyTorch optimizers, configuring the hyperparameters to suit your specific task. Note that AdamZ's performance is highly sensitive to these parameters, and the default settings may not be optimal for all applications.
```python
from adamz import AdamZ
import torch

# Toy model, loss and data used purely for illustration;
# learning_rate, loss_fn and dataset are placeholders so the snippet runs.
model = torch.nn.Linear(10, 1)
loss_fn = torch.nn.MSELoss()
dataset = [(torch.randn(10), torch.randn(1)) for _ in range(100)]
learning_rate = 1e-3

optimizer = AdamZ(
    model.parameters(),
    lr=learning_rate,
    overshoot_factor=0.5,
    stagnation_factor=1.2,
    stagnation_threshold=0.2,
    patience=100,
    stagnation_period=10
)

# Training loop
for input, target in dataset:
    optimizer.zero_grad()
    loss = loss_fn(model(input), target)
    loss.backward()
    optimizer.step()
```
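The hyperparameters above hint at what AdamZ monitors during training: the package summary describes "overshoot and stagnation handling", which suggests the learning rate is scaled down by `overshoot_factor` when the loss spikes and scaled up by `stagnation_factor` when it plateaus over `stagnation_period` steps. The sketch below only illustrates that reading; it is not the package's implementation, and the helper `adjust_learning_rate`, the detection rules, and the sample loss values are all assumptions.

```python
# Illustrative sketch only -- not AdamZ's actual implementation.
# One plausible reading of the overshoot/stagnation hyperparameters.
from collections import deque
import statistics

def adjust_learning_rate(lr, loss_history,
                         overshoot_factor=0.5,
                         stagnation_factor=1.2,
                         stagnation_threshold=0.2,
                         stagnation_period=10):
    """Hypothetical rule: shrink lr after an apparent overshoot (latest loss
    jumps above the recent maximum), grow it when the loss has stagnated
    (little variation over the last `stagnation_period` steps)."""
    if len(loss_history) < stagnation_period:
        return lr
    recent = list(loss_history)[-stagnation_period:]
    if recent[-1] > max(recent[:-1]):                      # possible overshoot
        return lr * overshoot_factor
    if statistics.pstdev(recent) < stagnation_threshold:   # possible stagnation
        return lr * stagnation_factor
    return lr

# Keep a bounded loss history (cf. `patience`) and adapt the rate each step.
loss_history = deque(maxlen=100)
lr = 1e-3
for loss_value in [0.90, 0.80, 0.79, 0.78, 0.785, 0.781,
                   0.780, 0.779, 0.780, 0.780, 0.780, 0.95]:
    loss_history.append(loss_value)
    lr = adjust_learning_rate(lr, loss_history)
```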
## Citation
If you find this code helpful, please cite our paper using the following BibTeX entry.
```bibtex
@misc{zaznov2024adamzenhancedoptimisationmethod,
title={AdamZ: An Enhanced Optimisation Method for Neural Network Training},
author={Ilia Zaznov and Atta Badii and Alfonso Dufour and Julian Kunkel},
year={2024},
eprint={2411.15375},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2411.15375},
}
```
## Contributions
Contributions are welcome! Please feel free to submit a pull request or open an issue for suggestions and improvements.
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
## Contact
For any questions, please contact i.zaznov@pgr.reading.ac.uk or open an issue on GitHub.
## Raw data
{
"_id": null,
"home_page": null,
"name": "adamz",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": null,
"keywords": "adam, deep learning, machine learning, optimizer",
"author": null,
"author_email": "Ilia Zaznov <izaznov@gmail.com>, Atta Badii <atta.badii@reading.ac.uk>, Julian Kunkel <julian.kunkel@gwdg.de>, Alfonso Dufour <a.dufour@icmacentre.ac.uk>",
"download_url": "https://files.pythonhosted.org/packages/e5/c9/a72a84d90b8d262be2c162f1d6bfedaad16caf31fd09ec0561a18ffbb759/adamz-0.1.0.tar.gz",
"platform": null,
"description": "# AdamZ \nA PyTorch implementation of AdamZ, an enhanced variant of the widely-used Adam optimizer, that is designed to improve upon its predecessor by offering more efficient convergence and potentially better generalization capabilities across various neural network training tasks.\n\n#### Prerequisites\n\n- Python 3.9 or later\n- PyTorch 2.5.1\n\n### Usage\nInstantiate the AdamZ optimizer similarly to other standard optimizers, ensuring you configure the hyperparameters to suit your specific task. Note that the performance of AdamZ is highly sensitive to these parameters, and default settings may not be optimal for all applications.\n\n```python\nfrom adamz import AdamZ\nimport torch\n\nmodel = torch.nn.Linear(10, 1)\noptimizer = AdamZ(\n model.parameters(),\n lr=learning_rate,\n overshoot_factor=0.5,\n stagnation_factor=1.2,\n stagnation_threshold=0.2,\n patience=100,\n stagnation_period=10\n)\n# Training loop\nfor input, target in dataset:\n optimizer.zero_grad()\n loss = loss_fn(model(input), target)\n loss.backward()\n optimizer.step()\n```\n\n## Citation\nIf you find this code helpful, please cite our paper in the following format.\n\n```bibtex\n@misc{zaznov2024adamzenhancedoptimisationmethod,\n title={AdamZ: An Enhanced Optimisation Method for Neural Network Training}, \n author={Ilia Zaznov and Atta Badii and Alfonso Dufour and Julian Kunkel},\n year={2024},\n eprint={2411.15375},\n archivePrefix={arXiv},\n primaryClass={cs.LG},\n url={https://arxiv.org/abs/2411.15375}, \n}\n```\n\n## Contributions\n\nContributions are welcome! Please feel free to submit a pull request or open an issue for suggestions and improvements.\n\n## License\n\nThis project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.\n\n## Contact\n\nFor any questions, please contact i.zaznov@pgr.reading.ac.uk or open an issue on GitHub.\n\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "An enhanced Adam optimizer with overshoot and stagnation handling.",
"version": "0.1.0",
"project_urls": {
"Homepage": "https://github.com/zaznov/AdamZ",
"Issues": "https://github.com/zaznov/AdamZ/issues"
},
"split_keywords": [
"adam",
" deep learning",
" machine learning",
" optimizer"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "812d902a2d7228e7d3244c94026db1ed652e412f1b9aebf7a91b9bc1ef791d72",
"md5": "8ce5c85c342ad64cc4df617c0570d9ea",
"sha256": "ffffa5048014d5014179e5b44321cc7e618a05e7c08221839ecd132512fa11e1"
},
"downloads": -1,
"filename": "adamz-0.1.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "8ce5c85c342ad64cc4df617c0570d9ea",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 23477,
"upload_time": "2024-11-27T01:58:38",
"upload_time_iso_8601": "2024-11-27T01:58:38.040570Z",
"url": "https://files.pythonhosted.org/packages/81/2d/902a2d7228e7d3244c94026db1ed652e412f1b9aebf7a91b9bc1ef791d72/adamz-0.1.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "e5c9a72a84d90b8d262be2c162f1d6bfedaad16caf31fd09ec0561a18ffbb759",
"md5": "d2d3efa8a5d02788030f486282b71ec5",
"sha256": "5a6c73ddf44dfa713247deef318103d1877e43c1922def7fe887d4b8b628886b"
},
"downloads": -1,
"filename": "adamz-0.1.0.tar.gz",
"has_sig": false,
"md5_digest": "d2d3efa8a5d02788030f486282b71ec5",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 21810,
"upload_time": "2024-11-27T01:58:39",
"upload_time_iso_8601": "2024-11-27T01:58:39.804447Z",
"url": "https://files.pythonhosted.org/packages/e5/c9/a72a84d90b8d262be2c162f1d6bfedaad16caf31fd09ec0561a18ffbb759/adamz-0.1.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-11-27 01:58:39",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "zaznov",
"github_project": "AdamZ",
"github_not_found": true,
"lcname": "adamz"
}