| Field | Value |
| --- | --- |
| Name | loggerml |
| Version | 1.2.0 |
| Summary | Log your ml training in the console in an attractive way. |
| Source | https://github.com/valentingol/logml |
| upload_time | 2025-01-13 13:38:30 |
| requires_python | >=3.7 |
| keywords | logging, machine, learning |
| requirements | rich |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | None |
| license | None |
# LoggerML - Rich machine learning logger in the console
Log your machine learning training in the console in a beautiful way using
[rich](https://github.com/Textualize/rich)✨, with useful information and
minimal code.
## Documentation [here](https://logml.readthedocs.io/en/latest/)
---
[![PyPI version](https://badge.fury.io/py/loggerml.svg)](https://badge.fury.io/py/loggerml)
![PythonVersion](https://img.shields.io/badge/python-3.7%20%7E%203.11-informational)
[![License](https://img.shields.io/github/license/valentingol/logml?color=999)](https://stringfixer.com/fr/MIT_license)
[![Ruff_logo](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v1.json)](https://github.com/charliermarsh/ruff)
[![Black_logo](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Ruff](https://github.com/valentingol/logml/actions/workflows/ruff.yaml/badge.svg)](https://github.com/valentingol/logml/actions/workflows/ruff.yaml)
[![Flake8](https://github.com/valentingol/logml/actions/workflows/flake.yaml/badge.svg)](https://github.com/valentingol/logml/actions/workflows/flake.yaml)
[![Pydocstyle](https://github.com/valentingol/logml/actions/workflows/pydocstyle.yaml/badge.svg)](https://github.com/valentingol/logml/actions/workflows/pydocstyle.yaml)
[![MyPy](https://github.com/valentingol/logml/actions/workflows/mypy.yaml/badge.svg)](https://github.com/valentingol/logml/actions/workflows/mypy.yaml)
[![PyLint](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/valentingol/451f91cece4478ebc81377e27e432f8b/raw/logml_pylint.json)](https://github.com/valentingol/logml/actions/workflows/pylint.yaml)
[![Tests](https://github.com/valentingol/logml/actions/workflows/tests.yaml/badge.svg)](https://github.com/valentingol/logml/actions/workflows/tests.yaml)
[![Coverage](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/valentingol/451f91cece4478ebc81377e27e432f8b/raw/logml_tests.json)](https://github.com/valentingol/logml/actions/workflows/tests.yaml)
[![Documentation Status](https://readthedocs.org/projects/logml/badge/?version=latest)](https://logml.readthedocs.io/en/latest/?badge=latest)
## Installation
In a new virtual environment, simply install the package from
[PyPI](https://pypi.org/project/loggerml/):
```bash
pip install loggerml
```
This package is supported on Linux, macOS and Windows, and also works in
Jupyter Notebooks.
## Quick start
### Minimal usage
Integrate the LogML logger into your training loops! For instance, for 4 epochs
and 20 batches per epoch:
```python
import time

from logml import Logger

logger = Logger(n_epochs=4, n_batches=20)

for _ in range(4):
    for _ in logger.tqdm(range(20)):
        time.sleep(0.1)  # Simulate a training step
        # Log whatever you want (int, float, str, bool):
        logger.log({
            'loss': 0.54321256,
            'accuracy': 0.85244777,
            'loss name': 'MSE',
            'improve baseline': True,
        })
```
Yields:
![base-gif](https://raw.githubusercontent.com/valentingol/logml/main/docs/_static/base.gif)
Note that the expected remaining time is displayed for the overall training as
well as for the current epoch. The logger can also average the logged values
over an epoch or over the full training, as sketched below.
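A minimal sketch of per-epoch averaging, using the `average` argument that also
appears in the advanced example further down:

```python
from logml import Logger

# 'loss' will be displayed as its running average over the current epoch.
logger = Logger(n_epochs=4, n_batches=20, average=['loss'])
```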
### Pause and resume
You can pause and resume the logger's internal clock with `logger.pause()` and
`logger.resume()`, and check the internal time with `logger.get_current_time()`.
Note that the resume method continues the time from **the last pause**: if you
pause the training logger at 10 seconds and resume it at 20 seconds, the logger
will display 10 seconds of training time. The global and the epoch times are
updated accordingly. You can also find examples in the documentation.
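A minimal sketch of pausing around a non-training step, assuming the three
methods above and the `Logger` API from the quick start (the long `sleep` is a
stand-in for any work you don't want counted):

```python
import time

from logml import Logger

logger = Logger(n_epochs=1, n_batches=10)
for _ in logger.tqdm(range(10)):
    time.sleep(0.1)  # Training step: counted in the displayed time
    logger.pause()   # Stop the internal clock...
    time.sleep(0.5)  # ...e.g. during checkpointing or evaluation
    logger.resume()  # Continue from the last pause
    logger.log({'elapsed': logger.get_current_time()})
```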
### Save the logs
On Linux you can use `tee` to save the logs to a file while displaying them in
the console. However, you need `unbuffer` to keep the styling:
```bash
unbuffer python main.py --color=auto | tee output.log
```
See
[here](https://superuser.com/questions/352697/preserve-colors-while-piping-to-tee)
for details.
### Advanced usage
Now you can add a validation logger, customize the logger with your own styles
and colors, average some values over each epoch, add a dynamic message at each
batch, update the display only every few batches, and more!
At initialization you can set a default configuration for the logger, which can
be overridden by the configuration passed to the `log` method.
An example with more features:
```python
train_logger = Logger(
    n_epochs=2,
    n_batches=20,
    log_interval=2,
    name='Training',
    name_style='dark_orange',
    styles='yellow',  # Default style for all values
    sizes={'accuracy': 4},  # Only 4 characters for 'accuracy'
    average=['loss'],  # 'loss' will be averaged over the current epoch
    bold_keys=True,  # Bold the keys
)
val_logger = Logger(
    n_epochs=2,
    n_batches=10,
    name='Validation',
    name_style='cyan',
    styles='blue',
    bold_keys=True,
    show_time=False,  # Remove the time bar
)
for _ in range(2):
    train_logger.new_epoch()  # Manually declare a new epoch
    for _ in range(20):
        train_logger.new_batch()  # Manually declare a new batch
        time.sleep(0.1)
        # Overwrite the default style for "loss" and add a message
        train_logger.log(
            {'loss': 0.54321256, 'accuracy': 85.244777},
            styles={'loss': 'italic red'},
            message="Training is going well?\nYes!",
        )
    val_logger.new_epoch()
    for _ in range(10):
        val_logger.new_batch()
        time.sleep(0.1)
        val_logger.log({'val loss': 0.65422135, 'val accuracy': 81.2658775})
    val_logger.detach()  # End the live display to print something else after
```
Yields:
![Alt Text](https://raw.githubusercontent.com/valentingol/logml/main/docs/_static/advanced.gif)
### Don't know the number of batches in advance?
If you don't know the number of batches in advance, you can initialize the
logger with `n_batches=None`. Only the available information will be displayed.
For instance, with the configuration of the first example:
![Alt Text](https://raw.githubusercontent.com/valentingol/logml/main/docs/_static/no_n_batches.png)
The progress bar is replaced by a cyclic animation. The ETA times are not known
during the first epoch but are estimated from the second epoch onwards.
Note that if you use `Logger.tqdm(dataset)` and the dataset has a length, the number of
batches will be automatically set to the length of the dataset.
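A minimal sketch of both cases; `batch_stream` is a hypothetical generator
standing in for a data loader whose length is unknown:

```python
import time

from logml import Logger

def batch_stream():
    """Hypothetical generator with no __len__: the batch count is unknown."""
    for i in range(20):
        yield i

# Unknown length: pass n_batches=None and only available info is displayed.
logger = Logger(n_epochs=2, n_batches=None)
for _ in range(2):
    for _ in logger.tqdm(batch_stream()):
        time.sleep(0.1)
        logger.log({'loss': 0.5})

# Known length: a sized dataset (e.g. a list) lets logger.tqdm set
# n_batches automatically from len(dataset).
sized_logger = Logger(n_epochs=1, n_batches=None)
for _ in sized_logger.tqdm(list(range(20))):
    time.sleep(0.1)
```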
## How to contribute
For **development**, install the package in editable mode together with the dev requirements:
```bash
pip install -e .
pip install -r requirements-dev.txt
```
Everyone can contribute to LogML, and we value everyone’s contributions.
Please see our [contributing guidelines](CONTRIBUTING.md) for more information 🤗
### Todo
Done:
- [x] Allow multiple logs on the same batch
- [x] Finalize tests for 1.0.0 major release
- [x] Add docs sections: comparison with tqdm and how to use mean_vals
(with exp tracker)
- [x] Use regex for `styles`, `sizes` and `average` keys
- [x] Be compatible with notebooks
- [x] Get back the cursor when interrupting the training
- [x] `logger.tqdm()` feature (used like `tqdm.tqdm`)
- [x] Doc with Sphinx
- [x] Be compatible with Windows and Macs
- [x] Manage a validation loop (then multiple loggers)
- [x] Add color customization for message, epoch/batch number and time
- [x] Add pause/resume feature
## License
Copyright (C) 2023 Valentin Goldité
This program is free software: you can redistribute it and/or modify it under the
terms of the [MIT License](LICENSE). This program is distributed in the hope that
it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
This project is free to use for COMMERCIAL USE, MODIFICATION, DISTRIBUTION and
PRIVATE USE as long as the original license is included, along with this
copyright notice, at the top of the modified files.