# dlprog
*Deep Learning Progress*
[![PyPI](https://img.shields.io/pypi/v/dlprog)](https://pypi.org/project/dlprog/)
<br>
A Python library for progress bars that aggregate the values of each iteration.
It helps track the per-epoch loss when training deep learning or machine learning models.
![demo](docs/images/demo.gif)
- [PyPI](https://pypi.org/project/dlprog/)
- [API Reference](https://misya11p.github.io/dlprog/)
## Installation
```bash
pip install dlprog
```
## General Usage
Setup
```python
from dlprog import Progress
prog = Progress()
```
Example
```python
import random
import time
n_epochs = 3
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, label='value')  # Initialize start time and epoch.
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value = random.random()
        prog.update(value)  # Update progress bar and aggregate value.
```
```
1/3: ######################################## 100% [00:00:01.06] value: 0.64755
2/3: ######################################## 100% [00:00:01.05] value: 0.41097
3/3: ######################################## 100% [00:00:01.06] value: 0.26648
```
Get each epoch's value
```
>>> prog.values
[0.6475490908029968, 0.4109736504929395, 0.26648041702649705]
```
Call the `get_all_values()` method to get the values of every iteration,
and the `get_all_times()` method to get the elapsed time of every iteration.
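A hedged sketch of retrieving them after a run (the exact return structure is not shown in this README, and per-iteration storage may depend on the `store_all_values` / `store_all_times` arguments):
```python
all_values = prog.get_all_values()  # values recorded at every iteration
all_times = prog.get_all_times()    # elapsed times recorded at every iteration
print(len(all_values), len(all_times))
```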
## In machine learning training
Setup.
The `train_progress` function is a shortcut for the `Progress` class: it returns a progress bar configured for machine learning training.
```python
from dlprog import train_progress
prog = train_progress()
```
Example: training a deep learning model with PyTorch.
```python
n_epochs = 3
n_iter = len(dataloader)
prog.start(n_epochs=n_epochs, n_iter=n_iter)
for _ in range(n_epochs):
    for x, label in dataloader:
        optimizer.zero_grad()
        y = model(x)
        loss = criterion(y, label)
        loss.backward()
        optimizer.step()
        prog.update(loss.item())
```
Output
```
1/3: ######################################## 100% [00:00:03.08] loss: 0.34099
2/3: ######################################## 100% [00:00:03.12] loss: 0.15259
3/3: ######################################## 100% [00:00:03.14] loss: 0.10684
```
If you want exact weighted values that account for batch size, pass a `weight`:
```python
prog.update(loss.item(), weight=len(x))
```
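As a side illustration (plain Python with hypothetical numbers, not dlprog code): when the last batch is smaller, the plain mean of per-batch losses differs from the per-sample mean, which is what the batch-size weights recover.
```python
batch_losses = [0.50, 0.40, 0.30]  # hypothetical per-batch mean losses
batch_sizes = [32, 32, 8]          # last batch is smaller

plain_mean = sum(batch_losses) / len(batch_losses)
weighted_mean = sum(l * n for l, n in zip(batch_losses, batch_sizes)) / sum(batch_sizes)

print(plain_mean)     # 0.4
print(weighted_mean)  # ~0.433, the true mean over all 72 samples
```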
## Advanced usage
Advanced arguments, functions, and other features.
See the [API Reference](https://misya11p.github.io/dlprog/) for more details.
### `leave_freq`
Argument that controls how often the finished progress bar is left on screen; with `leave_freq=4`, only every fourth epoch's bar remains.
```python
n_epochs = 12
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, leave_freq=4)
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value = random.random()
        prog.update(value)
```
Output
```
 4/12: ######################################## 100% [00:00:01.06] loss: 0.34203
 8/12: ######################################## 100% [00:00:01.05] loss: 0.47886
12/12: ######################################## 100% [00:00:01.05] loss: 0.40241
```
### `unit`
Argument that groups multiple epochs into a single unit; with `unit=4`, the bar and the aggregated value cover 4 epochs at a time.
```python
n_epochs = 12
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, unit=4)
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value = random.random()
        prog.update(value)
```
Output
```
 1-4/12: ######################################## 100% [00:00:04.21] value: 0.49179
 5-8/12: ######################################## 100% [00:00:04.20] value: 0.51518
9-12/12: ######################################## 100% [00:00:04.18] value: 0.54546
```
### Add note
You can add a note to the progress bar.
```python
n_iter = 10
prog.start(n_iter=n_iter, note='This is a note')
for _ in range(n_iter):
    time.sleep(0.1)
    value = random.random()
    prog.update(value)
```
Output
```
1: ######################################## 100% [00:00:01.05] 0.58703, This is a note
```
You can also attach a note to a single iteration by passing the `note` argument to `update()`, as in the short sketch below.
In addition, with `defer=True`, you can add a note at the end of each epoch using `memo()`, as in the training example that follows.
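A minimal sketch of the per-iteration case (the loop body and the note text are illustrative; the `note` keyword of `update()` is the documented part):
```python
prog.start(n_iter=n_iter)
for i in range(n_iter):
    value = random.random()
    # Hypothetical per-iteration note, shown next to the aggregated value.
    prog.update(value, note=f'iteration {i}')
```
The `memo()` / `defer=True` pattern looks like this in a training loop: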
```python
n_epochs = 3
prog.start(
    n_epochs=n_epochs,
    n_iter=len(trainloader),
    label='train_loss',
    defer=True,
    width=20,
)
for _ in range(n_epochs):
    for x, label in trainloader:
        optimizer.zero_grad()
        y = model(x)
        loss = criterion(y, label)
        loss.backward()
        optimizer.step()
        prog.update(loss.item())
    test_loss = eval_model(model)
    prog.memo(f'test_loss: {test_loss:.5f}')
```
Output
```
1/3: #################### 100% [00:00:02.83] train_loss: 0.34094, test_loss: 0.18194
2/3: #################### 100% [00:00:02.70] train_loss: 0.15433, test_loss: 0.12987
3/3: #################### 100% [00:00:02.79] train_loss: 0.10651, test_loss: 0.09783
```
### Multiple values
If you want to aggregate multiple values, set `n_values` and input values as a list.
```python
n_epochs = 3
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, n_values=2)
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value1 = random.random()
        value2 = random.random() * 10
        prog.update([value1, value2])
```
Output
```
1/3: ######################################## 100% [00:00:01.05] 0.47956, 4.96049
2/3: ######################################## 100% [00:00:01.05] 0.30275, 4.86003
3/3: ######################################## 100% [00:00:01.05] 0.43296, 3.31025
```
You can pass multiple labels as a list instead of `n_values` (a labeled variant of the loop is sketched below).
```python
prog.start(n_iter=n_iter, label=['value1', 'value2'])
```
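A labeled variant of the same loop, sketched under the assumption that the bar then prints `value1: ..., value2: ...` in place of the unlabeled numbers:
```python
prog.start(n_epochs=n_epochs, n_iter=n_iter, label=['value1', 'value2'])
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        # One value per label, aggregated separately.
        prog.update([random.random(), random.random() * 10])
```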
### Default attributes
The `Progress` object keeps its constructor arguments as default attributes.
These defaults are used whenever the corresponding argument is not passed to `start()`.
Arguments passed to `start()` take precedence while the bar is running (until the next `start()` or `reset()`).
If the required attribute (`n_iter`) has already been set, `start()` can be skipped; see the sketch below.
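A minimal sketch of this behavior (the constructor accepting the same arguments as `start()` is an assumption based on the description above):
```python
# Defaults are stored once on the object...
prog = Progress(n_iter=10, label='value')

# ...so start() can be skipped: update() alone drives the bar,
# falling back to the stored defaults.
for _ in range(10):
    prog.update(random.random())
```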
### `momentum`
Update the aggregated value using an exponential moving average.
```python
now_values = []
prog.start(n_iter=10, momentum=0.9, defer=True)
for i in range(10):
    prog.update(i)
    now_values.append(prog.now_values())
now_values
```
Output
```
1: ######################################## 100% [00:00:00.01] 3.48678
[0.0,
0.09999999999999998,
0.2899999999999999,
0.5609999999999999,
0.9048999999999999,
1.3144099999999999,
1.7829689999999998,
2.3046721,
2.8742048899999997,
3.4867844009999995]
```
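The printed sequence follows the standard EMA recurrence; the check below (plain Python, not dlprog code) reproduces the final value:
```python
# ema <- momentum * ema + (1 - momentum) * x, starting from 0
momentum = 0.9
ema = 0.0
for i in range(10):
    ema = momentum * ema + (1 - momentum) * i
print(round(ema, 5))  # 3.48678
```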
## Version History
### [1.0.0](https://pypi.org/project/dlprog/1.0.0/) (2023-07-13)
- Add `Progress` class.
- Add `train_progress` function.
### [1.1.0](https://pypi.org/project/dlprog/1.1.0/) (2023-07-13)
- Add `values` attribute.
- Add `leave_freq` argument.
- Add `unit` argument.
### [1.2.0](https://pypi.org/project/dlprog/1.2.0/) (2023-09-24)
- Add `note` argument, `memo()` method, and `defer` argument.
- Support multiple values.
- Add `round` argument.
- Support changing separator strings.
- Support skipping `start()`.
- Write API Reference.
- Other minor adjustments.
### [1.2.1](https://pypi.org/project/dlprog/1.2.1/) (2023-09-25)
- Support `note=None` in `memo()`.
- Change timing of note reset from epoch_reset to bar_reset.
### [1.2.2](https://pypi.org/project/dlprog/1.2.2/) (2023-09-25)
- Fix bug where `note` did not default to `None` in `memo()`.
### [1.2.3](https://pypi.org/project/dlprog/1.2.3/) (2023-11-28)
- Fix bug where the `label` argument was not available when `with_test=True` in `train_progress()`.
### [1.2.4](https://pypi.org/project/dlprog/1.2.4/) (2023-11-29)
- Fix bug where the `width` argument was not available when `with_test=True` in `train_progress()`.
### [1.2.5](https://pypi.org/project/dlprog/1.2.5/) (2024-01-17)
- Add `get_all_values()` method.
- Add `get_all_times()` method.
### [1.2.6](https://pypi.org/project/dlprog/1.2.6/) (2024-01-18)
- Fix bug where the elapsed time (minutes) was not displayed correctly.
### [1.2.7](https://pypi.org/project/dlprog/1.2.7/) (2024-05-10)
- Add `store_all_values` and `store_all_times` arguments.
### [1.2.8](https://pypi.org/project/dlprog/1.2.8/) (2024-06-23, Latest)
- Add `momentum` argument.
- Add `now_values()` method.