# dlprog
*Deep Learning Progress*
[![PyPI](https://img.shields.io/pypi/v/dlprog)](https://pypi.org/project/dlprog/)
<br>
A Python library for progress bars that can aggregate a value from each iteration.
It makes it easy to track the loss of each epoch in deep learning or machine learning training.
![demo](docs/images/demo.gif)
- [PyPI](https://pypi.org/project/dlprog/)
- [API Reference](https://misya11p.github.io/dlprog/)
## Installation
```bash
pip install dlprog
```
## General Usage
Setup
```python
from dlprog import Progress
prog = Progress()
```
Example
```python
import random
import time
n_epochs = 3
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, label='value') # Initialize start time and epoch.
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value = random.random()
        prog.update(value) # Update progress bar and aggregate value.
```
```
1/3: ######################################## 100% [00:00:01.06] value: 0.64755
2/3: ######################################## 100% [00:00:01.05] value: 0.41097
3/3: ######################################## 100% [00:00:01.06] value: 0.26648
```
Get each epoch's value
```
>>> prog.values
[0.6475490908029968, 0.4109736504929395, 0.26648041702649705]
```
Call the `get_all_values()` method to get the values of every iteration,
and `get_all_times()` to get the elapsed times of every iteration.
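Conceptually, the per-epoch value shown by the bar is the mean of all values passed to `update()` during that epoch. A plain-Python sketch of this aggregation (independent of dlprog; the helper name is hypothetical, not part of the library):

```python
# Conceptual sketch of per-epoch aggregation (not dlprog's actual
# implementation): the displayed value is the mean of updated values.
def aggregate_epoch(values):
    """Return the aggregated (mean) value for one epoch."""
    return sum(values) / len(values)

epoch_values = [0.2, 0.4, 0.6]
print(aggregate_epoch(epoch_values))
```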
## In machine learning training
Setup.
The `train_progress` function is a shortcut for the `Progress` class.
It returns a progress bar configured for machine learning training.
```python
from dlprog import train_progress
prog = train_progress()
```
Example: training a deep learning model with PyTorch.
```python
n_epochs = 3
n_iter = len(dataloader)
prog.start(n_epochs=n_epochs, n_iter=n_iter)
for _ in range(n_epochs):
    for x, label in dataloader:
        optimizer.zero_grad()
        y = model(x)
        loss = criterion(y, label)
        loss.backward()
        optimizer.step()
        prog.update(loss.item())
```
Output
```
1/3: ######################################## 100% [00:00:03.08] loss: 0.34099
2/3: ######################################## 100% [00:00:03.12] loss: 0.15259
3/3: ######################################## 100% [00:00:03.14] loss: 0.10684
```
To aggregate an exact weighted average that accounts for batch size, pass a `weight`:
```python
prog.update(loss.item(), weight=len(x))
```
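Why weighting matters: the last batch of a dataset is often smaller, so an unweighted mean of per-batch losses would give it too much influence. A minimal plain-Python sketch of the batch-size-weighted average (independent of dlprog; the helper name is hypothetical):

```python
# Batch-size-weighted average of per-batch losses. Without weighting,
# a small final batch's loss is over-counted in the epoch mean.
def weighted_mean(losses, weights):
    return sum(l * w for l, w in zip(losses, weights)) / sum(weights)

losses = [0.5, 0.4, 0.8]   # mean loss of each batch
weights = [32, 32, 8]      # batch sizes
print(weighted_mean(losses, weights))  # exact average over all samples
```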
## Advanced usage
Advanced arguments, methods, and other features.
See the [API Reference](https://misya11p.github.io/dlprog/) for details.
### `leave_freq`
Controls how often a finished progress bar is left on screen. With `leave_freq=4`, only every fourth epoch's bar remains.
```python
n_epochs = 12
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, leave_freq=4)
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value = random.random()
        prog.update(value)
```
Output
```
4/12: ######################################## 100% [00:00:01.06] loss: 0.34203
8/12: ######################################## 100% [00:00:01.05] loss: 0.47886
12/12: ######################################## 100% [00:00:01.05] loss: 0.40241
```
### `unit`
Treats multiple epochs as a single unit: with `unit=4`, every four epochs share one progress bar and one aggregated value.
```python
n_epochs = 12
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, unit=4)
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value = random.random()
        prog.update(value)
```
Output
```
1-4/12: ######################################## 100% [00:00:04.21] value: 0.49179
5-8/12: ######################################## 100% [00:00:04.20] value: 0.51518
9-12/12: ######################################## 100% [00:00:04.18] value: 0.54546
```
### Add note
You can add a note to the progress bar.
```python
n_iter = 10
prog.start(n_iter=n_iter, note='This is a note')
for _ in range(n_iter):
    time.sleep(0.1)
    value = random.random()
    prog.update(value)
```
Output
```
1: ######################################## 100% [00:00:01.05] 0.58703, This is a note
```
You can also add a note when calling `update()` via its `note` argument.
With `defer=True`, you can append a note at the end of an epoch using `memo()`.
```python
n_epochs = 3
prog.start(
    n_epochs=n_epochs,
    n_iter=len(trainloader),
    label='train_loss',
    defer=True,
    width=20,
)
for _ in range(n_epochs):
    for x, label in trainloader:
        optimizer.zero_grad()
        y = model(x)
        loss = criterion(y, label)
        loss.backward()
        optimizer.step()
        prog.update(loss.item())
    test_loss = eval_model(model)
    prog.memo(f'test_loss: {test_loss:.5f}')
```
Output
```
1/3: #################### 100% [00:00:02.83] train_loss: 0.34094, test_loss: 0.18194
2/3: #################### 100% [00:00:02.70] train_loss: 0.15433, test_loss: 0.12987
3/3: #################### 100% [00:00:02.79] train_loss: 0.10651, test_loss: 0.09783
```
### Multiple values
If you want to aggregate multiple values, set `n_values` and pass the values as a list.
```python
n_epochs = 3
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, n_values=2)
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value1 = random.random()
        value2 = random.random() * 10
        prog.update([value1, value2])
```
Output
```
1/3: ######################################## 100% [00:00:01.05] 0.47956, 4.96049
2/3: ######################################## 100% [00:00:01.05] 0.30275, 4.86003
3/3: ######################################## 100% [00:00:01.05] 0.43296, 3.31025
```
Instead of setting `n_values`, you can pass a list of labels.
```python
prog.start(n_iter=n_iter, label=['value1', 'value2'])
```
### Default attributes
A `Progress` object keeps its constructor arguments as default attributes.
These defaults are used for anything not specified in `start()`.
Attributes specified in `start()` take precedence while that run is active (until the next `start()` or `reset()`).
If the only required attribute (`n_iter`) has already been set, `start()` can be skipped.
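The default-attribute behavior above can be sketched with a toy class (hypothetical names, not dlprog's actual implementation): constructor arguments act as fallbacks, and `start()` arguments override them only for the current run.

```python
# Toy sketch of the default-attribute pattern (not dlprog's code):
# constructor arguments become defaults; start() arguments override
# them for the current run only.
class Bar:
    def __init__(self, n_iter=None, label='value'):
        self._defaults = {'n_iter': n_iter, 'label': label}
        self._current = {}

    def start(self, **kwargs):
        self._current = dict(self._defaults)
        self._current.update(kwargs)  # start() args take precedence

    def attr(self, name):
        return self._current[name]

bar = Bar(n_iter=10)
bar.start(label='loss')  # n_iter falls back to the constructor default
print(bar.attr('n_iter'), bar.attr('label'))  # 10 loss
```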
## Version History
### [1.0.0](https://pypi.org/project/dlprog/1.0.0/) (2023-07-13)
- Add `Progress` class.
- Add `train_progress` function.
### [1.1.0](https://pypi.org/project/dlprog/1.1.0/) (2023-07-13)
- Add `values` attribute.
- Add `leave_freq` argument.
- Add `unit` argument.
### [1.2.0](https://pypi.org/project/dlprog/1.2.0/) (2023-09-24)
- Add `note` argument, `memo()` method, and `defer` argument.
- Support multiple values.
- Add `round` argument.
- Support changing separator strings.
- Support skipping `start()`.
- Write API Reference.
- Other minor adjustments.
### [1.2.1](https://pypi.org/project/dlprog/1.2.1/) (2023-09-25)
- Support `note=None` in `memo()`.
- Change timing of note reset from epoch_reset to bar_reset.
### [1.2.2](https://pypi.org/project/dlprog/1.2.2/) (2023-09-25)
- Fix a bug where `note` did not default to `None` in `memo()`.
### [1.2.3](https://pypi.org/project/dlprog/1.2.3/) (2023-11-28)
- Fix a bug where the `label` argument was not available when `with_test=True` in `train_progress()`.
### [1.2.4](https://pypi.org/project/dlprog/1.2.4/) (2023-11-29)
- Fix a bug where the `width` argument was not available when `with_test=True` in `train_progress()`.
### [1.2.5](https://pypi.org/project/dlprog/1.2.5/) (2024-01-17)
- Add `get_all_values()` method.
- Add `get_all_times()` method.
### [1.2.6](https://pypi.org/project/dlprog/1.2.6/) (2024-01-18, Latest)
- Fix a bug where the minutes field of the elapsed time was not displayed correctly.