clean-loop-timer

Name: clean-loop-timer
Version: 0.0.4
Home page: https://github.com/tylerlum/clean_loop_timer
Summary: Easy-to-use timer for profiling complex loops in dataset generation and neural network training
Upload time: 2023-10-08 04:30:38
Author: Tyler Lum
Keywords: python, profiling, timer, loop
# clean_loop_timer

Easy-to-use timer for profiling complex loops in dataset generation and neural network training

# Installing

Install:

```
pip install clean_loop_timer
```

# Usage

This library provides `SectionTimer` and `LoopTimer`. A `SectionTimer` measures how long a section of code takes to run, using a `with` statement. A `LoopTimer` manages multiple `SectionTimer` objects to measure several sections within a loop, aggregates the results, and prints them in a readable table. Each section is reported both in absolute time and as a fraction of the total, and both for the most recent iteration and summed over all iterations so far. This is especially useful when a section is slow on its first run but cached afterwards: you can see both the most recent time and the aggregate.
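As a quick orientation, here is the core pattern in a minimal sketch (it uses only the calls shown in the full examples below):

```
import time

from clean_loop_timer import LoopTimer

loop_timer = LoopTimer()
for _ in range(10):
    with loop_timer.add_section_timer("work"):  # time this section of the loop body
        time.sleep(0.1)

    loop_timer.pretty_print_section_times()  # print per-section and aggregated timings
```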

To run the example script from the repository:

```
python timers.py
```

Full example:

```
from tqdm import tqdm
import time

from clean_loop_timer import LoopTimer

loop_timer = LoopTimer()
pbar = tqdm(range(100))
for i in pbar:
    with loop_timer.add_section_timer("test"):
        if i < 3:
            time.sleep(1.0)
        else:
            time.sleep(0.1)

    with loop_timer.add_section_timer("test2"):
        time.sleep(0.3)

    section_times_df = loop_timer.get_section_times_df()
    loop_timer.pretty_print_section_times(df=section_times_df)
    pbar.set_description(
        " | ".join(
            [
                f"{section_times_df['Section'].iloc[j]}: {section_times_df['Most Recent Time (ms)'].iloc[j]:.0f}"
                for j in range(len(section_times_df))
            ]
        )
    )
```
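The DataFrame returned by `get_section_times_df()` appears to use the same column names as the printed table (see the example output below), so it can also be queried directly. For example, continuing the snippet above (column names assumed from the example output):

```
# Assumes the column names shown in the example output below
test_rows = section_times_df[section_times_df["Section"] == "test"]
total_ms = float(test_rows["Sum Time (ms)"].iloc[0])
print(f"'test' has taken {total_ms:.0f} ms in total so far")
```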

Full example with global loop timer:

```
from tqdm import tqdm
import time

from clean_loop_timer import get_loop_timer_instance

pbar = tqdm(range(100))
for i in pbar:
    with get_loop_timer_instance().add_section_timer("test"):
        if i < 3:
            time.sleep(1.0)
        else:
            time.sleep(0.1)

    with get_loop_timer_instance().add_section_timer("test2"):
        time.sleep(0.3)

    section_times_df = get_loop_timer_instance().get_section_times_df()
    get_loop_timer_instance().pretty_print_section_times(df=section_times_df)
    pbar.set_description(
        " | ".join(
            [
                f"{section_times_df['Section'].iloc[j]}: {section_times_df['Most Recent Time (ms)'].iloc[j]:.0f}"
                for j in range(len(section_times_df))
            ]
        )
    )
```
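Because `get_loop_timer_instance()` returns a shared instance, helper functions can time their own sections without a `loop_timer` argument being passed around. A minimal sketch (the helper and section names here are purely illustrative):

```
import time

from clean_loop_timer import get_loop_timer_instance


def load_batch():
    # Illustrative helper: it times its own section on the shared global timer
    with get_loop_timer_instance().add_section_timer("load"):
        time.sleep(0.1)


for _ in range(5):
    load_batch()

    with get_loop_timer_instance().add_section_timer("train"):
        time.sleep(0.2)

    get_loop_timer_instance().pretty_print_section_times()
```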


Example output from running `timers.py`:

```
(loop_timer_env) ➜  clean_loop_timer git:(main) ✗ python timers.py
  0%|                                                                                                                                          | 0/100 [00:00<?, ?it/s]|    | Section   |   Most Recent Time (ms) |   Most Recent Time (%) |   Sum Time (ms) |   Sum Time (%) |
|---:|:----------|------------------------:|-----------------------:|----------------:|---------------:|
|  0 | test      |                  1000.7 |                   76.9 |          1000.7 |           76.9 |
|  1 | test2     |                   300.4 |                   23.1 |           300.4 |           23.1 |
test: 1001 | test2: 300:   1%|█                                                                                                        | 1/100 [00:01<02:11,  1.32s/it]|    | Section   |   Most Recent Time (ms) |   Most Recent Time (%) |   Sum Time (ms) |   Sum Time (%) |
|---:|:----------|------------------------:|-----------------------:|----------------:|---------------:|
|  0 | test      |                  1000.4 |                   76.9 |          2001.2 |           76.9 |
|  1 | test2     |                   300.4 |                   23.1 |           600.8 |           23.1 |
test: 1000 | test2: 300:   2%|██                                                                                                       | 2/100 [00:02<02:08,  1.31s/it]|    | Section   |   Most Recent Time (ms) |   Most Recent Time (%) |   Sum Time (ms) |   Sum Time (%) |
|---:|:----------|------------------------:|-----------------------:|----------------:|---------------:|
|  0 | test      |                  1001.1 |                   76.9 |          3002.3 |           76.9 |
|  1 | test2     |                   300.6 |                   23.1 |           901.4 |           23.1 |
test: 1001 | test2: 301:   3%|███▏                                                                                                     | 3/100 [00:03<02:07,  1.31s/it]|    | Section   |   Most Recent Time (ms) |   Most Recent Time (%) |   Sum Time (ms) |   Sum Time (%) |
|---:|:----------|------------------------:|-----------------------:|----------------:|---------------:|
|  0 | test      |                   100.2 |                   25.0 |          3102.5 |           72.1 |
|  1 | test2     |                   300.2 |                   75.0 |          1201.6 |           27.9 |
test: 100 | test2: 300:   4%|████▏                                                                                                     | 4/100 [00:04<01:31,  1.05it/s]|    | Section   |   Most Recent Time (ms) |   Most Recent Time (%) |   Sum Time (ms) |   Sum Time (%) |
|---:|:----------|------------------------:|-----------------------:|----------------:|---------------:|
|  0 | test      |                   100.1 |                   25.0 |          3202.6 |           68.1 |
|  1 | test2     |                   300.6 |                   75.0 |          1502.2 |           31.9 |
test: 100 | test2: 301:   5%|█████▎                                                                                                    | 5/100 [00:04<01:11,  1.32it/s]|    | Section   |   Most Recent Time (ms) |   Most Recent Time (%) |   Sum Time (ms) |   Sum Time (%) |
|---:|:----------|------------------------:|-----------------------:|----------------:|---------------:|
|  0 | test      |                   100.1 |                   25.0 |          3302.7 |           64.7 |
|  1 | test2     |                   300.4 |                   75.0 |          1802.6 |           35.3 |
test: 100 | test2: 300:   6%|██████▎                                                                                                   | 6/100 [00:05<00:59,  1.57it/s]|    | Section   |   Most Recent Time (ms) |   Most Recent Time (%) |   Sum Time (ms) |   Sum Time (%) |
|---:|:----------|------------------------:|-----------------------:|----------------:|---------------:|
|  0 | test      |                   100.1 |                   25.0 |          3402.8 |           61.8 |
|  1 | test2     |                   300.4 |                   75.0 |          2103.0 |           38.2 |
test: 100 | test2: 300:   7%|███████▍                                                                                                  | 7/100 [00:05<00:52,  1.78it/s]|    | Section   |   Most Recent Time (ms) |   Most Recent Time (%) |   Sum Time (ms) |   Sum Time (%) |
|---:|:----------|------------------------:|-----------------------:|----------------:|---------------:|
|  0 | test      |                   100.1 |                   25.0 |          3503.0 |           59.3 |
|  1 | test2     |                   300.4 |                   75.0 |          2403.4 |           40.7 |
```

![loop_timer](https://github.com/tylerlum/loop_timer/assets/26510814/2150e7e1-47b9-405a-b83f-b6b705bff263)

# Improvements Over Alternatives

## Alternative 1: Manually adding timing code

We could produce similar behavior by adding `start_time = time.time()` and `end_time = time.time()` around the relevant section and printing `f"<section name> took {(end_time - start_time) * 1000} ms"` afterwards. This is how I used to profile code, but it has several issues (a short sketch of the pattern follows the list):

- Many extra lines of code are dedicated to these timing variables and prints, which makes the surrounding code harder to read

- When timing multiple sections of code, we either need a new pair of variable names for each section or must print each result before starting the next timer. Neither option is great: it is easy to mix up variable names, and the prints end up scattered

- This method does not aggregate across loop iterations; even more variables would be needed to accumulate each section's times and print them.
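Here is a minimal, self-contained sketch of that manual pattern with just two sections (`time.sleep` stands in for real work):

```
import time

# One pair of timing variables per section; easy to mix up as sections multiply
forward_start = time.time()
time.sleep(0.1)  # stand-in for the forward pass
forward_end = time.time()

loss_start = time.time()
time.sleep(0.05)  # stand-in for the loss computation
loss_end = time.time()

print(f"forward took {(forward_end - forward_start) * 1000:.1f} ms")
print(f"loss took {(loss_end - loss_start) * 1000:.1f} ms")
```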

## Alternative 2: Other timing libraries

Most other timing libraries either require the section of code being measured to be moved into a function (which takes time and is error prone, since it requires significant code changes) or are not flexible (no control over how often results are printed, results printed immediately when each section completes, no aggregation across loop iterations).
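As an illustration of the first point, a typical decorator-based timer only works once the loop body has been refactored into functions. The decorator below is a generic sketch, not any specific library's API:

```
import functools
import time


def timed(fn):
    # Decorator-style timing: only code that is already a function can be measured
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        print(f"{fn.__name__} took {(time.perf_counter() - start) * 1000:.1f} ms")
        return result

    return wrapper


@timed
def slow_step():
    time.sleep(0.1)


for _ in range(3):
    slow_step()  # each call prints immediately; nothing is aggregated across the loop
```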

# Notes

- The `get_loop_timer_instance()` function returns a shared global `LoopTimer` instance, so code spread across functions does not need to pass a `loop_timer` variable around (see the global loop timer example above)

- In many neural network training loops, data loading is an important part to profile. However, it does not happen in an explicit section of code, so it cannot easily be wrapped in a `with` statement. The example below shows how to handle this.

# Complex Example

Original training loop (no profiling):

```
for batch_idx, batch_data in tqdm(enumerate(dataloader), total=len(dataloader)):
    # Forward pass
    output = model(batch_data.input)

    # Loss
    loss = loss_fn(output, batch_data.output)

    # Backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Log
    wandb.log({"loss": loss.item()})
```

Manual profiling (ugly):

```
total_time_aggregate = 0
data_time_aggregate = 0
forward_time_aggregate = 0
loss_time_aggregate = 0
backward_time_aggregate = 0
log_time_aggregate = 0

t0 = time.time()
for batch_idx, batch_data in tqdm(enumerate(dataloader), total=len(dataloader)):
    t1 = time.time()

    # Forward pass
    output = model(batch_data.input)
    t2 = time.time()

    # Loss
    loss = loss_fn(output, batch_data.output)
    t3 = time.time()

    # Backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    t4 = time.time()

    # Log
    wandb.log({"loss": loss.item()})
    t5 = time.time()

    total_time = t5 - t0
    data_time = t1 - t0
    forward_time = t2 - t1
    loss_time = t3 - t2
    backward_time = t4 - t3
    log_time = t5 - t4
    total_time_aggregate += total_time
    data_time_aggregate += data_time
    forward_time_aggregate += forward_time
    loss_time_aggregate += loss_time
    backward_time_aggregate += backward_time
    log_time_aggregate += log_time
    print("MOST RECENT")
    print(f"Total: {total_time:.2f} | ")
    print(f"Data: {data_time:.2f} {data_time / total_time * 100:.2f}%")
    print(f"Forward: {forward_time:.2f} {forward_time / total_time * 100:.2f}%")
    print(f"Loss: {loss_time:.2f} {loss_time / total_time * 100:.2f}%")
    print(f"Backward: {backward_time:.2f} {backward_time / total_time * 100:.2f}%")
    print(f"Log: {log_time:.2f} {log_time / total_time * 100:.2f}%")
    print()
    print("AGGREGATE")
    print(f"Total: {total_time_aggregate:.2f} | ")
    print(f"Data: {data_time_aggregate:.2f} {data_time_aggregate / total_time_aggregate * 100:.2f}%")
    print(f"Forward: {forward_time_aggregate:.2f} {forward_time_aggregate / total_time_aggregate * 100:.2f}%")
    print(f"Loss: {loss_time_aggregate:.2f} {loss_time_aggregate / total_time_aggregate * 100:.2f}%")
    print(f"Backward: {backward_time_aggregate:.2f} {backward_time_aggregate / total_time_aggregate * 100:.2f}%")
    print(f"Log: {log_time_aggregate:.2f} {log_time_aggregate / total_time_aggregate * 100:.2f}%")
    print()
    print()
    t0 = time.time()
```

Loop timer profiling (nice):

```
loop_timer = LoopTimer()

dataload_section_timer = loop_timer.add_section_timer("Data").start()  # Workaround for dataload

for batch_idx, batch_data in tqdm(enumerate(dataloader), total=len(dataloader)):
    dataload_section_timer.stop()

    with loop_timer.add_section_timer("Forward"):
        output = model(batch_data.input)

    with loop_timer.add_section_timer("Loss"):
        loss = loss_fn(output, batch_data.output)

    with loop_timer.add_section_timer("Backward"):
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    with loop_timer.add_section_timer("Log"):
        wandb.log({"loss": loss.item()})

    loop_timer.pretty_print_section_times()

    if batch_idx < len(dataloader) - 1:  # Avoid starting timer at end of last batch
        dataload_section_timer = loop_timer.add_section_timer("Data").start()
```

Note how the manual version requires many error-prone variables and messy print statements, while the loop timer version is a set of clean `with` blocks that double as section comments. The data-loading time is captured by starting the `"Data"` section timer before the loop (and again at the end of each iteration) and stopping it at the top of the next iteration.



            
