mempulse
========
Tiny yet effective Python memory profiler/tracer. Thanks to its C extension, `mempulse` keeps overhead low while giving you a holistic view of what's eating memory in your Python applications, in both development and production environments.
This package supports both `Python 2.7` and `Python >= 3.3` (tested with `Python 3.13`).
Install
-------
Install from [PyPI](https://pypi.org/project/mempulse/):
```
# Python-based `mempulse.MemoryUsageTracer` (depends on `psutil`, for Linux, macOS, Windows)
pip install mempulse[psutil]
# C-based `mempulse.cMemoryUsageTracer` (Linux only)
pip install mempulse
```
Install from source:
```
pip install setuptools
pip install .[psutil]
```
How to Use
----------
Choose an appropriate "tracing depth" (which slightly affects the overhead), then wrap the code you'd like to profile in a `with` statement:
```python
import os
import sys
import mempulse
def application():
    callback = lambda r: sys.stderr.write(mempulse.format_trace_result(r))
    trace_depth = int(os.getenv('MEMORY_TRACE_DEPTH', '1'))  # `0` disables tracing
    with mempulse.cMemoryUsageTracer(callback, trace_depth):
        the_workload()
```
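If the C extension is not available (it is Linux-only), the psutil-based tracer can be used in the same way. A minimal sketch, assuming `MemoryUsageTracer` accepts the same `(callback, trace_depth)` arguments as `cMemoryUsageTracer`:
```python
import sys
import mempulse

def application():
    callback = lambda r: sys.stderr.write(mempulse.format_trace_result(r))
    # Assumption: MemoryUsageTracer mirrors cMemoryUsageTracer's
    # (callback, trace_depth) constructor.
    with mempulse.MemoryUsageTracer(callback, 1):
        the_workload()
```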
Leaving the `with` scope summarizes line-by-line memory stats like the following:
```
Callsite         Method Name              USS      Swap     Peak RSS
--------------------------------------------------------------------
benchmark.py:53  run_mempulse_c    26,525,696         0   28,057,600
benchmark.py:29  workload          26,525,696         0   28,057,600
benchmark.py:30  workload          26,525,696         0   28,057,600
benchmark.py:31  workload          27,328,512         0   28,860,416
benchmark.py:32  workload         115,335,168         0  117,530,624
benchmark.py:33  workload         147,431,424         0  148,926,464
benchmark.py:34  workload         117,096,448         0  148,926,464
benchmark.py:35  workload          34,779,136         0  148,926,464
benchmark.py:36  workload          34,779,136         0  836,354,048
benchmark.py:37  workload          34,779,136         0  836,354,048
benchmark.py:38  workload          44,900,352         0  836,354,048
benchmark.py:39  workload          44,900,352         0  836,354,048
benchmark.py:52  run_mempulse_c    44,900,352         0  836,354,048
```
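On production systems it can be handier to append the formatted table to a log file instead of stderr. A minimal sketch; the log path and helper name are just illustrative:
```python
import mempulse

def log_trace_result(result, path='mempulse-trace.log'):  # illustrative path
    # format_trace_result() renders the table shown above as a string.
    with open(path, 'a') as fp:
        fp.write(mempulse.format_trace_result(result))

with mempulse.cMemoryUsageTracer(log_trace_result, 1):
    the_workload()
```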
Benchmark
---------
Running the benchmark program `examples/benchmark.py` with Python 3.11 on macOS (8-core Intel Core i9 @ 3.6 GHz) shows that `mempulse` incurs less overhead than similar tools such as [`memory_profiler`](https://pypi.org/project/memory-profiler/) and [`tracemalloc`](https://docs.python.org/3/library/tracemalloc.html):
| Tracer                        | Average Execution Time    | Overhead |
|-------------------------------|---------------------------|----------|
| - (without tracer) | 2.26s | - |
| `mempulse.cMemoryUsageTracer` | 3.40s | 50.44% |
| `mempulse.MemoryUsageTracer` | 3.62s | 60.18% |
| `memory_profiler` | 8.31s | 268.58% |
| `tracemalloc` | 11.02s | 387.61% |
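To estimate the overhead on your own workload, a rough timing comparison can be done as follows (a sketch, not the actual `examples/benchmark.py`; the placeholder workload is illustrative):
```python
import time
import mempulse

def the_workload():
    # Placeholder workload; substitute your own code here.
    data = [list(range(1000)) for _ in range(1000)]
    del data

def measure(runs=5):
    # Baseline: the workload without any tracer.
    start = time.perf_counter()
    for _ in range(runs):
        the_workload()
    baseline = (time.perf_counter() - start) / runs

    # Traced: the same workload wrapped in cMemoryUsageTracer, discarding records.
    start = time.perf_counter()
    for _ in range(runs):
        with mempulse.cMemoryUsageTracer(lambda r: None, 1):
            the_workload()
    traced = (time.perf_counter() - start) / runs

    print('overhead: %.2f%%' % ((traced / baseline - 1.0) * 100.0))

if __name__ == '__main__':
    measure()
```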
Limitations
-----------
* Interoperability: `mempulse` installs its trace function through `sys.settrace()` (or `PyEval_SetTrace()`), so it cannot be used together with other tracers (such as [Coverage.py](https://github.com/nedbat/coveragepy)) at the same time.
* Concurrency: `mempulse` traces the current thread only. Functions running in other threads will not appear in the line-by-line trace records; see the sketch below for one possible workaround.
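One possible workaround for the threading limitation, assuming the tracer installs its trace function on whichever thread enters the `with` block (which a `sys.settrace()`-based design implies), is to create a tracer inside each worker thread:
```python
import sys
import threading
import mempulse

def worker():
    # sys.settrace() only affects the calling thread, so entering the tracer
    # here should capture this worker's frames.
    callback = lambda r: sys.stderr.write(mempulse.format_trace_result(r))
    with mempulse.cMemoryUsageTracer(callback, 1):
        the_workload()

t = threading.Thread(target=worker)
t.start()
t.join()
```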
License
-------
* This `mempulse` software is released under the [3-Clause BSD License](https://opensource.org/license/bsd-3-clause).
* The file `mempulse/ext/uthash.h` comes from [uthash](https://troydhanson.github.io/uthash/), which is distributed under the [BSD revised](https://troydhanson.github.io/uthash/license.html) license.