# PyBench — precise microbenchmarks for Python
[CI](https://github.com/fullzer4/pybenchx/actions/workflows/ci.yml)
[PyPI](https://pypi.org/project/pybenchx/)
[Python versions](https://pypi.org/project/pybenchx/)
[License: MIT](https://opensource.org/licenses/MIT)
[Downloads](https://pepy.tech/project/pybenchx)
Measure small, focused snippets with minimal boilerplate, auto-discovery, smart calibration, and a clean CLI (`pybench`).
Run benchmarks with one command:
```bash
pybench run examples/ [-k keyword] [-P key=value ...]
```
## ✨ Highlights
- Simple API: use the `@bench(...)` decorator or suites with `Bench` + `BenchContext.start()/end()` to isolate the hot path.
- Auto-discovery: `pybench run <dir>` expands to `**/*bench.py`.
- Powerful parameterization: generate Cartesian products with `params={...}` or define per-case `args/kwargs`.
- On-the-fly overrides: `-P key=value` adjusts `n`, `repeat`, `warmup`, `group`, or custom params without editing code.
- Solid timing model: monotonic clock, warmup, GC control, and context fast-paths.
- Smart calibration: per-variant iteration tuning to hit a target budget.
- Rich reports: aligned tables with percentiles, iter/s, min…max, baseline markers, and speedups vs. base.
- HTML charts: export benchmarks as self-contained Chart.js dashboards with `--export chart`.
- History tooling: runs auto-save to `.pybenchx/`; list, inspect stats, clean, or compare with `--vs {name,last}`.
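The timing model described above (monotonic clock, warmup, GC control) can be sketched with the standard library alone. This is a conceptual illustration, not pybench's actual internals:

```python
import gc
import time

def measure(fn, n=1000, repeat=5, warmup=1):
    """Conceptual warmup + repeat timing loop (not pybench's internals)."""
    for _ in range(warmup):           # warmup batches are run and discarded
        for _ in range(n):
            fn()
    samples = []
    gc_was_enabled = gc.isenabled()
    gc.disable()                      # keep the collector out of the hot path
    try:
        for _ in range(repeat):
            t0 = time.perf_counter()  # monotonic, high-resolution clock
            for _ in range(n):
                fn()
            samples.append((time.perf_counter() - t0) / n)  # per-iteration time
    finally:
        if gc_was_enabled:
            gc.enable()
    return samples

per_iter = measure(lambda: ",".join(map(str, range(10))))
print(len(per_iter))  # one sample per repeat
```

A real harness also has to worry about clock resolution and loop overhead, which is why `n` is calibrated rather than fixed.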
## 🚀 Quickstart
### 📦 Install
- pip
```bash
pip install pybenchx
```
- uv
```bash
uv pip install pybenchx
```
### 🧪 Example benchmark
See `examples/strings_bench.py` for both styles:
```python
from pybench import bench, Bench, BenchContext

@bench(name="join", n=1000, repeat=10)
def join(sep: str = ","):
    sep.join(str(i) for i in range(100))

suite = Bench("strings")

@suite.bench(name="join-baseline", baseline=True)
def join_baseline(b: BenchContext):
    s = ",".join(str(i) for i in range(50))
    b.start(); _ = ",".join([s] * 5); b.end()
```
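Parameterized cases (`params={...}`) expand into one variant per combination, which is where rows like `join_param[n=100,sep='-']` in the output come from. A conceptual sketch of that expansion using `itertools.product`; the `params` dict here is illustrative, not pybench's internal representation:

```python
from itertools import product

# Hypothetical parameter grid, in the shape a params={...} option accepts.
params = {"n": [100, 1000], "sep": ["-", ":"]}

# Cartesian product: every n paired with every sep → 4 variants.
keys = list(params)
variants = [dict(zip(keys, combo)) for combo in product(*params.values())]
for v in variants:
    print(f"join_param[n={v['n']},sep={v['sep']!r}]")
```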
### 🏎️ Running
- Run all examples
```bash
pybench run examples/
```
- Filter by name
```bash
pybench run examples/ -k join
```
- Override params at runtime
```bash
pybench run examples/ -P repeat=5 -P n=10000
```
### 🎛️ Key CLI options
- Disable color
```bash
pybench run examples/ --no-color
```
- Sorting
```bash
pybench run examples/ --sort time --desc
```
- Time budget per variant (calibration)
```bash
pybench run examples/ --budget 300ms # total per variant; split across repeats
pybench run examples/ --max-n 1000000 # cap calibrated n
```
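Calibration picks a per-variant `n` so the repeats fit the budget. A hedged sketch of one common approach — geometric growth until a probe batch reaches the per-repeat share of the budget; pybench's actual algorithm may differ:

```python
import time

def calibrate_n(fn, budget_s=0.3, repeat=5, max_n=1_000_000):
    """Grow n geometrically until one batch of n calls fills budget_s / repeat."""
    target = budget_s / repeat       # each repeat gets an equal slice of the budget
    n = 1
    while n < max_n:
        t0 = time.perf_counter()
        for _ in range(n):
            fn()
        if time.perf_counter() - t0 >= target:
            break
        n *= 2                       # doubling keeps total probe cost bounded
    return min(n, max_n)

n = calibrate_n(lambda: sum(range(100)), budget_s=0.05, repeat=5)
print(1 <= n <= 1_000_000)
```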
- Profiles
```bash
pybench run examples/ --profile thorough # ~1s budget, repeat=30
pybench run examples/ --profile smoke # no calibration, repeat=3 (default)
```
- Save / Compare / Export
```bash
pybench run examples/ --save latest
pybench run examples/ --save-baseline main
pybench run examples/ --compare main --fail-on mean:7%,p99:12%
pybench run examples/ --export chart # HTML dashboard (Chart.js)
pybench run examples/ --export json # JSON next to auto-saved run
```
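A `--fail-on mean:7%,p99:12%` spec gates the run on regressions against a baseline. A conceptual sketch of parsing such a spec and applying it — the field names and percent semantics are assumptions here, not pybench's exact implementation:

```python
def parse_fail_on(spec: str) -> dict[str, float]:
    """Parse 'mean:7%,p99:12%' into {'mean': 0.07, 'p99': 0.12}."""
    out = {}
    for part in spec.split(","):
        metric, pct = part.split(":")
        out[metric] = float(pct.rstrip("%")) / 100.0
    return out

def regressions(current: dict, baseline: dict, limits: dict) -> list[str]:
    """Return metrics whose relative slowdown exceeds the configured limit."""
    failed = []
    for metric, limit in limits.items():
        delta = (current[metric] - baseline[metric]) / baseline[metric]
        if delta > limit:
            failed.append(metric)
    return failed

limits = parse_fail_on("mean:7%,p99:12%")
# mean is +10% (over the 7% limit) → flagged; p99 is +5% (under 12%) → passes
print(regressions({"mean": 1.10, "p99": 1.05}, {"mean": 1.0, "p99": 1.0}, limits))
```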
### 🗂️ Manage history & baselines
- List everything under `.pybenchx/`
```bash
pybench list
pybench list --baselines
```
- Storage stats & cleanup
```bash
pybench stats
pybench clean --keep 50
```
- Compare quickly
```bash
pybench run examples/ --vs main # named baseline
pybench run examples/ --vs last # last auto-saved run
```
### 📊 Output
The header shows CPU, Python runtime, `perf_counter` clock info, total time, and the active profile. The table reports each entry's speed relative to its group baseline:
```
(pybench) $ pybench run examples/
cpu: x86_64
runtime: python 3.13.5 (x86_64-linux) | perf_counter: res=1.0e-09s, mono=True
time: 23.378s | profile: smoke, budget=-, max-n=1000000, sequential
benchmark                    time (avg)    iter/s     (min … max)             p75         p99         p995        vs base
join                          13.06 µs     76.6 K    13.00 µs …  13.21 µs    13.08 µs    13.20 µs    13.21 µs    -
join_param[n=100,sep='-']     13.17 µs     75.9 K    12.79 µs …  13.72 µs    13.37 µs    13.70 µs    13.71 µs    -
join_param[n=100,sep=':']     13.06 µs     76.6 K    12.85 µs …  13.23 µs    13.14 µs    13.23 µs    13.23 µs    -
join_param[n=1000,sep='-']   131.75 µs      7.6 K   129.32 µs … 134.82 µs   132.23 µs   134.70 µs   134.76 µs    -
join_param[n=1000,sep=':']   135.62 µs      7.4 K   131.17 µs … 147.50 µs   136.68 µs   146.92 µs   147.21 µs    -
group: strings
join-baseline ★              376.07 ns      2.7 M   371.95 ns … 384.09 ns   378.96 ns   383.66 ns   383.87 ns    baseline
join-basic                   377.90 ns      2.6 M   365.89 ns … 382.65 ns   381.15 ns   382.55 ns   382.60 ns    ≈ same
concat                        10.62 µs     94.1 K    10.54 µs …  10.71 µs    10.65 µs    10.70 µs    10.71 µs    28.25× slower
```
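The p75/p99/p995 columns are percentiles over the per-repeat samples. A minimal sketch of one common percentile definition (nearest-rank; pybench may use interpolation instead):

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: smallest sample with ≥ p% of the data at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100.0 * len(ordered))  # 1-based rank into sorted data
    return ordered[max(rank, 1) - 1]

samples = [371.95, 376.07, 378.96, 383.66, 384.09]  # toy data, nanoseconds
print(percentile(samples, 75), percentile(samples, 99))
```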
## 💡 Tips
- Use `BenchContext.start()/end()` to isolate the critical section and keep setup noise out of the measurement.
- Prefer `--profile smoke` during development; switch to `--profile thorough` before publishing numbers.
- For logs, use `--no-color`.