# cmdbench

- **Name:** cmdbench
- **Version:** 0.1.21
- **Summary:** Quick and easy benchmarking for any command's CPU, memory, disk usage and runtime.
- **Homepage:** https://github.com/manzik/cmdbench
- **Author:** Mohsen Yousefian
- **License:** MIT
- **Requires Python:** >=3.6
- **Upload time:** 2024-10-07 23:57:30
- **Keywords:** benchmarks, benchmark, benchmarking, profiler, profiling, timeit, time, runtime, performance, monitoring, monitor, cpu, memory, ram, disk
- **Requirements:** No requirements were recorded.

            [![Deployment](https://github.com/manzik/cmdbench/actions/workflows/release.yml/badge.svg)](https://github.com/manzik/cmdbench/actions/workflows/release.yml)  
# CMDBench
A quick and easy benchmarking tool for any command's CPU, memory and disk usage.  
Both a command-line interface (CLI) and a Python library are provided.  

Note: This library is primarily written, tested, and maintained on the **Linux** operating system, but **Windows** and **macOS** are also supported. Create an issue if you run into a problem.
## Install
To install the library from PyPI, execute the following command in your terminal: 
```bash
pip install cmdbench
```
Python compatibility: >=3.6
# Table of contents
   * [Quick Start: Command Line Interface](#quick-start-command-line-interface)
   * [Quick Start: Library](#quick-start-library)
      * [Method 1: Easier](#method-1-easier)
      * [Method 2: More customizable](#method-2-more-customizable)
      * [Usage IPython Notebook](#usage-ipython-notebook)
   * [Documentation](#documentation)
      * [benchmark_command: method](#benchmark_commandcommand-str-iterations_num--1-raw_data--false)
      * [benchmark_command_generator: method](#benchmark_command_generatorcommand-str-iterations_num--1-raw_data--false)
      * [BenchmarkResults: Class](#benchmarkresults-class)
      * [BenchmarkDict: Class](#benchmarkdict-classdefaultdict)
   * [Notes](#notes)
      * [Windows](#windows)
      * [macOS](#macos)
      
# Quick Start: Command Line Interface
You can use the CLI provided by the Python package to benchmark any command.  
In the following demo, the command `node test.js` (a slightly modified version of [test.js](test.js)) is benchmarked 10 times, the averages of the measured resources are printed, and a plot of the command's CPU and memory usage is saved to the file `plot.png`.  

[![Usage demo](https://github.com/manzik/cmdbench/raw/main/resources/cmdbench.svg?sanitize=true)](https://asciinema.org/a/25Juo57eeSrNVJPa7rJiokW78)  

The output plot file `plot.png` for the demo will look like:  

![Resources plot](https://github.com/manzik/cmdbench/raw/main/resources/plot.png)  
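For reference, an invocation matching the demo would look roughly like the sketch below. The flag names (`--iterations`, `--print-averages`, `--save-plot`) are assumptions inferred from what the demo does; run `cmdbench --help` for the actual interface.
```bash
# Assumed flags; verify with `cmdbench --help`.
# Benchmark `node test.js` 10 times, print resource averages,
# and save the CPU/memory plot to plot.png.
cmdbench --iterations 10 --print-averages --save-plot plot.png node test.js
```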

# Quick Start: Library  

## Method 1: Easier  
You can simply use the `benchmark_command` function to benchmark a command.
The example below benchmarks the command `stress --cpu 10 --timeout 5` over 20 iterations, then prints only the first iteration's results.
```python
>>> import cmdbench
>>> benchmark_results = cmdbench.benchmark_command("stress --cpu 10 --timeout 5", iterations_num = 20)
>>> first_iteration_result = benchmark_results.get_first_iteration()
>>> first_iteration_result
{
  'cpu': {
    'system_time': 0.04,
    'total_time': 49.75,
    'user_time': 49.71,
  },
  'disk': {
    'read_bytes': 0,
    'read_chars': 5124,
    'total_bytes': 0,
    'total_chars': 5243,
    'write_bytes': 0,
    'write_chars': 119,
  },
  'memory': {
    'max': 2166784,
    'max_perprocess': 1060864,
  },
  'process': {
    'execution_time': 5.0,
    'stderr_data': '',
    'stdout_data': 'stress: info: [20773] dispatching hogs: 10 cpu, 0 io, 0 vm, 0 hdd\n\nstress: info: [20773] successful run completed in 5s\n',
  },
  'time_series': {
    'cpu_percentages': array([  0. ,   0. , 824.1, ..., 889. , 998.3,   0. ]),
    'memory_bytes': array([2166784, 2166784, 2166784, ..., 2166784, 2166784, 1060864]),
    'sample_milliseconds': array([  39,   54,   65, ..., 4979, 4988, 4997]),
  },
}
>>> first_iteration_result.process.execution_time
5.0
```
## Method 2: More customizable  
You can also create one or more `BenchmarkResults` objects and add benchmark results to them over time.  
This way you are not forced to run all benchmark iterations for a command consecutively when that is not possible.  
This can be helpful when you are benchmarking multiple commands that must run in a certain order or depend on each other.
```python
>>> from cmdbench import benchmark_command, BenchmarkResults
>>> benchmark_results = BenchmarkResults()
>>> for _ in range(20):
...   new_benchmark_result = benchmark_command("stress --cpu 10 --timeout 5")
...   benchmark_results.add_benchmark_result(new_benchmark_result)
... # The for loop above is equivalent to: benchmark_results = benchmark_command("stress --cpu 10 --timeout 5", iterations_num = 20)
>>> benchmark_results.get_averages()
{
  'cpu': {
    'system_time': 0.012500000000000002,
    'total_time': 48.468,
    'user_time': 48.45550000000001,
  },
  'disk': {
    'read_bytes': 0.0,
    'read_chars': 5124.0,
    'total_bytes': 0.0,
    'total_chars': 5232.4,
    'write_bytes': 0.0,
    'write_chars': 108.4,
  },
  'memory': {
    'max': 2094080.0,
    'max_perprocess': 1020928.0,
  },
  'process': {
    'execution_time': 5.0,
    'stderr_data': None,
    'stdout_data': None,
  },
  'time_series': {
    'cpu_percentages': array([  0.        , 476.03157895, 794.66363636, ..., 976.05555556,
       188.97777778,   0.        ]),
    'memory_bytes': array([2093924.84848485, 2096074.10526316, 2099013.81818182, ...,
       2090552.88888889, 1256561.77777778,  810188.8       ]),
    'sample_milliseconds': array([  11.42424242,   21.73684211,   30.90909091, ..., 4986.44444444,
       4995.05555556, 5000.2       ]),
  },
}
```
## Usage IPython Notebook  
For a more comprehensive demonstration of how to use the library and the resources plot, check the provided [IPython notebook](benchmark-usage.ipynb).

# Documentation  

## benchmark_command(command: str, iterations_num = 1, raw_data = False)  
  - Arguments
    - command: The target command to benchmark.
    - iterations_num: Number of times to run the command and measure its resource usage.
    - raw_data: Whether to include the raw data collected from the different sources, such as psutil and GNU time (if available).
  - Returns a BenchmarkResults object containing the results.
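
For instance, enabling `raw_data` is a single keyword change; exactly what extra data is included depends on which sources (psutil, GNU time) are available on your platform. A minimal sketch:
```python
import cmdbench

# raw_data=True requests the unprocessed per-source measurements as well;
# the exact contents depend on the platform and the tools available.
results = cmdbench.benchmark_command("ls -la", iterations_num=3, raw_data=True)
print(results.get_averages())
```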

## benchmark_command_generator(command: str, iterations_num = 1, raw_data = False)
  - Arguments: Same as benchmark_command
  - Returns a [generator](https://wiki.python.org/moin/Generators) object that yields a BenchmarkResults object after each iteration of benchmarking until done (useful for monitoring progress and receiving benchmarking data on the go).
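
A minimal sketch of consuming the generator; it assumes each yielded value is a `BenchmarkResults` object covering the iterations completed so far:
```python
import cmdbench

# Sketch: act after every iteration instead of waiting for all of them.
# Assumption: each yielded BenchmarkResults covers the iterations so far.
for results in cmdbench.benchmark_command_generator("stress --cpu 1 --timeout 2", iterations_num=5):
    print("average execution time so far:", results.get_averages().process.execution_time)
```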

## BenchmarkResults: Class
  - Methods:
    - `get_first_iteration()`  
      Returns the first iteration result in the benchmark results object.
    - `get_iterations()`  
      Returns the results for all of the iterations in the benchmark results object.
    - `get_values_per_attribute()`  
      Returns an object containing a list of values for each attribute over the different iterations. 
    - `get_averages()`  
      Returns the averages for all types of values over the different iterations. Also calculates the average of the time-series data.
    - `get_statistics()`  
      Returns different statistics (mean, stdev, min, max) for all types of values over the different iterations.
    - `get_resources_plot(width: int, height: int)`  
      Returns a matplotlib figure object of the target process's CPU and memory usage over time, which can be viewed in an IPython notebook or saved to an image file.
    - `add_benchmark_result(adding_result: BenchmarkResults)`  
      Adds another BenchmarkResults object's iteration data to the current object.
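
For example, saving the resources plot to an image file is a one-liner once you have the figure. Whether `width` and `height` follow matplotlib's inches convention is an assumption here:
```python
import cmdbench

results = cmdbench.benchmark_command("stress --cpu 1 --timeout 2", iterations_num=3)
fig = results.get_resources_plot(15, 7)  # width, height (units assumed to be matplotlib inches)
fig.savefig("plot.png")  # savefig is a standard matplotlib Figure method
```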

## BenchmarkDict: Class(defaultdict)
  A custom internal dictionary class used to represent the data for an iteration.  
  Data inside objects of this class is accessible through both dot notation (`obj.key`) and key access (`obj["key"]`).
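
For instance, both access styles below should return the same value:
```python
import cmdbench

result = cmdbench.benchmark_command("stress --cpu 1 --timeout 2").get_first_iteration()

# Equivalent per the documented BenchmarkDict behavior:
print(result.memory.max)
print(result["memory"]["max"])
```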

# Notes

## Windows
When benchmarking on Windows, you will need to wrap your main code in an `if __name__ == '__main__':` guard.
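
A minimal sketch of the required structure; the benchmarked command is just an illustrative placeholder, and the guard is presumably needed because of how child processes are spawned on Windows (an assumption, the README only states that the guard is required):
```python
import cmdbench

# Without this guard, module-level code may be re-executed by
# spawned child processes on Windows.
if __name__ == '__main__':
    results = cmdbench.benchmark_command("python --version", iterations_num=3)
    print(results.get_averages())
```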

## macOS
macOS does not allow collecting process-specific disk usage information, so disk usage will not be reported when you perform benchmarking on macOS.


            
