# sparkle_log
Write a sparkline graph of CPU, memory, etc. to the Python log
```text
❯ sparkle_log
Demo of Sparkle Monitoring system metrics during operations...
INFO CPU : % | ▄ | min, mean, max (4, 4, 4)
INFO Memory: % | ▄ | min, mean, max (46, 46, 46)
Maybe CPU intensive work done here...
INFO CPU : % | ▆▁█▄ | min, mean, max (1, 3.2, 5)
INFO Memory: % | ▄▄▄▄ | min, mean, max (46, 46, 46)
Maybe Memory intensive work done here...
INFO Memory: % | ▄▄▄▄▄▄ | min, mean, max (46, 46, 46)
INFO CPU : % | ▆▁█▄▃▃▁ | min, mean, max (1, 2.6, 5)
INFO Memory: % | ▄▄▄▄▄▄▄ | min, mean, max (46, 46, 46)
```
Tracking just one metric at a time looks better.
```text
INFO Memory: % | ▄ | min, mean, max (46, 46, 46)
INFO Memory: % | ▄▄▄▄ | min, mean, max (46, 46, 46)
INFO Memory: % | ▄▄▄▄▄▄ | min, mean, max (46, 46, 46)
INFO Memory: % | ▄▄▄▄▄▄▄ | min, mean, max (46, 46, 46)
```
## Install
`pip install sparkle_log`
## Usage
This writes log entries to your log (for example, your AWS Lambda log) at a frequency you specify, e.g. every 60 seconds.
It is lightweight, cheap, and its output immediately correlates with your other print statements and log entries.

If the logging level is below INFO, no data is collected.
As a decorator:
```python
import logging

import sparkle_log

logging.basicConfig(level=logging.INFO)


@sparkle_log.monitor_metrics_on_call(("cpu", "memory", "drive"), 60)
def handler_name(event, context) -> str:
    return "Hello world!"
```
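Because collection is gated on the logging level, here is a minimal sketch of the same decorator with the root logger set to WARNING (assuming the gate behaves as described above): the function runs normally, but no sparkline entries are logged.

```python
import logging

import sparkle_log

# Assumption based on the note above: below INFO, sparkle_log collects nothing.
logging.basicConfig(level=logging.WARNING)


@sparkle_log.monitor_metrics_on_call(("cpu",), 60)
def handler_name(event, context) -> str:
    return "Hello world!"  # runs as usual; no CPU sparkline entries are logged
```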
As a context manager:
```python
import logging
import time

import sparkle_log

logging.basicConfig(level=logging.INFO)


def handler_name(event, context) -> str:
    with sparkle_log.MetricsLoggingContext(
        metrics=("cpu", "memory", "drive"), interval=5
    ):
        time.sleep(20)
        return "Hello world!"
```
With a custom metric:

```python
import logging
import random
import time

from sparkle_log import MetricsLoggingContext

logging.basicConfig(level=logging.INFO)


def dodgy_metric() -> int:
    return random.randint(0, 100)


with MetricsLoggingContext(
    metrics=("dodgy",), interval=1, custom_metrics={"dodgy": dodgy_metric}
):
    print("Monitoring system metrics during operations...")
    time.sleep(20)
```
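Judging by the example above, a custom metric is a zero-argument callable that returns a number. As an illustrative sketch (psutil and the `disk_percent` function are assumptions for this example, not part of sparkle_log), a custom metric could report how full the root filesystem is:

```python
import logging
import time

import psutil  # assumption: psutil is available in your environment
from sparkle_log import MetricsLoggingContext

logging.basicConfig(level=logging.INFO)


def disk_percent() -> float:
    # Hypothetical custom metric: percent of the root filesystem in use.
    return psutil.disk_usage("/").percent


with MetricsLoggingContext(
    metrics=("disk",), interval=5, custom_metrics={"disk": disk_percent}
):
    time.sleep(20)
```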
## Supported Styles
All graph styles are currently autoscaled. The linear, faces, and vertical styles have only 3 levels; the bar style has 8.
```python
from typing import cast
from sparkle_log import sparkline, GraphStyle
for style in ["bar", "jagged", "vertical", "linear", "ascii_art", "pie_chart", "faces"]:
    print(
        f"{style}: {sparkline([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], cast(GraphStyle, style))}"
    )
```
Results:
```text
bar: ▁▂▃▃▄▅▆▆▇█
jagged: ___--^^¯¯¯
vertical: ___|||‖‖‖‖
linear: ___---¯¯¯¯
ascii_art: .:-=+*#%@
pie_chart: ○○◔◔◑◑◕◕●●
faces: 😞😞😞😐😐😊😊😁😁😁
```
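For a sense of what autoscaling means here, a minimal sketch of min/max bucketing (illustrative only, not necessarily sparkle_log's exact implementation):

```python
# Illustrative only: scale each value between the series min and max,
# then pick one of 8 bar glyphs. This reproduces the bar output above.
BARS = "▁▂▃▄▅▆▇█"


def tiny_sparkline(values: list[float]) -> str:
    lo, hi = min(values), max(values)
    if hi == lo:
        return BARS[(len(BARS) - 1) // 2] * len(values)  # flat series, e.g. "▄▄▄▄"
    span = hi - lo
    return "".join(BARS[round((v - lo) / span * (len(BARS) - 1))] for v in values)


print(tiny_sparkline([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))  # ▁▂▃▃▄▅▆▆▇█
```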
## Prior art
You could also use container insights or htop. This tool should provide the most value when the server is headless and
you only have logging, or when there is no easy way to correlate log entries to graphs.
### Diagnostics as sparklines
- [memsparkline](https://pypi.org/project/memsparkline/) - CLI tool to show memory as sparkline.
- [densli](https://pypi.org/project/densli/) (defunct?) server stats tool with terminal sparkline display
- [sparcli](https://pypi.org/project/sparcli/) Context manager for displaying arbitrary metrics as sparklines
### Sparkline functions
- [py-sparkblocks](https://pypi.org/project/py-sparkblocks/) function to create sparkline graph
- [sparklines](https://pypi.org/project/sparklines/) function to create sparkline graph
- [rich-sparklines](https://pypi.org/project/rich-sparklines/) function that works with rich UI library
- [yasl](https://pypi.org/project/yasl/) Yet Another Sparkline Library
- [Piltdown](https://pypi.org/project/Piltdown) Variety of ASCII/Unicode graphs including sparklines.
- [termgraph](https://pypi.org/project/termgraph/) - Various terminal graphs not including sparklines, but including bar
graphs.
- [lehar](https://pypi.org/project/lehar/) - Another sparkline function
### CLI tools that display sparklines from arbitrary numbers
- [sparkl](https://pypi.org/project/sparkl/)
- [sparkback](https://github.com/mmichie/sparkback)
- [spark](http://github.com/holman/spark) Pure bash implementation that seems to have inspired many clones.