autometrics

Name: autometrics
Version: 0.9
Home page: https://github.com/autometrics-dev/autometrics-py
Summary: Easily add metrics to your system – and actually understand them using automatically customized Prometheus queries
Upload time: 2023-08-24 13:03:04
Author: Fiberplane
Requires Python: >=3.8,<4.0
License: MIT OR Apache-2.0
Keywords: metrics, telemetry, prometheus, monitoring, observability, instrumentation, tracing
![GitHub_headerImage](https://user-images.githubusercontent.com/3262610/221191767-73b8a8d9-9f8b-440e-8ab6-75cb3c82f2bc.png)

[![Tests](https://github.com/autometrics-dev/autometrics-py/actions/workflows/main.yml/badge.svg)](https://github.com/autometrics-dev/autometrics-py/actions/workflows/main.yml)
[![Discord Shield](https://discordapp.com/api/guilds/950489382626951178/widget.png?style=shield)](https://discord.gg/kHtwcH8As9)

> A Python port of the Rust
> [autometrics-rs](https://github.com/fiberplane/autometrics-rs) library

**Autometrics is a library that exports a decorator that makes it easy to understand the error rate, response time, and production usage of any function in your code.** Jump straight from your IDE to live Prometheus charts for each HTTP/RPC handler, database method, or other piece of application logic.

Autometrics for Python provides:

1. A decorator that can create [Prometheus](https://prometheus.io/) metrics for your functions and class methods throughout your code base.
2. A helper function that will write corresponding Prometheus queries for you in a Markdown file.

See [Why Autometrics?](https://github.com/autometrics-dev#why-autometrics) for more details on the ideas behind autometrics.

## Features

- ✨ `autometrics` decorator instruments any function or class method to track the
  most useful metrics
- 💡 Writes Prometheus queries so you can understand the data generated without
  knowing PromQL
- 🔗 Inserts links to live Prometheus charts directly into each function's docstring
- [🔍 Identify commits](#build-info) that introduced errors or increased latency
- [🚨 Define alerts](#alerts--slos) using SLO best practices directly in your source code
- [📊 Grafana dashboards](#dashboards) work out of the box to visualize the performance of instrumented functions & SLOs
- [⚙️ Configurable](#settings) metric collection library (`opentelemetry` or `prometheus`)
- [📍 Attach exemplars](#exemplars) to connect metrics with traces
- ⚡ Minimal runtime overhead

## Using autometrics-py

- Set up a [Prometheus instance](https://prometheus.io/download/)
- Configure prometheus to scrape your application ([check our instructions if you need help](https://github.com/autometrics-dev#5-configuring-prometheus))
- Include a `.env` file with your Prometheus endpoint (`PROMETHEUS_URL=<your endpoint>`). If this is not defined, the default endpoint is `http://localhost:9090/`.
- `pip install autometrics`
- Import the library in your code and use the decorator for any function:

```py
from autometrics import autometrics

@autometrics
def say_hello():
    return "hello"

```

- You can also track the number of concurrent calls to a function by using the `track_concurrency` argument: `@autometrics(track_concurrency=True)`. Note: currently only supported by the `prometheus` tracker.

- To access the PromQL queries for your decorated functions, run `help(yourfunction)` or `print(yourfunction.__doc__)`.
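
  For instance, a minimal sketch combining the two points above (the function name `create_user` is purely illustrative):

```python
from autometrics import autometrics

# Track concurrent calls in addition to the default metrics
# (per the note above, this is supported by the `prometheus` tracker).
@autometrics(track_concurrency=True)
def create_user(name: str) -> dict:
    return {"name": name}

# The generated PromQL queries live in the function's docstring
print(create_user.__doc__)
```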

- To show tooltips over decorated functions in VSCode, with links to Prometheus queries, try installing [the VSCode extension](https://marketplace.visualstudio.com/items?itemName=Fiberplane.autometrics).

> Note that we cannot support tooltips without a VSCode extension due to the behavior of the [static analyzer](https://github.com/davidhalter/jedi/issues/1921) used in VSCode.

## Dashboards

Autometrics provides [Grafana dashboards](https://github.com/autometrics-dev/autometrics-shared#dashboards) that will work for any project instrumented with the library.

## Alerts / SLOs

Autometrics makes it easy to add Prometheus alerts using Service-Level Objectives (SLOs) to a function or group of functions.

In order to receive alerts, you need to add a set of rules to your Prometheus setup. You can find out more about those rules here: [Prometheus alerting rules](https://github.com/autometrics-dev/autometrics-shared#prometheus-recording--alerting-rules). Once added, most of the recording rules are dormant; they are enabled by specific metric labels that autometrics can attach automatically.

To use autometrics SLOs and alerts, create one or more `Objective`s based on the success rate and/or latency of your functions, as shown below. An `Objective` can be passed as an argument to the `autometrics` decorator to include the given function in that objective.

```python
from autometrics import autometrics
from autometrics.objectives import Objective, ObjectiveLatency, ObjectivePercentile

# Create an objective for a high success rate
API_SLO_HIGH_SUCCESS = Objective(
    "My API SLO for High Success Rate (99.9%)",
    success_rate=ObjectivePercentile.P99_9,
)

# Or you can also create an objective for low latency
API_SLO_LOW_LATENCY = Objective(
    "My API SLO for Low Latency (99th percentile < 250ms)",
    latency=(ObjectiveLatency.Ms250, ObjectivePercentile.P99),
)

@autometrics(objective=API_SLO_HIGH_SUCCESS)
def api_handler():
    ...  # handler logic
```

Autometrics keeps track of instrumented functions calling each other. If one instrumented function calls another, the metrics for the callee will include a `caller` label set to the name of the instrumented function that called it, as in the sketch below.
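
A minimal sketch of this caller relationship (the function names are illustrative):

```python
from autometrics import autometrics

@autometrics
def get_user(user_id: int) -> dict:
    return {"id": user_id}

@autometrics
def api_handler(user_id: int) -> dict:
    # Per the paragraph above, metrics recorded for get_user will
    # include a `caller` label identifying api_handler.
    return get_user(user_id)
```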

## Settings

Autometrics uses a number of environment variables to configure its behavior. All of them can also be set via keyword arguments to the `init` function (see the sketch after this list).

- `tracker` - Configure the package that autometrics will use to produce metrics. Default is `opentelemetry`, but you can also use `prometheus`. Look in `pyproject.toml` for the corresponding versions of packages that will be used.
- `histogram_buckets` - Configure the buckets used for latency histograms. Default is `[0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0]`.
- `enable_exemplars` - Enable [exemplar collection](#exemplars). Default is `False`.
- `service_name` - Configure the [service name](#service-name).
- `version`, `commit`, `branch` - Used to configure [build_info](#build-info).
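
A hedged sketch of configuring these via `init` keyword arguments instead of environment variables. It assumes `init` is importable from the package root; all values below are illustrative placeholders, and the exact accepted values for `tracker` may differ by version:

```python
from autometrics import init

# Placeholder values for illustration only
init(
    tracker="prometheus",  # or "opentelemetry" (the default)
    histogram_buckets=[0.01, 0.05, 0.1, 0.5, 1.0, 5.0],
    enable_exemplars=True,
    service_name="billing-api",
    version="1.2.3",
    commit="a1b2c3d",
    branch="main",
)
```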

## Identifying commits that introduced problems <span name="build-info" />

> **NOTE** - As of this writing, `build_info` will not work correctly when using the default tracker (`AUTOMETRICS_TRACKER=OPEN_TELEMETRY`).
> This will be fixed once the following PR is merged in the opentelemetry-python project: https://github.com/open-telemetry/opentelemetry-python/pull/3306
>
> Support for `build_info` with the OpenTelemetry tracker is tracked in #38.

Autometrics makes it easy to identify if a specific version or commit introduced errors or increased latencies.

It uses a separate metric (`build_info`) to track the version and, optionally, git commit of your service. It then writes queries that group metrics by the `version`, `commit` and `branch` labels so you can spot correlations between those and potential issues.
Configure the labels by setting the following environment variables:

| Label     | Run-Time Environment Variables        | Default value |
| --------- | ------------------------------------- | ------------- |
| `version` | `AUTOMETRICS_VERSION`                 | `""`          |
| `commit`  | `AUTOMETRICS_COMMIT` or `COMMIT_SHA`  | `""`          |
| `branch`  | `AUTOMETRICS_BRANCH` or `BRANCH_NAME` | `""`          |

This follows the method outlined in [Exposing the software version to Prometheus](https://www.robustperception.io/exposing-the-software-version-to-prometheus/).

## Service name

All metrics produced by Autometrics have a label called `service.name` (or `service_name` when exported to Prometheus) attached to identify the logical service they are part of.

You may want to override the default service name, for example if you are running multiple instances of the same code base as separate services and want to differentiate between the metrics produced by each one.

The service name is loaded from the following environment variables, in this order:

1. `AUTOMETRICS_SERVICE_NAME` (at runtime)
2. `OTEL_SERVICE_NAME` (at runtime)
3. First part of `__package__` (at runtime)

## Exemplars

> **NOTE** - As of this writing, exemplars aren't supported by the default tracker (`AUTOMETRICS_TRACKER=OPEN_TELEMETRY`).
> You can track the progress of this feature here: https://github.com/autometrics-dev/autometrics-py/issues/41

Exemplars are a way to associate a metric sample to a trace by attaching `trace_id` and `span_id` to it. You can then use this information to jump from a metric to a trace in your tracing system (for example Jaeger). If you have an OpenTelemetry tracer configured, autometrics will automatically pick up the current span from it.

To use exemplars, you first need to switch to a tracker that supports them by setting `AUTOMETRICS_TRACKER=prometheus` and enable exemplar collection by setting `AUTOMETRICS_EXEMPLARS=true`. You also need to enable exemplar storage in Prometheus itself by launching it with the `--enable-feature=exemplar-storage` flag.
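
A rough sketch of the moving parts, assuming the two environment variables above are set before the process starts and that the `opentelemetry-sdk` package is installed:

```python
from autometrics import autometrics
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

# Configure a tracer so autometrics can pick up the current span
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)

@autometrics
def handle_request() -> str:
    return "ok"

# Calls made inside an active span get trace_id/span_id attached
# to the recorded metric samples as exemplars.
with tracer.start_as_current_span("incoming-request"):
    handle_request()
```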

## Exporting metrics

After collecting metrics with Autometrics, you need to expose them so that Prometheus can scrape them. You can either add a dedicated route to your server and return the output of the `generate_latest` function from the `prometheus_client` package, or use the `start_http_server` function from the same package to start a separate server that exposes the metrics. Autometrics also re-exports `start_http_server` with port 9464 preselected, for compatibility with other Autometrics packages.
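
A minimal sketch of the second approach, using `prometheus_client` directly. The port is just an example, and the sketch assumes the recorded metrics end up in the default `prometheus_client` registry:

```python
from prometheus_client import start_http_server

from autometrics import autometrics

@autometrics
def do_work() -> int:
    return 42

# Expose the metrics on http://localhost:9464/metrics
# (served from a background thread).
start_http_server(9464)

do_work()
# In a real service, the main process keeps running here
# (for example inside your web framework's run loop).
```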

## Development of the package

This package uses [poetry](https://python-poetry.org) as a package manager, with all dependencies separated into three groups:

- root-level dependencies, which are required
- `dev`, everything needed for development or in CI
- `examples`, the dependencies of everything in the `examples/` directory

By default, poetry installs only the required dependencies. If you want to run the examples, install with:

```sh
poetry install --with examples
```

Code in this repository is:

- formatted using [black](https://black.readthedocs.io/en/stable/)
- typed, with type definitions checked by [pyright](https://microsoft.github.io/pyright/)
- tested using [pytest](https://docs.pytest.org/)

To run these tools locally, you first have to install them, which you can do with poetry:

```sh
poetry install --with dev
```

After that, you can run the tools individually:

```sh
# Formatting using black
poetry run black .
# Lint using pyright
poetry run pyright
# Run the tests using pytest
poetry run pytest
# Run a single test, and clear the cache
poetry run pytest --cache-clear -k test_tracker
```


            
