prometheus-exporter-celery

Name: prometheus-exporter-celery
Version: 0.10.14
Home page: https://github.com/danihodovic/celery-exporter
Author: Dani Hodovic
Requires Python: <3.13,>=3.11
License: MIT
Keywords: celery, task-processing, prometheus, grafana, monitoring
Upload time: 2024-11-11 16:40:30
            # celery-exporter ![Build Status](https://github.com/danihodovic/celery-exporter/actions/workflows/ci.yml/badge.svg) [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

![celery-tasks-by-task](images/celery-tasks-by-task.png)

##### Table of Contents

* [Why another exporter?](#why-another-exporter)
  * [Features](#features)
* [Usage](#usage)
  * [Enable events using the CLI](#enable-events-using-the-cli)
  * [Running the exporter](#running-the-exporter)
* [Metrics](#metrics)
* [Development](#development)
* [Contributors](#contributors)

### Why another exporter?

While I was adding Celery monitoring to a client site I realized that the
existing exporters either didn't work, exposed incorrect metric values, or
didn't expose the metrics I needed. So I wrote this exporter, which essentially
wraps the built-in Celery monitoring API and exposes all of the event metrics
to Prometheus in real time.

## Features

- Tested with both Redis and RabbitMQ
- Uses the built-in [real-time monitoring component in Celery](https://docs.celeryproject.org/en/latest/userguide/monitoring.html#real-time-processing) to expose Prometheus metrics
- Tracks task status (task-started, task-succeeded, task-failed, etc.)
- Tracks which workers are running and the number of active tasks
- Follows the Prometheus exporter [best practices](https://prometheus.io/docs/instrumenting/writing_exporters/)
- Deployed as a Docker image or a single-file Python binary (via PyInstaller)
- Exposes a health check endpoint at `/health`
- Grafana dashboards provided by the Celery-mixin
- Prometheus alerts provided by the Celery-mixin

## Dashboards and alerts

Alerting rules can be found [here](./celery-mixin/prometheus-alerts.yaml). By
default we alert if:

- A task failed in the last 10 minutes.
- No Celery workers are online.

Tweak these to suit your use-case.

The Grafana dashboard (seen in the image above) is
[here](https://grafana.com/grafana/dashboards/17508). You can import it
directly into your Grafana instance.

There's another Grafana dashboard that shows an overview of Celery tasks. An
image of it can be found in `./images/celery-tasks-overview.png`, and the
dashboard itself is available [here](https://grafana.com/grafana/dashboards/17509).

## Usage

Celery needs to be configured to send events to the broker which the exporter
will collect. You can either enable this via Celery configuration or via the
Celery CLI.

##### Enable events using the CLI

To enable events from the CLI, run the command below. Note that by default
Celery doesn't send the `task-sent` event, which needs to be [enabled](https://docs.celeryproject.org/en/latest/userguide/configuration.html#std-setting-task_send_sent_event)
separately in the configuration. The other events work out of the box.

```sh
$ celery -A <myproject> control enable_events
```

**Enable events using the configuration:**

```python
# In celeryconfig.py
worker_send_task_events = True
task_send_sent_event = True
```

**Configuration in Django:**
```python
# In settings.py
CELERY_WORKER_SEND_TASK_EVENTS = True
CELERY_TASK_SEND_SENT_EVENT = True
```
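As a side note on the naming convention above, the Django-style settings are the celeryconfig option names upper-cased with a `CELERY_` prefix. A tiny illustration (the helper function is hypothetical, not part of Celery or this exporter):

```python
def to_django_setting(name: str) -> str:
    """Upper-case a celeryconfig option name and add the CELERY_ prefix."""
    return f"CELERY_{name.upper()}"

# The two settings used above:
print(to_django_setting("worker_send_task_events"))  # CELERY_WORKER_SEND_TASK_EVENTS
print(to_django_setting("task_send_sent_event"))     # CELERY_TASK_SEND_SENT_EVENT
```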

##### Running the exporter

Using Docker:

```sh
docker run -p 9808:9808 danihodovic/celery-exporter --broker-url=redis://redis.service.consul/1
```

Using the Python binary (for non-Docker environments):
```sh
curl -L https://github.com/danihodovic/celery-exporter/releases/download/latest/celery-exporter -o ./celery-exporter
chmod +x ./celery-exporter
./celery-exporter --broker-url=redis://redis.service.consul/1
```

###### Kubernetes

There's a Helm chart in the `charts/celery-exporter` directory for deploying celery-exporter to Kubernetes using Helm.

###### Environment variables

All arguments can be specified using environment variables with a `CE_` prefix:

```sh
docker run -p 9808:9808 -e CE_BROKER_URL=redis://redis danihodovic/celery-exporter
```
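The mapping between `CE_`-prefixed variables and CLI flags appears to be mechanical: strip the prefix, lower-case, and replace underscores with hyphens. A hedged sketch of that translation (the helper name is hypothetical, not the exporter's actual code):

```python
def env_to_flag(env_name: str, prefix: str = "CE_") -> str:
    """Translate an environment variable name such as CE_BROKER_URL
    into the corresponding CLI flag, e.g. --broker-url."""
    assert env_name.startswith(prefix), f"{env_name} lacks the {prefix} prefix"
    return "--" + env_name[len(prefix):].lower().replace("_", "-")

print(env_to_flag("CE_BROKER_URL"))      # --broker-url
print(env_to_flag("CE_RETRY_INTERVAL"))  # --retry-interval
```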

###### Specifying optional broker transport options

While the default options are fine for most cases, you may need to specify
optional broker transport options. This can be done by passing one or more
`--broker-transport-option` parameters as follows:

```sh
docker run -p 9808:9808 danihodovic/celery-exporter --broker-url=redis://redis.service.consul/1 \
  --broker-transport-option global_keyprefix=danihodovic \
  --broker-transport-option visibility_timeout=7200
```

For extended transport options, such as `sentinel_kwargs`, you can pass a JSON
string, for example:

```sh
docker run -p 9808:9808 danihodovic/celery-exporter --broker-url=sentinel://sentinel.service.consul/1 \
  --broker-transport-option master_name=my_master \
  --broker-transport-option sentinel_kwargs="{\"password\": \"sentinelpass\"}"
```

The list of available broker transport options can be found here:
https://docs.celeryq.dev/projects/kombu/en/stable/reference/kombu.transport.redis.html
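To make the `key=value` convention above concrete: nested values such as `sentinel_kwargs` arrive as JSON, while scalars stay plain. A sketch of how such pairs could be folded into a transport-options dict (illustrative only, not the exporter's actual parser):

```python
import json

def parse_transport_option(pair: str) -> tuple[str, object]:
    """Split 'key=value'; try JSON first so numbers and dicts come
    through typed, and fall back to the raw string otherwise."""
    key, _, raw = pair.partition("=")
    try:
        return key, json.loads(raw)
    except json.JSONDecodeError:
        return key, raw

opts = dict(parse_transport_option(p) for p in [
    "visibility_timeout=7200",
    "master_name=my_master",
    'sentinel_kwargs={"password": "sentinelpass"}',
])
print(opts)
# → {'visibility_timeout': 7200, 'master_name': 'my_master', 'sentinel_kwargs': {'password': 'sentinelpass'}}
```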

###### Specifying an optional retry interval

By default, celery-exporter will raise an exception and exit if there are any
errors communicating with the broker. If preferred, you can have
celery-exporter retry connecting to the broker after a given number of seconds
via the `--retry-interval` parameter, as follows:

```sh
docker run -p 9808:9808 danihodovic/celery-exporter --broker-url=redis://redis.service.consul/1 \
  --retry-interval=5
```
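The behaviour described above amounts to a small retry loop: fail fast when no interval is given, otherwise sleep and reconnect. A sketch under those assumptions (not the exporter's actual code; `flaky_connect` is a stand-in for the broker connection):

```python
import time

def connect_with_retry(connect, retry_interval=None):
    """Keep calling connect(); without a retry interval, the first
    broker error propagates, mirroring the default exit-on-error."""
    while True:
        try:
            return connect()
        except ConnectionError:
            if retry_interval is None:
                raise
            time.sleep(retry_interval)

# A fake broker that fails twice before accepting the connection:
attempts = {"n": 0}
def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("broker unavailable")
    return "connected"

print(connect_with_retry(flaky_connect, retry_interval=0))  # connected
```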

##### Grafana Dashboards & Prometheus Alerts

Head over to the [Celery-mixin in this subdirectory](https://github.com/danihodovic/celery-exporter/tree/master/celery-mixin) to generate rules and dashboards suited to your Prometheus setup.

### Metrics
Name     | Description | Type
---------|-------------|----
celery_task_sent_total | Sent when a task message is published. | Counter
celery_task_received_total | Sent when the worker receives a task. | Counter
celery_task_started_total | Sent just before the worker executes the task. | Counter
celery_task_succeeded_total | Sent if the task executed successfully. | Counter
celery_task_failed_total | Sent if the execution of the task failed. | Counter
celery_task_rejected_total | The task was rejected by the worker, possibly to be re-queued or moved to a dead letter queue. | Counter
celery_task_revoked_total | Sent if the task has been revoked. | Counter
celery_task_retried_total | Sent if the task failed, but will be retried in the future. | Counter
celery_worker_up | Indicates if a worker has recently sent a heartbeat. | Gauge
celery_worker_tasks_active | The number of tasks the worker is currently processing. | Gauge
celery_task_runtime_bucket | Histogram of runtime measurements for each task. | Histogram
celery_queue_length | The number of messages in the broker queue. | Gauge
celery_active_consumer_count | The number of active consumers on the broker queue **(only works for [RabbitMQ and Qpid](https://qpid.apache.org/) brokers; more details [here](https://github.com/danihodovic/celery-exporter/pull/118#issuecomment-1169870481))** | Gauge
celery_active_worker_count | The number of active workers on the broker queue. | Gauge
celery_active_process_count | The number of active processes on the broker queue. Each worker may have more than one process. | Gauge
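A quick way to sanity-check these series is to parse the exporter's text output. A minimal stdlib-only sketch, run here against a hardcoded sample in the Prometheus exposition format (label values containing spaces are not handled):

```python
def parse_metrics(text: str) -> dict[str, float]:
    """Map each sample line (metric name plus labels) to its float value,
    skipping blank lines and # HELP / # TYPE comments."""
    samples = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        series, _, value = line.rpartition(" ")
        samples[series] = float(value)
    return samples

sample = """\
# HELP celery_worker_up Indicates if a worker has recently sent a heartbeat.
# TYPE celery_worker_up gauge
celery_worker_up{hostname="celery@worker1"} 1.0
# TYPE celery_task_failed_total counter
celery_task_failed_total{name="tasks.add"} 3.0
"""
metrics = parse_metrics(sample)
print(metrics['celery_worker_up{hostname="celery@worker1"}'])  # 1.0
```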

Used in production at [https://findwork.dev](https://findwork.dev) and [https://django.wtf](https://django.wtf).


## Development
Pull requests are welcome here!

To start developing, run the commands below after `git clone` to prepare your environment:
```shell
# Install dependencies and pre-commit hooks
poetry install
pre-commit install

# Test everything works fine
pre-commit run --all-files
docker-compose up -d
pytest --broker=memory      --log-level=DEBUG
pytest --broker=redis       --log-level=DEBUG
pytest --broker=rabbitmq    --log-level=DEBUG
```

## Contributors

<a href="https://github.com/danihodovic/celery-exporter/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=danihodovic/celery-exporter" />
</a>

Made with [contrib.rocks](https://contrib.rocks).

            
