<!-- TOC ignore:true -->
# reaction-metrics-exporter
> [!note] 💚 A lot of inspiration has been drawn from [`dmarc-metrics-exporter`](https://github.com/jgosmann/dmarc-metrics-exporter).
Export [OpenMetrics](https://prometheus.io/docs/specs/om/open_metrics_spec/) for [reaction](https://reaction.ppom.me/). The exporter continuously monitors and parses reaction's logs and state.
The following metrics are collected and exposed through an HTTP endpoint:
- `reaction_match_total`: total number of matches;
- `reaction_action_total`: total number of actions;
- `reaction_pending_count`: current number of pending actions.
All metrics are labelled with `stream` and `filter`. Action-related metrics have an additional `action` label.
> ⚠️ In the long term, metrics will be integrated into `reaction`. Whether they will remain backward-compatible depends on long-term relevance and performance.
<!-- TOC ignore:true -->
## Table of contents
<!-- TOC -->
- [Quick start](#quick-start)
- [The matches dilemma](#the-matches-dilemma)
- [Usage details](#usage-details)
- [Real-world setup](#real-world-setup)
- [Visualising data](#visualising-data)
- [Development setup](#development-setup)
<!-- /TOC -->
# Quick start
> [!caution] ⚠️ Do not use in production as-is; see [real-world setup](#real-world-setup).
## Prerequisites
- `python>=3.10` and `pip`;
- `reaction==2` (tested up to `v2.1.2`);
- [`libsystemd`](https://www.freedesktop.org/software/systemd/man/latest/libsystemd.html);
- [`pkg-config`](https://www.freedesktop.org/wiki/Software/pkg-config/).
## Install
```bash
python3 -m pip install reaction-metrics-exporter
```
It is recommended to install the exporter in a [virtualenv](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/).
## Configure
Create a configuration file, *e.g.* `config.yml`:
```yaml
metrics:
  # export all possible metrics
  export:
    matches:
    actions:
    pending:

reaction:
  # as you would pass to `reaction test-config`
  config: /etc/reaction
  logs:
    # monitor logs for `reaction.service`
    systemd:

persist:
  # save metrics from time to time
  folder: ~/.local/share/reaction-metrics-exporter
```
>>> [!tip] Using a log file?
```yaml
reaction:
  # ...
  logs:
    # replace with your log path
    file: /var/log/reaction.log
```
>>>
## Run
```bash
python3 -m reaction_metrics_exporter -c /etc/reaction-metrics-exporter/config.yml start
```
Metrics are exposed at http://localhost:8080/metrics.
> [!note] 💡 Metrics are written on disk on exit and reloaded on subsequent starts.
# The matches dilemma
`reaction` matches often contain valuable information, such as IP addresses. Exporting them as metric labels is somewhat hackish; in theory they should be sent to a log database, but those are heavyweight and less common. The default configuration is conservative and **does not export them**.
## How matches can become a problem
Quoting the [Prometheus docs](https://prometheus.io/docs/practices/naming/):
> CAUTION: Remember that every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of data stored. Do not use labels to store dimensions with high cardinality (many different label values), such as user IDs, email addresses, or other unbounded sets of values.
For example, metrics exported from [the SSH filter](https://reaction.ppom.me/filters/ssh.html) look like:
```
reaction_matches_total{stream="ssh",filter="failedlogin",ip="X.X.X.X"}: N
```
`N` being the number of matches for this unique combination of labels.
> ⚠️ Each new IP address will therefore create a new line in the exported data **and** a new time series in the TSDB. For large instances, this can result in storage and performance issues.
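To make the label structure concrete, here is a minimal Python sketch that decomposes a line like the one above into metric name, labels and value. This is an illustration only, not the exporter's own parser, and the IP is a documentation address:

```python
import re

# A sample exported line (real OpenMetrics separates name+labels and value
# with a space)
LINE = 'reaction_matches_total{stream="ssh",filter="failedlogin",ip="203.0.113.7"} 42'

def parse_line(line: str) -> tuple[str, dict[str, str], float]:
    """Split one metric line into (name, labels, value)."""
    name, rest = line.split("{", 1)
    labels_raw, value = rest.rsplit("}", 1)
    # each label is key="value"
    labels = dict(re.findall(r'(\w+)="([^"]*)"', labels_raw))
    return name, labels, float(value)

name, labels, value = parse_line(LINE)
print(name, labels["ip"], value)  # reaction_matches_total 203.0.113.7 42.0
```

Every distinct `ip` value yields a distinct `labels` dictionary, hence a distinct time series — which is exactly the cardinality concern above.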
## Choosing exported matches
You need to explicitly specify which patterns you want to export.
For example, to export `ip` matches of the `failedlogin` filter from the `ssh` stream:
```yaml
metrics:
  for:
    ssh:
      failedlogin:
        ip:
```
If you use the pattern `ip` in multiple streams, you can avoid repetition by exporting it globally:
```yaml
metrics:
  all:
    ip:
```
## Pre-treating matches
In some cases, you may want to transform matches prior to exporting. You can do so with [Jinja2](https://jinja.palletsprojects.com/en/stable/) expressions.
For example, for an `email` pattern, you could keep only the domain part in metrics: first to reduce cardinality, second to avoid storing too much personal data in the TSDB. This can be achieved with:
```yaml
metrics:
  all:
    email: "{{ email.split('@') | last }}"
```
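For reference, the Jinja2 expression above is equivalent to the following plain Python; the `last` filter simply picks the final element of the split:

```python
def domain_of(email: str) -> str:
    """Equivalent of the Jinja2 expression "{{ email.split('@') | last }}"."""
    return email.split("@")[-1]

print(domain_of("alice@example.org"))  # example.org
```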
## Tweaking metrics
To remove a metric from the exports, simply delete the corresponding key from the configuration. You can alternatively disable matches export for individual metrics, as these are essentially redundant.
To include meta-metrics (Python, GC, CPU...), add the `internals` key.
In this example...
```yaml
metrics:
  export:
    matches:
      labels: false
    actions:
    internals:
```
- matches will be exported with limited labels (`stream` and `filter`);
- actions will be exported with all matches;
- pending actions will **not** be exported;
- meta-metrics will be exported.
## Automatically forgetting metrics
You can configure the exporter to forget metrics periodically:
```yaml
persist:
  # you can use any number followed by M (minutes), H (hours), d (days),
  # w (weeks), m (months) or y (years)
  forget: 1m
```
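As an illustration of this syntax, a naive converter to seconds could look like the sketch below. The month and year conversions are assumptions (30-day months, 365-day years); the exporter's own arithmetic may differ:

```python
# seconds per unit, following the documented letters
UNIT_SECONDS = {
    "M": 60,           # minutes
    "H": 3600,         # hours
    "d": 86400,        # days
    "w": 7 * 86400,    # weeks
    "m": 30 * 86400,   # months (approximation)
    "y": 365 * 86400,  # years (approximation)
}

def parse_forget(spec: str) -> int:
    """Convert a duration like '1m' or '10y' into seconds."""
    number, unit = int(spec[:-1]), spec[-1]
    return number * UNIT_SECONDS[unit]

print(parse_forget("1H"))  # 3600
print(parse_forget("1m"))  # 2592000
```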
This approach has a drawback: any plot relying on the absolute values of counters will reset. In practice, such plots are rare and `rate` or `increase`-like functions are used instead. Fortunately, these ignore breaks in monotonicity.
Besides, VictoriaMetrics makes it possible to approximate counters as if they had never been reset: see [Visualising data](#visualising-data).
> 🕧 The right duration depends on your setup. Start without `forget` and monitor the size of the HTTP response and the size of your TSDB.
> [!tip] 💡 A backup file is created before forgetting.
# Usage details
## Configuration
You can provide either a YAML file or a JSON file. Albeit not recommended, the exporter can also run without a configuration file.
The default configuration looks like this:
```json5
{
  // only stdout is supported atm
  "loglevel": "INFO",
  "listen": {
    "port": 8080,
    "address": "127.0.0.1"
  },
  "metrics": {
    "all": {},
    // ⚠️ no metrics exported by default!
    "export": {},
    "for": {}
  },
  "reaction": {
    "config": "/etc/reaction",
    "logs": {
      "systemd": "reaction.service"
    },
    // same default as reaction
    "socket": "/run/reaction/reaction.sock"
  },
  "persist": {
    // in seconds (e.g. 10 minutes)
    "interval": 600,
    "folder": "/var/lib/reaction-metrics-exporter",
    // never-ish
    "forget": "10y"
  }
}
```
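A user file is presumably merged over these defaults key by key. A minimal sketch of such a recursive merge (an illustration, not the exporter's actual code):

```python
def deep_merge(defaults: dict, overrides: dict) -> dict:
    """Return defaults overlaid with overrides, recursing into nested dicts."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {"listen": {"port": 8080, "address": "127.0.0.1"}, "loglevel": "INFO"}
user = {"listen": {"port": 9100}}
print(deep_merge(defaults, user))
# {'listen': {'port': 9100, 'address': '127.0.0.1'}, 'loglevel': 'INFO'}
```

Overriding `listen.port` keeps the default `listen.address`, which matches how partial configuration files usually behave.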
## Ingesting existing logs
You may want to calculate metrics from existing logs. Whilst possible, there are several limitations:
- the exporter **needs the configuration to be aligned with the logs**, especially for streams, filters and patterns;
- any previously exported metrics **will be erased** to avoid duplication.
The following command reads all known logs, calculates metrics, saves them and exits.
```bash
python3 -m reaction_metrics_exporter -c config.yml init
```
You can then launch the usual command (`start`).
> 👉 Use this command if something has gone wrong with your metrics (*hopefully not*) and you have kept the logs.
## Commands
```
usage: python -m reaction_metrics_exporter [-h] [-c CONFIG] [-f] [-y] {init,start,clear,defaults,test-config}

positional arguments:
  {init,start,clear,defaults,test-config}
                        mode of operation; see below

options:
  -h, --help            show this help message and exit
  -c, --config CONFIG   path to the configuration file (JSON or YAML)
  -f, --force           force clear even if backup is impossible, then delete backup
  -y, --yes             disable interaction. caution with init and clear

command:
  init: read all existing logs, compute metrics, save on disk and exit
  start: continuously read **new** logs, compute and save metrics; serve HTTP endpoint
  clear: make a backup and delete all existing metrics (-f to force)
  defaults: print the default configuration in json
  test-config: validate and output configuration in json
```
# Real-world setup
## Create an unprivileged user
The exporter should run as an unprivileged system user, for numerous reasons:
- the exporter is exposed on the web;
- it parses arbitrary data;
- it has a lot of dependencies;
- I am neither a developer nor a security expert.
This user should be able to read [journald](https://www.freedesktop.org/software/systemd/man/latest/systemd-journald.service.html) logs and to communicate with `reaction`'s socket.
First create a user and a group, then add the user to the `systemd-journal` group.
```bash
# create a system user and a matching group
/sbin/adduser reaction-metrics-exporter --no-create-home --system --group
usermod -aG systemd-journal reaction-metrics-exporter
```
Then, open an editor to modify `reaction`'s service:
```bash
systemctl edit reaction.service
```
Paste the following under the `[Service]` section:
```systemd
# Files (inc. socket) created by reaction will be owned by this group
Group=reaction-metrics-exporter
# Group will have permission for read and write
UMask=0002
```
Restart reaction:
```bash
systemctl daemon-reload
systemctl restart reaction
```
Check that you are able to communicate with `reaction` and to read the journal as that user:
```bash
sudo su reaction-metrics-exporter
reaction show
journalctl -feu reaction
```
## Running with systemd
A [service file](./reaction-metrics-exporter.service) is provided: save it to `/etc/systemd/system`.
You will need to adjust the configuration path (and possibly the Python path of your `venv`) in the `ExecStart=` directive.
> 💡 The persistence directory is created automatically by systemd under `/var/lib`.
Enable and start the exporter:
```bash
systemctl daemon-reload
systemctl enable --now reaction-metrics-exporter.service
```
Follow the logs with:
```bash
journalctl -feu reaction-metrics-exporter.service
```
## Running with Docker
> [!caution] ⬆️ Make sure you completed the [rootless setup](#create-an-unprivileged-user).
Start inside the [docker](./docker) directory.
Create a `.env` file:
```ini
UID=
GID=
JOURNAL_GID=
```
The values can be obtained from the output of `id reaction-metrics-exporter`.
You may need to adjust the default mounts in [`compose.yml`](./docker/compose.yml). The expectations are:
- `reaction`'s configuration mounted at `/etc/reaction`;
- `reaction`'s socket mounted at `/run/reaction/reaction.sock`;
- the `journald` directory mounted at `/var/log/journal`.
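Under these assumptions, the relevant part of a compose file could look like the following sketch (the read-only flags are guesses; check [`compose.yml`](./docker/compose.yml) for the authoritative version):

```yaml
services:
  rme:
    volumes:
      # reaction's configuration, socket, and the journald directory
      - /etc/reaction:/etc/reaction:ro
      - /run/reaction/reaction.sock:/run/reaction/reaction.sock
      - /var/log/journal:/var/log/journal:ro
    ports:
      # host port 8081 → exporter's default listen port
      - "8081:8080"
```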
A [sample configuration file](./docker/config.yml) is provided. Tweak it to fit your needs (don't forget to [add matches](#choosing-exported-matches) if needed).
If you want to [`init`](#ingesting-existing-logs):
```bash
docker compose up rme-init
```
To start exposing metrics:
```bash
docker compose up -d rme && docker compose logs -f
```
The exporter is mapped to the host's `8081` port by default.
>>> [!tip] Optionally, you can build the image yourself:
```bash
docker compose build
```
>>>
# Visualising data
🚧 WIP !
# Development setup
In addition to the prerequisites, you need [Poetry](https://python-poetry.org/).
```bash
# inside the cloned repository
poetry install
# run app
poetry run python -m reaction_metrics_exporter [...]
# run tests
poetry run pytest
```