# `resotometrics`
Resoto Prometheus exporter


## Table of contents

* [Overview](#overview)
* [Usage](#usage)
* [Details](#details)
    * [Example](#example)
    * [Taking it one step further](#taking-it-one-step-further)
* [Contact](#contact)
* [License](#license)


## Overview
`resotometrics` takes [`resotocore`](../resotocore/) graph data and runs aggregation functions on it. The aggregated metrics
are then exposed in a [Prometheus](https://prometheus.io/)-compatible format. The default TCP port is `9955` but
can be changed using the `resotometrics.web_port` config attribute.
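For example, to serve metrics on a different port you can override that attribute at startup. Here is a minimal sketch using the `--override` flag documented under [Usage](#usage); the exact invocation may vary with your deployment:
```
$ resotometrics --override resotometrics.web_port=9956
```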

More information can be found below and in [the docs](https://resoto.com/docs/concepts/components/metrics).


## Usage
`resotometrics` supports the following command-line arguments:
```
  --subscriber-id SUBSCRIBER_ID
                        Unique subscriber ID (default: resoto.metrics)
  --override CONFIG_OVERRIDE [CONFIG_OVERRIDE ...]
                        Override config attribute(s)
  --resotocore-uri RESOTOCORE_URI
                        resotocore URI (default: https://localhost:8900)
  --verbose, -v         Verbose logging
  --quiet               Only log errors
  --psk PSK             Pre-shared key
  --ca-cert CA_CERT     Path to custom CA certificate file
  --cert CERT           Path to custom certificate file
  --cert-key CERT_KEY   Path to custom certificate key file
  --cert-key-pass CERT_KEY_PASS
                        Passphrase for certificate key file
  --no-verify-certs     Turn off certificate verification
```

Every CLI argument can also be specified via an environment variable using the prefix `RESOTOMETRICS_`.

For instance, the boolean flag `--verbose` becomes `RESOTOMETRICS_VERBOSE=true`.
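As an illustration, the following two invocations are equivalent (the host name is a placeholder):
```
$ resotometrics --resotocore-uri https://resotocore.example.com:8900 --verbose

$ RESOTOMETRICS_RESOTOCORE_URI=https://resotocore.example.com:8900 \
  RESOTOMETRICS_VERBOSE=true resotometrics
```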

Once started, `resotometrics` registers for `generate_metrics` core events. When such an event is received, it
generates Resoto metrics and serves them at the `/metrics` endpoint.
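To check that the exporter is up, you can fetch the endpoint manually; a quick sketch assuming plain HTTP on the default port (use `https://` and your certificates if you run with TLS):
```
$ curl -s http://localhost:9955/metrics
```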

A Prometheus configuration could look like this:
```
scrape_configs:
  - job_name: "resotometrics"
    static_configs:
      - targets: ["localhost:9955"]
```

## Details
Resoto core supports aggregated search queries to produce metrics. Our common library [`resotolib`](../resotolib/) defines a number of base resources that are common to many cloud providers, such as compute instances, subnets, routers, and load balancers. All of them ship with a standard set of metrics specific to each resource.

For example, instances have CPU cores and memory, so they define default metrics for those attributes. Right now metrics are hard-coded and read from the base resources, but future versions of Resoto will allow you to define your own metrics in `resotocore` and have `resotometrics` export them.

For now, you can use the aggregate API at `{resotocore}:8900/graph/{graph}/reported/search/aggregate` or the `aggregate` CLI command to generate your own metrics. For API details, check out the `resotocore` API documentation as well as the Swagger UI at `{resotocore}:8900/api-doc/`.
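As a rough sketch, the aggregate endpoint can be called directly; the request shape below (plain-text query in a POST body, default graph name `resoto`) is an assumption, so confirm the details against the Swagger UI:
```
$ curl -s -X POST "https://localhost:8900/graph/resoto/reported/search/aggregate" \
    -H "Content-Type: text/plain" \
    -d 'is(instance) : sum(1) as instances_total'
```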

In the following examples we will use the Resoto shell `resh` and the `aggregate` command.


### Example
Enter the following command into `resh`:
```
search is(instance) | aggregate /ancestors.cloud.reported.name as cloud, /ancestors.account.reported.name as account, /ancestors.region.reported.name as region, instance_type as type : sum(1) as instances_total, sum(instance_cores) as cores_total, sum(instance_memory*1024*1024*1024) as memory_bytes
```

Here is the same query with line breaks added for readability (it cannot be copy-pasted as-is):
```
search is(instance) |
  aggregate
    /ancestors.cloud.reported.name as cloud,
    /ancestors.account.reported.name as account,
    /ancestors.region.reported.name as region,
    instance_type as type :
  sum(1) as instances_total,
  sum(instance_cores) as cores_total,
  sum(instance_memory*1024*1024*1024) as memory_bytes
```

If your graph contains any compute instances, the resulting output will look something like this:
```
---
group:
  cloud: aws
  account: someengineering-platform
  region: us-west-2
  type: m5.2xlarge
instances_total: 6
cores_total: 24
memory_bytes: 96636764160
---
group:
  cloud: aws
  account: someengineering-platform
  region: us-west-2
  type: m5.xlarge
instances_total: 8
cores_total: 64
memory_bytes: 257698037760
---
group:
  cloud: gcp
  account: someengineering-dev
  region: us-west1
  type: n1-standard-4
instances_total: 12
cores_total: 48
memory_bytes: 193273528320
```

Let us dissect what we've written here:
- `search is(instance)` fetches all resources that inherit from the base kind `instance`. These are compute instances like `aws_ec2_instance` or `gcp_instance`.
- `aggregate /ancestors.cloud.reported.name as cloud, /ancestors.account.reported.name as account, /ancestors.region.reported.name as region, instance_type as type` groups the instance metrics by `cloud`, `account`, and `region` name as well as `instance_type` (think `GROUP BY` in SQL).
- `sum(1) as instances_total, sum(instance_cores) as cores_total, sum(instance_memory*1024*1024*1024) as memory_bytes` sums up the total number of instances, instance cores, and memory. The latter is stored in GB, and here we convert it to bytes, as is customary in Prometheus exporters (see the exposition sketch below).
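To connect this back to Prometheus: each `group` becomes a label set, and each aggregate function becomes a sample. Here is a sketch of what such an exposition could look like; the `resoto_` metric names are illustrative assumptions, not necessarily the exact names `resotometrics` emits:
```
# HELP resoto_instances_total Number of compute instances
# TYPE resoto_instances_total gauge
resoto_instances_total{cloud="aws",account="someengineering-platform",region="us-west-2",type="m5.2xlarge"} 6.0
resoto_cores_total{cloud="aws",account="someengineering-platform",region="us-west-2",type="m5.2xlarge"} 24.0
resoto_memory_bytes{cloud="aws",account="someengineering-platform",region="us-west-2",type="m5.2xlarge"} 96636764160.0
```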


### Taking it one step further
```
search is(instance) and instance_status = running | aggregate /ancestors.cloud.reported.name as cloud, /ancestors.account.reported.name as account, /ancestors.region.reported.name as region, instance_type as type : sum(/ancestors.instance_type.reported.ondemand_cost) as instances_hourly_cost_estimate
```

Again, the same query with line breaks added for readability (it cannot be copy-pasted as-is):
```
search is(instance) and instance_status = running |
  aggregate
    /ancestors.cloud.reported.name as cloud,
    /ancestors.account.reported.name as account,
    /ancestors.region.reported.name as region,
    instance_type as type :
  sum(/ancestors.instance_type.reported.ondemand_cost) as instances_hourly_cost_estimate
```

The output looks something like this:
```
---
group:
  cloud: gcp
  account: maestro-229419
  region: us-central1
  type: n1-standard-4
instances_hourly_cost_estimate: 0.949995
```

What did we do here? We told Resoto to find all resources of type compute instance (`search is(instance)`) with a status of `running`, and then merge the results with ancestors (parents, grandparents, and so on) of kinds `cloud`, `account`, `region`, and now also `instance_type`.

Let us look at two things here. First, in the previous example we already aggregated by `instance_type`. However, that was the string attribute named `instance_type` that is part of every instance resource and contains values like `m5.xlarge` (AWS) or `n1-standard-4` (GCP). For example:
```
> search is(instance) | tail -1 | format {kind} {name} {instance_type}
aws_ec2_instance i-039e06bb2539e5484 t2.micro
```

Second, what we did now was ask Resoto to go up the graph and find the directly connected resource of kind `instance_type`.

An `instance_type` resource looks something like this:
```
> search is(instance_type) | tail -1 | dump
reported:
  kind: aws_ec2_instance_type
  id: t2.micro
  tags: {}
  name: t2.micro
  instance_type: t2.micro
  instance_cores: 1
  instance_memory: 1
  ondemand_cost: 0.0116
  ctime: '2021-09-28T13:10:08Z'
```

As you can see, the instance type resource has a float attribute called `ondemand_cost`, which is the hourly cost a cloud provider charges for this particular type of compute instance. In our aggregation query we sum up the hourly cost of all currently running compute instances and export it as a metric named `instances_hourly_cost_estimate`. If we export this metric into a time series database like Prometheus, we can plot our instance cost over time.
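For instance, once Prometheus scrapes the metric you could chart estimated hourly spend per cloud with a PromQL query; the metric name below assumes the illustrative `resoto_` naming from the sketch above:
```
sum by (cloud) (resoto_instances_hourly_cost_estimate)
```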

This is the core functionality `resotometrics` provides.


## Contact
If you have any questions feel free to [join our Discord](https://discord.gg/someengineering) or [open a GitHub issue](https://github.com/someengineering/resoto/issues/new).


## License
See [LICENSE](../LICENSE) for details.

            
