signal-analog (PyPI metadata)

  - **Name:** signal-analog
  - **Version:** 1.2.0
  - **Summary:** A troposphere-like library for managing SignalFx Charts, Dashboards, and Detectors.
  - **Home page:** https://github.com/Nike-inc/signal_analog
  - **Author:** Fernando Freire
  - **License:** BSD 3-Clause License
  - **Keywords:** signal_analog signalfx dashboards charts detectors monitoring signalflow
  - **Upload time:** 2018-04-12 23:10:21
[![Build Status](https://travis-ci.com/Nike-Inc/signal_analog.svg?token=ifpf79nsCVxHsobs3pQ5&branch=master)](https://travis-ci.com/Nike-Inc/signal_analog)
# signal_analog

A troposphere-like library for managing SignalFx Charts, Dashboards, and
Detectors.

This library assumes a basic familiarity with resources in SignalFx. For a
good overview of the SignalFx API consult the [upstream documentation][sfxdocs].

## TOC

  - [Features](#features)
  - [Installation](#installation)
  - [Usage](#usage)
      - [Building Charts](#charts)
      - [Building Dashboards](#dashboards)
      - [Updating Dashboards](#dashboards-updates)
      - [Dashboard Filters](#dashboard-filters)
      - [Creating Detectors](#detectors)
          - [Building Detectors from Existing Charts](#from_chart)
      - [Using Flow and Combinator Functions In Formulas](#flow)
      - [Building Dashboard Groups](#dashboard-groups)
      - [Updating Dashboard Group](#dashboard-group-updates)
      - [Talking to the SignalFlow API Directly](#signalflow)
      - [General `Resource` Guidelines](#general-resource-guidlines)
      - [Creating a CLI for your resources](#cli-builder)
  - [Contributing](#contributing)

<a name="features"></a>
## Features

  - Provides bindings for the SignalFlow DSL
  - Provides abstractions for:
      - Charts
      - Dashboards, DashboardGroups
      - Detectors
  - A CLI builder to wrap resource definitions (useful for automation)

<a name="installation"></a>
## Installation

Add `signal_analog` to the requirements file in your project:

```
# requirements.txt
# ... your other dependencies
signal_analog
```

Then run the following command to update your environment:

```
pip install -r requirements.txt
```

<a name="usage"></a>
## Usage

`signal_analog` provides two kinds of abstractions, one for building resources
in the SignalFx API and the other for describing metric timeseries through the
[Signal Flow DSL][signalflow].

The following sections describe how to use `Resource` abstractions in
conjunction with the [Signal Flow DSL][signalflow].

<a name="charts"></a>
### Building Charts

`signal_analog` provides constructs for building charts in the
`signal_analog.charts` module.

Consult the [upstream documentation][charts] for more information on Charts.

Let's consider an example where we would like to build a chart to monitor
memory utilization for a single application in a single environment.

This assumes a service reports metrics for application name as `app` and
environment as `env` with memory utilization reporting via the
`memory.utilization` metric name.

In a timeseries chart, all data displayed on the screen comes from at least one
`data` definition in the SignalFlow language. Let's begin by defining our
timeseries:

```python
from signal_analog.flow import Data

ts = Data('memory.utilization')
```

In SignalFlow parlance a timeseries is only displayed on a chart if it has been
"published". All stream functions in SignalFlow have a `publish` method that
may be called at the _end_ of all timeseries transformations.

```python
ts = Data('memory.utilization').publish()
```

As a convenience, all transformations on stream functions return the callee,
so in the above example `ts` remains bound to an instance of `Data`.
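
This "return the callee" convention can be sketched in plain Python. The class below is purely illustrative (it is not the library's actual implementation), but it shows why chained calls and the original name refer to the same object:

```python
# Illustrative stand-in for signal_analog's builder-style objects: each
# with_* method mutates the instance and returns it, enabling chaining.
class FluentChart:
    def __init__(self):
        self.options = {}

    def with_name(self, name):
        self.options['name'] = name
        return self  # returning the callee enables method chaining

    def with_description(self, desc):
        self.options['description'] = desc
        return self

chart = FluentChart().with_name('Memory Used %')
same = chart.with_description('per-host memory')
assert same is chart  # the transformation returned the callee
```

The same reasoning applies to `publish()` above: `ts` stays bound to the original `Data` instance after the call.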

Now, this timeseries isn't very useful by itself; if we attached this program
to a chart we would see _all_ `memory.utilization` timeseries for _all_
applications reporting to SignalFx!

We can restrict our view of the data by adding a filter on application name:

```python
from signal_analog.flow import Data, Filter

app_filter = Filter('app', 'foo')

ts = Data('memory.utilization', filter=app_filter).publish()
```

Now if we created a chart with this program we would only be looking at metrics
that relate to the `foo` application. Much better, but we're still looking at
instances of `foo` _regardless_ of the environment they live in.

What we'll want to do is combine our `app_filter` with another filter for the
environment. The `signal_analog.combinators` module provides some helpful
constructs for achieving this goal:

```python
from signal_analog.combinators import And

env_filter = Filter('env', 'prod')

all_filters = And(app_filter, env_filter)

ts = Data('memory.utilization', filter=all_filters).publish()
```

Excellent! We're now ready to create our chart.

First, let's give our chart a name:

```python
from signal_analog.charts import TimeSeriesChart

memory_chart = TimeSeriesChart().with_name('Memory Used %')
```

Like its `flow` counterpart, the `charts` module adheres to the builder pattern
for constructing objects that interact with the SignalFx API.

With our name in place, let's go ahead and add our program:

```python
memory_chart = TimeSeriesChart().with_name('Memory Used %').with_program(ts)
```

Each Chart understands how to serialize our SignalFlow programs appropriately,
so it is sufficient to simply pass in our reference here.

Finally, let's change the plot type on our chart so that we see solid areas
instead of flimsy lines:

```python
from signal_analog.charts import PlotType

memory_chart = TimeSeriesChart()\
                 .with_name('Memory Used %')\
                 .with_program(ts)\
                 .with_default_plot_type(PlotType.area_chart)
```

[Terrific]; there are only a few more details before we have a complete chart.

In the following sections we'll see how we can create dashboards from
collections of charts.

<a name="dashboards"></a>
### Building Dashboards

`signal_analog` provides constructs for building dashboards in the
`signal_analog.dashboards` module.

Consult the [upstream documentation][dashboards] for more information on the
Dashboard API.

Building on the examples described in the previous section, we'd now like to
build a dashboard containing our memory chart.

We start with the humble `Dashboard` object:

```python
from signal_analog.dashboards import Dashboard

dash = Dashboard()
```

Many of the same methods for charts are available on dashboards as well, so
let's give our dashboard a memorable name and configure its API token:

```python
dash.with_name('My Little Dashboard: Metrics are Magic')\
    .with_api_token('my-api-token')
```

Our final task will be to add charts to our dashboard and create it in the API!

```python
response = dash\
  .with_charts(memory_chart)\
  .with_api_token('my-api-token')\
  .create()
```

At this point one of two things will happen:

  - We receive some sort of error from the SignalFx API and an exception
  is thrown
  - We successfully created the dashboard, in which case the JSON response is
  returned as a dictionary.
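
In script form you can handle both outcomes with an ordinary `try`/`except`. The sketch below substitutes a hypothetical `FakeDashboard` for a real configured `Dashboard`, and the exception type is an assumption (consult the library for the exact class it raises):

```python
# FakeDashboard is a stand-in so the pattern can be shown without live
# credentials; a real signal_analog Dashboard would be used instead.
class FakeDashboard:
    def __init__(self, fail=False):
        self.fail = fail

    def create(self):
        if self.fail:
            raise RuntimeError('SignalFx API returned an error')
        # On success the JSON response comes back as a dictionary.
        return {'id': 'abc123', 'name': 'My Little Dashboard'}

def create_dashboard(dash):
    """Return the response dict on success, or None if the call failed."""
    try:
        return dash.create()
    except RuntimeError as err:
        print('Dashboard creation failed: {0}'.format(err))
        return None
```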

Now, storing API keys in source isn't ideal, so if you'd like to see how you
can pass in your API keys at runtime check the documentation below to see how
you can [dynamically build a CLI for your resources](#cli-builder).

<a name="dashboards-updates"></a>
### Updating Dashboards
Once you have created a dashboard you can update properties like name and
description:

```python
dash.update(
    name='updated_dashboard_name',
    description='updated_dashboard_description'
)
```

`Dashboard` updates will also update any `Chart` configurations it owns.

> **Note:** If the given dashboard does not already exist, `update` will create a new dashboard for you.

<a name="dashboard-filters"></a>
### Providing Dashboard Filters

Dashboards can be configured to provide various filters that affect the behavior of all configured charts (overriding any conflicting filters at the chart level). You may wish to do this in order to quickly change the environment that you're observing for a given set of charts.


```python
from signal_analog.filters import DashboardFilters, FilterVariable

app_var = FilterVariable()\
    .with_alias('app')\
    .with_property('app')\
    .with_is_required(True)\
    .with_value('foo')

env_var = FilterVariable()\
    .with_alias('env')\
    .with_property('env')\
    .with_is_required(True)\
    .with_value('prod')

app_filter = DashboardFilters()\
    .with_variables(app_var, env_var)
```
Here we create two filter variables, `app=foo` and `env=prod`.
Now we can pass this configuration to a dashboard object:

```python
response = dash\
    .with_charts(memory_chart)\
    .with_api_token('my-api-token')\
    .with_filters(app_filter)\
    .create()
```

If you are updating an existing dashboard:

```python
response = dash\
    .with_filters(app_filter)\
    .update()
```
<a name="detectors"></a>
### Creating Detectors

`signal_analog` provides a means of managing the lifecycle of `Detectors` in
the `signal_analog.detectors` module. As of `v0.21.0` only a subset of
the full Detector API is supported.

Consult the [upstream documentation][detectors] for more information about
Detectors.

Detectors are comprised of a few key elements:

  - A name
  - A SignalFlow Program
  - A set of rules for alerting

We start by building a `Detector` object and giving it a name:

```python
from signal_analog.detectors import Detector

detector = Detector().with_name('My Super Serious Detector')
```

We'll now need to give it a program to alert on:

```python
from signal_analog.flow import Program, Detect, Filter, Data
from signal_analog.combinators import GT

# This program fires an alert if memory utilization is above 90% for the
# 'bar' application.
data = Data('memory.utilization', filter=Filter('app', 'bar')).publish(label='A')
alert_label = 'Memory Utilization Above 90'
detect = Detect(GT(data, 90)).publish(label=alert_label)

detector.with_program(Program(detect))
```

With our name and program in hand, it's time to build up an alert rule that we
can use to notify our teammates:

```python
# We provide a number of notification strategies in the detectors module.
from signal_analog.detectors import EmailNotification, Rule, Severity

# `alert_label` is the label we published for our detector above.
info_rule = Rule()\
  .for_label(alert_label)\
  .with_severity(Severity.Info)\
  .with_notifications(EmailNotification('me@example.com'))

detector.with_rules(info_rule)

# We can now create this resource in SignalFx:
detector.with_api_token('foo').create()
# For a more robust solution consult the "Creating a CLI for your Resources"
# section below.
```

To add multiple alerting rules we would need to use different `detect`
statements with distinct `label`s to differentiate them from one another.

#### Detectors that Combine Data Streams

More complex detectors, like those created as a function of two other data
streams, require a more complex setup including data stream assignments.
If we wanted to create a detector that watched for an average above a certain
threshold, we may want to use the quotient of the sum() of the data and the
count() of the datapoints over a given period of time.

```python
from signal_analog.flow import Assign, Data, Detect, Program, Ref, When
from signal_analog.combinators import Div, GT

program = Program(
    Assign('my_var', Data('cpu.utilization')),
    Assign('my_other_var', Data('cpu.utilization').count()),
    Assign('mean', Div(Ref('my_var'), Ref('my_other_var'))),
    Detect(When(GT(Ref('mean'), 2000)))
)

print(program)
```

The above code generates the following program:

```
my_var = data('cpu.utilization')
my_other_var = data('cpu.utilization').count()
mean = (my_var / my_other_var)

detect(when(mean > 2000))
```

<a name="from_chart"></a>
#### Building Detectors from Existing Charts

We can also build up Detectors from an existing chart, which allows us to reuse
our SignalFlow program and ensure consistency between what we're monitoring
and what we're alerting on.

Let's assume that we already have a chart defined for our use:

```python
from signal_analog.flow import Program, Data
from signal_analog.charts import TimeSeriesChart

program = Program(Data('cpu.utilization').publish(label='A'))
cpu_chart = TimeSeriesChart().with_name('CPU Utilization').with_program(program)
```

In order to alert on this chart we'll use the `from_chart` builder for
detectors:

```python
from signal_analog.combinators import GT
from signal_analog.detectors import Detector
from signal_analog.flow import Detect

# Alert when CPU utilization rises above 95%
detector = Detector()\
    .with_name('CPU Detector')\
    .from_chart(
        cpu_chart,
        # `p` is the Program object from the cpu_chart we passed in.
        lambda p: Detect(GT(p.find_label('A'), 95)).publish(label='Info Alert')
    )
```

The above example won't actually alert on anything until we add a `Rule`, which
you can find examples for in the previous section.

<a name="flow"></a>
### Using Flow and Combinator Functions In Formulas

`signal_analog` also provides functions for combining SignalFlow statements
into more complex SignalFlow Formulas. These sorts of Formulas can be useful
when creating more complex detectors and charts. For instance, if you would like
to multiply one data stream by another and receive the sum of that Formula,
it can be accomplished using Op and Mul like so:

```python
from signal_analog.flow import Op, Data
from signal_analog.combinators import Mul

# Multiply stream A by stream B and sum the result
A = Data('request.mean')
B = Data('request.count')
C = Op(Mul(A, B)).sum()
```

Printing `C` in the above example produces the following output:

```
(data("request.mean") * data("request.count")).sum()
```
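
This works because each flow object knows how to render itself as SignalFlow text. The sketch below re-implements that rendering idea in plain Python; the class names mirror the library's, but the internals are assumptions for illustration only:

```python
# Minimal, illustrative rendering of a combinator DSL to SignalFlow text.
class Data:
    def __init__(self, metric):
        self.metric = metric

    def __str__(self):
        return 'data("{0}")'.format(self.metric)

class Mul:
    def __init__(self, left, right):
        self.left, self.right = left, right

    def __str__(self):
        # Combinators render their operands recursively.
        return '({0} * {1})'.format(self.left, self.right)

class Op:
    def __init__(self, expr):
        self.expr = expr

    def sum(self):
        # Returning a new Op keeps the fluent style intact.
        return Op('{0}.sum()'.format(self.expr))

    def __str__(self):
        return str(self.expr)

C = Op(Mul(Data('request.mean'), Data('request.count'))).sum()
print(C)  # (data("request.mean") * data("request.count")).sum()
```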

<a name="dashboard-groups"></a>
### Building Dashboard Groups

`signal_analog` provides abstractions for building dashboard groups in the
`signal_analog.dashboards` module.

Consult the [upstream documentation][dashboard-groups] for more information on
the Dashboard Groups API.

Building on the examples described in the previous section, we'd now like to
build a dashboard group containing our dashboards.

First, let's build a couple of Dashboard objects similar to how we did it in
the `Building Dashboards` example:

```python
from signal_analog.dashboards import Dashboard, DashboardGroup

dg = DashboardGroup()
dash1 = Dashboard().with_name('My Little Dashboard1: Metrics are Magic')\
    .with_charts(memory_chart)
dash2 = Dashboard().with_name('My Little Dashboard2: Metrics are Magic')\
    .with_charts(memory_chart)
```
**Note: we do not call `create` on the Dashboard objects ourselves; the
DashboardGroup object is responsible for creating all child resources.**

Many of the same methods for dashboards are available on dashboard groups as
well, so let's give our dashboard group a memorable name and configure its
API token:

```python
dg.with_name('My Dashboard Group')\
    .with_api_token('my-api-token')
```

Our final task will be to add dashboards to our dashboard group and create it
in the API!

```python
response = dg\
    .with_dashboards(dash1)\
    .with_api_token('my-api-token')\
    .create()
```

Now, storing API keys in source isn't ideal, so if you'd like to see how you
can pass in your API keys at runtime check the documentation below to see how
you can [dynamically build a CLI for your resources](#cli-builder).

<a name="dashboard-group-updates"></a>
### Updating Dashboard Groups

Once you have created a dashboard group, you can update properties like name
and description of a dashboard group or add/remove dashboards in a group.

*Example 1:*

```python
dg.with_api_token('my-api-token')\
    .update(name='updated_dashboard_group_name',
            description='updated_dashboard_group_description')
```

*Example 2:*

```python
dg.with_api_token('my-api-token').with_dashboards(dash1, dash2).update()
```

<a name="signalflow"></a>
### Talking to the SignalFlow API Directly

If you need to process SignalFx data outside the confines of the API it may be
useful to call the SignalFlow API directly. Note that you may incur time
penalties when pulling data out depending on the source of the data
(e.g. AWS/CloudWatch).

SignalFlow constructs are contained in the `flow` module. The following is an
example SignalFlow program that monitors an API service's (like [Riposte])
RPS metrics for the `foo` application in the `prod` environment.

```python
from signal_analog.flow import Data, Filter
from signal_analog.combinators import And

all_filters = And(Filter('env', 'prod'), Filter('app', 'foo'))

program = Data('requests.count', filter=all_filters).publish()
```

You now have an object representation of the SignalFlow program. To take it for
a test ride you can use the official SignalFx client like so:

```python
# Original example found here:
# https://github.com/signalfx/signalfx-python#executing-signalflow-computations

import signalfx
from signal_analog.flow import Data, Filter
from signal_analog.combinators import And

app_filter = Filter('app', 'foo')
env_filter = Filter('env', 'prod')
program = Data('requests.count', filter=And(app_filter, env_filter)).publish()

with signalfx.SignalFx().signalflow('MY_TOKEN') as flow:
    print('Executing {0} ...'.format(program))
    computation = flow.execute(str(program))

    for msg in computation.stream():
        if isinstance(msg, signalfx.signalflow.messages.DataMessage):
            print('{0}: {1}'.format(msg.logical_timestamp_ms, msg.data))
        if isinstance(msg, signalfx.signalflow.messages.EventMessage):
            print('{0}: {1}'.format(msg.timestamp_ms, msg.properties))
```

<a name="general-resource-guidlines"></a>
### General `Resource` Guidelines

#### Charts Always Belong to Dashboards

It is always assumed that a Chart belongs to an existing Dashboard. This makes
it easier for the library to manage the state of the world.

#### Resource Names are Unique per Account

In a `signal_analog` world it is assumed that all resource names are unique.
That is, if we have two dashboards named 'Foo Dashboard', we expect to see
errors when we attempt to update _either_ dashboard via `signal_analog`.

Resource names are assumed to be unique in order to simplify state management
by the library itself. In practice we have not found this to be a major
inconvenience.

#### Configuration is the Source of Truth

When conflicts arise between the state of a resource in your configuration and
what SignalFx thinks that state should be, this library **always** prefers the
local configuration.

#### Only "CCRUD" Methods Interact with the SignalFx API

`Resource` objects contain a number of builder methods to enable a "fluent" API
when describing your project's dashboards in SignalFx. It is assumed that these
methods do not perform state-affecting actions in the SignalFx API.

Only "CCRUD" (Create, Clone, Read, Update, and Delete) methods will affect the
state of your resources in SignalFx.

<a name="cli-builder"></a>
### Creating a CLI for your Resources

`signal_analog` provides builders for fully featured command line clients that
can manage the lifecycle of sets of resources.

#### Simple CLI integration

Integrating with the CLI is as simple as importing the builder and passing
it your resources. Let's consider an example where we want to update two
existing dashboards:

```python
#!/usr/bin/env python

# ^ It's always good to include a "hashbang" so that your terminal knows
# how to run your script.

from signal_analog.dashboards import Dashboard
from signal_analog.cli import CliBuilder

ingest_dashboard = Dashboard().with_name('my-ingest-service')
service_dashboard = Dashboard().with_name('my-service')

if __name__ == '__main__':
  cli = CliBuilder()\
      .with_resources(ingest_dashboard, service_dashboard)\
      .build()
  cli()
```

Assuming we called this `dashboards.py` we could run it in one of two ways:

  - Give the script execution rights and run it directly
  (typically `chmod +x dashboards.py`)
      - `./dashboards.py --api-key mykey update`
  - Pass the script in to the Python executor
      - `python dashboards.py --api-key mykey update`

If you want to know about the available actions you can take with your new
CLI you can always pass the `--help` flag:

```shell
./dashboards.py --help
```

This gives you the following features:
  - Consistent resource management
      - All resources passed to the CLI builder can be updated with one
      `update` invocation, rather than calling the `update()` method on each
      resource individually
  - API key handling for all resources
      - Rather than duplicating your API key for each resource, you can instead
      invoke the CLI with an API key
      - This also provides a way to supply keys for users who don't want to
      store them in source control (that's you! don't store your keys in
      source control)

<a name="contributing"></a>
## Contributing

Please read our [docs here for more info about contributing](CONTRIBUTING.md).

[sfxdocs]: https://developers.signalfx.com/docs/signalfx-api-overview
[signalflow]: https://developers.signalfx.com/docs/signalflow-overview
[charts]: https://developers.signalfx.com/reference#charts-overview-1
[terrific]: https://media.giphy.com/media/jir4LEGA68A9y/200.gif
[dashboards]: https://developers.signalfx.com/v2/reference#dashboards-overview
[dashboard-groups]: https://developers.signalfx.com/v2/reference#dashboard-groups-overview
[detectors]: https://developers.signalfx.com/v2/reference#detectors-overview
[Riposte]: https://github.com/Nike-inc/riposte


# History

## 1.2.0 (2018-04-11)
  * Added an Assign function that will enable more complex detectors which are constructed by combining multiple data streams
  * Added a Ref flow operator that will enable referencing assignments in a way that can be validated at later steps by checking for an Assign object with a match between the reference string and the assignee

## 1.1.0 (2018-04-04)
  * Introducing Dashboard Filters(only variables as of now) which can be configured to provide various filters that affect the behavior of all configured charts (overriding any conflicting filters at the chart level). You may wish to do this in order to quickly change the environment that you're observing for a given set of charts.

## 1.0.0 (2018-04-02)

  * Symbolic release for `signal_analog`. Future version bumps should conform
  to the `semver` policy outlined [here][deployment].

## 0.25.1 (2018-03-22)

  * The timeshift method's arguments changed. Now accepts a single argument for offset.

## 0.24.0 (2018-03-09)

  * Fix string parsing to not exclude boolean False, which is required for certain functions like .publish()

## 0.23.0 (2018-03-06)

  * Added Op class in flow.py to allow multiplying and dividing datastreams
  to create SignalFlow Functions

## 0.22.0 (2018-03-01)

  * Added Mul and Div combinators for multiplying and dividing streams
  * Added "enable" option for publishing a stream. Setting enable=False
    will hide that particular stream in a chart/detector.

## 0.21.0 (2018-02-28)

  * Dashboard Group support has been added, giving you the ability to group
  sets of dashboards together in a convenient construct
  * Detector support has been added giving you the ability to create detectors
  from scratch or re-use the SignalFlow program of an existing Chart
  * Dashboards and Charts now update via their `id` instead of by name to
  mitigate name conflicts when creating multiple resources with the same name
  * Dry-run results are now more consistent between all resources and expose
  the API call (sans-headers) that would have been made to use for the given
  resource

## 0.20.0 (2018-01-31)

  * Dashboards have learned how to update their child resources (e.g. if you
    add a chart in your config, the change will be reflected when you next run
    your configuration against SignalFx)
  * The CLI builder has learned how to pass dry-run options to its configured resources
  * Minor bugfixes for the `signal_analog.flow` module

## 0.19.1 (2018-01-26)

  * Added click to setup.py

## 0.19.0 (2018-01-19)

  * Added CLI builder to create and update dashboard resources

## 0.18.0 (2018-01-11)

  * Dashboard resources have learned to interactively prompt the user about
  whether to create a new dashboard when there is a pre-existing match (this
  behavior is disabled by default).
  * Added "Update Dashboard" functionality where a user can update the
  properties of a dashboard (only name and description for now)

## 0.17.0 (2018-01-11)
  * Added Heatmap Chart style
     * Added by Jeremy Hicks

## 0.16.0 (2018-01-10)
  * Added the ability to sort a list chart by value ascending/descending
      * Added by Jeremy Hicks

## 0.15.0 (2018-01-08)

  * Added "Scale" to ColorBy class for coloring thresholds in SingleValueChart
      * Added by Jeremy Hicks

## 0.14.0 (2018-01-04)

  * Added List Chart style
      * Added by Jeremy Hicks

## 0.13.0 (2018-01-04)

  * Dashboard resources have learned how to force create themselves in the
  SignalFx API regardless of a pre-existing match (this behavior is disabled
  by default).

## 0.12.0 (2017-12-21)

  * Dashboard resources have learned how to check for themselves in the
  SignalFx API, and will no longer create themselves if an exact match is found

## 0.3.0 (2017-09-25)

  * Adds support for base Resource object. Will be used for Chart/Dashboard
  abstractions in future versions.
  * Adds support for base Chart and TimeSeriesChart objects. Note that some
  TimeSeriesChart builder options have not yet been implemented (and marked
  clearly with NotImplementedErrors)

## 0.2.0 (2017-09-18)

  * Adds support for function combinators like `and`, `or`, and `not`

## 0.1.1 (2017-09-14)

  * Add README documentation

## 0.1.0 (2017-09-14)

  * Initial release

[deployment]: https://github.com/Nike-Inc/signal_analog/wiki/Developers-::-Deployment



            

Raw data

            {
    "maintainer": "", 
    "docs_url": null, 
    "requires_python": "", 
    "maintainer_email": "", 
    "cheesecake_code_kwalitee_id": null, 
    "keywords": "signal_analog signalfx dashboards charts detectors monitoring signalflow", 
    "upload_time": "2018-04-12 23:10:21", 
    "author": "Fernando Freire", 
    "home_page": "https://github.com/Nike-inc/signal_analog", 
    "github_user": "Nike-inc", 
    "download_url": "", 
    "platform": "", 
    "version": "1.2.0", 
    "cheesecake_documentation_id": null, 
    "description": "[![Build Status](https://travis-ci.com/Nike-Inc/signal_analog.svg?token=ifpf79nsCVxHsobs3pQ5&branch=master)](https://travis-ci.com/Nike-Inc/signal_analog)\n# signal_analog\n\nA troposphere-like library for managing SignalFx Charts, Dashboards, and\nDetectors.\n\nThis library assumes a basic familiarity with resources in SignalFx. For a\ngood overview of the SignalFx API consult the [upstream documentation][sfxdocs].\n\n## TOC\n\n  - [Features](#features)\n  - [Installation](#installation)\n  - [Usage](#usage)\n      - [Building Charts](#charts)\n      - [Building Dashboards](#dashboards)\n      - [Updating Dashboards](#dashboards-updates)\n      - [Dashboard Filters](#dashboard-filters)\n      - [Creating Detectors](#detectors)\n          - [Building Detectors from Existing Charts](#from_chart)\n      - [Using Flow and Combinator Functions In Formulas](#flow)\n      - [Building Dashboard Groups](#dashboard-groups)\n      - [Updating Dashboard Group](#dashboard-group-updates)\n      - [Talking to the SignalFlow API Directly](#signalflow)\n      - [General `Resource` Guidelines](#general-resource-guidlines)\n      - [Creating a CLI for your resources](#cli-builder)\n  - [Contributing](#contributing)\n\n<a name=\"features\"></a>\n## Features\n\n  - Provides bindings for the SignalFlow DSL\n  - Provides abstractions for:\n      - Charts\n      - Dashboards, DashboardGroups\n      - Detectors\n  - A CLI builder to wrap resource definitions (useful for automation)\n\n<a name=\"installation\"></a>\n## Installation\n\nAdd `signal_analog` to the requirements file in your project:\n\n```\n# requirements.txt\n# ... 
your other dependencies\nsignal_analog\n```\n\nThen run the following command to update your environment:\n\n```\npip install -r requirements.txt\n```\n\n<a name=\"usage\"></a>\n## Usage\n\n`signal_analog` provides two kinds of abstractions, one for building resources\nin the SignalFx API and the other for describing metric timeseries through the\n[Signal Flow DSL][signalflow].\n\nThe following sections describe how to use `Resource` abstractions in\nconjunction with the [Signal Flow DSL][signalflow].\n\n<a name=\"charts\"></a>\n### Building Charts\n\n`signal_analog` provides constructs for building charts in the\n`signal_analog.charts` module.\n\nConsult the [upstream documentation][charts] for more information Charts.\n\nLet's consider an example where we would like to build a chart to monitor\nmemory utilization for a single applicaton in a single environment.\n\nThis assumes a service reports metrics for application name as `app` and\nenvironment as `env` with memory utilization reporting via the\n`memory.utilization` metric name.\n\nIn a timeseries chart, all data displayed on the screen comes from at least one\n`data` definition in the SignalFlow language. Let's begin by defining our\ntimeseries:\n\n```python\nfrom signal_analog.flow import Data\n\nts = Data('memory.utilization')\n```\n\nIn SignalFlow parlance a timeseries is only displayed on a chart if it has been\n\"published\". 
All stream functions in SignalFlow have a `publish` method that\nmay be called at the _end_ of all timeseries transformations.\n\n```python\nts = Data('memory.utilization').publish()\n```\n\nAs a convenience, all transformations on stream functions return the callee,\nso in the above example `ts` remains bound to an instance of `Data`.\n\nNow, this timeseries isn't very useful by itself; if we attached this program\nto a chart we would see _all_ timeseries for _all_ [Riposte] applications\nreporting to SignalFx!\n\nWe can restrict our view of the data by adding a filter on application name:\n\n```python\nfrom signal_analog.flow import Data, Filter\n\napp_filter = Filter('app', 'foo')\n\nts = Data('memory.utilization', filter=app_filter).publish()\n```\n\nNow if we created a chart with this program we would only be looking at metrics\nthat relate to the `foo` application. Much better, but we're still\nlooking at instance of `foo` _regardless_ of the environment it\nlives in.\n\nWhat we'll want to do is combine our `app_filter` with another filter for the\nenvironment. The `signal_analog.combinators` module provides some helpful\nconstructs for achieving this goal:\n\n```python\nfrom signal_analog.combinators import And\n\nenv_filter = Filter('env', 'prod')\n\nall_filters = And(app_filter, env_filter)\n\nts = Data('memory.utilization', filter=all_filters).publish()\n```\n\nExcellent! 
We're now ready to create our chart.\n\nFirst, let's give our chart a name:\n\n```python\nfrom signal_analog.charts import TimeSeriesChart\n\nmemory_chart = TimeSeriesChart().with_name('Memory Used %')\n```\n\nLike it's `flow` counterparts, `charts` adhere to the builder pattern for\nconstructing objects that interact with the SignalFx API.\n\nWith our name in place, let's go ahead and add our program:\n\n```python\nmemory_chart = TimeSeriesChart().with_name('Memory Used %').with_program(ts)\n```\n\nEach Chart understands how to serialize our SignalFlow programs appropriately,\nso it is sufficient to simply pass in our reference here.\n\nFinally, let's change the plot type on our chart so that we see solid areas\ninstead of flimsy lines:\n\n```python\nfrom signal_analog.charts import PlotType\n\nmemory_chart = TimeSeriesChart()\\\n                 .with_name('Memory Used %')\\\n                 .with_program(ts)\n                 .with_default_plot_type(PlotType.area_chart)\n```\n\n[Terrific]; there's only a few more details before we have a complete chart.\n\nIn the following sections we'll see how we can create dashboards from\ncollections of charts.\n\n<a name=\"dashboards\"></a>\n### Building Dashboards\n\n`signal_analog` provides constructs for building charts in the\n`signal_analog.dashboards` module.\n\nConsult the [upstream documentation][dashboards] for more information on the\nDashboard API.\n\nBuilding on the examples described in the previous section, we'd now like to\nbuild a dashboard containing our memory chart.\n\nWe start with the humble `Dashboard` object:\n\n```python\nfrom signal_analog.dashboards import Dashboard\n\ndash = Dashboard()\n```\n\nMany of the same methods for charts are available on dashboards as well, so\nlet's give our dashboard a memorable name and configure it's API token:\n\n```python\ndash.with_name('My Little Dashboard: Metrics are Magic')\\\n    .with_api_token('my-api-token')\n```\n\nOur final task will be to add charts to 
our dashboard and create it in the API!

```python
response = dash\
  .with_charts(memory_chart)\
  .with_api_token('my-api-token')\
  .create()
```

At this point one of two things will happen:

  - We receive some sort of error from the SignalFx API and an exception
  is thrown
  - We successfully create the dashboard, in which case the JSON response is
  returned as a dictionary

Now, storing API keys in source isn't ideal, so if you'd like to see how you
can pass in your API keys at runtime, check the documentation below to see how
you can [dynamically build a CLI for your resources](#cli-builder).

<a name="dashboards-updates"></a>
### Updating Dashboards

Once you have created a dashboard you can update properties like its name and
description:

```python
dash.update(
    name='updated_dashboard_name',
    description='updated_dashboard_description'
)
```

`Dashboard` updates will also update any `Chart` configurations it owns.

    Note: if the given dashboard does not already exist, `update` will create a new dashboard for you.

<a name="dashboard-filters"></a>
### Providing Dashboard Filters

Dashboards can be configured to provide various filters that affect the
behavior of all configured charts (overriding any conflicting filters at the
chart level).
You may wish to do this in order to quickly change the environment that you're
observing for a given set of charts.

```python
from signal_analog.filters import DashboardFilters, FilterVariable

app_var = FilterVariable().with_alias('app')\
    .with_property('app')\
    .with_is_required(True)\
    .with_value('foo')

env_var = FilterVariable().with_alias('env')\
    .with_property('env')\
    .with_is_required(True)\
    .with_value('prod')

app_filter = DashboardFilters()\
    .with_variables(app_var, env_var)
```

Here we create a couple of filters, `app=foo` and `env=prod`. Now we can pass
this config to a dashboard object:

```python
response = dash\
    .with_charts(memory_chart)\
    .with_api_token('my-api-token')\
    .with_filters(app_filter)\
    .create()
```

If you are updating an existing dashboard:

```python
response = dash\
    .with_filters(app_filter)\
    .update()
```

<a name="detectors"></a>
### Creating Detectors

`signal_analog` provides a means of managing the lifecycle of `Detectors` in
the `signal_analog.detectors` module.
As of `v0.21.0` only a subset of
the full Detector API is supported.

Consult the [upstream documentation][detectors] for more information about
Detectors.

Detectors are composed of a few key elements:

  - A name
  - A SignalFlow program
  - A set of rules for alerting

We start by building a `Detector` object and giving it a name:

```python
from signal_analog.detectors import Detector

detector = Detector().with_name('My Super Serious Detector')
```

We'll now need to give it a program to alert on:

```python
from signal_analog.flow import Program, Detect, Filter, Data
from signal_analog.combinators import GT

# This program fires an alert if memory utilization is above 90% for the
# 'bar' application.
data = Data('memory.utilization', filter=Filter('app', 'bar')).publish(label='A')
alert_label = 'Memory Utilization Above 90'
detect = Detect(GT(data, 90)).publish(label=alert_label)

detector.with_program(Program(detect))
```

With our name and program in hand, it's time to build up an alert rule that we
can use to notify our teammates:

```python
# We provide a number of notification strategies in the detectors module.
from signal_analog.detectors import EmailNotification, Rule, Severity

# `alert_label` comes from the detector program defined above.
info_rule = Rule()\
  .for_label(alert_label)\
  .with_severity(Severity.Info)\
  .with_notifications(EmailNotification('me@example.com'))

detector.with_rules(info_rule)

# We can now create this resource in SignalFx:
detector.with_api_token('foo').create()
# For a more robust solution consult the "Creating a CLI for your Resources"
# section below.
```

To add multiple alerting rules we would need to use different `detect`
statements with distinct `label`s to differentiate them from one another.

#### Detectors that Combine Data Streams

More complex detectors, like those created as a function of two other data
streams, require a more complex setup including data stream
assignments. If we wanted to create a detector that watched for an average
above a certain threshold, we might use the quotient of the `sum()` of the
data and the `count()` of the datapoints over a given period of time.

```python
from signal_analog.flow import Assign, Data, Detect, Program, Ref, When
from signal_analog.combinators import Div, GT

program = Program(
    Assign('my_var', Data('cpu.utilization')),
    Assign('my_other_var', Data('cpu.utilization').count()),
    Assign('mean', Div(Ref('my_var'), Ref('my_other_var'))),
    Detect(When(GT(Ref('mean'), 2000)))
)

print(program)
```

The above code generates the following program:

```
my_var = data('cpu.utilization')
my_other_var = data('cpu.utilization').count()
mean = (my_var / my_other_var)

detect(when(mean > 2000))
```

<a name="from_chart"></a>
#### Building Detectors from Existing Charts

We can also build up Detectors from an existing chart, which allows us to reuse
our SignalFlow program and ensure consistency between what we're monitoring
and what we're alerting on.

Let's assume that we already have a chart defined for our use:

```python
from signal_analog.flow import Program, Data
from signal_analog.charts import TimeSeriesChart

program = Program(Data('cpu.utilization').publish(label='A'))
cpu_chart = TimeSeriesChart().with_name('CPU Utilization').with_program(program)
```

In order to alert on this chart we'll use the `from_chart` builder for
detectors:

```python
from signal_analog.combinators import GT
from signal_analog.detectors import Detector
from signal_analog.flow import Detect

# Alert when CPU utilization rises above 95%
detector = Detector()\
    .with_name('CPU Detector')\
    .from_chart(
        cpu_chart,
        # `p` is the Program object from the cpu_chart we passed in.
        lambda p: Detect(GT(p.find_label('A'), 95)).publish(label='Info Alert')
    )
```

The above example won't actually alert on anything until we add a `Rule`, which
you can find examples for in the previous section.

<a name="flow"></a>
### Using Flow and Combinator Functions In Formulas

`signal_analog` also provides functions for combining SignalFlow statements
into more complex SignalFlow formulas. These sorts of formulas can be useful
when creating more complex detectors and charts. For instance, if you would
like to multiply one data stream by another and take the sum of that formula,
you can accomplish it using `Op` and `Mul` like so:

```python
from signal_analog.flow import Op, Data
from signal_analog.combinators import Mul

# Multiply stream A by stream B and sum the result
A = Data('request.mean')
B = Data('request.count')
C = Op(Mul(A, B)).sum()
```

Printing `C` in the above example would produce the following output:

```
(data("request.mean") * data("request.count")).sum()
```

<a name="dashboard-groups"></a>
### Building Dashboard Groups

`signal_analog` provides abstractions for building dashboard groups in the
`signal_analog.dashboards` module.

Consult the [upstream documentation][dashboard-groups] for more information on
the Dashboard Groups API.

Building on the examples described in the previous section, we'd now like to
build a dashboard group containing our dashboards.

First, let's build a couple of `Dashboard` objects similar to how we did it in
the "Building Dashboards" example:

```python
from signal_analog.dashboards import Dashboard, DashboardGroup

dg = DashboardGroup()
dash1 = Dashboard().with_name('My Little Dashboard1: Metrics are Magic')\
    .with_charts(memory_chart)
dash2 = Dashboard().with_name('My Little Dashboard2: Metrics are Magic')\
    .with_charts(memory_chart)
```

**Note: we do not create the Dashboard objects ourselves; the DashboardGroup
object is responsible for creating all child resources.**

Many
of the same methods for dashboards are available on dashboard groups as well,
so let's give our dashboard group a memorable name and configure its API
token:

```python
dg.with_name('My Dashboard Group')\
    .with_api_token('my-api-token')
```

Our final task will be to add dashboards to our dashboard group and create it
in the API!

```python
response = dg\
    .with_dashboards(dash1)\
    .with_api_token('my-api-token')\
    .create()
```

Now, storing API keys in source isn't ideal, so if you'd like to see how you
can pass in your API keys at runtime, check the documentation below to see how
you can [dynamically build a CLI for your resources](#cli-builder).

<a name="dashboard-group-updates"></a>
### Updating Dashboard Groups

Once you have created a dashboard group, you can update properties like its
name and description, or add/remove dashboards in the group.

*Example 1:*

```python
dg.with_api_token('my-api-token')\
    .update(name='updated_dashboard_group_name',
            description='updated_dashboard_group_description')
```

*Example 2:*

```python
dg.with_api_token('my-api-token').with_dashboards(dash1, dash2).update()
```

<a name="signalflow"></a>
### Talking to the SignalFlow API Directly

If you need to process SignalFx data outside the confines of the API it may be
useful to call the SignalFlow API directly. Note that you may incur time
penalties when pulling data out depending on the source of the data
(e.g. AWS/CloudWatch).

SignalFlow constructs are contained in the `flow` module.
The following is an
example SignalFlow program that monitors an API service's (like [Riposte])
RPS metrics for the `foo` application in the `prod` environment.

```python
from signal_analog.flow import Data, Filter
from signal_analog.combinators import And

all_filters = And(Filter('env', 'prod'), Filter('app', 'foo'))

program = Data('requests.count', filter=all_filters).publish()
```

You now have an object representation of the SignalFlow program. To take it
for a test ride you can use the official SignalFx client like so:

```python
# Original example found here:
# https://github.com/signalfx/signalfx-python#executing-signalflow-computations

import signalfx
from signal_analog.flow import Data, Filter
from signal_analog.combinators import And

app_filter = Filter('app', 'foo')
env_filter = Filter('env', 'prod')
program = Data('requests.count', filter=And(app_filter, env_filter)).publish()

with signalfx.SignalFx().signalflow('MY_TOKEN') as flow:
    print('Executing {0} ...'.format(program))
    computation = flow.execute(str(program))

    for msg in computation.stream():
        if isinstance(msg, signalfx.signalflow.messages.DataMessage):
            print('{0}: {1}'.format(msg.logical_timestamp_ms, msg.data))
        if isinstance(msg, signalfx.signalflow.messages.EventMessage):
            print('{0}: {1}'.format(msg.timestamp_ms, msg.properties))
```

<a name="general-resource-guidlines"></a>
### General `Resource` Guidelines

#### Charts Always Belong to Dashboards

It is always assumed that a Chart belongs to an existing Dashboard.
This makes
it easier for the library to manage the state of the world.

#### Resource Names are Unique per Account

In a `signal_analog` world it is assumed that all resource names are unique.
That is, if we have two dashboards named 'Foo Dashboard', we expect to see
errors when we attempt to update _either_ dashboard via `signal_analog`.

Resource names are assumed to be unique in order to simplify state management
by the library itself. In practice we have not found this to be a major
inconvenience.

#### Configuration is the Source of Truth

When conflicts arise between the state of a resource in your configuration and
what SignalFx thinks that state should be, this library **always** prefers the
local configuration.

#### Only "CCRUD" Methods Interact with the SignalFx API

`Resource` objects contain a number of builder methods to enable a "fluent"
API when describing your project's dashboards in SignalFx. It is assumed that
these methods do not perform state-affecting actions in the SignalFx API.

Only "CCRUD" (Create, Clone, Read, Update, and Delete) methods will affect the
state of your resources in SignalFx.

<a name="cli-builder"></a>
### Creating a CLI for your Resources

`signal_analog` provides builders for fully featured command line clients that
can manage the lifecycle of sets of resources.

#### Simple CLI Integration

Integrating with the CLI is as simple as importing the builder and passing it
your resources.
Let's consider an example where we want to update two
existing dashboards:

```python
#!/usr/bin/env python

# ^ It's always good to include a "shebang" so that your terminal knows
# how to run your script.

from signal_analog.dashboards import Dashboard
from signal_analog.cli import CliBuilder

ingest_dashboard = Dashboard().with_name('my-ingest-service')
service_dashboard = Dashboard().with_name('my-service')

if __name__ == '__main__':
  cli = CliBuilder()\
      .with_resources(ingest_dashboard, service_dashboard)\
      .build()
  cli()
```

Assuming we called this `dashboards.py` we could run it in one of two ways:

  - Give the script execution rights and run it directly
  (typically `chmod +x dashboards.py`)
      - `./dashboards.py --api-key mykey update`
  - Pass the script in to the Python interpreter
      - `python dashboards.py --api-key mykey update`

If you want to know about the available actions you can take with your new
CLI, you can always run the `--help` command:

```shell
./dashboards.py --help
```

This gives you the following features:

  - Consistent resource management
      - All resources passed to the CLI builder can be updated with one
      `update` invocation, rather than calling the `update()` method on each
      resource individually
  - API key handling for all resources
      - Rather than duplicating your API key for each resource, you can
      instead invoke the CLI with an API key
      - This also provides a way to supply keys for users who don't want to
      store them in source control (that's you!
don't store your keys in source control)

<a name="contributing"></a>
## Contributing

Please read our [docs here for more info about contributing](CONTRIBUTING.md).

[sfxdocs]: https://developers.signalfx.com/docs/signalfx-api-overview
[signalflow]: https://developers.signalfx.com/docs/signalflow-overview
[charts]: https://developers.signalfx.com/reference#charts-overview-1
[terrific]: https://media.giphy.com/media/jir4LEGA68A9y/200.gif
[dashboards]: https://developers.signalfx.com/v2/reference#dashboards-overview
[dashboard-groups]: https://developers.signalfx.com/v2/reference#dashboard-groups-overview
[detectors]: https://developers.signalfx.com/v2/reference#detectors-overview
[Riposte]: https://github.com/Nike-inc/riposte

# History

## 1.2.0 (2018-04-11)

  * Added an `Assign` function that enables more complex detectors
  constructed by combining multiple data streams
  * Added a `Ref` flow operator that enables referencing assignments in a way
  that can be validated at later steps, by checking for an `Assign` object
  with a match between the reference string and the assignee

## 1.1.0 (2018-04-04)

  * Introduced Dashboard Filters (only variables as of now), which can be
  configured to provide various filters that affect the behavior of all
  configured charts (overriding any conflicting filters at the chart level).
  You may wish to do this in order to quickly change the environment that
  you're observing for a given set of charts.

## 1.0.0 (2018-04-02)

  * Symbolic release for `signal_analog`. Future version bumps should conform
  to the `semver` policy outlined [here][deployment].

## 0.25.1 (2018-03-22)

  * The timeshift method's arguments changed.
It now accepts a single argument for offset.

## 0.24.0 (2018-03-09)

  * Fixed string parsing to not exclude boolean `False`, which is required
  for certain functions like `.publish()`

## 0.23.0 (2018-03-06)

  * Added the `Op` class in `flow.py` to allow multiplying and dividing data
  streams to create SignalFlow functions

## 0.22.0 (2018-03-01)

  * Added `Mul` and `Div` combinators for multiplying and dividing streams
  * Added an "enable" option for publishing a stream. Setting `enable=False`
    will hide that particular stream in a chart/detector.

## 0.21.0 (2018-02-28)

  * Dashboard Group support has been added, giving you the ability to group
  sets of dashboards together in a convenient construct
  * Detector support has been added, giving you the ability to create
  detectors from scratch or re-use the SignalFlow program of an existing
  Chart
  * Dashboards and Charts now update via their `id` instead of by name, to
  mitigate name conflicts when creating multiple resources with the same name
  * Dry-run results are now more consistent between all resources and expose
  the API call (sans headers) that would have been made for the given
  resource

## 0.20.0 (2018-01-31)

  * Dashboards have learned how to update their child resources (e.g.
if you
    add a chart in your config, the change will be reflected when you next
    run your configuration against SignalFx)
  * The CLI builder has learned how to pass dry-run options to its configured
  resources
  * Minor bugfixes for the `signal_analog.flow` module

## 0.19.1 (2018-01-26)

  * Added `click` to `setup.py`

## 0.19.0 (2018-01-19)

  * Added a CLI builder to create and update dashboard resources

## 0.18.0 (2018-01-11)

  * Dashboard resources have learned to interactively prompt about whether
  the user wants to create a new dashboard if there is a pre-existing match
  (this behavior is disabled by default).
  * Added "Update Dashboard" functionality where a user can update the
  properties of a dashboard (only name and description for now)

## 0.17.0 (2018-01-11)

  * Added Heatmap Chart style
      * Added by Jeremy Hicks

## 0.16.0 (2018-01-10)

  * Added the ability to sort a list chart by value ascending/descending
      * Added by Jeremy Hicks

## 0.15.0 (2018-01-08)

  * Added "Scale" to the ColorBy class for coloring thresholds in
  SingleValueChart
      * Added by Jeremy Hicks

## 0.14.0 (2018-01-04)

  * Added List Chart style
      * Added by Jeremy Hicks

## 0.13.0 (2018-01-04)

  * Dashboard resources have learned how to force-create themselves in the
  SignalFx API regardless of a pre-existing match (this behavior is disabled
  by default).

## 0.12.0 (2017-12-21)

  * Dashboard resources have learned how to check for themselves in the
  SignalFx API, and will no longer create themselves if an exact match is
  found

## 0.3.0 (2017-09-25)

  * Adds support for a base Resource object. Will be used for Chart/Dashboard
  abstractions in future versions.
  * Adds support for base Chart and TimeSeriesChart objects.
Note that some
  TimeSeriesChart builder options have not yet been implemented (and are
  clearly marked with `NotImplementedError`s)

## 0.2.0 (2017-09-18)

  * Adds support for function combinators like `and`, `or`, and `not`

## 0.1.1 (2017-09-14)

  * Adds README documentation

## 0.1.0 (2017-09-14)

  * Initial release

[deployment]: https://github.com/Nike-Inc/signal_analog/wiki/Developers-::-Deployment