intervalsicu-to-influxdb


Name: intervalsicu-to-influxdb
Version: 0.2.4
Home page: https://codeberg.org/tmllull/intervalsicu-to-influxdb
Summary: A package to extract data from intervals.icu to InfluxDB
Upload time: 2023-09-06 10:41:02
Author: Toni Miquel Llull
Requires Python: >=3
License: GPL3
Keywords: intervalsicu, influxdb, sport
# Intervals.icu to InfluxDB
Intervalsicu-to-influxdb is a personal project to extract data from Intervals.icu to InfluxDB (oh, really?). But if Intervals.icu already shows plenty of graphs, statistics and more, why would I need to extract it?

Full documentation can be found [here](https://intervalsicu-to-influxdb.readthedocs.io).

## Why
Well, as a sportsman and techie, it's just a personal project, but the main reason is that I want to build my own dashboards (using Grafana in this case).

So, for example, I can combine activity data with sleep time or quality, compare how pace/bpm evolve across similar activities, and so on.

![Grafana Dashboard example](docs/screenshots/image.png)
![Grafana Dashboard example2](docs/screenshots/image2.png)

## How it works
This project exports some data from [intervals.icu](https://intervals.icu) to [InfluxDB](https://www.influxdata.com/). The information is retrieved through the official [intervals.icu API](https://intervals.icu/api/v1/docs/swagger-ui/index.html).
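
For context, the kind of request the extractor performs under the hood looks roughly like the sketch below. It is illustrative only and not the project's code; the exact endpoint, parameter names and the Basic-auth convention (literal username `API_KEY`) should be checked against the Swagger docs linked above.

```python
# Illustrative only, not the project's code: a single call against the
# intervals.icu API using `requests`. Verify endpoint/params in the Swagger docs.
import requests

ATHLETE_ID = "i12345"      # example athlete id
API_KEY = "your-api-key"   # your intervals.icu API key

resp = requests.get(
    f"https://intervals.icu/api/v1/athlete/{ATHLETE_ID}/activities",
    auth=("API_KEY", API_KEY),  # Basic auth with the literal username "API_KEY"
    params={"oldest": "2023-01-01", "newest": "2023-01-31"},  # assumed param names
)
resp.raise_for_status()
print(len(resp.json()), "activities")
```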

### Exported data
Not all information is exported. This project was created to extract data from activities and wellness. Account information (like email, location, preferences, etc.), calendar entries and workouts are not retrieved either (for now).

Currently the following data is exported:
- **Wellness**\*: information like sleep time and quality, ATL/CTL or VO2Max
- **Activities**\*: general information about every activity, like elapsed time, time in zones (HR or pace), distance, average pace/HR, etc.
- **Streams**\*\*: detailed information about activities, like HR/pace for every second.

\* Some extra fields are generated, just to make the data easier to use in dashboards (see [Entities](Entities.md)).

\*\* Currently working on it.
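
To give an idea of how the exported data can then be consumed outside of Grafana (for sanity checks or ad-hoc analysis), here is a minimal sketch using the official `influxdb-client` package. The measurement name `wellness` and the field `sleepSecs` are hypothetical placeholders; check [Entities](Entities.md) for the actual schema written by the extractor. It assumes `influxdb-client` and `python-dotenv` are installed and the `.env` file described in the next section exists.

```python
# Illustrative sketch only: the measurement and field names below are
# hypothetical placeholders, see Entities.md for the real schema.
import os

from dotenv import load_dotenv
from influxdb_client import InfluxDBClient

load_dotenv()  # read the same .env file described in the "How to use" section

client = InfluxDBClient(
    url=os.environ["INFLUXDB_URL"],
    token=os.environ["INFLUXDB_TOKEN"],
    org=os.environ["INFLUXDB_ORG"],
)

# "wellness" and "sleepSecs" are placeholders, not confirmed names
flux = f'''
from(bucket: "{os.environ["INFLUXDB_BUCKET"]}")
  |> range(start: -30d)
  |> filter(fn: (r) => r._measurement == "wellness")
  |> filter(fn: (r) => r._field == "sleepSecs")
'''

for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_value())
```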

## How to use
There are two ways (three if you count 'from source code') to use the project: with Docker or directly with Python (or from source code). In both cases you need to create a `.env` file holding your credentials for Intervals.icu and InfluxDB, as follows:

```
INFLUXDB_TOKEN=
INFLUXDB_ORG=
INFLUXDB_URL=
INFLUXDB_BUCKET=
INFLUXDB_TIMEOUT=10000
INTERVALS_ATHLETE_ID=
INTERVALS_API_KEY=
```
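
A filled-in file might look like this (illustrative values only; the URL assumes a local InfluxDB 2.x instance):

```
INFLUXDB_TOKEN=my-influxdb-api-token
INFLUXDB_ORG=my-org
INFLUXDB_URL=http://localhost:8086
INFLUXDB_BUCKET=intervals
INFLUXDB_TIMEOUT=10000
INTERVALS_ATHLETE_ID=i12345
INTERVALS_API_KEY=my-intervals-api-key
```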

### Docker
To use with Docker, just run the following command:

```bash
docker run --env-file PATH/TO/FILE -it --rm tmllull/intervals-to-influxdb app.py [-h] [--start-date START_DATE] [--end-date END_DATE] [--streams] [--reset]
```

#### Arguments
All the arguments are optional, but take the following variations into account when running it:

- No arguments: retrieve the wellness and activities data for today (this is the basic usage for running it as a cronjob; see the crontab sketch after this list)
- Start date: retrieve data from the given date (in format YYYY-MM-DD) until today
- End date: retrieve data up to the given date (in format YYYY-MM-DD). Use it together with `start-date`
- Streams: retrieve the streams for the activities
- Reset: delete the current bucket and recreate it

NOTE: on the first run, the bucket is created automatically if it does not already exist in InfluxDB.
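
As mentioned in the argument list, the no-argument run is intended for a cronjob. A crontab entry could look like the following sketch (the `.env` path is just an example, and `-it` is dropped because cron provides no TTY):

```bash
# Illustrative crontab entry: export today's wellness/activities every day at 06:00.
# /opt/intervals/.env is an example path; -it is omitted because cron has no TTY.
0 6 * * * docker run --env-file /opt/intervals/.env --rm tmllull/intervals-to-influxdb app.py
```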

### With Python
If you want to run it directly with Python, first install the package:

```bash
pip install intervalsicu-to-influxdb
```

Then, the minimum code to run it is the following (remember to put the `.env` file in the same folder):

```python
from intervalsicu_to_influxdb.extractor import IntervalsToInflux

extractor = IntervalsToInflux()
extractor.all_data()
```

Save it as `app.py` and run it:

```bash
python app.py
```

#### Arguments
As with Docker, you can pass arguments when creating the extractor. For example:

```python
extractor = IntervalsToInflux(start_date="2023-01-01")
```
```python
extractor = IntervalsToInflux(streams=True)
```
```python
extractor = IntervalsToInflux(start_date="2023-01-01", end_date="2023-05-01")
```
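
The options can also be combined. For example, to reload a whole date range including streams (assuming `streams` and `reset` are accepted as keyword arguments, matching the positional signature used in the dynamic script below):

```python
extractor = IntervalsToInflux(
    start_date="2023-01-01",
    end_date="2023-05-01",
    streams=True,
    reset=True,  # assumption: keyword form of the --reset option shown above
)
```
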
#### Dynamic script
If you want to create a more dynamic script, here is a more complete example:

```python
import argparse

from intervalsicu_to_influxdb.extractor import IntervalsToInflux

parser = argparse.ArgumentParser()

parser.add_argument("--start-date", type=str, help="Start date in format YYYY-MM-DD")
parser.add_argument("--end-date", type=str, help="End date in format YYYY-MM-DD")
parser.add_argument(
    "--streams",
    action="store_true",
    help="Export streams for the activities",
)
parser.add_argument(
    "--reset", action="store_true", help="Reset influx bucket (delete and create)"
)

args = parser.parse_args()

# argparse defaults the date options to None and the store_true flags to False,
# so the parsed values can be passed straight to the extractor
extractor = IntervalsToInflux(args.start_date, args.end_date, args.reset, args.streams)
extractor.all_data()
```

Then run the script as before, but now you can use arguments (same as in the Docker section):

```bash
python app.py [-h] [--start-date START_DATE] [--end-date END_DATE] [--streams] [--reset]
```
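
For example, to backfill the whole of 2023 including streams:

```bash
python app.py --start-date 2023-01-01 --end-date 2023-12-31 --streams
```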

### From source code
If you want to run it from source code, clone the project and then follow these steps (remember to create the `.env` file):

#### Run with Docker
First, build the image:
```bash
docker build --tag intervals-to-influxdb .
```
Then run it just like in the Docker section above, but with the local image name:

```bash
docker run --env-file PATH/TO/FILE -it --rm intervals-to-influxdb app.py [-h] [--start-date START_DATE] [--end-date END_DATE] [--streams] [--reset]
```

#### Run with Python
First, install the package from source:
```bash
pip install .
```

Then run the script:
```bash
python app.py [-h] [--start-date START_DATE] [--end-date END_DATE] [--streams] [--reset]
```

            
