shepherd-herd

Name: shepherd-herd
Version: 0.7.0
Homepage: https://pypi.org/project/shepherd-herd
Summary: Synchronized Energy Harvesting Emulator and Recorder CLI
Upload time: 2023-10-09 20:49:45
Author: Kai Geissdoerfer, Ingmar Splitt
Requires Python: >=3.8
License: MIT
Keywords: testbed, beaglebone, pru, batteryless, energyharvesting, solar
# Shepherd-Herd

[![PyPiVersion](https://img.shields.io/pypi/v/shepherd_herd.svg)](https://pypi.org/project/shepherd_herd)
[![CodeStyle](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

*Shepherd-herd* is the command line utility for controlling a group of shepherd nodes remotely through an IP-based network.

---

**Documentation**: [https://orgua.github.io/shepherd/](https://orgua.github.io/shepherd/)

**Source Code**: [https://github.com/orgua/shepherd](https://github.com/orgua/shepherd)

---

## Installation

*shepherd-herd* is a Python package, available on [PyPI](https://pypi.org/project/shepherd_herd).
Use your Python package manager to install it.
For example, using pip:

```Shell
pip3 install shepherd-herd
```
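
If you prefer an isolated environment, a minimal sketch using the standard `venv` module (not project-specific tooling):

```Shell
# create and activate a virtual environment, then install
python3 -m venv .venv
source .venv/bin/activate
pip install shepherd-herd
```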

To install directly from the GitHub sources (here the `dev` branch):

```Shell
pip install git+https://github.com/orgua/shepherd.git@dev#subdirectory=software/shepherd-herd -U
```

To install from local sources:

```Shell
cd shepherd/software/shepherd-herd/
pip3 install . -U
```

## Usage

All *shepherd-herd* commands require the list of hosts on which to perform the requested action.
This list of hosts is provided with the `-i` option, which takes either the path to a file or a comma-separated list of hosts (compare Ansible's `-i`).

For example, save the following Ansible-style, YAML-formatted inventory file as `herd.yml` in your current working directory.

```yaml
sheep:
  hosts:
    sheep0:
    sheep1:
    sheep2:
  vars:
    ansible_user: jane
```
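
Standard Ansible inventory syntax also allows per-host variables, e.g. for pinning an IP address when hostnames don't resolve. A sketch (the address is a placeholder):

```yaml
sheep:
  hosts:
    sheep0:
      ansible_host: 192.168.1.10  # placeholder IP, adjust to your network
  vars:
    ansible_user: jane
```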

To find active nodes, a ping sweep (in this example from .1 to .64) can be run with:

```Shell
nmap -sn 192.168.1.1-64
```

After setting up the inventory, use shepherd-herd to check if all your nodes are responding correctly:

```Shell
shepherd-herd -i herd.yml shell-cmd "echo 'hello'"
```

Or, equivalently, define the list of hosts on the command line:

```Shell
shepherd-herd -i sheep0,sheep1,sheep2, shell-cmd "echo 'hello'"
```

To **simplify usage** it is recommended to place the `herd.yml` in one of these locations (in descending lookup priority):

- relative to your current working directory in `inventory/herd.yml`
- in your local home-directory `~/herd.yml`
- in the **config path** `/etc/shepherd/herd.yml` (**recommendation**)
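
A minimal sketch for placing the file at the recommended config path (assumes sudo privileges on your machine):

```Shell
# create the config directory if needed and copy the inventory there
sudo mkdir -p /etc/shepherd
sudo cp herd.yml /etc/shepherd/herd.yml
```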

From then on you can just call:

```Shell
shepherd-herd shell-cmd "echo 'hello'"
```

Or select individual sheep from the herd:

```Shell
shepherd-herd --limit sheep0,sheep2, shell-cmd "echo 'hello'"
```

## Library-Examples

See [example-files](https://github.com/orgua/shepherd/tree/main/software/shepherd-herd/examples/) for details.


## CLI-Examples

Here we provide a selection of examples of how to use *shepherd-herd*. It is assumed that the `herd.yml` is located at the recommended config path.

For a full list of supported commands and options, run `shepherd-herd --help`; for more detail on each command, run `shepherd-herd [COMMAND] --help`.

### Harvesting

Simultaneously start harvesting the connected energy sources on the nodes:

```Shell
shepherd-herd harvest -a cv20 -d 30 -o hrv.h5
```

or, alternatively, with long arguments:

```Shell
shepherd-herd harvest --virtual-harvester cv20 --duration 30.0 --output-path hrv.h5
```

Explanation:

- uses the cv20 algorithm as virtual harvester (constant voltage, 2.0 V)
- the duration is 30 s
- the file will be stored at `/var/shepherd/recordings/hrv.h5` and not forcefully overwritten if it already exists (add `-f` for that)
- nodes will sync up and start immediately (otherwise add `--no-start`)

For more harvesting algorithms see [virtual_harvester_fixture.yaml](https://github.com/orgua/shepherd-datalib/blob/main/shepherd_core/shepherd_core/data_models/content/virtual_harvester_fixture.yaml).

### Emulation

Use the previously recorded harvest for emulating an energy environment for the attached sensor nodes and monitor their power consumption and GPIO events:

```Shell
shepherd-herd emulate --virtual-source BQ25504 -o emu.h5 hrv.h5
```

Explanation:

- the duration (`-d`) defaults to that of the input file (`hrv.h5`)
- target port A will be selected for current-monitoring and IO-routing (implicit `--enable-io --io-port A --pwr-port A`)
- the second target port will stay unpowered (add `--voltage-aux` to power it)
- the virtual source will be configured as a BQ25504 converter
- the file will be stored at `/var/shepherd/recordings/emu.h5` and not forcefully overwritten if it already exists (add `-f` for that)
- nodes will sync up and start immediately (otherwise add `--no-start`)
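
The implicit port selection from the list above can also be spelled out. A sketch combining the long options named there (assuming they are accepted alongside the short forms):

```Shell
shepherd-herd emulate --virtual-source BQ25504 --enable-io --io-port A --pwr-port A -o emu.h5 hrv.h5
```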

For more virtual source models see [virtual_source_fixture.yaml](https://github.com/orgua/shepherd-datalib/blob/main/shepherd_core/shepherd_core/data_models/content/virtual_source_fixture.yaml).

### Generalized Task-Execution

An individual task or a set of tasks can be generated from experiments via the [shepherd-core](https://pypi.org/project/shepherd-core/) package of the [datalib](https://github.com/orgua/shepherd-datalib):

```Shell
shepherd-herd run experiment_file.yaml --attach
```

Explanation:

- a set of tasks is sent to the individual sheep and executed there
- [tasks](https://github.com/orgua/shepherd-datalib/tree/main/shepherd_core/shepherd_core/data_models/task) currently include:

  - modifying firmware / patching a node-id,
  - flashing firmware to the targets,
  - running an emulation- or harvest-task

- individual tasks can be bundled into observer-tasks (a task-set for one sheep), and observer-tasks can be bundled once more into testbed-tasks
- `--attach` means the program stays attached to the task and shows the CLI output of the sheep once the measurements are done


### File-distribution & retrieval

Recordings and config-files can be **distributed** to the remote nodes via:

```Shell
shepherd-herd distribute hrv.h5
```

The default remote path is `/var/shepherd/recordings/`. For security reasons, only two base paths are allowed:

- `/var/shepherd/` for hdf5-recordings
- `/etc/shepherd/` for yaml-config-files

To retrieve the recordings from the shepherd nodes and store them locally on your machine in the current working directory (`./`):

```Shell
shepherd-herd retrieve hrv.h5 ./
```

Explanation:

- looks for the remote file `/var/shepherd/recordings/hrv.h5` (when not issuing an absolute path)
- the remote file is not deleted (add `-d` for that)
- be sure the measurement is done, otherwise you get a partial file (or add `--force-stop` to force it)
- files will be put in the current working directory (`./rec_[node-name].h5`, or `./[node-name]/hrv.h5` if you add `--separate`)
- you can add `--timestamp` to extend the filename (`./rec_[timestamp]_[node-name].h5`)
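
Combining the flags named above, a sketch that stores the files in per-node directories and deletes the remote copies afterwards:

```Shell
shepherd-herd retrieve -d --separate hrv.h5 ./
```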

### Start, check and stop Measurements

Manually **starting** a pre-configured measurement can be done via:

```Shell
shepherd-herd start
```

**Note 1**: the configuration is loaded locally from `/etc/shepherd/config.yml`.

**Note 2**: the start itself is not synchronized (you have to set `time_start` in the config).

The current state of the measurement can be **checked** with (console printout and return code):

```Shell
shepherd-herd status
```
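
Since `status` also reports via return code, it can drive simple polling scripts. A sketch, assuming an exit code of 0 while a measurement is running (verify this assumption against your version):

```Shell
# poll every 10 s until no measurement is active anymore
while shepherd-herd status; do
    sleep 10
done
```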

If a measurement runs indefinitely, or something else came up and you want to **stop** it forcefully:

```Shell
shepherd-herd -l sheep1 stop
```

### Creating an Inventory

This creates an overview of what is running on the individual sheep / hosts. One inventory file is created per host:

```Shell
shepherd-herd inventorize ./
```

### Programming Targets (pru-programmer)

The integrated programmer can flash a firmware image to an MSP430FR (via SBW) or an nRF52 (via SWD) and shares its interface with `shepherd-sheep`. This example writes the image `firmware_img.hex` to an MSP430 on target port B and its programming port 2:

```Shell
shepherd-herd program --mcu-type msp430 --target-port B --mcu-port 2 firmware_img.hex
```

To check the available options and arguments, call:

```Shell
shepherd-herd program --help
```

The options default to:

- nRF52 as Target
- Target Port A
- Programming Port 1
- 3 V Target Supply
- 500 kbit/s
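
Spelled out, these defaults correspond to a call like the following sketch (the lowercase `nrf52` value is an assumption mirroring the `msp430` spelling above; supply voltage and data rate are left at their defaults):

```Shell
shepherd-herd program --mcu-type nrf52 --target-port A --mcu-port 1 firmware_img.hex
```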


### Deprecated - Programming Targets (OpenOCD Interface)

Flash a firmware image `firmware_img.hex` that is stored on the local machine in your current working directory to the attached sensor nodes:

```Shell
shepherd-herd target flash firmware_img.hex
```

Reset the sensor nodes:

```Shell
shepherd-herd target reset
```

### Shutdown

Sheep can either be forced to power down completely or, as in this example, to reboot:

```Shell
shepherd-herd poweroff --restart
```

**NOTE**: Be sure to have physical access to the hardware in case you need to start the nodes again manually.

## Testbench

For testing `shepherd-herd` there must be a valid `herd.yml` at one of the three locations mentioned above (see [Usage](#usage)) with at least one accessible sheep node. Navigate your host shell into the package folder `/shepherd/software/shepherd-herd/` and run the following commands to set up and run the testbench (~30 tests):

```Shell
pip3 install ./[tests]
pytest
```
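
For iterating on failures, standard pytest options (not specific to this project) can help:

```Shell
pytest -x -v          # verbose, stop at the first failure
pytest -k retrieve    # run only tests whose names match "retrieve"
```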

## ToDo

- None