automatix-cmd

Name: automatix-cmd
Version: 2.2.0
Home page: https://github.com/vanadinit/automatix_cmd
Summary: Automation wrapper for bash and python commands
Author: Johannes Paul
Requires Python: >=3.10
License: MIT
Keywords: bash, shell, command, automation, process, wrapper, devops, system administration
Upload time: 2024-06-11 13:20:34

# automatix
Automation wrapper for bash and python commands. Extended Feature version.


# DESCRIPTION

**automatix** is a wrapper for scripted sysadmin tasks. It offers
 useful functionality for easier scripting and full control over
 the automated process.

The idea of **automatix** is to write down all the commands you would
 normally type into your command line or Python console in a YAML file,
 then use **automatix** to execute these commands.

**automatix** supports different modes of operation. Without any
 parameters, automatix will try to execute the specified command
 pipeline from the script file until an error occurs or the pipeline
 is done. The interactive mode (**-i**) asks for every single
 command-line step whether to execute, skip or abort.
 Forced mode (**-f**) will also proceed if errors occur.

**automatix** was originally designed for internal seibert// use.
 It therefore comes with Bundlewrap and Teamvault support as well as
 the possibility to use your own logging library.

This **automatix** version (automatix_cmd) is a fork of the original
**automatix** (https://github.com/seibert-media/automatix) with some
extended functionality. It is maintained privately in the author's free
time (not maintained by seibert//).

## Warning:

Beware that this tool cannot substitute the system administrator's
 brain and requires responsible handling, since you can do
 (and destroy) almost everything with it.

**Automatix** evaluates YAML files and executes defined commands as
 shell or python commands. There is no check for harmful commands.
 Be aware that this can cause critical damage to your system.

Please use the interactive mode and double-check commands before
 executing. Usage of automatix is at your own risk!


# INSTALLATION

Automatix requires Python ≥ 3.10.

```
pip install automatix_cmd
```

NOTICE: The original `automatix` and this `automatix_cmd` share the
same main entrypoint. To avoid overwriting and confusion,
install only **ONE** of them!

# CONFIGURATION

You can specify a path to a configuration YAML file via the
 environment variable **AUTOMATIX_CONFIG**.
Default location is "~/.automatix.cfg.yaml".
All (string) configuration values can be overwritten by the
 corresponding upper-case environment variables prefixed
 with 'AUTOMATIX_', e.g. _AUTOMATIX_ENCODING_.
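
For example, to point **automatix** to a different configuration file and override the encoding for one shell session (paths and values below are placeholders):

    export AUTOMATIX_CONFIG=~/configs/automatix.cfg.yaml
    export AUTOMATIX_ENCODING=latin-1   # overrides 'encoding' from the config file
    automatix myscript.yaml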

### Example: .automatix.cfg.yaml

    # Path to scripts directory
    script_dir: '~/automatix_script_files'
    
    # Global constants for use in pipeline scripts
    constants:
      apt_update: 'apt-get -qy update'
      apt_upgrade: 'DEBIAN_FRONTEND=noninteractive apt-get -qy -o Dpkg::Options::=--force-confold --no-install-recommends upgrade'
      apt_full_upgrade: 'DEBIAN_FRONTEND=noninteractive apt-get -qy -o Dpkg::Options::=--force-confold --no-install-recommends full-upgrade'
    
    # Encoding
    encoding: 'utf-8'
    
    # Path for shell imports
    import_path: '.'
    
    # SSH Command used for remote connections
    ssh_cmd: 'ssh {hostname} sudo '
    
    # Temporary directory on remote machines for shell imports
    remote_tmp_dir: 'automatix_tmp'
    
    # Logger
    logger: 'mylogger'
    
    # Logging library (has to implement the init_logger method)
    logging_lib: 'mylib.logging'

    # Logfile directory for parallel processing (ONLY for parallel processing!)
    logfile_dir: 'automatix_logs'
    
    # Bundlewrap support, bundlewrap has to be installed (default: false)
    bundlewrap: true
    
    # Teamvault / Secret support, bundlewrap-teamvault has to be installed (default: false)
    teamvault: true

    # Activate progress bar, python_progress_bar has to be installed (default: false)
    progress_bar: true
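
The `logging_lib` module referenced above has to implement an `init_logger` method. Its exact signature is not documented here, so the following Python sketch is only an assumption of what such a module could look like:

    # mylib/logging.py -- hypothetical module matching the config example above;
    # the init_logger signature is an assumption, not a documented API
    import logging

    def init_logger(name, debug=False):
        # create and configure the logger automatix is supposed to use
        logger = logging.getLogger(name)
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
        logger.addHandler(handler)
        logger.setLevel(logging.DEBUG if debug else logging.INFO)
        return logger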

# SYNOPSIS

**automatix**
      \[**--help**|**-h**\]
      \[**--systems** \[_SYSTEM1=ADDRESS_OR_NODENAME_ ...\]\]
      \[**--vars** \[_VAR1=VALUE1_ ...\]\]
      \[**--secrets** \[_SECRET1=SECRETID_ ...\]\]
      \[**--vars-file** _VARS_FILE_PATH_ \]
      \[**--parallel**\]
      \[**--print-overview**|**-p**\]
      \[**--jump-to**|**-j** _JUMP_TO_\]
      \[**--steps**|**-s** _STEPS_\]
      \[**--interactive**|**-i**\]
      \[**--force**|**-f**\]
      \[**--debug**|**-d**\]
      \[**--**\] **scriptfile**


## OPTIONS

**scriptfile**
: The only required parameter for this tool to work. Use " -- " if
 needed to delimit this from argument fields. See **SCRIPTFILE**
 section for more information.  

**-h**, **--help**
: View help message and exit.  

**--systems** _SYSTEM1=ADDRESS_OR_NODENAME_
: Use this to set systems without adding them to the
  scriptfile or to overwrite them. You can specify multiple
  systems like: --systems v1=string1 v2=string2 v3=string3  
  
**--vars** _VAR1=VALUE1_
: Use this to set vars without adding them to the scriptfile
  or to overwrite them. You can specify multiple vars
  like: --vars v1=string1 v2=string2 v3=string3  
  
**--secrets** _SECRET1=SECRETID_
: Use this to set secrets without adding them to the
  scriptfile or to overwrite them. You can specify multiple
  secrets like: --secrets v1=string1 v2=string2 v3=string3 *(only if
  teamvault is enabled)*  
  
**--vars-file** _VARS_FILE_PATH_
: Use this to specify a CSV file from which **automatix** reads
  systems, variables and secrets. The first row must contain the field
  types and names. You may also specify a `label` field.
  Example: `label,systems:mysystem,vars:myvar`. The automatix script will
  be processed for each row sequentially (see the example below).  
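
A hedged example of such a CSV file, matching the header format above (hosts and values are placeholders):

    label,systems:mysystem,vars:myvar
    webserver 1,web1.example.com,1.2.3
    webserver 2,web2.example.com,1.2.4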
  
**--parallel**
: Run CSV file entries in parallel in screen sessions; only valid with --vars-file.
  GNU screen has to be installed. See EXTRAS section below.

**--print-overview**, **-p**
: Just print the command pipeline overview with indices, then exit without
 executing the pipeline. Note that the *always pipeline* will be
 executed anyway.  

**--jump-to** _JUMP_TO_, **-j** _JUMP_TO_
: Jump to step with index _JUMP_TO_ instead of starting at the
 beginning. Use **-p** or the output messages to determine the
 desired step index. You can use negative numbers to start counting
 from the end.  

**--steps** _STEPS_, **-s** _STEPS_
: Only execute these steps (comma-separated indices) or exclude steps
 by prepending the comma-separated list with "e".
 Examples: `-s 1,3,7`, `-s e2`, `-s e0,5,7,2`  

**--interactive**, **-i**
: Confirm actions before executing.  
  
**--force**, **-f**
: Always try to proceed (except for manual steps), even if errors occur
 (no retries).  

**--debug**, **-d**
: Activate debug log level.  


### EXAMPLE: Usage

    automatix -i --systems source=sourcesystem.com target=targetsystem.org -- scriptfile.yaml


## SCRIPTFILE

The **scriptfile** describes your automated process. It contains
 information about systems, variables, secrets and the
 command pipeline.

You can provide a path to your **scriptfile** or place your
 scriptfile in the predefined directory (see **CONFIGURATION**
 section, _script_dir_). The path takes precedence over the predefined
 directory if the file exists in both locations.

The **scriptfile** has to contain valid YAML.

### EXAMPLE: scriptfile
    
    name: Migration Server XY
    # Systems you like to refer to in pipeline (accessible via 'SYSTEMS.source')
    # If Bundlewrap support is activated use node names instead of hostnames or add a preceding 'hostname!'.
    require_version: '1.5.0'
    systems:
      source: sourcesystem.com
      target: targetsystem.org
    # Custom vars to use in pipeline
    vars:
      version: 1.2.3
      domain: 'bla.mein-test-system'
    # Teamvault Secrets, if activated (left: like vars, right: SECRETID_FIELD, FIELD=username|password|file)
    secrets:
      web_user: v6GQag_username
      web_pw: v6GQag_password
    # Imports for functions you like to use (path may be modified in configuration)
    imports:
      - myfunctions.sh
    # Like the command pipeline, but will always be executed beforehand
    always:
      - python: |
          import mylib as nc
          PERSISTENT_VARS.update(locals())
    pipeline:
      - remote@target: systemctl stop server
      - remote@source: zfs snapshot -r tank@before-migration
      - manual: Please trigger preparing tasks via webinterface
      - myvar=local: curl -L -vvv -k https://{domain}/
      - local: echo "1.1.1.1 {domain}" >> /etc/hosts
      - sla=python: NODES.source.metadata.get('sla')
      - python: |
            sla = '{sla}'
            if sla == 'gold':
                print('Wow that\'s pretty cool. You have SLA Gold.')
            else:
                print('Oh. Running out of money? SLA Gold is worth it. You should check your wallet.')
            PERSISTENT_VARS['sla'] = sla
      - cond=python: sla == 'gold'
      - cond?local: echo "This command is only executed if sla is gold."
    cleanup:
      - local: rm temp_files


### FIELDS

**name** _(string)_
: Just a name for the process. Does not do anything.

**require_version** _(string)_
: Minimum required Automatix version for this script to run.

**systems** _(associative array)_
: Define some systems. Value has to be a valid SSH destination like an
 IP address or hostname. If Bundlewrap support is enabled, it has to
 be a valid and existing Bundlewrap node or group name, or you can 
 precede your IP or hostname with `hostname!` to define a
 non-Bundlewrap system.
You can refer to these systems in the command pipeline in multiple ways (see the sketch after this list):

1) remote@systemname as your command action (see below)

2) via {SYSTEMS.systemname} which will be replaced with the value

3) via NODES.systemname in python actions to use the Bundlewrap node
   object (Bundlewrap nodes only, no groups)
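
A hedged pipeline fragment illustrating these three ways for a defined system `source` (commands are placeholders):

    pipeline:
      - remote@source: uptime                     # 1) run the command on the system via SSH
      - local: ping -c 1 {SYSTEMS.source}         # 2) the value is inserted as text
      - python: print(NODES.source.metadata)      # 3) Bundlewrap node object (Bundlewrap nodes only)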

**vars** _(associative array)_
: Define some vars. These are accessible in the command pipeline via
 {varname}. Note: Only valid Python variable names are allowed.
 You can use the "*FILE_*" prefix followed by a file path to assign the
 file's content to the variable (see the example below).
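
A hedged example, assuming the prefix is written directly in front of the path:

    vars:
      version: 1.2.3
      motd: FILE_/etc/motd   # {motd} will contain the content of /etc/motd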

**secrets** _(associative array)_
: Define teamvault secrets. Value has to be in this format:
 _SECRETID_FIELD_. _FIELD_ must be one of username, password or file.
 The resolved secret values are accessible in command line via
 {secretname}. *(only if teamvault is enabled)*

**imports** _(list)_
: Listed shell files (see **CONFIGURATION** section, _import_path_)
 will be sourced before every local or remote command execution.
 For remote commands, these files are transferred via tar and ssh to
 your home directory on the remote system beforehand and deleted
 afterwards. This is meant to define some functions you may need.

**always**, **cleanup** _(list of associative arrays)_
: See **ALWAYS / CLEANUP PIPELINE** section.

**pipeline** _(list of associative arrays)_
: See **PIPELINE** section.

### PIPELINE

Here you define the commands automatix shall execute.

**KEY**: One of these possible command actions:

1) **manual**: Some manual instruction for the user. The user has to
 confirm that automatix may proceed.

2) **local**: Local shell command to execute. Imports will be sourced
 beforehand. /bin/bash will be used for execution.

3) **remote@systemname**: Remote shell command to execute. Systemname
 has to be a defined system. The command will be run via SSH (without
  pseudo-terminal allocation). It uses the standard SSH command.
  Therefore your .ssh/config should be respected.
 If systemname is a Bundlewrap group, the remote command will be
  executed sequentially for every node.

4) **python**: Python code to execute.
   * Notice that there are some modules, constants and functions which
     are already imported (check command.py): e.g.
     `re, subprocess, quote(from shlex)`. The variable `vars` is used
     to store the Automatix variables as a dictionary. You can use it 
     to access or change the variables directly.
   * If bundlewrap is enabled, the Bundlewrap repository object is
     available via AUTOMATIX_BW_REPO and system node objects are
     available via NODES.systemname.
     Use `AUTOMATIX_BW_REPO.reload()` to reinitialize the Bundlewrap 
     repository from the file system. This can be useful for using
     newly created nodes (e.g. remote commands).  
   

**ASSIGNMENT**: For **local**, **remote** and **python** actions you
 can also define a variable to which the output will be assigned.
 To do this, prefix the action key with the desired variable name
 and `=`, e.g. `myvar=python: NODES.system.hostname`. Be careful when
 working with multiline statements. In **python** the first line is
 likely to set the variable. All variables will be converted to
 strings when used to build commands in following steps.
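
A hedged example of an assignment being reused in a later step (system name and commands are placeholders):

    pipeline:
      - kernel=remote@target: uname -r
      - local: echo "The target runs kernel {kernel}"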
 
**CONDITIONS**: You can specify that a command is only executed if
 your condition variable evaluates to "True" in Python. To achieve
 this, write the variable name followed by a question mark at the very
 beginning, like `cond?python: destroy_system()`. Be aware that any
 output from **local** or **remote** commands will lead to a non-empty
 string which evaluates to "True" in Python, but empty output will
 evaluate to "False". Use `!?` instead of `?` to invert the condition.
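
Building on the scriptfile example above, a hedged sketch that also shows the inverted condition:

    pipeline:
      - cond=python: sla == 'gold'
      - cond?local: echo "This runs only if sla is gold."
      - cond!?local: echo "This runs only if sla is NOT gold."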

**VALUE**: Your command. Variables will be replaced using Python's
 format() function. Therefore, use curly brackets to refer to variables,
 systems, secrets and constants.

Constants are available via CONST.KEY, where KEY is the key of your
 constants in your **CONFIGURATION** file. There you can define some
 widely used constants.

In most cases it's a good idea to define your command in quotes to
 avoid parsing errors, but it is not always necessary. Another way is
 to use '|' to indicate a _literal scalar block_. There you can even
 define whole program structures for python (see example).

#### Escaping in Pipeline

Because automatix uses Python's format() function:  
`{` -> `{{`  
`}` ->  `}}`  

Standard YAML escapes (see also https://yaml.org/spec/1.2/spec.html):  
`'` -> `''`  
`"` -> `\"`  
`\ ` -> `\\`  
`:` -> Please use quotes (double or single).
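
A hedged example of escaped curly brackets next to a regular variable reference:

    pipeline:
      # {{ and }} are printed as literal braces, {domain} is replaced by the variable
      - local: 'echo "literal braces: {{}}, variable: {domain}"'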


### ALWAYS / CLEANUP PIPELINE

Same usage as the 'normal' command pipeline, but it will be executed
 every time at the start of automatix (**always**) or at the end
 (**cleanup**), even if aborted (a). The commands are executed without
 the --interactive flag, independent of the specified parameters.

Intended use case for **always**: python imports or information that
 is needed afterwards and does not change anything on systems.
 You want to have these available even when using the --jump-to/-j feature.

Intended use case for **cleanup**: Remove temporary files or artifacts.


## ENVIRONMENT

**AUTOMATIX_CONFIG**: Specify the path to the configuration file.
 Default is "~/.automatix.cfg.yaml".  

**AUTOMATIX_**_config-variable-in-upper-case_: Set or overwrite the 
 corresponding configuration value. See **CONFIGURATION** section.
 Works only for string values!  

**AUTOMATIX_TIME**: Set this to an arbitrary value to print the times
 for the single steps and the whole script.  

Additionally you can modify the environment to adjust things to your
 needs.


# TIPS & TRICKS

### YAML Syntax

For multiline commands and variables YAML offers different possibilities
 to write multiline strings. A look at https://yaml-multiline.info/ might
 be helpful.  
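
For instance, a literal block scalar (`|`) keeps line breaks, which is handy for multi-step shell commands (a hedged sketch, commands are placeholders):

    pipeline:
      - local: |
          set -e
          echo "first step"
          echo "second step"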

### PERSISTENT_VARS

If you want to access variables in a **python** action that you defined in
a preceding command, you can use the **PERSISTENT_VARS** dictionary
(shortcut: **PVARS**).
It is added to the local scope of **python** actions and the
dictionary keys are also available as attributes.
 Examples:
- To make all local variables of the actual command persistent use
 `PERSISTENT_VARS.update(locals())`.
- To delete one persistent variable named "myvar" use
 `del PERSISTENT_VARS['myvar']`
- To make variable "v2" persistent use `PERSISTENT_VARS['v2'] = v2`
  or `PERSISTENT_VARS.v2 = v2`
- Use the shortcut like `PVARS.v2 = v2`

You can use variables in PERSISTENT_VARS also as condition by
using the shortcut and the attribute notation:
    
      - python: PVARS.cond = some_function()
      - PVARS.cond?local: echo 'This is only printed if "some_function" evaluates to "True"'
      - PVARS.cond!?local: echo 'And this is printed if "some_function" evaluates to "False"'

An alternative is to make variables global, but in most cases using
 PERSISTENT_VARS is cleaner. _**CAUTION: Choosing already existing
 (Python) variable names may lead to unexpected behaviour!!!**_ You may
  want to check the source code (command.py).  
Explanation: automatix is written in Python and uses 'exec' to
 execute the commands in a function context. If you declare variables
 globally they remain across commands.

### Abort and Skip Exceptions

To abort the current automatix run and jump to the next batch item you can
 raise the `SkipBatchItemException`. To abort the whole automatix
 process, raise `AbortException(return_code: int)`. In both cases the
 cleanup pipeline is executed. The same happens when selecting
 `a`:abort or `c`:continue when asked (interactive or error).
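
A hedged example of raising these exceptions from **python** steps, assuming both exception classes are available in the python action scope (conditions and messages are placeholders):

    pipeline:
      - python: |
          if not vars.get('version'):
              # skip this batch item and continue with the next CSV row
              raise SkipBatchItemException('No version given for this row')
      - python: |
          if '{domain}' == 'forbidden.example.com':
              # abort the whole automatix run with exit code 1
              raise AbortException(1)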

### Logging / Saving the output

**automatix** has no built-in capability to write its output to a log
 file or save it otherwise.  

If you have _GNU screen_ installed, you may start a screen session with
 `-L` and optional `-Logfile LOGFILE` in which you start **automatix**.
 (This is how it works with "parallel processing", see **EXTRAS** section.)
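
For example (the log file name is a placeholder):

    screen -L -Logfile automatix_run.log
    # then, inside the screen session:
    automatix scriptfile.yaml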

A different approach is to use `tee`, e.g. `automatix [script file + options] 2>&1 | tee auto.log`.
 Unlike the screen approach, this does not seem to capture your input.

# BEST PRACTICES

There are different ways to start scripting with **automatix**. The
 author's approach is mainly to consider the process and simply write
 down what to do (manual steps for complex or not yet automated parts)
 and which commands to use.  
Then start **automatix** in interactive mode (-i) and adjust the
 single steps one by one. Replace manual steps where suitable. Whenever
 adjustment is needed, abort, adjust and restart **automatix** with a
 jump (-j) to the adjusted step.  
From **automatix** 1.13.0 on you can use the reload scriptfile feature
 instead. When asked for options (either because a command failed or
 you are in interactive mode) you can use **-R** to reload the
 scriptfile. If lines in the scriptfile have changed, or you need to
 repeat steps, you can use R+/-$number to reload and adjust the
 restart point (available since **automatix** 1.14.0). NOTICE: If using
 a vars-file, this reloads the script ONLY for the active CSV row!

Repeat this procedure to automate more and more and increase quality,
 whenever you feel like it.

Consider putting often-used paths or code sequences in automatix
 variables for better readability.  
Do the same with variable content like URLs, to make it possible to
 overwrite it via command line options. Wherever possible, prefer to
 use functions to determine already available information, such as BW
 metadata, instead of defining things explicitly. This will make
 things easier when using the script with different systems /
 parameters.

The preferred way of using **automatix** is to put often-used and complex
 algorithms in shell functions or python libraries (shelllib/pylib)
 and import them. The advantage of this approach is that you can use your
 implemented functions multiple times and build up a toolbox of nice
 functionality over time.


# NOTES

**Manual steps** will always cause automatix to stop and wait for
 user input.

Be careful with **assignments** containing line breaks (echo, ...).
 Using such variables may lead to unexpected behaviour or errors.
 From version 1.14.0 on, trailing newlines in **assignments**
 of shell commands (_local_, _remote@_) are removed.

Assignments containing **null bytes** are currently not supported.

Because the **always** pipeline should not change anything, aborting
 while running this pipeline will not trigger a cleanup.

If you want to abort the **pipeline** without triggering the
 **cleanup** pipeline, use CTRL+C.

When **aborting remote functions** (via imports), automatix is not
 able to detect still-running processes invoked by the function,
 because it only checks the processes for the command (in this case
 the function name) that is called in the pipeline.

User input questions fall into the following categories:
- [MS] **M**anual **S**tep
- [CF] **C**ommand **F**ailed
- [PF] **P**artial command **F**ailed (BW groups)
- [RR] **R**emote process still **R**unning
- [SE] **S**yntax **E**rror

The terminal (T) answer starts an interactive Bash shell (/bin/bash -i).
 Therefore .bashrc is executed, but the command prompt (PS1) is
 replaced to indicate that we are still in an automatix process.
 

# EXTRAS

## Parallel processing
Requirement: GNU screen installed and accessible via `screen` command in bash.

This **automatix** version has the option to run multiple **automatix** instances at a time.
 This is achieved by starting multiple [GNU screen](https://www.gnu.org/software/screen/) sessions.
 Please make yourself familiar with the screen controls before using this feature to avoid getting lost.
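
A hedged invocation example (file names are placeholders); remember that --parallel is only valid together with --vars-file:

    automatix --vars-file ./batch_hosts.csv --parallel -- migrate_server.yaml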

The main program stays in a loop while attaching to the screen sessions and you will come back to it
 if you detach a screen session. The **automatix-manager** runs in its own screen session and is
 responsible for starting the automatix screens and status updates.

By default the program starts with 2 parallel automatix instances. Use the main program loop controls
 to change the number of allowed parallel sessions (press 'm' followed by your desired number).

If you force the program to terminate (e.g. keyboard interrupt, process kill, ...),
 check for still-running screen processes via `screen -list`. They are independent and may continue
 running. Clean up manually, if necessary.

The screens write their output to log files in the specified **logfile_dir** (see **CONFIGURATION** section).
 These log files contain the escape sequences that are used to provide the colored output on the terminal.
 You can use a pager that interprets these sequences like the terminal does to get a similar
 experience (`more` or `less -r` worked for me).

## Bash completion (experimental)
Automatix supports bash completion for parameters and the script directory via [argcomplete](https://github.com/kislyuk/argcomplete).

To do so, follow the installation instructions for argcomplete, which at the time of writing are

    pip install argcomplete

and then either activate it globally by executing

    activate-global-python-argcomplete

or activate it just for automatix (e.g. in `.bashrc`)

    eval "$(register-python-argcomplete automatix)"

Automatix will recognize the installed module and offer completion automatically.

## Progress bar (experimental)
To activate an "apt-like" progress bar based on the number of commands,
 install `python_progress_bar` via pip and either set the `AUTOMATIX_PROGRESS_BAR`
 environment variable to an arbitrary value (not "False") or set `progress_bar`
 to `true` in the config file.

You can force deactivation by setting the `AUTOMATIX_PROGRESS_BAR` environment variable
 to "False" (this overrides the config file setting).

Note that using commands that heavily modify the terminal behaviour/output
 (such as `top`, `watch`, `glances`, ...) may lead to unreadable
 or undesirable output. It might be a better idea to encourage the user
 to open a separate terminal and type these commands there.

Using automatix itself as a command should work, but may lead to confusing
 output as well. Note that the progress bar will be overwritten by the
 new automatix instance for the duration of that automatix command.

            
