python-heptapod-runner

Name: python-heptapod-runner
Version: 0.6.3
Home page: https://foss.heptapod.net/heptapod/runner
Summary: Heptapod Runner: Python utilities and subsystems
Upload time: 2023-11-23 16:20:03
Author: Georges Racinet
License: GPLv3+
Keywords: hg mercurial git heptapod gitlab
Requirements: no requirements were recorded

# Python utilities and subsystems for Heptapod Runner

[Heptapod Runner](https://foss.heptapod.net/heptapod/heptapod-runner)
is the fork of [GitLab Runner](https://docs.gitlab.com/runner/) meant to
work for [Heptapod](https://heptapod.net) and [GitLab](https://gitlab.com)
instances.

The facilities provided in this package are part of the Heptapod Runner
extended distribution, aiming to provide additional options. They also
share the same
[source code repository](https://foss.heptapod.net/heptapod/heptapod-runner)
with Heptapod Runner.

They are not to be confused with the main Heptapod Runner program, which
is a standalone program written in Go (like GitLab Runner) and does not have
any dependency on `python-heptapod-runner`.

Python was selected for fast prototyping and because it is the lingua franca of
the Mercurial community, but it is possible that some of these facilities end
up being rewritten in another language.

## `heptapod-paas-runner-register`

This interactive program can be used to bootstrap the configuration for
`heptapod-paas-runner`.

**Warning:** in all cases, the configuration has to be manually completed
afterwards.

Notably, it takes care of registration against
the GitLab or Heptapod instance, hence playing a role similar to the
`gitlab-runner register` command.

It will just append a new `[[runners]]` section if the configuration file
already exists.

### Caveats for configuration file creation

- The parent directory of the configuration file path has to exist and
  be writable by the system user doing the registration.
- The resulting configuration file has to be readable by
  the system user running `heptapod-paas-runner`.
- The parent directory of the state file path has to be writable by the
  system user running `heptapod-paas-runner`.


## `heptapod-paas-runner`

The purpose of this program is to provide on-demand provisioning while
following the expected workflows of PAAS systems:

- Docker hosts are provisioned only after jobs have been acquired from the
  coordinator
- actual job launch involves pushing a Dockerfile to a Git repository that
  controls the Docker host.

### Process and state management

#### Simple invocation

`heptapod-paas-runner` takes a single positional argument: the path to
its configuration file (see the section about `heptapod-paas-runner-register`
above).

Several options are available and can be displayed with:

```
heptapod-paas-runner --help
```

#### Graceful restart and state management

Heptapod PAAS Runner tracks the jobs it has launched
in order to deprovision the resources once they are finished.

To that end, it implements a graceful shutdown when `SIGTERM` is
received (this is the default stop signal used by many process managers,
including systemd).

Upon signal reception, it will:

- stop acquiring new jobs
- finish operations that can't be interrupted (launches, decommissionings)
- write all needed information about currently tracked jobs to a file
- exit

This shutdown sequence is not instantaneous. A waiting time of 2 minutes
(margin included) should be enough before resorting to more drastic means.

We will probably improve the interruption of the launch sequence, but we
can't go as far as to interrupt a currently running request, as it could
provision untrackable resources.
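The shutdown sequence above can be pictured with this minimal sketch. All names are illustrative, not the actual implementation:

```python
import json
import signal


class PaasRunner:
    """Minimal sketch of the graceful-shutdown logic described above."""

    def __init__(self, state_file):
        self.state_file = state_file
        self.shutdown_requested = False
        self.tracked_jobs = []  # jobs launched and not yet finished

    def install_signal_handler(self):
        # SIGTERM is the default stop signal of systemd and most
        # process managers.
        signal.signal(signal.SIGTERM, self.handle_sigterm)

    def handle_sigterm(self, signum, frame):
        # Only flag the request: operations that can't be interrupted
        # (launches, decommissionings) must be allowed to finish.
        self.shutdown_requested = True

    def poll_coordinators_once(self):
        # Placeholder for one polling round: acquire jobs, launch
        # provisioned resources, decommission finished ones.
        pass

    def save_state(self):
        # Dump tracked jobs so a restarted process can resume
        # tracking and deprovisioning them.
        with open(self.state_file, 'w') as f:
            json.dump({'jobs': self.tracked_jobs}, f)

    def main_loop(self):
        while not self.shutdown_requested:
            self.poll_coordinators_once()
        self.save_state()
```
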

#### Graceful reload

Currently not supported. A configuration change can nevertheless be performed
without losing information by means of the graceful restart described above.

In theory, a proper reload would be less dangerous.

#### Sample systemd unit file

```
[Unit]
Description=Heptapod PAAS Runner
After=network.target

[Install]
WantedBy=multi-user.target

[Service]
User=heptapod-runner
WorkingDirectory=/srv/heptapod-runner
ExecStart=/srv/heptapod-runner/venv/bin/heptapod-paas-runner /etc/heptapod-runner/paas-runner.toml
# We don't need a specific ExecStop, as systemd has a cascading system
# of defaults for the stop signal, with SIGTERM being the ultimate default.

TimeoutStopSec=120
Restart=always
RestartSec=125
```

### The configuration file

`heptapod-paas-runner` uses the same configuration file as the normal
Heptapod (or GitLab) Runner.

Each `[[runners]]` section has an `executor` entry that must be one of the PAAS
executors, and otherwise consists of a mix of:

- specific PAAS Runner configuration, depending on the executor.
- standard GitLab Runner configuration, forwarded to the final executor,
  unless forced by the PAAS Runner.

The PAAS executors are currently:

- `clever-docker` (see below)
- `local-docker`, for testing and development purposes only

Note: apart from the bootstrap provided by `heptapod-paas-runner-register`
(see above), you can use any `heptapod-runner` or `gitlab-runner` executable
to create a configuration file with an appropriate coordinator token,
and then modify it for `heptapod-paas-runner`. This does not have to be done
on the actual target system.

### Global configuration

The global configuration is not forwarded to the PAAS resources. It is instead
used to tweak `heptapod-paas-runner` itself.


- `concurrent` (default 50): same meaning as in standard GitLab Runner.
  This is the maximum number of jobs across all defined runners.
- `check_interval` (default 3):
  similar meaning as in standard GitLab Runner. This is
  the time in seconds between full coordinator polls, given that:
  + if a coordinator grants a job, it is repolled immediately unless limits
    on jobs are reached (number of jobs, weighted quota, and concurrent
    provisioning)
  + if limits on jobs are reached, the waiting time is given by
    `job_progress_interval` (see below)
- `job_progress_interval` (default 30): time in seconds to wait between
   coordinator requests to check if launched jobs are finished.
- `quota_computation` (required): sets the weighted quota by taking
  a runner section, one of its associated flavors and a number of jobs as
  a reference. Example:

  ```
  [quota_computation]
    reference_runner = "my-runner"
    reference_flavor = "M"
    reference_jobs_count = 4

  [[runners]]
    name = "my-runner"
    url = "https://heptapod.example"
    ...
  ```
  This sets the quota to be the weight of 4 jobs with flavor M of the
  `my-runner` runner defined afterwards.
- `state_file`: path of the file used to keep track of running jobs across
  a graceful restart.
- `paas_max_concurrent_provisioning` (default 10): maximum number of
  concurrent jobs currently queued at the PAAS infrastructure that haven't
  been acknowledged as really running by their coordinators yet.
- `paas_finished_jobs_keep_resources` (default `false`): if `true`, no
  decommissioning will occur.
- `pass_decommision_launched_failures` (default `true`): if `false`, in the
  case where a resource was supposedly created in the PAAS infrastructure but
  did not actually start, no attempt will be made to remove it, so that it
  can be inspected to understand the issue.
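How `quota_computation` resolves to a numeric quota can be sketched as follows. The flavor-to-RAM mapping is illustrative only; the weight formula `M^1.25` is the one given in "Flavors and their weights" at the end of this document:

```python
def flavor_weight(ram_gib):
    """Weight of a flavor from its RAM footprint in GiB: M^1.25."""
    return ram_gib ** 1.25


# Hypothetical RAM sizes per flavor name, for illustration only.
FLAVOR_RAM_GIB = {'XS': 1, 'S': 2, 'M': 4, 'L': 8, 'XL': 16, '3XL': 32}


def weighted_quota(quota_computation):
    """Resolve a [quota_computation] section into the global weighted
    quota: the weight of `reference_jobs_count` jobs running the
    reference flavor (the real resolution also uses `reference_runner`
    to know which executor's flavors apply; this sketch ignores it)."""
    flavor = quota_computation['reference_flavor']
    jobs = quota_computation['reference_jobs_count']
    return jobs * flavor_weight(FLAVOR_RAM_GIB[flavor])
```
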


### Global footprint: the weight system

Some `heptapod-paas-runner` executors have the capability to use
different resources ("flavors") according to job details.
The job details usually become available only once the job is acquired,
and acquisition currently cannot be reverted.

A concurrency control system accounting only for the number of jobs would
hence have to cripple its capability to run many small jobs merely because
large jobs are possible. This is what happened before the introduction
of the weight system.

The weights used by the various configured `runners` are simply summed up.
Beware that the weights of different executors are usually very much not
comparable, hence it is a very bad idea to mix several executor *types*
in a single `heptapod-paas-runner` service.
For instance, at the time of this writing,
the `clever-docker` executor has weights ranging roughly from 2 to 75,
whereas an executor not implementing weights would have all values equal to 1.

Within those limits, it is perfectly admissible, and even recommended, to
put several `runners` in a single service, as long as they all have the
same `executor`. The typical use case for this is to perform CI for
several Heptapod (or pure upstream GitLab) instances.
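The resulting admission rule can be sketched in a few lines (hypothetical names): a newly acquired job fits only if its flavor weight stays under the remaining weighted quota.

```python
def can_admit(running_weights, candidate_weight, quota):
    """Return True if a job of `candidate_weight` fits under the global
    weighted quota, given the weights of currently running jobs.
    As described above, the weights of all configured runners are
    simply summed up."""
    return sum(running_weights) + candidate_weight <= quota
```

For example, with a quota of 8 job-weights, two running weight-3 jobs leave room for a weight-2 job but not for another weight-3 one.
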

### Common properties of the Docker executors

- All standard features (images, services) are supported

#### Image management and Dependency Proxy

We don't currently have the means to reuse PAAS resources, hence all jobs
start with fresh downloads of all necessary Docker images.

Partly because of this, the Heptapod PAAS Runner uses the Dependency Proxy
automatically, diverging in that from the standard GitLab Runner Docker
executor.

Practical consequences for job authors:

- do *not* disable the Dependency Proxy for your Group
- if a service has no alias defined in the job, it will be accessible with
  only one of [the two syntaxes normally supported by GitLab Runner](https://docs.gitlab.com/ce/ci/services/#accessing-the-services):

  + if the service image is defined as `postgres:13`, the service container
    can be accessed as usual as `postgres`
  + if the image is defined as `tutum/wordpress`, the service
    container can be accessed as `tutum-wordpress`, but *not*
    `tutum__wordpress`
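The available hostname can be pictured as the usual GitLab hostname derivation, minus the double-underscore variant. This is a minimal illustration (ignoring registry hosts with ports), not the Runner's actual code:

```python
def service_hostname(image):
    """Derive the single hostname under which an un-aliased service is
    reachable: strip the tag, then map '/' to '-'. The '__' variant
    supported by stock GitLab Runner is not available here."""
    name = image.rsplit(':', 1)[0]
    return name.replace('/', '-')
```
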


### The `clever-docker` executor

This runs the job in [Clever Cloud](https://clever-cloud.com).

Clever Cloud is also the company hosting the [public Heptapod instance for
Free and Open Source Software](https://foss.heptapod.net), where Heptapod
itself is self-hosted, as well as the
[commercial Heptapod instance](https://heptapod.host).

There are two modes of operation: single organization and multi-tenant.

#### Common configuration

Required:

- `executor`: `clever-docker`
- `cc_extra_env` subsection: can be used to pass extra environment variables
  to the sub-runner spawned on the provisioned resource. Currently
  `CC_ENABLE_HEPTAPOD_RUNNER = "true"` is necessary.

Optional:

- `cc_api_url` (defaults to `https://api.clever-cloud.com/v2`):
  Clever Cloud base API URL
- `cc_zone` (defaults to `par`): in theory, any zone can be used. In practice,
  the zone should be close to the GitLab / Heptapod instance (the coordinator).
- `cc_default_flavor` (defaults to `M`): the flavor (size) of instances that
  will be launched on Clever Cloud if not specified by the job.
- `[job_trace_watch]` (strongly recommended): this section enables watching
  the user-level job log (internally called "the job trace") to limit the
  number of concurrent jobs being in provisioning and not fully started yet,
  and to provide quicker user feedback in case of problems.
  Items are:
  + `token`: sadly, this has to be an Administrator token with `read-api`
    scope. A future version of Heptapod should be able to avoid that, but
    probably not upstream GitLab.
  + `timeout_seconds` (default 300): the time after which the job launch
    is considered to have failed.
  + `poll_step` (default 10): the time in seconds between checks of the
    job trace.
- `cc_deployment_repo_timeout` (default 20): maximum time in seconds to wait
  for the Clever Cloud deployment Git repository to be ready.
- `cc_deployment_repo_wait_step` (default 2): time in seconds between requests
  to assess whether the Clever Cloud deployment Git repository is ready.
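The `[job_trace_watch]` timing parameters can be pictured with this polling sketch. The helper names are hypothetical; the real check inspects the job trace through the coordinator API:

```python
import time


def watch_job_started(fetch_trace, timeout_seconds=300, poll_step=10,
                      sleep=time.sleep):
    """Poll the job trace until it shows the job producing output, or
    give up after `timeout_seconds` (launch considered failed).
    `fetch_trace` is any callable returning the current trace text;
    `sleep` is injectable for testing."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if fetch_trace():  # non-empty trace: the job really started
            return True
        sleep(poll_step)
    return False
```
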

#### Running for a single Clever Cloud Organization

Put these in the Runner configuration:

- `cc_multi_tenant`: unspecified or `false`.
- `cc_orga_id`: the id of your Clever Cloud Organization, as seen,
  e.g., in its Overview page in the Clever Cloud console.
- `cc_token`: token for the Clever Cloud API, with enough rights to create,
  deploy and delete applications and instances.

It doesn't matter whether the Runner is tied to specific Projects, a Group or
a whole GitLab / Heptapod instance: all resources will be attached
(and billed) to the specified Organization.

This is a good fit for a self-hosted Heptapod instance.

Full example:

```toml

state_file = "/srv/heptapod-runner/paas-runner-state.json"

[quota_computation]
  reference_runner = "clever-cloud"
  reference_flavor = "3XL"
  reference_jobs_count = 8

[[runners]]
  name = "clever-cloud"
  url = "https://heptapod.example.com"
  token = "D3adNQYu8OCjkYDbwDaG"
  executor = "clever-docker"

  cc_orga_id = "orga_07cf2ef0-c9ad-4f04-b492-94c164f95c76"
  cc_token = "bb52e490-d47e-47a4-b190-73e23eb17111"

  [runners.cc_extra_env]
    CC_ENABLE_HEPTAPOD_RUNNER = "true"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    # A future version of Heptapod PAAS Runner may fill this in automatically.
    # Meanwhile it is possible to use any S3 configuration. Using Clever's
    # Cellar for your CI caches in the same zone is the best for
    # network proximity and bandwidth.
    # (replace with your credentials and bucket of choice)
    ServerAddress = "cellar-c2.services.clever-cloud.com"
    AccessKey = "dEA7gjmYM98gobVi6Y1x"
    SecretKey = "v0tdpjgpsDRqaSvIndvHAXFmjbpEd958gbZuO7yv"
    BucketName = "heptapod-ci"
    [runners.cache.gcs]
  [runners.docker]
    helper_image = "registry.heptapod.net/heptapod/heptapod-runner/helper:x86_64-latest"
    tls_verify = false
    image = "debian:bullseye"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
```

(all tokens and uuids in this example are random values freshly obtained
for this documentation)


#### The multi-tenant mode

In this mode, the runner determines the Clever Cloud Organization and the
associated API token from attributes of the top-level Heptapod Group to which
the Project belongs.

It assumes that something populates those attributes.

This mode of operation is intended for instances such as `heptapod.host`.

Required Runner configuration:

- `cc_multi_tenant`: `true`
- `cc_gitlab_namespace_attributes_token`: a GitLab / Heptapod private token
  with enough rights to query Group custom attributes.

Optional Runner configuration:

- `cc_orga_id_attribute` (defaults to `cc_orga_id`): name of the custom
  attribute on top-level Groups to use for the Organization ID.
- `cc_orga_token_attribute` (defaults to `cc_orga_token`): name of the custom
  attribute on top-level Groups to use for the Clever Cloud API token of the
  Organization.
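How the runner could resolve a job's Organization in this mode can be sketched as follows. The endpoint shown is the standard GitLab group custom attributes API; the helper names are illustrative, not the actual implementation:

```python
def top_level_group(project_path_with_namespace):
    """The attributes live on the top-level Group of the Project."""
    return project_path_with_namespace.split('/')[0]


def custom_attribute_url(base_url, group, attribute):
    """Standard GitLab API endpoint for reading a group custom
    attribute (requires the PRIVATE-TOKEN header with a token having
    enough rights, i.e. cc_gitlab_namespace_attributes_token)."""
    return f'{base_url}/api/v4/groups/{group}/custom_attributes/{attribute}'
```
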

Full example:

```toml
state_file = "/srv/heptapod-runner/paas-runner-state.json"

[quota_computation]
  reference_runner = "clever-cloud"
  reference_flavor = "3XL"
  reference_jobs_count = 8

[[runners]]
  name = "clever-cloud"
  url = "https://heptapod.example.com"
  token = "D3adNQYu8OCjkYDbwDaG"
  executor = "clever-docker"

  cc_multi_tenant = true
  cc_gitlab_namespace_attributes_token = "D7aY5I5SygxA5oyZ11vB"

  [runners.cc_extra_env]
    CC_ENABLE_HEPTAPOD_RUNNER = "true"

  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    # Do not fill in this: it would use the same bucket for all tenants.
    # A future version of Heptapod PAAS Runner will fill this automatically,
    # using the Cellar add-on of each tenant for proper separation.
    [runners.cache.gcs]
  [runners.docker]
    helper_image = "registry.heptapod.net/heptapod/heptapod-runner/helper:x86_64-latest"
    tls_verify = false
    image = "debian:bullseye"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
```

(all tokens in this example are random values freshly obtained for this
documentation)


#### Flavors and their weights

End users can control the flavors (sizes) of the Clever Cloud instances
running their jobs. This is done with the `CI_CLEVER_CLOUD_FLAVOR` job
variable, with admissible values currently ranging from `XS` to `3XL`.

The `clever-docker` executors implement the weight system described above
to limit the maximum footprint of the whole runner service on the Clever
Cloud infrastructure while taking the differences between flavors into
account.

If one wishes to allow at most exactly `N` jobs with flavor `F`, the
simplest way is to use the `quota_computation` section (see above).
Alternatively, one may set the `concurrent` global configuration parameter to
`N * F.weight`.
The weight system will then automatically adjust what this means for the
actual payload, consisting of instances bigger and/or smaller than `F`.

The weight computation is expected to change in the future, as it is merely
an approximation of actual infrastructure constraints, hence it is better
to use `quota_computation` rather than `concurrent`.

Currently, the weight of a flavor is computed as `M^1.25`, where `M`
is the RAM footprint of the flavor, expressed in GiB. In practice, the
non-linearity of this formula allows spawning more instances of smaller
flavors for an equivalent total memory footprint.

            

Raw data

            {
    "_id": null,
    "home_page": "https://foss.heptapod.net/heptapod/runner",
    "name": "python-heptapod-runner",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "hg mercurial git heptapod gitlab",
    "author": "Georges Racinet",
    "author_email": "georges.racinet@octobus.net",
    "download_url": "https://files.pythonhosted.org/packages/b2/55/85b736defebef19def6ce719e79d3a8d234e461ebf98ef5fc43e40578177/python-heptapod-runner-0.6.3.tar.gz",
    "platform": null,
    "description": "# Python utilities and subsytems for Heptapod Runner\n\n[Heptapod Runner](https://foss.heptapod.net/heptapod/heptapod-runner)\nis the fork of [GitLab Runner](https://docs.gitlab.com/runner/) meant to\nwork for [Heptapod](https://heptapod.net) and [GitLab](https://gitlab.com)\ninstances.\n\nThe facilities provided in this package are part of the Heptapod Runner\nextended distribution, aiming to provide additional options. They also\nshare the same\n[source code repository](https://foss.heptapod.net/heptapod/heptapod-runner)\nwith Heptapod Runner.\n\nThey are not to be confused with the main Heptapod Runner program, which\nis a standalone programm written in Go (like GitLab Runner) that does not have\nany dependency on `python-heptapod-runner`.\n\nPython was selected for fast prototyping and because it is lingua franca in\nthe Mercurial community, but It is possible that some of facilities end up\nbeing rewritten in another language.\n\n## `heptapod-paas-runner-register`\n\nThis interactive program can be used to bootstrap the configuration for\n`heptapod-paas-runner`.\n\n**Warning:** in all cases, the configuration has to be manually completed\nafterwards.\n\nIt takes care notably of the registration against\nthe GitLab or Heptapod instance, hence playing a similar role as the\n`gitlab-runner register` command.\n\nIt will just append a new `[[runners]` section if the configuration file\nalready exists.\n\n### Caveats for configuration file creation\n\n- The parent directory of the configuration file path has to exist,\n  be writable by the system user doing the registration\n- The resulting configuration file has to be readable by\n  the system user for `heptapod-paas-runner`.\n- The parent directory of the state file path has to be writable by the\n  system user for  `heptapod-paas-runner`.\n\n\n## `heptapod-paas-runner`\n\nThe purpose of this program is to provide on-demand provisioning while\nfollowing the expected workflows of PAAS 
systems:\n\n- Docker hosts are provisioned only after jobs have been acquired from the\n  coordinator\n- actual job launch involves pushing a Dockerfile to a Git repository that\n  controls the Docker host.\n\n### Process and state management\n\n#### Simple invocation\n\n`heptapod-paas-runner` takes a single positional argument: the path to\nits configuration file (see the section about `heptapod-paas-runner-register`\nabove)\n\nSeveral options are available and can be displayed with:\n\n```\nheptapod-paas-runner --help\n```\n\n#### Graceful restart and state management\n\nHeptapod PAAS Runner tracks the jobs it has launched\nin order to deprovision the resources once they are finished.\n\nTo that end, it implements a graceful shutdown when `SIGTERM` is\nreceived (this is the default stop signal used by many process managers,\nincluding systemd).\n\nUpon signal reception, it will\n\n- stop acquiring new jobs\n- finish operations that can't be interrupted (launchings, decommissionings)\n- write all needed information about currently tracked jobs in a file\n- exit\n\nThis shutdown sequence is not instantaneous. 
A waiting time of 2 minutes\nshould be enough before resorting to more drastic means (this includes margin).\n\nWe will probably improve the interruption of the launch sequence, but we\ncan't go as far as to interrupt a currently running request, as it could\nprovision untrackable resources.\n\n#### Graceful reload\n\nCurrently not supported, a configuration change can be done without losing\ninformation with the grateful restart.\n\nIn theory, a proper reload would be less dangerous.\n\n#### Sample systemd unit file\n\n```\n[Unit]\nDescription=Heptapod PAAS Runner\nAfter=network.target\n\n[Install]\nWantedBy=multi-user.target\n\n[Service]\nUser=heptapod-runner\nWorkingDirectory=/srv/heptapod-runner\nExecStart=/srv/heptapod-runner/venv/bin/heptapod-paas-runner /etc/heptapod-runner/pass-runner.toml\n# We don't need a specific ExecStop, as systemd has a cascading system\n# of defaults for the stop signal, with the\n# SIGTERM being the ultimate default.\n\nTimeoutStopSec=120\nRestart=always\nRestartSec=125\n```\n\n### The configuration file\n\n`heptapod-paas-runner` uses the same configuration file as the normal\nHeptapod (or GitLab) Runner.\n\nEach `runners` section has an `executor` entry that must be one of the PAAS\nexecutors, and consists otherwise of a mix of\n\n- specific PAAS Runner configuration, depending on the executor.\n- standard GitLab Runner configuration, forwarded to the final executor,\n  unless forced by the PAAS Runner.\n\nThe PAAS executors are currently:\n\n- `clever-docker` (see below)\n- `local-docker`, for testing and development purposes only\n\nNote: `heptapod-paas-runner` does not have a registration facility yet.\nIn practice, you can use any `heptapod-runner` or `gitlab-runner` executable\nto create a configuration file with an appropriate coordinator token,\nand then modify it for `heptapod-paas-runner`. 
This does not have to be done\non the actual target system.\n\n### Global configuration\n\nThe global configuration is not forwarded to the PAAS resources. It is instead\nused to tweak `heptapod-paas-runner` itself.\n\n\n- `concurrent` (default 50): same meaning as in standard GitLab Runner.\n  This is the maximum number of jobs across all defined runners.\n- `check_interval` (default 3):\n  similar meaning as in standard GitLab Runner. This is\n  the time in seconds between full coordinator polls, given that:\n  + if a coordinator grants a job, it is repolled immediately unless limits\n    on jobs are reached (number of jobs, weighted quota, and concurrent\n    provisioning)\n  + if limits on jobs are reached, the waiting time is given by\n    `job_progress_poll_interval` (see below)\n- `job_progress_interval` (default 30): time in seconds to wait between\n   coordinator requests to check if launched jobs are finished.\n- `quota_computation` (required): allows to set the weighted quota by taking\n  a runner section, one of its associated flavor and a number of job as\n  reference. 
Example:\n\n  ```\n  [quota_computation]\n    reference_runner = \"my-runner\"\n    reference_flavor = \"M\"\n    reference_jobs_count = 4\n\n  [[runners]]\n    name = \"my-runner\"\n    url = \"https://heptapod.example\"\n    ...\n  ```\n  This sets the quota to be the weight of 4 jobs with flavor M of the\n  `my-runner` runner defined afterwards.\n- `state_file`: path of the file used to keep tracking running jobs after\n  a graceful restart.\n- `paas_max_concurrent_provisioning (default 10)`: maximum number of\n  concurrent jobs currently queued at the PAAS infrastructure that haven't\n  been acknowledged as really running by their coordinators yet.\n- `paas_finished_jobs_keep_resources` (default `false`): if `true`, no\n  decommissioning will occur.\n- `pass_decommision_launched_failures` (default `true`): if `false`, in the\n  case where resource was supposedly created in the PAAS infrastructure but\n  did not actually start, no attempt to remove it will be done, so that it\n  can be inspected to understand the issue.\n\n\n### Global footprint: the weight system\n\nSome `heptapod-paas-runner` executors have the capability to use\ndifferent resources (\"flavors\") according to job details.\nThe job details are usually available only once the job is acquired,\nwhich can currently not be reverted.\n\nA concurrency control system accounting only for the number of jobs would\nhence need to cripple its capability to run small jobs because of the\npossibility to run large jobs. 
This is what happened before the introduction\nof the weight system.\n\nThe weights used by the various configured `runners` are simply summed up.\nBeware that the weights of different executors are usually very much not\ncomparable, hence it is a very bad idea to mix several executor *types*\nin a single `heptapod-paas-runner` service.\nFor instance, at the time of this writing,\nthe `clever-docker` executor has weights ranging roughly from 2 to 75,\nwhereas an executor not implementing weights would have all values equal to 1.\n\nIt is perfectly admissible, and even recommended, to smooth out those limits\nto put several `runners` in a single service, as long as they all have the\nsame `executor`. The typical use-case for this would be to perform CI for\nseveral Heptapod (or pure upstream GitLab) instances.\n\n### Common properties of the Docker executors\n\n- All standard features (images, services) are supporteed\n\n#### Image management and Dependency Proxy\n\nWe don't currently have the means to reuse PAAS resources, hence all jobs\nstart with fresh downloads of all necessary Docker images.\n\nPartly because of this, the Heptapod PAAS Runner uses the Dependency Proxy\nautomatically, diverging in that from the standard GitLab Runner Docker\nexecutor.\n\nPractical consequences for job authors:\n\n- do *not* disable the Dependency Proxy for your Group\n- if a service has no alias defined in the job, it will be accessible with\n  only one of [the two syntaxes normally supported by GitLab Runner](https://docs.gitlab.com/ce/ci/services/#accessing-the-services):\n\n  + if the service image is defined as `postgres:13`, the service container\n    can be accessed as usual as `postgres`\n  + if the image is defined as `tutum/wordpress`, the service\n    container can be accessed as `tutum-wordpress`, but *not*\n    `tutum__wordpress`\n\n\n### The `clever-docker` executor\n\nThis runs the job in [Clever Cloud](https://clever-cloud.com).\n\nClever Cloud is also the 
company hosting the [public Heptapod instance for\nFree and Open Source Software](https://foss.heptapod.net), where Heptapod\nis self-hosted and the [commercial Heptapod instance](https://heptapod.host).\n\nThere are two modes of operation: single organization and multi-tenant.\n\n#### Common configuration\n\nRequired:\n\n- `executor`: `clever-docker`\n- `cc_extra_env` subsection. Can be used to pass extra environment to\n  the sub-runner spawned on the provisioned resource. Currently\n  `CC_ENABLE_HEPTAPOD_RUNNER = \"true\"` is necessary\n\nOptional:\n\n- `cc_api_url` (defaults to `https://api.clever-cloud.com/v2`):\n  Clever Cloud base API URL\n- `cc_zone` (defaults to `par`): any zone can be in theory used. In practice,\n  the zone better be close to the GitLab / Heptapod instance (the coordinator).\n- `cc_default_flavor` (defaults to `M`): the flavor (size) of instances that\n  will be launched on Clever Cloud if not specified by the job.\n- `[job_trace_watch]` (strongly recommended): this section enables watching\n  the user-level job log (internally called \"the job trace\") to limit the\n  number of concurrent jobs being in provisioning and not fully started yet,\n  and to provide quicker user feedback in case of problems.\n  Items are:\n  + `token`: sadly, has to be an Administrator token with `read-api` scope.\n    A future version of Heptapod should be able to avoid that, but probably not\n    upstream GitLab\n  + `timout_seconds` (default 300): the time after which the job launch\n    is considered to have failed.\n  + `poll_step` (default 10): the time in seconds between checks of the\n    job trace\n- `cc_deployment_repo_timeout` (default 20): maximum time in seconds to wait\n  for the Clever Cloud's deployment Git repository to be ready\n- `cc_deployment_repo_wait_step` (default 2): time between requests\n   to assess if the Clever Cloud's deployment Git repository is ready\n\n#### Running for a single Clever Cloud Organization\n\nPut these in the 
Runner configuration

- `cc_multi_tenant`: unspecified or `false`.
- `cc_orga_id`: the ID of your Clever Cloud Organization, as seen,
  e.g., in its Overview page in the Clever Cloud console.
- `cc_token`: a token for the Clever Cloud API, with enough rights to create,
  deploy and delete applications and instances.

It does not matter whether the Runner is tied to specific Projects, a Group or
a whole GitLab / Heptapod instance: all resources will be attached
(and billed) to the specified Organization.

This is a good fit for a self-hosted Heptapod instance.

Full example:

```toml
state_file = "/srv/heptapod-runner/paas-runner-state.json"

[quota_computation]
  reference_runner = "clever-cloud"
  reference_flavor = "3XL"
  reference_jobs_count = 8

[[runners]]
  name = "clever-cloud"
  url = "https://heptapod.example.com"
  token = "D3adNQYu8OCjkYDbwDaG"
  executor = "clever-docker"

  cc_orga_id = "orga_07cf2ef0-c9ad-4f04-b492-94c164f95c76"
  cc_token = "bb52e490-d47e-47a4-b190-73e23eb17111"

  [runners.cc_extra_env]
    CC_ENABLE_HEPTAPOD_RUNNER = "true"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    # A future version of Heptapod PAAS Runner may fill this in automatically.
    # Meanwhile, it is possible to use any S3 configuration. Using Clever's
    # Cellar for your CI caches in the same zone is best for
    # network proximity and bandwidth.
    # (replace with your credentials and bucket of choice)
    ServerAddress = "cellar-c2.services.clever-cloud.com"
    AccessKey = "dEA7gjmYM98gobVi6Y1x"
    SecretKey = "v0tdpjgpsDRqaSvIndvHAXFmjbpEd958gbZuO7yv"
    BucketName = "heptapod-ci"
    [runners.cache.gcs]
  [runners.docker]
    helper_image = "registry.heptapod.net/heptapod/heptapod-runner/helper:x86_64-latest"
    tls_verify = false
    image = "debian:bullseye"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
```

(all tokens and UUIDs in this example are random values freshly obtained
for this documentation)


#### The multi-tenant mode

In this mode, the Runner determines the Clever Cloud Organization and the
associated API token from custom attributes of the top-level Heptapod Group to
which the Project belongs.

It assumes that something else populates those attributes.

This mode of operation is intended for instances such as `heptapod.host`.

Required Runner configuration:

- `cc_multi_tenant`: `true`
- `cc_gitlab_namespace_attributes_token`: a GitLab / Heptapod private token
  with enough rights to query Group custom attributes.

Optional Runner configuration:

- `cc_orga_id_attribute` (defaults to `cc_orga_id`): name of the custom
  attribute on top-level Groups to use for the Organization ID.
- `cc_orga_token_attribute` (defaults to `cc_orga_token`): name of the custom
  attribute on top-level Groups to use for the Clever API token of the
  Organization.

Full example:

```toml
state_file = "/srv/heptapod-runner/paas-runner-state.json"

[quota_computation]
  reference_runner = "clever-cloud"
  reference_flavor = "3XL"
  reference_jobs_count = 8

[[runners]]
  name = "clever-cloud"
  url = "https://heptapod.example.com"
  token = "D3adNQYu8OCjkYDbwDaG"
  executor = "clever-docker"

  cc_multi_tenant = true
  cc_gitlab_namespace_attributes_token = "D7aY5I5SygxA5oyZ11vB"

  [runners.cc_extra_env]
    CC_ENABLE_HEPTAPOD_RUNNER = "true"

  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    # Do not fill this in: it would use the same bucket for all tenants.
    # A future version of Heptapod PAAS Runner will fill it automatically,
    # using the Cellar add-on of each tenant for proper separation.
    [runners.cache.gcs]
  [runners.docker]
    helper_image = "registry.heptapod.net/heptapod/heptapod-runner/helper:x86_64-latest"
    tls_verify = false
    image = "debian:bullseye"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
```

(all tokens in this example are random values freshly obtained for this
documentation)


#### Flavors and their weights

End users can control the flavors (sizes) of the Clever Cloud instances
running their jobs via the `CI_CLEVER_CLOUD_FLAVOR` job variable, whose
admissible values currently range from `XS` to `3XL`.

The `clever-docker` executor implements the weight system described above
to limit the maximum footprint of the whole Runner service in the Clever
Cloud while taking the differences between flavors into account.

If one wishes to allow exactly `N` concurrent jobs with flavor `F`, the
simplest way is to use the `quota_computation` section (see above).
Alternatively, one may set the `concurrent` global configuration parameter to
`N * F.weight`.
The weight system will then automatically adjust to the actual payload,
which may consist of bigger and/or smaller instances than `F`.

The weight computation is expected to change in the future, as it is merely
an approximation of actual infrastructure constraints; hence it is better
to rely on `quota_computation` rather than on `concurrent`.

Currently, the weight of a flavor is computed as `M^1.25`, where `M`
is the RAM footprint of the flavor, expressed in GiB. In practice, the
non-linearity of this formula allows spawning more instances of smaller
flavors for an equivalent total memory footprint.
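To make the arithmetic concrete, here is a small sketch of the weight
formula and of the `concurrent = N * F.weight` rule described above. Only
the `M ** 1.25` formula comes from this documentation; the RAM sizes in
`FLAVOR_RAM_GIB` are illustrative placeholders, not actual Clever Cloud
flavor specifications, and the helper names are made up for this example.

```python
import math

# Hypothetical RAM footprints (GiB) per flavor -- placeholders only,
# NOT the real Clever Cloud flavor specifications.
FLAVOR_RAM_GIB = {
    "XS": 1,
    "S": 2,
    "M": 4,
    "L": 8,
    "XL": 16,
    "3XL": 64,
}


def flavor_weight(flavor: str) -> float:
    """Weight of a flavor: M ** 1.25, with M the RAM footprint in GiB."""
    return FLAVOR_RAM_GIB[flavor] ** 1.25


def concurrent_for(jobs_count: int, flavor: str) -> int:
    """`concurrent` value allowing `jobs_count` jobs of the given flavor.

    This mirrors by hand what the `quota_computation` section derives
    automatically from `reference_flavor` and `reference_jobs_count`.
    """
    return math.ceil(jobs_count * flavor_weight(flavor))
```

For instance, with the placeholder sizes above, a hypothetical `M` flavor
(4 GiB) weighs `4 ** 1.25` (about 5.66), so allowing eight such jobs would
require `concurrent = 46`; the non-linearity means the same budget admits
proportionally more `XS` jobs than a pure RAM-sum would.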