seto

Name: seto
Version: 2.2.2 (PyPI)
Summary: A Docker Swarm Deployment Manager
Author: Sébastien Demanou
Upload time: 2024-10-17 13:29:20
Requires Python: >=3.11, <3.12
License: Apache 2.0
Keywords: docker, swarm, manager
# Ṣeto

Ṣeto is a command-line tool designed to assist with setting up and managing
shared storage volumes using NFS or GlusterFS drivers. It simplifies the process
of configuring stack-based deployments, setting up manager and replica nodes,
creating and syncing shared volumes, and mounting and unmounting these volumes.

### Features

- **Compose Command**: Resolves Docker Compose files.
- **Setup Command**: Sets up manager and replica nodes.
- **Create Volumes Command**: Creates and syncs shared volumes across nodes.
- **Mount Volumes Command**: Mounts shared volumes on specified nodes.
- **Unmount Volumes Command**: Unmounts shared volumes from specified nodes.

### Usage

The main entry point for Ṣeto is the `seto` command. Below is a detailed
description of each subcommand and its options.

#### Global Options

These options are applicable to all subcommands:

- `--stack`: Required. Specifies the stack name.
- `--driver`: Required. Specifies the driver URI to use. Can be `nfs://username:password@hostname` or `gluster://username:password@hostname`.
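The driver URI follows standard URL syntax, so its parts can be inspected with a stock URL parser. A quick illustration with a made-up host (this is not part of Ṣeto itself, just a look at how such a URI decomposes):

```python
from urllib.parse import urlsplit

# Hypothetical driver URI; the scheme selects the storage backend.
uri = urlsplit("nfs://user:pass@storage-host")

print(uri.scheme)    # "nfs" (or "gluster")
print(uri.username)  # "user"
print(uri.password)  # "pass"
print(uri.hostname)  # "storage-host"
```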

#### Subcommands

##### 1. Compose Command

Resolves Docker Compose files.

```bash
seto --stack <stack-name> --driver <driver-uri> compose
```

Example:

```bash
seto --stack my-stack --driver nfs://user:pass@host compose
```

##### 2. Setup Command

Sets up the manager and replica nodes.

```bash
seto --stack <stack-name> --driver <driver-uri> setup --replica <replica-connection-strings>
```

- `--replica`: Required. Specifies the nodes to set up in the format `username:password@hostname`.

Example:

```bash
seto --stack my-stack --driver nfs://user:pass@host setup --replica user:pass@replica1 user:pass@replica2
```

##### 3. Create Volumes Command

Creates and syncs shared volumes across nodes.

```bash
seto --stack <stack-name> --driver <driver-uri> create-volumes --replica <replica-connection-strings> [--force]
```

- `--replica`: Required. Specifies the nodes where volumes will be created.
- `--force`: Optional. Forces volume data synchronization.

Example:

```bash
seto --stack my-stack --driver nfs://user:pass@host create-volumes --replica user:pass@replica1 user:pass@replica2 --force
```

##### 4. Mount Volumes Command

Mounts shared volumes on specified nodes.

```bash
seto --stack <stack-name> --driver <driver-uri> mount-volumes --replica <replica-connection-strings>
```

- `--replica`: Required. Specifies the nodes where volumes will be mounted.

Example:

```bash
seto --stack my-stack --driver nfs://user:pass@host mount-volumes --replica user:pass@replica1 user:pass@replica2
```

##### 5. Unmount Volumes Command

Unmounts shared volumes from specified nodes.

```bash
seto --stack <stack-name> --driver <driver-uri> unmount-volumes --replica <replica-connection-strings>
```

- `--replica`: Required. Specifies the nodes where volumes will be unmounted.

Example:

```bash
seto --stack my-stack --driver nfs://user:pass@host unmount-volumes --replica user:pass@replica1 user:pass@replica2
```

### Example Workflow

1. **Setup Manager and Replica Nodes**

```bash
seto --stack my-stack --driver nfs://user:pass@host setup --replica user:pass@replica1 user:pass@replica2
```

2. **Create Volumes**

```bash
seto --stack my-stack --driver nfs://user:pass@host create-volumes --replica user:pass@replica1 user:pass@replica2 --force
```

3. **Mount Volumes**

```bash
seto --stack my-stack --driver nfs://user:pass@host mount-volumes --replica user:pass@replica1 user:pass@replica2
```

4. **Unmount Volumes**

```bash
seto --stack my-stack --driver nfs://user:pass@host unmount-volumes --replica user:pass@replica1 user:pass@replica2
```

5. **Deploy Stack**

```bash
seto --stack my-stack --manager nfs://user@manager-host deploy
```

### Error Handling

The tool includes basic error handling to catch and report errors related to argument parsing and execution. If an error occurs, a message will be printed, and the tool will exit with a non-zero status code.
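That convention reduces to roughly the following pattern (a minimal sketch for illustration, not Ṣeto's actual source):

```python
import sys


def run(argv: list[str]) -> int:
    """Report any error and translate it into a non-zero exit status."""
    try:
        if not argv:
            raise ValueError("missing subcommand")
        # ... parse arguments and dispatch to the subcommand here ...
        return 0
    except Exception as err:
        print(f"seto: error: {err}", file=sys.stderr)
        return 1


# A missing subcommand is reported on stderr and mapped to exit status 1:
status = run([])
print(status)  # 1
```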

## Environment Setup

0. See the [cloud-init.yaml](cloud-init.yaml) file for prerequisites to install.

1. [Install Devbox](https://www.jetify.com/devbox/docs/installing_devbox/)

2. [Install `direnv` with your OS package manager](https://direnv.net/docs/installation.html#from-system-packages)

3. [Hook `direnv` into your shell](https://direnv.net/docs/hook.html)

4. **Load environment**

   At the top-level of your project run:

   ```sh
   direnv allow
   ```

   > The next time you launch your terminal and enter the top level of your
   > project, `direnv` will check for changes and automatically load the
   > Devbox environment.

5. **Install dependencies**

   ```sh
   make install
   ```

6. **Start environment**

   ```sh
   make shell
   ```

   This starts a preconfigured Tmux session.
   Please see the [.tmuxinator.yml](.tmuxinator.yml) file.

## Makefile Targets

Please see the [Makefile](Makefile) for the full list of targets.

## Docker Swarm Setup

To set up Docker Swarm, you'll first need to ensure you have Docker installed on
your machines. Then, you can initialize Docker Swarm on one of your machines to
act as the manager node, and join other machines as worker nodes. Below are the
general steps to set up Docker Swarm:

1. **Install Docker**

   Make sure Docker is installed on all machines that will participate in the
   Swarm cluster. You can follow the official Docker installation guide for your
   operating system.

2. **Choose Manager Node**

   Select one of your machines to act as the manager node. This machine will be
   responsible for managing the Swarm cluster.

3. **Initialize Swarm**

   SSH into the chosen manager node and run the following command to initialize
   Docker Swarm:

   ```bash
   docker swarm init --advertise-addr <MANAGER_IP>
   ```

   Replace `<MANAGER_IP>` with the IP address of the manager node. This command
   initializes a new Docker Swarm cluster with this machine as its manager node.

4. **Join Worker Nodes**

   After initializing the Swarm, Docker will output a command to join other
   nodes to the cluster as worker nodes. Run this command on each machine you
   want to join as a worker node.

   ```bash
   docker swarm join --token <TOKEN> <MANAGER_IP>:<PORT>
   ```

   Replace `<TOKEN>` with the token generated by the `docker swarm init` command
   and `<MANAGER_IP>:<PORT>` with the IP address and port of the manager node.
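   If the join command has scrolled away, the manager node can reprint it at
   any time:

   ```bash
   docker swarm join-token worker
   ```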

5. **Verify Swarm Status**

   Once all nodes have joined the Swarm, you can verify the status of the Swarm
   by running the following command on the manager node:

   ```bash
   docker node ls
   ```

   This command will list all nodes in the Swarm along with their status.

## License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License.
You may obtain a copy of the License at [LICENSE](https://gitlab.com/demsking/seto/blob/main/LICENSE).


            
