# Nami 🌊
**N**ode **A**ccess & **M**anipulation **I**nterface is a simple tool for managing connections to multiple remote instances (particularly GPU servers), with built-in GPU monitoring, file transfer capabilities via rsync/S3, and a template system for common tasks.
### Features
- **🔗 Multi-instance SSH management** - Add, list, and connect to remote servers
- **🌐 Heterogeneous environments** - Works across different Linux distros and cloud providers (Vast, AWS, Runpod, etc.)
- **📊 GPU monitoring** - GPU utilization and memory tracking
- **📁 File transfer** - Transfer files between instances directly via rsync or using S3 as intermediary
- **🗄️ NFS mesh mounting** - Set up and mount shared directories across selected instances
- **📜 Template system** - Execute pre-configured bash script templates on remote instances
- **⚙️ Configuration management** - Personal and global configuration storage
### Installation <img src="https://img.shields.io/pypi/v/nami-surf?color=blue&style=flat-square">
```bash
pip install -U nami-surf
```
### 🚀 Quick Start
```bash
# Add a remote instance
nami add gpu-box 192.168.1.100 22 --user ubuntu --description "Main GPU server"

# List all instances with GPU status
nami list

# Connect to an instance via SSH
nami ssh gpu-box

# Run a command on an instance
nami ssh gpu-box "nvidia-smi"

# Forward an instance’s configured port (e.g. Jupyter on 8888) to localhost
nami ssh gpu-box --forward

# Forward an arbitrary local port (override the one in config)
nami ssh gpu-box --forward 9000

# Transfer files between instances
nami transfer --source_instance local --dest_instance gpu-box --source_path ./data --dest_path ~/data

# Upload files to S3 from an instance
nami to_s3 --source_instance gpu-box --source_path ~/results --dest_path s3://bucket/experiment1/

# Download files from S3 to an instance
nami from_s3 --dest_instance gpu-box --source_path s3://bucket/dataset/ --dest_path ~/data/

# Execute a template on an instance
nami template gpu-box setup_conda
```
#### Example output
```text
$ nami list
📋 Configured Instances:
-----------------------------------------------------------------
🖥️ training-box (✅ Online)
  SSH: ubuntu@203.0.113.10:2222, local port: 8080
  Description: Primary training server
  GPUs:
    🟢 GPU0: 0% | Mem: 2% | NVIDIA A100 80GB
    🔴 GPU1: 100% | Mem: 94% | NVIDIA A100 80GB
    🟠 GPU2: 0% | Mem: 51% | NVIDIA A100 80GB

🖥️ idle-node (✅ Online)
  SSH: admin@203.0.113.11:2222
  Description: Spare capacity node
  GPUs:
    🟢 GPU0: 0% | Mem: 0% | NVIDIA H100

🖥️ backup-box (❌ Offline)
  SSH: root@203.0.113.12:2222
  Description: Cold backup server
```
### 🔧 Commands
#### Instance Management
```bash
# List all instances with GPU status
nami list

# Connect via SSH or run a command
nami ssh <instance_name> [command] [--forward [PORT]]

# Add a new instance
nami add <instance_name> <host> <port> [--user USER] [--local-port PORT] [--description DESC]

# Remove an instance
nami remove <instance_name>

# Add SSH public key to instance(s)
nami ssh-key add "<public_key>" [--instance <instance_name>]
```
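For example, to push a local public key (a sketch; the key path is illustrative, and the behaviour of omitting `--instance`, targeting all configured instances, is assumed from the flag being optional):
```bash
# Illustrative: push the contents of a local public key file to one instance
nami ssh-key add "$(cat ~/.ssh/id_ed25519.pub)" --instance gpu-box

# Without --instance, the key is presumably added to every configured instance
nami ssh-key add "$(cat ~/.ssh/id_ed25519.pub)"
```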
#### Configuration
```bash
# Set personal config value
nami config set <key> <value>

# Show configuration (all or specific key)
nami config show [key]
```
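For instance, the S3-related keys that appear in the `personal.yaml` example further below can be set from the command line (values here are illustrative):
```bash
# Illustrative values; these keys appear in the personal.yaml example below
nami config set s3_bucket my-team-bucket
nami config set aws_profile my-profile

# Check a single key or dump the whole configuration
nami config show s3_bucket
nami config show
```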
#### File Transfer
Nami supports two strategies for moving data between machines:
- **rsync** – Files are copied directly between the two instances over SSH. This is ideal for smaller transfers, and thanks to rsync’s synchronization logic only files that are new or have changed on the source are transmitted, saving both time and bandwidth.
- **s3** – Data is first uploaded from the source instance to an S3 bucket and then downloaded to the destination instance. Despite the extra hop, this approach is usually the fastest for large datasets because the upload/download steps can fully saturate network bandwidth and run in parallel.
```bash
# Transfer files between instances
nami transfer --source_instance SRC \
    --dest_instance DEST \
    --source_path PATH \
    [--dest_path PATH] \
    [--method rsync|s3] \
    [--exclude PATTERNS] \
    [--archive] \
    [--rsync_opts "OPTIONS"]

# Upload to S3
nami to_s3 \
    --source_instance INSTANCE \
    --source_path PATH \
    --dest_path S3_PATH \
    [--exclude PATTERNS] \
    [--archive] \
    [--aws_profile PROFILE]

# Download from S3
nami from_s3 \
    --dest_instance INSTANCE \
    --source_path S3_PATH \
    --dest_path PATH \
    [--exclude PATTERNS] \
    [--archive] \
    [--aws_profile PROFILE]
```
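Two illustrative invocations using the flags above (the instance names are taken from the `nami list` example; the paths and exclude pattern are placeholders):
```bash
# Direct rsync between two configured instances, skipping checkpoint files
nami transfer --source_instance gpu-box --dest_instance idle-node \
    --source_path ~/experiments --dest_path ~/experiments \
    --method rsync --exclude "*.ckpt"

# The same data staged through S3, which tends to suit large datasets better
nami transfer --source_instance gpu-box --dest_instance idle-node \
    --source_path ~/datasets --dest_path ~/datasets \
    --method s3 --archive
```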
#### NFS Mesh
Set up NFS exports on selected servers and mount a full mesh among them (each instance mounts every other instance, including itself via loopback) in one command.
```bash
# Export local /workspace on each server; mount peers under /mnt/peers/<instance>
nami nfs mount-mesh \
    --instances instance-1 instance-2 instance-3 \
    --export_dir /workspace \
    --mount_base /mnt/peers
```
After this completes on, say, `instance-1`, running `ls /mnt/peers` will show one directory per selected instance (including itself).
```bash
instance-1$ ls /mnt/peers
instance-1 instance-2 instance-3
```
Notes:
- The command installs and configures an NFS server on the selected instances (if needed) and exports `--export_dir`.
- On each instance, every peer is mounted under `--mount_base/<instance-name>` using NFSv4.
- Idempotent behavior: if the mount directory is a real non-empty directory (not a mount), it is skipped to avoid masking data; otherwise mounts/remounts as needed.
- Changing `--export_dir` updates existing mounts and `/etc/fstab` entries accordingly.
- Ensure network access to NFS ports (2049/TCP+UDP and 111/TCP+UDP) in your firewall/Security Groups.
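
A minimal firewall sketch for the last point, assuming Ubuntu with `ufw` and a private `10.0.0.0/24` network between the instances (cloud users would open the same ports in their Security Group or firewall rules instead):
```bash
# Allow NFS (2049) and rpcbind (111) from the other mesh members
sudo ufw allow from 10.0.0.0/24 to any port 2049 proto tcp
sudo ufw allow from 10.0.0.0/24 to any port 2049 proto udp
sudo ufw allow from 10.0.0.0/24 to any port 111 proto tcp
sudo ufw allow from 10.0.0.0/24 to any port 111 proto udp
```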
#### Templates
```bash
# Execute a template with variables
nami template <instance> <template_name> \
    [--var1 value1 --var2 value2 ...]
```
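For example, running the `setup_conda` template from the Quick Start with two variables (the variable names are hypothetical and depend on what your template script actually references):
```bash
# Hypothetical variable names; substitute the ones your template expects
nami template gpu-box setup_conda --env_name training --python_version 3.11
```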
### ⚙️ Configuration
Nami stores its configuration in `~/.nami/`:
- `config.yaml` - Instance definitions and global settings
- `personal.yaml` - User-specific configurations (S3 bucket, AWS profile, etc.)
- `templates/` - Custom bash script templates
#### Configuration File Structure
**`~/.nami/config.yaml`** - Main configuration file:
```yaml
instances:
  gpu-box:
    host: "192.168.1.100"
    port: 22
    user: "ubuntu"
    description: "Main GPU server"
    local_port: 8888  # optional - for SSH tunneling

  cloud-instance:
    host: "ec2-xxx.compute.amazonaws.com"
    port: 22
    user: "ec2-user"
    description: "AWS EC2 instance"

variables:
  # Global template variables available to all templates
  # var1: value1
  # ...
```
**`~/.nami/personal.yaml`** - User-specific settings:
```yaml
home_dir: "/workspace/<username>"

s3_bucket: "<username>"

aws_profile: "my-profile"
aws_access_key_id: XXXX
aws_secret_access_key: XXXX
aws_endpoint_url: https://XXXX.com

# Other personal settings
ssh_key: "~/.ssh/id_rsa_default"  # Default SSH key for all instances
ssh_keys:  # Per-instance SSH key overrides
  gpu-box: "~/.ssh/id_rsa_custom"
  cloud-instance: "~/.ssh/id_ed25519_custom"
```
#### Variable Priority
Template variables are resolved in this order (highest priority first):
1. Command-line variables (`--var key=value`)
2. Personal config (`personal.yaml`)
3. Global config (`config.yaml` variables section)
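
A worked illustration of the precedence for a single key, with hypothetical values and using the `--var1 value1` form from the Templates section:
```bash
# Suppose env_name is defined in all three places (hypothetical values):
#   config.yaml    variables: { env_name: "base" }     <- lowest priority
#   personal.yaml  env_name: "personal-env"            <- overrides config.yaml
#   command line   --env_name cli-env                  <- highest priority, wins
nami template gpu-box setup_conda --env_name cli-env
# The rendered template sees env_name = "cli-env"
```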