|proxmove|
==========

*The Proxmox VM migrator: migrates VMs between different Proxmox VE clusters.*

Migrating a virtual machine (VM) on a PVE-cluster from one node to
another is implemented in the Proxmox Virtual Environment (PVE). But
migrating a VM from one PVE-cluster to another is not.

proxmove helps you move VMs between PVE-clusters with minimal hassle.


Example invocation:

.. code-block:: console

    $ proxmove SOURCE_CLUSTER DEST_CLUSTER DEST_NODE DEST_STORAGE VM_NAME1...

But, to get it to work, you'll need to configure ``~/.proxmoverc``
first. See `Configuration`_.


Additional tips:

- Use ``--debug``; it doesn't flood your screen, but provides useful clues
  about what it's doing.
- If your network bridge is different on the ``DEST_CLUSTER``, use
  ``--skip-start``; that way *proxmove* "completes" successfully when
  done with the move. (You'll still need to change the bridge before
  starting the VM obviously.)
- If *proxmove* detects that a move was in progress, it will
  interactively attempt a resume.


Full invocation specification (``--help``):

.. code-block::

    usage: proxmove [-c FILENAME] [-n] [--bwlimit MBPS] [--no-verify-ssl]
                    [--skip-disks] [--skip-start] [--ssh-ciphers CIPHERS]
                    [--debug] [--ignore-exists] [-h] [--version]
                    source destination nodeid storage vm [vm ...]

    Migrate VMs from one Proxmox cluster to another.

    positional arguments:
      source                alias of source cluster
      destination           alias of destination cluster
      nodeid                node on destination cluster
      storage               storage on destination node
      vm                    one or more VMs (guests) to move

    optional arguments:
      -c FILENAME, --config FILENAME
                            use alternate configuration inifile
      -n, --dry-run         stop before doing any writes
      --bwlimit MBPS        limit bandwidth in Mbit/s
      --no-verify-ssl       skip ssl verification on the api hosts
      --skip-disks          do the move, but skip copying of the disks;
                            implies --skip-start
      --skip-start          do the move, but do not start the new instance
      --ssh-ciphers CIPHERS
                            comma separated list of ssh -c ciphers to
                            prefer, (aes128-gcm@openssh.com is supposed to
                            be fast if you have aes on your cpu); set to
                            "-" to use ssh defaults

    debug arguments:
      --debug               enables extra debug logging
      --ignore-exists       continue when target VM already exists; allows
                            moving to same cluster

    other actions:
      -h, --help            show this help message and exit
      --version             show program's version number and exit

    Cluster aliases and storage locations should be defined in
    ~/.proxmoverc (or see -c option). See the example proxmoverc.sample.
    It requires [pve:CLUSTER_ALIAS] sections for the proxmox "api" URL and
    [storage:CLUSTER_ALIAS:STORAGE_NAME] sections with "ssh", "path" and
    "temp" settings.


Example run
-----------

First you need to configure ``~/.proxmoverc``; see below.

When configured, you can do something like this:

.. code-block:: console

    $ proxmove apple-cluster banana-cluster node2 node2-ssd the-vm-to-move
    12:12:27: Attempt moving apple-cluster<e1400248> => banana-cluster<6669ad2c> (node 'node2'): the-vm-to-move
    12:12:27: - source VM the-vm-to-move@node1<qemu/565/running>
    12:12:27: - storage 'ide2': None,media=cdrom (host=<unknown>, guest=<unknown>)
    12:12:27: - storage 'virtio0': sharedsan:565/vm-565-disk-1.qcow2,format=qcow2,iops_rd=4000,iops_wr=500,size=50G (host=37.7GiB, guest=50.0GiB)
    12:12:27: Creating new VM 'the-vm-to-move' on 'banana-cluster', node 'node2'
    12:12:27: - created new VM 'the-vm-to-move--CREATING' as UPID:node2:00005977:1F4D78F4:57C55C0B:qmcreate:126:user@pve:; waiting for it to show up
    12:12:34: - created new VM 'the-vm-to-move--CREATING': the-vm-to-move--CREATING@node2<qemu/126/stopped>
    12:12:34: Stopping VM the-vm-to-move@node1<qemu/565/running>
    12:12:42: - stopped VM the-vm-to-move@node1<qemu/565/stopped>
    12:12:42: Ejected (cdrom?) volume 'ide2' (none) added to the-vm-to-move--CREATING@node2<qemu/126/stopped>
    12:12:42: Begin copy of 'virtio0' (sharedsan:565/vm-565-disk-1.qcow2,format=qcow2,iops_rd=4000,iops_wr=500,size=50G) to local-ssd
    12:12:42: scp(1) copy from '/pool0/san/images/565/vm-565-disk-1.qcow2' (on sharedsan) to 'root@node2.banana-cluster.com:/node2-ssd/temp/temp-proxmove/vm-126-virtio0'
    Warning: Permanently added 'node2.banana-cluster.com' (ECDSA) to the list of known hosts.
    vm-565-disk-1.qcow2   100%   50GB   90.5MB/s   09:26
    Connection to san.apple-cluster.com closed.
    12:22:08: Temp data '/node2-ssd/temp/temp-proxmove/vm-126-virtio0' on local-ssd
    12:22:08: Writing data from temp '/node2-ssd/temp/temp-proxmove/vm-126-virtio0' to '/dev/zvol/node2-ssd/vm-126-virtio0' (on local-ssd)
        (100.00/100%)
    Connection to node2.banana-cluster.com closed.
    12:24:25: Removing temp '/node2-ssd/temp/temp-proxmove/vm-126-virtio0' (on local-ssd)
    12:24:26: Starting VM the-vm-to-move@node2<qemu/126/stopped>
    12:24:27: - started VM the-vm-to-move@node2<qemu/126/running>
    12:24:27: Completed moving apple-cluster<e1400248> => banana-cluster<6669ad2c> (node 'node2'): the-vm-to-move

Before, ``the-vm-to-move`` was running on ``apple-cluster`` on ``node1``.

Afterwards, ``the-vm-to-move`` is running on ``banana-cluster`` on ``node2``.
On the ``apple-cluster``, ``the-vm-to-move`` has been stopped and renamed
to ``the-vm-to-move--MIGRATED``.


Configuration
-------------

Set up the ``~/.proxmoverc`` config file. First you need to define which
clusters you have. For example *apple-cluster* and *banana-cluster*.

.. code-block:: ini

    ; Example cluster named "apple-cluster" with two storage devices, one
    ; shared, and one which exists on a single node only.
    ;
    ; The user requires various permissions found in the PVEVMAdmin role (VM
    ; allocate + audit) and PVEAuditor role (Datastore audit).
    ;
    [pve:apple-cluster]
    api=https://user@pve:PASSWORD@apple-cluster.com:443

    ; Example cluster named "banana-cluster" with 2 storage devices; both
    ; storage devices exist on the respective nodes only.
    [pve:banana-cluster]
    api=https://user@pve:PASSWORD@banana-cluster.com:443

Next, the storage devices need to be configured. They are expected
to be reachable over SSH; both from the caller and from each other
(using SSH-agent forwarding).
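
One way to sanity-check that reachability is to hop from one storage
host to the next with agent forwarding enabled. A minimal sketch of how
such an ssh(1) invocation can be built (the ``ssh_argv`` helper and the
hostnames are illustrative, not part of proxmove):

.. code-block:: python

    def ssh_argv(user_host, remote_command, forward_agent=True):
        # Build an ssh(1) argument vector; -A enables SSH-agent
        # forwarding so the remote host can hop onward with your key.
        argv = ['ssh']
        if forward_agent:
            argv.append('-A')
        argv.extend([user_host, remote_command])
        return argv

    # Check that the SAN host can reach a destination node over SSH
    # (hostnames here are examples):
    print(ssh_argv('root@san.apple-cluster.com',
                   'ssh root@node2.banana-cluster.com true'))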

The following defines two storage devices for the *apple-cluster*, one shared
and one local to *node1* only.

If on *sharedsan*, the images are probably called something like
``/pool0/san/images/VMID/vm-VMID-disk1.qcow2``, while in Proxmox, they are
referred to as ``sharedsan:VMID/vm-VMID-disk1.qcow2``.
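
The mapping between the two forms is mechanical: strip the storage
alias and join the remainder with that storage's ``path=`` value. A
minimal sketch (the ``volume_to_path`` helper is illustrative, not
proxmove's actual code):

.. code-block:: python

    def volume_to_path(path_prefix, volume_id):
        # volume_id looks like 'sharedsan:565/vm-565-disk-1.qcow2';
        # drop the storage alias, keep the relative part.
        storage_alias, relpath = volume_id.split(':', 1)
        return '{}/{}'.format(path_prefix, relpath)

    print(volume_to_path('/pool0/san/images',
                         'sharedsan:565/vm-565-disk-1.qcow2'))
    # -> /pool0/san/images/565/vm-565-disk-1.qcow2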

.. code-block:: ini

    [storage:apple-cluster:sharedsan] ; "sharedsan" is available on all nodes
    ssh=root@san.apple-cluster.com
    path=/pool0/san/images
    temp=/pool0/san/private

    [storage:apple-cluster:local@node1] ; local disk on node1 only
    ssh=root@node1.apple-cluster.com
    path=/srv/images
    temp=/srv/temp

If you use ZFS storage on *banana-cluster*, the storage config could look
like this. Disk volumes exist on the ZFS filesystem ``node1-ssd/images``
and ``node2-ssd/images`` on the nodes *node1* and *node2* respectively.

Note that the ``temp=`` path is always a regular path.

.. code-block:: ini

    [storage:banana-cluster:node1-ssd@node1]
    ssh=root@node1.banana-cluster.com
    path=zfs:node1-ssd/images
    temp=/node1-ssd/temp

    [storage:banana-cluster:node2-ssd@node2]
    ssh=root@node2.banana-cluster.com
    path=zfs:node2-ssd/images
    temp=/node2-ssd/temp
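
The section names encode the relationships: ``pve:ALIAS`` for clusters,
``storage:ALIAS:NAME`` (optionally ``@NODE``) for their storage. A small
configparser sketch that walks such a file (section names are taken from
the examples above; this is illustrative, not proxmove's actual parser):

.. code-block:: python

    import configparser

    config = configparser.ConfigParser()
    config.read_string("""
    [pve:banana-cluster]
    api=https://user@pve:PASSWORD@banana-cluster.com:443

    [storage:banana-cluster:node1-ssd@node1]
    ssh=root@node1.banana-cluster.com
    path=zfs:node1-ssd/images
    temp=/node1-ssd/temp
    """)

    for section in config.sections():
        parts = section.split(':')
        if parts[0] == 'pve':
            print('cluster', parts[1])
        elif parts[0] == 'storage':
            name, _, node = parts[2].partition('@')
            print('storage', name, 'on', node or '(all nodes)')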

The config file looks better with indentation. The author suggests this layout:

.. code-block:: ini

    [pve:apple-cluster]
    ...

      [storage:apple-cluster:sharedsan]
      ...
      [storage:apple-cluster:local@node1]
      ...

    [pve:banana-cluster]
    ...

      [storage:banana-cluster:node1-ssd@node1]
      ...


Debugging
---------

If you run into a ``ResourceException``, you may want to patch proxmoxer 1.0.3
to show the HTTP error reason as well.

.. code-block:: udiff

    --- proxmoxer/core.py	2019-04-04 09:13:16.832961589 +0200
    +++ proxmoxer/core.py	2019-04-04 09:15:45.434175030 +0200
    @@ -75,8 +75,10 @@ class ProxmoxResource(ProxmoxResourceBas
             logger.debug('Status code: %s, output: %s', resp.status_code, resp.content)

             if resp.status_code >= 400:
    -            raise ResourceException("{0} {1}: {2}".format(resp.status_code, httplib.responses[resp.status_code],
    -                                                          resp.content))
    +            raise ResourceException('{0} {1} ("{2}"): {3}'.format(
    +                resp.status_code, httplib.responses[resp.status_code],
    +                resp.reason,  # reason = textual status_code
    +                resp.content))
             elif 200 <= resp.status_code <= 299:
                 return self._store["serializer"].loads(resp)

It might reveal a bug (or new feature), like::

    proxmoxer.core.ResourceException:
      500 Internal Server Error ("only root can set 'vmgenid' config"):
      b'{"data":null}'


License
-------

proxmove is free software: you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation, version 3 or any later version.


.. |proxmove| image:: assets/proxmove_head.png
    :alt: proxmove


Changes
-------

* **1.1** - 2021-06-11

  Features/fixes:

  - Add basic resume support.
  - Allow :port in ssh=user@host:port config.
  - Unmount cdrom media before moving.
  - Fix destination volume naming (use vm-NNN-disk-N instead of
    vm-NNN-virtioN).
  - Some documentation improvements.

* **1.0** - 2020-01-17

  Features/fixes:

  - Fix disk I/O resource hog overuse on newer PVE clusters.
  - Fix API connection to newer PVE clusters.
  - Add faster ssh cipher by default.
  - Work around Proxmox API timeout.
  - Improved usability through better logging and prepare checks.

* **0.1.0** - 2018-11-22

  Bugs fixed:

  - Show that not seeing a VM is probably a PVEAdmin-permissions issue.
  - Don't die if image_size cannot be determined.
  - Place the sample files and docs in a /usr/share/...proxmove subdir.

* **0.0.9** - 2017-03-28

  New features:

  - Added --no-verify-ssl option.

  Bugs fixed:

  - Fix str-cast bug with ZFS destination creation.
  - Fix ignoring of non-volume properties like "scsihw".

* **0.0.8** - 2017-01-26

  New features:

  - Partial LXC container move implemented. Not complete.

  Bugs fixed:

  - Allow ZFS paths to be more than just a pool name. Instead of
    e.g. ``path=zfs:mc9-8-ssd`` we now also allow
    ``path=zfs:rpool/data/images``. Closes #7.

* **v0.0.7** - 2016-10-07

  Bugs fixed:

  - Instead of trusting the "size=XXG" value, which may or may not be
    present in the storage volume config, it reads the QCOW header or
    ZFS volume size directly. It also checks that the values are
    available before attempting a move.

* **v0.0.6** - 2016-09-21

  New features:

  - Add --bwlimit in Mbit/s to limit bandwidth during transfer. Uses
    the scp(1) -l option, or the mbuffer(1) auxiliary for ZFS. As an
    added bonus, mbuffer may improve ZFS send/recv speed through
    buffering. Closes #4.
  - Add --skip-disks option to skip copying of the disks. Use this if
    you want to copy the disks manually. Closes #3.
  - Add --skip-start option to skip autostarting of the VM.
  - Add optional pv(1) pipe viewer progress bar to get an ETA on ZFS
    transfers.
  - Add hidden --debug option for more verbosity.
  - Add hidden --ignore-exists option that allows you to test moves
    between the same cluster by creating an alias (second config).

  Bugs fixed:

  - Format is not always specified in the properties. If it isn't, use
    the image filename suffix when available.
  - Sometimes old values aren't available in the "pending" list. Don't croak.
    Closes #2.
  - Began refactoring. Testing bettercodehub.com.
  - Also check whether temporary (renamed) VMs exist before starting.

* **v0.0.5** - 2016-08-30

  - Added support for ZFS to ZFS disk copy. QCOW2 to ZFS and ZFS to ZFS
    is now tested.

* **v0.0.4** - 2016-08-30

  - Major overhaul of configuration format and other changes.


TODO
----

* On missing disk (bad config), the zfs send stalls and mbuffer waits
  indefinitely.

* Rename "VM" to "Instance" so "LXC" containers don't feel misrepresented.

* Communicate with the storage API to check/configure storage (more easily).

* Create a ``--config`` command to set up a basic configuration. Combine with
  storage API reading.

* Fix so LXC-copying works (this is a bit tricky because of the necessity for
  a temporary image/tarball to install).

* Create a proxmoxer_api example that returns test data, so we can run tests.

* Let it work with pve-zsync -- a daemon that syncs data between nodes:
  https://pve.proxmox.com/wiki/PVE-zsync