backy
=====

:Name: backy
:Version: 2.5.1
:Summary: Block-based backup and restore utility for virtual machine images
:Home page: https://bitbucket.org/flyingcircus/backy
:Docs: https://pythonhosted.org/backy/
:Authors: Christian Theune <ct@flyingcircus.io>, Christian Kauhaus <kc@flyingcircus.io>, Daniel Kraft <daniel.kraft@d9t.de>
:License: GPL-3
:Keywords: backup
:Uploaded: 2023-10-12 11:54:43
========
Overview
========

Backy is a block-based backup and restore utility for virtual machine images.

Backy is intended to be:

* space-, time-, and network-efficient
* trivial to restore
* reliable

To achieve this, we rely on:

* space-efficient storage (CoW filesystems, content-hashed chunking),
* snapshot-capable sources for our volumes (e.g. Ceph RBD) that allow easy
  extraction of changes between snapshots,
* proven, existing low-level tools,
* a small, simple, and well-tested code base.
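The idea behind content-hashed chunking can be illustrated with standard
tools. This is a simplified sketch, not backy's actual on-disk format; it only
shows why identical chunks need to be stored once:

```shell
# Split an image into 4 MiB chunks and hash each one. Identical chunks
# produce identical hashes, so a content-addressed store keeps a single
# copy of each -- that is the deduplication idea.
mkdir -p /tmp/chunk-demo && cd /tmp/chunk-demo
dd if=/dev/zero of=image.img bs=1M count=8 2>/dev/null   # two identical 4 MiB chunks
split -b 4M image.img chunk.
sha256sum chunk.* | awk '{print $1}' | sort -u | wc -l   # -> 1 unique chunk to store
```

Here both chunks are all-zero, hash identically, and would occupy the space of
one chunk in a content-addressed store.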

We also have a few ground rules for the implementation:

* VM data is stored self-contained on the filesystem and can be
  moved between servers using regular filesystem tools such as cp or rsync.

* No third-party daemons are required to interact with backy: there is no
  database server. The scheduler daemon is only responsible for scheduling and
  simply calls regular CLI commands to perform a backup. Backy may interact
  with external daemons like Ceph or Consul, depending on the source storage
  implementation.
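Because backups are self-contained directories, migrating a VM's backup store
is plain file copying. A small sketch, using the `/srv/backy/<vm>` layout from
this document (`myvm` and the temporary paths are made up for the example):

```shell
# Moving a backup store between servers could look like:
#   rsync -aS /srv/backy/myvm/ newserver:/srv/backy/myvm/
# Locally, an ordinary recursive copy works just as well:
mkdir -p /tmp/fs-demo/srv-a/myvm /tmp/fs-demo/srv-b
echo "revision data" > /tmp/fs-demo/srv-a/myvm/chunk.0000
cp -a /tmp/fs-demo/srv-a/myvm /tmp/fs-demo/srv-b/
```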


Operations
==========

Full restore
------------

Check which revision to restore::

  $ backy -b /srv/backy/<vm> status

Set up the Ceph environment if your configuration requires it::

  $ export CEPH_ARGS="--id $HOSTNAME"

Restore the full image through a pipe::

  $ backy restore -r <revision> - | rbd import - <pool>/<rootimage>


Setting up backy
----------------

#. Create a sufficiently large backup partition using a CoW-capable filesystem
   like btrfs and mount it under `/srv/backy`.

#. Create a configuration file at `/etc/backy.conf`. See man page for details.

#. Start the scheduler with your favourite init system::

      backy -l /var/log/backy.log scheduler -c /path/to/backy.conf

   The scheduler runs in the foreground until it is terminated by SIGTERM.

#. Set up monitoring using `backy check`.

#. Set up log rotation for `/var/log/backy.log` and `/srv/backy/*/backy.log`.

The file paths given above match the built-in defaults, but paths are fully
configurable.
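With systemd as the init system, the scheduler invocation above could be
wrapped in a unit file along these lines. This is a hypothetical sketch (the
unit name and binary path are assumptions; adjust them to your site):

```ini
# /etc/systemd/system/backy.service (hypothetical example)
[Unit]
Description=backy backup scheduler
After=network.target

[Service]
# Same command line as shown above; the scheduler stays in the foreground.
ExecStart=/usr/bin/backy -l /var/log/backy.log scheduler -c /etc/backy.conf
# systemd's default SIGTERM on stop matches the shutdown behaviour described above.
Restart=on-failure

[Install]
WantedBy=multi-user.target
```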


Features
========

Telnet shell
------------

Telnet into localhost port 6023 to get an interactive console. The console can
currently be used to inspect the scheduler's live status.


Self-check
----------

Backy includes a self-checking facility. Invoke `backy check` to see if there is
a recent revision present for all configured backup jobs::

   $ backy check
   OK: 9 jobs within SLA

Both output and exit code are suited for processing with Nagios-compatible
monitoring systems.
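Since the exit code follows the Nagios plugin convention (zero for OK,
non-zero for problems), `backy check` can be wrapped in ordinary shell logic.
A sketch; `check_cmd` is a stub standing in for the real `backy check`:

```shell
# Hypothetical wrapper: map the exit code to a monitoring state.
# Replace 'check_cmd' with `backy check` in a real deployment.
check_cmd() { echo "OK: 9 jobs within SLA"; return 0; }

if output=$(check_cmd); then
    state="OK"
else
    state="CRITICAL"
fi
echo "$state: $output"
```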


Pluggable backup sources
------------------------

Backy comes with a number of plug-ins which provide block-file-like sources:

- **file** extracts data from simple image files living on a regular file
  system.
- **ceph-rbd** pulls data from RBD images using Ceph features like snapshots.
- **flyingcircus** is an extension to the `ceph-rbd` source which we use
  internally on the `Flying Circus`_ hosting platform. It uses advanced features
  like Consul integration.

.. _Flying Circus: http://flyingcircus.io/

It should be easy to write plug-ins for additional sources.


Adaptive verification
---------------------

Backy always verifies freshly created backups. The verification scope depends
on the source type: file-based sources are verified in full, while Ceph-based
sources are verified using random samples for runtime reasons.
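Sample-based verification amounts to comparing a handful of randomly chosen
blocks between source and backup instead of every block. A simplified sketch
of the idea (not backy's internal procedure; block size and sample count are
arbitrary here):

```shell
# Verify five random 4 KiB blocks of a copy against its source.
mkdir -p /tmp/verify-demo && cd /tmp/verify-demo
dd if=/dev/urandom of=source.img bs=4k count=256 2>/dev/null
cp source.img backup.img
ok=1
for blk in $(shuf -i 0-255 -n 5); do
    a=$(dd if=source.img bs=4k skip="$blk" count=1 2>/dev/null | sha256sum)
    b=$(dd if=backup.img bs=4k skip="$blk" count=1 2>/dev/null | sha256sum)
    [ "$a" = "$b" ] || ok=0
done
echo "sampled verification ok=$ok"
```

Sampling trades certainty for runtime: a corrupted block may escape a single
run, but repeated runs over time make persistent corruption likely to surface.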

Zero-configuration scheduling
-----------------------------

The backy scheduler is intended to run continuously. It will spread jobs
according to the configured run intervals over the day. After resuming from an
interruption, it will reschedule missed jobs so that SLAs are still kept if
possible.

Backup jobs can be triggered at specific times as well: just invoke `backy
backup` manually.
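Spreading jobs over an interval can be done deterministically, for example by
deriving a stable offset from the job name, so that restarts of the scheduler
yield the same slot. A sketch of that idea (backy's actual algorithm may
differ; the job name is made up):

```shell
# Hash the job name into a stable offset within a 24-hour interval.
interval=86400
job="vm-example"
hash=$(printf '%s' "$job" | sha256sum | cut -c1-8)
offset=$(( 0x$hash % interval ))
echo "job $job runs at offset ${offset}s into each ${interval}s interval"
```

Because the offset depends only on the job name, every job keeps its slot
across scheduler restarts, while different names scatter across the interval.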


Performance
===========

Backy is designed to use all of the available storage and network bandwidth by
running several instances in parallel. The backing storage must be prepared for
this kind of (mixed) load. Finding optimal settings needs a bit of experimentation
given that hardware and load profiles differ from site to site. The following
section contains a few points to start off.

Storage backend
---------------

If the backing storage is a RAID array, its stripe size should be aligned with
the filesystem. We have had good experience with 256 KiB stripes. Also check
for 512B/4K block misalignments on HDDs. We usually run RAID-6 and have seen
reasonable performance with both hardware and software RAID.

Filesystem
----------

We generally recommend XFS since it provides a high degree of parallelism and is
able to handle very large directories well.

Note that the standard `cfq` I/O scheduler is not a good pick for
highly parallel bulk I/O on multiple drives. Use `deadline` or `noop`.

Kernel
------

Since backy performs a lot of metadata operations, make sure that inodes and
dentries are not evicted from the VFS cache too early. We found that lowering
the `vm.vfs_cache_pressure` sysctl can make quite a difference in total backup
performance. We're currently getting good results setting it to `10`.
You may also want to increase `vm.min_free_kbytes` to avoid page allocation
errors on 10 GbE network interfaces.
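The tunables above can be made persistent with a sysctl configuration
fragment. The file path is a hypothetical example; the `vfs_cache_pressure`
value is the one mentioned above, while `min_free_kbytes` is left commented
out since the right value depends on RAM size and NIC driver:

```ini
# /etc/sysctl.d/90-backy.conf (hypothetical path)
# Keep inodes/dentries cached longer for backy's metadata-heavy workload.
vm.vfs_cache_pressure = 10
# Raise if you see page allocation errors on 10 GbE interfaces:
# vm.min_free_kbytes = <site-specific value>
```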

Development
===========

Backy uses `poetry` to manage its dependencies: run `poetry install` to
install them from PyPI.

If you don't have `backy` in your PATH when developing, enter the poetry
virtualenv with `poetry shell` or, if you're using Nix, with `nix develop`.

You can build backy with `poetry build`, which places a wheel and a source
tarball in the `dist` directory, or by running `nix build`.

Authors
=======

* Christian Theune <ct@flyingcircus.io>
* Christian Kauhaus <kc@flyingcircus.io>
* Daniel Kraft <daniel.kraft@d9t.de>


License
=======

GPLv3


Links
=====

* `Bitbucket repository <https://bitbucket.org/flyingcircus/backy>`_
* `PyPI page <https://pypi.python.org/pypi/backy>`_
* `Online docs <http://pythonhosted.org/backy/>`_
* `Build server <https://builds.flyingcircus.io/job/backy/>`_

.. vim: set ft=rst spell spelllang=en sw=3:

=========
Changelog
=========

Unreleased
==========

See the fragment files in the `changelog.d` directory.

.. scriv-insert-here

.. _changelog-2.5.1:

2.5.1 (2023-10-12)
==================

- Fix telnet `jobs` command

.. _changelog-2.5.0:

2.5.0 (2023-10-11)
==================

- Separate pool into fast and slow. According to our statistics 90% of jobs are
  faster than 10 minutes and after that they might run for hours. We now run
  the configured amount of workers both in a fast and slow pool so that long
  jobs may delay other long jobs but fast jobs should be able to squeeze
  through quickly.

- Port to Python 3.10+.

- Continue deleting snapshots even if some are protected.

- Improve detection for 'whole object' RBD export: we failed to detect this if
  Ceph is built with wrong version options and reports itself as 'Development'
  instead of a real version.

- Do not count running jobs as overdue.

- Regularly log overdue jobs.

- Delete tags from former schedules.

- Add a `forget` subcommand.

- Add compatibility to changed `rbd showmapped` output format in Ceph Nautilus.
  Ceph Jewel and Luminous clusters remain supported.

- Use structlog for logging.

- Add static type checking with mypy.

- List all manual tags in `backy check`.

- Fix crash for non-numeric Ceph version strings.

- Fix a missing lock on distrust.

- Use `scriv` for changelog management.

- Restore default value for missing entries on daemon reload

- Crash on invalid config while reloading

- Remove the `find` subcommand and `nbd-server`.

- Compute parent revision at runtime

- Remove the `last` and `last.rev` symlink

- Distrust revisions on verification failure

- Quarantine differences between source and backend

- Fix race condition when reloading while generating a status file

- Add nix flake support

- Call configured script on (scheduler triggered) backup completion

2.4.3 (2019-04-17)
==================

- Avoid superfluous work when a source is missing, specifically avoid
  unnecessary Ceph/KVM interaction. (#27020)


2.4.2 (2019-04-17)
==================

- Optimize handling of offline / deletion-pending VMs: don't just time out.
  Take snapshots and make backups (as long as the image exists). (#22345)

- Documentation update about performance implications.

- Clean up build system instrumentation to get our Jenkins going again.


2.4.1 (2018-12-06)
==================

- Optimization: bundle 'unlink' calls to improve cache locality of VFS metadata

- Optimization: load all known chunks at startup to avoid further seeky IO
  and large metadata parsing, this also speeds up purges.

- Reduce OS VFS cache thrashing by explicitly indicating data we don't expect
  to read again.


2.4 (2018-11-30)
================

- Add support for Ceph Jewel's --whole-object diff export. (#24636)

- Improve garbage collection of old snapshot requests. (#100024)

- Switch to a new chunked store format: remove one level of directories
  to massively reduce seeky IO.

- Reorder and improve potentially seeky IOPS in the per-chunk write effort.
  Do not create directories lazily.

- Require Python 3.6.

2.3 (2018-05-16)
================

- Add major operational support for handling inconsistencies.

- Operators can mark revisions as 'distrusted', either all for a backup, or
  individual revisions or time ranges.

- If the newest revision is distrusted then we always perform a full backup
  instead of a differential backup.

- If there is any revision that is distrusted then every chunk will be written
  even if it exists (once during a backup, identical chunks within a backup will
  be trusted once written).

- Implement an explicit verification procedure that will automatically trigger
  after a backup and will verify distrusted revisions and either delete them or
  mark them as verified.

- Safety belt: always verify content hashes when reading chunks.

- Improve status report logging.

2.2 (2017-08-28)
================

- Introduce a new backend storage mechanism, independent of BTRFS: instead of
  using COW, use a directory with content-hashed 4MiB chunks of the file.
  Deduplication happens automatically based on the 4MiB chunks.

- Made the use of fadvise features more opportunistic: don't fail if they are
  not supported by the backend as they are only optimizations anyway.

- Introduce an exponential backoff after a failed backup: instead of retrying
  quickly and thus hogging the queue (in case timeouts are involved),
  we now back off exponentially, starting with 2 minutes, then 4, then 8, ...
  until we reach 6 hours as the maximum backoff time.

  You can still trigger an explicit run using the telnet "run" command for
  the affected backup. This will put the backup in the run queue immediately
  but will not reset the error counter or backoff interval in case it
  fails again.

- Performance improvement on restore: don't read the restore target. We don't
  have to optimize CoW in this case. (#28268)

2.1.5 (2016-07-01)
==================

- Bugfix release: fix data corruption bug in the new full-always mode. (FC
  #21963)


2.1.4 (2016-06-20)
==================

- Add "full-always" flag to Ceph and Flyingcircus sources. (FC #21960)

- Rewrite full backup code to make use of shallow copies to conserve disk space.
  (FC #21960)


2.1.3 (2016-06-09)
==================

- Fix new timeout to be 5 minutes by default, not 5 days.

- Do not sort blocks any longer: we do not gain much from seeking over
  volumes with random blocks anyway, and this helps achieve a more even
  distribution with the new timeout over multiple runs.


2.1.2 (2016-06-09)
==================

- Fix backup of images containing holes (#33).

- Introduce a (short) timeout for partial image verification. Especially
  very large images and images that are backed up frequently do not profit
  from running for hours to verify them, blocking further backups.
  (FC #21879)

2.1.1 (2016-01-15)
==================

- Fix logging bugs.

- Shut down daemon loop cleanly on signal reception.


2.1 (2016-01-08)
================

- Add optional regex filter to the `jobs` command in the telnet shell.

- Provide list of failed jobs in check output, not only the total number.

- Add `status-interval`, `telnet-addrs`, and `telnet-port` configuration
  options.

- Automatically recover from missing/damaged last or last.rev symlinks (#19532).

- Use `{BASE_DIR}/.lock` as daemon lock file instead of the status file.

- Usability improvements: count jobs, more informative log output.

- Support restoring to block special files like LVM volumes (#31).


2.0 (2015-11-06)
================

- backy now accepts a `-l` option to specify a log file. If no such option is
  given, it logs to stdout.

- Add `backy find -r REVISION` subcommand to query image paths from shell scripts.

- Fix monitoring bug where partially written images made the check go green
  (#30).

- Greatly improve error handling and detection of failed jobs.

- Performance improvement: turn off line buffering in bulk file operations
  (#20).

- The scheduler reports child failures (exit status > 0) now in the main log.

- Fix fallocate() behaviour on 32 bit systems.

- The `flyingcircus` source type now requires 3 arguments: vm, pool, image.


2.0b3 (2015-10-02)
==================

- Improve telnet console.

- Provide Nix build script.

- Generate `requirements.txt` automatically from buildout's `versions.cfg`.


2.0b2 (2015-09-15)
==================

- Introduce scheduler and rework the main backup command. The backy
  command is now only responsible for dealing with individual backups.

  It no longer cares about scheduling.

  A new daemon and a central configuration file are responsible for that
  now. However, the daemon simply calls out to the existing backy command,
  so we can still interact with the system manually even if we do not
  use the daemon.

- Add consul integration for backing up Flying Circus root disk images
  with clean snapshots (by asking fc.qemu to use fs-freeze before preparing
  a Ceph snapshot).

- Switch to shorter UUIDs. Existing files with old UUIDs are compatible.

- Turn the configuration format into YAML. Old files are still compatible.
  New configs will be generated as YAML.

- Performance: defrag all new files automatically to avoid btrfs degrading
  extent performance. It appears this doesn't completely duplicate all CoW
  data. Will have to monitor this in the future.

2.0b1 (2014-07-07)
==================

- Clean up docs.

- Add classifiers in setup.py.

- More or less complete rewrite expecting a copy-on-write filesystem as the
  target.

- Flexible backup scheduling using free-form tags.

- Compatible with Python 3.2-3.4.

- Initial open source import as provided by Daniel Kraft (D9T).

.. vim: set ft=rst spell spelllang=en:

            
