# glacier-upload
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit)](https://github.com/pre-commit/pre-commit)
[![pypi](https://img.shields.io/pypi/v/glacier_upload)](https://pypi.org/project/glacier_upload/)
[![License-GPLv3](https://img.shields.io/github/license/tbumi/glacier-upload)](https://github.com/tbumi/glacier-upload/blob/main/LICENSE)
A helper tool to upload and manage archives in
[Amazon S3 Glacier](https://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html)
Vaults. Amazon S3 Glacier is a cloud storage service optimized for long-term
storage at a relatively low price. It is NOT to be confused with the Amazon S3
Glacier storage classes (Instant Retrieval, Flexible Retrieval, and Deep
Archive), which use the S3 API and do not deal with vaults and archives.
## Installation
The minimum required Python version is 3.9. To install, run this in your terminal:
```
$ pip install glacier_upload
```
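Because `glacier_upload` is used as a command-line tool, it can alternatively be
installed with [pipx](https://pipx.pypa.io/) (not a requirement of this project)
to keep it isolated from other Python packages:
```
pipx install glacier_upload
```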
## Usage
### Prerequisites
To upload an archive to an Amazon S3 Glacier vault, ensure you have:
- Created an AWS account
- Created an Amazon S3 Glacier vault from the AWS CLI tool or the Management
Console
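For example, assuming your AWS credentials are already configured (e.g. via
`aws configure`), a vault can be created with the AWS CLI; the vault name below
is a placeholder:
```
# Create a Glacier vault in the account of the current credentials ("-")
aws glacier create-vault --account-id - --vault-name my-vault
```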
### Uploading an archive
An upload can be performed by running `glacier upload` followed by the vault
name and the file name(s) that you want to upload.
```
glacier upload VAULT_NAME FILE_NAME [FILE_NAME ...]
```
`FILE_NAME` can be one or more files or directories.
The script will:
1. Read the file(s)
2. Consolidate them into a single `.tar.xz` archive if multiple `FILE_NAME`s are
   specified or any `FILE_NAME` is a directory (see the example below)
3. Upload the file in one go if it is less than 100 MB in size; otherwise:
4. Split the file into chunks
5. Spawn a number of threads that upload the chunks in parallel. The script does
   not read the entire file into memory, only the parts it is currently
   processing.
6. Return the archive ID when complete. Save this archive ID somewhere safe for
   retrieval purposes, because Amazon S3 Glacier does not provide a realtime
   list of archives. See the "Requesting an inventory" section below for
   details.
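For example, uploading a directory or several paths at once (the vault name and
paths below are placeholders) consolidates everything into a single `.tar.xz`
archive before uploading:
```
# A single directory is packed into one .tar.xz archive, then uploaded
glacier upload my-vault ./photos/2023/
# Multiple files/directories are likewise consolidated into one archive
glacier upload my-vault notes.txt reports/ photos/
```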
There are additional options to customize your upload, such as adding a
description to the archive or configuring the number of threads or the part
size. Run `glacier upload --help` for more information.
If a multipart upload is interrupted partway through (by an exception, a manual
interruption, or any other reason), the script will show you the upload ID.
That upload ID can be used to resume the upload by running the same command
with the `--upload-id` option added, like so:
```
glacier upload --upload-id UPLOAD_ID VAULT_NAME FILE_NAME [FILE_NAME ...]
```
### Retrieving an archive
Retrieving an archive from Glacier requires two steps. First, initiate a
"retrieval job" using:
```
glacier archive init-retrieval VAULT_NAME ARCHIVE_ID
```
To see a list of archive IDs in a vault, see "Requesting an inventory" below.
The retrieval job will take some time to complete. The next command checks
whether the job is complete and, if it is, downloads the archive to
`FILE_NAME`:
```
glacier archive get VAULT_NAME JOB_ID FILE_NAME
```
### Requesting an inventory
Vaults do not provide realtime access to a list of their contents. To know what
a vault contains, you need to request an inventory of the vault, in a similar
manner to retrieving an archive. To initiate an inventory job, run:
```
glacier inventory init-retrieval VAULT_NAME
```
The inventory job will take some time to complete. The next command checks
whether the job is complete and, if it is, retrieves the inventory:
```
glacier inventory get VAULT_NAME JOB_ID
```
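Putting the two sections above together, a typical end-to-end retrieval might
look like the sketch below. The vault name, job IDs, archive ID, and output
file name are placeholders, and jobs may take several hours to complete:
```
# 1. Request an inventory of the vault and note the job ID
glacier inventory init-retrieval my-vault
# 2. Once the job is done, download the inventory (it lists archive IDs)
glacier inventory get my-vault INVENTORY_JOB_ID
# 3. Start a retrieval job for an archive ID found in the inventory
glacier archive init-retrieval my-vault ARCHIVE_ID
# 4. Once that job is done, download the archive to a local file
glacier archive get my-vault ARCHIVE_JOB_ID retrieved.tar.xz
```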
### Deleting an archive, deleting an upload job, creating/deleting a vault, etc.
All jobs other than uploading an archive and requesting/downloading an inventory
or archive can be done using the AWS CLI. Those functionalities are not
implemented here to avoid duplicating work and to minimize the maintenance
effort of this package.
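For example, these AWS CLI commands cover some of those tasks (the vault name,
archive ID, and upload ID are placeholders; `--account-id -` refers to the
account of the current credentials):
```
# Abort an in-progress multipart upload
aws glacier abort-multipart-upload --account-id - --vault-name my-vault --upload-id UPLOAD_ID
# Delete an archive by its ID
aws glacier delete-archive --account-id - --vault-name my-vault --archive-id ARCHIVE_ID
# Delete a vault (it must be empty)
aws glacier delete-vault --account-id - --vault-name my-vault
```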
## Contributing
Contributions and/or bug fixes are welcome! Just make sure you've read the
requirements below, then feel free to fork, create a topic branch, make your
changes, and submit a PR.
### Development Requirements
Before committing to this repo, install [poetry](https://python-poetry.org/) on
your local machine, then run these commands to set up your environment:
```sh
poetry install
pre-commit install
```
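To check every file against the hooks before committing (assuming the
environment above is set up), you can run:
```sh
poetry run pre-commit run --all-files
```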
All code is formatted with [black](https://github.com/psf/black). Consider
installing an integration for it in your favourite text editor.