[![PyPI version](https://badge.fury.io/py/cloud-files.svg)](https://badge.fury.io/py/cloud-files) [![Test Suite](https://github.com/seung-lab/cloud-files/workflows/Test%20Suite/badge.svg)](https://github.com/seung-lab/cloud-files/actions?query=workflow%3A%22Test+Suite%22)
CloudFiles: Fast access to cloud storage and local FS.
========
```python
from cloudfiles import CloudFiles, CloudFile, dl
results = dl(["gs://bucket/file1", "gs://bucket2/file2", ... ]) # shorthand
cf = CloudFiles('gs://bucket', progress=True) # s3://, https://, and file:// also supported
results = cf.get(['file1', 'file2', 'file3', ..., 'fileN']) # threaded
results = cf.get(paths, parallel=2) # threaded and two processes
file1 = cf['file1']
part = cf['file1', 0:30] # first 30 bytes of file1
cf.put('filename', content)
cf.put_json('filename', content)
cf.puts([{
'path': 'filename',
'content': content,
}, ... ]) # automatically threaded
cf.puts(content, parallel=2) # threaded + two processes
cf.puts(content, storage_class="NEARLINE") # apply vendor-specific storage class
cf.put_jsons(...) # same as puts
cf['filename'] = content
for fname in cf.list(prefix='abc123'):
print(fname)
list(cf) # same as list(cf.list())
cf.delete('filename')
del cf['filename']
cf.delete([ 'filename_1', 'filename_2', ... ]) # threaded
cf.delete(paths, parallel=2) # threaded + two processes
boolean = cf.exists('filename')
results = cf.exists([ 'filename_1', ... ]) # threaded
cf.move("a", "gs://bucket/b")
cf.moves("gs://bucket/", [ ("a", "b") ])
cf.touch("example")
cf.touch([ "example", "example2" ])
# for single files
cf = CloudFile("gs://bucket/file1")
info = cf.head()
binary = cf.get()
cf.put(binary)
cf[:30] # get first 30 bytes of file
```
CloudFiles was developed to access files from object storage without ever touching disk. The goal was to reliably and rapidly access a petabyte of image data, broken down into tens to hundreds of millions of files, accessed in parallel across thousands of cores. CloudFiles has been used to process dozens of images, many of them in the hundreds-of-terabytes range. It has reliably read and written tens of billions of files to date.
## Highlights
1. Fast file access with transparent threading and optional multi-process access with local file locking.
2. Supports Google Cloud Storage, Amazon S3, local filesystems, and arbitrary web servers, making hybrid or multi-cloud easy.
3. Robust to flaky network connections. Uses exponential random window retries to avoid network collisions on a large cluster. Validates MD5 for GCS and S3.
4. gzip, brotli, bz2, zstd, and xz compression.
5. Supports HTTP Range reads.
6. Supports green threads, which are important for achieving maximum performance on virtualized servers.
7. High efficiency transfers that avoid compression/decompression cycles.
8. High speed gzip decompression using libdeflate (compared with zlib).
9. Bundled CLI tool.
10. Accepts iterator and generator input.
11. Resumable bulk transfers.
12. Supports composite parallel upload for GCS and multi-part upload for AWS S3.
13. Supports s3 and GCS internal copies to avoid unnecessary data movement.
## Installation
```bash
pip install cloud-files
pip install cloud-files[test] # to enable testing with pytest
```
If you run into trouble installing dependencies, make sure you're using at least Python 3.6 and have updated pip. On Linux, some dependencies require manylinux2010 or manylinux2014 binaries, which earlier versions of pip do not search for. macOS, Linux, and Windows are supported platforms.
### Credentials
You may wish to install credentials under `~/.cloudvolume/secrets`. CloudFiles is descended from CloudVolume, and for now we'll leave the same configuration structure in place.
You need credentials only for the services you'll use. The local filesystem doesn't need any. Google Storage ([setup instructions here](https://github.com/seung-lab/cloud-volume/wiki/Setting-up-Google-Cloud-Storage)) will attempt to use default account credentials if no service account is provided.
If neither of those two conditions apply, you need a service account credential. `google-secret.json` is a service account credential for Google Storage, `aws-secret.json` is a service account for S3, etc. You can support multiple projects at once by prefixing the credential filename with the bucket you plan to access. `google-secret.json` will be your default service account, but if you also want to access bucket ABC, you can provide `ABC-google-secret.json` and you'll have simultaneous access to your ordinary buckets and ABC. The secondary credentials are matched on the basis of the bucket name, not the project name.
```bash
mkdir -p ~/.cloudvolume/secrets/
mv aws-secret.json ~/.cloudvolume/secrets/ # needed for Amazon
mv google-secret.json ~/.cloudvolume/secrets/ # needed for Google
mv matrix-secret.json ~/.cloudvolume/secrets/ # needed for Matrix
```
#### `aws-secret.json` and `matrix-secret.json`
Create an [IAM user service account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) that can read, write, and delete objects from at least one bucket.
```json
{
"AWS_ACCESS_KEY_ID": "$MY_AWS_ACCESS_KEY_ID",
"AWS_SECRET_ACCESS_KEY": "$MY_SECRET_ACCESS_TOKEN",
"AWS_DEFAULT_REGION": "$MY_AWS_REGION" // defaults to us-east-1
}
```
#### `google-secret.json`
You can create the `google-secret.json` file [here](https://console.cloud.google.com/iam-admin/serviceaccounts). You don't need to fill in the JSON by hand; the example below shows what the end result should look like. The account should be able to read, write, and delete objects from at least one bucket.
```json
{
"type": "service_account",
"project_id": "$YOUR_GOOGLE_PROJECT_ID",
"private_key_id": "...",
"private_key": "...",
"client_email": "...",
"client_id": "...",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": ""
}
```
## API Documentation
Note that the "Cloud Costs" mentioned below are current as of June 2020 and are subject to change. As of this writing, S3 and Google use identical cost structures for these operations.
### Constructor
```python
# import gevent.monkey # uncomment when using green threads
# gevent.monkey.patch_all(thread=False)
from cloudfiles import CloudFiles
cf = CloudFiles(
cloudpath, progress=False,
green=None, secrets=None, num_threads=20,
use_https=False, endpoint=None, request_payer=None,
composite_upload_threshold = int(1e8)
)
# cloudpath examples:
cf = CloudFiles('gs://bucket/') # google cloud storage
cf = CloudFiles('s3://bucket/') # Amazon S3
cf = CloudFiles('s3://https://s3emulator.com/coolguy/') # alternate s3 endpoint
cf = CloudFiles('file:///home/coolguy/') # local filesystem
cf = CloudFiles('mem:///home/coolguy/') # in memory
cf = CloudFiles('https://website.com/coolguy/') # arbitrary web server
```
* cloudpath: The path to the bucket you are accessing. The path is formatted as `$PROTOCOL://BUCKET/PATH`. Files will then be accessed relative to the path. The protocols supported are `gs` (GCS), `s3` (AWS S3), `file` (local FS), `mem` (RAM), and `http`/`https`.
* progress: Whether to display a progress bar when processing multiple items simultaneously.
* green: Use green threads. For this to work properly, you must uncomment the top two lines. Green threads are used automatically upon monkey patching if green is None.
* secrets: Provide secrets dynamically rather than fetching from the credentials directory `$HOME/.cloudvolume/secrets`.
* num_threads: Number of simultaneous requests to make. Usually 20 per core is pretty close to optimal unless file sizes are extreme.
* use_https: `gs://` and `s3://` require credentials to access their files. However, each has a read-only https endpoint that sometimes requires no credentials. If True, automatically convert `gs://` to `https://storage.googleapis.com/` and `s3://` to `https://s3.amazonaws.com/`.
* endpoint: (s3 only) provide an alternate endpoint than the official Amazon servers. This is useful for accessing the various S3 emulators offered by on-premises deployments of object storage.
* request_payer: Specify the account that should be charged for requests towards the bucket, rather than the bucket owner.
* `gs://`: `request_payer` can be any Google Cloud project id. Please refer to the documentation for [more information](https://cloud.google.com/storage/docs/requester-pays).
* `s3://`: `request_payer` must be `"requester"`. The AWS account associated with the AWS_ACCESS_KEY_ID will be charged. Please refer to the documentation for [more information](https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html)
Using `mem://` rather than a plain `dict` gives you the same interface as the other backends and applies compression automatically.
### get / get_json
```python
# Let 'filename' be the file b'hello world'
binary = cf.get('filename')
binary = cf['filename']
>> b'hello world'
compressed_binary = cf.get('filename', raw=True)
binaries = cf.get(['filename1', 'filename2'])
>> [ { 'path': 'filename1', 'content': b'...', 'byte_range': (None, None), 'error': None }, { 'path': 'filename2', 'content': b'...', 'byte_range': (None, None), 'error': None } ]
# total provides info for progress bar when using generators.
binaries = cf.get(generator(), total=N)
binary = cf.get({ 'path': 'filename', 'start': 0, 'end': 5 }) # fetches 5 bytes
binary = cf['filename', 0:5] # only fetches 5 bytes
binary = cf['filename'][0:5] # same result, fetches 11 bytes
>> b'hello' # represents byte range 0-4 inclusive of filename
binaries = cf[:100] # download the first 100 files
```
`get` supports several different styles of input. The simplest takes a scalar filename and returns the contents of that file. However, you can also specify lists of filenames, a byte range request, and lists of byte range requests. You can provide a generator or iterator as input as well. Order is not guaranteed.
When more than one file is provided at once, the download will be threaded using preemptive or cooperative (green) threads depending on the `green` setting. If `progress` is set to true, a progress bar will be displayed that counts down the number of files to download.
`get_json` is the same as `get` but parses the returned binary as UTF-8 encoded JSON and returns the decoded object. Order is guaranteed.
Cloud Cost: Usually about $0.40 per million requests.
### put / puts / put_json / put_jsons
```python
cf.put('filename', b'content')
cf['filename'] = b'content'
cf.put_json('digits', [1,2,3,4,5])
cf.puts([{
'path': 'filename',
'content': b'...',
'content_type': 'application/octet-stream',
'compress': 'gzip',
'compression_level': 6, # parameter for gzip or brotli compressor
'cache_control': 'no-cache',
}])
cf.puts([ (path, content), (path, content) ], compress='gzip')
cf.put_jsons(...)
# Definition of put, put_json is identical
def put(
self,
path, content,
content_type=None, compress=None,
compression_level=None, cache_control=None,
raw=False
)
# Definition of puts, put_jsons is identical
def puts(
self, files,
content_type=None, compress=None,
compression_level=None, cache_control=None,
total=None, raw=False
)
```
The PUT operation is the most complex operation because it's so configurable. Sometimes you want one file, sometimes many. Sometimes you want to configure each file individually, sometimes you want to standardize a bulk upload. Sometimes it's binary data, but oftentimes it's JSON. We provide a simpler interface for uploading a single file `put` and `put_json` (singular) versus the interface for uploading possibly many files `puts` and `put_jsons` (plural).
In order to upload many files at once (which is much faster due to threading), you need to minimally provide the `path` and `content` for each file. This can be done either as a dict containing those fields or as a tuple `(path, content)`. If dicts are used, the fields (if present) specified in the dict take precedence over the parameters of the function. You can mix tuples with dicts. The input to puts can be a scalar (a single dict or tuple) or an iterable such as a list, iterator, or generator.
Cloud Cost: Usually about $5 per million files.
### delete
```python
cf.delete('filename')
cf.delete([ 'file1', 'file2', ... ])
del cf['filename']
```
This will issue a delete request for each file specified in a threaded fashion.
Cloud Cost: Usually free.
### exists
```python
cf.exists('filename')
>> True # or False
cf.exists([ 'file1', 'file2', ... ])
>> { 'file1': True, 'file2': False, ... }
```
Scalar input results in a simple boolean output while iterable input returns a dictionary of input paths mapped to whether they exist. In iterable mode, a progress bar may be displayed and threading is utilized to improve performance.
Cloud Cost: Usually about $0.40 per million requests.
### size
```python
cf.size('filename')
>>> 1337 # size in bytes
cf.size([ 'file1', 'nonexistent', 'empty_file', ... ])
>>> { 'file1': 392, 'nonexistent': None, 'empty_file': 0, ... }
```
The output is the stored size of each file in bytes. If a file doesn't exist, `None` is returned. Scalar input returns a single integer (or `None`) while iterable input returns a dictionary mapping each input path to its size. In iterable mode, a progress bar may be displayed and threading is utilized to improve performance.
Cloud Cost: Usually about $0.40 per million requests.
### list
```python
cf.list() # returns generator
list(cf) # same as list(cf.list())
cf.list(prefix="abc")
cf.list(prefix="abc", flat=True)
```
Recall that in object storage, directories do not really exist and file paths are really a key-value mapping. The `list` operator will list everything under the `cloudpath` given in the constructor. The `prefix` operator allows you to efficiently filter some of the results. If `flat` is specified, the results will be filtered to return only a single "level" of the "directory" even though directories are fake. The entire set of all subdirectories will still need to be fetched.
Cloud Cost: Usually about $5 per million requests, but each request might list 1000 files. The list operation will continuously issue list requests lazily as needed.
### transfer_to / transfer_from
```python
cff = CloudFiles('file:///source_location')
cfg = CloudFiles('gs://dest_location')
# Transfer all files from local filesys to google cloud storage
cfg.transfer_from(cff, block_size=64) # in blocks of 64 files
cff.transfer_to(cfg, block_size=64)
cff.transfer_to(cfg, block_size=64, reencode='br') # change encoding to brotli
cfg[:] = cff # default block size 64
```
Transfer semantics provide a simple way to perform bulk file transfers. Use the `block_size` parameter to adjust the number of files handled in a given pass. This can be important for preventing memory blow-up and reducing latency between batches.
gs to gs and s3 to s3 transfers will occur within the cloud without looping through the executing client provided no reencoding is specified.
#### resumable transfer
```python
from cloudfiles import ResumableTransfer
# .db b/c this is a sqlite database
# that will be automatically created
rt = ResumableTransfer("NAME_OF_JOB.db")
# init should only be called once
rt.init("file://source_location", "gs://dest_location")
# This part can be interrupted and resumed
rt.execute("NAME_OF_JOB.db")
# If multiple transfer clients, the lease_msec
# parameter must be specified to prevent conflicts.
rt.execute("NAME_OF_JOB.db", lease_msec=30000)
rt.close() # deletes NAME_OF_JOB.db
```
This is essentially a more durable version of `cp`. The transfer works by first loading a sqlite database with filenames, a "done" flag, and a lease time. Clients then attach to the database and execute the transfer in batches. When multiple clients are used, a lease time must be set so that the database does not hand the same set of files to each client, while remaining robust to client failures.
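The leasing idea can be illustrated with a toy sqlite schema (illustrative only; the actual table layout used by `ResumableTransfer` may differ):

```python
import sqlite3
import time

# Toy version of the lease logic: each client atomically claims a batch
# of not-done, unleased (or lease-expired) files so that two clients
# never work on the same batch at the same time.
db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE xfer (path TEXT, done INT DEFAULT 0, lease_until REAL DEFAULT 0)")
db.executemany("INSERT INTO xfer (path) VALUES (?)", [('a',), ('b',), ('c',)])

def claim_batch(db, n, lease_msec):
    now = time.time()
    rows = db.execute(
        "SELECT rowid, path FROM xfer WHERE done = 0 AND lease_until < ? LIMIT ?",
        (now, n)
    ).fetchall()
    # Mark the claimed rows so other clients skip them until the lease expires.
    db.executemany(
        "UPDATE xfer SET lease_until = ? WHERE rowid = ?",
        [(now + lease_msec / 1000.0, rowid) for rowid, _ in rows]
    )
    return [path for _, path in rows]

batch1 = claim_batch(db, 2, lease_msec=30000)  # claims two files
batch2 = claim_batch(db, 2, lease_msec=30000)  # gets only the remaining one
assert not (set(batch1) & set(batch2))
```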
This transfer type can also be accessed via the CLI.
```bash
cloudfiles xfer init SOURCE DEST --db NAME_OF_JOB.db
cloudfiles xfer execute NAME_OF_JOB.db # deletes db when done
```
### composite upload (Google Cloud Storage)
If a file is larger than 100MB (default), CloudFiles will split the file into 100MB parts and upload them as individual part files using the STANDARD storage class to minimize deletion costs. Once uploaded, the part files will be recursively merged in a tree 32 files at a time. After each merge, the part files will be deleted. The final file will have the default storage class for the bucket.
If an upload is interrupted, the part files will remain and must be cleaned up. You can provide a file handle opened for binary reading instead of a bytes object so that large files can be uploaded without exhausting RAM. You can also adjust the composite threshold with, for example, `CloudFiles(..., composite_upload_threshold=int(2e8))` to raise it to 200MB.
### multi-part upload (S3)
If a file is larger than 100MB (default), the S3 service will use multi-part upload. You can provide a file handle opened for binary reading instead of a bytes object so that large files can be uploaded without exhausting RAM. You can also adjust the threshold with, for example, `CloudFiles(..., composite_upload_threshold=int(2e8))` to raise it to 200MB.
Unfinished upload parts remain on S3 (and cost money) unless you use a bucket lifecycle rule to remove them automatically.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html
### transcode
```python
from cloudfiles.compression import transcode
files = cf.get(...)
for file in transcode(files, 'gzip'):
file['content'] # gzipped file content regardless of source
transcode(files,
encoding='gzip', # any cf compatible compression scheme
  in_place=False, # when True, modify the files in-place to save memory
progress=True # progress bar
)
```
Sometimes we want to change the encoding type of a set of arbitrary files (often when moving them around to another storage system). `transcode` will take the output of `get` and transcode the resultant files into a new format. `transcode` respects the `raw` attribute which indicates that the contents are already compressed and will decompress them first before recompressing. If the input data are already compressed to the correct output encoding, it will simply pass it through without going through a decompression/recompression cycle.
`transcode` returns a generator so that the transcoding can be done in a streaming manner.
## Network Robustness
CloudFiles protects itself from network issues in several ways.
First, it uses a connection pool to avoid needing to reestablish connections or exhausting the number of available sockets.
Second, it uses an exponential random window backoff to retry failed connections and requests. The exponential backoff gives an overloaded server increasing breathing room, and the random window decorrelates independent attempts by a cluster. If the backoff did not grow, the retry attempts by a large cluster would be either rapid fire or inefficiently slow. If the attempts were not decorrelated, then regardless of the backoff, the clients would often all retry around the same time. We back off seven times, starting from 0.5 seconds up to 60 seconds, doubling the random window each time.
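A minimal sketch of that retry schedule (illustrative; the library's exact constants and jitter shape may differ):

```python
import random

def backoff_delays(tries=7, base=0.5, cap=60.0):
    """Yield sleep times: each attempt picks a random point inside a
    window that doubles every try, decorrelating clients while giving
    an overloaded server progressively more breathing room."""
    window = base
    for _ in range(tries):
        yield random.uniform(0, min(window, cap))
        window *= 2

delays = list(backoff_delays())
assert len(delays) == 7
assert all(0 <= d <= 60.0 for d in delays)
```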
Third, for Google Cloud Storage (GCS) and S3 endpoints, we compute the md5 digest both sending and receiving to ensure data corruption did not occur in transit and that the server sent the full response. We cannot validate the digest for partial ("Range") reads. For [composite objects](https://cloud.google.com/storage/docs/composite-objects) (GCS) we can check the [crc32c](https://pypi.org/project/crc32c/) checksum, which catches transmission errors but not tampering (though MD5 is no longer secure against tampering either). We are unable to perform validation for [multi-part uploads](https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html) (S3). Using custom encryption keys may also create validation problems.
## Local File Locking
When accessing local files (`file://`), CloudFiles can use the `fasteners` library to perform locking so multi-process IO can be performed safely. These options have no effect on remote file access such as `gs://` or `s3://`.
Lock files are by default stored in `$CLOUD_FILES_DIR/locks` (usually `~/.cloudfiles/locks`).
Locking is enabled if:
1. `CloudFiles(..., locking=True)`
2. `CloudFiles(..., locking=None)` and a locking directory is set either via `CloudFiles(..., lock_dir=...)` or via the environment variable `CLOUD_FILES_LOCK_DIR`.
Locking is always disabled if `locking=False`. This can be advantageous for performance reasons but may require careful design of access patterns to avoid reading a file that is being written to.
You can check the current assigned locks directory:
```python
cf.lock_dir
```
You can clear the lock dir of all locks with:
```python
cf.clear_locks()
```
## CloudFiles CLI Tool
```bash
# list cloud and local directories
cloudfiles ls gs://bucket-folder/
# parallel file transfer, no decompression
cloudfiles -p 2 cp --progress -r s3://bkt/ gs://bkt2/
# change compression type to brotli
cloudfiles cp -c br s3://bkt/file.txt gs://bkt2/
# decompress
cloudfiles cp -c none s3://bkt/file.txt gs://bkt2/
# move or rename files
cloudfiles mv s3://bkt/file.txt gs://bkt2/
# create an empty file if not existing
cloudfiles touch s3://bkt/empty.txt
# pass from stdin (use "-" for source argument)
find some_dir | cloudfiles cp - s3://bkt/
# resumable transfers
cloudfiles xfer init SRC DEST --db JOBNAME.db
cloudfiles xfer execute JOBNAME.db --progress # can quit and resume
# Get human readable file sizes from anywhere
cloudfiles du -shc ./tmp gs://bkt/dir s3://bkt/dir
# remove files
cloudfiles rm ./tmp gs://bkt/dir/file s3://bkt/dir/file
# cat across services, -r for range reads
cloudfiles cat ./tmp gs://bkt/dir/file s3://bkt/dir/file
# verify a transfer was successful by comparing bytes and hashes
cloudfiles verify ./my-data gs://bucket/my-data/
```
### `cp` Pros and Cons
For the cp command, the bundled CLI tool has a number of advantages vs. `gsutil` when it comes to transfers.
1. No decompression of file transfers (unless you want to).
2. Can shift compression format.
3. Easily control the number of parallel processes.
4. Green threads make core utilization more efficient.
5. Optionally uses libdeflate for faster gzip decompression.
It also has some disadvantages:
1. Doesn't support all commands.
2. File suffixes may be added to signify compression type on the local filesystem (e.g. `.gz`, `.br`, or `.zstd`). `cloudfiles ls` will list them without the extension and they will be converted into `Content-Encoding` on cloud storage.
### `ls` Generative Expressions
For the `ls` command, we support (via the `-e` flag) simple generative expressions that enable querying multiple prefixes at once. A generative expression is denoted `[chars]` where `c`,`h`,`a`,`r`, & `s` will be inserted individually into the position where the expression appears. Multiple expressions are allowed and produce a cartesian product of resulting strings. This functionality is very limited at the moment but we intend to improve it.
```bash
cloudfiles ls -e "gs://bucket/prefix[ab]"
# equivalent to:
# cloudfiles ls gs://bucket/prefixa
# cloudfiles ls gs://bucket/prefixb
```
### `alias` for Alternative S3 Endpoints
You can set your own protocols for S3 compatible endpoints by creating dynamic or persistent aliases. CloudFiles comes with two official s3 endpoints that are important for the Seung Lab, `matrix://` and `tigerdata://` which point to Princeton S3 endpoints. Official aliases can't be overridden.
To create a dynamic alias, you can use `cloudfiles.paths.add_alias` which will only affect the current process. To create a persistent alias that resides in `~/.cloudfiles/aliases.json`, you can use the CLI.
```bash
cloudfiles alias add example s3://https://example.com/ # example://
cloudfiles alias ls # list all aliases
cloudfiles alias rm example # remove example://
```
The alias file is only accessed (and cached) if CloudFiles encounters an unknown protocol. If you stick to default protocols and use the syntax `s3://https://example.com/` for alternative endpoints, you can still use CloudFiles in environments without filesystem access.
## Credits
CloudFiles is derived from the [CloudVolume.Storage](https://github.com/seung-lab/cloud-volume/tree/master/cloudvolume/storage) system.
Storage was initially created by William Silversmith and Ignacio Tartavull. It was maintained and improved by William Silversmith and includes improvements by Nico Kemnitz (extremely fast exists), Ben Falk (brotli), and Ran Lu (local file locking). Manuel Castro added the ability to choose cloud storage class. Thanks to the anonymous author from https://teppen.io/ for their s3 etag validation code.
Raw data
{
"_id": null,
"home_page": "https://github.com/seung-lab/cloud-files/",
"name": "cloud-files",
"maintainer": null,
"docs_url": null,
"requires_python": "<4.0,>=3.7",
"maintainer_email": null,
"keywords": null,
"author": "William Silversmith",
"author_email": "ws9@princeton.edu",
"download_url": "https://files.pythonhosted.org/packages/9c/ed/43c1890224456895d9bd475560ba21f782f0d1378e84d368430ea47e8992/cloud_files-4.30.0.tar.gz",
"platform": null,
"description": "[![PyPI version](https://badge.fury.io/py/cloud-files.svg)](https://badge.fury.io/py/cloud-files) [![Test Suite](https://github.com/seung-lab/cloud-files/workflows/Test%20Suite/badge.svg)](https://github.com/seung-lab/cloud-files/actions?query=workflow%3A%22Test+Suite%22)\n\nCloudFiles: Fast access to cloud storage and local FS.\n========\n\n```python\nfrom cloudfiles import CloudFiles, CloudFile, dl\n\nresults = dl([\"gs://bucket/file1\", \"gs://bucket2/file2\", ... ]) # shorthand\n\ncf = CloudFiles('gs://bucket', progress=True) # s3://, https://, and file:// also supported\nresults = cf.get(['file1', 'file2', 'file3', ..., 'fileN']) # threaded\nresults = cf.get(paths, parallel=2) # threaded and two processes\nfile1 = cf['file1']\npart = cf['file1', 0:30] # first 30 bytes of file1\n\ncf.put('filename', content)\ncf.put_json('filename', content)\ncf.puts([{\n 'path': 'filename',\n 'content': content,\n}, ... ]) # automatically threaded\ncf.puts(content, parallel=2) # threaded + two processes\ncf.puts(content, storage_class=\"NEARLINE\") # apply vendor-specific storage class\n\ncf.put_jsons(...) # same as puts\ncf['filename'] = content\n\nfor fname in cf.list(prefix='abc123'):\n print(fname)\nlist(cf) # same as list(cf.list())\n\ncf.delete('filename')\ndel cf['filename']\ncf.delete([ 'filename_1', 'filename_2', ... ]) # threaded\ncf.delete(paths, parallel=2) # threaded + two processes\n\nboolean = cf.exists('filename')\nresults = cf.exists([ 'filename_1', ... ]) # threaded\n\ncf.move(\"a\", \"gs://bucket/b\")\ncf.moves(\"gs://bucket/\", [ (\"a\", \"b\") ])\n\ncf.touch(\"example\")\ncf.touch([ \"example\", \"example2\" ])\n\n# for single files\ncf = CloudFile(\"gs://bucket/file1\")\ninfo = cf.head()\nbinary = cf.get()\ncf.put(binary)\ncf[:30] # get first 30 bytes of file\n```\n\nCloudFiles was developed to access files from object storage without ever touching disk. 
The goal was to reliably and rapidly access a petabyte of image data broken down into tens to hundreds of millions of files being accessed in parallel across thousands of cores. CloudFiles has been used to processes dozens of images, many of which were in the hundreds of terabyte range. It has reliably read and written tens of billions of files to date.\n\n## Highlights\n\n1. Fast file access with transparent threading and optionally multi-process access w/ local file locking.\n2. Google Cloud Storage, Amazon S3, local filesystems, and arbitrary web servers making hybrid or multi-cloud easy.\n3. Robust to flaky network connections. Uses exponential random window retries to avoid network collisions on a large cluster. Validates md5 for gcs and s3.\n4. gzip, brotli, bz2, zstd, and xz compression.\n5. Supports HTTP Range reads.\n6. Supports green threads, which are important for achieving maximum performance on virtualized servers.\n7. High efficiency transfers that avoid compression/decompression cycles.\n8. High speed gzip decompression using libdeflate (compared with zlib).\n9. Bundled CLI tool.\n10. Accepts iterator and generator input.\n11. Resumable bulk transfers.\n12. Supports composite parallel upload for GCS and multi-part upload for AWS S3.\n13. Supports s3 and GCS internal copies to avoid unnecessary data movement.\n\n## Installation \n\n```bash\npip install cloud-files\npip install cloud-files[test] # to enable testing with pytest\n```\n\nIf you run into trouble installing dependenies, make sure you're using at least Python3.6 and you have updated pip. On Linux, some dependencies require manylinux2010 or manylinux2014 binaries which earlier versions of pip do not search for. MacOS, Linux, and Windows are supported platforms.\n\n### Credentials\n\nYou may wish to install credentials under `~/.cloudvolume/secrets`. CloudFiles is descended from CloudVolume, and for now we'll leave the same configuration structure in place. 
\n\nYou need credentials only for the services you'll use. The local filesystem doesn't need any. Google Storage ([setup instructions here](https://github.com/seung-lab/cloud-volume/wiki/Setting-up-Google-Cloud-Storage)) will attempt to use default account credentials if no service account is provided. \n\nIf neither of those two conditions apply, you need a service account credential. `google-secret.json` is a service account credential for Google Storage, `aws-secret.json` is a service account for S3, etc. You can support multiple projects at once by prefixing the bucket you are planning to access to the credential filename. `google-secret.json` will be your defaut service account, but if you also want to also access bucket ABC, you can provide `ABC-google-secret.json` and you'll have simultaneous access to your ordinary buckets and ABC. The secondary credentials are accessed on the basis of the bucket name, not the project name.\n\n```bash\nmkdir -p ~/.cloudvolume/secrets/\nmv aws-secret.json ~/.cloudvolume/secrets/ # needed for Amazon\nmv google-secret.json ~/.cloudvolume/secrets/ # needed for Google\nmv matrix-secret.json ~/.cloudvolume/secrets/ # needed for Matrix\n```\n\n#### `aws-secret.json` and `matrix-secret.json`\n\nCreate an [IAM user service account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) that can read, write, and delete objects from at least one bucket.\n\n```json\n{\n \"AWS_ACCESS_KEY_ID\": \"$MY_AWS_ACCESS_KEY_ID\",\n \"AWS_SECRET_ACCESS_KEY\": \"$MY_SECRET_ACCESS_TOKEN\",\n \"AWS_DEFAULT_REGION\": \"$MY_AWS_REGION\" // defaults to us-east-1\n}\n```\n\n#### `google-secret.json`\n\nYou can create the `google-secret.json` file [here](https://console.cloud.google.com/iam-admin/serviceaccounts). You don't need to manually fill in JSON by hand, the below example is provided to show you what the end result should look like. 
You should be able to read, write, and delete objects from at least one bucket.\n\n```json\n{\n \"type\": \"service_account\",\n \"project_id\": \"$YOUR_GOOGLE_PROJECT_ID\",\n \"private_key_id\": \"...\",\n \"private_key\": \"...\",\n \"client_email\": \"...\",\n \"client_id\": \"...\",\n \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n \"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n \"client_x509_cert_url\": \"\"\n}\n```\n\n## API Documentation \n\nNote that the \"Cloud Costs\" mentioned below are current as of June 2020 and are subject to change. As of this writing, S3 and Google use identical cost structures for these operations. \n\n### Constructor\n```python\n# import gevent.monkey # uncomment when using green threads\n# gevent.monkey.patch_all(thread=False)\nfrom cloudfiles import CloudFiles\n\ncf = CloudFiles(\n cloudpath, progress=False, \n green=None, secrets=None, num_threads=20,\n use_https=False, endpoint=None, request_payer=None,\n composite_upload_threshold = int(1e8)\n)\n\n# cloudpath examples:\ncf = CloudFiles('gs://bucket/') # google cloud storage\ncf = CloudFiles('s3://bucket/') # Amazon S3\ncf = CloudFiles('s3://https://s3emulator.com/coolguy/') # alternate s3 endpoint\ncf = CloudFiles('file:///home/coolguy/') # local filesystem\ncf = CloudFiles('mem:///home/coolguy/') # in memory\ncf = CloudFiles('https://website.com/coolguy/') # arbitrary web server\n```\n\n* cloudpath: The path to the bucket you are accessing. The path is formatted as `$PROTOCOL://BUCKET/PATH`. Files will then be accessed relative to the path. The protocols supported are `gs` (GCS), `s3` (AWS S3), `file` (local FS), `mem` (RAM), and `http`/`https`.\n* progress: Whether to display a progress bar when processing multiple items simultaneously.\n* green: Use green threads. For this to work properly, you must uncomment the top two lines. 
Green threads are used automatically upon monkey patching if `green` is None.
* secrets: Provide secrets dynamically rather than fetching them from the credentials directory `$HOME/.cloudvolume/secrets`.
* num_threads: Number of simultaneous requests to make. Usually 20 per core is close to optimal unless file sizes are extreme.
* use_https: `gs://` and `s3://` require credentials to access their files. However, each has a read-only https endpoint that sometimes requires no credentials. If True, automatically convert `gs://` to `https://storage.googleapis.com/` and `s3://` to `https://s3.amazonaws.com/`.
* endpoint: (s3 only) Provide an alternate endpoint to the official Amazon servers. This is useful for accessing the various S3 emulators offered by on-premises deployments of object storage.
* request_payer: Specify the account that should be charged for requests towards the bucket, rather than the bucket owner.
  * `gs://`: `request_payer` can be any Google Cloud project id. Please refer to the documentation for [more information](https://cloud.google.com/storage/docs/requester-pays).
  * `s3://`: `request_payer` must be `"requester"`. The AWS account associated with the AWS_ACCESS_KEY_ID will be charged.
Please refer to the documentation for [more information](https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html).

Using `mem://` rather than a plain `dict` has two advantages: your code uses the same interface as for every other backend, and compression is applied automatically.

### get / get_json

```python
# Let 'filename' be the file b'hello world'

binary = cf.get('filename')
binary = cf['filename']
>> b'hello world'

compressed_binary = cf.get('filename', raw=True)

binaries = cf.get(['filename1', 'filename2'])
>> [ { 'path': 'filename1', 'content': b'...', 'byte_range': (None, None), 'error': None }, { 'path': 'filename2', 'content': b'...', 'byte_range': (None, None), 'error': None } ]

# total provides info for progress bar when using generators.
binaries = cf.get(generator(), total=N)

binary = cf.get({ 'path': 'filename', 'start': 0, 'end': 5 }) # fetches 5 bytes
binary = cf['filename', 0:5] # only fetches 5 bytes
binary = cf['filename'][0:5] # same result, but fetches all 11 bytes
>> b'hello' # represents byte range 0-4 inclusive of filename

binaries = cf[:100] # download the first 100 files
```

`get` supports several styles of input. The simplest takes a scalar filename and returns the contents of that file. However, you can also specify a list of filenames, a byte range request, or a list of byte range requests. You can provide a generator or iterator as input as well. Order is not guaranteed.

When more than one file is provided at once, the download will be threaded using preemptive or cooperative (green) threads depending on the `green` setting. If `progress` is set to true, a progress bar will be displayed that counts down the number of files to download.

`get_json` is the same as `get` but parses the returned binary as utf8-encoded JSON and returns a dictionary.
Order is guaranteed.

Cloud Cost: Usually about $0.40 per million requests.

### put / puts / put_json / put_jsons

```python
cf.put('filename', b'content')
cf['filename'] = b'content'
cf.put_json('digits', [1,2,3,4,5])

cf.puts([{
  'path': 'filename',
  'content': b'...',
  'content_type': 'application/octet-stream',
  'compress': 'gzip',
  'compression_level': 6, # parameter for gzip or brotli compressor
  'cache_control': 'no-cache',
}])

cf.puts([ (path, content), (path, content) ], compress='gzip')
cf.put_jsons(...)

# Definition of put; put_json is identical
def put(
  self,
  path, content,
  content_type=None, compress=None,
  compression_level=None, cache_control=None,
  raw=False
)

# Definition of puts; put_jsons is identical
def puts(
  self, files,
  content_type=None, compress=None,
  compression_level=None, cache_control=None,
  total=None, raw=False
)
```

The PUT operation is the most complex because it is so configurable. Sometimes you want one file, sometimes many. Sometimes you want to configure each file individually, sometimes you want to standardize a bulk upload. Sometimes it's binary data, but often it's JSON. We therefore provide a simpler interface for uploading a single file (`put` and `put_json`, singular) alongside the interface for uploading possibly many files (`puts` and `put_jsons`, plural).

To upload many files at once (which is much faster due to threading), you minimally need to provide the `path` and `content` for each file. This can be done either as a dict containing those fields or as a tuple `(path, content)`. If dicts are used, the fields (if present) in the dict take precedence over the parameters of the function. You can mix tuples with dicts. The input to `puts` can be a scalar (a single dict or tuple) or an iterable such as a list, iterator, or generator.
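The precedence rules for mixed dict and tuple input can be sketched in plain Python. This is an illustration of the documented semantics, not CloudFiles' internal code; `normalize_files` is a hypothetical helper.

```python
def normalize_files(files, **defaults):
    """Normalize puts-style input into dicts.

    Hypothetical helper illustrating the documented precedence:
    per-file dict fields override shared keyword parameters.
    """
    if isinstance(files, (dict, tuple)):
        files = [files]  # a scalar (single dict or tuple) is allowed
    normalized = []
    for f in files:
        if isinstance(f, tuple):
            f = {'path': f[0], 'content': f[1]}
        normalized.append({**defaults, **f})  # dict fields take precedence
    return normalized

files = normalize_files(
    [('a.txt', b'alpha'), {'path': 'b.txt', 'content': b'beta', 'compress': None}],
    compress='gzip',
)
# files[0]['compress'] == 'gzip', files[1]['compress'] is None
```

Here the tuple inherits the shared `compress='gzip'` argument, while the dict's explicit `compress: None` overrides it.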
Cloud Cost: Usually about $5 per million files.

### delete

```python
cf.delete('filename')
cf.delete([ 'file1', 'file2', ... ])
del cf['filename']
```

This issues a delete request for each specified file in a threaded fashion.

Cloud Cost: Usually free.

### exists

```python
cf.exists('filename')
>> True # or False

cf.exists([ 'file1', 'file2', ... ])
>> { 'file1': True, 'file2': False, ... }
```

Scalar input results in a simple boolean output, while iterable input returns a dictionary of input paths mapped to whether they exist. In iterable mode, a progress bar may be displayed and threading is used to improve performance.

Cloud Cost: Usually about $0.40 per million requests.

### size

```python
cf.size('filename')
>>> 1337 # size in bytes

cf.size([ 'file1', 'nonexistent', 'empty_file', ... ])
>>> { 'file1': 392, 'nonexistent': None, 'empty_file': 0, ... }
```

The output is the stored size of each file in bytes. If the file doesn't exist, `None` is returned. Scalar input results in a single integer, while iterable input returns a dictionary of input paths mapped to their sizes. In iterable mode, a progress bar may be displayed and threading is used to improve performance.

Cloud Cost: Usually about $0.40 per million requests.

### list

```python
cf.list() # returns generator
list(cf) # same as list(cf.list())
cf.list(prefix="abc")
cf.list(prefix="abc", flat=True)
```

Recall that in object storage, directories do not really exist and file paths are really a key-value mapping. The `list` operator will list everything under the `cloudpath` given in the constructor. The `prefix` parameter allows you to efficiently filter some of the results. If `flat` is specified, the results will be filtered to return only a single "level" of the "directory" even though directories are fake.
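The effect of `flat` can be sketched in plain Python. This illustrates only the semantics; it is not CloudFiles' actual implementation, and `flat_filter` is a hypothetical helper.

```python
def flat_filter(paths, prefix=''):
    """Keep only a single "directory" level below prefix.

    Illustrative sketch: anything containing a '/' past the prefix
    collapses into its top-level "directory" name.
    """
    seen = set()
    for path in paths:
        tail = path[len(prefix):]
        name = tail.split('/', 1)[0] + ('/' if '/' in tail else '')
        if name not in seen:
            seen.add(name)
            yield prefix + name

print(list(flat_filter(['a/x', 'a/y', 'b', 'a/z/q'])))  # ['a/', 'b']
```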
The entire set of all subdirectories will still need to be fetched.

Cloud Cost: Usually about $5 per million requests, but each request might list 1000 files. The list operation lazily issues additional list requests as needed.

### transfer_to / transfer_from

```python
cff = CloudFiles('file:///source_location')
cfg = CloudFiles('gs://dest_location')

# Transfer all files from the local filesystem to Google Cloud Storage
cfg.transfer_from(cff, block_size=64) # in blocks of 64 files
cff.transfer_to(cfg, block_size=64)
cff.transfer_to(cfg, block_size=64, reencode='br') # change encoding to brotli
cfg[:] = cff # default block size 64
```

Transfer semantics provide a simple way to perform bulk file transfers. Use the `block_size` parameter to adjust the number of files handled in a given pass. This can be important for preventing memory blow-up and reducing latency between batches.

gs-to-gs and s3-to-s3 transfers occur within the cloud, without looping through the executing client, provided no reencoding is specified.

#### resumable transfer

```python
from cloudfiles import ResumableTransfer

# .db because this is a sqlite database
# that will be automatically created
rt = ResumableTransfer("NAME_OF_JOB.db")

# init should only be called once
rt.init("file://source_location", "gs://dest_location")
# This part can be interrupted and resumed
rt.execute("NAME_OF_JOB.db")
# If there are multiple transfer clients, the lease_msec
# parameter must be specified to prevent conflicts.
rt.execute("NAME_OF_JOB.db", lease_msec=30000)

rt.close() # deletes NAME_OF_JOB.db
```

This is essentially a more durable version of `cp`. The transfer works by first loading a sqlite database with filenames, a "done" flag, and a lease time. Clients can then attach to the database and execute the transfer in batches.
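The database-backed lease scheme might look like the following sketch, using the standard library's `sqlite3`. The table layout and `claim_batch` helper are assumptions for illustration, not ResumableTransfer's actual internals.

```python
import sqlite3
import time

# Sketch of a lease-based work queue; the schema is an assumption,
# not ResumableTransfer's real table layout.
db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE files (path TEXT PRIMARY KEY, done INTEGER, lease_expiry REAL)")
db.executemany("INSERT INTO files VALUES (?, 0, 0)", [('a',), ('b',), ('c',)])

def claim_batch(db, batch_size, lease_msec):
    """Lease up to batch_size unfinished files whose lease has expired."""
    now_ms = time.time() * 1000
    rows = db.execute(
        "SELECT path FROM files WHERE done = 0 AND lease_expiry < ? LIMIT ?",
        (now_ms, batch_size),
    ).fetchall()
    paths = [row[0] for row in rows]
    # leased files become invisible to other clients until the lease expires
    db.executemany(
        "UPDATE files SET lease_expiry = ? WHERE path = ?",
        [(now_ms + lease_msec, p) for p in paths],
    )
    return paths

batch = claim_batch(db, 2, 30000)  # transfer these files, then mark them done
db.executemany("UPDATE files SET done = 1 WHERE path = ?", [(p,) for p in batch])
```

If a client crashes mid-batch, its lease expires and the undone files become claimable again, which is what makes the transfer resumable.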
When multiple clients are used, a lease time must be set so that the database does not return the same set of files to each client (and remains robust to client failure).

This transfer type can also be accessed via the CLI.

```bash
cloudfiles xfer init SOURCE DEST --db NAME_OF_JOB.db
cloudfiles xfer execute NAME_OF_JOB.db # deletes db when done
```

### composite upload (Google Cloud Storage)

If a file is larger than 100MB (the default threshold), CloudFiles will split it into 100MB parts and upload them as individual part files using the STANDARD storage class to minimize deletion costs. Once uploaded, the part files are recursively merged in a tree, 32 files at a time. After each merge, the part files are deleted. The final file will have the default storage class for the bucket.

If an upload is interrupted, the part files will remain and must be cleaned up. You can provide a file handle opened for binary reading instead of a bytes object so that large files can be uploaded without overwhelming RAM. You can also adjust the composite threshold using `CloudFiles(..., composite_upload_threshold=int(2e8))` to, for example, raise the threshold to 200MB.

### multi-part upload (S3)

If a file is larger than 100MB (the default threshold), the S3 service will use multi-part upload. You can provide a file handle opened for binary reading instead of a bytes object so that large files can be uploaded without overwhelming RAM. You can also adjust the threshold using `CloudFiles(..., composite_upload_threshold=int(2e8))` to, for example, raise it to 200MB.

Unfinished upload parts remain on S3 (and cost money) unless you use a bucket lifecycle rule to remove them automatically.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html

### transcode

```python
from cloudfiles.compression import transcode

files = cf.get(...)

for file in transcode(files, 'gzip'):
  file['content'] # gzipped file content regardless of source

transcode(files,
  encoding='gzip', # any cf compatible compression scheme
  in_place=False, # True modifies the files in-place to save memory
  progress=True # progress bar
)
```

Sometimes we want to change the encoding type of a set of arbitrary files (often when moving them to another storage system). `transcode` takes the output of `get` and transcodes the resultant files into a new format. `transcode` respects the `raw` attribute, which indicates that the contents are already compressed, and will decompress them before recompressing. If the input data are already compressed in the target encoding, they are simply passed through without a decompression/recompression cycle.

`transcode` returns a generator so that transcoding can be done in a streaming manner.

## Network Robustness

CloudFiles protects itself from network issues in several ways.

First, it uses a connection pool to avoid repeatedly reestablishing connections or exhausting the number of available sockets.

Second, it retries failed connections and requests using an exponential random-window backoff. The growing backoff gives an overloaded server increasing breathing room, and the random window decorrelates the independent attempts of a cluster. If the backoff did not grow, the retry attempts of a large cluster would be either too rapid-fire or inefficiently slow. If the attempts were not decorrelated, then regardless of the backoff, the clients would often all retry around the same time. We back off seven times, starting from 0.5 seconds up to 60 seconds, doubling the random window each time.

Third, for Google Cloud Storage (GCS) and S3 endpoints, we compute the md5 digest on both send and receive to ensure that no data corruption occurred in transit and that the server sent the full response.
We cannot validate the digest for partial ("Range") reads. For [composite objects](https://cloud.google.com/storage/docs/composite-objects) (GCS) we can check the [crc32c](https://pypi.org/project/crc32c/) checksum, which catches transmission errors but not tampering (though MD5 is no longer secure against tampering either). We are unable to perform validation for [multi-part uploads](https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html) (S3). Using custom encryption keys may also create validation problems.

## Local File Locking

When accessing local files (`file://`), CloudFiles can use the `fasteners` library to perform locking so multi-process IO can be performed safely. These options have no effect on remote file access such as `gs://` or `s3://`.

Lock files are stored by default in `$CLOUD_FILES_DIR/locks` (usually `~/.cloudfiles/locks`).

Locking is enabled if:
1. `CloudFiles(..., locking=True)`
2. `CloudFiles(..., locking=None)` and a locking directory is set either via `CloudFiles(..., lock_dir=...)` or via the environment variable `CLOUD_FILES_LOCK_DIR`.

Locking is always disabled if `locking=False`.
This can be advantageous for performance reasons but may require careful design of access patterns to avoid reading a file that is being written to.

You can check the currently assigned locks directory:

```python
cf.lock_dir
```

You can clear the lock dir of all locks with:

```python
cf.clear_locks()
```

## CloudFiles CLI Tool

```bash
# list cloud and local directories
cloudfiles ls gs://bucket-folder/
# parallel file transfer, no decompression
cloudfiles -p 2 cp --progress -r s3://bkt/ gs://bkt2/
# change compression type to brotli
cloudfiles cp -c br s3://bkt/file.txt gs://bkt2/
# decompress
cloudfiles cp -c none s3://bkt/file.txt gs://bkt2/
# move or rename files
cloudfiles mv s3://bkt/file.txt gs://bkt2/
# create an empty file if not existing
cloudfiles touch s3://bkt/empty.txt
# pass from stdin (use "-" for source argument)
find some_dir | cloudfiles cp - s3://bkt/
# resumable transfers
cloudfiles xfer init SRC DEST --db JOBNAME.db
cloudfiles xfer execute JOBNAME.db --progress # can quit and resume
# get human readable file sizes from anywhere
cloudfiles du -shc ./tmp gs://bkt/dir s3://bkt/dir
# remove files
cloudfiles rm ./tmp gs://bkt/dir/file s3://bkt/dir/file
# cat across services, -r for range reads
cloudfiles cat ./tmp gs://bkt/dir/file s3://bkt/dir/file
# verify a transfer was successful by comparing bytes and hashes
cloudfiles verify ./my-data gs://bucket/my-data/
```

### `cp` Pros and Cons

For the `cp` command, the bundled CLI tool has a number of advantages over `gsutil` when it comes to transfers.

1. No decompression of file transfers (unless you want it).
2. Can shift compression format.
3. Easily control the number of parallel processes.
4. Green threads make core utilization more efficient.
5. Optionally uses libdeflate for faster gzip decompression.

It also has some disadvantages:

1. Doesn't support all commands.
2.
File suffixes may be added to signify the compression type on the local filesystem (e.g. `.gz`, `.br`, or `.zstd`). `cloudfiles ls` will list such files without the extension, and the extension is converted into a `Content-Encoding` on cloud storage.

### `ls` Generative Expressions

For the `ls` command, we support (via the `-e` flag) simple generative expressions that enable querying multiple prefixes at once. A generative expression is denoted `[chars]`, where `c`, `h`, `a`, `r`, & `s` will be inserted individually into the position where the expression appears. Multiple expressions are allowed and produce a cartesian product of the resulting strings. This functionality is very limited at the moment, but we intend to improve it.

```bash
cloudfiles ls -e "gs://bucket/prefix[ab]"
# equivalent to:
# cloudfiles ls gs://bucket/prefixa
# cloudfiles ls gs://bucket/prefixb
```

### `alias` for Alternative S3 Endpoints

You can set your own protocols for S3-compatible endpoints by creating dynamic or persistent aliases. CloudFiles comes with two official s3 endpoint aliases that are important for the Seung Lab, `matrix://` and `tigerdata://`, which point to Princeton S3 endpoints. Official aliases can't be overridden.

To create a dynamic alias, which only affects the current process, use `cloudfiles.paths.add_alias`. To create a persistent alias that resides in `~/.cloudfiles/aliases.json`, use the CLI.

```bash
cloudfiles alias add example s3://https://example.com/ # example://
cloudfiles alias ls # list all aliases
cloudfiles alias rm example # remove example://
```

The alias file is only accessed (and cached) if CloudFiles encounters an unknown protocol.
If you stick to the default protocols and use the syntax `s3://https://example.com/` for alternative endpoints, you can still use CloudFiles in environments without filesystem access.

## Credits

CloudFiles is derived from the [CloudVolume.Storage](https://github.com/seung-lab/cloud-volume/tree/master/cloudvolume/storage) system.

Storage was initially created by William Silversmith and Ignacio Tartavull. It was maintained and improved by William Silversmith and includes improvements by Nico Kemnitz (extremely fast exists), Ben Falk (brotli), and Ran Lu (local file locking). Manuel Castro added the ability to choose the cloud storage class. Thanks to the anonymous author from https://teppen.io/ for their s3 etag validation code.