miarec-s3fs

Name: miarec-s3fs
Version: 2024.1.1
Home page: https://github.com/miarec/miarec_s3fs
Summary: Amazon S3 filesystem for PyFilesystem2
Upload time: 2024-01-15 22:49:05
Author: MiaRec
Requires Python: >=3.6
License: MIT
Keywords: filesystem, pyfilesystem2, s3, amazon
# miarec_s3fs

[![Actions](https://img.shields.io/github/actions/workflow/status/miarec/miarec_s3fs/test_and_release.yml?branch=master&logo=github&style=flat-square&maxAge=300)](https://github.com/miarec/miarec_s3fs/actions)

MiaRec S3FS is a [PyFilesystem](https://www.pyfilesystem.org/) interface to
Amazon S3 cloud storage.

As a PyFilesystem concrete class, [S3FS](http://fs-s3fs.readthedocs.io/en/latest/) allows you to work with S3 in the
same way as any other supported filesystem.

This is a fork of the [fs-s3fs](https://github.com/PyFilesystem/s3fs) project, written by Will McGugan (email willmcgugan@gmail.com).

The code was modified by the MiaRec team to fulfill our needs.

## Notable differences between miarec_s3fs and fs-s3fs

1. Requires Python 3.6+. Support for Python 2.7 has been removed.

2. The opener interface is not implemented. Use the explicit constructor instead.

3. Unit tests are run with [moto](https://github.com/getmoto/moto).


## Installing

You can install miarec_s3fs with pip:

```
pip install miarec_s3fs
```

This will install the most recent stable version.

Alternatively, if you want the cutting-edge code, you can check out
the GitHub repository at https://github.com/miarec/miarec_s3fs

## Opening an S3FS

Open an S3FS by explicitly using the constructor:

```python
from fs_s3fs import S3FS
s3fs = S3FS('mybucket')
```
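Once opened, the instance behaves like any other PyFilesystem object. A minimal sketch of everyday operations, assuming the bucket exists and your credentials allow writes:

```python
s3fs.writetext('hello.txt', 'Hello, S3!')  # create a small text object
print(s3fs.readtext('hello.txt'))          # read it back
print(s3fs.listdir('/'))                   # list the bucket root
s3fs.close()                               # release the underlying resources
```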

## Limitations

Amazon S3 isn't, strictly speaking, a *filesystem*: it contains files
but doesn't offer true *directories*. S3FS follows the convention of
simulating directories by creating an object whose key ends in a forward
slash. For instance, if you create a file called `"foo/bar"`, S3FS will
create an S3 object for the file called `"foo/bar"` *and* an empty
object called `"foo/"` which stores the fact that the `"foo"` directory
exists.

If you create all your files and directories with S3FS, then you can
forget about how things are stored under the hood; everything will work
as you expect. You *may* run into problems if your data has been
uploaded without the use of S3FS, for instance if a `"foo/bar"` object
was created without a `"foo/"` object. If this occurs, S3FS may report
errors about directories not existing where you would expect them to
be. The solution is to create an empty object for every directory and
subdirectory, as in the sketch below. Fortunately, most tools will do
this for you, so it is probably only required if you upload your files
manually.
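A minimal sketch of that repair, using the standard `makedirs` method and assuming an object `foo/bar` was uploaded without its `foo/` marker:

```python
# Recreate the empty "foo/" marker object so S3FS recognizes the directory.
# recreate=True makes the call a no-op if the marker is already there.
s3fs.makedirs('foo', recreate=True)
print(s3fs.exists('foo/bar'))  # the pre-existing object is now reachable
```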

## Authentication

If you don't supply any credentials, then S3FS will use the access key
and secret key configured on your system. 

Here's how you specify credentials with the constructor:

    s3fs = S3FS(
        'mybucket',
        aws_access_key_id=<access key>,
        aws_secret_access_key=<secret key>
    )

Note: Amazon recommends against specifying credentials explicitly like this in production.
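For production, it is usually better to let boto3's default credential chain supply the keys, for example via environment variables or an IAM role. A sketch using environment variables:

    export AWS_ACCESS_KEY_ID=<access key>
    export AWS_SECRET_ACCESS_KEY=<secret key>

Then construct the filesystem without explicit credentials:

    s3fs = S3FS('mybucket')  # boto3 resolves the keys from the environment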


## Downloading Files

To *download* files from an S3 bucket, open a file on the S3
filesystem for reading, then write the data to a file on the local
filesystem. Here's an example that copies a file `example.mov` from
S3 to your HD:

```python
from fs.tools import copy_file_data
with s3fs.open('example.mov', 'rb') as remote_file:
    with open('example.mov', 'wb') as local_file:
        copy_file_data(remote_file, local_file)
```

However, it is preferable to use the higher-level functionality in the
`fs.copy` module. Here's an example:

```python
from fs.copy import copy_file
copy_file(s3fs, 'example.mov', './', 'example.mov')
```

## Uploading Files

You can *upload* files in the same way. Simply copy a file from a
source filesystem to the S3 filesystem.
See [Moving and Copying](https://docs.pyfilesystem.org/en/latest/guide.html#moving-and-copying)
for more information.
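A minimal sketch, mirroring the download example above (the `'./'` argument opens the current directory as the source filesystem):

```python
from fs.copy import copy_file
copy_file('./', 'example.mov', s3fs, 'example.mov')
```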

## ExtraArgs

S3 objects have additional properties beyond those of a traditional
filesystem. These options can be set using the ``upload_args`` and
``download_args`` arguments, which are handed to the upload and
download methods, as appropriate, for the lifetime of the filesystem
instance.

For example, to set the ``Cache-Control`` header of all objects
uploaded to a bucket:

```python
import fs, fs.mirror
from fs_s3fs import S3FS  # as in the opening example above

s3fs = S3FS('example', upload_args={"CacheControl": "max-age=2592000", "ACL": "public-read"})
fs.mirror.mirror('/path/to/mirror', s3fs)
```

See [the Boto3 docs](https://boto3.readthedocs.io/en/latest/reference/customizations/s3.html#boto3.s3.transfer.S3Transfer.ALLOWED_UPLOAD_ARGS)
for more information.
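``download_args`` is handled the same way. For instance, a sketch for reading from a hypothetical requester-pays bucket (``RequestPayer`` is one of the allowed download arguments listed in the Boto3 docs):

```python
s3fs = S3FS('example', download_args={"RequestPayer": "requester"})
with s3fs.open('example.mov', 'rb') as remote_file:
    data = remote_file.read()  # the requester's account is billed for the transfer
```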

## S3 Info

You can retrieve S3 info via the ``s3`` namespace. Here's an example:

```python
>>> info = s3fs.getinfo('foo', namespaces=['s3'])
>>> info.raw['s3']
{'metadata': {}, 'delete_marker': None, 'version_id': None, 'parts_count': None, 'accept_ranges': 'bytes', 'last_modified': 1501935315, 'content_length': 3, 'content_encoding': None, 'request_charged': None, 'replication_status': None, 'server_side_encryption': None, 'expires': None, 'restore': None, 'content_type': 'binary/octet-stream', 'sse_customer_key_md5': None, 'content_disposition': None, 'storage_class': None, 'expiration': None, 'missing_meta': None, 'content_language': None, 'ssekms_key_id': None, 'sse_customer_algorithm': None, 'e_tag': '"37b51d194a7513e45b56f6524f2d51f2"', 'website_redirect_location': None, 'cache_control': None}
```
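Individual fields can also be read with the ``Info.get`` accessor; for example, using the object shown above:

```python
>>> info.get('s3', 'content_type')
'binary/octet-stream'
>>> info.get('s3', 'e_tag')
'"37b51d194a7513e45b56f6524f2d51f2"'
```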


## S3 URLs

You can use the ``geturl`` method to generate an externally accessible
URL from an S3 object. Here's an example:

```python
>>> s3fs.geturl('foo')
'https://fsexample.s3.amazonaws.com//foo?AWSAccessKeyId=AKIAIEZZDQU72WQP3JUA&Expires=1501939084&Signature=4rfDuqVgmvILjtTeYOJvyIXRMvs%3D'
```

## Testing

Automated unit tests are run on [GitHub Actions](https://github.com/miarec/miarec_s3fs/actions).

To run the tests locally, do the following.

Install Docker on the local machine.

Create and activate a Python virtual environment:

    python -m venv venv
    source venv/bin/activate

Install the project and test dependencies:

    pip install -e ".[test]"

Run tests:

    pytest

## Documentation

- [PyFilesystem Wiki](https://www.pyfilesystem.org)
- [PyFilesystem Reference](https://docs.pyfilesystem.org/en/latest/reference/base.html)

            
