cdk-efs-assets

Name: cdk-efs-assets
Version: 0.2.1
Home page: https://github.com/pahud/cdk-efs-assets.git
Summary: Amazon EFS assets from Github repositories or S3 buckets
Upload time: 2021-01-18 08:00:52
Author: Pahud Hsieh <pahudnet@gmail.com>
Requires Python: >=3.6
License: Apache-2.0
            [![NPM version](https://badge.fury.io/js/cdk-efs-assets.svg)](https://badge.fury.io/js/cdk-efs-assets)
[![PyPI version](https://badge.fury.io/py/cdk-efs-assets.svg)](https://badge.fury.io/py/cdk-efs-assets)
![Release](https://github.com/pahud/cdk-efs-assets/workflows/Release/badge.svg)

# cdk-efs-assets

A CDK construct library that populates Amazon EFS file systems with assets from GitHub or S3. When the source is S3, the construct can also optionally resync the EFS contents whenever a new zip file is uploaded to S3.

## Install

TypeScript/JavaScript:

```bash
npm i cdk-efs-assets
```
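Python (the package published on PyPI):

```bash
pip install cdk-efs-assets
```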

## SyncedAccessPoint

The main construct that is used to provide this EFS sync functionality is `SyncedAccessPoint`. This extends the standard EFS `AccessPoint` construct, and takes an additional `SyncSource` constructor property which defines the source to sync assets from. The `SyncedAccessPoint` instance can be used anywhere an `AccessPoint` can be used. For example, to specify a volume in a Task Definition:

```python
task_definition = ecs.FargateTaskDefinition(self, "TaskDefinition",
    volumes=[{
        "name": "efs-storage",
        "efs_volume_configuration": {
            "file_system_id": shared_file_system.file_system_id,
            "transit_encryption": "ENABLED",
            "authorization_config": {
                "access_point_id": synced_access_point.access_point_id
            }
        }
    }]
)
```

## SyncSource

Use the `SyncSource` static functions to create a `SyncSource` instance that can then be passed as a `SyncedAccessPoint` constructor property to define the source of the sync. For example:

```python
SyncedAccessPoint(stack, "EfsAccessPoint",
    sync_source=SyncSource.github(
        vpc=vpc,
        repository="https://github.com/pahud/cdk-efs-assets.git"
    ),
    # ...other SyncedAccessPoint properties (file_system, path, etc.)
)
```

### syncDirectoryPath

By default, the synced EFS assets are placed into a directory named after the sync source. The GitHub source copies files into a directory named after the repository (for a repository specified as 'https://github.com/pahud/cdk-efs-assets.git', the directory name would be 'cdk-efs-assets'), while the S3 archive source copies files into a directory named after the zip file (for a zip file named 'assets.zip', the directory name would be 'assets').
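The default naming above can be sketched as a small helper. This is purely illustrative and not part of the library; it just mirrors the documented behavior of stripping the `.git` or `.zip` suffix from the source's base name:

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse

def default_sync_directory(source: str) -> str:
    """Mimic the documented default directory name for a sync source:
    the repository name for GitHub, the zip file's base name for S3."""
    name = PurePosixPath(urlparse(source).path).name
    for suffix in (".git", ".zip"):
        if name.endswith(suffix):
            name = name[: -len(suffix)]
    return name

# default_sync_directory("https://github.com/pahud/cdk-efs-assets.git") → "cdk-efs-assets"
# default_sync_directory("folder/assets.zip") → "assets"
```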

If you wish to override this default behavior, specify a value for the `syncDirectoryPath` property that is passed into the `SyncSource` call.

If you are using the `AccessPoint` in an ECS/Fargate Task Definition, you probably will want to override the value of `syncDirectoryPath` to '/'. This will place the file contents in the root directory of the Access Point. The reason for this is that when you create a volume that is referencing an EFS Access Point, you are not allowed to specify any path other than the root directory in the task definition configuration.
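For example, to force the repository contents to the root of the access point, the override could look like this (a sketch; `sync_directory_path` is the snake_case Python form of the `syncDirectoryPath` property, and `vpc` is assumed to come from the surrounding stack):

```python
sync_source = SyncSource.github(
    vpc=vpc,
    repository="https://github.com/pahud/cdk-efs-assets.git",
    sync_directory_path="/"  # place the repository contents at the access point root
)
```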

## How to use SyncedAccessPoint initialized with files provisioned from a GitHub repository

This will sync assets from a GitHub repository to a directory (by default, the output directory is named after the repository name) in the EFS AccessPoint:

```python
import os

from aws_cdk.core import App, RemovalPolicy, Stack
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_efs as efs
from cdk_efs_assets import SyncSource, SyncedAccessPoint

app = App()

env = {
    "region": os.environ.get("CDK_DEFAULT_REGION") or os.environ.get("AWS_DEFAULT_REGION"),
    "account": os.environ.get("CDK_DEFAULT_ACCOUNT")
}

stack = Stack(app, "testing-stack", env=env)

vpc = ec2.Vpc.from_lookup(stack, "Vpc", is_default=True)

fs = efs.FileSystem(stack, "Filesystem",
    vpc=vpc,
    removal_policy=RemovalPolicy.DESTROY
)

efs_access_point = SyncedAccessPoint(stack, "GithubAccessPoint",
    file_system=fs,
    path="/demo-github",
    create_acl={
        "owner_gid": "1001",
        "owner_uid": "1001",
        "permissions": "0755"
    },
    posix_user={
        "uid": "1001",
        "gid": "1001"
    },
    sync_source=SyncSource.github(
        vpc=vpc,
        repository="https://github.com/pahud/cdk-efs-assets.git"
    )
)
```

## How to use SyncedAccessPoint initialized with files provisioned from a zip file stored in S3

This will sync assets from a zip file stored in an S3 bucket to a directory (by default, the output directory is named after the zip file name) in the EFS AccessPoint:

```python
import os

from aws_cdk.core import App, RemovalPolicy, Stack
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_efs as efs
from aws_cdk.aws_s3 import Bucket
from cdk_efs_assets import SyncSource, SyncedAccessPoint

app = App()

env = {
    "region": os.environ.get("CDK_DEFAULT_REGION") or os.environ.get("AWS_DEFAULT_REGION"),
    "account": os.environ.get("CDK_DEFAULT_ACCOUNT")
}

stack = Stack(app, "testing-stack", env=env)

vpc = ec2.Vpc.from_lookup(stack, "Vpc", is_default=True)

fs = efs.FileSystem(stack, "Filesystem",
    vpc=vpc,
    removal_policy=RemovalPolicy.DESTROY
)

bucket = Bucket.from_bucket_name(stack, "Bucket", "demo-bucket")

efs_access_point = SyncedAccessPoint(stack, "EfsAccessPoint",
    file_system=fs,
    path="/demo-s3",
    create_acl={
        "owner_gid": "1001",
        "owner_uid": "1001",
        "permissions": "0755"
    },
    posix_user={
        "uid": "1001",
        "gid": "1001"
    },
    sync_source=SyncSource.s3_archive(
        vpc=vpc,
        bucket=bucket,
        zip_file_path="folder/foo.zip"
    )
)
```

### syncOnUpdate

If the `syncOnUpdate` property is set to `true` (the default), the specified zip file path is monitored, and whenever a new object is uploaded to that path the data is resynced to EFS. Note that this functionality requires a CloudTrail Trail in your account that captures the desired S3 write data events.
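To opt out of this monitoring and sync only at deploy time, the flag can be disabled on the S3 archive source (a sketch; `sync_on_update` is assumed to be the snake_case Python form of `syncOnUpdate`, and `bucket`/`vpc` come from the example above):

```python
sync_source = SyncSource.s3_archive(
    vpc=vpc,
    bucket=bucket,
    zip_file_path="folder/foo.zip",
    sync_on_update=False  # do not watch for new uploads; sync only at deploy time
)
```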

*WARNING*: The contents of the extraction directory in the access point will be destroyed before extracting the zip file.



            
