# aws-cdk.aws-s3-deployment

* **Version**: 1.204.0
* **Home page**: https://github.com/aws/aws-cdk
* **Summary**: Constructs for deploying contents to S3 buckets
* **Upload time**: 2023-06-19 21:07:10
* **Author**: Amazon Web Services
* **Requires Python**: ~=3.7
* **License**: Apache-2.0
# AWS S3 Deployment Construct Library

<!--BEGIN STABILITY BANNER-->---


![End-of-Support](https://img.shields.io/badge/End--of--Support-critical.svg?style=for-the-badge)

> AWS CDK v1 has reached End-of-Support on 2023-06-01.
> This package is no longer being updated, and users should migrate to AWS CDK v2.
>
> For more information on how to migrate, see the [*Migrating to AWS CDK v2* guide](https://docs.aws.amazon.com/cdk/v2/guide/migrating-v2.html).

---
<!--END STABILITY BANNER-->

This library allows populating an S3 bucket with the contents of .zip files
from other S3 buckets or from local disk.

The following example defines a publicly accessible S3 bucket with web hosting
enabled and populates it from a local directory on disk.

```python
website_bucket = s3.Bucket(self, "WebsiteBucket",
    website_index_document="index.html",
    public_read_access=True
)

s3deploy.BucketDeployment(self, "DeployWebsite",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=website_bucket,
    destination_key_prefix="web/static"
)
```

This is what happens under the hood:

1. When this stack is deployed (either via `cdk deploy` or via CI/CD), the
   contents of the local `website-dist` directory will be archived and uploaded
   to an intermediary assets bucket. If there is more than one source, they will
   be individually uploaded.
2. The `BucketDeployment` construct synthesizes a custom CloudFormation resource
   of type `Custom::CDKBucketDeployment` into the template. The source bucket/key
   is set to point to the assets bucket.
3. The custom resource downloads the .zip archive, extracts it, and issues
   `aws s3 sync --delete` against the destination bucket (in this case
   `websiteBucket`). If there is more than one source, the sources are
   downloaded and merged before this step (see the sketch below).
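
For example, a single deployment can combine several sources, which are merged
before syncing; a minimal sketch (the artifacts bucket and key are illustrative):

```python
# website_bucket: s3.Bucket
# artifacts_bucket: s3.Bucket


s3deploy.BucketDeployment(self, "DeployMergedSources",
    sources=[
        s3deploy.Source.asset("./website-dist"),  # zipped and uploaded by the CDK CLI
        s3deploy.Source.bucket(artifacts_bucket, "extra-content.zip")
    ],
    destination_bucket=website_bucket
)
```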

If you are referencing the filled bucket in another construct that depends on
the files already being there, be sure to use `deployment.deployedBucket`. This
ensures the bucket deployment has finished before the resource that uses the
bucket is created:

```python
# website_bucket: s3.Bucket


deployment = s3deploy.BucketDeployment(self, "DeployWebsite",
    sources=[s3deploy.Source.asset("./my-website")],
    destination_bucket=website_bucket
)

ConstructThatReadsFromTheBucket(self, "Consumer",
    # use 'deployment.deployed_bucket' instead of 'website_bucket' here
    bucket=deployment.deployed_bucket
)
```

## Supported sources

The following source types are supported for bucket deployments:

* Local .zip file: `s3deploy.Source.asset("/path/to/local/file.zip")`
* Local directory: `s3deploy.Source.asset("/path/to/local/directory")`
* Another bucket: `s3deploy.Source.bucket(bucket, zip_object_key)`
* String data: `s3deploy.Source.data("object-key.txt", "hello, world!")`
  (supports [deploy-time values](#data-with-deploy-time-values))
* JSON data: `s3deploy.Source.json_data("object-key.json", {"json": "object"})`
  (supports [deploy-time values](#data-with-deploy-time-values))

To create a source from a single file, you can pass the `exclude` asset option
to exclude all but that file:

* Single file: `s3deploy.Source.asset("/path/to/local/directory", exclude=["**", "!onlyThisFile.txt"])`
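
In context, deploying that single file might look like the sketch below
(the directory and file names are illustrative):

```python
# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeploySingleFile",
    # exclude everything, then re-include the one file we want
    sources=[s3deploy.Source.asset("./config", exclude=["**", "!onlyThisFile.txt"])],
    destination_bucket=destination_bucket
)
```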

**IMPORTANT** The `aws-s3-deployment` module is only intended to be used with
zip files from trusted sources. Directories bundled by the CDK CLI (by using
`Source.asset()` on a directory) are safe. If you are using `Source.asset()` or
`Source.bucket()` to reference an existing zip file, make sure you trust the
file you are referencing. Zips from untrusted sources might be able to execute
arbitrary code in the Lambda Function used by this module, and use its permissions
to read or write unexpected files in the S3 bucket.

## Retain on Delete

By default, the contents of the destination bucket will **not** be deleted when the
`BucketDeployment` resource is removed from the stack or when the destination is
changed. You can use the option `retainOnDelete: false` to disable this behavior,
in which case the contents will be deleted.
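
A minimal sketch of a deployment whose objects are removed together with the resource:

```python
# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeployAndCleanUp",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=destination_bucket,
    retain_on_delete=False  # contents are deleted when this resource is removed
)
```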

Configuring this has a few implications you should be aware of:

* **Logical ID Changes**

  Changing the logical ID of the `BucketDeployment` construct without changing the destination
  (for example due to refactoring, or an intentional ID change) **will result in the deletion of the objects**.
  This is because CloudFormation first creates the new resource, which has no effect,
  and then deletes the old resource, which deletes the objects,
  since the destination hasn't changed and `retainOnDelete` is `false`.
* **Destination Changes**

  When the destination bucket or prefix is changed, all files in the previous destination are **first**
  deleted and then uploaded to the new destination location. This could have availability implications
  for your users.

### General Recommendations

#### Shared Bucket

If the destination bucket **is not** dedicated to this specific `BucketDeployment` construct (i.e. it is shared by other entities),
we recommend always configuring the `destinationKeyPrefix` property, as in the sketch below. This prevents the deployment from
accidentally deleting data that it didn't upload.
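
For example (the prefix is illustrative):

```python
# shared_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeployToSharedBucket",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=shared_bucket,
    # scoping the deployment to its own prefix keeps pruning and deletion
    # away from objects owned by other writers
    destination_key_prefix="my-app/"
)
```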

#### Dedicated Bucket

If the destination bucket **is** dedicated, it might be reasonable to skip the prefix configuration.
In that case, we recommend removing `retainOnDelete: false` and instead configuring the
[`autoDeleteObjects`](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-s3-readme.html#bucket-deletion)
property on the destination bucket. This avoids the logical ID problem mentioned above.
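
A minimal sketch of that setup, assuming the bucket can be destroyed with the stack:

```python
# a bucket dedicated to this deployment: let the bucket itself clean up
destination_bucket = s3.Bucket(self, "DedicatedBucket",
    auto_delete_objects=True,  # empties the bucket when the bucket is deleted
    removal_policy=RemovalPolicy.DESTROY
)

s3deploy.BucketDeployment(self, "DeployToDedicatedBucket",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=destination_bucket
)
```

As with `Duration` elsewhere in this README, `RemovalPolicy` is assumed to be
imported from `aws_cdk.core`.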

## Prune

By default, files in the destination bucket that don't exist in the source will be deleted
when the `BucketDeployment` resource is created or updated. You can use the option `prune: false` to disable
this behavior, in which case the files will not be deleted.

```python
# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeployMeWithoutDeletingFilesOnDestination",
    sources=[s3deploy.Source.asset("./my-website")],
    destination_bucket=destination_bucket,
    prune=False
)
```

This option also enables you to create
multiple bucket deployments for the same destination bucket and prefix,
each with its own characteristics. For example, you can set different cache-control headers
based on file extensions:

```python
# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "BucketDeployment",
    sources=[s3deploy.Source.asset("./website", exclude=["index.html"])],
    destination_bucket=destination_bucket,
    cache_control=[s3deploy.CacheControl.from_string("max-age=31536000,public,immutable")],
    prune=False
)

s3deploy.BucketDeployment(self, "HTMLBucketDeployment",
    sources=[s3deploy.Source.asset("./website", exclude=["*", "!index.html"])],
    destination_bucket=destination_bucket,
    cache_control=[s3deploy.CacheControl.from_string("max-age=0,no-cache,no-store,must-revalidate")],
    prune=False
)
```

## Exclude and Include Filters

There are two points at which filters are evaluated in a deployment: asset bundling and the actual deployment. If you simply want to exclude files in the asset bundling process, you should leverage the `exclude` property of `AssetOptions` when defining your source:

```python
# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "HTMLBucketDeployment",
    sources=[s3deploy.Source.asset("./website", exclude=["*", "!index.html"])],
    destination_bucket=destination_bucket
)
```

If you want to specify filters to be used in the deployment process, you can use the `exclude` and `include` filters on `BucketDeployment`. Excluded files will not be deployed to the destination bucket. In addition, if an excluded file already exists in the destination bucket, it will not be deleted even when the `prune` option is enabled:

```python
# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeployButExcludeSpecificFiles",
    sources=[s3deploy.Source.asset("./my-website")],
    destination_bucket=destination_bucket,
    exclude=["*.txt"]
)
```

These filters follow the same format that is used for the AWS CLI.  See the CLI documentation for information on [Using Include and Exclude Filters](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html#use-of-exclude-and-include-filters).
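
For example, following the AWS CLI convention that a later `--include` can
re-include files matched by an earlier `--exclude`, a deployment restricted to
`.png` files might look like this sketch (verify the filter behavior for your use case):

```python
# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeployOnlyPngs",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=destination_bucket,
    exclude=["*"],  # start by excluding everything
    include=["*.png"]  # then re-include only .png objects
)
```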

## Objects metadata

You can specify metadata to be set on all the objects in your deployment.
There are two types of metadata in S3: system-defined metadata and user-defined metadata.
System-defined metadata has a special purpose; for example, cache-control defines how long to keep an object cached.
User-defined metadata is not used by S3 itself, and its keys always begin with `x-amz-meta-` (this prefix is added automatically).

System defined metadata keys include the following:

* cache-control (`--cache-control` in `aws s3 sync`)
* content-disposition (`--content-disposition` in `aws s3 sync`)
* content-encoding (`--content-encoding` in `aws s3 sync`)
* content-language (`--content-language` in `aws s3 sync`)
* content-type (`--content-type` in `aws s3 sync`)
* expires (`--expires` in `aws s3 sync`)
* x-amz-storage-class (`--storage-class` in `aws s3 sync`)
* x-amz-website-redirect-location (`--website-redirect` in `aws s3 sync`)
* x-amz-server-side-encryption (`--sse` in `aws s3 sync`)
* x-amz-server-side-encryption-aws-kms-key-id (`--sse-kms-key-id` in `aws s3 sync`)
* x-amz-server-side-encryption-customer-algorithm (`--sse-c-copy-source` in `aws s3 sync`)
* x-amz-acl (`--acl` in `aws s3 sync`)

You can find more information about system defined metadata keys in
[S3 PutObject documentation](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)
and [`aws s3 sync` documentation](https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html).

```python
website_bucket = s3.Bucket(self, "WebsiteBucket",
    website_index_document="index.html",
    public_read_access=True
)

s3deploy.BucketDeployment(self, "DeployWebsite",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=website_bucket,
    destination_key_prefix="web/static",  # optional prefix in destination bucket
    metadata=s3deploy.UserDefinedObjectMetadata(A="1", b="2"),  # user-defined metadata

    # system-defined metadata
    content_type="text/html",
    content_language="en",
    storage_class=s3deploy.StorageClass.INTELLIGENT_TIERING,
    server_side_encryption=s3deploy.ServerSideEncryption.AES_256,
    cache_control=[
        s3deploy.CacheControl.set_public(),
        s3deploy.CacheControl.max_age(Duration.hours(1))
    ],
    access_control=s3.BucketAccessControl.BUCKET_OWNER_FULL_CONTROL
)
```

## CloudFront Invalidation

You can provide a CloudFront distribution and optional paths to invalidate after the bucket deployment finishes.

```python
import aws_cdk.aws_cloudfront as cloudfront
import aws_cdk.aws_cloudfront_origins as origins


bucket = s3.Bucket(self, "Destination")

# Handles buckets whether or not they are configured for website hosting.
distribution = cloudfront.Distribution(self, "Distribution",
    default_behavior=cloudfront.BehaviorOptions(origin=origins.S3Origin(bucket))
)

s3deploy.BucketDeployment(self, "DeployWithInvalidation",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=bucket,
    distribution=distribution,
    distribution_paths=["/images/*.png"]
)
```

## Size Limits

The default memory limit for the deployment resource is 128MiB. If you need to
copy larger files, you can use the `memoryLimit` configuration to increase the
size of the AWS Lambda resource handler.

The default ephemeral storage size for the deployment resource is 512MiB. If you
need to upload larger files, you may hit this limit. You can use the
`ephemeralStorageSize` configuration to increase the storage size of the AWS Lambda
resource handler.

> NOTE: a new AWS Lambda handler will be created in your stack for each combination
> of memory and storage size.
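
A minimal sketch bumping both limits (the values are illustrative, and `Size`
is assumed to come from `aws_cdk.core`):

```python
# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeployLargeFiles",
    sources=[s3deploy.Source.asset("./large-assets")],
    destination_bucket=destination_bucket,
    memory_limit=1024,  # MiB for the handler Lambda
    ephemeral_storage_size=Size.gibibytes(2)  # scratch space for download and extraction
)
```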

## EFS Support

If your workflow needs more disk space than the default 512 MB, you can attach an EFS file system to the
underlying Lambda function. To enable EFS support, set the `useEfs` and `vpc` props on `BucketDeployment`.

See the sample usage below.
Note that creating the VPC in the same stack as the deployment may cause stack deletion failures;
to avoid this, keep your network infrastructure (VPC) in a separate stack and pass it in as a prop.

```python
# destination_bucket: s3.Bucket
# vpc: ec2.Vpc


s3deploy.BucketDeployment(self, "DeployMeWithEfsStorage",
    sources=[s3deploy.Source.asset("./my-website")],
    destination_bucket=destination_bucket,
    destination_key_prefix="efs/",
    use_efs=True,
    vpc=vpc,
    retain_on_delete=False
)
```

## Data with deploy-time values

The content passed to `Source.data()` or `Source.jsonData()` can include
references that will get resolved only during deployment.

For example:

```python
import aws_cdk.aws_sns as sns

# destination_bucket: s3.Bucket
# topic: sns.Topic


app_config = {
    "topic_arn": topic.topic_arn,
    "base_url": "https://my-endpoint"
}

s3deploy.BucketDeployment(self, "BucketDeployment",
    sources=[s3deploy.Source.json_data("config.json", app_config)],
    destination_bucket=destination_bucket
)
```

The value of `topic.topic_arn` is a deploy-time value. It only gets resolved
during deployment: a marker is placed in the generated source file and
substituted with the actual value when the file is deployed to the destination.

## Notes

* This library uses an AWS CloudFormation custom resource which is about 10MiB in
  size. The code of this resource is bundled with this library.
* AWS Lambda execution time is limited to 15 minutes, which limits the amount
  of data that can be deployed into the bucket in a single run.
* When the `BucketDeployment` is removed from the stack, the contents are retained
  in the destination bucket ([#952](https://github.com/aws/aws-cdk/issues/952)).
* If you are using `s3deploy.Source.bucket()` to take the file source from
  another bucket: the deployed files will only be updated if the key (file name)
  of the file in the source bucket changes. Mutating the file in place is not
  enough: the custom resource will simply not run if the properties don't
  change (see the sketch after this list).

  * If you use assets (`s3deploy.Source.asset()`) you don't need to worry
    about this: the asset system will make sure that if the files have changed,
    the file name is unique and the deployment will run.
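
A sketch of the versioned-key pattern for `Source.bucket()` (the key scheme and
`build_id` are illustrative):

```python
# destination_bucket: s3.Bucket
# source_bucket: s3.Bucket

build_id = "2023-06-19-01"  # any value that changes on every build

s3deploy.BucketDeployment(self, "DeployFromBucket",
    # embedding a build identifier in the key guarantees the custom resource
    # sees a property change and re-runs for every new build
    sources=[s3deploy.Source.bucket(source_bucket, f"artifacts/site-{build_id}.zip")],
    destination_bucket=destination_bucket
)
```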

## Development

The custom resource is implemented in Python 3.7 in order to leverage the
AWS CLI for `aws s3 sync`. The code is under [`lib/lambda`](https://github.com/aws/aws-cdk/tree/master/packages/%40aws-cdk/aws-s3-deployment/lib/lambda) and
unit tests are under [`test/lambda`](https://github.com/aws/aws-cdk/tree/master/packages/%40aws-cdk/aws-s3-deployment/test/lambda).

This package requires Python 3.7 during build time in order to create the custom
resource Lambda bundle and test it. It also relies on a few bash scripts, so
might be tricky to build on Windows.

## Roadmap

* [ ] Support "blue/green" deployments ([#954](https://github.com/aws/aws-cdk/issues/954))

            
