aws-cdk.aws-ecs

Name: aws-cdk.aws-ecs
Version: 1.203.0
Home page: https://github.com/aws/aws-cdk
Summary: The CDK Construct Library for AWS::ECS
Upload time: 2023-05-31 23:02:07
Author: Amazon Web Services
Requires Python: ~=3.7
License: Apache-2.0

# Amazon ECS Construct Library

<!--BEGIN STABILITY BANNER-->---


![cfn-resources: Stable](https://img.shields.io/badge/cfn--resources-stable-success.svg?style=for-the-badge)

![cdk-constructs: Stable](https://img.shields.io/badge/cdk--constructs-stable-success.svg?style=for-the-badge)

---
<!--END STABILITY BANNER-->

This package contains constructs for working with **Amazon Elastic Container
Service** (Amazon ECS).

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service.

For further information on Amazon ECS,
see the [Amazon ECS documentation](https://docs.aws.amazon.com/ecs).

The following example creates an Amazon ECS cluster, adds capacity to it, and
runs a service on it:

```python
# vpc: ec2.Vpc


# Create an ECS cluster
cluster = ecs.Cluster(self, "Cluster",
    vpc=vpc
)

# Add capacity to it
cluster.add_capacity("DefaultAutoScalingGroupCapacity",
    instance_type=ec2.InstanceType("t2.xlarge"),
    desired_capacity=3
)

task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")

task_definition.add_container("DefaultContainer",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    memory_limit_mib=512
)

# Instantiate an Amazon ECS Service
ecs_service = ecs.Ec2Service(self, "Service",
    cluster=cluster,
    task_definition=task_definition
)
```

For a set of constructs defining common ECS architectural patterns, see the `@aws-cdk/aws-ecs-patterns` package.

## Launch Types: AWS Fargate vs Amazon EC2

There are three sets of constructs in this library: one to run tasks on Amazon EC2,
one to run tasks on AWS Fargate, and one to run tasks on ECS Anywhere.

* Use the `Ec2TaskDefinition` and `Ec2Service` constructs to run tasks on Amazon EC2 instances running in your account.
* Use the `FargateTaskDefinition` and `FargateService` constructs to run tasks on
  instances that are managed for you by AWS.
* Use the `ExternalTaskDefinition` and `ExternalService` constructs to run AWS ECS Anywhere tasks on self-managed infrastructure.

Here are the main differences:

* **Amazon EC2**: instances are under your control. Complete control of task-to-host
  allocation. Required to specify at least a memory reservation or limit for
  every container. Can use Host, Bridge and AwsVpc networking modes. Can attach a
  Classic Load Balancer. Can share volumes between container and host.
* **AWS Fargate**: tasks run on AWS-managed instances, and AWS manages task-to-host
  allocation for you. Requires specification of memory and CPU sizes at the
  task definition level. Only supports the AwsVpc networking mode and
  Application/Network Load Balancers. Only the AWS log driver is supported.
  Many host features are not supported, such as adding kernel capabilities
  and mounting host devices/volumes inside the container.
* **AWS ECS Anywhere**: tasks are run and managed by AWS ECS Anywhere on infrastructure owned by the customer. Only the Bridge networking mode is supported. Does not support auto-scaling, load balancing, Cloud Map, or attachment of volumes.

For more information on Amazon EC2 vs AWS Fargate, networking, and ECS Anywhere, see the AWS documentation:
[AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html),
[Task Networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html),
[ECS Anywhere](https://aws.amazon.com/ecs/anywhere/)

## Clusters

A `Cluster` defines the infrastructure to run your
tasks on. You can run many tasks on a single cluster.

The following code creates a cluster that can run AWS Fargate tasks:

```python
# vpc: ec2.Vpc


cluster = ecs.Cluster(self, "Cluster",
    vpc=vpc
)
```

The following code imports an existing cluster by its ARN; the imported cluster can then be used to
import an Amazon ECS service, either EC2 or Fargate:

```python
cluster_arn = "arn:aws:ecs:us-east-1:012345678910:cluster/clusterName"

cluster = ecs.Cluster.from_cluster_arn(self, "Cluster", cluster_arn)
```
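
Similarly, an existing service can be imported into the imported cluster. A minimal sketch using `FargateService.from_fargate_service_attributes` (the account ID and names below are placeholders):

```python
# cluster: ecs.ICluster


# Import an existing Fargate service by its attributes
service = ecs.FargateService.from_fargate_service_attributes(self, "ImportedService",
    cluster=cluster,
    service_arn="arn:aws:ecs:us-east-1:012345678910:service/clusterName/serviceName"
)
```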

To use tasks with the Amazon EC2 launch type, you have to add capacity to
the cluster so that tasks can be scheduled on your instances. Typically,
you add an AutoScalingGroup with instances running the latest
Amazon ECS-optimized AMI to the cluster. There is a method to build and add such an
AutoScalingGroup automatically, or you can supply a customized AutoScalingGroup
that you construct yourself. It's possible to add multiple AutoScalingGroups
with various instance types.

The following example creates an Amazon ECS cluster and adds capacity to it:

```python
# vpc: ec2.Vpc


cluster = ecs.Cluster(self, "Cluster",
    vpc=vpc
)

# Either add default capacity
cluster.add_capacity("DefaultAutoScalingGroupCapacity",
    instance_type=ec2.InstanceType("t2.xlarge"),
    desired_capacity=3
)

# Or add customized capacity. Be sure to start the Amazon ECS-optimized AMI.
auto_scaling_group = autoscaling.AutoScalingGroup(self, "ASG",
    vpc=vpc,
    instance_type=ec2.InstanceType("t2.xlarge"),
    machine_image=ecs.EcsOptimizedImage.amazon_linux(),
    # Or use the Amazon ECS-optimized Amazon Linux 2 AMI
    # machine_image=ecs.EcsOptimizedImage.amazon_linux2(),
    desired_capacity=3
)

capacity_provider = ecs.AsgCapacityProvider(self, "AsgCapacityProvider",
    auto_scaling_group=auto_scaling_group
)
cluster.add_asg_capacity_provider(capacity_provider)
```

If you omit the property `vpc`, the construct will create a new VPC with two AZs.
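
For example, a minimal sketch of a cluster that lets the construct create the VPC for you:

```python
# No `vpc` supplied: the construct creates a default VPC with two AZs
cluster = ecs.Cluster(self, "ClusterWithDefaultVpc")
```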

By default, all machine images will auto-update to the latest version
on each deployment, causing a replacement of the instances in your AutoScalingGroup
if the AMI has been updated since the last deployment.

If task draining is enabled, ECS will transparently reschedule tasks onto the new
instances before terminating your old instances. If you have disabled task draining,
the tasks will be terminated along with the instance. To prevent that, you
can pick a non-updating AMI by passing `cachedInContext: true`, but be sure
to periodically update to the latest AMI manually by using the [CDK CLI
context management commands](https://docs.aws.amazon.com/cdk/latest/guide/context.html):

```python
# vpc: ec2.Vpc

auto_scaling_group = autoscaling.AutoScalingGroup(self, "ASG",
    machine_image=ecs.EcsOptimizedImage.amazon_linux(cached_in_context=True),
    vpc=vpc,
    instance_type=ec2.InstanceType("t2.micro")
)
```

### Bottlerocket

[Bottlerocket](https://aws.amazon.com/bottlerocket/) is a Linux-based open source operating system that is
purpose-built by AWS for running containers. You can launch Amazon ECS container instances with the Bottlerocket AMI.

The following example adds self-managed Amazon EC2 capacity of two `c5.large` Linux instances running the Bottlerocket AMI to the cluster:

```python
# cluster: ecs.Cluster


cluster.add_capacity("bottlerocket-asg",
    min_capacity=2,
    instance_type=ec2.InstanceType("c5.large"),
    machine_image=ecs.BottleRocketImage()
)
```

### ARM64 (Graviton) Instances

To launch instances with ARM64 hardware, you can use the Amazon ECS-optimized
Amazon Linux 2 (arm64) AMI. Based on Amazon Linux 2, this AMI is recommended
for use when launching your EC2 instances that are powered by Arm-based AWS
Graviton Processors.

```python
# cluster: ecs.Cluster


cluster.add_capacity("graviton-cluster",
    min_capacity=2,
    instance_type=ec2.InstanceType("c6g.large"),
    machine_image=ecs.EcsOptimizedImage.amazon_linux2(ecs.AmiHardwareType.ARM)
)
```

Bottlerocket is also supported:

```python
# cluster: ecs.Cluster


cluster.add_capacity("graviton-cluster",
    min_capacity=2,
    instance_type=ec2.InstanceType("c6g.large"),
    machine_image_type=ecs.MachineImageType.BOTTLEROCKET
)
```

### Spot Instances

To add Spot Instances to the cluster, you must specify `spotPrice` in the `ecs.AddCapacityOptions` and can optionally enable the `spotInstanceDraining` property.

```python
# cluster: ecs.Cluster


# Add an AutoScalingGroup with spot instances to the existing cluster
cluster.add_capacity("AsgSpot",
    max_capacity=2,
    min_capacity=2,
    desired_capacity=2,
    instance_type=ec2.InstanceType("c5.xlarge"),
    spot_price="0.0735",
    # Enable the Automated Spot Draining support for Amazon ECS
    spot_instance_draining=True
)
```

### SNS Topic Encryption

When the `ecs.AddCapacityOptions` that you provide has a non-zero `taskDrainTime` (the default), an SNS topic and a Lambda function are created to ensure that the
cluster's instances have been properly drained of tasks before terminating. The SNS topic receives the instance-terminating lifecycle event from the AutoScalingGroup,
and the Lambda function acts on that event. If you wish to engage [server-side encryption](https://docs.aws.amazon.com/sns/latest/dg/sns-data-encryption.html) for this SNS topic,
you may do so by providing a KMS key for the `topicEncryptionKey` property of `ecs.AddCapacityOptions`.

```python
# Given
# cluster: ecs.Cluster
# key: kms.Key

# Then, use that key to encrypt the lifecycle-event SNS Topic.
cluster.add_capacity("ASGEncryptedSNS",
    instance_type=ec2.InstanceType("t2.xlarge"),
    desired_capacity=3,
    topic_encryption_key=key
)
```

## Task definitions

A task definition describes what a single copy of a **task** should look like.
A task definition has one or more containers; typically, it has one
main container (the *default container* is the first one that's added
to the task definition, and it is marked *essential*) and optionally
some supporting containers, which are used to support the main container by
doing things like uploading logs or metrics to monitoring services.
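
As a minimal sketch of that layout (the image names are placeholders), the first container added becomes the default, essential container, while a supporting container can be marked non-essential via the `essential` prop:

```python
task_definition = ecs.FargateTaskDefinition(self, "SketchTaskDef",
    memory_limit_mib=1024,
    cpu=512
)

# The first container added is the default container, marked essential
task_definition.add_container("main",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample")
)

# A supporting sidecar, e.g. for shipping logs; not essential to the task
task_definition.add_container("log-router",
    image=ecs.ContainerImage.from_registry("example-log-router"),
    essential=False
)
```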

To run a task or service with the Amazon EC2 launch type, use the `Ec2TaskDefinition`. For AWS Fargate tasks/services, use the
`FargateTaskDefinition`. For AWS ECS Anywhere, use the `ExternalTaskDefinition`. These classes
provide simplified APIs that only contain properties relevant for each specific launch type.

For a `FargateTaskDefinition`, specify the task size (`memoryLimitMiB` and `cpu`):

```python
fargate_task_definition = ecs.FargateTaskDefinition(self, "TaskDef",
    memory_limit_mib=512,
    cpu=256
)
```

On Fargate Platform Version 1.4.0 or later, you may specify up to 200 GiB of
[ephemeral storage](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-task-storage.html#fargate-task-storage-pv14):

```python
fargate_task_definition = ecs.FargateTaskDefinition(self, "TaskDef",
    memory_limit_mib=512,
    cpu=256,
    ephemeral_storage_gib=100
)
```

To add containers to a task definition, call `addContainer()`:

```python
fargate_task_definition = ecs.FargateTaskDefinition(self, "TaskDef",
    memory_limit_mib=512,
    cpu=256
)
container = fargate_task_definition.add_container("WebContainer",
    # Use an image from DockerHub
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample")
)
```

For an `Ec2TaskDefinition`:

```python
ec2_task_definition = ecs.Ec2TaskDefinition(self, "TaskDef",
    network_mode=ecs.NetworkMode.BRIDGE
)

container = ec2_task_definition.add_container("WebContainer",
    # Use an image from DockerHub
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    memory_limit_mib=1024
)
```

For an `ExternalTaskDefinition`:

```python
external_task_definition = ecs.ExternalTaskDefinition(self, "TaskDef")

container = external_task_definition.add_container("WebContainer",
    # Use an image from DockerHub
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    memory_limit_mib=1024
)
```

You can specify container properties when you add the container to the task definition, or afterwards with dedicated methods. For example,
to add a port mapping when adding a container to the task definition, specify the `portMappings` option:

```python
# task_definition: ecs.TaskDefinition


task_definition.add_container("WebContainer",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    memory_limit_mib=1024,
    port_mappings=[ecs.PortMapping(container_port=3000)]
)
```

To add port mappings directly to a container definition, call `addPortMappings()`:

```python
# container: ecs.ContainerDefinition


container.add_port_mappings(
    ecs.PortMapping(container_port=3000)
)
```

To add data volumes to a task definition, call `addVolume()`:

```python
fargate_task_definition = ecs.FargateTaskDefinition(self, "TaskDef",
    memory_limit_mib=512,
    cpu=256
)
volume = {
    # Use an Elastic FileSystem
    "name": "mydatavolume",
    "efs_volume_configuration": {
        "file_system_id": "EFS"
    }
}

fargate_task_definition.add_volume(**volume)
```

> Note: ECS Anywhere doesn't support volume attachments in the task definition.

For a task definition that can be used with either the Amazon EC2 or
AWS Fargate launch type, use the generic `TaskDefinition` construct.

When creating a task definition you have to specify what kind of
tasks you intend to run: Amazon EC2, AWS Fargate, or both.
The following example uses both:

```python
task_definition = ecs.TaskDefinition(self, "TaskDef",
    memory_mib="512",
    cpu="256",
    network_mode=ecs.NetworkMode.AWS_VPC,
    compatibility=ecs.Compatibility.EC2_AND_FARGATE
)
```

### Images

Images supply the software that runs inside the container. Images can be
obtained from DockerHub or from ECR repositories, built directly from a local Dockerfile, or loaded from an existing tarball; a short sketch follows the list below.

* `ecs.ContainerImage.fromRegistry(imageName)`: use a public image.
* `ecs.ContainerImage.fromRegistry(imageName, { credentials: mySecret })`: use a private image that requires credentials.
* `ecs.ContainerImage.fromEcrRepository(repo, tagOrDigest)`: use the given ECR repository as the image
  to start. If no tag or digest is provided, "latest" is assumed.
* `ecs.ContainerImage.fromAsset('./image')`: build and upload an
  image directly from a `Dockerfile` in your source directory.
* `ecs.ContainerImage.fromDockerImageAsset(asset)`: uses an existing
  `@aws-cdk/aws-ecr-assets.DockerImageAsset` as a container image.
* `ecs.ContainerImage.fromTarball(file)`: use an existing tarball.
* `new ecs.TagParameterContainerImage(repository)`: use the given ECR repository as the image
  but a CloudFormation parameter as the tag.
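
For example, a minimal sketch using a few of these factories (the repository variable, tag, and asset directory are assumptions):

```python
# repo: ecr.Repository


# A public image from a registry such as DockerHub
nginx_image = ecs.ContainerImage.from_registry("nginx:latest")

# An image from an ECR repository, pinned to a specific tag
app_image = ecs.ContainerImage.from_ecr_repository(repo, "v1.2.3")

# An image built and uploaded from a local Dockerfile in ./image
local_image = ecs.ContainerImage.from_asset("./image")
```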

### Environment variables

To pass environment variables to the container, you can use the `environment`, `environmentFiles`, and `secrets` props.

```python
# secret: secretsmanager.Secret
# db_secret: secretsmanager.Secret
# parameter: ssm.StringParameter
# task_definition: ecs.TaskDefinition
# s3_bucket: s3.Bucket


new_container = task_definition.add_container("container",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    memory_limit_mib=1024,
    environment={ # clear text, not for sensitive data
        "STAGE": "prod"},
    environment_files=[ # list of environment files hosted either on local disk or S3
        ecs.EnvironmentFile.from_asset("./demo-env-file.env"),
        ecs.EnvironmentFile.from_bucket(s3_bucket, "assets/demo-env-file.env")],
    secrets={ # Retrieved from AWS Secrets Manager or AWS Systems Manager Parameter Store at container start-up.
        "SECRET": ecs.Secret.from_secrets_manager(secret),
        "DB_PASSWORD": ecs.Secret.from_secrets_manager(db_secret, "password"),  # Reference a specific JSON field, (requires platform version 1.4.0 or later for Fargate tasks)
        "API_KEY": ecs.Secret.from_secrets_manager_version(secret, ecs.SecretVersionInfo(version_id="12345"), "apiKey"),  # Reference a specific version of the secret by its version id or version stage (requires platform version 1.4.0 or later for Fargate tasks)
        "PARAMETER": ecs.Secret.from_ssm_parameter(parameter)}
)
new_container.add_environment("QUEUE_NAME", "MyQueue")
```

The task execution role is automatically granted read permissions on the secrets/parameters. Support for environment
files is restricted to the EC2 launch type for files hosted on S3. Further details are provided in the AWS documentation
about [specifying environment variables](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html).

### System controls

To set system controls (kernel parameters) on the container, use the `systemControls` prop:

```python
# task_definition: ecs.TaskDefinition


task_definition.add_container("container",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    memory_limit_mib=1024,
    system_controls=[ecs.SystemControl(
        namespace="net",
        value="ipv4.tcp_tw_recycle"
    )
    ]
)
```

### Using Windows containers on Fargate

AWS Fargate supports Amazon ECS Windows containers. For more details, please see this [blog post](https://aws.amazon.com/tw/blogs/containers/running-windows-containers-with-amazon-ecs-on-aws-fargate/).

```python
# Create a Task Definition for the Windows container to start
task_definition = ecs.FargateTaskDefinition(self, "TaskDef",
    runtime_platform=ecs.RuntimePlatform(
        operating_system_family=ecs.OperatingSystemFamily.WINDOWS_SERVER_2019_CORE,
        cpu_architecture=ecs.CpuArchitecture.X86_64
    ),
    cpu=1024,
    memory_limit_mib=2048
)

task_definition.add_container("windowsservercore",
    logging=ecs.LogDriver.aws_logs(stream_prefix="win-iis-on-fargate"),
    port_mappings=[ecs.PortMapping(container_port=80)],
    image=ecs.ContainerImage.from_registry("mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019")
)
```

### Using Graviton2 with Fargate

AWS Fargate supports AWS Graviton2 processors. For more details, please see this [blog post](https://aws.amazon.com/blogs/aws/announcing-aws-graviton2-support-for-aws-fargate-get-up-to-40-better-price-performance-for-your-serverless-containers/).

```python
# Create a Task Definition for running containers on the Graviton runtime.
task_definition = ecs.FargateTaskDefinition(self, "TaskDef",
    runtime_platform=ecs.RuntimePlatform(
        operating_system_family=ecs.OperatingSystemFamily.LINUX,
        cpu_architecture=ecs.CpuArchitecture.ARM64
    ),
    cpu=1024,
    memory_limit_mib=2048
)

task_definition.add_container("webarm64",
    logging=ecs.LogDriver.aws_logs(stream_prefix="graviton2-on-fargate"),
    port_mappings=[ecs.PortMapping(container_port=80)],
    image=ecs.ContainerImage.from_registry("public.ecr.aws/nginx/nginx:latest-arm64v8")
)
```

## Service

A `Service` instantiates a `TaskDefinition` on a `Cluster` a given number of
times, optionally associating the tasks with a load balancer.
If a task fails, Amazon ECS automatically restarts it.

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition


service = ecs.FargateService(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    desired_count=5
)
```

An ECS Anywhere service definition looks like this:

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition


service = ecs.ExternalService(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    desired_count=5
)
```

By default, a `Service` creates a security group if none is provided.
If you'd like to specify which security groups to use, set the `securityGroups` property, as shown below.
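
A minimal sketch passing existing security groups (the `security_group` variable is assumed to exist):

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition
# security_group: ec2.SecurityGroup


service = ecs.FargateService(self, "ServiceWithCustomSG",
    cluster=cluster,
    task_definition=task_definition,
    security_groups=[security_group]
)
```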

### Deployment circuit breaker and rollback

Amazon ECS [deployment circuit breaker](https://aws.amazon.com/tw/blogs/containers/announcing-amazon-ecs-deployment-circuit-breaker/)
automatically rolls back unhealthy service deployments without the need for manual intervention. Use `circuitBreaker` to enable the
deployment circuit breaker and optionally enable `rollback` for automatic rollback. See [Using the deployment circuit breaker](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-ecs.html)
for more details.

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition

service = ecs.FargateService(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    circuit_breaker=ecs.DeploymentCircuitBreaker(rollback=True)
)
```

> Note: ECS Anywhere doesn't support deployment circuit breakers and rollback.

### Include an application/network load balancer

`Services` are load balancing targets and can be added to a target group, which can then be attached to an application or network load balancer:

```python
# vpc: ec2.Vpc
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition

service = ecs.FargateService(self, "Service", cluster=cluster, task_definition=task_definition)

lb = elbv2.ApplicationLoadBalancer(self, "LB", vpc=vpc, internet_facing=True)
listener = lb.add_listener("Listener", port=80)
target_group1 = listener.add_targets("ECS1",
    port=80,
    targets=[service]
)
target_group2 = listener.add_targets("ECS2",
    port=80,
    targets=[service.load_balancer_target(
        container_name="MyContainer",
        container_port=8080
    )]
)
```

> Note: ECS Anywhere doesn't support application/network load balancers.

Note that in the example above, the default `service` only allows you to register the first essential container or the first mapped port on the container as a target and add it to a new target group. To have more control over which container and port to register as targets, you can use `service.loadBalancerTarget()` to return a load balancing target for a specific container and port.

Alternatively, you can also create all load balancer targets to be registered in this service, add them to target groups, and attach target groups to listeners accordingly.

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition
# vpc: ec2.Vpc

service = ecs.FargateService(self, "Service", cluster=cluster, task_definition=task_definition)

lb = elbv2.ApplicationLoadBalancer(self, "LB", vpc=vpc, internet_facing=True)
listener = lb.add_listener("Listener", port=80)
service.register_load_balancer_targets(
    ecs.EcsTarget(
        container_name="web",
        container_port=80,
        new_target_group_id="ECS",
        listener=ecs.ListenerConfig.application_listener(listener,
            protocol=elbv2.ApplicationProtocol.HTTPS
        )
    )
)
```

### Using a Load Balancer from a different Stack

If you want to put your Load Balancer and the Service it load balances in
different stacks, you may not be able to use the convenience methods
`loadBalancer.addListener()` and `listener.addTargets()`.

The reason is that these methods will create resources in the same Stack as the
object they're called on, which may lead to cyclic references between stacks.
Instead, you will have to create an `ApplicationListener` in the service stack,
or an empty `TargetGroup` in the load balancer stack that you attach your
service to.

See the [ecs/cross-stack-load-balancer example](https://github.com/aws-samples/aws-cdk-examples/tree/master/typescript/ecs/cross-stack-load-balancer/)
for the alternatives.
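
A minimal sketch of the target-group alternative, assuming the empty target group lives in the load balancer stack and the service attaches to it from the service stack:

```python
# In the load balancer stack:
# vpc: ec2.Vpc
# listener: elbv2.ApplicationListener

target_group = elbv2.ApplicationTargetGroup(self, "TG",
    vpc=vpc,
    port=80,
    protocol=elbv2.ApplicationProtocol.HTTP,
    target_type=elbv2.TargetType.IP
)
listener.add_target_groups("TG", target_groups=[target_group])

# In the service stack:
# service: ecs.FargateService

service.attach_to_application_target_group(target_group)
```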

### Include a classic load balancer

`Services` can also be directly attached to a classic load balancer as targets:

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition
# vpc: ec2.Vpc

service = ecs.Ec2Service(self, "Service", cluster=cluster, task_definition=task_definition)

lb = elb.LoadBalancer(self, "LB", vpc=vpc)
lb.add_listener(external_port=80)
lb.add_target(service)
```

Similarly, if you want to have more control over load balancer targeting:

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition
# vpc: ec2.Vpc

service = ecs.Ec2Service(self, "Service", cluster=cluster, task_definition=task_definition)

lb = elb.LoadBalancer(self, "LB", vpc=vpc)
lb.add_listener(external_port=80)
lb.add_target(service.load_balancer_target(
    container_name="MyContainer",
    container_port=80
))
```

Two higher-level constructs that include a load balancer for you can be found in the aws-ecs-patterns module:

* `LoadBalancedFargateService`
* `LoadBalancedEc2Service`

## Task Auto-Scaling

You can configure the task count of a service to match demand. Task auto-scaling is
configured by calling `autoScaleTaskCount()`:

```python
# target: elbv2.ApplicationTargetGroup
# service: ecs.BaseService

scaling = service.auto_scale_task_count(max_capacity=10)
scaling.scale_on_cpu_utilization("CpuScaling",
    target_utilization_percent=50
)

scaling.scale_on_request_count("RequestScaling",
    requests_per_target=10000,
    target_group=target
)
```

Task auto-scaling is powered by *Application Auto-Scaling*.
See that section for details.
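
The returned `ScalableTaskCount` also supports scheduled scaling. A minimal sketch, assuming `appscaling` refers to the `aws-applicationautoscaling` module and using an example schedule:

```python
# scaling: ecs.ScalableTaskCount


# Scale the service down to a single task every day at 8pm UTC
scaling.scale_on_schedule("ScaleDownAtNight",
    schedule=appscaling.Schedule.cron(hour="20", minute="0"),
    min_capacity=1
)
```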

## Integration with CloudWatch Events

To start an Amazon ECS task on an Amazon EC2-backed Cluster, instantiate an
`@aws-cdk/aws-events-targets.EcsTask` instead of an `Ec2Service`:

```python
# cluster: ecs.Cluster

# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_asset(os.path.join(os.path.dirname(__file__), "..", "eventhandler-image")),
    memory_limit_mib=256,
    logging=ecs.AwsLogDriver(stream_prefix="EventDemo", mode=ecs.AwsLogDriverMode.NON_BLOCKING)
)

# A Rule that describes the event trigger (in this case a scheduled run)
rule = events.Rule(self, "Rule",
    schedule=events.Schedule.expression("rate(1 minute)")
)

# Pass an environment variable to the container 'TheContainer' in the task
rule.add_target(targets.EcsTask(
    cluster=cluster,
    task_definition=task_definition,
    task_count=1,
    container_overrides=[targets.ContainerOverride(
        container_name="TheContainer",
        environment=[targets.TaskEnvironmentVariable(
            name="I_WAS_TRIGGERED",
            value="From CloudWatch Events"
        )]
    )]
))
```

## Log Drivers

Currently Supported Log Drivers:

* awslogs
* fluentd
* gelf
* journald
* json-file
* splunk
* syslog
* awsfirelens
* Generic

### awslogs Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mib=256,
    logging=ecs.LogDrivers.aws_logs(stream_prefix="EventDemo")
)
```

### fluentd Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mib=256,
    logging=ecs.LogDrivers.fluentd()
)
```

### gelf Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mib=256,
    logging=ecs.LogDrivers.gelf(address="my-gelf-address")
)
```

### journald Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mib=256,
    logging=ecs.LogDrivers.journald()
)
```

### json-file Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mib=256,
    logging=ecs.LogDrivers.json_file()
)
```

### splunk Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mib=256,
    logging=ecs.LogDrivers.splunk(
        token=SecretValue.secrets_manager("my-splunk-token"),
        url="my-splunk-url"
    )
)
```

### syslog Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mib=256,
    logging=ecs.LogDrivers.syslog()
)
```

### firelens Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mib=256,
    logging=ecs.LogDrivers.firelens(
        options={
            "Name": "firehose",
            "region": "us-west-2",
            "delivery_stream": "my-stream"
        }
    )
)
```

To pass secrets to the log configuration, use the `secretOptions` property of the log configuration. The task execution role is automatically granted read permissions on the secrets/parameters.

```python
# secret: secretsmanager.Secret
# parameter: ssm.StringParameter


task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mib=256,
    logging=ecs.LogDrivers.firelens(
        options={},
        secret_options={ # Retrieved from AWS Secrets Manager or AWS Systems Manager Parameter Store
            "apikey": ecs.Secret.from_secrets_manager(secret),
            "host": ecs.Secret.from_ssm_parameter(parameter)}
    )
)
```

### Generic Log Driver

A generic log driver object exists to provide a lower-level abstraction of the log driver configuration.

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mib=256,
    logging=ecs.GenericLogDriver(
        log_driver="fluentd",
        options={
            "tag": "example-tag"
        }
    )
)
```

## CloudMap Service Discovery

To register your ECS service with a CloudMap Service Registry, you may add the
`cloudMapOptions` property to your service:

```python
# task_definition: ecs.TaskDefinition
# cluster: ecs.Cluster


service = ecs.Ec2Service(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    cloud_map_options=ecs.CloudMapOptions(
        # Create A records - useful for AWSVPC network mode.
        dns_record_type=cloudmap.DnsRecordType.A
    )
)
```

With `bridge` or `host` network modes, only `SRV` DNS record types are supported.
By default, `SRV` DNS record types will target the default container and default
port. However, you may target a different container and port on the same ECS task:

```python
# task_definition: ecs.TaskDefinition
# cluster: ecs.Cluster


# Add a container to the task definition
specific_container = task_definition.add_container("Container",
    image=ecs.ContainerImage.from_registry("/aws/aws-example-app"),
    memory_limit_mib=2048
)

# Add a port mapping
specific_container.add_port_mappings(
    ecs.PortMapping(
        container_port=7600,
        protocol=ecs.Protocol.TCP
    )
)

ecs.Ec2Service(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    cloud_map_options=ecs.CloudMapOptions(
        # Create SRV records - useful for bridge networking
        dns_record_type=cloudmap.DnsRecordType.SRV,
        # Target TCP port 7600 on `specific_container`
        container=specific_container,
        container_port=7600
    )
)
```

### Associate With a Specific CloudMap Service

You may associate an ECS service with a specific CloudMap service. To do
this, use the service's `associateCloudMapService` method:

```python
# cloud_map_service: cloudmap.Service
# ecs_service: ecs.FargateService


ecs_service.associate_cloud_map_service(
    service=cloud_map_service
)
```

## Capacity Providers

There are two major families of Capacity Providers: [AWS
Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-capacity-providers.html)
(including Fargate Spot) and EC2 [Auto Scaling
Group](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/asg-capacity-providers.html)
Capacity Providers. Both are supported.

### Fargate Capacity Providers

To enable Fargate capacity providers, either set
`enableFargateCapacityProviders` to `true` when creating your cluster, or
invoke the `enableFargateCapacityProviders()` method after creating your
cluster. This will add both `FARGATE` and `FARGATE_SPOT` as available capacity
providers on your cluster.

```python
# vpc: ec2.Vpc


cluster = ecs.Cluster(self, "FargateCPCluster",
    vpc=vpc,
    enable_fargate_capacity_providers=True
)

task_definition = ecs.FargateTaskDefinition(self, "TaskDef")

task_definition.add_container("web",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample")
)

ecs.FargateService(self, "FargateService",
    cluster=cluster,
    task_definition=task_definition,
    capacity_provider_strategies=[ecs.CapacityProviderStrategy(
        capacity_provider="FARGATE_SPOT",
        weight=2
    ), ecs.CapacityProviderStrategy(
        capacity_provider="FARGATE",
        weight=1
    )
    ]
)
```
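
Alternatively, as noted above, the capacity providers can be enabled on an existing cluster:

```python
# cluster: ecs.Cluster


# Adds both FARGATE and FARGATE_SPOT capacity providers to the cluster
cluster.enable_fargate_capacity_providers()
```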

### Auto Scaling Group Capacity Providers

To add an Auto Scaling Group Capacity Provider, first create an EC2 Auto Scaling
Group. Then, create an `AsgCapacityProvider` and pass the Auto Scaling Group to
it in the constructor. Then add the Capacity Provider to the cluster. Finally,
you can refer to the Provider by its name in your service's or task's Capacity
Provider strategy.

By default, an Auto Scaling Group Capacity Provider will manage the Auto Scaling
Group's size for you. It will also enable managed termination protection, in
order to prevent EC2 Auto Scaling from terminating EC2 instances that have tasks
running on them. If you want to disable this behavior, set both
`enableManagedScaling` and `enableManagedTerminationProtection` to `false` (see the sketch after the example below).

```python
# vpc: ec2.Vpc


cluster = ecs.Cluster(self, "Cluster",
    vpc=vpc
)

auto_scaling_group = autoscaling.AutoScalingGroup(self, "ASG",
    vpc=vpc,
    instance_type=ec2.InstanceType("t2.micro"),
    machine_image=ecs.EcsOptimizedImage.amazon_linux2(),
    min_capacity=0,
    max_capacity=100
)

capacity_provider = ecs.AsgCapacityProvider(self, "AsgCapacityProvider",
    auto_scaling_group=auto_scaling_group
)
cluster.add_asg_capacity_provider(capacity_provider)

task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")

task_definition.add_container("web",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    memory_reservation_mib=256
)

ecs.Ec2Service(self, "EC2Service",
    cluster=cluster,
    task_definition=task_definition,
    capacity_provider_strategies=[ecs.CapacityProviderStrategy(
        capacity_provider=capacity_provider.capacity_provider_name,
        weight=1
    )
    ]
)
```
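
A minimal sketch of opting out of managed scaling and managed termination protection:

```python
# auto_scaling_group: autoscaling.AutoScalingGroup


capacity_provider = ecs.AsgCapacityProvider(self, "UnmanagedAsgCapacityProvider",
    auto_scaling_group=auto_scaling_group,
    enable_managed_scaling=False,
    enable_managed_termination_protection=False
)
```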

## Elastic Inference Accelerators

Currently, this feature is only supported for services with the EC2 launch type.

To add Elastic Inference accelerators to your EC2 instance, first add the
`inferenceAccelerators` field to the `Ec2TaskDefinition` and set the `deviceName`
and `deviceType` properties.

```python
inference_accelerators = [{
    "device_name": "device1",
    "device_type": "eia2.medium"
}]

task_definition = ecs.Ec2TaskDefinition(self, "Ec2TaskDef",
    inference_accelerators=inference_accelerators
)
```

To enable using the inference accelerators in the containers, add the `inferenceAcceleratorResources`
field and set it to a list of device names used for the inference accelerators. Each value in the
list should match a `DeviceName` for an `InferenceAccelerator` specified in the task definition.

```python
# task_definition: ecs.TaskDefinition

inference_accelerator_resources = ["device1"]

task_definition.add_container("cont",
    image=ecs.ContainerImage.from_registry("test"),
    memory_limit_mib=1024,
    inference_accelerator_resources=inference_accelerator_resources
)
```

## ECS Exec command

Please note that ECS Exec leverages AWS Systems Manager (SSM), so as a prerequisite for the exec command
to work, you need to have the SSM plugin for the AWS CLI installed locally. For more information, see
[Install Session Manager plugin for AWS CLI](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html).

To enable the ECS Exec feature for your containers, set the boolean flag `enableExecuteCommand` to `true` in
your `Ec2Service` or `FargateService`.

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition


service = ecs.Ec2Service(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    enable_execute_command=True
)
```

### Enabling logging

You can enable sending logs of your execute session commands to a CloudWatch log group or S3 bucket by configuring
the `executeCommandConfiguration` property for your cluster. The default configuration will send the
logs to CloudWatch Logs using the `awslogs` log driver that is configured in your task definition. Please note that
when using your own `logConfiguration`, the specified log group or S3 bucket must already exist.

To encrypt data using your own KMS customer master key (CMK), you must create a CMK and provide the key in the `kmsKey` field
of the `executeCommandConfiguration`. To use this key for encrypting CloudWatch log data or the S3 bucket, make sure to associate the key
with these resources on creation.

```python
# vpc: ec2.Vpc

kms_key = kms.Key(self, "KmsKey")

# Pass the KMS key in the `encryptionKey` field to associate the key to the log group
log_group = logs.LogGroup(self, "LogGroup",
    encryption_key=kms_key
)

# Pass the KMS key in the `encryptionKey` field to associate the key to the S3 bucket
exec_bucket = s3.Bucket(self, "EcsExecBucket",
    encryption_key=kms_key
)

cluster = ecs.Cluster(self, "Cluster",
    vpc=vpc,
    execute_command_configuration=ecs.ExecuteCommandConfiguration(
        kms_key=kms_key,
        log_configuration=ecs.ExecuteCommandLogConfiguration(
            cloud_watch_log_group=log_group,
            cloud_watch_encryption_enabled=True,
            s3_bucket=exec_bucket,
            s3_encryption_enabled=True,
            s3_key_prefix="exec-command-output"
        ),
        logging=ecs.ExecuteCommandLogging.OVERRIDE
    )
)
```



            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/aws/aws-cdk",
    "name": "aws-cdk.aws-ecs",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "~=3.7",
    "maintainer_email": "",
    "keywords": "",
    "author": "Amazon Web Services",
    "author_email": "",
    "download_url": "https://files.pythonhosted.org/packages/84/f8/57d58ec335de0804b4ed662de89c215f575d7d6671f8fa16762ce0ce65c4/aws-cdk.aws-ecs-1.203.0.tar.gz",
    "platform": null,
    "description": "# Amazon ECS Construct Library\n\n<!--BEGIN STABILITY BANNER-->---\n\n\n![cfn-resources: Stable](https://img.shields.io/badge/cfn--resources-stable-success.svg?style=for-the-badge)\n\n![cdk-constructs: Stable](https://img.shields.io/badge/cdk--constructs-stable-success.svg?style=for-the-badge)\n\n---\n<!--END STABILITY BANNER-->\n\nThis package contains constructs for working with **Amazon Elastic Container\nService** (Amazon ECS).\n\nAmazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service.\n\nFor further information on Amazon ECS,\nsee the [Amazon ECS documentation](https://docs.aws.amazon.com/ecs)\n\nThe following example creates an Amazon ECS cluster, adds capacity to it, and\nruns a service on it:\n\n```python\n# vpc: ec2.Vpc\n\n\n# Create an ECS cluster\ncluster = ecs.Cluster(self, \"Cluster\",\n    vpc=vpc\n)\n\n# Add capacity to it\ncluster.add_capacity(\"DefaultAutoScalingGroupCapacity\",\n    instance_type=ec2.InstanceType(\"t2.xlarge\"),\n    desired_capacity=3\n)\n\ntask_definition = ecs.Ec2TaskDefinition(self, \"TaskDef\")\n\ntask_definition.add_container(\"DefaultContainer\",\n    image=ecs.ContainerImage.from_registry(\"amazon/amazon-ecs-sample\"),\n    memory_limit_mi_b=512\n)\n\n# Instantiate an Amazon ECS Service\necs_service = ecs.Ec2Service(self, \"Service\",\n    cluster=cluster,\n    task_definition=task_definition\n)\n```\n\nFor a set of constructs defining common ECS architectural patterns, see the `@aws-cdk/aws-ecs-patterns` package.\n\n## Launch Types: AWS Fargate vs Amazon EC2\n\nThere are two sets of constructs in this library; one to run tasks on Amazon EC2 and\none to run tasks on AWS Fargate.\n\n* Use the `Ec2TaskDefinition` and `Ec2Service` constructs to run tasks on Amazon EC2 instances running in your account.\n* Use the `FargateTaskDefinition` and `FargateService` constructs to run tasks on\n  instances that are managed for you by AWS.\n* Use the `ExternalTaskDefinition` and `ExternalService` constructs to run AWS ECS Anywhere tasks on self-managed infrastructure.\n\nHere are the main differences:\n\n* **Amazon EC2**: instances are under your control. Complete control of task to host\n  allocation. Required to specify at least a memory reservation or limit for\n  every container. Can use Host, Bridge and AwsVpc networking modes. Can attach\n  Classic Load Balancer. Can share volumes between container and host.\n* **AWS Fargate**: tasks run on AWS-managed instances, AWS manages task to host\n  allocation for you. Requires specification of memory and cpu sizes at the\n  taskdefinition level. Only supports AwsVpc networking modes and\n  Application/Network Load Balancers. Only the AWS log driver is supported.\n  Many host features are not supported such as adding kernel capabilities\n  and mounting host devices/volumes inside the container.\n* **AWS ECSAnywhere**: tasks are run and managed by AWS ECS Anywhere on infrastructure owned by the customer. Only Bridge networking mode is supported. 
Does not support autoscaling, load balancing, cloudmap or attachment of volumes.\n\nFor more information on Amazon EC2 vs AWS Fargate, networking and ECS Anywhere see the AWS Documentation:\n[AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html),\n[Task Networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html),\n[ECS Anywhere](https://aws.amazon.com/ecs/anywhere/)\n\n## Clusters\n\nA `Cluster` defines the infrastructure to run your\ntasks on. You can run many tasks on a single cluster.\n\nThe following code creates a cluster that can run AWS Fargate tasks:\n\n```python\n# vpc: ec2.Vpc\n\n\ncluster = ecs.Cluster(self, \"Cluster\",\n    vpc=vpc\n)\n```\n\nThe following code imports an existing cluster using the ARN which can be used to\nimport an Amazon ECS service either EC2 or Fargate.\n\n```python\ncluster_arn = \"arn:aws:ecs:us-east-1:012345678910:cluster/clusterName\"\n\ncluster = ecs.Cluster.from_cluster_arn(self, \"Cluster\", cluster_arn)\n```\n\nTo use tasks with Amazon EC2 launch-type, you have to add capacity to\nthe cluster in order for tasks to be scheduled on your instances.  Typically,\nyou add an AutoScalingGroup with instances running the latest\nAmazon ECS-optimized AMI to the cluster. There is a method to build and add such an\nAutoScalingGroup automatically, or you can supply a customized AutoScalingGroup\nthat you construct yourself. It's possible to add multiple AutoScalingGroups\nwith various instance types.\n\nThe following example creates an Amazon ECS cluster and adds capacity to it:\n\n```python\n# vpc: ec2.Vpc\n\n\ncluster = ecs.Cluster(self, \"Cluster\",\n    vpc=vpc\n)\n\n# Either add default capacity\ncluster.add_capacity(\"DefaultAutoScalingGroupCapacity\",\n    instance_type=ec2.InstanceType(\"t2.xlarge\"),\n    desired_capacity=3\n)\n\n# Or add customized capacity. Be sure to start the Amazon ECS-optimized AMI.\nauto_scaling_group = autoscaling.AutoScalingGroup(self, \"ASG\",\n    vpc=vpc,\n    instance_type=ec2.InstanceType(\"t2.xlarge\"),\n    machine_image=ecs.EcsOptimizedImage.amazon_linux(),\n    # Or use Amazon ECS-Optimized Amazon Linux 2 AMI\n    # machineImage: EcsOptimizedImage.amazonLinux2(),\n    desired_capacity=3\n)\n\ncapacity_provider = ecs.AsgCapacityProvider(self, \"AsgCapacityProvider\",\n    auto_scaling_group=auto_scaling_group\n)\ncluster.add_asg_capacity_provider(capacity_provider)\n```\n\nIf you omit the property `vpc`, the construct will create a new VPC with two AZs.\n\nBy default, all machine images will auto-update to the latest version\non each deployment, causing a replacement of the instances in your AutoScalingGroup\nif the AMI has been updated since the last deployment.\n\nIf task draining is enabled, ECS will transparently reschedule tasks on to the new\ninstances before terminating your old instances. If you have disabled task draining,\nthe tasks will be terminated along with the instance. 
To prevent that, you\ncan pick a non-updating AMI by passing `cacheInContext: true`, but be sure\nto periodically update to the latest AMI manually by using the [CDK CLI\ncontext management commands](https://docs.aws.amazon.com/cdk/latest/guide/context.html):\n\n```python\n# vpc: ec2.Vpc\n\nauto_scaling_group = autoscaling.AutoScalingGroup(self, \"ASG\",\n    machine_image=ecs.EcsOptimizedImage.amazon_linux(cached_in_context=True),\n    vpc=vpc,\n    instance_type=ec2.InstanceType(\"t2.micro\")\n)\n```\n\n### Bottlerocket\n\n[Bottlerocket](https://aws.amazon.com/bottlerocket/) is a Linux-based open source operating system that is\npurpose-built by AWS for running containers. You can launch Amazon ECS container instances with the Bottlerocket AMI.\n\nThe following example will create a capacity with self-managed Amazon EC2 capacity of 2 `c5.large` Linux instances running with `Bottlerocket` AMI.\n\nThe following example adds Bottlerocket capacity to the cluster:\n\n```python\n# cluster: ecs.Cluster\n\n\ncluster.add_capacity(\"bottlerocket-asg\",\n    min_capacity=2,\n    instance_type=ec2.InstanceType(\"c5.large\"),\n    machine_image=ecs.BottleRocketImage()\n)\n```\n\n### ARM64 (Graviton) Instances\n\nTo launch instances with ARM64 hardware, you can use the Amazon ECS-optimized\nAmazon Linux 2 (arm64) AMI. Based on Amazon Linux 2, this AMI is recommended\nfor use when launching your EC2 instances that are powered by Arm-based AWS\nGraviton Processors.\n\n```python\n# cluster: ecs.Cluster\n\n\ncluster.add_capacity(\"graviton-cluster\",\n    min_capacity=2,\n    instance_type=ec2.InstanceType(\"c6g.large\"),\n    machine_image=ecs.EcsOptimizedImage.amazon_linux2(ecs.AmiHardwareType.ARM)\n)\n```\n\nBottlerocket is also supported:\n\n```python\n# cluster: ecs.Cluster\n\n\ncluster.add_capacity(\"graviton-cluster\",\n    min_capacity=2,\n    instance_type=ec2.InstanceType(\"c6g.large\"),\n    machine_image_type=ecs.MachineImageType.BOTTLEROCKET\n)\n```\n\n### Spot Instances\n\nTo add spot instances into the cluster, you must specify the `spotPrice` in the `ecs.AddCapacityOptions` and optionally enable the `spotInstanceDraining` property.\n\n```python\n# cluster: ecs.Cluster\n\n\n# Add an AutoScalingGroup with spot instances to the existing cluster\ncluster.add_capacity(\"AsgSpot\",\n    max_capacity=2,\n    min_capacity=2,\n    desired_capacity=2,\n    instance_type=ec2.InstanceType(\"c5.xlarge\"),\n    spot_price=\"0.0735\",\n    # Enable the Automated Spot Draining support for Amazon ECS\n    spot_instance_draining=True\n)\n```\n\n### SNS Topic Encryption\n\nWhen the `ecs.AddCapacityOptions` that you provide has a non-zero `taskDrainTime` (the default) then an SNS topic and Lambda are created to ensure that the\ncluster's instances have been properly drained of tasks before terminating. The SNS Topic is sent the instance-terminating lifecycle event from the AutoScalingGroup,\nand the Lambda acts on that event. 
If you wish to engage [server-side encryption](https://docs.aws.amazon.com/sns/latest/dg/sns-data-encryption.html) for this SNS Topic\nthen you may do so by providing a KMS key for the `topicEncryptionKey` property of `ecs.AddCapacityOptions`.\n\n```python\n# Given\n# cluster: ecs.Cluster\n# key: kms.Key\n\n# Then, use that key to encrypt the lifecycle-event SNS Topic.\ncluster.add_capacity(\"ASGEncryptedSNS\",\n    instance_type=ec2.InstanceType(\"t2.xlarge\"),\n    desired_capacity=3,\n    topic_encryption_key=key\n)\n```\n\n## Task definitions\n\nA task definition describes what a single copy of a **task** should look like.\nA task definition has one or more containers; typically, it has one\nmain container (the *default container* is the first one that's added\nto the task definition, and it is marked *essential*) and optionally\nsome supporting containers which are used to support the main container,\ndoings things like upload logs or metrics to monitoring services.\n\nTo run a task or service with Amazon EC2 launch type, use the `Ec2TaskDefinition`. For AWS Fargate tasks/services, use the\n`FargateTaskDefinition`. For AWS ECS Anywhere use the `ExternalTaskDefinition`. These classes\nprovide simplified APIs that only contain properties relevant for each specific launch type.\n\nFor a `FargateTaskDefinition`, specify the task size (`memoryLimitMiB` and `cpu`):\n\n```python\nfargate_task_definition = ecs.FargateTaskDefinition(self, \"TaskDef\",\n    memory_limit_mi_b=512,\n    cpu=256\n)\n```\n\nOn Fargate Platform Version 1.4.0 or later, you may specify up to 200GiB of\n[ephemeral storage](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-task-storage.html#fargate-task-storage-pv14):\n\n```python\nfargate_task_definition = ecs.FargateTaskDefinition(self, \"TaskDef\",\n    memory_limit_mi_b=512,\n    cpu=256,\n    ephemeral_storage_gi_b=100\n)\n```\n\nTo add containers to a task definition, call `addContainer()`:\n\n```python\nfargate_task_definition = ecs.FargateTaskDefinition(self, \"TaskDef\",\n    memory_limit_mi_b=512,\n    cpu=256\n)\ncontainer = fargate_task_definition.add_container(\"WebContainer\",\n    # Use an image from DockerHub\n    image=ecs.ContainerImage.from_registry(\"amazon/amazon-ecs-sample\")\n)\n```\n\nFor a `Ec2TaskDefinition`:\n\n```python\nec2_task_definition = ecs.Ec2TaskDefinition(self, \"TaskDef\",\n    network_mode=ecs.NetworkMode.BRIDGE\n)\n\ncontainer = ec2_task_definition.add_container(\"WebContainer\",\n    # Use an image from DockerHub\n    image=ecs.ContainerImage.from_registry(\"amazon/amazon-ecs-sample\"),\n    memory_limit_mi_b=1024\n)\n```\n\nFor an `ExternalTaskDefinition`:\n\n```python\nexternal_task_definition = ecs.ExternalTaskDefinition(self, \"TaskDef\")\n\ncontainer = external_task_definition.add_container(\"WebContainer\",\n    # Use an image from DockerHub\n    image=ecs.ContainerImage.from_registry(\"amazon/amazon-ecs-sample\"),\n    memory_limit_mi_b=1024\n)\n```\n\nYou can specify container properties when you add them to the task definition, or with various methods, e.g.:\n\nTo add a port mapping when adding a container to the task definition, specify the `portMappings` option:\n\n```python\n# task_definition: ecs.TaskDefinition\n\n\ntask_definition.add_container(\"WebContainer\",\n    image=ecs.ContainerImage.from_registry(\"amazon/amazon-ecs-sample\"),\n    memory_limit_mi_b=1024,\n    port_mappings=[ecs.PortMapping(container_port=3000)]\n)\n```\n\nTo add port mappings directly to a container definition, call 
`addPortMappings()`:\n\n```python\n# container: ecs.ContainerDefinition\n\n\ncontainer.add_port_mappings(\n    container_port=3000\n)\n```\n\nTo add data volumes to a task definition, call `addVolume()`:\n\n```python\nfargate_task_definition = ecs.FargateTaskDefinition(self, \"TaskDef\",\n    memory_limit_mi_b=512,\n    cpu=256\n)\nvolume = {\n    # Use an Elastic FileSystem\n    \"name\": \"mydatavolume\",\n    \"efs_volume_configuration\": {\n        \"file_system_id\": \"EFS\"\n    }\n}\n\ncontainer = fargate_task_definition.add_volume(volume)\n```\n\n> Note: ECS Anywhere doesn't support volume attachments in the task definition.\n\nTo use a TaskDefinition that can be used with either Amazon EC2 or\nAWS Fargate launch types, use the `TaskDefinition` construct.\n\nWhen creating a task definition you have to specify what kind of\ntasks you intend to run: Amazon EC2, AWS Fargate, or both.\nThe following example uses both:\n\n```python\ntask_definition = ecs.TaskDefinition(self, \"TaskDef\",\n    memory_mi_b=\"512\",\n    cpu=\"256\",\n    network_mode=ecs.NetworkMode.AWS_VPC,\n    compatibility=ecs.Compatibility.EC2_AND_FARGATE\n)\n```\n\n### Images\n\nImages supply the software that runs inside the container. Images can be\nobtained from either DockerHub or from ECR repositories, built directly from a local Dockerfile, or use an existing tarball.\n\n* `ecs.ContainerImage.fromRegistry(imageName)`: use a public image.\n* `ecs.ContainerImage.fromRegistry(imageName, { credentials: mySecret })`: use a private image that requires credentials.\n* `ecs.ContainerImage.fromEcrRepository(repo, tagOrDigest)`: use the given ECR repository as the image\n  to start. If no tag or digest is provided, \"latest\" is assumed.\n* `ecs.ContainerImage.fromAsset('./image')`: build and upload an\n  image directly from a `Dockerfile` in your source directory.\n* `ecs.ContainerImage.fromDockerImageAsset(asset)`: uses an existing\n  `@aws-cdk/aws-ecr-assets.DockerImageAsset` as a container image.\n* `ecs.ContainerImage.fromTarball(file)`: use an existing tarball.\n* `new ecs.TagParameterContainerImage(repository)`: use the given ECR repository as the image\n  but a CloudFormation parameter as the tag.\n\n### Environment variables\n\nTo pass environment variables to the container, you can use the `environment`, `environmentFiles`, and `secrets` props.\n\n```python\n# secret: secretsmanager.Secret\n# db_secret: secretsmanager.Secret\n# parameter: ssm.StringParameter\n# task_definition: ecs.TaskDefinition\n# s3_bucket: s3.Bucket\n\n\nnew_container = task_definition.add_container(\"container\",\n    image=ecs.ContainerImage.from_registry(\"amazon/amazon-ecs-sample\"),\n    memory_limit_mi_b=1024,\n    environment={ # clear text, not for sensitive data\n        \"STAGE\": \"prod\"},\n    environment_files=[ # list of environment files hosted either on local disk or S3\n        ecs.EnvironmentFile.from_asset(\"./demo-env-file.env\"),\n        ecs.EnvironmentFile.from_bucket(s3_bucket, \"assets/demo-env-file.env\")],\n    secrets={ # Retrieved from AWS Secrets Manager or AWS Systems Manager Parameter Store at container start-up.\n        \"SECRET\": ecs.Secret.from_secrets_manager(secret),\n        \"DB_PASSWORD\": ecs.Secret.from_secrets_manager(db_secret, \"password\"),  # Reference a specific JSON field, (requires platform version 1.4.0 or later for Fargate tasks)\n        \"API_KEY\": ecs.Secret.from_secrets_manager_version(secret, ecs.SecretVersionInfo(version_id=\"12345\"), \"apiKey\"),  # Reference a specific 
version of the secret by its version id or version stage (requires platform version 1.4.0 or later for Fargate tasks)\n        \"PARAMETER\": ecs.Secret.from_ssm_parameter(parameter)}\n)\nnew_container.add_environment(\"QUEUE_NAME\", \"MyQueue\")\n```\n\nThe task execution role is automatically granted read permissions on the secrets/parameters. Support for environment\nfiles is restricted to the EC2 launch type for files hosted on S3. Further details provided in the AWS documentation\nabout [specifying environment variables](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html).\n\n### System controls\n\nTo set system controls (kernel parameters) on the container, use the `systemControls` prop:\n\n```python\n# task_definition: ecs.TaskDefinition\n\n\ntask_definition.add_container(\"container\",\n    image=ecs.ContainerImage.from_registry(\"amazon/amazon-ecs-sample\"),\n    memory_limit_mi_b=1024,\n    system_controls=[ecs.SystemControl(\n        namespace=\"net\",\n        value=\"ipv4.tcp_tw_recycle\"\n    )\n    ]\n)\n```\n\n### Using Windows containers on Fargate\n\nAWS Fargate supports Amazon ECS Windows containers. For more details, please see this [blog post](https://aws.amazon.com/tw/blogs/containers/running-windows-containers-with-amazon-ecs-on-aws-fargate/)\n\n```python\n# Create a Task Definition for the Windows container to start\ntask_definition = ecs.FargateTaskDefinition(self, \"TaskDef\",\n    runtime_platform=ecs.RuntimePlatform(\n        operating_system_family=ecs.OperatingSystemFamily.WINDOWS_SERVER_2019_CORE,\n        cpu_architecture=ecs.CpuArchitecture.X86_64\n    ),\n    cpu=1024,\n    memory_limit_mi_b=2048\n)\n\ntask_definition.add_container(\"windowsservercore\",\n    logging=ecs.LogDriver.aws_logs(stream_prefix=\"win-iis-on-fargate\"),\n    port_mappings=[ecs.PortMapping(container_port=80)],\n    image=ecs.ContainerImage.from_registry(\"mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019\")\n)\n```\n\n### Using Graviton2 with Fargate\n\nAWS Graviton2 supports AWS Fargate. 
### Using Graviton2 with Fargate

AWS Fargate supports Graviton2-based (ARM64) compute. For more details, please see this [blog post](https://aws.amazon.com/blogs/aws/announcing-aws-graviton2-support-for-aws-fargate-get-up-to-40-better-price-performance-for-your-serverless-containers/).

```python
# Create a Task Definition for running the container on the Graviton runtime.
task_definition = ecs.FargateTaskDefinition(self, "TaskDef",
    runtime_platform=ecs.RuntimePlatform(
        operating_system_family=ecs.OperatingSystemFamily.LINUX,
        cpu_architecture=ecs.CpuArchitecture.ARM64
    ),
    cpu=1024,
    memory_limit_mi_b=2048
)

task_definition.add_container("webarm64",
    logging=ecs.LogDriver.aws_logs(stream_prefix="graviton2-on-fargate"),
    port_mappings=[ecs.PortMapping(container_port=80)],
    image=ecs.ContainerImage.from_registry("public.ecr.aws/nginx/nginx:latest-arm64v8")
)
```

## Service

A `Service` instantiates a `TaskDefinition` on a `Cluster` a given number of times, optionally associating them with a load balancer. If a task fails, Amazon ECS automatically restarts the task.

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition


service = ecs.FargateService(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    desired_count=5
)
```

An ECS Anywhere service definition looks like this:

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition


service = ecs.ExternalService(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    desired_count=5
)
```

By default, a `Service` creates a new security group. To use specific security groups instead, set the `securityGroups` property, as shown in the sketch below.
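A minimal sketch of supplying your own security group (the group itself is illustrative):

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition
# vpc: ec2.Vpc


# An existing or purpose-built security group to attach to the service's network interfaces
service_sg = ec2.SecurityGroup(self, "ServiceSG", vpc=vpc)

service = ecs.FargateService(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    security_groups=[service_sg]
)
```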
### Deployment circuit breaker and rollback

Amazon ECS [deployment circuit breaker](https://aws.amazon.com/tw/blogs/containers/announcing-amazon-ecs-deployment-circuit-breaker/) automatically rolls back unhealthy service deployments without the need for manual intervention. Use `circuitBreaker` to enable the deployment circuit breaker and optionally enable `rollback` for automatic rollback. See [Using the deployment circuit breaker](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-ecs.html) for more details.

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition

service = ecs.FargateService(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    circuit_breaker=ecs.DeploymentCircuitBreaker(rollback=True)
)
```

> Note: ECS Anywhere doesn't support deployment circuit breakers and rollback.

### Include an application/network load balancer

`Services` are load balancing targets and can be added to a target group, which will be attached to an application or network load balancer:

```python
# vpc: ec2.Vpc
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition

service = ecs.FargateService(self, "Service", cluster=cluster, task_definition=task_definition)

lb = elbv2.ApplicationLoadBalancer(self, "LB", vpc=vpc, internet_facing=True)
listener = lb.add_listener("Listener", port=80)
target_group1 = listener.add_targets("ECS1",
    port=80,
    targets=[service]
)
target_group2 = listener.add_targets("ECS2",
    port=80,
    targets=[service.load_balancer_target(
        container_name="MyContainer",
        container_port=8080
    )]
)
```

> Note: ECS Anywhere doesn't support application/network load balancers.

Note that in the example above, the default `service` only allows you to register the first essential container, or the first mapped port on the container, as a target in a new target group. To have more control over which container and port to register as targets, use `service.loadBalancerTarget()` to return a load balancing target for a specific container and port.

Alternatively, you can also create all of the load balancer targets to be registered in this service, add them to target groups, and attach the target groups to listeners accordingly.

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition
# vpc: ec2.Vpc

service = ecs.FargateService(self, "Service", cluster=cluster, task_definition=task_definition)

lb = elbv2.ApplicationLoadBalancer(self, "LB", vpc=vpc, internet_facing=True)
listener = lb.add_listener("Listener", port=80)
service.register_load_balancer_targets(
    ecs.EcsTarget(
        container_name="web",
        container_port=80,
        new_target_group_id="ECS",
        listener=ecs.ListenerConfig.application_listener(listener,
            protocol=elbv2.ApplicationProtocol.HTTPS
        )
    )
)
```

### Using a Load Balancer from a different Stack

If you want to put your Load Balancer and the Service it load balances in different stacks, you may not be able to use the convenience methods `loadBalancer.addListener()` and `listener.addTargets()`.

The reason is that these methods will create resources in the same Stack as the object they're called on, which may lead to cyclic references between stacks. Instead, you will have to create an `ApplicationListener` in the service stack, or an empty `TargetGroup` in the load balancer stack that you attach your service to.

See the [ecs/cross-stack-load-balancer example](https://github.com/aws-samples/aws-cdk-examples/tree/master/typescript/ecs/cross-stack-load-balancer/) for the alternatives.
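The following sketch outlines the second alternative, with both halves shown side by side for brevity; in practice the target group lives in the load balancer stack and is passed to the service stack, and all names are illustrative:

```python
# In the load balancer stack: an empty target group attached to a listener
# vpc: ec2.Vpc
# listener: elbv2.ApplicationListener

target_group = elbv2.ApplicationTargetGroup(self, "TG",
    vpc=vpc,
    port=80,
    # Fargate tasks register by IP address
    target_type=elbv2.TargetType.IP
)
listener.add_target_groups("TG", target_groups=[target_group])

# In the service stack: register the service into the shared target group
# service: ecs.FargateService

target_group.add_target(service)
```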
### Include a classic load balancer

`Services` can also be directly attached to a classic load balancer as targets:

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition
# vpc: ec2.Vpc

service = ecs.Ec2Service(self, "Service", cluster=cluster, task_definition=task_definition)

lb = elb.LoadBalancer(self, "LB", vpc=vpc)
lb.add_listener(external_port=80)
lb.add_target(service)
```

Similarly, if you want more control over load balancer targeting:

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition
# vpc: ec2.Vpc

service = ecs.Ec2Service(self, "Service", cluster=cluster, task_definition=task_definition)

lb = elb.LoadBalancer(self, "LB", vpc=vpc)
lb.add_listener(external_port=80)
lb.add_target(service.load_balancer_target(
    container_name="MyContainer",
    container_port=80
))
```

Two higher-level constructs that include a load balancer for you are available in the aws-ecs-patterns module:

* `LoadBalancedFargateService`
* `LoadBalancedEc2Service`

## Task Auto-Scaling

You can configure the task count of a service to match demand. Task auto-scaling is configured by calling `autoScaleTaskCount()`:

```python
# target: elbv2.ApplicationTargetGroup
# service: ecs.BaseService

scaling = service.auto_scale_task_count(max_capacity=10)
scaling.scale_on_cpu_utilization("CpuScaling",
    target_utilization_percent=50
)

scaling.scale_on_request_count("RequestScaling",
    requests_per_target=10000,
    target_group=target
)
```

Task auto-scaling is powered by *Application Auto-Scaling*. See that section for details.
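The `ScalableTaskCount` returned by `autoScaleTaskCount()` also supports other policies, such as memory-based target tracking and scheduled scaling. A minimal sketch, assuming the `@aws-cdk/aws-applicationautoscaling` module is imported as `appscaling` (the thresholds and schedule are illustrative):

```python
# service: ecs.BaseService

scaling = service.auto_scale_task_count(min_capacity=1, max_capacity=10)

# Track average memory utilization at 60%
scaling.scale_on_memory_utilization("MemoryScaling",
    target_utilization_percent=60
)

# Cap the service at 2 tasks after 20:00 UTC
scaling.scale_on_schedule("ScaleDownAtNight",
    schedule=appscaling.Schedule.cron(hour="20", minute="0"),
    max_capacity=2
)
```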
## Integration with CloudWatch Events

To start an Amazon ECS task on an Amazon EC2-backed Cluster, instantiate an `@aws-cdk/aws-events-targets.EcsTask` instead of an `Ec2Service`:

```python
# cluster: ecs.Cluster

# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_asset(os.path.join(os.path.dirname(__file__), "..", "eventhandler-image")),
    memory_limit_mi_b=256,
    logging=ecs.AwsLogDriver(stream_prefix="EventDemo", mode=ecs.AwsLogDriverMode.NON_BLOCKING)
)

# A rule that describes the event trigger (in this case a scheduled run)
rule = events.Rule(self, "Rule",
    schedule=events.Schedule.expression("rate(1 minute)")
)

# Pass an environment variable to the container 'TheContainer' in the task
rule.add_target(targets.EcsTask(
    cluster=cluster,
    task_definition=task_definition,
    task_count=1,
    container_overrides=[targets.ContainerOverride(
        container_name="TheContainer",
        environment=[targets.TaskEnvironmentVariable(
            name="I_WAS_TRIGGERED",
            value="From CloudWatch Events"
        )]
    )]
))
```

## Log Drivers

Currently supported log drivers:

* awslogs
* fluentd
* gelf
* journald
* json-file
* splunk
* syslog
* awsfirelens
* Generic

### awslogs Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.aws_logs(stream_prefix="EventDemo")
)
```

### fluentd Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.fluentd()
)
```

### gelf Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.gelf(address="my-gelf-address")
)
```

### journald Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.journald()
)
```

### json-file Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.json_file()
)
```

### splunk Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.splunk(
        token=SecretValue.secrets_manager("my-splunk-token"),
        url="my-splunk-url"
    )
)
```

### syslog Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.syslog()
)
```

### firelens Log Driver

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.firelens(
        options={
            "Name": "firehose",
            "region": "us-west-2",
            "delivery_stream": "my-stream"
        }
    )
)
```
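When the `firelens` driver is used, the CDK adds a default Fluent Bit log-router container to the task definition if one isn't already present. To customize the router yourself, you can add it explicitly with `addFirelensLogRouter()`; a minimal sketch, where the router image and memory reservation are illustrative:

```python
# task_definition: ecs.Ec2TaskDefinition


task_definition.add_firelens_log_router("LogRouter",
    firelens_config=ecs.FirelensConfig(
        type=ecs.FirelensLogRouterType.FLUENTBIT
    ),
    # AWS-maintained Fluent Bit image; any compatible router image works here
    image=ecs.ContainerImage.from_registry("amazon/aws-for-fluent-bit:stable"),
    memory_reservation_mi_b=50
)
```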
To pass secrets to the log configuration, use the `secretOptions` property. The task execution role is automatically granted read permissions on the secrets/parameters.

```python
# secret: secretsmanager.Secret
# parameter: ssm.StringParameter


task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.LogDrivers.firelens(
        options={},
        secret_options={ # Retrieved from AWS Secrets Manager or AWS Systems Manager Parameter Store
            "apikey": ecs.Secret.from_secrets_manager(secret),
            "host": ecs.Secret.from_ssm_parameter(parameter)}
    )
)
```

### Generic Log Driver

A generic log driver object exists to provide a lower-level abstraction of the log driver configuration.

```python
# Create a Task Definition for the container to start
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
    image=ecs.ContainerImage.from_registry("example-image"),
    memory_limit_mi_b=256,
    logging=ecs.GenericLogDriver(
        log_driver="fluentd",
        options={
            "tag": "example-tag"
        }
    )
)
```

## CloudMap Service Discovery

To register your ECS service with a CloudMap Service Registry, you may add the `cloudMapOptions` property to your service:

```python
# task_definition: ecs.TaskDefinition
# cluster: ecs.Cluster


service = ecs.Ec2Service(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    cloud_map_options=ecs.CloudMapOptions(
        # Create A records - useful for AWSVPC network mode.
        dns_record_type=cloudmap.DnsRecordType.A
    )
)
```
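Note that registering a service in CloudMap requires the cluster to have a service discovery namespace; one way to provide it is the cluster's `addDefaultCloudMapNamespace()` method. A minimal sketch (the namespace name is illustrative):

```python
# vpc: ec2.Vpc

cluster = ecs.Cluster(self, "Cluster", vpc=vpc)

# Create a private DNS namespace for service discovery within the VPC
cluster.add_default_cloud_map_namespace(
    name="internal.local"
)
```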
With `bridge` or `host` network modes, only `SRV` DNS record types are supported. By default, `SRV` DNS record types will target the default container and the default port. However, you may target a different container and port on the same ECS task:

```python
# task_definition: ecs.TaskDefinition
# cluster: ecs.Cluster


# Add a container to the task definition
specific_container = task_definition.add_container("Container",
    image=ecs.ContainerImage.from_registry("/aws/aws-example-app"),
    memory_limit_mi_b=2048
)

# Add a port mapping
specific_container.add_port_mappings(
    container_port=7600,
    protocol=ecs.Protocol.TCP
)

ecs.Ec2Service(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    cloud_map_options=ecs.CloudMapOptions(
        # Create SRV records - useful for bridge networking
        dns_record_type=cloudmap.DnsRecordType.SRV,
        # Target TCP port 7600 on `specificContainer`
        container=specific_container,
        container_port=7600
    )
)
```

### Associate With a Specific CloudMap Service

You may associate an ECS service with a specific CloudMap service. To do this, use the service's `associateCloudMapService` method:

```python
# cloud_map_service: cloudmap.Service
# ecs_service: ecs.FargateService


ecs_service.associate_cloud_map_service(
    service=cloud_map_service
)
```

## Capacity Providers

There are two major families of Capacity Providers: [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-capacity-providers.html) (including Fargate Spot) and EC2 [Auto Scaling Group](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/asg-capacity-providers.html) Capacity Providers. Both are supported.

### Fargate Capacity Providers

To enable Fargate capacity providers, either set `enableFargateCapacityProviders` to `true` when creating your cluster, or invoke the `enableFargateCapacityProviders()` method after creating your cluster. This will add both `FARGATE` and `FARGATE_SPOT` as available capacity providers on your cluster.

```python
# vpc: ec2.Vpc


cluster = ecs.Cluster(self, "FargateCPCluster",
    vpc=vpc,
    enable_fargate_capacity_providers=True
)

task_definition = ecs.FargateTaskDefinition(self, "TaskDef")

task_definition.add_container("web",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample")
)

ecs.FargateService(self, "FargateService",
    cluster=cluster,
    task_definition=task_definition,
    capacity_provider_strategies=[ecs.CapacityProviderStrategy(
        capacity_provider="FARGATE_SPOT",
        weight=2
    ), ecs.CapacityProviderStrategy(
        capacity_provider="FARGATE",
        weight=1
    )
    ]
)
```
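Within a strategy, `weight` controls the relative share of tasks each provider receives, and the optional `base` reserves a minimum number of tasks for a provider before the weights apply. A minimal sketch (the numbers and service name are illustrative):

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition

ecs.FargateService(self, "BurstableService",
    cluster=cluster,
    task_definition=task_definition,
    capacity_provider_strategies=[
        # Always keep at least one task on regular Fargate...
        ecs.CapacityProviderStrategy(capacity_provider="FARGATE", base=1, weight=1),
        # ...and place most of the remainder on Fargate Spot
        ecs.CapacityProviderStrategy(capacity_provider="FARGATE_SPOT", weight=4)
    ]
)
```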
### Auto Scaling Group Capacity Providers

To add an Auto Scaling Group Capacity Provider, first create an EC2 Auto Scaling Group. Then, create an `AsgCapacityProvider` and pass the Auto Scaling Group to it in the constructor. Then add the Capacity Provider to the cluster. Finally, you can refer to the Provider by its name in your service's or task's Capacity Provider strategy.

By default, an Auto Scaling Group Capacity Provider will manage the Auto Scaling Group's size for you. It will also enable managed termination protection, in order to prevent EC2 Auto Scaling from terminating EC2 instances that have tasks running on them. If you want to disable this behavior, set both `enableManagedScaling` and `enableManagedTerminationProtection` to `false`.

```python
# vpc: ec2.Vpc


cluster = ecs.Cluster(self, "Cluster",
    vpc=vpc
)

auto_scaling_group = autoscaling.AutoScalingGroup(self, "ASG",
    vpc=vpc,
    instance_type=ec2.InstanceType("t2.micro"),
    machine_image=ecs.EcsOptimizedImage.amazon_linux2(),
    min_capacity=0,
    max_capacity=100
)

capacity_provider = ecs.AsgCapacityProvider(self, "AsgCapacityProvider",
    auto_scaling_group=auto_scaling_group
)
cluster.add_asg_capacity_provider(capacity_provider)

task_definition = ecs.Ec2TaskDefinition(self, "TaskDef")

task_definition.add_container("web",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    memory_reservation_mi_b=256
)

ecs.Ec2Service(self, "EC2Service",
    cluster=cluster,
    task_definition=task_definition,
    capacity_provider_strategies=[ecs.CapacityProviderStrategy(
        capacity_provider=capacity_provider.capacity_provider_name,
        weight=1
    )
    ]
)
```

## Elastic Inference Accelerators

Currently, this feature is only supported for services with the EC2 launch type.

To add elastic inference accelerators to your EC2 instance, first add the `inferenceAccelerators` field to the `Ec2TaskDefinition` and set the `deviceName` and `deviceType` properties.

```python
inference_accelerators = [{
    "device_name": "device1",
    "device_type": "eia2.medium"
}]

task_definition = ecs.Ec2TaskDefinition(self, "Ec2TaskDef",
    inference_accelerators=inference_accelerators
)
```

To enable using the inference accelerators in the containers, add the `inferenceAcceleratorResources` field and set it to a list of device names used for the inference accelerators. Each value in the list should match a `DeviceName` for an `InferenceAccelerator` specified in the task definition.

```python
# task_definition: ecs.TaskDefinition

inference_accelerator_resources = ["device1"]

task_definition.add_container("cont",
    image=ecs.ContainerImage.from_registry("test"),
    memory_limit_mi_b=1024,
    inference_accelerator_resources=inference_accelerator_resources
)
```

## ECS Exec command

Please note that ECS Exec leverages AWS Systems Manager (SSM), so as a prerequisite for the exec command to work, you need to have the SSM plugin for the AWS CLI installed locally. For more information, see [Install Session Manager plugin for AWS CLI](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html).

To enable the ECS Exec feature for your containers, set the boolean flag `enableExecuteCommand` to `true` in your `Ec2Service` or `FargateService`.

```python
# cluster: ecs.Cluster
# task_definition: ecs.TaskDefinition


service = ecs.Ec2Service(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    enable_execute_command=True
)
```

### Enabling logging

You can enable sending logs of your execute session commands to a CloudWatch log group or S3 bucket by configuring the `executeCommandConfiguration` property for your cluster. The default configuration will send the logs to CloudWatch Logs using the `awslogs` log driver that is configured in your task definition. Please note that when using your own `logConfiguration`, the specified log group or S3 bucket must already exist.

To encrypt data using your own KMS customer managed key (CMK), you must create a CMK and provide the key in the `kmsKey` field of the `executeCommandConfiguration`. To use this key for encrypting CloudWatch log data or the S3 bucket, make sure to associate the key with these resources on creation.

```python
# vpc: ec2.Vpc

kms_key = kms.Key(self, "KmsKey")

# Pass the KMS key in the `encryptionKey` field to associate the key to the log group
log_group = logs.LogGroup(self, "LogGroup",
    encryption_key=kms_key
)

# Pass the KMS key in the `encryptionKey` field to associate the key to the S3 bucket
exec_bucket = s3.Bucket(self, "EcsExecBucket",
    encryption_key=kms_key
)

cluster = ecs.Cluster(self, "Cluster",
    vpc=vpc,
    execute_command_configuration=ecs.ExecuteCommandConfiguration(
        kms_key=kms_key,
        log_configuration=ecs.ExecuteCommandLogConfiguration(
            cloud_watch_log_group=log_group,
            cloud_watch_encryption_enabled=True,
            s3_bucket=exec_bucket,
            s3_encryption_enabled=True,
            s3_key_prefix="exec-command-output"
        ),
        logging=ecs.ExecuteCommandLogging.OVERRIDE
    )
)
```
    "bugtrack_url": null,
    "license": "Apache-2.0",
    "summary": "The CDK Construct Library for AWS::ECS",
    "version": "1.203.0",
    "project_urls": {
        "Homepage": "https://github.com/aws/aws-cdk",
        "Source": "https://github.com/aws/aws-cdk.git"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "8c6e4fddc3bee8848844023850962a7e5d014088c5ffa85a232ec7611508cf84",
                "md5": "1f72debf002c118d14a6be7d1feaa324",
                "sha256": "43ecebdc25bcaebf91c63a5b655a9f744062535b2641c9f6b2bbf2e820c6ecb7"
            },
            "downloads": -1,
            "filename": "aws_cdk.aws_ecs-1.203.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "1f72debf002c118d14a6be7d1feaa324",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "~=3.7",
            "size": 1152123,
            "upload_time": "2023-05-31T22:54:34",
            "upload_time_iso_8601": "2023-05-31T22:54:34.871536Z",
            "url": "https://files.pythonhosted.org/packages/8c/6e/4fddc3bee8848844023850962a7e5d014088c5ffa85a232ec7611508cf84/aws_cdk.aws_ecs-1.203.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "84f857d58ec335de0804b4ed662de89c215f575d7d6671f8fa16762ce0ce65c4",
                "md5": "d7a4f43d1b0e866b5130a075367cb42e",
                "sha256": "2417fa831a8145a792e9b8dd09aa14d9051ad67a68c1499855e053909fcbb8a4"
            },
            "downloads": -1,
            "filename": "aws-cdk.aws-ecs-1.203.0.tar.gz",
            "has_sig": false,
            "md5_digest": "d7a4f43d1b0e866b5130a075367cb42e",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "~=3.7",
            "size": 1169306,
            "upload_time": "2023-05-31T23:02:07",
            "upload_time_iso_8601": "2023-05-31T23:02:07.366927Z",
            "url": "https://files.pythonhosted.org/packages/84/f8/57d58ec335de0804b4ed662de89c215f575d7d6671f8fa16762ce0ce65c4/aws-cdk.aws-ecs-1.203.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-05-31 23:02:07",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "aws",
    "github_project": "aws-cdk",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "aws-cdk.aws-ecs"
}
        
Elapsed time: 0.18179s