# CDK Pipelines
<!--BEGIN STABILITY BANNER-->

---
![End-of-Support](https://img.shields.io/badge/End--of--Support-critical.svg?style=for-the-badge)
> AWS CDK v1 has reached End-of-Support on 2023-06-01.
> This package is no longer being updated, and users should migrate to AWS CDK v2.
>
> For more information on how to migrate, see the [*Migrating to AWS CDK v2* guide](https://docs.aws.amazon.com/cdk/v2/guide/migrating-v2.html).
---
<!--END STABILITY BANNER-->
A construct library for painless Continuous Delivery of CDK applications.
CDK Pipelines is an *opinionated construct library*. It is purpose-built to
deploy one or more copies of your CDK applications using CloudFormation with a
minimal amount of effort on your part. It is *not* intended to support arbitrary
deployment pipelines, and very specifically it is not built to use CodeDeploy to
deploy applications to instances, or to deploy your custom-built ECR images to an ECS
cluster directly: use CDK file assets with CloudFormation Init for instances, or
CDK container assets for ECS clusters instead.
Give the CDK Pipelines way of doing things a shot first: you might find it does
everything you need. If you want or need more control, we recommend you drop
down to using the `aws-codepipeline` construct library directly.
> This module contains two sets of APIs: an **original** and a **modern** version of
> CDK Pipelines. The *modern* API has been updated to be easier to work with and
> customize, and will be the preferred API going forward. The *original* version
> of the API is still available for backwards compatibility, but we recommend migrating
> to the new version if possible.
>
> Compared to the original API, the modern API: has more sensible defaults; is
> more flexible; supports parallel deployments; supports multiple synth inputs;
> allows more control of CodeBuild project generation; supports deployment
> engines other than CodePipeline.
>
> The README for the original API, as well as a migration guide, can be found in [our GitHub repository](https://github.com/aws/aws-cdk/blob/master/packages/@aws-cdk/pipelines/ORIGINAL_API.md).
## At a glance
Deploying your application continuously starts by defining a
`MyApplicationStage`, a subclass of `Stage` that contains the stacks that make
up a single copy of your application.
You then define a `Pipeline`, instantiate as many instances of
`MyApplicationStage` as you want for your test and production environments, with
different parameters for each, and call `pipeline.addStage()` for each of
them. You can deploy to the same account and Region, or to a different one,
with the same amount of code. The *CDK Pipelines* library takes care of the
details.
CDK Pipelines supports multiple *deployment engines* (see
[Using a different deployment engine](#using-a-different-deployment-engine)),
and comes with a deployment engine that deploys CDK apps using AWS CodePipeline.
To use the CodePipeline engine, define a `CodePipeline` construct. The following
example creates a CodePipeline that deploys an application from GitHub:
```python
# Assumed imports for this example (CDK v1 Python packages)
from aws_cdk import core as cdk
from aws_cdk.core import Stack, Stage
import aws_cdk.aws_dynamodb as dynamodb
import aws_cdk.pipelines as pipelines

# The stacks for our app are minimally defined here. The internals of these
# stacks aren't important, except that DatabaseStack exposes an attribute
# "table" for a database table it defines, and ComputeStack accepts a reference
# to this table in its properties.
#
class DatabaseStack(Stack):
    def __init__(self, scope, id):
        super().__init__(scope, id)
        self.table = dynamodb.Table(self, "Table",
            partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING)
        )

class ComputeStack(Stack):
    def __init__(self, scope, id, *, table):
        super().__init__(scope, id)

#
# Stack to hold the pipeline
#
class MyPipelineStack(Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        pipeline = pipelines.CodePipeline(self, "Pipeline",
            synth=pipelines.ShellStep("Synth",
                # Use a connection created using the AWS console to authenticate to GitHub
                # Other sources are available.
                input=pipelines.CodePipelineSource.connection("my-org/my-app", "main",
                    connection_arn="arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41"
                ),
                commands=["npm ci", "npm run build", "npx cdk synth"]
            )
        )

        # 'MyApplication' is defined below. Call `addStage` as many times as
        # necessary with any account and region (may be different from the
        # pipeline's).
        pipeline.add_stage(MyApplication(self, "Prod",
            env=cdk.Environment(
                account="123456789012",
                region="eu-west-1"
            )
        ))

#
# Your application
#
# May consist of one or more Stacks (here, two)
#
# By declaring our DatabaseStack and our ComputeStack inside a Stage,
# we make sure they are deployed together, or not at all.
#
class MyApplication(Stage):
    def __init__(self, scope, id, *, env=None, outdir=None):
        super().__init__(scope, id, env=env, outdir=outdir)

        db_stack = DatabaseStack(self, "Database")
        ComputeStack(self, "Compute",
            table=db_stack.table
        )

# In your main file
MyPipelineStack(self, "PipelineStack",
    env=cdk.Environment(
        account="123456789012",
        region="eu-west-1"
    )
)
```
The pipeline is **self-mutating**, which means that if you add new
application stages in the source code, or new stacks to `MyApplication`, the
pipeline will automatically reconfigure itself to deploy those new stages and
stacks.
(Note that you have to *bootstrap* all environments before the above code
will work, and switch on "Modern synthesis" if you are using
CDKv1. See the section **CDK Environment Bootstrapping** below for
more information).
## Provisioning the pipeline
To provision the pipeline you have defined, make sure the target environment
has been bootstrapped (see below), and then deploy the
`PipelineStack` *once*. Afterwards, the pipeline will keep itself up-to-date.
> **Important**: be sure to `git commit` and `git push` before deploying the
> Pipeline stack using `cdk deploy`!
>
> The reason is that the pipeline will start deploying and self-mutating
> right away based on the sources in the repository, so the sources it finds
> in there should be the ones you want it to find.
Run the following commands to get the pipeline going:
```console
$ git commit -a
$ git push
$ cdk deploy PipelineStack
```
Administrative permissions to the account are only necessary up until
this point. We recommend you remove access to these credentials after doing this.
### Working on the pipeline
The self-mutation feature of the Pipeline might at times get in the way
of the pipeline development workflow. Each change to the pipeline must be pushed
to git; otherwise, after the pipeline is updated using `cdk deploy`, it will
automatically revert to the state found in git.

To make development more convenient, the self-mutation feature can be turned
off temporarily by passing the `selfMutation: false` property. For example:
```python
# Modern API
modern_pipeline = pipelines.CodePipeline(self, "Pipeline",
    self_mutation=False,
    synth=pipelines.ShellStep("Synth",
        input=pipelines.CodePipelineSource.connection("my-org/my-app", "main",
            connection_arn="arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41"
        ),
        commands=["npm ci", "npm run build", "npx cdk synth"]
    )
)

# Original API
cloud_assembly_artifact = codepipeline.Artifact()
original_pipeline = pipelines.CdkPipeline(self, "Pipeline",
    self_mutating=False,
    cloud_assembly_artifact=cloud_assembly_artifact
)
```
## Defining the pipeline

This section of the documentation describes the AWS CodePipeline engine,
which comes with this library. If you want to use a different deployment
engine, read the section
[Using a different deployment engine](#using-a-different-deployment-engine) below.
### Synth and sources
To define a pipeline, instantiate a `CodePipeline` construct from the
`@aws-cdk/pipelines` module. It takes one argument, a `synth` step, which is
expected to produce the CDK Cloud Assembly as its single output (the contents of
the `cdk.out` directory after running `cdk synth`). "Steps" are arbitrary
actions in the pipeline, typically used to run scripts or commands.
For the synth, use a `ShellStep` and specify the commands necessary to install
dependencies, the CDK CLI, build your project and run `cdk synth`; the specific
commands required will depend on the programming language you are using. For a
typical NPM-based project, the synth will look like this:
```python
# source: pipelines.IFileSetProducer
# the repository source

pipeline = pipelines.CodePipeline(self, "Pipeline",
    synth=pipelines.ShellStep("Synth",
        input=source,
        commands=["npm ci", "npm run build", "npx cdk synth"]
    )
)
```
The pipeline assumes that your `ShellStep` will produce a `cdk.out`
directory in the root, containing the CDK cloud assembly. If your
CDK project lives in a subdirectory, be sure to adjust the
`primaryOutputDirectory` to match:
```python
# source: pipelines.IFileSetProducer
# the repository source

pipeline = pipelines.CodePipeline(self, "Pipeline",
    synth=pipelines.ShellStep("Synth",
        input=source,
        commands=["cd mysubdir", "npm ci", "npm run build", "npx cdk synth"],
        primary_output_directory="mysubdir/cdk.out"
    )
)
```
The underlying `@aws-cdk/aws-codepipeline.Pipeline` construct will be produced
when `app.synth()` is called. You can also force it to be produced
earlier by calling `pipeline.buildPipeline()`. After you've called
that method, you can inspect the constructs that were produced by
accessing the properties of the `pipeline` object.
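For example, here is a minimal sketch (assuming the `pipeline` and `synth_project`
properties exposed by the modern API) that forces early construction and then
inspects the generated resources:

```python
# pipeline: pipelines.CodePipeline

# Force the underlying AWS CodePipeline construct to be created now,
# instead of waiting for app.synth()
pipeline.build_pipeline()

# After build_pipeline(), the generated constructs can be inspected,
# e.g. the underlying Pipeline and the CodeBuild project for the synth step
generated_pipeline = pipeline.pipeline
synth_project = pipeline.synth_project
```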
#### Commands for other languages and package managers
The commands you pass to `new ShellStep` will be very similar to the commands
you run on your own workstation to install dependencies and synth your CDK
project. Here are some (non-exhaustive) examples for what those commands might
look like in a number of different situations.
For Yarn, the install commands are different:
```python
# source: pipelines.IFileSetProducer
# the repository source

pipeline = pipelines.CodePipeline(self, "Pipeline",
    synth=pipelines.ShellStep("Synth",
        input=source,
        commands=["yarn install --frozen-lockfile", "yarn build", "npx cdk synth"]
    )
)
```
For Python projects, remember to install the CDK CLI globally (as
there is no `package.json` to automatically install it for you):
```python
# source: pipelines.IFileSetProducer
# the repository source

pipeline = pipelines.CodePipeline(self, "Pipeline",
    synth=pipelines.ShellStep("Synth",
        input=source,
        commands=["pip install -r requirements.txt", "npm install -g aws-cdk", "cdk synth"]
    )
)
```
For Java projects, remember to install the CDK CLI globally (as
there is no `package.json` to automatically install it for you);
the Maven compilation step will be executed automatically
when you run `cdk synth`:
```python
# source: pipelines.IFileSetProducer
# the repository source

pipeline = pipelines.CodePipeline(self, "Pipeline",
    synth=pipelines.ShellStep("Synth",
        input=source,
        commands=["npm install -g aws-cdk", "cdk synth"]
    )
)
```
You can adapt these examples to your own situation.
#### Migrating from buildspec.yml files
You may currently have the build instructions for your CodeBuild Projects in a
`buildspec.yml` file in your source repository. In addition to your build
commands, the CodeBuild Project's buildspec also controls some information that
CDK Pipelines manages for you, like artifact identifiers, input artifact
locations, Docker authorization, and exported variables.
Since there is in general no way for CDK Pipelines to modify the file in your
source repository, CDK Pipelines configures the BuildSpec directly on the
CodeBuild Project, instead of loading it from the `buildspec.yml` file.
Any change to the build commands therefore requires a pipeline self-mutation to take effect.
To avoid this, put your build instructions in a separate script, for example
`build.sh`, and call that script from the build `commands` array:
```python
# source: pipelines.IFileSetProducer

pipeline = pipelines.CodePipeline(self, "Pipeline",
    synth=pipelines.ShellStep("Synth",
        input=source,
        commands=["./build.sh"]
    )
)
```
Doing so keeps your exact build instructions in sync with your source code in
the source repository where it belongs, and provides a convenient build script
for developers at the same time.
#### CodePipeline Sources
In CodePipeline, *Sources* define where the source of your application lives.
When a change to the source is detected, the pipeline will start executing.
Source objects can be created by factory methods on the `CodePipelineSource` class:
##### GitHub, GitHub Enterprise, BitBucket using a connection
The recommended way of connecting to GitHub or BitBucket is by using a *connection*.
You will first use the AWS Console to authenticate to the source control
provider, and then use the connection ARN in your pipeline definition:
```python
pipelines.CodePipelineSource.connection("org/repo", "branch",
connection_arn="arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41"
)
```
##### GitHub using OAuth
You can also authenticate to GitHub using a personal access token. This expects
that you've created a personal access token and stored it in Secrets Manager.
By default, the source object will look for a secret named **github-token**, but
you can change the name. The token should have the **repo** and **admin:repo_hook**
scopes.
```python
pipelines.CodePipelineSource.git_hub("org/repo", "branch",
# This is optional
authentication=cdk.SecretValue.secrets_manager("my-token")
)
```
##### CodeCommit
You can use a CodeCommit repository as the source. Either create or import
the CodeCommit repository, and then use `CodePipelineSource.codeCommit`
to reference it:
```python
repository = codecommit.Repository.from_repository_name(self, "Repository", "my-repository")
pipelines.CodePipelineSource.code_commit(repository, "main")
```
##### S3
You can use a zip file in S3 as the source of the pipeline. The pipeline will be
triggered every time the file in S3 is changed:
```python
bucket = s3.Bucket.from_bucket_name(self, "Bucket", "my-bucket")
pipelines.CodePipelineSource.s3(bucket, "my/source.zip")
```
##### ECR
You can use a Docker image in ECR as the source of the pipeline. The pipeline will be
triggered every time an image is pushed to ECR:
```python
repository = ecr.Repository(self, "Repository")
pipelines.CodePipelineSource.ecr(repository)
```
#### Additional inputs
`ShellStep` allows passing in more than one input: additional
inputs will be placed in the directories you specify. Any step that produces an
output file set can be used as an input, such as a `CodePipelineSource`, but
also other `ShellStep`s:
```python
prebuild = pipelines.ShellStep("Prebuild",
input=pipelines.CodePipelineSource.git_hub("myorg/repo1", "main"),
primary_output_directory="./build",
commands=["./build.sh"]
)
pipeline = pipelines.CodePipeline(self, "Pipeline",
synth=pipelines.ShellStep("Synth",
input=pipelines.CodePipelineSource.git_hub("myorg/repo2", "main"),
additional_inputs={
"subdir": pipelines.CodePipelineSource.git_hub("myorg/repo3", "main"),
"../siblingdir": prebuild
},
commands=["./build.sh"]
)
)
```
### CDK application deployments
After you have defined the pipeline and the `synth` step, you can add one or
more CDK `Stages` which will be deployed to their target environments. To do
so, call `pipeline.addStage()` on the Stage object:
```python
# pipeline: pipelines.CodePipeline
# Do this as many times as necessary with any account and region
# Account and region may differ from the pipeline's.
pipeline.add_stage(MyApplicationStage(self, "Prod",
env=cdk.Environment(
account="123456789012",
region="eu-west-1"
)
))
```
CDK Pipelines will automatically discover all `Stacks` in the given `Stage`
object, determine their dependency order, and add appropriate actions to the
pipeline to publish the assets referenced in those stacks and deploy the stacks
in the right order.
If the `Stacks` are targeted at an environment in a different AWS account or
Region and that environment has been
[bootstrapped](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html)
, CDK Pipelines will transparently make sure the IAM roles are set up
correctly and any requisite replication Buckets are created.
#### Deploying in parallel
By default, all applications added to CDK Pipelines by calling `addStage()` will
be deployed in sequence, one after the other. If you have a lot of stages, you can
speed up the pipeline by choosing to deploy some stages in parallel. You do this
by calling `addWave()` instead of `addStage()`: a *wave* is a set of stages that
are all deployed in parallel instead of sequentially. Waves themselves are still
deployed in sequence. For example, the following will deploy two copies of your
application to `eu-west-1` and `eu-central-1` in parallel:
```python
# pipeline: pipelines.CodePipeline
europe_wave = pipeline.add_wave("Europe")
europe_wave.add_stage(MyApplicationStage(self, "Ireland",
env=cdk.Environment(region="eu-west-1")
))
europe_wave.add_stage(MyApplicationStage(self, "Germany",
env=cdk.Environment(region="eu-central-1")
))
```
#### Deploying to other accounts / encrypting the Artifact Bucket
CDK Pipelines can transparently deploy to other Regions and other accounts
(provided those target environments have been
[*bootstrapped*](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html)).
However, deploying to another account requires one additional piece of
configuration: you need to enable `crossAccountKeys: true` when creating the
pipeline.
This will encrypt the artifact bucket(s), but incurs a cost for maintaining the
KMS key.
Example:
```python
pipeline = pipelines.CodePipeline(self, "Pipeline",
    # Encrypt artifacts, required for cross-account deployments
    cross_account_keys=True,
    synth=pipelines.ShellStep("Synth",
        input=pipelines.CodePipelineSource.connection("my-org/my-app", "main",
            connection_arn="arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41"
        ),
        commands=["npm ci", "npm run build", "npx cdk synth"]
    )
)
```
### Validation
Every `addStage()` and `addWave()` command takes additional options. As part of these options,
you can specify `pre` and `post` steps, which are arbitrary steps that run before or after
the contents of the stage or wave, respectively. You can use these to add validations like
manual or automated gates to your pipeline. We recommend putting manual approval gates in the set of `pre` steps, and automated approval gates in
the set of `post` steps.
The following example shows both an automated approval in the form of a `ShellStep`, and
a manual approval in the form of a `ManualApprovalStep` added to the pipeline. Both must
pass in order to promote from the `PreProd` to the `Prod` environment:
```python
# pipeline: pipelines.CodePipeline
preprod = MyApplicationStage(self, "PreProd")
prod = MyApplicationStage(self, "Prod")
pipeline.add_stage(preprod,
post=[
pipelines.ShellStep("Validate Endpoint",
commands=["curl -Ssf https://my.webservice.com/"]
)
]
)
pipeline.add_stage(prod,
pre=[
pipelines.ManualApprovalStep("PromoteToProd")
]
)
```
You can also specify steps to be executed at the stack level. To achieve this, you can specify the stack and step via the `stackSteps` property:
```python
# pipeline: pipelines.CodePipeline
class MyStacksStage(Stage):
def __init__(self, scope, id, *, env=None, outdir=None):
super().__init__(scope, id, env=env, outdir=outdir)
self.stack1 = Stack(self, "stack1")
self.stack2 = Stack(self, "stack2")
prod = MyStacksStage(self, "Prod")
pipeline.add_stage(prod,
stack_steps=[pipelines.StackSteps(
stack=prod.stack1,
pre=[pipelines.ManualApprovalStep("Pre-Stack Check")], # Executed before stack is prepared
change_set=[pipelines.ManualApprovalStep("ChangeSet Approval")], # Executed after stack is prepared but before the stack is deployed
post=[pipelines.ManualApprovalStep("Post-Deploy Check")]
), pipelines.StackSteps(
stack=prod.stack2,
post=[pipelines.ManualApprovalStep("Post-Deploy Check")]
)]
)
```
If you specify multiple steps, they will execute in parallel by default. You can add dependencies between them
if you wish to specify an order. To add a dependency, call `step.addStepDependency()`:
```python
first_step = pipelines.ManualApprovalStep("A")
second_step = pipelines.ManualApprovalStep("B")
second_step.add_step_dependency(first_step)
```
For convenience, `Step.sequence()` will take an array of steps and add dependencies between adjacent steps,
so that the whole list executes in order:
```python
# Step B will depend on step A, and step C will depend on step B
ordered_steps = pipelines.Step.sequence([
pipelines.ManualApprovalStep("A"),
pipelines.ManualApprovalStep("B"),
pipelines.ManualApprovalStep("C")
])
```
#### Using CloudFormation Stack Outputs in approvals
Because many CloudFormation deployments result in the generation of resources with unpredictable
names, validations have support for reading back CloudFormation Outputs after a deployment. This
makes it possible to pass (for example) the generated URL of a load balancer to the test set.
To use Stack Outputs, expose the `CfnOutput` object you're interested in, and
pass it to `envFromCfnOutputs` of the `ShellStep`:
```python
# pipeline: pipelines.CodePipeline
class MyOutputStage(Stage):
def __init__(self, scope, id, *, env=None, outdir=None):
super().__init__(scope, id, env=env, outdir=outdir)
self.load_balancer_address = CfnOutput(self, "Output", value="value")
lb_app = MyOutputStage(self, "MyApp")
pipeline.add_stage(lb_app,
post=[
pipelines.ShellStep("HitEndpoint",
env_from_cfn_outputs={
# Make the load balancer address available as $URL inside the commands
"URL": lb_app.load_balancer_address
},
commands=["curl -Ssf $URL"]
)
]
)
```
#### Running scripts compiled during the synth step
As part of a validation, you probably want to run a test suite that's more
elaborate than what can be expressed in a couple of lines of shell script.
You can bring additional files into the shell script validation by supplying
the `input` or `additionalInputs` property of `ShellStep`. The input can
be produced by the `Synth` step, or come from a source or any other build
step.
Here's an example that captures an additional output directory in the synth
step and runs tests from there:
```python
# synth: pipelines.ShellStep
stage = MyApplicationStage(self, "MyApplication")
pipeline = pipelines.CodePipeline(self, "Pipeline", synth=synth)
pipeline.add_stage(stage,
post=[
pipelines.ShellStep("Approve",
# Use the contents of the 'integ' directory from the synth step as the input
input=synth.add_output_directory("integ"),
commands=["cd integ && ./run.sh"]
)
]
)
```
### Customizing CodeBuild Projects
CDK Pipelines will generate CodeBuild projects for each `ShellStep` you use, and it
will also generate CodeBuild projects to publish assets and perform the self-mutation
of the pipeline. To control the various aspects of the CodeBuild projects that get
generated, use a `CodeBuildStep` instead of a `ShellStep`. This class has a number
of properties that allow you to customize various aspects of the projects:
```python
# vpc: ec2.Vpc
# my_security_group: ec2.SecurityGroup
pipelines.CodeBuildStep("Synth",
# ...standard ShellStep props...
commands=[],
env={},
# If you are using a CodeBuildStep explicitly, set the 'cdk.out' directory
# to be the synth step's output.
primary_output_directory="cdk.out",
# Control the name of the project
project_name="MyProject",
# Control parts of the BuildSpec other than the regular 'build' and 'install' commands
partial_build_spec=codebuild.BuildSpec.from_object({
"version": "0.2"
}),
# Control the build environment
build_environment=codebuild.BuildEnvironment(
compute_type=codebuild.ComputeType.LARGE
),
timeout=Duration.minutes(90),
# Control Elastic Network Interface creation
vpc=vpc,
subnet_selection=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT),
security_groups=[my_security_group],
# Additional policy statements for the execution role
role_policy_statements=[
iam.PolicyStatement()
]
)
```
You can also configure defaults for *all* CodeBuild projects by passing `codeBuildDefaults`,
or just for the synth, asset publishing, and self-mutation projects by passing `synthCodeBuildDefaults`,
`assetPublishingCodeBuildDefaults`, or `selfMutationCodeBuildDefaults`:
```python
# vpc: ec2.Vpc
# my_security_group: ec2.SecurityGroup
pipelines.CodePipeline(self, "Pipeline",
# Standard CodePipeline properties
synth=pipelines.ShellStep("Synth",
input=pipelines.CodePipelineSource.connection("my-org/my-app", "main",
connection_arn="arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41"
),
commands=["npm ci", "npm run build", "npx cdk synth"
]
),
# Defaults for all CodeBuild projects
code_build_defaults=pipelines.CodeBuildOptions(
# Prepend commands and configuration to all projects
partial_build_spec=codebuild.BuildSpec.from_object({
"version": "0.2"
}),
# Control the build environment
build_environment=codebuild.BuildEnvironment(
compute_type=codebuild.ComputeType.LARGE
),
# Control Elastic Network Interface creation
vpc=vpc,
subnet_selection=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT),
security_groups=[my_security_group],
# Additional policy statements for the execution role
role_policy=[
iam.PolicyStatement()
]
),
synth_code_build_defaults=pipelines.CodeBuildOptions(),
asset_publishing_code_build_defaults=pipelines.CodeBuildOptions(),
self_mutation_code_build_defaults=pipelines.CodeBuildOptions()
)
```
### Arbitrary CodePipeline actions
If you want to add a type of CodePipeline action to the CDK Pipeline that
doesn't have a matching class yet, you can define your own step class that extends
`Step` and implements `ICodePipelineActionFactory`.
Here's an example that adds a Jenkins step:
```python
class MyJenkinsStep(pipelines.Step, pipelines.ICodePipelineActionFactory):
    def __init__(self, provider, input):
        super().__init__("MyJenkinsStep")
        self.provider = provider
        self.input = input

        # This is necessary if your step accepts parameters, like environment variables,
        # that may contain outputs from other steps. It doesn't matter what the
        # structure is, as long as it contains the values that may contain outputs.
        self.discover_referenced_outputs({
            "env": {}
        })

    def produce_action(self, stage, *, scope, action_name, run_order, variables_namespace=None, artifacts, fallback_artifact=None, pipeline, code_build_defaults=None, before_self_mutation=None):
        # This is where you control what type of Action gets added to the
        # CodePipeline
        stage.add_action(cpactions.JenkinsAction(
            # Copy 'action_name' and 'run_order' from the options
            action_name=action_name,
            run_order=run_order,

            # Jenkins-specific configuration
            type=cpactions.JenkinsActionType.TEST,
            jenkins_provider=self.provider,
            project_name="MyJenkinsProject",

            # Translate the FileSet into a codepipeline.Artifact
            inputs=[artifacts.to_code_pipeline(self.input)]
        ))

        return pipelines.CodePipelineActionFactoryResult(run_orders_consumed=1)
```
## Using Docker in the pipeline
Docker can be used in 3 different places in the pipeline:
* If you are using Docker image assets in your application stages: Docker will
run in the asset publishing projects.
* If you are using Docker image assets in your stack (for example as
images for your CodeBuild projects): Docker will run in the self-mutate project.
* If you are using Docker to bundle file assets anywhere in your project (for
example, if you are using such construct libraries as
`@aws-cdk/aws-lambda-nodejs`): Docker will run in the
*synth* project.
For the first case, you don't need to do anything special. For the other two cases,
you need to make sure that **privileged mode** is enabled on the correct CodeBuild
projects, so that Docker can run correctly. The following sections describe how to do
that.
You may also need to authenticate to Docker registries to avoid being throttled.
See the section **Authenticating to Docker registries** below for information on how to do
that.
### Using Docker image assets in the pipeline
If your `PipelineStack` is using Docker image assets (as opposed to the application
stacks the pipeline is deploying), for example by the use of `LinuxBuildImage.fromAsset()`,
you need to pass `dockerEnabledForSelfMutation: true` to the pipeline. For example:
```python
pipeline = pipelines.CodePipeline(self, "Pipeline",
    synth=pipelines.ShellStep("Synth",
        input=pipelines.CodePipelineSource.connection("my-org/my-app", "main",
            connection_arn="arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41"
        ),
        commands=["npm ci", "npm run build", "npx cdk synth"]
    ),
    # Turn this on because the pipeline uses Docker image assets
    docker_enabled_for_self_mutation=True
)

pipeline.add_wave("MyWave",
    post=[
        pipelines.CodeBuildStep("RunApproval",
            commands=["command-from-image"],
            build_environment=codebuild.BuildEnvironment(
                # The use of a Docker image asset in the pipeline requires turning on
                # 'dockerEnabledForSelfMutation'.
                build_image=codebuild.LinuxBuildImage.from_asset(self, "Image",
                    directory="./docker-image"
                )
            )
        )
    ]
)
```
> **Important**: You must turn on the `dockerEnabledForSelfMutation` flag,
> commit and allow the pipeline to self-update *before* adding the actual
> Docker asset.
### Using bundled file assets
If you are using asset bundling anywhere (such as automatically done for you
if you add a construct like `@aws-cdk/aws-lambda-nodejs`), you need to pass
`dockerEnabledForSynth: true` to the pipeline. For example:
```python
pipeline = pipelines.CodePipeline(self, "Pipeline",
synth=pipelines.ShellStep("Synth",
input=pipelines.CodePipelineSource.connection("my-org/my-app", "main",
connection_arn="arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41"
),
commands=["npm ci", "npm run build", "npx cdk synth"]
),
# Turn this on because the application uses bundled file assets
docker_enabled_for_synth=True
)
```
> **Important**: You must turn on the `dockerEnabledForSynth` flag,
> commit and allow the pipeline to self-update *before* adding the actual
> Docker asset.
### Authenticating to Docker registries
You can specify credentials to use for authenticating to Docker registries as part of the
pipeline definition. This can be useful if any Docker image assets — in the pipeline or
any of the application stages — require authentication, either due to being in a
different environment (e.g., ECR repo) or to avoid throttling (e.g., DockerHub).
```python
docker_hub_secret = secretsmanager.Secret.from_secret_complete_arn(self, "DHSecret", "arn:aws:...")
custom_reg_secret = secretsmanager.Secret.from_secret_complete_arn(self, "CRSecret", "arn:aws:...")
repo1 = ecr.Repository.from_repository_arn(self, "Repo1", "arn:aws:ecr:eu-west-1:0123456789012:repository/Repo1")
repo2 = ecr.Repository.from_repository_arn(self, "Repo2", "arn:aws:ecr:eu-west-1:0123456789012:repository/Repo2")
pipeline = pipelines.CodePipeline(self, "Pipeline",
docker_credentials=[
pipelines.DockerCredential.docker_hub(docker_hub_secret),
pipelines.DockerCredential.custom_registry("dockerregistry.example.com", custom_reg_secret),
pipelines.DockerCredential.ecr([repo1, repo2])
],
synth=pipelines.ShellStep("Synth",
input=pipelines.CodePipelineSource.connection("my-org/my-app", "main",
connection_arn="arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41"
),
commands=["npm ci", "npm run build", "npx cdk synth"]
)
)
```
For authenticating to Docker registries that require a username and password combination
(like DockerHub), create a Secrets Manager Secret with fields named `username`
and `secret`, and import it (the field names can be customized).
Authentication to ECR repositories is done using the execution role of the
relevant CodeBuild job. Both types of credentials can be provided with an
optional role to assume before requesting the credentials.
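As a sketch of those options (assuming the `secret_username_field`,
`secret_password_field` and `assume_role` option names; `my_role` is a placeholder):

```python
# my_role: iam.IRole

custom_reg_secret = secretsmanager.Secret.from_secret_complete_arn(self, "CRSecret", "arn:aws:...")

creds = pipelines.DockerCredential.custom_registry("dockerregistry.example.com", custom_reg_secret,
    # Fields in the secret that hold the credentials
    # (the defaults are 'username' and 'secret')
    secret_username_field="login",
    secret_password_field="pass",
    # Role to assume before reading the secret
    assume_role=my_role
)
```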
By default, the Docker credentials provided to the pipeline will be available to
the **Synth**, **Self-Update**, and **Asset Publishing** actions within the
pipeline. The scope of the credentials can be limited via the `DockerCredentialUsage` option.
```python
docker_hub_secret = secretsmanager.Secret.from_secret_complete_arn(self, "DHSecret", "arn:aws:...")
# Only the image asset publishing actions will be granted read access to the secret.
creds = pipelines.DockerCredential.docker_hub(docker_hub_secret, usages=[pipelines.DockerCredentialUsage.ASSET_PUBLISHING])
```
## CDK Environment Bootstrapping
An *environment* is an *(account, region)* pair where you want to deploy a
CDK stack (see
[Environments](https://docs.aws.amazon.com/cdk/latest/guide/environments.html)
in the CDK Developer Guide). In a Continuous Deployment pipeline, there are
at least two environments involved: the environment where the pipeline is
provisioned, and the environment where you want to deploy the application (or
different stages of the application). These can be the same, though best
practices recommend you isolate your different application stages from each
other in different AWS accounts or regions.
Before you can provision the pipeline, you have to *bootstrap* the environment you want
to create it in. If you are deploying your application to different environments, you
also have to bootstrap those and be sure to add a *trust* relationship.
After you have bootstrapped an environment and created a pipeline that deploys
to it, it's important that you don't delete the stack or change its *Qualifier*,
or future deployments to this environment will fail. If you want to upgrade
the bootstrap stack to a newer version, do that by updating it in-place.
> This library requires the *modern* bootstrapping stack which has
> been updated specifically to support cross-account continuous delivery.
>
> If you are using CDKv2, you do not need to do anything else. Modern
> bootstrapping and modern stack synthesis (also known as "default stack
> synthesis") is the default.
>
> If you are using CDKv1, you need to opt in to modern bootstrapping and
> modern stack synthesis using a feature flag. Make sure `cdk.json` includes:
>
> ```json
> {
> "context": {
> "@aws-cdk/core:newStyleStackSynthesis": true
> }
> }
> ```
>
> And be sure to run `cdk bootstrap` in the same directory as the `cdk.json`
> file.
To bootstrap an environment for provisioning the pipeline:
```console
$ npx cdk bootstrap \
[--profile admin-profile-1] \
--cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
aws://111111111111/us-east-1
```
To bootstrap a different environment for deploying CDK applications into using
a pipeline in account `111111111111`:
```console
$ npx cdk bootstrap \
[--profile admin-profile-2] \
--cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
--trust 111111111111 \
aws://222222222222/us-east-2
```
If you only want to trust an account to do lookups (e.g., when your CDK application has a
`Vpc.fromLookup()` call), use the option `--trust-for-lookup`:
```console
$ npx cdk bootstrap \
[--profile admin-profile-2] \
--cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
--trust-for-lookup 111111111111 \
aws://222222222222/us-east-2
```
These command lines, explained:
* `npx`: means to use the CDK CLI from the current NPM install. If you are using
a global install of the CDK CLI, leave this out.
* `--profile`: should indicate a profile with administrator privileges that has
permissions to provision a pipeline in the indicated account. You can leave this
flag out if either the AWS default credentials or the `AWS_*` environment
variables confer these permissions.
* `--cloudformation-execution-policies`: ARN of the managed policy that future CDK
deployments should execute with. By default this is `AdministratorAccess`, but
if you also specify the `--trust` flag to give another Account permissions to
deploy into the current account, you must specify a value here.
* `--trust`: indicates which other account(s) should have permissions to deploy
CDK applications into this account. In this case we indicate the Pipeline's account,
but you could also use this for developer accounts (don't do that for production
application accounts though!).
* `--trust-for-lookup`: gives a more limited set of permissions to the
trusted account, only allowing it to look up values such as availability zones, EC2 images and
VPCs. `--trust-for-lookup` does not give permissions to modify anything in the account.
Note that `--trust` implies `--trust-for-lookup`, so you don't need to specify
the same account twice.
* `aws://222222222222/us-east-2`: the account and region we're bootstrapping.
> Be aware that anyone who has access to the trusted Accounts **effectively has all
> permissions conferred by the configured CloudFormation execution policies**,
> allowing them to do things like read arbitrary S3 buckets and create arbitrary
> infrastructure in the bootstrapped account. Restrict the list of `--trust`ed Accounts,
> or restrict the policies configured by `--cloudformation-execution-policies`.
<br>
> **Security tip**: we recommend that you use administrative credentials to an
> account only to bootstrap it and provision the initial pipeline. Otherwise,
> access to administrative credentials should be dropped as soon as possible.
<br>
> **On the use of AdministratorAccess**: The use of the `AdministratorAccess` policy
> ensures that your pipeline can deploy every type of AWS resource to your account.
> Make sure you trust all the code and dependencies that make up your CDK app.
> Check with the appropriate department within your organization to decide on the
> proper policy to use.
>
> If your policy includes permissions to create or attach permissions to a role,
> developers can escalate their privileges by granting themselves more permissive permissions.
> Thus, we recommend implementing a [permissions boundary](https://aws.amazon.com/premiumsupport/knowledge-center/iam-permission-boundaries/)
> in the CDK Execution role. To do this, you can bootstrap using the `--template` option with
> [a customized template](https://github.com/aws-samples/aws-bootstrap-kit-examples/blob/ba28a97d289128281bc9483bcba12c1793f2c27a/source/1-SDLC-organization/lib/cdk-bootstrap-template.yml#L395) that contains a permission boundary.
### Migrating from old bootstrap stack
The bootstrap stack is a CloudFormation stack in your account named
**CDKToolkit** that provisions a set of resources required for the CDK
to deploy into that environment.
The "new" bootstrap stack (obtained by running `cdk bootstrap` with
`CDK_NEW_BOOTSTRAP=1`) is slightly more elaborate than the "old" stack. It
contains:
* An S3 bucket and ECR repository with predictable names, so that we can reference
assets in these storage locations *without* the use of CloudFormation template
parameters.
* A set of roles with permissions to access these asset locations and to execute
CloudFormation, assumable from whatever accounts you specify under `--trust`.
It is possible and safe to migrate from the old bootstrap stack to the new
bootstrap stack. This will create a new S3 file asset bucket in your account
and orphan the old bucket. You should manually delete the orphaned bucket
after you are sure you have redeployed all CDK applications and there are no
more references to the old asset bucket.
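For example, re-bootstrapping an environment with the new-style stack might look like
this (a sketch; the profile, account and Region are placeholders):

```console
$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
    [--profile admin-profile-1] \
    --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
    aws://111111111111/us-east-1
```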
## Context Lookups
You might be using CDK constructs that need to look up [runtime
context](https://docs.aws.amazon.com/cdk/latest/guide/context.html#context_methods),
which is information from the target AWS Account and Region the CDK needs to
synthesize CloudFormation templates appropriate for that environment. Examples
of this kind of context lookups are the number of Availability Zones available
to you, a Route53 Hosted Zone ID, or the ID of an AMI in a given region. This
information is automatically looked up when you run `cdk synth`.
By default, a `cdk synth` performed in a pipeline will not have permissions
to perform these lookups, and the lookups will fail. This is by design.
**Our recommended way of using lookups** is by running `cdk synth` on the
developer workstation and checking in the `cdk.context.json` file, which
contains the results of the context lookups. This will make sure your
synthesized infrastructure is consistent and repeatable. If you do not commit
`cdk.context.json`, the results of the lookups may suddenly be different in
unexpected ways, and even produce results that cannot be deployed or will cause
data loss. To give an account permissions to perform lookups against an
environment, without being able to deploy to it and make changes, run
`cdk bootstrap --trust-for-lookup=<account>`.
If you want to use lookups directly from the pipeline, you either need to accept
the risk of nondeterminism, or make sure you save and load the
`cdk.context.json` file somewhere between synth runs. Finally, you should
give the synth CodeBuild execution role permissions to assume the bootstrapped
lookup roles. As an example, doing so would look like this:
```python
pipelines.CodePipeline(self, "Pipeline",
    synth=pipelines.CodeBuildStep("Synth",
        input=pipelines.CodePipelineSource.connection("my-org/my-app", "main",
            connection_arn="arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41"
        ),
        commands=["...", "npm ci", "npm run build", "npx cdk synth", "..."],
        role_policy_statements=[
            iam.PolicyStatement(
                actions=["sts:AssumeRole"],
                resources=["*"],
                conditions={
                    "StringEquals": {
                        "iam:ResourceTag/aws-cdk:bootstrap-role": "lookup"
                    }
                }
            )
        ]
    )
)
```
The above example requires that the target environments have all
been bootstrapped with bootstrap stack version `8`, released with
CDK CLI `1.114.0`.
## Security Considerations
It's important to stay safe while employing Continuous Delivery. The CDK Pipelines
library comes with secure defaults to the best of our ability, but by its
very nature the library cannot take care of everything.
We therefore expect you to mind the following:
* Maintain dependency hygiene and vet 3rd-party software you use. Any software you
run on your build machine has the ability to change the infrastructure that gets
deployed. Be careful with the software you depend on.
* Use dependency locking to prevent accidental upgrades! The default `CdkSynths` that
come with CDK Pipelines will expect `package-lock.json` and `yarn.lock` to
ensure your dependencies are the ones you expect.
* Credentials to production environments should be short-lived. After
bootstrapping and the initial pipeline provisioning, there is no more need for
developers to have access to any of the account credentials; all further
changes can be deployed through git. Avoid the chances of credentials leaking
by not having them in the first place!
### Confirm permissions broadening
To keep tabs on the security impact of changes going out through your pipeline,
you can insert a security check before any stage deployment. This security check
will check if the upcoming deployment would add any new IAM permissions or
security group rules, and if so pause the pipeline and require you to confirm
the changes.
The security check will appear as two distinct actions in your pipeline: first
a CodeBuild project that runs `cdk diff` on the stage that's about to be deployed,
followed by a Manual Approval action that pauses the pipeline. If the deployment
will not add any new IAM permissions or security group rules, the manual approval
step is automatically satisfied. The pipeline will look like this:
```txt
Pipeline
├── ...
├── MyApplicationStage
│ ├── MyApplicationSecurityCheck // Security Diff Action
│ ├── MyApplicationManualApproval // Manual Approval Action
│ ├── Stack.Prepare
│ └── Stack.Deploy
└── ...
```
You can insert the security check by using a `ConfirmPermissionsBroadening` step:
```python
# pipeline: pipelines.CodePipeline
stage = MyApplicationStage(self, "MyApplication")
pipeline.add_stage(stage,
pre=[
pipelines.ConfirmPermissionsBroadening("Check", stage=stage)
]
)
```
To get notified when there is a change that needs your manual approval,
create an SNS Topic, subscribe your own email address, and pass it in
as the `notificationTopic` property:
```python
# pipeline: pipelines.CodePipeline
topic = sns.Topic(self, "SecurityChangesTopic")
topic.add_subscription(subscriptions.EmailSubscription("test@email.com"))
stage = MyApplicationStage(self, "MyApplication")
pipeline.add_stage(stage,
pre=[
pipelines.ConfirmPermissionsBroadening("Check",
stage=stage,
notification_topic=topic
)
]
)
```
**Note**: Manual approval notifications only apply when an application has the security
check enabled.
## Using a different deployment engine
CDK Pipelines supports multiple *deployment engines*, but this module vends a
construct for only one such engine: AWS CodePipeline. It is also possible to
use CDK Pipelines to build pipelines backed by other deployment engines.
Here is a list of CDK Libraries that integrate CDK Pipelines with
alternative deployment engines:
* GitHub Workflows: [`cdk-pipelines-github`](https://github.com/cdklabs/cdk-pipelines-github)
## Troubleshooting
Here are some common errors you may encounter while using this library.
### Pipeline: Internal Failure
If you see the following error during deployment of your pipeline:
```plaintext
CREATE_FAILED | AWS::CodePipeline::Pipeline | Pipeline/Pipeline
Internal Failure
```
There's something wrong with your GitHub access token. It might be missing, or not have the
right permissions to access the repository you're trying to access.
### Key: Policy contains a statement with one or more invalid principals
If you see the following error during deployment of your pipeline:
```plaintext
CREATE_FAILED | AWS::KMS::Key | Pipeline/Pipeline/ArtifactsBucketEncryptionKey
Policy contains a statement with one or more invalid principals.
```
One of the target (account, region) environments has not been bootstrapped
with the new bootstrap stack. Check your target environments and make sure
they are all bootstrapped.
### Message: no matching base directory path found for cdk.out
If you see this error during the **Synth** step, it means that CodeBuild
is expecting to find a `cdk.out` directory in the root of your CodeBuild project,
but the directory wasn't there. There are two common causes for this:
* `cdk synth` is not being executed: `cdk synth` used to be run
implicitly for you, but you now have to explicitly include the command.
For NPM-based projects, add `npx cdk synth` to the end of the `commands`
property, for other languages add `npm install -g aws-cdk` and `cdk synth`.
* Your CDK project lives in a subdirectory: you added a `cd <somedirectory>` command
  to the list of commands; don't forget to tell the `ShellStep` about the
  different location of `cdk.out`, by passing `primaryOutputDirectory: '<somedirectory>/cdk.out'`.
### <Stack> is in ROLLBACK_COMPLETE state and can not be updated
If you see the following error during execution of your pipeline:
```plaintext
Stack ... is in ROLLBACK_COMPLETE state and can not be updated. (Service:
AmazonCloudFormation; Status Code: 400; Error Code: ValidationError; Request
ID: ...)
```
The stack failed its previous deployment, and is in a non-retryable state.
Go into the CloudFormation console, delete the stack, and retry the deployment.
### Cannot find module 'xxxx' or its corresponding type declarations
You may see this if you are using TypeScript or other NPM-based languages,
when using NPM 7 on your workstation (where you generate `package-lock.json`)
and NPM 6 on the CodeBuild image used for synthesizing.
It looks like NPM 7 writes less information to `package-lock.json` than NPM 6 did,
so NPM 6 reading that same file no longer installs all required packages.
Make sure you are using the same NPM version everywhere: either downgrade your
workstation's version or upgrade the CodeBuild version.
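One possible workaround is to align the NPM version inside the synth step before
installing dependencies; a sketch, assuming NPM 7 on your workstation (the `npm@7`
pin is the assumption to adjust):

```python
# source: pipelines.IFileSetProducer

pipeline = pipelines.CodePipeline(self, "Pipeline",
    synth=pipelines.ShellStep("Synth",
        input=source,
        # Match the NPM version that wrote your package-lock.json
        commands=["npm install -g npm@7", "npm ci", "npm run build", "npx cdk synth"]
    )
)
```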
### Cannot find module '.../check-node-version.js' (MODULE_NOT_FOUND)
The above error may be produced by `npx` when executing the CDK CLI, or any
project that uses the AWS SDK for JavaScript, without the target application
having been installed yet. For example, it can be triggered by `npx cdk synth`
if `aws-cdk` is not in your `package.json`.
Work around this by either installing the target application using NPM *before*
running `npx`, or set the environment variable `NPM_CONFIG_UNSAFE_PERM=true`.
### Cannot connect to the Docker daemon at unix:///var/run/docker.sock
If, in the 'Synth' action (inside the 'Build' stage) of your pipeline, you get an error like this:
```console
stderr: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
```
It means that the AWS CodeBuild project for 'Synth' is not configured to run in privileged mode,
which prevents Docker builds from happening. This typically happens if you use a CDK construct
that bundles assets using tools run via Docker, like `aws-lambda-nodejs`, `aws-lambda-python`,
`aws-lambda-go` and others.

Make sure you set `privileged: true` in the build environment of the synth definition:
```python
source_artifact = codepipeline.Artifact()
cloud_assembly_artifact = codepipeline.Artifact()
pipeline = pipelines.CdkPipeline(self, "MyPipeline",
cloud_assembly_artifact=cloud_assembly_artifact,
synth_action=pipelines.SimpleSynthAction.standard_npm_synth(
source_artifact=source_artifact,
cloud_assembly_artifact=cloud_assembly_artifact,
environment=codebuild.BuildEnvironment(
privileged=True
)
)
)
```
After turning on `privilegedMode: true`, you will need to do a one-time manual `cdk deploy` of your
pipeline to get it going again (with a broken 'Synth' the pipeline will not be able to self-update
to the right state).
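If you are using the modern API, the equivalent fix is the `dockerEnabledForSynth` flag
described earlier; a minimal sketch:

```python
pipeline = pipelines.CodePipeline(self, "Pipeline",
    # Enables privileged mode on the synth project so Docker bundling can run
    docker_enabled_for_synth=True,
    synth=pipelines.ShellStep("Synth",
        input=pipelines.CodePipelineSource.connection("my-org/my-app", "main",
            connection_arn="arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41"
        ),
        commands=["npm ci", "npm run build", "npx cdk synth"]
    )
)
```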
### S3 error: Access Denied
An "S3 Access Denied" error can have two causes:
* Asset hashes have changed, but self-mutation has been disabled in the pipeline.
* You have deleted and recreated the bootstrap stack, or changed its qualifier.
#### Self-mutation step has been removed
Some constructs, such as EKS clusters, generate nested stacks. When CloudFormation tries
to deploy those stacks, it may fail with this error:
```console
S3 error: Access Denied For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
```
This happens because the pipeline is not self-mutating and, as a consequence, the `FileAssetX`
build projects get out-of-sync with the generated templates. To fix this, make sure the
`selfMutating` property is set to `true`:
```python
cloud_assembly_artifact = codepipeline.Artifact()
pipeline = pipelines.CdkPipeline(self, "MyPipeline",
self_mutating=True,
cloud_assembly_artifact=cloud_assembly_artifact
)
```
#### Bootstrap roles have been renamed or recreated
While attempting to deploy an application stage, the "Prepare" or "Deploy" stage may fail with a cryptic error like:
`Action execution failed Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 0123456ABCDEFGH; S3 Extended Request ID: 3hWcrVkhFGxfiMb/rTJO0Bk7Qn95x5ll4gyHiFsX6Pmk/NT+uX9+Z1moEcfkL7H3cjH7sWZfeD0=; Proxy: null)`
This generally indicates that the roles necessary to deploy have been deleted (or deleted and re-created);
for example, if the bootstrap stack has been deleted and re-created, this scenario will happen. Under the hood,
the resources that rely on these roles (e.g., `cdk-$qualifier-deploy-role-$account-$region`) point to different
canonical IDs than the recreated versions of these roles, which causes the errors. There are no simple solutions
to this issue, and for that reason we **strongly recommend** that bootstrap stacks not be deleted and re-created
once created.
The most automated way to solve the issue is to introduce a secondary bootstrap stack. By changing the qualifier
that the pipeline stack looks for, a change will be detected and the impacted policies and resources will be updated.
A hypothetical recovery workflow would look something like this:
* First, for all impacted environments, create a secondary bootstrap stack:
```sh
$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
--qualifier random1234 \
--toolkit-stack-name CDKToolkitTemp \
aws://111111111111/us-east-1
```
* Update all impacted stacks in the pipeline to use this new qualifier.
See https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html for more info.
```python
Stack(self, "MyStack",
# Update this qualifier to match the one used above.
synthesizer=cdk.DefaultStackSynthesizer(
qualifier="randchars1234"
)
)
```
* Deploy the updated stacks. This will update the stacks to use the roles created in the new bootstrap stack.
* (Optional) Restore back to the original state:
* Revert the change made in step #2 above
* Re-deploy the pipeline to use the original qualifier.
* Delete the temporary bootstrap stack(s)
##### Manual Alternative
Alternatively, the errors can be resolved by finding each impacted resource and policy, and correcting the policies
by replacing the canonical IDs (e.g., `AROAYBRETNYCYV6ZF2R93`) with the appropriate ARNs. As an example, the KMS
encryption key policy for the artifacts bucket may have a statement that looks like the following:
```json
{
"Effect" : "Allow",
"Principal" : {
// "AWS" : "AROAYBRETNYCYV6ZF2R93" // Indicates this issue; replace this value
"AWS": "arn:aws:iam::0123456789012:role/cdk-hnb659fds-deploy-role-0123456789012-eu-west-1", // Correct value
},
"Action" : [ "kms:Decrypt", "kms:DescribeKey" ],
"Resource" : "*"
}
```
Any resource or policy that references the qualifier (`hnb659fds` by default) will need to be updated.
### This CDK CLI is not compatible with the CDK library used by your application
The CDK CLI version used in your pipeline is too old to read the Cloud Assembly
produced by your CDK app.
Most likely this happens in the `SelfMutate` action: you are passing the `cliVersion`
parameter to control the version of the CDK CLI, and you just updated the CDK
framework version that your application uses. You either forgot to change the
`cliVersion` parameter, or changed the `cliVersion` in the same commit in which
you changed the framework version. Because a change to the pipeline settings needs
a successful run of the `SelfMutate` step to be applied, the next iteration of the
`SelfMutate` step still executes with the *old* CLI version, and that old CLI version
is not able to read the cloud assembly produced by the new framework version.
Solution: change the `cliVersion` first, commit, push and deploy, and only then
change the framework version.
We recommend you avoid specifying the `cliVersion` parameter at all. By default
the pipeline will use the latest CLI version, which will support all cloud assembly
versions.
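If you do need to pin it, the parameter lives on the pipeline itself. A minimal sketch
(assuming the `cliVersion` property of `CodePipeline`; the version string is a placeholder):

```python
# synth: pipelines.ShellStep

pipeline = pipelines.CodePipeline(self, "Pipeline",
    # Remember to bump this *before* bumping the framework version
    cli_version="1.204.0",
    synth=synth
)
```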
## Known Issues
There are some usability issues that are caused by underlying technology, and
cannot be remedied by CDK at this point. They are reproduced here for completeness.
* **Console links to other accounts will not work**: the AWS CodePipeline
console will assume all links are relative to the current account. You will
not be able to use the pipeline console to click through to a CloudFormation
stack in a different account.
* **If a change set failed to apply, the pipeline must be restarted**: if a change
  set failed to apply, it cannot be retried. The pipeline must be restarted from
  the top by clicking **Release Change**.
* **A stack that failed to create must be deleted manually**: if a stack
failed to create on the first attempt, you must delete it using the
CloudFormation console before starting the pipeline again by clicking
**Release Change**.
Raw data
{
"_id": null,
"home_page": "https://github.com/aws/aws-cdk",
"name": "aws-cdk.pipelines",
"maintainer": "",
"docs_url": null,
"requires_python": "~=3.7",
"maintainer_email": "",
"keywords": "",
"author": "Amazon Web Services",
"author_email": "",
"download_url": "https://files.pythonhosted.org/packages/f7/cd/032df9a85d0cf6e1c01387434d0f757852093a2def30dc999f969dd7af71/aws-cdk.pipelines-1.204.0.tar.gz",
"platform": null,
"description": "# CDK Pipelines\n\n<!--BEGIN STABILITY BANNER-->---\n\n\n![End-of-Support](https://img.shields.io/badge/End--of--Support-critical.svg?style=for-the-badge)\n\n> AWS CDK v1 has reached End-of-Support on 2023-06-01.\n> This package is no longer being updated, and users should migrate to AWS CDK v2.\n>\n> For more information on how to migrate, see the [*Migrating to AWS CDK v2* guide](https://docs.aws.amazon.com/cdk/v2/guide/migrating-v2.html).\n\n---\n<!--END STABILITY BANNER-->\n\nA construct library for painless Continuous Delivery of CDK applications.\n\nCDK Pipelines is an *opinionated construct library*. It is purpose-built to\ndeploy one or more copies of your CDK applications using CloudFormation with a\nminimal amount of effort on your part. It is *not* intended to support arbitrary\ndeployment pipelines, and very specifically it is not built to use CodeDeploy to\napplications to instances, or deploy your custom-built ECR images to an ECS\ncluster directly: use CDK file assets with CloudFormation Init for instances, or\nCDK container assets for ECS clusters instead.\n\nGive the CDK Pipelines way of doing things a shot first: you might find it does\neverything you need. If you want or need more control, we recommend you drop\ndown to using the `aws-codepipeline` construct library directly.\n\n> This module contains two sets of APIs: an **original** and a **modern** version of\n> CDK Pipelines. The *modern* API has been updated to be easier to work with and\n> customize, and will be the preferred API going forward. The *original* version\n> of the API is still available for backwards compatibility, but we recommend migrating\n> to the new version if possible.\n>\n> Compared to the original API, the modern API: has more sensible defaults; is\n> more flexible; supports parallel deployments; supports multiple synth inputs;\n> allows more control of CodeBuild project generation; supports deployment\n> engines other than CodePipeline.\n>\n> The README for the original API, as well as a migration guide, can be found in [our GitHub repository](https://github.com/aws/aws-cdk/blob/master/packages/@aws-cdk/pipelines/ORIGINAL_API.md).\n\n## At a glance\n\nDeploying your application continuously starts by defining a\n`MyApplicationStage`, a subclass of `Stage` that contains the stacks that make\nup a single copy of your application.\n\nYou then define a `Pipeline`, instantiate as many instances of\n`MyApplicationStage` as you want for your test and production environments, with\ndifferent parameters for each, and calling `pipeline.addStage()` for each of\nthem. You can deploy to the same account and Region, or to a different one,\nwith the same amount of code. The *CDK Pipelines* library takes care of the\ndetails.\n\nCDK Pipelines supports multiple *deployment engines* (see\n[Using a different deployment engine](#using-a-different-deployment-engine)),\nand comes with a deployment engine that deploys CDK apps using AWS CodePipeline.\nTo use the CodePipeline engine, define a `CodePipeline` construct. The following\nexample creates a CodePipeline that deploys an application from GitHub:\n\n```python\n# The stacks for our app are minimally defined here. 
The internals of these\n# stacks aren't important, except that DatabaseStack exposes an attribute\n# \"table\" for a database table it defines, and ComputeStack accepts a reference\n# to this table in its properties.\n#\nclass DatabaseStack(Stack):\n\n def __init__(self, scope, id):\n super().__init__(scope, id)\n self.table = dynamodb.Table(self, \"Table\",\n partition_key=dynamodb.Attribute(name=\"id\", type=dynamodb.AttributeType.STRING)\n )\n\nclass ComputeStack(Stack):\n def __init__(self, scope, id, *, table):\n super().__init__(scope, id)\n\n#\n# Stack to hold the pipeline\n#\nclass MyPipelineStack(Stack):\n def __init__(self, scope, id, *, description=None, env=None, stackName=None, tags=None, synthesizer=None, terminationProtection=None, analyticsReporting=None):\n super().__init__(scope, id, description=description, env=env, stackName=stackName, tags=tags, synthesizer=synthesizer, terminationProtection=terminationProtection, analyticsReporting=analyticsReporting)\n\n pipeline = pipelines.CodePipeline(self, \"Pipeline\",\n synth=pipelines.ShellStep(\"Synth\",\n # Use a connection created using the AWS console to authenticate to GitHub\n # Other sources are available.\n input=pipelines.CodePipelineSource.connection(\"my-org/my-app\", \"main\",\n connection_arn=\"arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41\"\n ),\n commands=[\"npm ci\", \"npm run build\", \"npx cdk synth\"\n ]\n )\n )\n\n # 'MyApplication' is defined below. Call `addStage` as many times as\n # necessary with any account and region (may be different from the\n # pipeline's).\n pipeline.add_stage(MyApplication(self, \"Prod\",\n env=cdk.Environment(\n account=\"123456789012\",\n region=\"eu-west-1\"\n )\n ))\n\n#\n# Your application\n#\n# May consist of one or more Stacks (here, two)\n#\n# By declaring our DatabaseStack and our ComputeStack inside a Stage,\n# we make sure they are deployed together, or not at all.\n#\nclass MyApplication(Stage):\n def __init__(self, scope, id, *, env=None, outdir=None):\n super().__init__(scope, id, env=env, outdir=outdir)\n\n db_stack = DatabaseStack(self, \"Database\")\n ComputeStack(self, \"Compute\",\n table=db_stack.table\n )\n\n# In your main file\nMyPipelineStack(self, \"PipelineStack\",\n env=cdk.Environment(\n account=\"123456789012\",\n region=\"eu-west-1\"\n )\n)\n```\n\nThe pipeline is **self-mutating**, which means that if you add new\napplication stages in the source code, or new stacks to `MyApplication`, the\npipeline will automatically reconfigure itself to deploy those new stages and\nstacks.\n\n(Note that you have to *bootstrap* all environments before the above code\nwill work, and switch on \"Modern synthesis\" if you are using\nCDKv1. See the section **CDK Environment Bootstrapping** below for\nmore information).\n\n## Provisioning the pipeline\n\nTo provision the pipeline you have defined, make sure the target environment\nhas been bootstrapped (see below), and then deploy the\n`PipelineStack` *once*. 
Afterwards, the pipeline will keep itself up-to-date.\n\n> **Important**: be sure to `git commit` and `git push` before deploying the\n> Pipeline stack using `cdk deploy`!\n>\n> The reason is that the pipeline will start deploying and self-mutating\n> right away based on the sources in the repository, so the sources it finds\n> in there should be the ones you want it to find.\n\nRun the following commands to get the pipeline going:\n\n```console\n$ git commit -a\n$ git push\n$ cdk deploy PipelineStack\n```\n\nAdministrative permissions to the account are only necessary up until\nthis point. We recommend you remove access to these credentials after doing this.\n\n### Working on the pipeline\n\nThe self-mutation feature of the Pipeline might at times get in the way\nof the pipeline development workflow. Each change to the pipeline must be pushed\nto git; otherwise, after the pipeline is updated using `cdk deploy`, it will\nautomatically revert to the state found in git.\n\nTo make development more convenient, the self-mutation feature can be turned\noff temporarily by passing the `selfMutation: false` property, for example:\n\n```python\n# Modern API\nmodern_pipeline = pipelines.CodePipeline(self, \"Pipeline\",\n self_mutation=False,\n synth=pipelines.ShellStep(\"Synth\",\n input=pipelines.CodePipelineSource.connection(\"my-org/my-app\", \"main\",\n connection_arn=\"arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41\"\n ),\n commands=[\"npm ci\", \"npm run build\", \"npx cdk synth\"\n ]\n )\n)\n\n# Original API\ncloud_assembly_artifact = codepipeline.Artifact()\noriginal_pipeline = pipelines.CdkPipeline(self, \"Pipeline\",\n self_mutating=False,\n cloud_assembly_artifact=cloud_assembly_artifact\n)\n```\n\n## Defining the pipeline\n\nThis section of the documentation describes the AWS CodePipeline engine,\nwhich comes with this library. If you want to use a different deployment\nengine, read the section\n[Using a different deployment engine](#using-a-different-deployment-engine) below.\n\n### Synth and sources\n\nTo define a pipeline, instantiate a `CodePipeline` construct from the\n`@aws-cdk/pipelines` module. It takes one argument, a `synth` step, which is\nexpected to produce the CDK Cloud Assembly as its single output (the contents of\nthe `cdk.out` directory after running `cdk synth`). \"Steps\" are arbitrary\nactions in the pipeline, typically used to run scripts or commands.\n\nFor the synth, use a `ShellStep` and specify the commands necessary to install\ndependencies, the CDK CLI, build your project and run `cdk synth`; the specific\ncommands required will depend on the programming language you are using. For a\ntypical NPM-based project, the synth will look like this:\n\n```python\n# source: pipelines.IFileSetProducer\n# the repository source\n\npipeline = pipelines.CodePipeline(self, \"Pipeline\",\n synth=pipelines.ShellStep(\"Synth\",\n input=source,\n commands=[\"npm ci\", \"npm run build\", \"npx cdk synth\"\n ]\n )\n)\n```\n\nThe pipeline assumes that your `ShellStep` will produce a `cdk.out`\ndirectory in the root, containing the CDK cloud assembly. 
If your\nCDK project lives in a subdirectory, be sure to adjust the\n`primaryOutputDirectory` to match:\n\n```python\n# source: pipelines.IFileSetProducer\n# the repository source\n\npipeline = pipelines.CodePipeline(self, \"Pipeline\",\n synth=pipelines.ShellStep(\"Synth\",\n input=source,\n commands=[\"cd mysubdir\", \"npm ci\", \"npm run build\", \"npx cdk synth\"\n ],\n primary_output_directory=\"mysubdir/cdk.out\"\n )\n)\n```\n\nThe underlying `@aws-cdk/aws-codepipeline.Pipeline` construct will be produced\nwhen `app.synth()` is called. You can also force it to be produced\nearlier by calling `pipeline.buildPipeline()`. After you've called\nthat method, you can inspect the constructs that were produced by\naccessing the properties of the `pipeline` object.\n\n
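As a minimal sketch (assuming the `pipeline` object from the examples above), forcing the build early and then inspecting the generated construct could look like this:\n\n```python\n# pipeline: pipelines.CodePipeline\n\n# Force the underlying resources to be created now, instead of at app.synth() time\npipeline.build_pipeline()\n\n# The generated CodePipeline construct is now available for inspection\ngenerated_pipeline = pipeline.pipeline\n```\n\n#### Commands for other languages and package managers\n\nThe commands you pass to `new ShellStep` will be very similar to the commands\nyou run on your own workstation to install dependencies and synth your CDK\nproject. Here are some (non-exhaustive) examples of what those commands might\nlook like in a number of different situations.\n\nFor Yarn, the install commands are different:\n\n```python\n# source: pipelines.IFileSetProducer\n# the repository source\n\npipeline = pipelines.CodePipeline(self, \"Pipeline\",\n synth=pipelines.ShellStep(\"Synth\",\n input=source,\n commands=[\"yarn install --frozen-lockfile\", \"yarn build\", \"npx cdk synth\"\n ]\n )\n)\n```\n\nFor Python projects, remember to install the CDK CLI globally (as\nthere is no `package.json` to automatically install it for you):\n\n```python\n# source: pipelines.IFileSetProducer\n# the repository source\n\npipeline = pipelines.CodePipeline(self, \"Pipeline\",\n synth=pipelines.ShellStep(\"Synth\",\n input=source,\n commands=[\"pip install -r requirements.txt\", \"npm install -g aws-cdk\", \"cdk synth\"\n ]\n )\n)\n```\n\nFor Java projects, remember to install the CDK CLI globally (as\nthere is no `package.json` to automatically install it for you);\nthe Maven compilation step is automatically executed for you\nwhen you run `cdk synth`:\n\n```python\n# source: pipelines.IFileSetProducer\n# the repository source\n\npipeline = pipelines.CodePipeline(self, \"Pipeline\",\n synth=pipelines.ShellStep(\"Synth\",\n input=source,\n commands=[\"npm install -g aws-cdk\", \"cdk synth\"\n ]\n )\n)\n```\n\nYou can adapt these examples to your own situation.\n\n#### Migrating from buildspec.yml files\n\nYou may currently have the build instructions for your CodeBuild Projects in a\n`buildspec.yml` file in your source repository. 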
In addition to your build\ncommands, the CodeBuild Project's buildspec also controls some information that\nCDK Pipelines manages for you, like artifact identifiers, input artifact\nlocations, Docker authorization, and exported variables.\n\nSince there is no way in general for CDK Pipelines to modify the file in your\nsource repository, CDK Pipelines configures the BuildSpec directly on the\nCodeBuild Project, instead of loading it from the `buildspec.yml` file.\nAs a result, any change to it requires a pipeline self-mutation to take effect.\n\nTo avoid this, put your build instructions in a separate script, for example\n`build.sh`, and call that script from the build `commands` array:\n\n```python\n# source: pipelines.IFileSetProducer\n\n\npipeline = pipelines.CodePipeline(self, \"Pipeline\",\n synth=pipelines.ShellStep(\"Synth\",\n input=source,\n commands=[\"./build.sh\"\n ]\n )\n)\n```\n\n
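A hypothetical `build.sh` for an NPM-based project might then contain little more than the familiar commands (adjust to your own toolchain):\n\n```sh\n#!/bin/bash\nset -euo pipefail\n\nnpm ci\nnpm run build\nnpx cdk synth\n```\n\nDoing so keeps your exact build instructions in sync with your source code in\nthe source repository where it belongs, and provides a convenient build script\nfor developers at the same time.\n\n#### CodePipeline Sources\n\nIn CodePipeline, *Sources* define where the source of your application lives.\nWhen a change to the source is detected, the pipeline will start executing.\nSource objects can be created by factory methods on the `CodePipelineSource` class:\n\n##### GitHub, GitHub Enterprise, BitBucket using a connection\n\nThe recommended way of connecting to GitHub or BitBucket is by using a *connection*.\nYou will first use the AWS Console to authenticate to the source control\nprovider, and then use the connection ARN in your pipeline definition:\n\n```python\npipelines.CodePipelineSource.connection(\"org/repo\", \"branch\",\n connection_arn=\"arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41\"\n)\n```\n\n##### GitHub using OAuth\n\nYou can also authenticate to GitHub using a personal access token. This expects\nthat you've created a personal access token and stored it in Secrets Manager.\nBy default, the source object will look for a secret named **github-token**, but\nyou can change the name. The token should have the **repo** and **admin:repo_hook**\nscopes.\n\n```python\npipelines.CodePipelineSource.git_hub(\"org/repo\", \"branch\",\n # This is optional\n authentication=cdk.SecretValue.secrets_manager(\"my-token\")\n)\n```\n\n##### CodeCommit\n\nYou can use a CodeCommit repository as the source. Either create or import\nthe CodeCommit repository and then use `CodePipelineSource.codeCommit`\nto reference it:\n\n```python\nrepository = codecommit.Repository.from_repository_name(self, \"Repository\", \"my-repository\")\npipelines.CodePipelineSource.code_commit(repository, \"main\")\n```\n\n##### S3\n\nYou can use a zip file in S3 as the source of the pipeline. The pipeline will be\ntriggered every time the file in S3 is changed:\n\n```python\nbucket = s3.Bucket.from_bucket_name(self, \"Bucket\", \"my-bucket\")\npipelines.CodePipelineSource.s3(bucket, \"my/source.zip\")\n```\n\n##### ECR\n\nYou can use a Docker image in ECR as the source of the pipeline. The pipeline will be\ntriggered every time an image is pushed to ECR:\n\n```python\nrepository = ecr.Repository(self, \"Repository\")\npipelines.CodePipelineSource.ecr(repository)\n```\n\n#### Additional inputs\n\n`ShellStep` allows passing in more than one input: additional\ninputs will be placed in the directories you specify. 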
Any step that produces an\noutput file set can be used as an input, such as a `CodePipelineSource`, but\nalso other `ShellStep`s:\n\n```python\nprebuild = pipelines.ShellStep(\"Prebuild\",\n input=pipelines.CodePipelineSource.git_hub(\"myorg/repo1\", \"main\"),\n primary_output_directory=\"./build\",\n commands=[\"./build.sh\"]\n)\n\npipeline = pipelines.CodePipeline(self, \"Pipeline\",\n synth=pipelines.ShellStep(\"Synth\",\n input=pipelines.CodePipelineSource.git_hub(\"myorg/repo2\", \"main\"),\n additional_inputs={\n \"subdir\": pipelines.CodePipelineSource.git_hub(\"myorg/repo3\", \"main\"),\n \"../siblingdir\": prebuild\n },\n\n commands=[\"./build.sh\"]\n )\n)\n```\n\n### CDK application deployments\n\nAfter you have defined the pipeline and the `synth` step, you can add one or\nmore CDK `Stages` which will be deployed to their target environments. To do\nso, call `pipeline.addStage()` on the Stage object:\n\n```python\n# pipeline: pipelines.CodePipeline\n\n# Do this as many times as necessary with any account and region\n# The account and region may differ from the pipeline's.\npipeline.add_stage(MyApplicationStage(self, \"Prod\",\n env=cdk.Environment(\n account=\"123456789012\",\n region=\"eu-west-1\"\n )\n))\n```\n\nCDK Pipelines will automatically discover all `Stacks` in the given `Stage`\nobject, determine their dependency order, and add appropriate actions to the\npipeline to publish the assets referenced in those stacks and deploy the stacks\nin the right order.\n\nIf the `Stacks` are targeted at an environment in a different AWS account or\nRegion and that environment has been\n[bootstrapped](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html),\nCDK Pipelines will transparently make sure the IAM roles are set up\ncorrectly and any requisite replication Buckets are created.\n\n#### Deploying in parallel\n\nBy default, all applications added to CDK Pipelines by calling `addStage()` will\nbe deployed in sequence, one after the other. If you have a lot of stages, you can\nspeed up the pipeline by choosing to deploy some stages in parallel. You do this\nby calling `addWave()` instead of `addStage()`: a *wave* is a set of stages that\nare all deployed in parallel instead of sequentially. Waves themselves are still\ndeployed in sequence. 
For example, the following will deploy two copies of your\napplication to `eu-west-1` and `eu-central-1` in parallel:\n\n```python\n# pipeline: pipelines.CodePipeline\n\neurope_wave = pipeline.add_wave(\"Europe\")\neurope_wave.add_stage(MyApplicationStage(self, \"Ireland\",\n env=cdk.Environment(region=\"eu-west-1\")\n))\neurope_wave.add_stage(MyApplicationStage(self, \"Germany\",\n env=cdk.Environment(region=\"eu-central-1\")\n))\n```\n\n#### Deploying to other accounts / encrypting the Artifact Bucket\n\nCDK Pipelines can transparently deploy to other Regions and other accounts\n(provided those target environments have been\n[*bootstrapped*](https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html)).\nHowever, deploying to another account requires one additional piece of\nconfiguration: you need to set `crossAccountKeys: true` when creating the\npipeline.\n\nThis will encrypt the artifact bucket(s), but it incurs a cost for maintaining the\nKMS key.\n\nFor example:\n\n```python\npipeline = pipelines.CodePipeline(self, \"Pipeline\",\n # Encrypt artifacts, required for cross-account deployments\n cross_account_keys=True,\n synth=pipelines.ShellStep(\"Synth\",\n input=pipelines.CodePipelineSource.connection(\"my-org/my-app\", \"main\",\n connection_arn=\"arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41\"\n ),\n commands=[\"npm ci\", \"npm run build\", \"npx cdk synth\"\n ]\n )\n)\n```\n\n### Validation\n\nEvery `addStage()` and `addWave()` command takes additional options. As part of these options,\nyou can specify `pre` and `post` steps, which are arbitrary steps that run before or after\nthe contents of the stage or wave, respectively. You can use these to add validations like\nmanual or automated gates to your pipeline. We recommend putting manual approval gates in the set of `pre` steps, and automated approval gates in\nthe set of `post` steps.\n\nThe following example shows both an automated approval in the form of a `ShellStep`, and\na manual approval in the form of a `ManualApprovalStep` added to the pipeline. Both must\npass in order to promote from the `PreProd` to the `Prod` environment:\n\n```python\n# pipeline: pipelines.CodePipeline\n\npreprod = MyApplicationStage(self, \"PreProd\")\nprod = MyApplicationStage(self, \"Prod\")\n\npipeline.add_stage(preprod,\n post=[\n pipelines.ShellStep(\"Validate Endpoint\",\n commands=[\"curl -Ssf https://my.webservice.com/\"]\n )\n ]\n)\npipeline.add_stage(prod,\n pre=[\n pipelines.ManualApprovalStep(\"PromoteToProd\")\n ]\n)\n```\n\nYou can also specify steps to be executed at the stack level. 
To achieve this, you can specify the stack and step via the `stackSteps` property:\n\n```python\n# pipeline: pipelines.CodePipeline\nclass MyStacksStage(Stage):\n\n def __init__(self, scope, id, *, env=None, outdir=None):\n super().__init__(scope, id, env=env, outdir=outdir)\n self.stack1 = Stack(self, \"stack1\")\n self.stack2 = Stack(self, \"stack2\")\nprod = MyStacksStage(self, \"Prod\")\n\npipeline.add_stage(prod,\n stack_steps=[pipelines.StackSteps(\n stack=prod.stack1,\n pre=[pipelines.ManualApprovalStep(\"Pre-Stack Check\")], # Executed before stack is prepared\n change_set=[pipelines.ManualApprovalStep(\"ChangeSet Approval\")], # Executed after stack is prepared but before the stack is deployed\n post=[pipelines.ManualApprovalStep(\"Post-Deploy Check\")]\n ), pipelines.StackSteps(\n stack=prod.stack2,\n post=[pipelines.ManualApprovalStep(\"Post-Deploy Check\")]\n )]\n)\n```\n\nIf you specify multiple steps, they will execute in parallel by default. You can add dependencies between them\nif you wish to specify an order. To add a dependency, call `step.addStepDependency()`:\n\n```python\nfirst_step = pipelines.ManualApprovalStep(\"A\")\nsecond_step = pipelines.ManualApprovalStep(\"B\")\nsecond_step.add_step_dependency(first_step)\n```\n\nFor convenience, `Step.sequence()` will take an array of steps and add dependencies between adjacent steps,\nso that the whole list executes in order:\n\n```python\n# Step B will depend on step A, and step C will depend on step B\nordered_steps = pipelines.Step.sequence([\n pipelines.ManualApprovalStep(\"A\"),\n pipelines.ManualApprovalStep(\"B\"),\n pipelines.ManualApprovalStep(\"C\")\n])\n```\n\n#### Using CloudFormation Stack Outputs in approvals\n\nBecause many CloudFormation deployments result in the generation of resources with unpredictable\nnames, validations have support for reading back CloudFormation Outputs after a deployment. This\nmakes it possible to pass (for example) the generated URL of a load balancer to the test set.\n\nTo use Stack Outputs, expose the `CfnOutput` object you're interested in, and\npass it to `envFromCfnOutputs` of the `ShellStep`:\n\n```python\n# pipeline: pipelines.CodePipeline\nclass MyOutputStage(Stage):\n\n def __init__(self, scope, id, *, env=None, outdir=None):\n super().__init__(scope, id, env=env, outdir=outdir)\n self.load_balancer_address = CfnOutput(self, \"Output\", value=\"value\")\n\nlb_app = MyOutputStage(self, \"MyApp\")\npipeline.add_stage(lb_app,\n post=[\n pipelines.ShellStep(\"HitEndpoint\",\n env_from_cfn_outputs={\n # Make the load balancer address available as $URL inside the commands\n \"URL\": lb_app.load_balancer_address\n },\n commands=[\"curl -Ssf $URL\"]\n )\n ]\n)\n```\n\n#### Running scripts compiled during the synth step\n\nAs part of a validation, you probably want to run a test suite that's more\nelaborate than what can be expressed in a couple of lines of shell script.\nYou can bring additional files into the shell script validation by supplying\nthe `input` or `additionalInputs` property of `ShellStep`. 
The input can\nbe produced by the `Synth` step, or come from a source or any other build\nstep.\n\nHere's an example that captures an additional output directory in the synth\nstep and runs tests from there:\n\n```python\n# synth: pipelines.ShellStep\n\nstage = MyApplicationStage(self, \"MyApplication\")\npipeline = pipelines.CodePipeline(self, \"Pipeline\", synth=synth)\n\npipeline.add_stage(stage,\n post=[\n pipelines.ShellStep(\"Approve\",\n # Use the contents of the 'integ' directory from the synth step as the input\n input=synth.add_output_directory(\"integ\"),\n commands=[\"cd integ && ./run.sh\"]\n )\n ]\n)\n```\n\n### Customizing CodeBuild Projects\n\nCDK Pipelines will generate CodeBuild projects for each `ShellStep` you use, and it\nwill also generate CodeBuild projects to publish assets and perform the self-mutation\nof the pipeline. To control the various aspects of the CodeBuild projects that get\ngenerated, use a `CodeBuildStep` instead of a `ShellStep`. This class has a number\nof properties that allow you to customize various aspects of the projects:\n\n```python\n# vpc: ec2.Vpc\n# my_security_group: ec2.SecurityGroup\n\npipelines.CodeBuildStep(\"Synth\",\n # ...standard ShellStep props...\n commands=[],\n env={},\n\n # If you are using a CodeBuildStep explicitly, set the 'cdk.out' directory\n # to be the synth step's output.\n primary_output_directory=\"cdk.out\",\n\n # Control the name of the project\n project_name=\"MyProject\",\n\n # Control parts of the BuildSpec other than the regular 'build' and 'install' commands\n partial_build_spec=codebuild.BuildSpec.from_object({\n \"version\": \"0.2\"\n }),\n\n # Control the build environment\n build_environment=codebuild.BuildEnvironment(\n compute_type=codebuild.ComputeType.LARGE\n ),\n timeout=Duration.minutes(90),\n\n # Control Elastic Network Interface creation\n vpc=vpc,\n subnet_selection=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT),\n security_groups=[my_security_group],\n\n # Additional policy statements for the execution role\n role_policy_statements=[\n iam.PolicyStatement()\n ]\n)\n```\n\nYou can also configure defaults for *all* CodeBuild projects by passing `codeBuildDefaults`,\nor just for the synth, asset publishing, and self-mutation projects by passing `synthCodeBuildDefaults`,\n`assetPublishingCodeBuildDefaults`, or `selfMutationCodeBuildDefaults`:\n\n```python\n# vpc: ec2.Vpc\n# my_security_group: ec2.SecurityGroup\n\npipelines.CodePipeline(self, \"Pipeline\",\n # Standard CodePipeline properties\n synth=pipelines.ShellStep(\"Synth\",\n input=pipelines.CodePipelineSource.connection(\"my-org/my-app\", \"main\",\n connection_arn=\"arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41\"\n ),\n commands=[\"npm ci\", \"npm run build\", \"npx cdk synth\"\n ]\n ),\n\n # Defaults for all CodeBuild projects\n code_build_defaults=pipelines.CodeBuildOptions(\n # Prepend commands and configuration to all projects\n partial_build_spec=codebuild.BuildSpec.from_object({\n \"version\": \"0.2\"\n }),\n\n # Control the build environment\n build_environment=codebuild.BuildEnvironment(\n compute_type=codebuild.ComputeType.LARGE\n ),\n\n # Control Elastic Network Interface creation\n vpc=vpc,\n subnet_selection=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT),\n security_groups=[my_security_group],\n\n # Additional policy statements for the execution role\n role_policy=[\n iam.PolicyStatement()\n ]\n ),\n\n 
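# Defaults for only the synth, asset-publishing, and self-mutation\n # projects, respectively; where given, these are applied on top of\n # the general 'codeBuildDefaults' above\n 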
synth_code_build_defaults=pipelines.CodeBuildOptions(),\n asset_publishing_code_build_defaults=pipelines.CodeBuildOptions(),\n self_mutation_code_build_defaults=pipelines.CodeBuildOptions()\n)\n```\n\n### Arbitrary CodePipeline actions\n\nIf you want to add a type of CodePipeline action to the CDK Pipeline that\ndoesn't have a matching class yet, you can define your own step class that extends\n`Step` and implements `ICodePipelineActionFactory`.\n\nHere's an example that adds a Jenkins step:\n\n```python\nclass MyJenkinsStep(pipelines.Step, pipelines.ICodePipelineActionFactory):\n def __init__(self, provider, input):\n super().__init__(\"MyJenkinsStep\")\n\n # Store the construction parameters for use in produce_action() below\n self.provider = provider\n self.input = input\n\n # This is necessary if your step accepts parameters, like environment variables,\n # that may contain outputs from other steps. It doesn't matter what the\n # structure is, as long as it contains the values that may contain outputs.\n self.discover_referenced_outputs({\n \"env\": {}\n })\n\n def produce_action(self, stage, *, scope, actionName, runOrder, variablesNamespace=None, artifacts, fallbackArtifact=None, pipeline, codeBuildDefaults=None, beforeSelfMutation=None):\n\n # This is where you control what type of Action gets added to the\n # CodePipeline\n stage.add_action(cpactions.JenkinsAction(\n # Copy 'actionName' and 'runOrder' from the options\n action_name=actionName,\n run_order=runOrder,\n\n # Jenkins-specific configuration\n type=cpactions.JenkinsActionType.TEST,\n jenkins_provider=self.provider,\n project_name=\"MyJenkinsProject\",\n\n # Translate the FileSet into a codepipeline.Artifact\n inputs=[artifacts.to_code_pipeline(self.input)]\n ))\n\n return pipelines.CodePipelineActionFactoryResult(run_orders_consumed=1)\n```\n\n
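A hypothetical usage of such a step (assuming `provider` is a `cpactions.JenkinsProvider` defined elsewhere, and reusing the synth step's primary output as the input to test) might then look like this:\n\n```python\n# pipeline: pipelines.CodePipeline\n# provider: cpactions.JenkinsProvider\n# synth: pipelines.ShellStep\n\nstage = MyApplicationStage(self, \"MyApplication\")\npipeline.add_stage(stage,\n post=[MyJenkinsStep(provider, synth.primary_output)]\n)\n```\n\n## Using Docker in the pipeline\n\nDocker can be used in three different places in the pipeline:\n\n* If you are using Docker image assets in your application stages: Docker will\n run in the asset publishing projects.\n* If you are using Docker image assets in your stack (for example as\n images for your CodeBuild projects): Docker will run in the self-mutate project.\n* If you are using Docker to bundle file assets anywhere in your project (for\n example, if you are using such construct libraries as\n `@aws-cdk/aws-lambda-nodejs`): Docker will run in the\n *synth* project.\n\nFor the first case, you don't need to do anything special. For the other two cases,\nyou need to make sure that **privileged mode** is enabled on the correct CodeBuild\nprojects, so that Docker can run correctly. The following sections describe how to do\nthat.\n\nYou may also need to authenticate to Docker registries to avoid being throttled.\nSee the section **Authenticating to Docker registries** below for information on how to do\nthat.\n\n### Using Docker image assets in the pipeline\n\nIf your `PipelineStack` is using Docker image assets (as opposed to the application\nstacks the pipeline is deploying), for example by the use of `LinuxBuildImage.fromAsset()`,\nyou need to pass `dockerEnabledForSelfMutation: true` to the pipeline. 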
For example:\n\n```python\npipeline = pipelines.CodePipeline(self, \"Pipeline\",\n synth=pipelines.ShellStep(\"Synth\",\n input=pipelines.CodePipelineSource.connection(\"my-org/my-app\", \"main\",\n connection_arn=\"arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41\"\n ),\n commands=[\"npm ci\", \"npm run build\", \"npx cdk synth\"]\n ),\n\n # Turn this on because the pipeline uses Docker image assets\n docker_enabled_for_self_mutation=True\n)\n\npipeline.add_wave(\"MyWave\",\n post=[\n pipelines.CodeBuildStep(\"RunApproval\",\n commands=[\"command-from-image\"],\n build_environment=codebuild.BuildEnvironment(\n # The use of a Docker image asset in the pipeline requires turning on\n # 'dockerEnabledForSelfMutation'.\n build_image=codebuild.LinuxBuildImage.from_asset(self, \"Image\",\n directory=\"./docker-image\"\n )\n )\n )\n ]\n)\n```\n\n> **Important**: You must turn on the `dockerEnabledForSelfMutation` flag,\n> commit and allow the pipeline to self-update *before* adding the actual\n> Docker asset.\n\n### Using bundled file assets\n\nIf you are using asset bundling anywhere (as is automatically done for you\nif you add a construct like `@aws-cdk/aws-lambda-nodejs`), you need to pass\n`dockerEnabledForSynth: true` to the pipeline. For example:\n\n```python\npipeline = pipelines.CodePipeline(self, \"Pipeline\",\n synth=pipelines.ShellStep(\"Synth\",\n input=pipelines.CodePipelineSource.connection(\"my-org/my-app\", \"main\",\n connection_arn=\"arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41\"\n ),\n commands=[\"npm ci\", \"npm run build\", \"npx cdk synth\"]\n ),\n\n # Turn this on because the application uses bundled file assets\n docker_enabled_for_synth=True\n)\n```\n\n> **Important**: You must turn on the `dockerEnabledForSynth` flag,\n> commit and allow the pipeline to self-update *before* adding the actual\n> Docker asset.\n\n### Authenticating to Docker registries\n\nYou can specify credentials to use for authenticating to Docker registries as part of the\npipeline definition. 
This can be useful if any Docker image assets \u2014 in the pipeline or\nany of the application stages \u2014 require authentication, either due to being in a\ndifferent environment (e.g., ECR repo) or to avoid throttling (e.g., DockerHub).\n\n```python\ndocker_hub_secret = secretsmanager.Secret.from_secret_complete_arn(self, \"DHSecret\", \"arn:aws:...\")\ncustom_reg_secret = secretsmanager.Secret.from_secret_complete_arn(self, \"CRSecret\", \"arn:aws:...\")\nrepo1 = ecr.Repository.from_repository_arn(self, \"Repo\", \"arn:aws:ecr:eu-west-1:0123456789012:repository/Repo1\")\nrepo2 = ecr.Repository.from_repository_arn(self, \"Repo2\", \"arn:aws:ecr:eu-west-1:0123456789012:repository/Repo2\")\n\npipeline = pipelines.CodePipeline(self, \"Pipeline\",\n docker_credentials=[\n pipelines.DockerCredential.docker_hub(docker_hub_secret),\n pipelines.DockerCredential.custom_registry(\"dockerregistry.example.com\", custom_reg_secret),\n pipelines.DockerCredential.ecr([repo1, repo2])\n ],\n synth=pipelines.ShellStep(\"Synth\",\n input=pipelines.CodePipelineSource.connection(\"my-org/my-app\", \"main\",\n connection_arn=\"arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41\"\n ),\n commands=[\"npm ci\", \"npm run build\", \"npx cdk synth\"]\n )\n)\n```\n\nFor authenticating to Docker registries that require a username and password combination\n(like DockerHub), create a Secrets Manager Secret with fields named `username`\nand `secret`, and import it (the field names can be customized).\n\n
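For instance, such a secret could be created with the AWS CLI (a sketch; the secret name `dockerhub-credentials` is just an example):\n\n```console\n$ aws secretsmanager create-secret \\\n    --name dockerhub-credentials \\\n    --secret-string '{\"username\": \"my-user\", \"secret\": \"my-access-token\"}'\n```\n\nAuthentication to ECR repositories is done using the execution role of the\nrelevant CodeBuild job. Both types of credentials can be provided with an\noptional role to assume before requesting the credentials.\n\nBy default, the Docker credentials provided to the pipeline will be available to\nthe **Synth**, **Self-Update**, and **Asset Publishing** actions within the\npipeline. The scope of the credentials can be limited via the `DockerCredentialUsage` option.\n\n```python\ndocker_hub_secret = secretsmanager.Secret.from_secret_complete_arn(self, \"DHSecret\", \"arn:aws:...\")\n# Only the image asset publishing actions will be granted read access to the secret.\ncreds = pipelines.DockerCredential.docker_hub(docker_hub_secret, usages=[pipelines.DockerCredentialUsage.ASSET_PUBLISHING])\n```\n\n## CDK Environment Bootstrapping\n\nAn *environment* is an *(account, region)* pair where you want to deploy a\nCDK stack (see\n[Environments](https://docs.aws.amazon.com/cdk/latest/guide/environments.html)\nin the CDK Developer Guide). In a Continuous Deployment pipeline, there are\nat least two environments involved: the environment where the pipeline is\nprovisioned, and the environment where you want to deploy the application (or\ndifferent stages of the application). These can be the same, though best\npractices recommend you isolate your different application stages from each\nother in different AWS accounts or regions.\n\nBefore you can provision the pipeline, you have to *bootstrap* the environment you want\nto create it in. If you are deploying your application to different environments, you\nalso have to bootstrap those and be sure to add a *trust* relationship.\n\nAfter you have bootstrapped an environment and created a pipeline that deploys\nto it, it's important that you don't delete the stack or change its *Qualifier*,\nor future deployments to this environment will fail. 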
If you want to upgrade\nthe bootstrap stack to a newer version, do that by updating it in-place.\n\n> This library requires the *modern* bootstrapping stack which has\n> been updated specifically to support cross-account continuous delivery.\n>\n> If you are using CDKv2, you do not need to do anything else. Modern\n> bootstrapping and modern stack synthesis (also known as \"default stack\n> synthesis\") are the default.\n>\n> If you are using CDKv1, you need to opt in to modern bootstrapping and\n> modern stack synthesis using a feature flag. Make sure `cdk.json` includes:\n>\n> ```json\n> {\n> \"context\": {\n> \"@aws-cdk/core:newStyleStackSynthesis\": true\n> }\n> }\n> ```\n>\n> And be sure to run `cdk bootstrap` in the same directory as the `cdk.json`\n> file.\n\nTo bootstrap an environment for provisioning the pipeline:\n\n```console\n$ npx cdk bootstrap \\\n [--profile admin-profile-1] \\\n --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \\\n aws://111111111111/us-east-1\n```\n\nTo bootstrap a different environment for deploying CDK applications into using\na pipeline in account `111111111111`:\n\n```console\n$ npx cdk bootstrap \\\n [--profile admin-profile-2] \\\n --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \\\n --trust 111111111111 \\\n aws://222222222222/us-east-2\n```\n\nIf you only want to trust an account to do lookups (e.g., when your CDK application has a\n`Vpc.fromLookup()` call), use the option `--trust-for-lookup`:\n\n```console\n$ npx cdk bootstrap \\\n [--profile admin-profile-2] \\\n --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \\\n --trust-for-lookup 111111111111 \\\n aws://222222222222/us-east-2\n```\n\nThese command lines, explained:\n\n* `npx`: means to use the CDK CLI from the current NPM install. If you are using\n a global install of the CDK CLI, leave this out.\n* `--profile`: should indicate a profile with administrator privileges that has\n permissions to provision a pipeline in the indicated account. You can leave this\n flag out if either the AWS default credentials or the `AWS_*` environment\n variables confer these permissions.\n* `--cloudformation-execution-policies`: ARN of the managed policy that future CDK\n deployments should execute with. By default this is `AdministratorAccess`, but\n if you also specify the `--trust` flag to give another Account permissions to\n deploy into the current account, you must specify a value here.\n* `--trust`: indicates which other account(s) should have permissions to deploy\n CDK applications into this account. In this case we indicate the Pipeline's account,\n but you could also use this for developer accounts (don't do that for production\n application accounts though!).\n* `--trust-for-lookup`: gives a more limited set of permissions to the\n trusted account, only allowing it to look up values such as availability zones, EC2 images and\n VPCs. `--trust-for-lookup` does not give permissions to modify anything in the account.\n Note that `--trust` implies `--trust-for-lookup`, so you don't need to specify\n the same account twice.\n* `aws://222222222222/us-east-2`: the account and region we're bootstrapping.\n\n> Be aware that anyone who has access to the trusted Accounts **effectively has all\n> permissions conferred by the configured CloudFormation execution policies**,\n> allowing them to do things like read arbitrary S3 buckets and create arbitrary\n> infrastructure in the bootstrapped account. 
Restrict the list of `--trust`ed Accounts,\n> or restrict the policies configured by `--cloudformation-execution-policies`.\n\n<br>\n\n> **Security tip**: we recommend that you use administrative credentials to an\n> account only to bootstrap it and provision the initial pipeline. Otherwise,\n> access to administrative credentials should be dropped as soon as possible.\n\n<br>\n\n> **On the use of AdministratorAccess**: The use of the `AdministratorAccess` policy\n> ensures that your pipeline can deploy every type of AWS resource to your account.\n> Make sure you trust all the code and dependencies that make up your CDK app.\n> Check with the appropriate department within your organization to decide on the\n> proper policy to use.\n>\n> If your policy includes permissions to create or attach permissions to a role,\n> developers can escalate their privileges by attaching more permissive permissions.\n> Thus, we recommend implementing a [permissions boundary](https://aws.amazon.com/premiumsupport/knowledge-center/iam-permission-boundaries/)\n> in the CDK Execution role. To do this, you can bootstrap using the `--template` option with\n> [a customized template](https://github.com/aws-samples/aws-bootstrap-kit-examples/blob/ba28a97d289128281bc9483bcba12c1793f2c27a/source/1-SDLC-organization/lib/cdk-bootstrap-template.yml#L395) that contains a permission boundary.\n\n### Migrating from old bootstrap stack\n\nThe bootstrap stack is a CloudFormation stack in your account named\n**CDKToolkit** that provisions a set of resources required for the CDK\nto deploy into that environment.\n\nThe \"new\" bootstrap stack (obtained by running `cdk bootstrap` with\n`CDK_NEW_BOOTSTRAP=1`) is slightly more elaborate than the \"old\" stack. It\ncontains:\n\n* An S3 bucket and ECR repository with predictable names, so that we can reference\n assets in these storage locations *without* the use of CloudFormation template\n parameters.\n* A set of roles with permissions to access these asset locations and to execute\n CloudFormation, assumable from whatever accounts you specify under `--trust`.\n\nIt is possible and safe to migrate from the old bootstrap stack to the new\nbootstrap stack. This will create a new S3 file asset bucket in your account\nand orphan the old bucket. You should manually delete the orphaned bucket\nafter you are sure you have redeployed all CDK applications and there are no\nmore references to the old asset bucket.\n\n## Context Lookups\n\nYou might be using CDK constructs that need to look up [runtime\ncontext](https://docs.aws.amazon.com/cdk/latest/guide/context.html#context_methods),\nwhich is information from the target AWS Account and Region the CDK needs to\nsynthesize CloudFormation templates appropriate for that environment. Examples\nof this kind of context lookup are the number of Availability Zones available\nto you, a Route53 Hosted Zone ID, or the ID of an AMI in a given region. This\ninformation is automatically looked up when you run `cdk synth`.\n\nBy default, a `cdk synth` performed in a pipeline will not have permissions\nto perform these lookups, and the lookups will fail. This is by design.\n\n**Our recommended way of using lookups** is by running `cdk synth` on the\ndeveloper workstation and checking in the `cdk.context.json` file, which\ncontains the results of the context lookups. This will make sure your\nsynthesized infrastructure is consistent and repeatable. 
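For example, the day-to-day workflow under this recommendation might look like the following (assuming an NPM-based project whose app performs lookups):\n\n```console\n$ npx cdk synth   # performs the lookups and caches the results in cdk.context.json\n$ git add cdk.context.json\n$ git commit -m \"Update cached context values\"\n```\n\n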
If you do not commit\n`cdk.context.json`, the results of the lookups may suddenly be different in\nunexpected ways, and even produce results that cannot be deployed or will cause\ndata loss. To give an account permissions to perform lookups against an\nenvironment, without being able to deploy to it and make changes, run\n`cdk bootstrap --trust-for-lookup=<account>`.\n\nIf you want to use lookups directly from the pipeline, you either need to accept\nthe risk of nondeterminism, or make sure you save and load the\n`cdk.context.json` file somewhere between synth runs. In either case, you should\ngive the synth CodeBuild execution role permissions to assume the bootstrapped\nlookup roles. As an example, doing so would look like this:\n\n```python\npipelines.CodePipeline(self, \"Pipeline\",\n synth=pipelines.CodeBuildStep(\"Synth\",\n input=pipelines.CodePipelineSource.connection(\"my-org/my-app\", \"main\",\n connection_arn=\"arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41\"\n ),\n commands=[\"...\", \"npm ci\", \"npm run build\", \"npx cdk synth\", \"...\"\n ],\n role_policy_statements=[\n iam.PolicyStatement(\n actions=[\"sts:AssumeRole\"],\n resources=[\"*\"],\n conditions={\n \"StringEquals\": {\n \"iam:ResourceTag/aws-cdk:bootstrap-role\": \"lookup\"\n }\n }\n )\n ]\n )\n)\n```\n\nThe above example requires that the target environments have all\nbeen bootstrapped with bootstrap stack version `8`, released with\nCDK CLI `1.114.0`.\n\n## Security Considerations\n\nIt's important to stay safe while employing Continuous Delivery. The CDK Pipelines\nlibrary comes with secure defaults to the best of our ability, but by its\nvery nature the library cannot take care of everything.\n\nWe therefore expect you to mind the following:\n\n* Maintain dependency hygiene and vet 3rd-party software you use. Any software you\n run on your build machine has the ability to change the infrastructure that gets\n deployed. Be careful with the software you depend on.\n* Use dependency locking to prevent accidental upgrades! The default `CdkSynths` that\n come with CDK Pipelines will expect `package-lock.json` and `yarn.lock` to\n ensure your dependencies are the ones you expect.\n* Credentials to production environments should be short-lived. After\n bootstrapping and the initial pipeline provisioning, there is no more need for\n developers to have access to any of the account credentials; all further\n changes can be deployed through git. Avoid the chances of credentials leaking\n by not having them in the first place!\n\n### Confirm permissions broadening\n\nTo keep tabs on the security impact of changes going out through your pipeline,\nyou can insert a security check before any stage deployment. This security check\nwill check if the upcoming deployment would add any new IAM permissions or\nsecurity group rules, and if so pause the pipeline and require you to confirm\nthe changes.\n\nThe security check will appear as two distinct actions in your pipeline: first\na CodeBuild project that runs `cdk diff` on the stage that's about to be deployed,\nfollowed by a Manual Approval action that pauses the pipeline. If it turns out\nthat no new IAM permissions or security group rules would be added by the deployment,\nthe manual approval step is automatically satisfied. 
The pipeline will look like this:\n\n```txt\nPipeline\n\u251c\u2500\u2500 ...\n\u251c\u2500\u2500 MyApplicationStage\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 MyApplicationSecurityCheck // Security Diff Action\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 MyApplicationManualApproval // Manual Approval Action\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 Stack.Prepare\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 Stack.Deploy\n\u2514\u2500\u2500 ...\n```\n\nYou can insert the security check by using a `ConfirmPermissionsBroadening` step:\n\n```python\n# pipeline: pipelines.CodePipeline\n\nstage = MyApplicationStage(self, \"MyApplication\")\npipeline.add_stage(stage,\n pre=[\n pipelines.ConfirmPermissionsBroadening(\"Check\", stage=stage)\n ]\n)\n```\n\nTo get notified when there is a change that needs your manual approval,\ncreate an SNS Topic, subscribe your own email address, and pass it in as\nthe `notificationTopic` property:\n\n```python\n# pipeline: pipelines.CodePipeline\n\ntopic = sns.Topic(self, \"SecurityChangesTopic\")\ntopic.add_subscription(subscriptions.EmailSubscription(\"test@email.com\"))\n\nstage = MyApplicationStage(self, \"MyApplication\")\npipeline.add_stage(stage,\n pre=[\n pipelines.ConfirmPermissionsBroadening(\"Check\",\n stage=stage,\n notification_topic=topic\n )\n ]\n)\n```\n\n**Note**: Manual approval notifications only apply when an application has the security\ncheck enabled.\n\n## Using a different deployment engine\n\nCDK Pipelines supports multiple *deployment engines*, but this module vends a\nconstruct for only one such engine: AWS CodePipeline. It is also possible to\nuse CDK Pipelines to build pipelines backed by other deployment engines.\n\nHere is a list of CDK Libraries that integrate CDK Pipelines with\nalternative deployment engines:\n\n* GitHub Workflows: [`cdk-pipelines-github`](https://github.com/cdklabs/cdk-pipelines-github)\n\n## Troubleshooting\n\nHere are some common errors you may encounter while using this library.\n\n### Pipeline: Internal Failure\n\nIf you see the following error during deployment of your pipeline:\n\n```plaintext\nCREATE_FAILED | AWS::CodePipeline::Pipeline | Pipeline/Pipeline\nInternal Failure\n```\n\nThere's something wrong with your GitHub access token. It might be missing, or not have the\nright permissions to access the repository you're trying to access.\n\n### Key: Policy contains a statement with one or more invalid principals\n\nIf you see the following error during deployment of your pipeline:\n\n```plaintext\nCREATE_FAILED | AWS::KMS::Key | Pipeline/Pipeline/ArtifactsBucketEncryptionKey\nPolicy contains a statement with one or more invalid principals.\n```\n\nOne of the target (account, region) environments has not been bootstrapped\nwith the new bootstrap stack. Check your target environments and make sure\nthey are all bootstrapped.\n\n### Message: no matching base directory path found for cdk.out\n\nIf you see this error during the **Synth** step, it means that CodeBuild\nis expecting to find a `cdk.out` directory in the root of your CodeBuild project,\nbut the directory wasn't there. 
There are two common causes for this:\n\n* `cdk synth` is not being executed: `cdk synth` used to be run\n implicitly for you, but you now have to explicitly include the command.\n For NPM-based projects, add `npx cdk synth` to the end of the `commands`\n property; for other languages, add `npm install -g aws-cdk` and `cdk synth`.\n* Your CDK project lives in a subdirectory: you added a `cd <somedirectory>` command\n to the list of commands; don't forget to tell the `ShellStep` about the\n different location of `cdk.out`, by passing `primaryOutputDirectory: '<somedirectory>/cdk.out'`.\n\n### <Stack> is in ROLLBACK_COMPLETE state and can not be updated\n\nIf you see the following error during execution of your pipeline:\n\n```plaintext\nStack ... is in ROLLBACK_COMPLETE state and can not be updated. (Service:\nAmazonCloudFormation; Status Code: 400; Error Code: ValidationError; Request\nID: ...)\n```\n\nThe stack failed its previous deployment, and is in a non-retryable state.\nGo into the CloudFormation console, delete the stack, and retry the deployment.\n\n### Cannot find module 'xxxx' or its corresponding type declarations\n\nYou may see this if you are using TypeScript or other NPM-based languages,\nwhen using NPM 7 on your workstation (where you generate `package-lock.json`)\nand NPM 6 on the CodeBuild image used for synthesizing.\n\nIt looks like NPM 7 has started writing less information to `package-lock.json`,\nwhich causes NPM 6, reading that same file, to no longer install all required packages.\n\nMake sure you are using the same NPM version everywhere: either downgrade your\nworkstation's version or upgrade the CodeBuild version.\n\n### Cannot find module '.../check-node-version.js' (MODULE_NOT_FOUND)\n\nThe above error may be produced by `npx` when executing the CDK CLI, or any\nproject that uses the AWS SDK for JavaScript, without the target application\nhaving been installed yet. For example, it can be triggered by `npx cdk synth`\nif `aws-cdk` is not in your `package.json`.\n\nWork around this by either installing the target application using NPM *before*\nrunning `npx`, or by setting the environment variable `NPM_CONFIG_UNSAFE_PERM=true`.\n\n### Cannot connect to the Docker daemon at unix:///var/run/docker.sock\n\nIf, in the 'Synth' action (inside the 'Build' stage) of your pipeline, you get an error like this:\n\n```console\nstderr: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.\nSee 'docker run --help'.\n```\n\nIt means that the AWS CodeBuild project for 'Synth' is not configured to run in privileged mode,\nwhich prevents Docker builds from happening. 
This typically happens if you use a CDK construct\nthat bundles assets using tools run via Docker, like `aws-lambda-nodejs`, `aws-lambda-python`,\n`aws-lambda-go` and others.\n\nMake sure you set `privileged: true` on the build environment in the synth definition:\n\n```python\nsource_artifact = codepipeline.Artifact()\ncloud_assembly_artifact = codepipeline.Artifact()\npipeline = pipelines.CdkPipeline(self, \"MyPipeline\",\n cloud_assembly_artifact=cloud_assembly_artifact,\n synth_action=pipelines.SimpleSynthAction.standard_npm_synth(\n source_artifact=source_artifact,\n cloud_assembly_artifact=cloud_assembly_artifact,\n environment=codebuild.BuildEnvironment(\n privileged=True\n )\n )\n)\n```\n\n
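If you are using the modern API, the equivalent is to use a `CodeBuildStep` for the synth and enable privileged mode on its build environment (a sketch; `source` stands for your repository source as in the earlier examples):\n\n```python\n# source: pipelines.IFileSetProducer\n\npipeline = pipelines.CodePipeline(self, \"Pipeline\",\n synth=pipelines.CodeBuildStep(\"Synth\",\n input=source,\n commands=[\"npm ci\", \"npm run build\", \"npx cdk synth\"],\n # Allow Docker to run during the synth step\n build_environment=codebuild.BuildEnvironment(\n privileged=True\n )\n )\n)\n```\n\nAfter turning on `privileged: true`, you will need to do a one-time manual `cdk deploy` of your\npipeline to get it going again (as with a broken 'synth' the pipeline will not be able to self-update\nto the right state).\n\n### S3 error: Access Denied\n\nAn \"S3 Access Denied\" error can have two causes:\n\n* Asset hashes have changed, but self-mutation has been disabled in the pipeline.\n* You have deleted and recreated the bootstrap stack, or changed its qualifier.\n\n#### Self-mutation step has been removed\n\nSome constructs, such as EKS clusters, generate nested stacks. When CloudFormation tries\nto deploy those stacks, it may fail with this error:\n\n```console\nS3 error: Access Denied For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html\n```\n\nThis happens because the pipeline is not self-mutating and, as a consequence, the `FileAssetX`\nbuild projects get out-of-sync with the generated templates. To fix this, make sure the\n`selfMutating` property is set to `true`:\n\n```python\ncloud_assembly_artifact = codepipeline.Artifact()\npipeline = pipelines.CdkPipeline(self, \"MyPipeline\",\n self_mutating=True,\n cloud_assembly_artifact=cloud_assembly_artifact\n)\n```\n\n#### Bootstrap roles have been renamed or recreated\n\nWhile attempting to deploy an application stage, the \"Prepare\" or \"Deploy\" stage may fail with a cryptic error like:\n\n`Action execution failed Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 0123456ABCDEFGH; S3 Extended Request ID: 3hWcrVkhFGxfiMb/rTJO0Bk7Qn95x5ll4gyHiFsX6Pmk/NT+uX9+Z1moEcfkL7H3cjH7sWZfeD0=; Proxy: null)`\n\nThis generally indicates that the roles necessary to deploy have been deleted (or deleted and re-created);\nfor example, if the bootstrap stack has been deleted and re-created, this scenario will happen. Under the hood,\nthe resources that rely on these roles (e.g., `cdk-$qualifier-deploy-role-$account-$region`) point to different\ncanonical IDs than the recreated versions of these roles, which causes the errors. There are no simple solutions\nto this issue, and for that reason we **strongly recommend** that bootstrap stacks not be deleted and re-created\nonce created.\n\nThe most automated way to solve the issue is to introduce a secondary bootstrap stack. 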
By changing the qualifier\nthat the pipeline stack looks for, a change will be detected and the impacted policies and resources will be updated.\nA hypothetical recovery workflow would look something like this:\n\n* First, for all impacted environments, create a secondary bootstrap stack:\n\n```sh\n$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \\\n --qualifier random1234 \\\n --toolkit-stack-name CDKToolkitTemp \\\n aws://111111111111/us-east-1\n```\n\n* Update all impacted stacks in the pipeline to use this new qualifier.\n See https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html for more info.\n\n```python\nStack(self, \"MyStack\",\n # Update this qualifier to match the one used above.\n synthesizer=cdk.DefaultStackSynthesizer(\n qualifier=\"random1234\"\n )\n)\n```\n\n* Deploy the updated stacks. This will update the stacks to use the roles created in the new bootstrap stack.\n* (Optional) Restore the original state:\n\n * Revert the qualifier change made in the second step above\n * Re-deploy the pipeline to use the original qualifier.\n * Delete the temporary bootstrap stack(s)\n\n##### Manual Alternative\n\nAlternatively, the errors can be resolved by finding each impacted resource and policy, and correcting the policies\nby replacing the canonical IDs (e.g., `AROAYBRETNYCYV6ZF2R93`) with the appropriate ARNs. As an example, the KMS\nencryption key policy for the artifacts bucket may have a statement that looks like the following:\n\n```json\n{\n \"Effect\" : \"Allow\",\n \"Principal\" : {\n // \"AWS\" : \"AROAYBRETNYCYV6ZF2R93\" // Indicates this issue; replace this value\n \"AWS\": \"arn:aws:iam::0123456789012:role/cdk-hnb659fds-deploy-role-0123456789012-eu-west-1\", // Correct value\n },\n \"Action\" : [ \"kms:Decrypt\", \"kms:DescribeKey\" ],\n \"Resource\" : \"*\"\n}\n```\n\nAny resource or policy that references the qualifier (`hnb659fds` by default) will need to be updated.\n\n### This CDK CLI is not compatible with the CDK library used by your application\n\nThe CDK CLI version used in your pipeline is too old to read the Cloud Assembly\nproduced by your CDK app.\n\nMost likely this happens in the `SelfMutate` action: you are passing the `cliVersion`\nparameter to control the version of the CDK CLI, and you just updated the CDK\nframework version that your application uses. You either forgot to change the\n`cliVersion` parameter, or changed the `cliVersion` in the same commit in which\nyou changed the framework version. Because a change to the pipeline settings needs\na successful run of the `SelfMutate` step to be applied, the next iteration of the\n`SelfMutate` step still executes with the *old* CLI version, and that old CLI version\nis not able to read the cloud assembly produced by the new framework version.\n\nSolution: change the `cliVersion` first, commit, push and deploy, and only then\nchange the framework version.\n\nWe recommend you avoid specifying the `cliVersion` parameter at all. By default\nthe pipeline will use the latest CLI version, which will support all cloud assembly\nversions.\n\n## Known Issues\n\nThere are some usability issues that are caused by underlying technology, and\ncannot be remedied by CDK at this point. They are reproduced here for completeness.\n\n* **Console links to other accounts will not work**: the AWS CodePipeline\n console will assume all links are relative to the current account. 
You will\n not be able to use the pipeline console to click through to a CloudFormation\n stack in a different account.\n* **If a change set failed to apply, the pipeline must be restarted**: if a change\n set failed to apply, it cannot be retried. The pipeline must be restarted from\n the top by clicking **Release Change**.\n* **A stack that failed to create must be deleted manually**: if a stack\n failed to create on the first attempt, you must delete it using the\n CloudFormation console before starting the pipeline again by clicking\n **Release Change**.\n",
"bugtrack_url": null,
"license": "Apache-2.0",
"summary": "Continuous Delivery of CDK applications",
"version": "1.204.0",
"project_urls": {
"Homepage": "https://github.com/aws/aws-cdk",
"Source": "https://github.com/aws/aws-cdk.git"
},
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "ab3b079676df4dee84728758767ac66ebdcdd5b255fdeb3f252fd7431b88f313",
"md5": "13d78a95a49f35e803df283f8558ee94",
"sha256": "d22d483e92ae4f8b0e97ff55760106f46d01f3796ef4baaa12ca25c68d546a0e"
},
"downloads": -1,
"filename": "aws_cdk.pipelines-1.204.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "13d78a95a49f35e803df283f8558ee94",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "~=3.7",
"size": 591415,
"upload_time": "2023-06-19T21:02:29",
"upload_time_iso_8601": "2023-06-19T21:02:29.615239Z",
"url": "https://files.pythonhosted.org/packages/ab/3b/079676df4dee84728758767ac66ebdcdd5b255fdeb3f252fd7431b88f313/aws_cdk.pipelines-1.204.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "f7cd032df9a85d0cf6e1c01387434d0f757852093a2def30dc999f969dd7af71",
"md5": "2b32d3d0368b0da1d1f5b0fbf306679b",
"sha256": "40431424841c38333109ad89f418411022f0cb79209c4a3e6876651543b58b80"
},
"downloads": -1,
"filename": "aws-cdk.pipelines-1.204.0.tar.gz",
"has_sig": false,
"md5_digest": "2b32d3d0368b0da1d1f5b0fbf306679b",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.7",
"size": 624867,
"upload_time": "2023-06-19T21:08:31",
"upload_time_iso_8601": "2023-06-19T21:08:31.071820Z",
"url": "https://files.pythonhosted.org/packages/f7/cd/032df9a85d0cf6e1c01387434d0f757852093a2def30dc999f969dd7af71/aws-cdk.pipelines-1.204.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-06-19 21:08:31",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "aws",
"github_project": "aws-cdk",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "aws-cdk.pipelines"
}