# Amazon EKS Construct Library
<!--BEGIN STABILITY BANNER-->---
![cfn-resources: Stable](https://img.shields.io/badge/cfn--resources-stable-success.svg?style=for-the-badge)
![cdk-constructs: Stable](https://img.shields.io/badge/cdk--constructs-stable-success.svg?style=for-the-badge)
---
<!--END STABILITY BANNER-->
This construct library allows you to define [Amazon Elastic Container Service for Kubernetes (EKS)](https://aws.amazon.com/eks/) clusters.
In addition, the library also supports defining Kubernetes resource manifests within EKS clusters.
## Table Of Contents
* [Quick Start](#quick-start)
* [API Reference](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-eks-readme.html)
* [Architectural Overview](#architectural-overview)
* [Provisioning clusters](#provisioning-clusters)
* [Managed node groups](#managed-node-groups)
* [Fargate Profiles](#fargate-profiles)
* [Self-managed nodes](#self-managed-nodes)
* [Endpoint Access](#endpoint-access)
* [ALB Controller](#alb-controller)
* [VPC Support](#vpc-support)
* [Kubectl Support](#kubectl-support)
* [ARM64 Support](#arm64-support)
* [Masters Role](#masters-role)
* [Encryption](#encryption)
* [Permissions and Security](#permissions-and-security)
* [Applying Kubernetes Resources](#applying-kubernetes-resources)
* [Kubernetes Manifests](#kubernetes-manifests)
* [Helm Charts](#helm-charts)
* [CDK8s Charts](#cdk8s-charts)
* [Patching Kubernetes Resources](#patching-kubernetes-resources)
* [Querying Kubernetes Resources](#querying-kubernetes-resources)
* [Using existing clusters](#using-existing-clusters)
* [Known Issues and Limitations](#known-issues-and-limitations)
## Quick Start
This example defines an Amazon EKS cluster with the following configuration:
* Dedicated VPC with default configuration (Implicitly created using [ec2.Vpc](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-ec2-readme.html#vpc))
* A Kubernetes pod with a container based on the [paulbouwer/hello-kubernetes](https://github.com/paulbouwer/hello-kubernetes) image.
```python
# provisioning a cluster
cluster = eks.Cluster(self, "hello-eks",
version=eks.KubernetesVersion.V1_21
)
# apply a kubernetes manifest to the cluster
cluster.add_manifest("mypod", {
"api_version": "v1",
"kind": "Pod",
"metadata": {"name": "mypod"},
"spec": {
"containers": [{
"name": "hello",
"image": "paulbouwer/hello-kubernetes:1.5",
"ports": [{"container_port": 8080}]
}
]
}
})
```
In order to interact with your cluster through `kubectl`, you can use the `aws eks update-kubeconfig` [AWS CLI command](https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html)
to configure your local kubeconfig. The EKS module will define a CloudFormation output in your stack which contains the command to run. For example:
```plaintext
Outputs:
ClusterConfigCommand43AAE40F = aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy
```
Execute the `aws eks update-kubeconfig ...` command in your terminal to create or update a local kubeconfig context:
```console
$ aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy
Added new context arn:aws:eks:rrrrr:112233445566:cluster/cluster-xxxxx to /home/boom/.kube/config
```
And now you can simply use `kubectl`:
```console
$ kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/aws-node-fpmwv 1/1 Running 0 21m
pod/aws-node-m9htf 1/1 Running 0 21m
pod/coredns-5cb4fb54c7-q222j 1/1 Running 0 23m
pod/coredns-5cb4fb54c7-v9nxx 1/1 Running 0 23m
...
```
## Architectural Overview
The following is a qualitative diagram of the various possible components involved in the cluster deployment.
```text
+-----------------------------------------------+ +-----------------+
| EKS Cluster | kubectl | |
|-----------------------------------------------|<-------------+| Kubectl Handler |
| | | |
| | +-----------------+
| +--------------------+ +-----------------+ |
| | | | | |
| | Managed Node Group | | Fargate Profile | | +-----------------+
| | | | | | | |
| +--------------------+ +-----------------+ | | Cluster Handler |
| | | |
+-----------------------------------------------+ +-----------------+
^ ^ +
| | |
| connect self managed capacity | | aws-sdk
| | create/update/delete |
+ | v
+--------------------+ + +-------------------+
| | --------------+| eks.amazonaws.com |
| Auto Scaling Group | +-------------------+
| |
+--------------------+
```
In a nutshell:
* `EKS Cluster` - The cluster endpoint created by EKS.
* `Managed Node Group` - EC2 worker nodes managed by EKS.
* `Fargate Profile` - Fargate worker nodes managed by EKS.
* `Auto Scaling Group` - EC2 worker nodes managed by the user.
* `KubectlHandler` - Lambda function for invoking `kubectl` commands on the cluster - created by CDK.
* `ClusterHandler` - Lambda function for interacting with EKS API to manage the cluster lifecycle - created by CDK.
A more detailed breakdown of each is provided further down this README.
## Provisioning clusters
Creating a new cluster is done using the `Cluster` or `FargateCluster` constructs. The only required property is the kubernetes `version`.
```python
eks.Cluster(self, "HelloEKS",
version=eks.KubernetesVersion.V1_21
)
```
You can also use `FargateCluster` to provision a cluster that uses only fargate workers.
```python
eks.FargateCluster(self, "HelloEKS",
version=eks.KubernetesVersion.V1_21
)
```
> **NOTE: Only 1 cluster per stack is supported.** If you have a use-case for multiple clusters per stack, or would like to understand more about this limitation, see [https://github.com/aws/aws-cdk/issues/10073](https://github.com/aws/aws-cdk/issues/10073).
Below you'll find a few important cluster configuration options. The first of these is capacity.
Capacity is the amount and the type of worker nodes that are available to the cluster for deploying resources. Amazon EKS offers 3 ways of configuring capacity, which you can combine as you like:
### Managed node groups
Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
With Amazon EKS managed node groups, you don’t need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.
> For more details visit [Amazon EKS Managed Node Groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html).
**Managed Node Groups are the recommended way to allocate cluster capacity.**
By default, this library will allocate a managed node group with 2 *m5.large* instances (this instance type suits most common use-cases, and is good value for money).
At cluster instantiation time, you can customize the number of instances and their type:
```python
eks.Cluster(self, "HelloEKS",
version=eks.KubernetesVersion.V1_21,
default_capacity=5,
default_capacity_instance=ec2.InstanceType.of(ec2.InstanceClass.M5, ec2.InstanceSize.SMALL)
)
```
To access the node group that was created on your behalf, you can use `cluster.defaultNodegroup`.
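For example, a minimal sketch (the output name is illustrative) that exposes the default node group's name as a stack output:

```python
# cluster: eks.Cluster

# `default_nodegroup` is only set when the cluster created a default node group
default_ng = cluster.default_nodegroup
if default_ng is not None:
    CfnOutput(self, "DefaultNodegroupName", value=default_ng.nodegroup_name)
```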
Additional customizations are available post instantiation. To apply them, set the default capacity to 0, and use the `cluster.addNodegroupCapacity` method:
```python
cluster = eks.Cluster(self, "HelloEKS",
version=eks.KubernetesVersion.V1_21,
default_capacity=0
)
cluster.add_nodegroup_capacity("custom-node-group",
instance_types=[ec2.InstanceType("m5.large")],
min_size=4,
disk_size=100,
ami_type=eks.NodegroupAmiType.AL2_X86_64_GPU
)
```
To set node taints, use the `taints` option.
```python
# cluster: eks.Cluster
cluster.add_nodegroup_capacity("custom-node-group",
instance_types=[ec2.InstanceType("m5.large")],
taints=[eks.TaintSpec(
effect=eks.TaintEffect.NO_SCHEDULE,
key="foo",
value="bar"
)
]
)
```
#### Spot Instances Support
Use `capacityType` to create managed node groups comprised of spot instances. To maximize the availability of your applications while using
Spot Instances, we recommend that you configure a Spot managed node group to use multiple instance types with the `instanceTypes` property.
> For more details visit [Managed node group capacity types](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html#managed-node-group-capacity-types).
```python
# cluster: eks.Cluster
cluster.add_nodegroup_capacity("extra-ng-spot",
instance_types=[
ec2.InstanceType("c5.large"),
ec2.InstanceType("c5a.large"),
ec2.InstanceType("c5d.large")
],
min_size=3,
capacity_type=eks.CapacityType.SPOT
)
```
#### Launch Template Support
You can specify a launch template that the node group will use. For example, this can be useful if you want to use
a custom AMI or add custom user data.
When supplying a custom user data script, it must be encoded in the MIME multi-part archive format, since Amazon EKS merges it with its own user data. Visit the [Launch Template Docs](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-user-data) for more details.
```python
# cluster: eks.Cluster
user_data = """MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="
--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
echo "Running custom user data script"
--==MYBOUNDARY==--\\
"""
lt = ec2.CfnLaunchTemplate(self, "LaunchTemplate",
launch_template_data=ec2.CfnLaunchTemplate.LaunchTemplateDataProperty(
instance_type="t3.small",
user_data=Fn.base64(user_data)
)
)
cluster.add_nodegroup_capacity("extra-ng",
launch_template_spec=eks.LaunchTemplateSpec(
id=lt.ref,
version=lt.attr_latest_version_number
)
)
```
Note that when using a custom AMI, Amazon EKS doesn't merge any user data. This means you do not need the multi-part encoding, and you are responsible for supplying the required bootstrap commands for nodes to join the cluster.
In the following example, `/etc/eks/bootstrap.sh` from the AMI will be used to bootstrap the node.
```python
# cluster: eks.Cluster
user_data = ec2.UserData.for_linux()
user_data.add_commands("set -o xtrace", f"/etc/eks/bootstrap.sh {cluster.cluster_name}")
lt = ec2.CfnLaunchTemplate(self, "LaunchTemplate",
launch_template_data=ec2.CfnLaunchTemplate.LaunchTemplateDataProperty(
image_id="some-ami-id", # custom AMI
instance_type="t3.small",
user_data=Fn.base64(user_data.render())
)
)
cluster.add_nodegroup_capacity("extra-ng",
launch_template_spec=eks.LaunchTemplateSpec(
id=lt.ref,
version=lt.attr_latest_version_number
)
)
```
You may specify one `instanceType` in the launch template or multiple `instanceTypes` in the node group, **but not both**.
> For more details visit [Launch Template Support](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html).
Graviton 2 instance types are supported including `c6g`, `m6g`, `r6g` and `t4g`.
### Fargate profiles
AWS Fargate is a technology that provides on-demand, right-sized compute
capacity for containers. With AWS Fargate, you no longer have to provision,
configure, or scale groups of virtual machines to run containers. This removes
the need to choose server types, decide when to scale your node groups, or
optimize cluster packing.
You can control which pods start on Fargate and how they run with Fargate
Profiles, which are defined as part of your Amazon EKS cluster.
See [Fargate Considerations](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html#fargate-considerations) in the AWS EKS User Guide.
You can add Fargate Profiles to any EKS cluster defined in your CDK app
through the `addFargateProfile()` method. The following example adds a profile
that will match all pods from the "default" namespace:
```python
# cluster: eks.Cluster
cluster.add_fargate_profile("MyProfile",
selectors=[eks.Selector(namespace="default")]
)
```
You can also directly use the `FargateProfile` construct to create profiles under different scopes:
```python
# cluster: eks.Cluster
eks.FargateProfile(self, "MyProfile",
cluster=cluster,
selectors=[eks.Selector(namespace="default")]
)
```
To create an EKS cluster that **only** uses Fargate capacity, you can use `FargateCluster`.
The following code defines an Amazon EKS cluster with a default Fargate Profile that matches all pods from the "kube-system" and "default" namespaces. It is also configured to [run CoreDNS on Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-coredns).
```python
cluster = eks.FargateCluster(self, "MyCluster",
version=eks.KubernetesVersion.V1_21
)
```
`FargateCluster` will create a default `FargateProfile` which can be accessed via the cluster's `defaultProfile` property. The created profile can also be customized by passing options as with `addFargateProfile`.
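For example, a minimal sketch of customizing the default profile through the `defaultProfile` option and reading it back (the profile name and selectors are illustrative):

```python
cluster = eks.FargateCluster(self, "MyCluster",
    version=eks.KubernetesVersion.V1_21,
    default_profile=eks.FargateProfileOptions(
        fargate_profile_name="my-default-profile",
        selectors=[
            eks.Selector(namespace="default"),
            eks.Selector(namespace="kube-system")
        ]
    )
)

# the profile created on your behalf
default_profile = cluster.default_profile
```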
**NOTE**: Classic Load Balancers and Network Load Balancers are not supported on
pods running on Fargate. For ingress, we recommend that you use the [ALB Ingress
Controller](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html)
on Amazon EKS (minimum version v1.1.4).
### Self-managed nodes
Another way of allocating capacity to an EKS cluster is by using self-managed nodes.
EC2 instances that are part of the auto-scaling group will serve as worker nodes for the cluster.
This type of capacity is also commonly referred to as *EC2 Capacity* or *EC2 Nodes*.
For a detailed overview please visit [Self Managed Nodes](https://docs.aws.amazon.com/eks/latest/userguide/worker.html).
Creating an auto-scaling group and connecting it to the cluster is done using the `cluster.addAutoScalingGroupCapacity` method:
```python
# cluster: eks.Cluster
cluster.add_auto_scaling_group_capacity("frontend-nodes",
instance_type=ec2.InstanceType("t2.medium"),
min_capacity=3,
vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC)
)
```
To connect an already initialized auto-scaling group, use the `cluster.connectAutoScalingGroupCapacity()` method:
```python
# cluster: eks.Cluster
# asg: autoscaling.AutoScalingGroup
cluster.connect_auto_scaling_group_capacity(asg)
```
To connect a self-managed node group to an imported cluster, use the `cluster.connectAutoScalingGroupCapacity()` method:
```python
# cluster: eks.Cluster
# asg: autoscaling.AutoScalingGroup
imported_cluster = eks.Cluster.from_cluster_attributes(self, "ImportedCluster",
cluster_name=cluster.cluster_name,
cluster_security_group_id=cluster.cluster_security_group_id
)
imported_cluster.connect_auto_scaling_group_capacity(asg)
```
In both cases, the [cluster security group](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html#cluster-sg) will be automatically attached to
the auto-scaling group, allowing for traffic to flow freely between managed and self-managed nodes.
> **Note:** The default `updateType` for auto-scaling groups does not replace existing nodes. Since security groups are determined at launch time, self-managed nodes that were provisioned with version `1.78.0` or lower, will not be updated.
> To apply the new configuration on all your self-managed nodes, you'll need to replace the nodes using the `UpdateType.REPLACING_UPDATE` policy for the [`updateType`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-autoscaling.AutoScalingGroup.html#updatetypespan-classapi-icon-api-icon-deprecated-titlethis-api-element-is-deprecated-its-use-is-not-recommended%EF%B8%8Fspan) property.
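A hedged sketch of opting into replacing updates (instance type and capacity are illustrative; note that `updateType` is deprecated, as mentioned above):

```python
import aws_cdk.aws_autoscaling as autoscaling

# cluster: eks.Cluster

cluster.add_auto_scaling_group_capacity("replacing-nodes",
    instance_type=ec2.InstanceType("t3.large"),
    min_capacity=2,
    # replace nodes on configuration changes so new security group rules apply
    update_type=autoscaling.UpdateType.REPLACING_UPDATE
)
```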
You can customize the [/etc/eks/bootstrap.sh](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh) script, which is responsible
for bootstrapping the node to the EKS cluster. For example, you can use `kubeletExtraArgs` to add custom node labels or taints.
```python
# cluster: eks.Cluster
cluster.add_auto_scaling_group_capacity("spot",
instance_type=ec2.InstanceType("t3.large"),
min_capacity=2,
bootstrap_options=eks.BootstrapOptions(
kubelet_extra_args="--node-labels foo=bar,goo=far",
aws_api_retry_attempts=5
)
)
```
To disable bootstrapping altogether (i.e. to fully customize user-data), set `bootstrapEnabled` to `false`.
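For example, a minimal sketch (instance type, capacity, and user data content are illustrative) of adding capacity with bootstrapping disabled:

```python
# cluster: eks.Cluster

asg = cluster.add_auto_scaling_group_capacity("custom-bootstrap-nodes",
    instance_type=ec2.InstanceType("t3.large"),
    min_capacity=1,
    bootstrap_enabled=False
)

# you are now responsible for supplying the commands that join the nodes to the cluster
asg.add_user_data("echo 'add custom bootstrap commands here'")
```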
You can also configure the cluster to use an auto-scaling group as the default capacity:
```python
cluster = eks.Cluster(self, "HelloEKS",
version=eks.KubernetesVersion.V1_21,
default_capacity_type=eks.DefaultCapacityType.EC2
)
```
This will allocate an auto-scaling group with 2 *m5.large* instances (this instance type suits most common use-cases, and is good value for money).
To access the `AutoScalingGroup` that was created on your behalf, you can use `cluster.defaultCapacity`.
You can also independently create an `AutoScalingGroup` and connect it to the cluster using the `cluster.connectAutoScalingGroupCapacity` method:
```python
# cluster: eks.Cluster
# asg: autoscaling.AutoScalingGroup
cluster.connect_auto_scaling_group_capacity(asg)
```
This will add the necessary user-data to access the apiserver and configure all connections, roles, and tags needed for the instances in the auto-scaling group to properly join the cluster.
#### Spot Instances
When using self-managed nodes, you can configure the capacity to use spot instances, greatly reducing capacity cost.
To enable spot capacity, use the `spotPrice` property:
```python
# cluster: eks.Cluster
cluster.add_auto_scaling_group_capacity("spot",
spot_price="0.1094",
instance_type=ec2.InstanceType("t3.large"),
max_capacity=10
)
```
> Spot instance nodes will be labeled with `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`.
The [AWS Node Termination Handler](https://github.com/aws/aws-node-termination-handler) `DaemonSet` will be
installed from [Amazon EKS Helm chart repository](https://github.com/aws/eks-charts/tree/master/stable/aws-node-termination-handler) on these nodes.
The termination handler ensures that the Kubernetes control plane responds appropriately to events that
can cause your EC2 instance to become unavailable, such as [EC2 maintenance events](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html)
and [EC2 Spot interruptions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html) and helps gracefully stop all pods running on spot nodes that are about to be
terminated.
> Handler Version: [1.7.0](https://github.com/aws/aws-node-termination-handler/releases/tag/v1.7.0)
>
> Chart Version: [0.9.5](https://github.com/aws/eks-charts/blob/v0.0.28/stable/aws-node-termination-handler/Chart.yaml)
To disable the installation of the termination handler, set the `spotInterruptHandler` property to `false`. This applies both to `addAutoScalingGroupCapacity` and `connectAutoScalingGroupCapacity`.
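For example, a minimal sketch (spot price and capacity are illustrative) that opts out of the termination handler installation:

```python
# cluster: eks.Cluster

cluster.add_auto_scaling_group_capacity("spot-without-handler",
    instance_type=ec2.InstanceType("t3.large"),
    spot_price="0.1094",
    max_capacity=10,
    spot_interrupt_handler=False
)
```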
#### Bottlerocket
[Bottlerocket](https://aws.amazon.com/bottlerocket/) is a Linux-based open-source operating system that is purpose-built by Amazon Web Services for running containers on virtual machines or bare metal hosts.
`Bottlerocket` is supported when using managed nodegroups or self-managed auto-scaling groups.
To create a Bottlerocket managed nodegroup:
```python
# cluster: eks.Cluster
cluster.add_nodegroup_capacity("BottlerocketNG",
ami_type=eks.NodegroupAmiType.BOTTLEROCKET_X86_64
)
```
The following example will create an auto-scaling group of 2 `t3.small` Linux instances running with the `Bottlerocket` AMI.
```python
# cluster: eks.Cluster
cluster.add_auto_scaling_group_capacity("BottlerocketNodes",
instance_type=ec2.InstanceType("t3.small"),
min_capacity=2,
machine_image_type=eks.MachineImageType.BOTTLEROCKET
)
```
The specific Bottlerocket AMI variant will be automatically selected according to the Kubernetes version for the `x86_64` architecture.
For example, if the Amazon EKS cluster version is `1.17`, the Bottlerocket AMI variant `aws-k8s-1.17` will be selected behind the scenes.
> See [Variants](https://github.com/bottlerocket-os/bottlerocket/blob/develop/README.md#variants) for more details.
Please note that Bottlerocket does not support customizing bootstrap options, so the `bootstrapOptions` property is not supported when you create `Bottlerocket` capacity.
For more details about Bottlerocket, see [Bottlerocket FAQs](https://aws.amazon.com/bottlerocket/faqs/) and [Bottlerocket Open Source Blog](https://aws.amazon.com/blogs/opensource/announcing-the-general-availability-of-bottlerocket-an-open-source-linux-distribution-purpose-built-to-run-containers/).
### Endpoint Access
When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as `kubectl`).
By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of
AWS Identity and Access Management (IAM) and native Kubernetes [Role Based Access Control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (RBAC).
You can configure the [cluster endpoint access](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) by using the `endpointAccess` property:
```python
cluster = eks.Cluster(self, "hello-eks",
version=eks.KubernetesVersion.V1_21,
endpoint_access=eks.EndpointAccess.PRIVATE
)
```
The default value is `eks.EndpointAccess.PUBLIC_AND_PRIVATE`, which means the cluster endpoint is accessible from outside of your VPC, while worker node traffic and `kubectl` commands issued by this library stay within your VPC.
### ALB Controller
Some Kubernetes resources are commonly implemented on AWS with the help of the [ALB Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/).
From the docs:
> AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster.
>
> * It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.
> * It satisfies Kubernetes Service resources by provisioning Network Load Balancers.
To deploy the controller on your EKS cluster, configure the `albController` property:
```python
eks.Cluster(self, "HelloEKS",
version=eks.KubernetesVersion.V1_21,
alb_controller=eks.AlbControllerOptions(
version=eks.AlbControllerVersion.V2_4_1
)
)
```
Querying the controller pods should look something like this:
```console
❯ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
aws-load-balancer-controller-76bd6c7586-d929p 1/1 Running 0 109m
aws-load-balancer-controller-76bd6c7586-fqxph 1/1 Running 0 109m
...
...
```
Every Kubernetes manifest that utilizes the ALB Controller is effectively dependent on the controller.
If the controller is deleted before the manifest, dangling ELB/ALB resources might be left behind.
Currently, the EKS construct library does not detect such dependencies, so they should be declared explicitly.
For example:
```python
# cluster: eks.Cluster
manifest = cluster.add_manifest("manifest", {})
if cluster.alb_controller:
manifest.node.add_dependency(cluster.alb_controller)
```
### VPC Support
You can specify the VPC of the cluster using the `vpc` and `vpcSubnets` properties:
```python
# vpc: ec2.Vpc
eks.Cluster(self, "HelloEKS",
version=eks.KubernetesVersion.V1_21,
vpc=vpc,
vpc_subnets=[ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT)]
)
```
> Note: Isolated VPCs (i.e. with no internet access) are not currently supported. See https://github.com/aws/aws-cdk/issues/12171
If you do not specify a VPC, one will be created on your behalf, which you can then access via `cluster.vpc`. The cluster VPC will be associated with any EKS managed capacity (i.e. Managed Node Groups and Fargate Profiles).
Please note that the `vpcSubnets` property defines the subnets where EKS will place the *control plane* ENIs. To choose
the subnets where EKS will place the worker nodes, please refer to the **Provisioning clusters** section above.
If you allocate self-managed capacity, you can specify which subnets the auto-scaling group should use:
```python
# vpc: ec2.Vpc
# cluster: eks.Cluster
cluster.add_auto_scaling_group_capacity("nodes",
vpc_subnets=ec2.SubnetSelection(subnets=vpc.private_subnets),
instance_type=ec2.InstanceType("t2.medium")
)
```
There are two additional components you might want to provision within the VPC.
#### Kubectl Handler
The `KubectlHandler` is a Lambda function responsible for issuing `kubectl` and `helm` commands against the cluster when you add resource manifests to the cluster.
The handler's association with the VPC is derived from the `endpointAccess` configuration. The rule of thumb is: *If the cluster VPC can be associated, it will be*.
Breaking this down, it means that if the endpoint exposes private access (via `EndpointAccess.PRIVATE` or `EndpointAccess.PUBLIC_AND_PRIVATE`), and the VPC contains **private** subnets, the Lambda function will be provisioned inside the VPC and use the private subnets to interact with the cluster. This is the common use-case.
If the endpoint does not expose private access (via `EndpointAccess.PUBLIC`) **or** the VPC does not contain private subnets, the function will not be provisioned within the VPC.
If your use-case requires control over the IAM role that the Kubectl Handler assumes, a custom role can be passed through the `ClusterProps` (as `kubectlLambdaRole`) of the EKS Cluster construct.
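For example, a minimal sketch of supplying a custom role for the handler (the role configuration is illustrative):

```python
# a role that the Kubectl Handler will assume
handler_role = iam.Role(self, "KubectlHandlerRole",
    assumed_by=iam.ServicePrincipal("lambda.amazonaws.com")
)

cluster = eks.Cluster(self, "hello-eks",
    version=eks.KubernetesVersion.V1_21,
    kubectl_lambda_role=handler_role
)
```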
#### Cluster Handler
The `ClusterHandler` is a set of Lambda functions (`onEventHandler`, `isCompleteHandler`) responsible for interacting with the EKS API in order to control the cluster lifecycle. To provision these functions inside the VPC, set the `placeClusterHandlerInVpc` property to `true`. This will place the functions inside the private subnets of the VPC based on the selection strategy specified in the [`vpcSubnets`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-eks.Cluster.html#vpcsubnetsspan-classapi-icon-api-icon-experimental-titlethis-api-element-is-experimental-it-may-change-without-noticespan) property.
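For example, a minimal sketch of placing the Cluster Handler functions inside a provided VPC:

```python
# vpc: ec2.Vpc

cluster = eks.Cluster(self, "hello-eks",
    version=eks.KubernetesVersion.V1_21,
    vpc=vpc,
    place_cluster_handler_in_vpc=True
)
```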
You can configure the environment of the Cluster Handler functions by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:
```python
# proxy_instance_security_group: ec2.SecurityGroup
cluster = eks.Cluster(self, "hello-eks",
version=eks.KubernetesVersion.V1_21,
cluster_handler_environment={
"https_proxy": "http://proxy.myproxy.com"
},
#
# If the proxy is not open publicly, you can pass a security group to the
# Cluster Handler Lambdas so that it can reach the proxy.
#
cluster_handler_security_group=proxy_instance_security_group
)
```
### Kubectl Support
The resources are created in the cluster by running `kubectl apply` from a Python Lambda function.
By default, the CDK will create a new Python Lambda function to apply your Kubernetes manifests. If you want to use an existing kubectl provider function, for example to keep tight control over the trusted entities on your IAM roles, you can import the existing provider and then use it when importing the cluster:
```python
handler_role = iam.Role.from_role_arn(self, "HandlerRole", "arn:aws:iam::123456789012:role/lambda-role")
kubectl_provider = eks.KubectlProvider.from_kubectl_provider_attributes(self, "KubectlProvider",
function_arn="arn:aws:lambda:us-east-2:123456789012:function:my-function:1",
kubectl_role_arn="arn:aws:iam::123456789012:role/kubectl-role",
handler_role=handler_role
)
cluster = eks.Cluster.from_cluster_attributes(self, "Cluster",
cluster_name="cluster",
kubectl_provider=kubectl_provider
)
```
#### Environment
You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:
```python
cluster = eks.Cluster(self, "hello-eks",
version=eks.KubernetesVersion.V1_21,
kubectl_environment={
"http_proxy": "http://proxy.myproxy.com"
}
)
```
#### Runtime
The kubectl handler uses `kubectl`, `helm` and the `aws` CLI in order to
interact with the cluster. These are bundled into AWS Lambda layers included in
the `@aws-cdk/lambda-layer-awscli` and `@aws-cdk/lambda-layer-kubectl` modules.
You can specify a custom `lambda.LayerVersion` if you wish to use a different
version of these tools. The handler expects the layer to include the following
three executables:
```text
helm/helm
kubectl/kubectl
awscli/aws
```
See more information in the
[Dockerfile](https://github.com/aws/aws-cdk/tree/master/packages/%40aws-cdk/lambda-layer-awscli/layer) for @aws-cdk/lambda-layer-awscli
and the
[Dockerfile](https://github.com/aws/aws-cdk/tree/master/packages/%40aws-cdk/lambda-layer-kubectl/layer) for @aws-cdk/lambda-layer-kubectl.
```python
layer = lambda_.LayerVersion(self, "KubectlLayer",
code=lambda_.Code.from_asset("layer.zip")
)
```
Now specify it when the cluster is defined:
```python
# layer: lambda.LayerVersion
# vpc: ec2.Vpc
cluster1 = eks.Cluster(self, "MyCluster",
kubectl_layer=layer,
vpc=vpc,
cluster_name="cluster-name",
version=eks.KubernetesVersion.V1_21
)
# or
cluster2 = eks.Cluster.from_cluster_attributes(self, "MyCluster",
kubectl_layer=layer,
vpc=vpc,
cluster_name="cluster-name"
)
```
#### Memory
By default, the kubectl provider is configured with 1024MiB of memory. You can use the `kubectlMemory` option to specify the memory size for the AWS Lambda function:
```python
# vpc: ec2.Vpc

eks.Cluster(self, "MyCluster",
    kubectl_memory=Size.gibibytes(4),
    version=eks.KubernetesVersion.V1_21
)
# or
eks.Cluster.from_cluster_attributes(self, "MyCluster",
kubectl_memory=Size.gibibytes(4),
vpc=vpc,
cluster_name="cluster-name"
)
```
### ARM64 Support
Instance types with `ARM64` architecture are supported in both managed nodegroup and self-managed capacity. Simply specify an ARM64 `instanceType` (such as `m6g.medium`), and the latest
Amazon Linux 2 AMI for ARM64 will be automatically selected.
```python
# cluster: eks.Cluster
# add a managed ARM64 nodegroup
cluster.add_nodegroup_capacity("extra-ng-arm",
instance_types=[ec2.InstanceType("m6g.medium")],
min_size=2
)
# add a self-managed ARM64 nodegroup
cluster.add_auto_scaling_group_capacity("self-ng-arm",
instance_type=ec2.InstanceType("m6g.medium"),
min_capacity=2
)
```
### Masters Role
When you create a cluster, you can specify a `mastersRole`. The `Cluster` construct will associate this role with the `system:masters` [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) group, giving it super-user access to the cluster.
```python
# role: iam.Role
eks.Cluster(self, "HelloEKS",
version=eks.KubernetesVersion.V1_21,
masters_role=role
)
```
If you do not specify one, a default role will be created on your behalf that can be assumed by anyone in the account with `sts:AssumeRole` permissions for this role.
This is the role you see as part of the stack outputs mentioned in the [Quick Start](#quick-start).
```console
$ aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy
Added new context arn:aws:eks:rrrrr:112233445566:cluster/cluster-xxxxx to /home/boom/.kube/config
```
### Encryption
When you create an Amazon EKS cluster, envelope encryption of Kubernetes secrets using the AWS Key Management Service (AWS KMS) can be enabled.
The documentation on [creating a cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)
can provide more details about the customer master key (CMK) that can be used for the encryption.
You can use the `secretsEncryptionKey` to configure which key the cluster will use to encrypt Kubernetes secrets. By default, an AWS Managed key will be used.
> This setting can only be specified when the cluster is created and cannot be updated.
```python
secrets_key = kms.Key(self, "SecretsKey")
cluster = eks.Cluster(self, "MyCluster",
secrets_encryption_key=secrets_key,
version=eks.KubernetesVersion.V1_21
)
```
You can also use a similar configuration for running a cluster built using the FargateCluster construct.
```python
secrets_key = kms.Key(self, "SecretsKey")
cluster = eks.FargateCluster(self, "MyFargateCluster",
secrets_encryption_key=secrets_key,
version=eks.KubernetesVersion.V1_21
)
```
The Amazon Resource Name (ARN) for that CMK can be retrieved.
```python
# cluster: eks.Cluster
cluster_encryption_config_key_arn = cluster.cluster_encryption_config_key_arn
```
## Permissions and Security
Amazon EKS provides several mechanisms for securing the cluster and granting permissions to specific IAM users and roles.
### AWS IAM Mapping
As described in the [Amazon EKS User Guide](https://docs.aws.amazon.com/en_us/eks/latest/userguide/add-user-role.html), you can map AWS IAM users and roles to [Kubernetes Role-based access control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac).
The Amazon EKS construct manages the *aws-auth* `ConfigMap` Kubernetes resource on your behalf and exposes an API through `cluster.awsAuth` for mapping
users, roles, and accounts.
Furthermore, when auto-scaling group capacity is added to the cluster, the IAM instance role of the auto-scaling group will be automatically mapped to RBAC so nodes can connect to the cluster. No manual mapping is required.
For example, let's say you want to grant an IAM user administrative privileges on your cluster:
```python
# cluster: eks.Cluster
admin_user = iam.User(self, "Admin")
cluster.aws_auth.add_user_mapping(admin_user, groups=["system:masters"])
```
A convenience method for mapping a role to the `system:masters` group is also available:
```python
# cluster: eks.Cluster
# role: iam.Role
cluster.aws_auth.add_masters_role(role)
```
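Roles can also be mapped to arbitrary RBAC groups with a custom username, and entire AWS accounts can be mapped as well. A minimal sketch (group names, username, and account ID are illustrative):

```python
# cluster: eks.Cluster
# role: iam.Role

# map an IAM role to specific RBAC groups and a custom username
cluster.aws_auth.add_role_mapping(role,
    groups=["system:nodes"],
    username="custom-username"
)

# map an entire AWS account
cluster.aws_auth.add_account("123456789012")
```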
### Cluster Security Group
When you create an Amazon EKS cluster, a [cluster security group](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)
is automatically created as well. This security group is designed to allow all traffic from the control plane and managed node groups to flow freely
between each other.
The ID for that security group can be retrieved after creating the cluster.
```python
# cluster: eks.Cluster
cluster_security_group_id = cluster.cluster_security_group_id
```
### Node SSH Access
If you want to be able to SSH into your worker nodes, you must already have an SSH key in the region you're connecting to and pass it when
you add capacity to the cluster. You must also be able to connect to the hosts (meaning they must have a public IP and you
should be allowed to connect to them on port 22):
See [SSH into nodes](test/example.ssh-into-nodes.lit.ts) for a code example.
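A minimal sketch is shown below as well; the key pair name, instance type, and the open SSH rule are illustrative and should be tightened for real deployments:

```python
# cluster: eks.Cluster

nodes = cluster.add_auto_scaling_group_capacity("ssh-enabled-nodes",
    instance_type=ec2.InstanceType("t3.medium"),
    min_capacity=2,
    key_name="my-key-pair",  # an existing EC2 key pair in this region
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC)
)

# allow inbound SSH to the nodes; restrict the peer as appropriate
nodes.connections.allow_from(ec2.Peer.any_ipv4(), ec2.Port.tcp(22))
```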
If you want to SSH into nodes in a private subnet, you should set up a bastion host in a public subnet. That setup is recommended, but is
unfortunately beyond the scope of this documentation.
### Service Accounts
With service accounts, you can provide Kubernetes Pods with access to AWS resources.
```python
# cluster: eks.Cluster
# add service account
service_account = cluster.add_service_account("MyServiceAccount")
bucket = s3.Bucket(self, "Bucket")
bucket.grant_read_write(service_account)
mypod = cluster.add_manifest("mypod", {
"api_version": "v1",
"kind": "Pod",
"metadata": {"name": "mypod"},
"spec": {
"service_account_name": service_account.service_account_name,
"containers": [{
"name": "hello",
"image": "paulbouwer/hello-kubernetes:1.5",
"ports": [{"container_port": 8080}]
}
]
}
})
# create the resource after the service account.
mypod.node.add_dependency(service_account)
# print the IAM role arn for this service account
CfnOutput(self, "ServiceAccountIamRole", value=service_account.role.role_arn)
```
Note that using `serviceAccount.serviceAccountName` above **does not** translate into a resource dependency.
This is why an explicit dependency is needed. See [https://github.com/aws/aws-cdk/issues/9910](https://github.com/aws/aws-cdk/issues/9910) for more details.
It is possible to pass annotations and labels to the service account.
```python
# cluster: eks.Cluster
# add service account with annotations and labels
service_account = cluster.add_service_account("MyServiceAccount",
annotations={
"eks.amazonaws.com/sts-regional-endpoints": "false"
},
labels={
"some-label": "with-some-value"
}
)
```
You can also add service accounts to existing clusters.
To do so, pass the `openIdConnectProvider` property when you import the cluster into the application.
```python
# issuer_url: str

# you can import an existing provider
provider = eks.OpenIdConnectProvider.from_open_id_connect_provider_arn(self, "Provider", "arn:aws:iam::123456:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/AB123456ABC")
# or create a new one using an existing issuer url
provider2 = eks.OpenIdConnectProvider(self, "Provider2",
    url=issuer_url
)
cluster = eks.Cluster.from_cluster_attributes(self, "MyCluster",
cluster_name="Cluster",
open_id_connect_provider=provider,
kubectl_role_arn="arn:aws:iam::123456:role/service-role/k8sservicerole"
)
service_account = cluster.add_service_account("MyServiceAccount")
bucket = s3.Bucket(self, "Bucket")
bucket.grant_read_write(service_account)
```
Note that adding service accounts requires running `kubectl` commands against the cluster.
This means you must also pass the `kubectlRoleArn` when importing the cluster.
See [Using existing Clusters](https://github.com/aws/aws-cdk/tree/master/packages/@aws-cdk/aws-eks#using-existing-clusters).
## Applying Kubernetes Resources
The library supports several popular resource deployment mechanisms, among which are:
### Kubernetes Manifests
The `KubernetesManifest` construct or `cluster.addManifest` method can be used
to apply Kubernetes resource manifests to this cluster.
> When using `cluster.addManifest`, the manifest construct is defined within the cluster's stack scope. If the manifest contains
> attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error.
> To avoid this, directly use `new KubernetesManifest` to create the manifest in the scope of the other stack.
The following examples will deploy the [paulbouwer/hello-kubernetes](https://github.com/paulbouwer/hello-kubernetes)
service on the cluster:
```python
# cluster: eks.Cluster
app_label = {"app": "hello-kubernetes"}
deployment = {
"api_version": "apps/v1",
"kind": "Deployment",
"metadata": {"name": "hello-kubernetes"},
"spec": {
"replicas": 3,
"selector": {"match_labels": app_label},
"template": {
"metadata": {"labels": app_label},
"spec": {
"containers": [{
"name": "hello-kubernetes",
"image": "paulbouwer/hello-kubernetes:1.5",
"ports": [{"container_port": 8080}]
}
]
}
}
}
}
service = {
"api_version": "v1",
"kind": "Service",
"metadata": {"name": "hello-kubernetes"},
"spec": {
"type": "LoadBalancer",
"ports": [{"port": 80, "target_port": 8080}],
"selector": app_label
}
}
# option 1: use a construct
eks.KubernetesManifest(self, "hello-kub",
cluster=cluster,
manifest=[deployment, service]
)
# or, option2: use `addManifest`
cluster.add_manifest("hello-kub", service, deployment)
```
#### ALB Controller Integration
The `KubernetesManifest` construct can detect ingress resources inside your manifest and automatically add the necessary annotations
so they are picked up by the ALB Controller.
> See [Alb Controller](#alb-controller)
To that end, it offers the following properties (a sketch follows the list):
* `ingressAlb` - Signal that the ingress detection should be done.
* `ingressAlbScheme` - Which ALB scheme should be applied. Defaults to `internal`.
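A minimal sketch of enabling ingress detection (the ingress contents and scheme are illustrative, and most of the ingress spec is omitted):

```python
# cluster: eks.Cluster

eks.KubernetesManifest(self, "IngressManifest",
    cluster=cluster,
    manifest=[{
        "api_version": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": "my-ingress"},
        # the rest of the ingress spec goes here
    }],
    # detect ingress resources and annotate them for the ALB Controller
    ingress_alb=True,
    ingress_alb_scheme=eks.AlbScheme.INTERNET_FACING
)
```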
#### Adding resources from a URL
The following example will deploy a resource manifest hosted on a remote server:
```text
// This example is only available in TypeScript
import * as yaml from 'js-yaml';
import * as request from 'sync-request';
declare const cluster: eks.Cluster;
const manifestUrl = 'https://url/of/manifest.yaml';
const manifest = yaml.safeLoadAll(request('GET', manifestUrl).getBody());
cluster.addManifest('my-resource', manifest);
```
#### Dependencies
There are cases where Kubernetes resources must be deployed in a specific order.
For example, you cannot define a resource in a Kubernetes namespace before the
namespace was created.
You can represent dependencies between `KubernetesManifest`s using
`resource.node.addDependency()`:
```python
# cluster: eks.Cluster
namespace = cluster.add_manifest("my-namespace", {
"api_version": "v1",
"kind": "Namespace",
"metadata": {"name": "my-app"}
})
service = cluster.add_manifest("my-service", {
"metadata": {
"name": "myservice",
"namespace": "my-app"
},
"spec": {}
})
service.node.add_dependency(namespace)
```
**NOTE:** when a `KubernetesManifest` includes multiple resources (either directly
or through `cluster.addManifest()`, e.g. `cluster.addManifest('foo', r1, r2, r3,...)`), these resources will be applied as a single manifest via `kubectl`
and will be applied sequentially (the standard behavior in `kubectl`).
---
Kubernetes manifests are implemented as CloudFormation resources in the
CDK. This means that if the manifest is deleted from your code (or the stack is
deleted), the next `cdk deploy` will issue a `kubectl delete` command and the
Kubernetes resources in that manifest will be deleted.
#### Resource Pruning
When a resource is removed from a Kubernetes manifest, the EKS module will
automatically delete it from the cluster by injecting a *prune label* into all
manifest resources. This label is then passed to [`kubectl apply --prune`](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune-l-your-label).
Pruning is enabled by default but can be disabled through the `prune` option
when a cluster is defined:
```python
eks.Cluster(self, "MyCluster",
version=eks.KubernetesVersion.V1_21,
prune=False
)
```
#### Manifests Validation
The `kubectl` CLI supports applying a manifest by skipping the validation.
This can be accomplished by setting the `skipValidation` flag to `true` in the `KubernetesManifest` props.
```python
# cluster: eks.Cluster
eks.KubernetesManifest(self, "HelloAppWithoutValidation",
cluster=cluster,
manifest=[{"foo": "bar"}],
skip_validation=True
)
```
### Helm Charts
The `HelmChart` construct or `cluster.addHelmChart` method can be used
to add Kubernetes resources to this cluster using Helm.
> When using `cluster.addHelmChart`, the manifest construct is defined within the cluster's stack scope. If the manifest contains
> attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error.
> To avoid this, directly use `new HelmChart` to create the chart in the scope of the other stack.
The following example will install the [NGINX Ingress Controller](https://kubernetes.github.io/ingress-nginx/) to your cluster using Helm.
```python
# cluster: eks.Cluster
# option 1: use a construct
eks.HelmChart(self, "NginxIngress",
cluster=cluster,
chart="nginx-ingress",
repository="https://helm.nginx.com/stable",
namespace="kube-system"
)
# or, option2: use `addHelmChart`
cluster.add_helm_chart("NginxIngress",
chart="nginx-ingress",
repository="https://helm.nginx.com/stable",
namespace="kube-system"
)
```
Helm charts will be installed and updated using `helm upgrade --install`, where a few parameters
are being passed down (such as `repo`, `values`, `version`, `namespace`, `wait`, `timeout`, etc).
This means that if the chart is added to CDK with the same release name, it will try to update
the chart in the cluster.
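A minimal sketch of passing some of these parameters (the chart version and values shown are illustrative):

```python
# cluster: eks.Cluster

cluster.add_helm_chart("NginxIngressTuned",
    chart="nginx-ingress",
    repository="https://helm.nginx.com/stable",
    namespace="kube-system",
    version="0.17.1",  # pin a chart version (illustrative)
    values={"controller": {"replicaCount": 2}},
    wait=True,  # wait for the release's resources to become ready
    timeout=Duration.minutes(10)
)
```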
Additionally, the `chartAsset` property can be an `aws-s3-assets.Asset`. This allows the use of local, private helm charts.
```python
import aws_cdk.aws_s3_assets as s3_assets
# cluster: eks.Cluster
chart_asset = s3_assets.Asset(self, "ChartAsset",
path="/path/to/asset"
)
cluster.add_helm_chart("test-chart",
chart_asset=chart_asset
)
```
### OCI Charts
OCI charts are also supported.
Make sure to replace the `${VARS}` in the example below with appropriate values.
```python
# cluster: eks.Cluster
# option 1: use a construct
eks.HelmChart(self, "MyOCIChart",
cluster=cluster,
chart="some-chart",
repository="oci://${ACCOUNT_ID}.dkr.ecr.${ACCOUNT_REGION}.amazonaws.com/${REPO_NAME}",
namespace="oci",
version="0.0.1"
)
```
Helm charts are implemented as CloudFormation resources in CDK.
This means that if the chart is deleted from your code (or the stack is
deleted), the next `cdk deploy` will issue a `helm uninstall` command and the
Helm chart will be deleted.
When there is no `release` defined, a unique ID will be allocated for the release based
on the construct path.
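To control the release name explicitly, pass `release`; a minimal sketch (the release name is illustrative):

```python
# cluster: eks.Cluster

cluster.add_helm_chart("NginxIngressNamedRelease",
    chart="nginx-ingress",
    repository="https://helm.nginx.com/stable",
    namespace="kube-system",
    release="nginx-ingress"
)
```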
By default, all Helm charts will be installed concurrently. In some cases, this
could cause race conditions where two Helm charts attempt to deploy the same
resource or if Helm charts depend on each other. You can use
`chart.node.addDependency()` in order to declare a dependency order between
charts:
```python
# cluster: eks.Cluster
chart1 = cluster.add_helm_chart("MyChart",
chart="foo"
)
chart2 = cluster.add_helm_chart("MyOtherChart",
chart="bar"
)
chart2.node.add_dependency(chart1)
```
### CDK8s Charts
[CDK8s](https://cdk8s.io/) is an open-source library that enables Kubernetes manifest authoring using familiar programming languages. It is founded on the same technologies as the AWS CDK, such as [`constructs`](https://github.com/aws/constructs) and [`jsii`](https://github.com/aws/jsii).
> To learn more about cdk8s, visit the [Getting Started](https://cdk8s.io/docs/latest/getting-started/) tutorials.
The EKS module natively integrates with cdk8s and allows you to apply cdk8s charts on AWS EKS clusters via the `cluster.addCdk8sChart` method.
In addition to `cdk8s`, you can also use [`cdk8s+`](https://cdk8s.io/docs/latest/plus/), which provides higher-level abstractions for the core Kubernetes API objects.
You can think of them as the `L2` constructs for Kubernetes. Any other `cdk8s`-based libraries are also supported, for example [`cdk8s-debore`](https://github.com/toricls/cdk8s-debore).
To get started, add the following dependencies to your `package.json` file:
```json
"dependencies": {
"cdk8s": "^1.0.0",
"cdk8s-plus-21": "^1.0.0-beta.38",
"constructs": "^3.3.69"
}
```
Note that here we are using `cdk8s-plus-21` as we are targeting Kubernetes version 1.21.0. If you operate a different kubernetes version, you should
use the corresponding `cdk8s-plus-XX` library.
See [Select the appropriate cdk8s+ library](https://cdk8s.io/docs/latest/plus/#i-operate-kubernetes-version-1xx-which-cdk8s-library-should-i-be-using)
for more details.
Similarly to how you would create a stack by extending `@aws-cdk/core.Stack`, we recommend you create a chart of your own that extends `cdk8s.Chart`,
and add your kubernetes resources to it. You can use `aws-cdk` construct attributes and properties inside your `cdk8s` construct freely.
In this example we create a chart that accepts an `s3.Bucket` and passes its name to a kubernetes pod as an environment variable.
Notice that the chart must accept a `constructs.Construct` type as its scope, not an `@aws-cdk/core.Construct` as you would normally use.
For this reason, to avoid possible confusion, we will create the chart in a separate file:
`+ my-chart.py`
```python
import aws_cdk.aws_s3 as s3
import constructs as constructs
import cdk8s as cdk8s
import cdk8s_plus_21 as kplus
class MyChart(cdk8s.Chart):
def __init__(self, scope, id, *, bucket):
super().__init__(scope, id)
kplus.Pod(self, "Pod",
containers=[
kplus.Container(
image="my-image",
env_variables={
"BUCKET_NAME": kplus.EnvValue.from_value(bucket.bucket_name)
}
)
]
)
```
Then, in your AWS CDK app:
```python
# cluster: eks.Cluster
# some bucket..
bucket = s3.Bucket(self, "Bucket")
# create a cdk8s chart and use `cdk8s.App` as the scope.
my_chart = MyChart(cdk8s.App(), "MyChart", bucket=bucket)
# add the cdk8s chart to the cluster
cluster.add_cdk8s_chart("my-chart", my_chart)
```
#### Custom CDK8s Constructs
You can also compose a few stock `cdk8s+` constructs into your own custom construct. However, since mixing scopes between `aws-cdk` and `cdk8s` is currently not supported, the `Construct` class
you'll need to use is the one from the [`constructs`](https://github.com/aws/constructs) module, and not from `@aws-cdk/core` like you normally would.
This is why we used `new cdk8s.App()` as the scope of the chart above.
```python
import constructs as constructs
import cdk8s as cdk8s
import cdk8s_plus_21 as kplus
app = cdk8s.App()
chart = cdk8s.Chart(app, "my-chart")
class LoadBalancedWebService(constructs.Construct):
def __init__(self, scope, id, props):
super().__init__(scope, id)
deployment = kplus.Deployment(chart, "Deployment",
replicas=props.replicas,
containers=[kplus.Container(image=props.image)]
)
deployment.expose_via_service(
port=props.port,
service_type=kplus.ServiceType.LOAD_BALANCER
)
```
#### Manually importing k8s specs and CRDs
If you find yourself unable to use `cdk8s+`, or prefer to use the `k8s` native objects or CRDs directly, you can do so by manually importing them using the `cdk8s-cli`.
See [Importing kubernetes objects](https://cdk8s.io/docs/latest/cli/import/) for detailed instructions.
## Patching Kubernetes Resources
The `KubernetesPatch` construct can be used to update existing kubernetes
resources. The following example can be used to patch the `hello-kubernetes`
deployment from the example above with 5 replicas.
```python
# cluster: eks.Cluster
eks.KubernetesPatch(self, "hello-kub-deployment-label",
cluster=cluster,
resource_name="deployment/hello-kubernetes",
apply_patch={"spec": {"replicas": 5}},
restore_patch={"spec": {"replicas": 3}}
)
```
## Querying Kubernetes Resources
The `KubernetesObjectValue` construct can be used to query for information about kubernetes objects,
and use that as part of your CDK application.
For example, you can fetch the address of a [`LoadBalancer`](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) type service:
```python
# cluster: eks.Cluster
# query the load balancer address
my_service_address = eks.KubernetesObjectValue(self, "LoadBalancerAttribute",
cluster=cluster,
object_type="service",
object_name="my-service",
json_path=".status.loadBalancer.ingress[0].hostname"
)
# pass the address to a lambda function
proxy_function = lambda_.Function(self, "ProxyFunction",
handler="index.handler",
code=lambda_.Code.from_inline("my-code"),
runtime=lambda_.Runtime.NODEJS_14_X,
environment={
"my_service_address": my_service_address.value
}
)
```
Specifically, since the above use-case is quite common, there is an easier way to access that information:
```python
# cluster: eks.Cluster
load_balancer_address = cluster.get_service_load_balancer_address("my-service")
```
## Using existing clusters
The Amazon EKS library allows defining Kubernetes resources such as [Kubernetes
manifests](#kubernetes-manifests) and [Helm charts](#helm-charts) on clusters
that are not defined as part of your CDK app.
First, you'll need to "import" a cluster to your CDK app. To do that, use the
`eks.Cluster.fromClusterAttributes()` static method:
```python
cluster = eks.Cluster.from_cluster_attributes(self, "MyCluster",
cluster_name="my-cluster-name",
kubectl_role_arn="arn:aws:iam::1111111:role/iam-role-that-has-masters-access"
)
```
Then, you can use `addManifest` or `addHelmChart` to define resources inside
your Kubernetes cluster. For example:
```python
# cluster: eks.Cluster
cluster.add_manifest("Test", {
"api_version": "v1",
"kind": "ConfigMap",
"metadata": {
"name": "myconfigmap"
},
"data": {
"Key": "value",
"Another": "123454"
}
})
```
At the minimum, when importing clusters for `kubectl` management, you will need
to specify:
* `clusterName` - the name of the cluster.
* `kubectlRoleArn` - the ARN of an IAM role mapped to the `system:masters` RBAC
role. If the cluster you are importing was created using the AWS CDK, the
CloudFormation stack has an output that includes an IAM role that can be used.
Otherwise, you can create an IAM role and map it to `system:masters` manually.
  The trust policy of this role should include the `arn:aws:iam::${accountId}:root` principal
  in order to allow the execution role of the kubectl resource to assume it.
If the cluster is configured with private-only or private and restricted public
Kubernetes [endpoint access](#endpoint-access), you must also specify the following (see the sketch after this list):
* `kubectlSecurityGroupId` - the ID of an EC2 security group that is allowed
connections to the cluster's control security group. For example, the EKS managed [cluster security group](#cluster-security-group).
* `kubectlPrivateSubnetIds` - a list of private VPC subnet IDs that will be used
to access the Kubernetes endpoint.
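A minimal sketch of importing such a cluster (all identifiers are illustrative):

```python
cluster = eks.Cluster.from_cluster_attributes(self, "MyPrivateCluster",
    cluster_name="my-cluster-name",
    kubectl_role_arn="arn:aws:iam::1111111:role/iam-role-that-has-masters-access",
    kubectl_security_group_id="sg-1234567890abcdef0",
    kubectl_private_subnet_ids=["subnet-111", "subnet-222"]
)
```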
## Logging
EKS supports cluster logging for 5 different types of events:
* API requests to the cluster.
* Cluster access via the Kubernetes API.
* Authentication requests into the cluster.
* State of cluster controllers.
* Scheduling decisions.
You can enable logging for each one separately using the `clusterLogging`
property. For example:
```python
cluster = eks.Cluster(self, "Cluster",
# ...
version=eks.KubernetesVersion.V1_21,
    cluster_logging=[
        eks.ClusterLoggingTypes.API,
        eks.ClusterLoggingTypes.AUTHENTICATOR,
        eks.ClusterLoggingTypes.SCHEDULER
    ]
)
```
## Known Issues and Limitations
* [One cluster per stack](https://github.com/aws/aws-cdk/issues/10073)
* [Service Account dependencies](https://github.com/aws/aws-cdk/issues/9910)
* [Support isolated VPCs](https://github.com/aws/aws-cdk/issues/12171)
The following example adds a profile\nthat will match all pods from the \"default\" namespace:\n\n```python\n# cluster: eks.Cluster\n\ncluster.add_fargate_profile(\"MyProfile\",\n selectors=[eks.Selector(namespace=\"default\")]\n)\n```\n\nYou can also directly use the `FargateProfile` construct to create profiles under different scopes:\n\n```python\n# cluster: eks.Cluster\n\neks.FargateProfile(self, \"MyProfile\",\n cluster=cluster,\n selectors=[eks.Selector(namespace=\"default\")]\n)\n```\n\nTo create an EKS cluster that **only** uses Fargate capacity, you can use `FargateCluster`.\nThe following code defines an Amazon EKS cluster with a default Fargate Profile that matches all pods from the \"kube-system\" and \"default\" namespaces. It is also configured to [run CoreDNS on Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-coredns).\n\n```python\ncluster = eks.FargateCluster(self, \"MyCluster\",\n version=eks.KubernetesVersion.V1_21\n)\n```\n\n`FargateCluster` will create a default `FargateProfile` which can be accessed via the cluster's `defaultProfile` property. The created profile can also be customized by passing options as with `addFargateProfile`.\n\n**NOTE**: Classic Load Balancers and Network Load Balancers are not supported on\npods running on Fargate. For ingress, we recommend that you use the [ALB Ingress\nController](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html)\non Amazon EKS (minimum version v1.1.4).\n\n### Self-managed nodes\n\nAnother way of allocating capacity to an EKS cluster is by using self-managed nodes.\nEC2 instances that are part of the auto-scaling group will serve as worker nodes for the cluster.\nThis type of capacity is also commonly referred to as *EC2 Capacity** or *EC2 Nodes*.\n\nFor a detailed overview please visit [Self Managed Nodes](https://docs.aws.amazon.com/eks/latest/userguide/worker.html).\n\nCreating an auto-scaling group and connecting it to the cluster is done using the `cluster.addAutoScalingGroupCapacity` method:\n\n```python\n# cluster: eks.Cluster\n\ncluster.add_auto_scaling_group_capacity(\"frontend-nodes\",\n instance_type=ec2.InstanceType(\"t2.medium\"),\n min_capacity=3,\n vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC)\n)\n```\n\nTo connect an already initialized auto-scaling group, use the `cluster.connectAutoScalingGroupCapacity()` method:\n\n```python\n# cluster: eks.Cluster\n# asg: autoscaling.AutoScalingGroup\n\ncluster.connect_auto_scaling_group_capacity(asg)\n```\n\nTo connect a self-managed node group to an imported cluster, use the `cluster.connectAutoScalingGroupCapacity()` method:\n\n```python\n# cluster: eks.Cluster\n# asg: autoscaling.AutoScalingGroup\n\nimported_cluster = eks.Cluster.from_cluster_attributes(self, \"ImportedCluster\",\n cluster_name=cluster.cluster_name,\n cluster_security_group_id=cluster.cluster_security_group_id\n)\n\nimported_cluster.connect_auto_scaling_group_capacity(asg)\n```\n\nIn both cases, the [cluster security group](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html#cluster-sg) will be automatically attached to\nthe auto-scaling group, allowing for traffic to flow freely between managed and self-managed nodes.\n\n> **Note:** The default `updateType` for auto-scaling groups does not replace existing nodes. 
Since security groups are determined at launch time, self-managed nodes that were provisioned with version `1.78.0` or lower, will not be updated.\n> To apply the new configuration on all your self-managed nodes, you'll need to replace the nodes using the `UpdateType.REPLACING_UPDATE` policy for the [`updateType`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-autoscaling.AutoScalingGroup.html#updatetypespan-classapi-icon-api-icon-deprecated-titlethis-api-element-is-deprecated-its-use-is-not-recommended%EF%B8%8Fspan) property.\n\nYou can customize the [/etc/eks/boostrap.sh](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh) script, which is responsible\nfor bootstrapping the node to the EKS cluster. For example, you can use `kubeletExtraArgs` to add custom node labels or taints.\n\n```python\n# cluster: eks.Cluster\n\ncluster.add_auto_scaling_group_capacity(\"spot\",\n instance_type=ec2.InstanceType(\"t3.large\"),\n min_capacity=2,\n bootstrap_options=eks.BootstrapOptions(\n kubelet_extra_args=\"--node-labels foo=bar,goo=far\",\n aws_api_retry_attempts=5\n )\n)\n```\n\nTo disable bootstrapping altogether (i.e. to fully customize user-data), set `bootstrapEnabled` to `false`.\nYou can also configure the cluster to use an auto-scaling group as the default capacity:\n\n```python\ncluster = eks.Cluster(self, \"HelloEKS\",\n version=eks.KubernetesVersion.V1_21,\n default_capacity_type=eks.DefaultCapacityType.EC2\n)\n```\n\nThis will allocate an auto-scaling group with 2 *m5.large* instances (this instance type suits most common use-cases, and is good value for money).\nTo access the `AutoScalingGroup` that was created on your behalf, you can use `cluster.defaultCapacity`.\nYou can also independently create an `AutoScalingGroup` and connect it to the cluster using the `cluster.connectAutoScalingGroupCapacity` method:\n\n```python\n# cluster: eks.Cluster\n# asg: autoscaling.AutoScalingGroup\n\ncluster.connect_auto_scaling_group_capacity(asg)\n```\n\nThis will add the necessary user-data to access the apiserver and configure all connections, roles, and tags needed for the instances in the auto-scaling group to properly join the cluster.\n\n#### Spot Instances\n\nWhen using self-managed nodes, you can configure the capacity to use spot instances, greatly reducing capacity cost.\nTo enable spot capacity, use the `spotPrice` property:\n\n```python\n# cluster: eks.Cluster\n\ncluster.add_auto_scaling_group_capacity(\"spot\",\n spot_price=\"0.1094\",\n instance_type=ec2.InstanceType(\"t3.large\"),\n max_capacity=10\n)\n```\n\n> Spot instance nodes will be labeled with `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`.\n\nThe [AWS Node Termination Handler](https://github.com/aws/aws-node-termination-handler) `DaemonSet` will be\ninstalled from [Amazon EKS Helm chart repository](https://github.com/aws/eks-charts/tree/master/stable/aws-node-termination-handler) on these nodes.\nThe termination handler ensures that the Kubernetes control plane responds appropriately to events that\ncan cause your EC2 instance to become unavailable, such as [EC2 maintenance events](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html)\nand [EC2 Spot interruptions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html) and helps gracefully stop all pods running on spot nodes that are about to be\nterminated.\n\n> Handler Version: [1.7.0](https://github.com/aws/aws-node-termination-handler/releases/tag/v1.7.0)\n>\n> 
Chart Version: [0.9.5](https://github.com/aws/eks-charts/blob/v0.0.28/stable/aws-node-termination-handler/Chart.yaml)\n\nTo disable the installation of the termination handler, set the `spotInterruptHandler` property to `false`. This applies both to `addAutoScalingGroupCapacity` and `connectAutoScalingGroupCapacity`.\n\n#### Bottlerocket\n\n[Bottlerocket](https://aws.amazon.com/bottlerocket/) is a Linux-based open-source operating system that is purpose-built by Amazon Web Services for running containers on virtual machines or bare metal hosts.\n\n`Bottlerocket` is supported when using managed nodegroups or self-managed auto-scaling groups.\n\nTo create a Bottlerocket managed nodegroup:\n\n```python\n# cluster: eks.Cluster\n\ncluster.add_nodegroup_capacity(\"BottlerocketNG\",\n ami_type=eks.NodegroupAmiType.BOTTLEROCKET_X86_64\n)\n```\n\nThe following example will create an auto-scaling group of 2 `t3.small` Linux instances running with the `Bottlerocket` AMI.\n\n```python\n# cluster: eks.Cluster\n\ncluster.add_auto_scaling_group_capacity(\"BottlerocketNodes\",\n instance_type=ec2.InstanceType(\"t3.small\"),\n min_capacity=2,\n machine_image_type=eks.MachineImageType.BOTTLEROCKET\n)\n```\n\nThe specific Bottlerocket AMI variant will be auto selected according to the k8s version for the `x86_64` architecture.\nFor example, if the Amazon EKS cluster version is `1.17`, the Bottlerocket AMI variant will be auto selected as\n`aws-k8s-1.17` behind the scene.\n\n> See [Variants](https://github.com/bottlerocket-os/bottlerocket/blob/develop/README.md#variants) for more details.\n\nPlease note Bottlerocket does not allow to customize bootstrap options and `bootstrapOptions` properties is not supported when you create the `Bottlerocket` capacity.\n\nFor more details about Bottlerocket, see [Bottlerocket FAQs](https://aws.amazon.com/bottlerocket/faqs/) and [Bottlerocket Open Source Blog](https://aws.amazon.com/blogs/opensource/announcing-the-general-availability-of-bottlerocket-an-open-source-linux-distribution-purpose-built-to-run-containers/).\n\n### Endpoint Access\n\nWhen you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as `kubectl`)\n\nBy default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of\nAWS Identity and Access Management (IAM) and native Kubernetes [Role Based Access Control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (RBAC).\n\nYou can configure the [cluster endpoint access](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) by using the `endpointAccess` property:\n\n```python\ncluster = eks.Cluster(self, \"hello-eks\",\n version=eks.KubernetesVersion.V1_21,\n endpoint_access=eks.EndpointAccess.PRIVATE\n)\n```\n\nThe default value is `eks.EndpointAccess.PUBLIC_AND_PRIVATE`. 
Which means the cluster endpoint is accessible from outside of your VPC, but worker node traffic and `kubectl` commands issued by this library stay within your VPC.\n\n### Alb Controller\n\nSome Kubernetes resources are commonly implemented on AWS with the help of the [ALB Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/).\n\nFrom the docs:\n\n> AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster.\n>\n> * It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.\n> * It satisfies Kubernetes Service resources by provisioning Network Load Balancers.\n\nTo deploy the controller on your EKS cluster, configure the `albController` property:\n\n```python\neks.Cluster(self, \"HelloEKS\",\n version=eks.KubernetesVersion.V1_21,\n alb_controller=eks.AlbControllerOptions(\n version=eks.AlbControllerVersion.V2_4_1\n )\n)\n```\n\nQuerying the controller pods should look something like this:\n\n```console\n\u276f kubectl get pods -n kube-system\nNAME READY STATUS RESTARTS AGE\naws-load-balancer-controller-76bd6c7586-d929p 1/1 Running 0 109m\naws-load-balancer-controller-76bd6c7586-fqxph 1/1 Running 0 109m\n...\n...\n```\n\nEvery Kubernetes manifest that utilizes the ALB Controller is effectively dependant on the controller.\nIf the controller is deleted before the manifest, it might result in dangling ELB/ALB resources.\nCurrently, the EKS construct library does not detect such dependencies, and they should be done explicitly.\n\nFor example:\n\n```python\n# cluster: eks.Cluster\n\nmanifest = cluster.add_manifest(\"manifest\", {})\nif cluster.alb_controller:\n manifest.node.add_dependency(cluster.alb_controller)\n```\n\n### VPC Support\n\nYou can specify the VPC of the cluster using the `vpc` and `vpcSubnets` properties:\n\n```python\n# vpc: ec2.Vpc\n\n\neks.Cluster(self, \"HelloEKS\",\n version=eks.KubernetesVersion.V1_21,\n vpc=vpc,\n vpc_subnets=[ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT)]\n)\n```\n\n> Note: Isolated VPCs (i.e with no internet access) are not currently supported. See https://github.com/aws/aws-cdk/issues/12171\n\nIf you do not specify a VPC, one will be created on your behalf, which you can then access via `cluster.vpc`. The cluster VPC will be associated to any EKS managed capacity (i.e Managed Node Groups and Fargate Profiles).\n\nPlease note that the `vpcSubnets` property defines the subnets where EKS will place the *control plane* ENIs. To choose\nthe subnets where EKS will place the worker nodes, please refer to the **Provisioning clusters** section above.\n\nIf you allocate self managed capacity, you can specify which subnets should the auto-scaling group use:\n\n```python\n# vpc: ec2.Vpc\n# cluster: eks.Cluster\n\ncluster.add_auto_scaling_group_capacity(\"nodes\",\n vpc_subnets=ec2.SubnetSelection(subnets=vpc.private_subnets),\n instance_type=ec2.InstanceType(\"t2.medium\")\n)\n```\n\nThere are two additional components you might want to provision within the VPC.\n\n#### Kubectl Handler\n\nThe `KubectlHandler` is a Lambda function responsible to issuing `kubectl` and `helm` commands against the cluster when you add resource manifests to the cluster.\n\nThe handler association to the VPC is derived from the `endpointAccess` configuration. 
The rule of thumb is: *If the cluster VPC can be associated, it will be*.\n\nBreaking this down, it means that if the endpoint exposes private access (via `EndpointAccess.PRIVATE` or `EndpointAccess.PUBLIC_AND_PRIVATE`), and the VPC contains **private** subnets, the Lambda function will be provisioned inside the VPC and use the private subnets to interact with the cluster. This is the common use-case.\n\nIf the endpoint does not expose private access (via `EndpointAccess.PUBLIC`) **or** the VPC does not contain private subnets, the function will not be provisioned within the VPC.\n\nIf your use-case requires control over the IAM role that the KubeCtl Handler assumes, a custom role can be passed through the ClusterProps (as `kubectlLambdaRole`) of the EKS Cluster construct.\n\n#### Cluster Handler\n\nThe `ClusterHandler` is a set of Lambda functions (`onEventHandler`, `isCompleteHandler`) responsible for interacting with the EKS API in order to control the cluster lifecycle. To provision these functions inside the VPC, set the `placeClusterHandlerInVpc` property to `true`. This will place the functions inside the private subnets of the VPC based on the selection strategy specified in the [`vpcSubnets`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-eks.Cluster.html#vpcsubnetsspan-classapi-icon-api-icon-experimental-titlethis-api-element-is-experimental-it-may-change-without-noticespan) property.\n\nYou can configure the environment of the Cluster Handler functions by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:\n\n```python\n# proxy_instance_security_group: ec2.SecurityGroup\n\ncluster = eks.Cluster(self, \"hello-eks\",\n version=eks.KubernetesVersion.V1_21,\n cluster_handler_environment={\n \"https_proxy\": \"http://proxy.myproxy.com\"\n },\n #\n # If the proxy is not open publicly, you can pass a security group to the\n # Cluster Handler Lambdas so that it can reach the proxy.\n #\n cluster_handler_security_group=proxy_instance_security_group\n)\n```\n\n### Kubectl Support\n\nThe resources are created in the cluster by running `kubectl apply` from a python lambda function.\n\nBy default, CDK will create a new python lambda function to apply your k8s manifests. If you want to use an existing kubectl provider function, for example with tight trusted entities on your IAM Roles - you can import the existing provider and then use the imported provider when importing the cluster:\n\n```python\nhandler_role = iam.Role.from_role_arn(self, \"HandlerRole\", \"arn:aws:iam::123456789012:role/lambda-role\")\nkubectl_provider = eks.KubectlProvider.from_kubectl_provider_attributes(self, \"KubectlProvider\",\n function_arn=\"arn:aws:lambda:us-east-2:123456789012:function:my-function:1\",\n kubectl_role_arn=\"arn:aws:iam::123456789012:role/kubectl-role\",\n handler_role=handler_role\n)\n\ncluster = eks.Cluster.from_cluster_attributes(self, \"Cluster\",\n cluster_name=\"cluster\",\n kubectl_provider=kubectl_provider\n)\n```\n\n#### Environment\n\nYou can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an http proxy:\n\n```python\ncluster = eks.Cluster(self, \"hello-eks\",\n version=eks.KubernetesVersion.V1_21,\n kubectl_environment={\n \"http_proxy\": \"http://proxy.myproxy.com\"\n }\n)\n```\n\n#### Runtime\n\nThe kubectl handler uses `kubectl`, `helm` and the `aws` CLI in order to\ninteract with the cluster. 
These are bundled into AWS Lambda layers included in\nthe `@aws-cdk/lambda-layer-awscli` and `@aws-cdk/lambda-layer-kubectl` modules.\n\nYou can specify a custom `lambda.LayerVersion` if you wish to use a different\nversion of these tools. The handler expects the layer to include the following\nthree executables:\n\n```text\nhelm/helm\nkubectl/kubectl\nawscli/aws\n```\n\nSee more information in the\n[Dockerfile](https://github.com/aws/aws-cdk/tree/master/packages/%40aws-cdk/lambda-layer-awscli/layer) for @aws-cdk/lambda-layer-awscli\nand the\n[Dockerfile](https://github.com/aws/aws-cdk/tree/master/packages/%40aws-cdk/lambda-layer-kubectl/layer) for @aws-cdk/lambda-layer-kubectl.\n\n```python\nlayer = lambda_.LayerVersion(self, \"KubectlLayer\",\n code=lambda_.Code.from_asset(\"layer.zip\")\n)\n```\n\nNow specify when the cluster is defined:\n\n```python\n# layer: lambda.LayerVersion\n# vpc: ec2.Vpc\n\n\ncluster1 = eks.Cluster(self, \"MyCluster\",\n kubectl_layer=layer,\n vpc=vpc,\n cluster_name=\"cluster-name\",\n version=eks.KubernetesVersion.V1_21\n)\n\n# or\ncluster2 = eks.Cluster.from_cluster_attributes(self, \"MyCluster\",\n kubectl_layer=layer,\n vpc=vpc,\n cluster_name=\"cluster-name\"\n)\n```\n\n#### Memory\n\nBy default, the kubectl provider is configured with 1024MiB of memory. You can use the `kubectlMemory` option to specify the memory size for the AWS Lambda function:\n\n```python\n# or\n# vpc: ec2.Vpc\neks.Cluster(self, \"MyCluster\",\n kubectl_memory=Size.gibibytes(4),\n version=eks.KubernetesVersion.V1_21\n)\neks.Cluster.from_cluster_attributes(self, \"MyCluster\",\n kubectl_memory=Size.gibibytes(4),\n vpc=vpc,\n cluster_name=\"cluster-name\"\n)\n```\n\n### ARM64 Support\n\nInstance types with `ARM64` architecture are supported in both managed nodegroup and self-managed capacity. Simply specify an ARM64 `instanceType` (such as `m6g.medium`), and the latest\nAmazon Linux 2 AMI for ARM64 will be automatically selected.\n\n```python\n# cluster: eks.Cluster\n\n# add a managed ARM64 nodegroup\ncluster.add_nodegroup_capacity(\"extra-ng-arm\",\n instance_types=[ec2.InstanceType(\"m6g.medium\")],\n min_size=2\n)\n\n# add a self-managed ARM64 nodegroup\ncluster.add_auto_scaling_group_capacity(\"self-ng-arm\",\n instance_type=ec2.InstanceType(\"m6g.medium\"),\n min_capacity=2\n)\n```\n\n### Masters Role\n\nWhen you create a cluster, you can specify a `mastersRole`. 
The `Cluster` construct will associate this role with the `system:masters` [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) group, giving it super-user access to the cluster.\n\n```python\n# role: iam.Role\n\neks.Cluster(self, \"HelloEKS\",\n version=eks.KubernetesVersion.V1_21,\n masters_role=role\n)\n```\n\nIf you do not specify it, a default role will be created on your behalf, that can be assumed by anyone in the account with `sts:AssumeRole` permissions for this role.\n\nThis is the role you see as part of the stack outputs mentioned in the [Quick Start](#quick-start).\n\n```console\n$ aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy\nAdded new context arn:aws:eks:rrrrr:112233445566:cluster/cluster-xxxxx to /home/boom/.kube/config\n```\n\n### Encryption\n\nWhen you create an Amazon EKS cluster, envelope encryption of Kubernetes secrets using the AWS Key Management Service (AWS KMS) can be enabled.\nThe documentation on [creating a cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)\ncan provide more details about the customer master key (CMK) that can be used for the encryption.\n\nYou can use the `secretsEncryptionKey` to configure which key the cluster will use to encrypt Kubernetes secrets. By default, an AWS Managed key will be used.\n\n> This setting can only be specified when the cluster is created and cannot be updated.\n\n```python\nsecrets_key = kms.Key(self, \"SecretsKey\")\ncluster = eks.Cluster(self, \"MyCluster\",\n secrets_encryption_key=secrets_key,\n version=eks.KubernetesVersion.V1_21\n)\n```\n\nYou can also use a similar configuration for running a cluster built using the FargateCluster construct.\n\n```python\nsecrets_key = kms.Key(self, \"SecretsKey\")\ncluster = eks.FargateCluster(self, \"MyFargateCluster\",\n secrets_encryption_key=secrets_key,\n version=eks.KubernetesVersion.V1_21\n)\n```\n\nThe Amazon Resource Name (ARN) for that CMK can be retrieved.\n\n```python\n# cluster: eks.Cluster\n\ncluster_encryption_config_key_arn = cluster.cluster_encryption_config_key_arn\n```\n\n## Permissions and Security\n\nAmazon EKS provides several mechanism of securing the cluster and granting permissions to specific IAM users and roles.\n\n### AWS IAM Mapping\n\nAs described in the [Amazon EKS User Guide](https://docs.aws.amazon.com/en_us/eks/latest/userguide/add-user-role.html), you can map AWS IAM users and roles to [Kubernetes Role-based access control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac).\n\nThe Amazon EKS construct manages the *aws-auth* `ConfigMap` Kubernetes resource on your behalf and exposes an API through the `cluster.awsAuth` for mapping\nusers, roles and accounts.\n\nFurthermore, when auto-scaling group capacity is added to the cluster, the IAM instance role of the auto-scaling group will be automatically mapped to RBAC so nodes can connect to the cluster. 
No manual mapping is required.\n\nFor example, let's say you want to grant an IAM user administrative privileges on your cluster:\n\n```python\n# cluster: eks.Cluster\n\nadmin_user = iam.User(self, \"Admin\")\ncluster.aws_auth.add_user_mapping(admin_user, groups=[\"system:masters\"])\n```\n\nA convenience method for mapping a role to the `system:masters` group is also available:\n\n```python\n# cluster: eks.Cluster\n# role: iam.Role\n\ncluster.aws_auth.add_masters_role(role)\n```\n\n### Cluster Security Group\n\nWhen you create an Amazon EKS cluster, a [cluster security group](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)\nis automatically created as well. This security group is designed to allow all traffic from the control plane and managed node groups to flow freely\nbetween each other.\n\nThe ID for that security group can be retrieved after creating the cluster.\n\n```python\n# cluster: eks.Cluster\n\ncluster_security_group_id = cluster.cluster_security_group_id\n```\n\n### Node SSH Access\n\nIf you want to be able to SSH into your worker nodes, you must already have an SSH key in the region you're connecting to and pass it when\nyou add capacity to the cluster. You must also be able to connect to the hosts (meaning they must have a public IP and you\nshould be allowed to connect to them on port 22):\n\nSee [SSH into nodes](test/example.ssh-into-nodes.lit.ts) for a code example.\n\nIf you want to SSH into nodes in a private subnet, you should set up a bastion host in a public subnet. That setup is recommended, but is\nunfortunately beyond the scope of this documentation.\n\n### Service Accounts\n\nWith services account you can provide Kubernetes Pods access to AWS resources.\n\n```python\n# cluster: eks.Cluster\n\n# add service account\nservice_account = cluster.add_service_account(\"MyServiceAccount\")\n\nbucket = s3.Bucket(self, \"Bucket\")\nbucket.grant_read_write(service_account)\n\nmypod = cluster.add_manifest(\"mypod\", {\n \"api_version\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\"name\": \"mypod\"},\n \"spec\": {\n \"service_account_name\": service_account.service_account_name,\n \"containers\": [{\n \"name\": \"hello\",\n \"image\": \"paulbouwer/hello-kubernetes:1.5\",\n \"ports\": [{\"container_port\": 8080}]\n }\n ]\n }\n})\n\n# create the resource after the service account.\nmypod.node.add_dependency(service_account)\n\n# print the IAM role arn for this service account\nCfnOutput(self, \"ServiceAccountIamRole\", value=service_account.role.role_arn)\n```\n\nNote that using `serviceAccount.serviceAccountName` above **does not** translate into a resource dependency.\nThis is why an explicit dependency is needed. 
See [https://github.com/aws/aws-cdk/issues/9910](https://github.com/aws/aws-cdk/issues/9910) for more details.\n\nIt is possible to pass annotations and labels to the service account.\n\n```python\n# cluster: eks.Cluster\n\n# add service account with annotations and labels\nservice_account = cluster.add_service_account(\"MyServiceAccount\",\n annotations={\n \"eks.amazonaws.com/sts-regional-endpoints\": \"false\"\n },\n labels={\n \"some-label\": \"with-some-value\"\n }\n)\n```\n\nYou can also add service accounts to existing clusters.\nTo do so, pass the `openIdConnectProvider` property when you import the cluster into the application.\n\n```python\n# or create a new one using an existing issuer url\n# issuer_url: str\n# you can import an existing provider\nprovider = eks.OpenIdConnectProvider.from_open_id_connect_provider_arn(self, \"Provider\", \"arn:aws:iam::123456:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/AB123456ABC\")\nprovider2 = eks.OpenIdConnectProvider(self, \"Provider\",\n url=issuer_url\n)\n\ncluster = eks.Cluster.from_cluster_attributes(self, \"MyCluster\",\n cluster_name=\"Cluster\",\n open_id_connect_provider=provider,\n kubectl_role_arn=\"arn:aws:iam::123456:role/service-role/k8sservicerole\"\n)\n\nservice_account = cluster.add_service_account(\"MyServiceAccount\")\n\nbucket = s3.Bucket(self, \"Bucket\")\nbucket.grant_read_write(service_account)\n```\n\nNote that adding service accounts requires running `kubectl` commands against the cluster.\nThis means you must also pass the `kubectlRoleArn` when importing the cluster.\nSee [Using existing Clusters](https://github.com/aws/aws-cdk/tree/master/packages/@aws-cdk/aws-eks#using-existing-clusters).\n\n## Applying Kubernetes Resources\n\nThe library supports several popular resource deployment mechanisms, among which are:\n\n### Kubernetes Manifests\n\nThe `KubernetesManifest` construct or `cluster.addManifest` method can be used\nto apply Kubernetes resource manifests to this cluster.\n\n> When using `cluster.addManifest`, the manifest construct is defined within the cluster's stack scope. 
If the manifest contains\n> attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error.\n> To avoid this, directly use `new KubernetesManifest` to create the manifest in the scope of the other stack.\n\nThe following examples will deploy the [paulbouwer/hello-kubernetes](https://github.com/paulbouwer/hello-kubernetes)\nservice on the cluster:\n\n```python\n# cluster: eks.Cluster\n\napp_label = {\"app\": \"hello-kubernetes\"}\n\ndeployment = {\n \"api_version\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"metadata\": {\"name\": \"hello-kubernetes\"},\n \"spec\": {\n \"replicas\": 3,\n \"selector\": {\"match_labels\": app_label},\n \"template\": {\n \"metadata\": {\"labels\": app_label},\n \"spec\": {\n \"containers\": [{\n \"name\": \"hello-kubernetes\",\n \"image\": \"paulbouwer/hello-kubernetes:1.5\",\n \"ports\": [{\"container_port\": 8080}]\n }\n ]\n }\n }\n }\n}\n\nservice = {\n \"api_version\": \"v1\",\n \"kind\": \"Service\",\n \"metadata\": {\"name\": \"hello-kubernetes\"},\n \"spec\": {\n \"type\": \"LoadBalancer\",\n \"ports\": [{\"port\": 80, \"target_port\": 8080}],\n \"selector\": app_label\n }\n}\n\n# option 1: use a construct\neks.KubernetesManifest(self, \"hello-kub\",\n cluster=cluster,\n manifest=[deployment, service]\n)\n\n# or, option2: use `addManifest`\ncluster.add_manifest(\"hello-kub\", service, deployment)\n```\n\n#### ALB Controller Integration\n\nThe `KubernetesManifest` construct can detect ingress resources inside your manifest and automatically add the necessary annotations\nso they are picked up by the ALB Controller.\n\n> See [Alb Controller](#alb-controller)\n\nTo that end, it offers the following properties:\n\n* `ingressAlb` - Signal that the ingress detection should be done.\n* `ingressAlbScheme` - Which ALB scheme should be applied. Defaults to `internal`.\n\n#### Adding resources from a URL\n\nThe following example will deploy the resource manifest hosting on remote server:\n\n```text\n// This example is only available in TypeScript\n\nimport * as yaml from 'js-yaml';\nimport * as request from 'sync-request';\n\ndeclare const cluster: eks.Cluster;\nconst manifestUrl = 'https://url/of/manifest.yaml';\nconst manifest = yaml.safeLoadAll(request('GET', manifestUrl).getBody());\ncluster.addManifest('my-resource', manifest);\n```\n\n#### Dependencies\n\nThere are cases where Kubernetes resources must be deployed in a specific order.\nFor example, you cannot define a resource in a Kubernetes namespace before the\nnamespace was created.\n\nYou can represent dependencies between `KubernetesManifest`s using\n`resource.node.addDependency()`:\n\n```python\n# cluster: eks.Cluster\n\nnamespace = cluster.add_manifest(\"my-namespace\", {\n \"api_version\": \"v1\",\n \"kind\": \"Namespace\",\n \"metadata\": {\"name\": \"my-app\"}\n})\n\nservice = cluster.add_manifest(\"my-service\", {\n \"metadata\": {\n \"name\": \"myservice\",\n \"namespace\": \"my-app\"\n },\n \"spec\": {}\n})\n\nservice.node.add_dependency(namespace)\n```\n\n**NOTE:** when a `KubernetesManifest` includes multiple resources (either directly\nor through `cluster.addManifest()`) (e.g. `cluster.addManifest('foo', r1, r2, r3,...)`), these resources will be applied as a single manifest via `kubectl`\nand will be applied sequentially (the standard behavior in `kubectl`).\n\n---\n\n\nSince Kubernetes manifests are implemented as CloudFormation resources in the\nCDK. 
This means that if the manifest is deleted from your code (or the stack is\ndeleted), the next `cdk deploy` will issue a `kubectl delete` command and the\nKubernetes resources in that manifest will be deleted.\n\n#### Resource Pruning\n\nWhen a resource is deleted from a Kubernetes manifest, the EKS module will\nautomatically delete these resources by injecting a *prune label* to all\nmanifest resources. This label is then passed to [`kubectl apply --prune`](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune-l-your-label).\n\nPruning is enabled by default but can be disabled through the `prune` option\nwhen a cluster is defined:\n\n```python\neks.Cluster(self, \"MyCluster\",\n version=eks.KubernetesVersion.V1_21,\n prune=False\n)\n```\n\n#### Manifests Validation\n\nThe `kubectl` CLI supports applying a manifest by skipping the validation.\nThis can be accomplished by setting the `skipValidation` flag to `true` in the `KubernetesManifest` props.\n\n```python\n# cluster: eks.Cluster\n\neks.KubernetesManifest(self, \"HelloAppWithoutValidation\",\n cluster=cluster,\n manifest=[{\"foo\": \"bar\"}],\n skip_validation=True\n)\n```\n\n### Helm Charts\n\nThe `HelmChart` construct or `cluster.addHelmChart` method can be used\nto add Kubernetes resources to this cluster using Helm.\n\n> When using `cluster.addHelmChart`, the manifest construct is defined within the cluster's stack scope. If the manifest contains\n> attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error.\n> To avoid this, directly use `new HelmChart` to create the chart in the scope of the other stack.\n\nThe following example will install the [NGINX Ingress Controller](https://kubernetes.github.io/ingress-nginx/) to your cluster using Helm.\n\n```python\n# cluster: eks.Cluster\n\n# option 1: use a construct\neks.HelmChart(self, \"NginxIngress\",\n cluster=cluster,\n chart=\"nginx-ingress\",\n repository=\"https://helm.nginx.com/stable\",\n namespace=\"kube-system\"\n)\n\n# or, option2: use `addHelmChart`\ncluster.add_helm_chart(\"NginxIngress\",\n chart=\"nginx-ingress\",\n repository=\"https://helm.nginx.com/stable\",\n namespace=\"kube-system\"\n)\n```\n\nHelm charts will be installed and updated using `helm upgrade --install`, where a few parameters\nare being passed down (such as `repo`, `values`, `version`, `namespace`, `wait`, `timeout`, etc).\nThis means that if the chart is added to CDK with the same release name, it will try to update\nthe chart in the cluster.\n\nAdditionally, the `chartAsset` property can be an `aws-s3-assets.Asset`. 
This allows the use of local, private helm charts.\n\n```python\nimport aws_cdk.aws_s3_assets as s3_assets\n\n# cluster: eks.Cluster\n\nchart_asset = s3_assets.Asset(self, \"ChartAsset\",\n path=\"/path/to/asset\"\n)\n\ncluster.add_helm_chart(\"test-chart\",\n chart_asset=chart_asset\n)\n```\n\n### OCI Charts\n\nOCI charts are also supported.\nAlso replace the `${VARS}` with appropriate values.\n\n```python\n# cluster: eks.Cluster\n\n# option 1: use a construct\neks.HelmChart(self, \"MyOCIChart\",\n cluster=cluster,\n chart=\"some-chart\",\n repository=\"oci://${ACCOUNT_ID}.dkr.ecr.${ACCOUNT_REGION}.amazonaws.com/${REPO_NAME}\",\n namespace=\"oci\",\n version=\"0.0.1\"\n)\n```\n\nHelm charts are implemented as CloudFormation resources in CDK.\nThis means that if the chart is deleted from your code (or the stack is\ndeleted), the next `cdk deploy` will issue a `helm uninstall` command and the\nHelm chart will be deleted.\n\nWhen there is no `release` defined, a unique ID will be allocated for the release based\non the construct path.\n\nBy default, all Helm charts will be installed concurrently. In some cases, this\ncould cause race conditions where two Helm charts attempt to deploy the same\nresource or if Helm charts depend on each other. You can use\n`chart.node.addDependency()` in order to declare a dependency order between\ncharts:\n\n```python\n# cluster: eks.Cluster\n\nchart1 = cluster.add_helm_chart(\"MyChart\",\n chart=\"foo\"\n)\nchart2 = cluster.add_helm_chart(\"MyChart\",\n chart=\"bar\"\n)\n\nchart2.node.add_dependency(chart1)\n```\n\n#### CDK8s Charts\n\n[CDK8s](https://cdk8s.io/) is an open-source library that enables Kubernetes manifest authoring using familiar programming languages. It is founded on the same technologies as the AWS CDK, such as [`constructs`](https://github.com/aws/constructs) and [`jsii`](https://github.com/aws/jsii).\n\n> To learn more about cdk8s, visit the [Getting Started](https://cdk8s.io/docs/latest/getting-started/) tutorials.\n\nThe EKS module natively integrates with cdk8s and allows you to apply cdk8s charts on AWS EKS clusters via the `cluster.addCdk8sChart` method.\n\nIn addition to `cdk8s`, you can also use [`cdk8s+`](https://cdk8s.io/docs/latest/plus/), which provides higher level abstraction for the core kubernetes api objects.\nYou can think of it like the `L2` constructs for Kubernetes. Any other `cdk8s` based libraries are also supported, for example [`cdk8s-debore`](https://github.com/toricls/cdk8s-debore).\n\nTo get started, add the following dependencies to your `package.json` file:\n\n```json\n\"dependencies\": {\n \"cdk8s\": \"^1.0.0\",\n \"cdk8s-plus-21\": \"^1.0.0-beta.38\",\n \"constructs\": \"^3.3.69\"\n}\n```\n\nNote that here we are using `cdk8s-plus-21` as we are targeting Kubernetes version 1.21.0. If you operate a different kubernetes version, you should\nuse the corresponding `cdk8s-plus-XX` library.\nSee [Select the appropriate cdk8s+ library](https://cdk8s.io/docs/latest/plus/#i-operate-kubernetes-version-1xx-which-cdk8s-library-should-i-be-using)\nfor more details.\n\nSimilarly to how you would create a stack by extending `@aws-cdk/core.Stack`, we recommend you create a chart of your own that extends `cdk8s.Chart`,\nand add your kubernetes resources to it. 
You can use `aws-cdk` construct attributes and properties inside your `cdk8s` construct freely.\n\nIn this example we create a chart that accepts an `s3.Bucket` and passes its name to a kubernetes pod as an environment variable.\n\nNotice that the chart must accept a `constructs.Construct` type as its scope, not an `@aws-cdk/core.Construct` as you would normally use.\nFor this reason, to avoid possible confusion, we will create the chart in a separate file:\n\n`+ my-chart.ts`\n\n```python\nimport aws_cdk.aws_s3 as s3\nimport constructs as constructs\nimport cdk8s as cdk8s\nimport cdk8s_plus_21 as kplus\n\nclass MyChart(cdk8s.Chart):\n def __init__(self, scope, id, *, bucket):\n super().__init__(scope, id)\n\n kplus.Pod(self, \"Pod\",\n containers=[\n kplus.Container(\n image=\"my-image\",\n env_variables={\n \"BUCKET_NAME\": kplus.EnvValue.from_value(bucket.bucket_name)\n }\n )\n ]\n )\n```\n\nThen, in your AWS CDK app:\n\n```python\n# cluster: eks.Cluster\n\n\n# some bucket..\nbucket = s3.Bucket(self, \"Bucket\")\n\n# create a cdk8s chart and use `cdk8s.App` as the scope.\nmy_chart = MyChart(cdk8s.App(), \"MyChart\", bucket=bucket)\n\n# add the cdk8s chart to the cluster\ncluster.add_cdk8s_chart(\"my-chart\", my_chart)\n```\n\n##### Custom CDK8s Constructs\n\nYou can also compose a few stock `cdk8s+` constructs into your own custom construct. However, since mixing scopes between `aws-cdk` and `cdk8s` is currently not supported, the `Construct` class\nyou'll need to use is the one from the [`constructs`](https://github.com/aws/constructs) module, and not from `@aws-cdk/core` like you normally would.\nThis is why we used `new cdk8s.App()` as the scope of the chart above.\n\n```python\nimport constructs as constructs\nimport cdk8s as cdk8s\nimport cdk8s_plus_21 as kplus\n\napp = cdk8s.App()\nchart = cdk8s.Chart(app, \"my-chart\")\n\nclass LoadBalancedWebService(constructs.Construct):\n def __init__(self, scope, id, props):\n super().__init__(scope, id)\n\n deployment = kplus.Deployment(chart, \"Deployment\",\n replicas=props.replicas,\n containers=[kplus.Container(image=props.image)]\n )\n\n deployment.expose_via_service(\n port=props.port,\n service_type=kplus.ServiceType.LOAD_BALANCER\n )\n```\n\n##### Manually importing k8s specs and CRD's\n\nIf you find yourself unable to use `cdk8s+`, or just like to directly use the `k8s` native objects or CRD's, you can do so by manually importing them using the `cdk8s-cli`.\n\nSee [Importing kubernetes objects](https://cdk8s.io/docs/latest/cli/import/) for detailed instructions.\n\n## Patching Kubernetes Resources\n\nThe `KubernetesPatch` construct can be used to update existing kubernetes\nresources. 
The following example can be used to patch the `hello-kubernetes`\ndeployment from the example above with 5 replicas.\n\n```python\n# cluster: eks.Cluster\n\neks.KubernetesPatch(self, \"hello-kub-deployment-label\",\n cluster=cluster,\n resource_name=\"deployment/hello-kubernetes\",\n apply_patch={\"spec\": {\"replicas\": 5}},\n restore_patch={\"spec\": {\"replicas\": 3}}\n)\n```\n\n## Querying Kubernetes Resources\n\nThe `KubernetesObjectValue` construct can be used to query for information about kubernetes objects,\nand use that as part of your CDK application.\n\nFor example, you can fetch the address of a [`LoadBalancer`](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) type service:\n\n```python\n# cluster: eks.Cluster\n\n# query the load balancer address\nmy_service_address = eks.KubernetesObjectValue(self, \"LoadBalancerAttribute\",\n cluster=cluster,\n object_type=\"service\",\n object_name=\"my-service\",\n json_path=\".status.loadBalancer.ingress[0].hostname\"\n)\n\n# pass the address to a lambda function\nproxy_function = lambda_.Function(self, \"ProxyFunction\",\n handler=\"index.handler\",\n code=lambda_.Code.from_inline(\"my-code\"),\n runtime=lambda_.Runtime.NODEJS_14_X,\n environment={\n \"my_service_address\": my_service_address.value\n }\n)\n```\n\nSpecifically, since the above use-case is quite common, there is an easier way to access that information:\n\n```python\n# cluster: eks.Cluster\n\nload_balancer_address = cluster.get_service_load_balancer_address(\"my-service\")\n```\n\n## Using existing clusters\n\nThe Amazon EKS library allows defining Kubernetes resources such as [Kubernetes\nmanifests](#kubernetes-resources) and [Helm charts](#helm-charts) on clusters\nthat are not defined as part of your CDK app.\n\nFirst, you'll need to \"import\" a cluster to your CDK app. To do that, use the\n`eks.Cluster.fromClusterAttributes()` static method:\n\n```python\ncluster = eks.Cluster.from_cluster_attributes(self, \"MyCluster\",\n cluster_name=\"my-cluster-name\",\n kubectl_role_arn=\"arn:aws:iam::1111111:role/iam-role-that-has-masters-access\"\n)\n```\n\nThen, you can use `addManifest` or `addHelmChart` to define resources inside\nyour Kubernetes cluster. For example:\n\n```python\n# cluster: eks.Cluster\n\ncluster.add_manifest(\"Test\", {\n \"api_version\": \"v1\",\n \"kind\": \"ConfigMap\",\n \"metadata\": {\n \"name\": \"myconfigmap\"\n },\n \"data\": {\n \"Key\": \"value\",\n \"Another\": \"123454\"\n }\n})\n```\n\nAt the minimum, when importing clusters for `kubectl` management, you will need\nto specify:\n\n* `clusterName` - the name of the cluster.\n* `kubectlRoleArn` - the ARN of an IAM role mapped to the `system:masters` RBAC\n role. If the cluster you are importing was created using the AWS CDK, the\n CloudFormation stack has an output that includes an IAM role that can be used.\n Otherwise, you can create an IAM role and map it to `system:masters` manually.\n The trust policy of this role should include the the\n `arn:aws::iam::${accountId}:root` principal in order to allow the execution\n role of the kubectl resource to assume it.\n\nIf the cluster is configured with private-only or private and restricted public\nKubernetes [endpoint access](#endpoint-access), you must also specify:\n\n* `kubectlSecurityGroupId` - the ID of an EC2 security group that is allowed\n connections to the cluster's control security group. 
For example, the EKS managed [cluster security group](#cluster-security-group).\n* `kubectlPrivateSubnetIds` - a list of private VPC subnets IDs that will be used\n to access the Kubernetes endpoint.\n\n## Logging\n\nEKS supports cluster logging for 5 different types of events:\n\n* API requests to the cluster.\n* Cluster access via the Kubernetes API.\n* Authentication requests into the cluster.\n* State of cluster controllers.\n* Scheduling decisions.\n\nYou can enable logging for each one separately using the `clusterLogging`\nproperty. For example:\n\n```python\ncluster = eks.Cluster(self, \"Cluster\",\n # ...\n version=eks.KubernetesVersion.V1_21,\n cluster_logging=[eks.ClusterLoggingTypes.API, eks.ClusterLoggingTypes.AUTHENTICATOR, eks.ClusterLoggingTypes.SCHEDULER\n ]\n)\n```\n\n## Known Issues and Limitations\n\n* [One cluster per stack](https://github.com/aws/aws-cdk/issues/10073)\n* [Service Account dependencies](https://github.com/aws/aws-cdk/issues/9910)\n* [Support isolated VPCs](https://github.com/aws/aws-cdk/issues/12171)\n\n\n",
"bugtrack_url": null,
"license": "Apache-2.0",
"summary": "The CDK Construct Library for AWS::EKS",
"version": "1.203.0",
"project_urls": {
"Homepage": "https://github.com/aws/aws-cdk",
"Source": "https://github.com/aws/aws-cdk.git"
},
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "223127a82b05566f81009391d5cd8ac128ab3f624e248743368da6d72bb7630c",
"md5": "13d4c3acdd26238a37382691a6fa017b",
"sha256": "19d56a445fc1232e950c73d2301d8363fc448f8003ca0d8741b1aafbc3111778"
},
"downloads": -1,
"filename": "aws_cdk.aws_eks-1.203.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "13d4c3acdd26238a37382691a6fa017b",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "~=3.7",
"size": 674404,
"upload_time": "2023-05-31T22:54:40",
"upload_time_iso_8601": "2023-05-31T22:54:40.999443Z",
"url": "https://files.pythonhosted.org/packages/22/31/27a82b05566f81009391d5cd8ac128ab3f624e248743368da6d72bb7630c/aws_cdk.aws_eks-1.203.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "8e6bead20ba7f772b4d9cb6fac8a018c69b8c21888e933e06af9c2a41af6021b",
"md5": "a4c9d74e59cb0138e6c9a4efc9b08b40",
"sha256": "629c786ea9efc5a42da6dd3524c0299f1ee66d858bad9564306d817554447700"
},
"downloads": -1,
"filename": "aws-cdk.aws-eks-1.203.0.tar.gz",
"has_sig": false,
"md5_digest": "a4c9d74e59cb0138e6c9a4efc9b08b40",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "~=3.7",
"size": 705669,
"upload_time": "2023-05-31T23:02:13",
"upload_time_iso_8601": "2023-05-31T23:02:13.259326Z",
"url": "https://files.pythonhosted.org/packages/8e/6b/ead20ba7f772b4d9cb6fac8a018c69b8c21888e933e06af9c2a41af6021b/aws-cdk.aws-eks-1.203.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-05-31 23:02:13",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "aws",
"github_project": "aws-cdk",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "aws-cdk.aws-eks"
}