aws-cdk.aws-eks-legacy

Name: aws-cdk.aws-eks-legacy
Version: 1.203.0
Home page: https://github.com/aws/aws-cdk
Summary: The CDK Construct Library for AWS::EKS (Legacy)
Upload time: 2023-05-31 23:02:15
Author: Amazon Web Services
Requires Python: ~=3.7
License: Apache-2.0
# Amazon EKS Construct Library

<!--BEGIN STABILITY BANNER-->---


![Deprecated](https://img.shields.io/badge/deprecated-critical.svg?style=for-the-badge)

> This API may emit warnings. Backward compatibility is not guaranteed.

---
<!--END STABILITY BANNER-->

**This module is available for backwards compatibility purposes only ([details](https://github.com/aws/aws-cdk/pull/5540)). It will
no longer be released with the CDK starting March 1st, 2020. See
[issue #5544](https://github.com/aws/aws-cdk/issues/5544) for upgrade instructions.**

---


This construct library allows you to define [Amazon Elastic Container Service
for Kubernetes (EKS)](https://aws.amazon.com/eks/) clusters programmatically.
This library also supports programmatically defining Kubernetes resource
manifests within EKS clusters.

This example defines an Amazon EKS cluster with the following configuration:

* 2x **m5.large** instances (this instance type suits most common use-cases, and is good value for money)
* Dedicated VPC with default configuration (see [ec2.Vpc](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-ec2-readme.html#vpc))
* A Kubernetes pod with a container based on the [paulbouwer/hello-kubernetes](https://github.com/paulbouwer/hello-kubernetes) image.

```python
cluster = eks.Cluster(self, "hello-eks")

cluster.add_resource("mypod", {
    "api_version": "v1",
    "kind": "Pod",
    "metadata": {"name": "mypod"},
    "spec": {
        "containers": [{
            "name": "hello",
            "image": "paulbouwer/hello-kubernetes:1.5",
            "ports": [{"container_port": 8080}]
        }
        ]
    }
})
```

Here is a [complete sample](https://github.com/aws/aws-cdk/blob/master/packages/@aws-cdk/aws-eks-legacy/test/integ.eks-kubectl.lit.ts).

### Capacity

By default, `eks.Cluster` is created with two `m5.large` instances.

```python
eks.Cluster(self, "cluster-two-m5-large")
```

The quantity and instance type for the default capacity can be specified through
the `defaultCapacity` and `defaultCapacityInstance` props:

```python
eks.Cluster(self, "cluster",
    default_capacity=10,
    default_capacity_instance=ec2.InstanceType("m2.xlarge")
)
```

To disable the default capacity, simply set `defaultCapacity` to `0`:

```python
eks.Cluster(self, "cluster-with-no-capacity", default_capacity=0)
```

The `cluster.defaultCapacity` property will reference the `AutoScalingGroup`
resource for the default capacity. It will be `undefined` if `defaultCapacity`
is set to `0`:

```python
cluster = eks.Cluster(self, "my-cluster")
cluster.default_capacity.scale_on_cpu_utilization("up",
    target_utilization_percent=80
)
```
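
Because `defaultCapacity` can be `0`, code that configures the default capacity may want to
guard against the property being undefined. A minimal sketch (the `None` check is plain
Python, not a library feature):

```python
# cluster: eks.Cluster

# `default_capacity` is None when the cluster was created with `default_capacity=0`
if cluster.default_capacity is not None:
    cluster.default_capacity.scale_on_cpu_utilization("up",
        target_utilization_percent=80
    )
```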

You can add customized capacity through `cluster.addCapacity()` or
`cluster.addAutoScalingGroup()`:

```python
# cluster: eks.Cluster

cluster.add_capacity("frontend-nodes",
    instance_type=ec2.InstanceType("t2.medium"),
    desired_capacity=3,
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC)
)
```

### Spot Capacity

If `spotPrice` is specified, the capacity will be provisioned as EC2 Spot Instances:

```python
# cluster: eks.Cluster

cluster.add_capacity("spot",
    spot_price="0.1094",
    instance_type=ec2.InstanceType("t3.large"),
    max_capacity=10
)
```

Spot instance nodes will be labeled with `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`.

The [Spot Termination Handler](https://github.com/awslabs/ec2-spot-labs/tree/master/ec2-spot-eks-solution/spot-termination-handler)
DaemonSet will be installed on these nodes. The termination handler leverages
[EC2 Spot Instance Termination Notices](https://aws.amazon.com/blogs/aws/new-ec2-spot-instance-termination-notices/)
to gracefully stop all pods running on spot nodes that are about to be
terminated.
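
Because spot nodes carry the `lifecycle=Ec2Spot` label, an ordinary Kubernetes node
selector can steer workloads onto them. A minimal sketch using the same manifest style
as the examples above (the pod name and image are illustrative):

```python
# cluster: eks.Cluster

# schedule this pod only onto spot nodes (labeled lifecycle=Ec2Spot by add_capacity)
cluster.add_resource("spot-only-pod", {
    "api_version": "v1",
    "kind": "Pod",
    "metadata": {"name": "spot-only-pod"},
    "spec": {
        "node_selector": {"lifecycle": "Ec2Spot"},
        "containers": [{
            "name": "hello",
            "image": "paulbouwer/hello-kubernetes:1.5"
        }]
    }
})
```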

### Bootstrapping

When adding capacity, you can specify options for
[/etc/eks/bootstrap.sh](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh),
which is responsible for associating the node with the EKS cluster. For example,
you can use `kubeletExtraArgs` to add custom node labels or taints.

```python
# add capacity with custom bootstrap options
# cluster: eks.Cluster

cluster.add_capacity("spot",
    instance_type=ec2.InstanceType("t3.large"),
    desired_capacity=2,
    bootstrap_options=eks.BootstrapOptions(
        kubelet_extra_args="--node-labels foo=bar,goo=far",
        aws_api_retry_attempts=5
    )
)
```

To disable bootstrapping altogether (i.e. to fully customize user-data), set `bootstrapEnabled` to `false` when you add
the capacity.
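
For example, a minimal sketch of adding capacity with bootstrapping disabled so that you
can provide your own user data (the user-data content itself is up to you and only hinted
at here):

```python
# cluster: eks.Cluster

# no /etc/eks/bootstrap.sh call is rendered; the node must be joined to the
# cluster through custom user data instead
asg = cluster.add_capacity("custom-bootstrap-nodes",
    instance_type=ec2.InstanceType("t3.large"),
    desired_capacity=2,
    bootstrap_enabled=False
)
asg.add_user_data("echo 'custom bootstrap commands go here'")
```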

### Masters Role

The Amazon EKS construct library allows you to specify an IAM role that will be
granted `system:masters` privileges on your cluster.

Without specifying a `mastersRole`, you will not be able to interact manually
with the cluster.

The following example defines an IAM role that can be assumed by all users
in the account and shows how to use the `mastersRole` property to map this
role to the Kubernetes `system:masters` group:

```python
# first define the role
cluster_admin = iam.Role(self, "AdminRole",
    assumed_by=iam.AccountRootPrincipal()
)

# now define the cluster and map role to "masters" RBAC group
eks.Cluster(self, "Cluster",
    masters_role=cluster_admin
)
```

When you `cdk deploy` this CDK app, you will notice that an output will be printed
with the `update-kubeconfig` command.

Something like this:

```plaintext
Outputs:
eks-integ-defaults.ClusterConfigCommand43AAE40F = aws eks update-kubeconfig --name cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 --role-arn arn:aws:iam::112233445566:role/eks-integ-defaults-Role1ABCC5F0-1EFK2W5ZJD98Y
```

Copy & paste the "`aws eks update-kubeconfig ...`" command to your shell in
order to connect to your EKS cluster with the "masters" role.

Now, provided the [AWS CLI](https://aws.amazon.com/cli/) is configured to use AWS
credentials for a user that is trusted by the masters role, you should be able
to interact with your cluster through `kubectl` (the example above trusts
all users in the account).

For example:

```console
$ aws eks update-kubeconfig --name cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 --role-arn arn:aws:iam::112233445566:role/eks-integ-defaults-Role1ABCC5F0-1EFK2W5ZJD98Y
Added new context arn:aws:eks:eu-west-2:112233445566:cluster/cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 to /Users/boom/.kube/config

$ kubectl get nodes # list all nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-147-66.eu-west-2.compute.internal    Ready    <none>   21m   v1.13.7-eks-c57ff8
ip-10-0-169-151.eu-west-2.compute.internal   Ready    <none>   21m   v1.13.7-eks-c57ff8

$ kubectl get all -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/aws-node-fpmwv             1/1     Running   0          21m
pod/aws-node-m9htf             1/1     Running   0          21m
pod/coredns-5cb4fb54c7-q222j   1/1     Running   0          23m
pod/coredns-5cb4fb54c7-v9nxx   1/1     Running   0          23m
pod/kube-proxy-d4jrh           1/1     Running   0          21m
pod/kube-proxy-q7hh7           1/1     Running   0          21m

NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   172.20.0.10   <none>        53/UDP,53/TCP   23m

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/aws-node     2         2         2       2            2           <none>          23m
daemonset.apps/kube-proxy   2         2         2       2            2           <none>          23m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           23m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-5cb4fb54c7   2         2         2       23m
```

For your convenience, an AWS CloudFormation output will automatically be
included in your template and will be printed when running `cdk deploy`.

**NOTE**: if the cluster is configured with `kubectlEnabled: false`, it
will be created with the role/user that created the AWS CloudFormation
stack. See [Kubectl Support](#kubectl-support) for details.

### Kubernetes Resources

The `KubernetesResource` construct or `cluster.addResource` method can be used
to apply Kubernetes resource manifests to this cluster.

The following examples will deploy the [paulbouwer/hello-kubernetes](https://github.com/paulbouwer/hello-kubernetes)
service on the cluster:

```python
# cluster: eks.Cluster
app_label = {"app": "hello-kubernetes"}

deployment = {
    "api_version": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "hello-kubernetes"},
    "spec": {
        "replicas": 3,
        "selector": {"match_labels": app_label},
        "template": {
            "metadata": {"labels": app_label},
            "spec": {
                "containers": [{
                    "name": "hello-kubernetes",
                    "image": "paulbouwer/hello-kubernetes:1.5",
                    "ports": [{"container_port": 8080}]
                }
                ]
            }
        }
    }
}

service = {
    "api_version": "v1",
    "kind": "Service",
    "metadata": {"name": "hello-kubernetes"},
    "spec": {
        "type": "LoadBalancer",
        "ports": [{"port": 80, "target_port": 8080}],
        "selector": app_label
    }
}
# option 1: use a construct
eks.KubernetesResource(self, "hello-kub",
    cluster=cluster,
    manifest=[deployment, service]
)

# or, option2: use `addResource`
cluster.add_resource("hello-kub", service, deployment)
```

Kubernetes resources are implemented as CloudFormation resources in the
CDK. This means that if a resource is deleted from your code (or the stack is
deleted), the next `cdk deploy` will issue a `kubectl delete` command and the
Kubernetes resources will be deleted.

### AWS IAM Mapping

As described in the [Amazon EKS User Guide](https://docs.aws.amazon.com/en_us/eks/latest/userguide/add-user-role.html),
you can map AWS IAM users and roles to [Kubernetes Role-based access control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac).

The Amazon EKS construct manages the **aws-auth ConfigMap** Kubernetes resource
on your behalf and exposes an API through `cluster.awsAuth` for mapping
users, roles and accounts.

Furthermore, when auto-scaling capacity is added to the cluster (through
`cluster.addCapacity` or `cluster.addAutoScalingGroup`), the IAM instance role
of the auto-scaling group will be automatically mapped to RBAC so nodes can
connect to the cluster. No manual mapping is required any longer.

> NOTE: `cluster.awsAuth` will throw an error if your cluster is created with `kubectlEnabled: false`.

For example, let's say you want to grant an IAM user administrative privileges
on your cluster:

```python
# cluster: eks.Cluster

admin_user = iam.User(self, "Admin")
cluster.aws_auth.add_user_mapping(admin_user, groups=["system:masters"])
```

A convenience method for mapping a role to the `system:masters` group is also available:

```python
# cluster: eks.Cluster
# role: iam.Role

cluster.aws_auth.add_masters_role(role)
```
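
Roles can also be mapped to arbitrary RBAC groups (and given an in-cluster username) via
`addRoleMapping`. A minimal sketch, assuming the same keyword-style mapping arguments as
`add_user_mapping` above; the group and username values are illustrative:

```python
# cluster: eks.Cluster
# role: iam.Role

cluster.aws_auth.add_role_mapping(role,
    groups=["my-rbac-group"],
    username="deploy-role"
)
```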

### Node SSH Access

If you want to be able to SSH into your worker nodes, you must already
have an SSH key pair in the region you're deploying to and pass its name, and you must
be able to reach the hosts (meaning they must have a public IP and you
must be allowed to connect to them on port 22):

```python
asg = cluster.add_capacity("Nodes",
    instance_type=ec2.InstanceType("t2.medium"),
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC),
    key_name="my-key-name"
)

# Replace with desired IP
asg.connections.allow_from(ec2.Peer.ipv4("1.2.3.4/32"), ec2.Port.tcp(22))
```

If you want to SSH into nodes in a private subnet, you should set up a
bastion host in a public subnet. That setup is recommended, but is
beyond the scope of this documentation; a rough sketch is shown below.
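
A minimal sketch of one such bastion setup using the `aws-ec2` module's `BastionHostLinux`
construct; the names and wiring are illustrative assumptions, not something this library
configures for you:

```python
# cluster: eks.Cluster
# asg: autoscaling.AutoScalingGroup  (e.g. returned by cluster.add_capacity)

# small bastion host in a public subnet of the cluster's VPC
bastion = ec2.BastionHostLinux(self, "Bastion",
    vpc=cluster.vpc,
    subnet_selection=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC)
)

# allow SSH from the bastion host to the worker nodes
asg.connections.allow_from(bastion, ec2.Port.tcp(22))
```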

### kubectl Support

When you create an Amazon EKS cluster, the IAM entity (user or role) that
creates the cluster, such as a
[federated user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html),
is automatically granted `system:masters` permissions
in the cluster's RBAC configuration.

In order to allow programmatically defining **Kubernetes resources** in your AWS
CDK app and provisioning them through AWS CloudFormation, we will need to assume
this "masters" role every time we want to issue `kubectl` operations against your
cluster.

At the moment, the [AWS::EKS::Cluster](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html)
AWS CloudFormation resource does not support this behavior, so in order to
support "programmatic kubectl", such as applying manifests
and mapping IAM roles from within your CDK application, the Amazon EKS
construct library uses a custom resource for provisioning the cluster.
This custom resource is executed with an IAM role that we can then use
to issue `kubectl` commands.

The default behavior of this library is to use this custom resource in order
to retain programmatic control over the cluster. In other words: to allow
you to define Kubernetes resources in your CDK code instead of having to
manage your Kubernetes applications through a separate system.

One of the implications of this design is that, by default, the user who
provisioned the AWS CloudFormation stack (executed `cdk deploy`) will
not have administrative privileges on the EKS cluster.

1. Additional resources will be synthesized into your template (the AWS Lambda
   function, the role and policy).
2. As described in [Interacting with Your Cluster](#interacting-with-your-cluster),
   if you wish to be able to manually interact with your cluster, you will need
   to map an IAM role or user to the `system:masters` group. This can be either
   done by specifying a `mastersRole` when the cluster is defined, calling
   `cluster.awsAuth.addMastersRole` or explicitly mapping an IAM role or IAM user to the
   relevant Kubernetes RBAC groups using `cluster.addRoleMapping` and/or
   `cluster.addUserMapping`.

If you wish to disable the programmatic kubectl behavior and use the standard
AWS::EKS::Cluster resource, you can specify `kubectlEnabled: false` when you define
the cluster:

```python
eks.Cluster(self, "cluster",
    kubectl_enabled=False
)
```

**Take care**: a change in this property will cause the cluster to be destroyed
and a new cluster to be created.

When kubectl is disabled, you should be aware of the following:

1. When you log in to your cluster, you don't need to specify `--role-arn` as
   long as you are using the same user that created the cluster.
2. As described in the Amazon EKS User Guide, you will need to manually
   edit the [aws-auth ConfigMap](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html)
   when you add capacity in order to map the IAM instance role to RBAC to allow nodes to join the cluster.
3. Any `eks.Cluster` APIs that depend on programmatic kubectl support will fail
   with an error: `cluster.addResource`, `cluster.addChart`, `cluster.awsAuth`, `props.mastersRole`.

### Helm Charts

The `HelmChart` construct or `cluster.addChart` method can be used
to add Kubernetes resources to this cluster using Helm.

The following example will install the [NGINX Ingress Controller](https://kubernetes.github.io/ingress-nginx/)
to your cluster using Helm.

```python
# cluster: eks.Cluster

# option 1: use a construct
eks.HelmChart(self, "NginxIngress",
    cluster=cluster,
    chart="nginx-ingress",
    repository="https://helm.nginx.com/stable",
    namespace="kube-system"
)

# or, option2: use `addChart`
cluster.add_chart("NginxIngress",
    chart="nginx-ingress",
    repository="https://helm.nginx.com/stable",
    namespace="kube-system"
)
```

Helm charts will be installed and updated using `helm upgrade --install`.
This means that if a chart is added to CDK with the same release name, it will try to update
the chart in the cluster. The chart will exist as a CloudFormation resource.

Helm charts are implemented as CloudFormation resources in CDK.
This means that if the chart is deleted from your code (or the stack is
deleted), the next `cdk deploy` will issue a `helm uninstall` command and the
Helm chart will be deleted.

When there is no `release` defined, the chart will be installed with a unique name allocated
based on the construct path.
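
If you want a stable release name (so that subsequent deploys upgrade the same Helm
release), you can pass `release` explicitly. A minimal sketch:

```python
# cluster: eks.Cluster

cluster.add_chart("NginxIngress",
    chart="nginx-ingress",
    repository="https://helm.nginx.com/stable",
    namespace="kube-system",
    release="nginx-ingress"
)
```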

### Roadmap

* [ ] AutoScaling (combine EC2 and Kubernetes scaling)



            
