| Name | ecsdep |
| Version | 0.2.14 |
| Summary | AWS ECS Deployment Tool With Terraform |
| Home page | https://gitlab.com/skitai/ecsdep |
| Author | Hans Roh |
| Maintainer | None |
| License | MIT |
| Requires Python | None |
| Upload time | 2024-05-11 07:43:45 |
## Introduction
ECS deployment using `docker compose` and `terraform`.
You only need to maintain a `yml` file for `docker compose`, then run:
```shell
ecsdep cluster create
ecsdep service up
```
That's all.
## Prerequisites
### Gitlab Repository Read Credential
Create a Gitlab access token in `Project Settings > Access Tokens`,
and check the `read/write registry` grants.
With this token, create a secret in AWS Secrets Manager:
https://console.aws.amazon.com/secretsmanager/
```json
{
  "username": "<gitlab username>",
  "password": "<access token>"
}
```
Then save the secret ARN.
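The secret can also be prepared from the shell. The file name and secret name below are illustrative, and the `aws` call is shown commented out because it needs configured AWS credentials:

```shell
# Build the registry-credentials JSON locally (placeholder values):
cat > gitlab-creds.json <<'EOF'
{
  "username": "<gitlab username>",
  "password": "<access token>"
}
EOF

# Then store it in Secrets Manager (the secret name is only an example):
#   aws secretsmanager create-secret \
#       --name gitlab/registry/mysecret \
#       --secret-string file://gitlab-creds.json
cat gitlab-creds.json
```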
### Issuing a Domain Certificate
An AWS certificate for your domain is needed to link the load balancer.
### S3 Bucket For Terraform State Backend
Create a bucket named like `terraform.my-company.com`.
### SSH Key Pairs
To generate an SSH key pair:
```shell
ssh-keygen -t rsa -b 2048 -C "email@example.com" -f ./mykeypair
```
This creates two files: `mykeypair` is the private key and `mykeypair.pub` is the public key.
Keep the private key file safe; you will need it to access your EC2 instances.
### AWS Access Key For ECS Deploy
Create a policy named like `ProgramaticECSDeploy`.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "acm:*",
        "application-autoscaling:*",
        "autoscaling:*",
        "cloudformation:*",
        "cognito-identity:*",
        "ec2:*",
        "ecs:*",
        "elasticloadbalancing:*",
        "iam:*",
        "kms:DescribeKey",
        "kms:ListAliases",
        "kms:ListKeys",
        "logs:*",
        "route53:*",
        "s3:*",
        "secretsmanager:*",
        "servicediscovery:*",
        "tag:GetResources"
      ],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    },
    {
      "Action": "iam:PassRole",
      "Effect": "Allow",
      "Resource": ["*"],
      "Condition": {
        "StringLike": {
          "iam:PassedToService": "ecs-tasks.amazonaws.com"
        }
      }
    },
    {
      "Action": "iam:PassRole",
      "Effect": "Allow",
      "Resource": ["arn:aws:iam::*:role/ecsInstanceRole*"],
      "Condition": {
        "StringLike": {
          "iam:PassedToService": [
            "ec2.amazonaws.com",
            "ec2.amazonaws.com.cn"
          ]
        }
      }
    },
    {
      "Action": "iam:PassRole",
      "Effect": "Allow",
      "Resource": ["arn:aws:iam::*:role/ecsAutoscaleRole*"],
      "Condition": {
        "StringLike": {
          "iam:PassedToService": [
            "application-autoscaling.amazonaws.com",
            "application-autoscaling.amazonaws.com.cn"
          ]
        }
      }
    }
  ]
}
```
Create a user `ecsdep`, attach this policy, and save the access key.
## Starting Deploy Docker
### Local Deploy
### Using Host System
Your system must have `terraform`, `docker`, and `docker-compose` installed.
```shell
pip3 install -U ecsdep
```
### Using Deploy Docker
The image contains `terraform`, `awscli` and `ecsdep`.
```shell
docker run -d --privileged \
--name ecsdep \
-v /path/to/myproject:/app \
-v ${HOME}/.aws:${HOME}/.aws \
hansroh/ecsdep:24-dind
docker exec -it ecsdep bash
```
Inside the container,
```shell
pip3 install -U ecsdep
```
### Gitlab CI/CD Deploy
Add these lines into `.gitlab-ci.yml`:
```yml
image: hansroh/ecsdep:24

services:
  - name: docker:dind
    alias: dind-service

before_script:
  - pip3 install -U ecsdep
```
## Setup Deploy Environment
### Docker Login
```shell
docker login -u <username> -p <personal access token> registry.gitlab.com
```
### Configuring AWS Access Key
First of all, configure your AWS access key:
```shell
aws configure set aws_access_key_id <value>
aws configure set aws_secret_access_key <value>
aws configure set region ap-northeast-2
```
## Creating ECS Cluster
Create a file named `docker-compose.yml` or `docker.ecs.yml`.
### Terraform Setting
```yaml
x-terraform:
  provider: aws
  region: ap-northeast-2
  template-version: 1.1
  state-backend:
    region: "ap-northeast-2"
    bucket: "states-data"
    key-prefix: "terraform/ecs-cluster"
```
Make sure you have created the S3 bucket first.
### Cluster Settings
```yaml
x-ecs-cluster:
  name: my-cluster
  public-key-file: "mykeypair.pub"
  instance-type: t3.medium
  ami: amzn2-ami-ecs-hvm-*-x86_64-*
  autoscaling:
    desired: 2
    min: 2
    max: 10
    cpu: 60
    memory: 60
    target-capacity: 0
  loadbalancer:
    cert-name: mydomain.com
  vpc:
    cidr_block: 10.0.0.0/16
```
It creates resources like:
- Load Balancer
- VPC with subnets: 10.0.10.0/24, 10.0.20.0/24 and 10.0.30.0/24
- Cluster Instance Auto Scaling Launch Configuration
- Cluster Auto Scaling Group
- Cluster
#### Cluster-Level Auto Scaling
- `autoscaling.cpu`: when the CPU reservations of your containers reach 60% of
the total CPU units of all cluster instances, ECS instances scale out.
If 0, nothing happens.
- `autoscaling.memory`: the same, but for memory.
- `autoscaling.target-capacity`: tries to keep spare instances, as a percentage of `desired`.
  - 100: the auto scaling group does not need to scale in or out
  - under 100: keep at least one instance that is not running a non-daemon task
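As a worked example of the `autoscaling.cpu` threshold, assuming the cluster settings above (2 x t3.medium, and a t3.medium has 2 vCPU, i.e. 2048 CPU units):

```shell
instances=2
units_per_instance=2048     # t3.medium: 2 vCPU x 1024 units
threshold_pct=60            # autoscaling.cpu: 60

cluster_units=$(( instances * units_per_instance ))
trigger_units=$(( cluster_units * threshold_pct / 100 ))
echo "scale out once reservations exceed ${trigger_units} of ${cluster_units} units"
```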
#### Instance Type
If you specify `x-ecs-gpus`, the `instance-type` must have a GPU.
#### VPC
If you plan on VPC peering, choose `cidr_block` carefully.
If you would like to use the default VPC, just remove the `vpc` key.
For VPC peering:
```yml
x-ecs-cluster:
  vpc:
    peering_vpc_ids:
      - default  # means the default VPC
      - vpc-e2ecb79ef6da46
```
Note that this works only with VPCs you own.
#### Container AWS Resource Access Policies
```yml
x-ecs-cluster:
  task-iam-policies:
    - arn:aws:iam::aws:policy/AmazonS3FullAccess
```
#### Creating/Destroying ECS Cluster
Finally, you can create the cluster.
`ecsdep` looks for a default yaml file named `docker-compose.yml` or `compose.ecs.yml`.
```shell
ecsdep cluster create
ecsdep cluster destroy
```
Specifying a file explicitly is also possible.
```shell
ecsdep -f /path/to/docker-compose.yml cluster create
ecsdep -f /path/to/docker-compose.yml cluster destroy
```
#### Creating S3 Bucket and Cognito Pool
```yml
x-ecs-cluster:
  s3-cors-hosts:
    - myservice.com
```
## Configuring Containers
This example launches one container named `skitai-app`.
```yaml
version: '3.3'

services:
  skitai-app:
    image: registry.gitlab.com/skitai/ecsdep
    container_name: skitai-app
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - 5000:5000
```
Make sure the service name (`skitai-app`) is the same as `container_name`.
Then test the image build and container:
```shell
docker-compose build
docker-compose up -d
docker-compose down
```
### Adding ECS Related Settings
#### Specify Deploy Containers
Add the `services.skitai-app.deploy` key; otherwise this container will not be included in the ECS service.
```yaml
services:
  skitai-app:
    image: registry.gitlab.com/skitai/ecsdep
    deploy:
```
#### Docker Container Registry Pull Credential
```yaml
services:
  skitai-app:
    image: registry.gitlab.com/skitai/ecsdep
    x-ecs-pull-credentials: arn:aws:secretsmanager:ap-northeast-2:000000000:secret:gitlab/registry/mysecret-PrENMF
```
See the `Gitlab Repository Read Credential` section above.
#### Logging (Optional)
To integrate with a CloudWatch log group:
```yaml
services:
  skitai-app:
    logging:
      x-ecs-driver: awslogs
```
#### Container Health Checking (Optional)
```yaml
    healthcheck:
      test:
        - "CMD-SHELL"
        - "wget -O/dev/null -q http://localhost:5000/ping || exit 1"
```
#### Container-Level Resource Requirements
For a minimum memory reservation (the soft memory limit):
```yaml
services:
  skitai-app:
    deploy:
      resources:
        reservations:
          memory: "256M"
```
For a hard memory limit:
```yaml
    deploy:
      resources:
        reservations:
          memory: "256M"
        limits:
          memory: "320M"
```
The hard limit must be greater than or equal to the reservation value.
For a minimum CPU units reservation:
```yaml
    deploy:
      resources:
        reservations:
          cpus: "1024"
```
1024 units mean 1 vCPU. ECS will only place the container where these reservation requirements can be fulfilled.
The value of `cpus` must be a string.
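Since `cpus` is expressed in ECS CPU units, converting from vCPUs is simple arithmetic:

```shell
# 1 vCPU = 1024 ECS CPU units
vcpus=2
cpu_units=$(( vcpus * 1024 ))
echo "cpus: \"${cpu_units}\""   # the yaml value must be quoted as a string
```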
Perhaps you are using a GPU:
```yaml
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```
But `ecsdep` ignores the statement above. Just add `x-ecs-gpus`:
```yaml
    deploy:
      resources:
        reservations:
          x-ecs-gpus: 1
          devices: ...
```
Now test the docker build again:
```shell
docker-compose build
```
### Configuring ECS Task Definition
```yaml
x-ecs-service:
  name: ecsdep
  stages:
    default:
      env-service-stage: "qa"
      hosts: ["qa.myservice.com"]
      listener-priority: 100
    production:
      env-service-stage: "production"
      hosts: ["myservice.com"]
      listener-priority: 101
      autoscaling:
        min: 3
        max: 7
  loadbalancing:
    pathes:
      - /*
    protocol: http
    healthcheck:
      path: "/ping"
      matcher: "200,301,302,404"
  deploy:
    compatibilities:
      - ec2
    resources:
      limits:
        memory: 256M
        cpus: "1024"
    autoscaling:
      desired: 1
      min: 1
      max: 4
      cpu: 100
      memory: 80
    strategy:
      minimum_healthy_percent: 50
      maximum_percent: 150
```
#### Staging
`stages` lets you define deploy stages such as `production`, `qa` or `staging`.
`ecsdep` reads the environment variable `SERVICE_STAGE`:
```shell
export SERVICE_STAGE=qa
```
The current deploy stage is selected by matching the `SERVICE_STAGE` value against `env-service-stage`.
If `SERVICE_STAGE` is `qa`, your container is routed to `qa.myservice.com` by the load balancer.
#### Deployment Strategy
- `strategy.maximum_percent`: The upper limit (as a percentage of the service's desiredCount) of the number of running tasks that can be running in a service during a deployment
- `strategy.minimum_healthy_percent`: The lower limit (as a percentage of the service's desiredCount) of the number of running tasks that must remain running and healthy in a service during a deployment
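For instance, with a desired count of 4 (an illustrative value) and the strategy percentages from the example above, the task-count bounds during a deployment work out as:

```shell
desired=4
min_healthy_pct=50    # minimum_healthy_percent
max_pct=150           # maximum_percent

min_running=$(( desired * min_healthy_pct / 100 ))   # tasks that must stay healthy
max_running=$(( desired * max_pct / 100 ))           # upper bound during rollout
echo "between ${min_running} and ${max_running} running tasks during deployment"
```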
#### Auto Scaling
Default service-level auto scaling settings are placed in `x-ecs-service.deploy.autoscaling`,
but each stage can override these values via `x-ecs-service.stages.[stage name].autoscaling`.
#### Load Balancing (Optional)
Your container is routed for the paths in `loadbalancing.pathes` by the load balancer.
##### Using Fargate
If you want a Fargate deploy, set `x-ecs-service.deploy.compatibilities` to `- fargate`.
#### Resource Limiting
Resource limits are required for `fargate` launching but optional for `ec2` launching.
`x-ecs-service.deploy.autoscaling.cpu` and `x-ecs-service.deploy.autoscaling.memory`
are both percentages of the reserved CPU units or memory megabytes.
If `x-ecs-service.deploy.resources.limits.cpus` is defined, the service cannot use CPU units
over this value.
If `x-ecs-service.deploy.resources.limits.memory` is defined and your container exceeds
this value, the container will be terminated.
#### Deploying Service
You need the environment variable `CI_COMMIT_SHA`; its first 8 characters from the git commit hash are used as the image tag. It is provided automatically on a Gitlab runner, but for local testing `latest` is fine.
```shell
export CI_COMMIT_SHA=latest
export SERVICE_STAGE=qa
ecsdep -f dep/compose.ecs.yml service up
```
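The 8-character tag derivation can be reproduced in plain shell (the hash below is made up for illustration):

```shell
# A Gitlab runner provides the full commit hash; here a fake value:
CI_COMMIT_SHA=0123456789abcdef0123456789abcdef01234567
IMAGE_TAG=$(printf '%s' "$CI_COMMIT_SHA" | cut -c1-8)   # first 8 characters
echo "$IMAGE_TAG"
```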
Whenever you run `ecsdep service up`, your containers are deployed to ECS as a rolling update.
As a result, these AWS resources are created or updated:
- Task Definition
- Service (updated and run)
*Note*: Sometimes service-level auto-scaling settings are not applied at the initial deploy.
The cause is unknown; in this case, simply deploy twice.
#### Shutdown/Remove Service
```shell
ecsdep service down
```
### Deploying Other Services Into Cluster
It is recommended to keep cluster settings in your main app only.
In other services' yml files, keep only `x-ecs-cluster.name` and remove the other `x-ecs-cluster` settings.
You then only need to care about the `x-ecs-service` and `services` definitions.
```yaml
services:
  skitai-app-2:
    ...

x-terraform:
  ...

x-ecs-cluster:
  name: my-cluster
```
### Deploying Service With Multiple Containers
```yaml
services:
  skitai-app:
    deploy:
    ports:
      - 5000
    healthcheck:
      test:
        - "CMD-SHELL"
        - "wget -O/dev/null -q http://localhost:5000 || exit 1"

  skitai-nginx:
    depends_on:
      - skitai-app
    x-ecs-wait-conditions:
      - HEALTHY
    ports:
      - "80:80"
```
Make sure only a single load-balanced container has a host port mapping like `"80:80"`;
the others should use only a docker-internal port like `"5000"`.
### Deploying a Non-Web Service
Remove the ports and load balancing settings:
- `services.your-app.ports`
- `x-ecs-service.loadbalancing`
- `x-ecs-service.stages.default.hosts`
- `x-ecs-service.stages.default.listener-priority`
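Putting those removals together, a minimal non-web (worker) service could look like this sketch; the service name, image, and resource values are illustrative:

```yaml
services:
  my-worker:
    image: registry.gitlab.com/skitai/ecsdep
    deploy:
      resources:
        reservations:
          memory: "256M"

x-ecs-cluster:
  name: my-cluster

x-ecs-service:
  name: my-worker
  stages:
    default:
      env-service-stage: "qa"
  deploy:
    compatibilities:
      - ec2
    autoscaling:
      desired: 1
      min: 1
      max: 2
      cpu: 100
      memory: 80
```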
### Using Secrets
```yaml
version: '3.3'

services:
  skitai-app:
    environment:
      - DB_PASSWORD=$DB_PASSWORD

secrets:
  DB_PASSWORD:
    name: "arn:aws:secretsmanager:ap-northeast-1:0000000000:secret:gitlab/registry/hansroh-PrENMF:DBPASSWORD::"
    external: true
```
At ECS deploy time, the environment variable `DB_PASSWORD` will be overwritten by the ECS service
with the value referenced by `secrets.DB_PASSWORD.name`.