# AWS Batch Construct Library
<!--BEGIN STABILITY BANNER-->---
![cfn-resources: Stable](https://img.shields.io/badge/cfn--resources-stable-success.svg?style=for-the-badge)
> All classes with the `Cfn` prefix in this module ([CFN Resources](https://docs.aws.amazon.com/cdk/latest/guide/constructs.html#constructs_lib)) are always stable and safe to use.
![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge)
> The APIs of higher level constructs in this module are experimental and under active development.
> They are subject to non-backward compatible changes or removal in any future version. These are
> not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be
> announced in the release notes. This means that while you may use them, you may need to update
> your source code when upgrading to a newer version of this package.
---
<!--END STABILITY BANNER-->
This module is part of the [AWS Cloud Development Kit](https://github.com/aws/aws-cdk) project.
AWS Batch is a batch processing tool for efficiently running hundreds of thousands of computing jobs in AWS. Batch can dynamically provision different types of compute resources based on the resource requirements of submitted jobs.
AWS Batch simplifies the planning, scheduling, and execution of your batch workloads across a full range of compute services like [Amazon EC2](https://aws.amazon.com/ec2/) and [Spot Resources](https://aws.amazon.com/ec2/spot/).
Batch achieves this by utilizing queue processing of batch job requests. To successfully submit a job for execution, you need the following resources:
1. [Job Definition](#job-definition) - *Group various job properties (container image, resource requirements, env variables...) into a single definition. These definitions are used at job submission time.*
2. [Compute Environment](#compute-environment) - *the execution runtime of submitted batch jobs*
3. [Job Queue](#job-queue) - *the queue where batch jobs can be submitted to via AWS SDK/CLI*
For more information on **AWS Batch** visit the [AWS Docs for Batch](https://docs.aws.amazon.com/batch/index.html).
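Once these resources are deployed, jobs are submitted outside of the CDK, for example via the AWS SDK. Below is a minimal sketch using the AWS SDK for Python (boto3); the queue and definition names are placeholders standing in for resources created with the constructs described below:
```python
import boto3

# Placeholder names: substitute the names of your deployed job queue and job definition
batch_client = boto3.client("batch")
batch_client.submit_job(
    jobName="my-first-job",
    jobQueue="my-job-queue",
    jobDefinition="my-job-definition"
)
```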
## Compute Environment
At the core of AWS Batch is the compute environment. All batch jobs are processed within a compute environment, which uses resources like On-Demand/Spot EC2 instances or Fargate.
In **MANAGED** mode, AWS will handle the provisioning of compute resources to accommodate the demand. Otherwise, in **UNMANAGED** mode, you will need to manage the provisioning of those resources.
Below is an example of each available type of compute environment:
```python
# vpc: ec2.Vpc

# default is managed
aws_managed_environment = batch.ComputeEnvironment(self, "AWS-Managed-Compute-Env",
    compute_resources=batch.ComputeResources(
        vpc=vpc
    )
)

customer_managed_environment = batch.ComputeEnvironment(self, "Customer-Managed-Compute-Env",
    managed=False
)
```
### Spot-Based Compute Environment
It is possible to have AWS Batch submit Spot Fleet requests for obtaining compute resources. Below is an example of how this can be done:
```python
vpc = ec2.Vpc(self, "VPC")

spot_environment = batch.ComputeEnvironment(self, "MySpotEnvironment",
    compute_resources=batch.ComputeResources(
        type=batch.ComputeResourceType.SPOT,
        bid_percentage=75,  # Bids for resources at 75% of the on-demand price
        vpc=vpc
    )
)
```
### Fargate Compute Environment
It is possible to have AWS Batch submit jobs to be run on Fargate compute resources. Below is an example of how this can be done:
```python
vpc = ec2.Vpc(self, "VPC")

fargate_spot_environment = batch.ComputeEnvironment(self, "MyFargateEnvironment",
    compute_resources=batch.ComputeResources(
        type=batch.ComputeResourceType.FARGATE_SPOT,
        vpc=vpc
    )
)
```
### Understanding Progressive Allocation Strategies
AWS Batch uses an [allocation strategy](https://docs.aws.amazon.com/batch/latest/userguide/allocation-strategies.html) to determine what compute resource will efficiently handle incoming job requests. By default, **BEST_FIT** will pick an available compute instance based on vCPU requirements. If none exist, the job will wait until resources become available. However, with this strategy, you may have jobs waiting in the queue unnecessarily despite having more powerful instances available. Below is an example of how that situation might look:
```plaintext
Compute Environment:
1. m5.xlarge => 4 vCPU
2. m5.2xlarge => 8 vCPU
```
```plaintext
Job Queue:
---------
| A | B |
---------
Job Requirements:
A => 4 vCPU - ALLOCATED TO m5.xlarge
B => 2 vCPU - WAITING
```
In this situation, Batch will allocate **Job A** to compute resource #1 because it is the most cost-efficient resource that matches the vCPU requirement. However, with this `BEST_FIT` strategy, **Job B** will not be allocated to our other available compute resource even though it is powerful enough to handle it. Instead, it will wait until the first job finishes processing, or for a similar `m5.xlarge` resource to be provisioned.
The alternative is to use the `BEST_FIT_PROGRESSIVE` strategy, which allows the remaining job to be handled by larger instances regardless of vCPU requirement and cost.
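A minimal sketch of opting into this strategy, assuming the experimental `allocation_strategy` property on `ComputeResources`:
```python
# vpc: ec2.Vpc

progressive_environment = batch.ComputeEnvironment(self, "MyProgressiveEnvironment",
    compute_resources=batch.ComputeResources(
        # Assumption: the AllocationStrategy enum exposes BEST_FIT_PROGRESSIVE
        allocation_strategy=batch.AllocationStrategy.BEST_FIT_PROGRESSIVE,
        vpc=vpc
    )
)
```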
### Launch template support
Simply define your Launch Template:
```text
// This example is only available in TypeScript
const myLaunchTemplate = new ec2.CfnLaunchTemplate(this, 'LaunchTemplate', {
  launchTemplateName: 'extra-storage-template',
  launchTemplateData: {
    blockDeviceMappings: [
      {
        deviceName: '/dev/xvdcz',
        ebs: {
          encrypted: true,
          volumeSize: 100,
          volumeType: 'gp2',
        },
      },
    ],
  },
});
```
and use it:
```python
# vpc: ec2.Vpc
# my_launch_template: ec2.CfnLaunchTemplate

my_compute_env = batch.ComputeEnvironment(self, "ComputeEnv",
    compute_resources=batch.ComputeResources(
        launch_template=batch.LaunchTemplateSpecification(
            launch_template_name=my_launch_template.launch_template_name
        ),
        vpc=vpc
    ),
    compute_environment_name="MyStorageCapableComputeEnvironment"
)
```
### Importing an existing Compute Environment
To import an existing batch compute environment, call `ComputeEnvironment.fromComputeEnvironmentArn()`.
Below is an example:
```python
compute_env = batch.ComputeEnvironment.from_compute_environment_arn(self, "imported-compute-env", "arn:aws:batch:us-east-1:555555555555:compute-environment/My-Compute-Env")
```
### Change the baseline AMI of the compute resources
Occasionally, you will need to deviate from the default processing AMI.
ECS Optimized Amazon Linux 2 example:
```python
# vpc: ec2.Vpc

my_compute_env = batch.ComputeEnvironment(self, "ComputeEnv",
    compute_resources=batch.ComputeResources(
        image=ecs.EcsOptimizedAmi(
            generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2
        ),
        vpc=vpc
    )
)
```
Custom AMI example:
```python
# vpc: ec2.Vpc

my_compute_env = batch.ComputeEnvironment(self, "ComputeEnv",
    compute_resources=batch.ComputeResources(
        image=ec2.MachineImage.generic_linux({
            "[aws-region]": "[ami-ID]"
        }),
        vpc=vpc
    )
)
```
## Job Queue
Jobs are always submitted to a specific queue. This means that you have to create a queue before you can start submitting jobs. Each queue is mapped to at least one (and no more than three) compute environments. When a job is scheduled for execution, AWS Batch will select a compute environment based on ordinal priority and the available capacity in each environment.
```python
# compute_environment: batch.ComputeEnvironment

job_queue = batch.JobQueue(self, "JobQueue",
    compute_environments=[batch.JobQueueComputeEnvironment(
        # Defines a collection of compute resources to handle assigned batch jobs
        compute_environment=compute_environment,
        # Order determines the allocation order for jobs (i.e. lower means higher preference for job assignment)
        order=1
    )]
)
```
### Priority-Based Queue Example
Sometimes you might have jobs that are more important than others and, when submitted, should take precedence over existing jobs. To achieve this, you can create a priority-based execution strategy by assigning each queue its own priority:
```python
# shared_compute_envs: batch.ComputeEnvironment

high_prio_queue = batch.JobQueue(self, "HighPrioJobQueue",
    compute_environments=[batch.JobQueueComputeEnvironment(
        compute_environment=shared_compute_envs,
        order=1
    )],
    priority=2
)

low_prio_queue = batch.JobQueue(self, "LowPrioJobQueue",
    compute_environments=[batch.JobQueueComputeEnvironment(
        compute_environment=shared_compute_envs,
        order=1
    )],
    priority=1
)
```
By sharing the same compute environments between both job queues, we give precedence to the `highPrioQueue` when assigning jobs to the available compute environments.
### Importing an existing Job Queue
To import an existing batch job queue, call `JobQueue.fromJobQueueArn()`.
Below is an example:
```python
job_queue = batch.JobQueue.from_job_queue_arn(self, "imported-job-queue", "arn:aws:batch:us-east-1:555555555555:job-queue/High-Prio-Queue")
```
## Job Definition
A Batch Job definition helps AWS Batch understand important details about how to run your application in the scope of a Batch Job. This involves key information like resource requirements, what containers to run, how the compute environment should be prepared, and more. Below is a simple example of how to create a job definition:
```python
import aws_cdk.aws_ecr as ecr

repo = ecr.Repository.from_repository_name(self, "batch-job-repo", "todo-list")

batch.JobDefinition(self, "batch-job-def-from-ecr",
    container=batch.JobDefinitionContainer(
        image=ecs.EcrImage(repo, "latest")
    )
)
```
### Using a local Docker project
Below is an example of how you can create a Batch Job Definition from a local Docker application.
```python
batch.JobDefinition(self, "batch-job-def-from-local",
    container=batch.JobDefinitionContainer(
        # todo-list is a directory containing a Dockerfile to build the application
        image=ecs.ContainerImage.from_asset("../todo-list")
    )
)
```
### Providing custom log configuration
You can provide a custom log driver and its configuration for the container.
```python
import aws_cdk.aws_ssm as ssm

batch.JobDefinition(self, "job-def",
    container=batch.JobDefinitionContainer(
        image=ecs.EcrImage.from_registry("docker/whalesay"),
        log_configuration=batch.LogConfiguration(
            log_driver=batch.LogDriver.AWSLOGS,
            options={"awslogs-region": "us-east-1"},
            secret_options=[
                batch.ExposedSecret.from_parameters_store("xyz", ssm.StringParameter.from_string_parameter_name(self, "parameter", "xyz"))
            ]
        )
    )
)
```
### Importing an existing Job Definition
#### From ARN
To import an existing batch job definition from its ARN, call `JobDefinition.fromJobDefinitionArn()`.
Below is an example:
```python
job = batch.JobDefinition.from_job_definition_arn(self, "imported-job-definition", "arn:aws:batch:us-east-1:555555555555:job-definition/my-job-definition")
```
#### From Name
To import an existing batch job definition from its name, call `JobDefinition.fromJobDefinitionName()`.
If the name is specified without a revision, the latest active revision is used.
Below is an example:
```python
# Without revision
job1 = batch.JobDefinition.from_job_definition_name(self, "imported-job-definition", "my-job-definition")
# With revision
job2 = batch.JobDefinition.from_job_definition_name(self, "imported-job-definition", "my-job-definition:3")
```