aws-cdk.aws-redshift-alpha

Name: aws-cdk.aws-redshift-alpha
Version: 2.170.0a0
Home page: https://github.com/aws/aws-cdk
Summary: The CDK Construct Library for AWS::Redshift
Upload time: 2024-11-22 04:42:40
Author: Amazon Web Services
Requires Python: ~=3.8
License: Apache-2.0
# Amazon Redshift Construct Library

<!--BEGIN STABILITY BANNER-->---


![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge)

> The APIs of higher level constructs in this module are experimental and under active development.
> They are subject to non-backward compatible changes or removal in any future version. These are
> not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be
> announced in the release notes. This means that while you may use them, you may need to update
> your source code when upgrading to a newer version of this package.

---
<!--END STABILITY BANNER-->

## Starting a Redshift Cluster Database

To set up a Redshift cluster, define a `Cluster`. It will be launched in a VPC.
You can specify a VPC; otherwise, one will be created. The nodes are always launched in private subnets and are encrypted by default.

```python
import aws_cdk.aws_ec2 as ec2


vpc = ec2.Vpc(self, "Vpc")
cluster = Cluster(self, "Redshift",
    master_user=Login(
        master_username="admin"
    ),
    vpc=vpc
)
```

By default, the master password will be generated and stored in AWS Secrets Manager.
You can specify characters to exclude from generated passwords by setting the `excludeCharacters` property.

```python
import aws_cdk.aws_ec2 as ec2


vpc = ec2.Vpc(self, "Vpc")
cluster = Cluster(self, "Redshift",
    master_user=Login(
        master_username="admin",
        exclude_characters="\"@/\\ '`"
    ),
    vpc=vpc
)
```
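
If a password was generated, the resulting secret is exposed through the cluster's `secret` attribute. A minimal sketch, assuming `cluster` is the cluster defined above:

```python
import aws_cdk as cdk

# The generated admin secret (None if masterPassword was supplied explicitly).
if cluster.secret:
    cdk.CfnOutput(self, "AdminSecretArn", value=cluster.secret.secret_arn)
```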

A default database named `default_db` will be created in the cluster. To change the name of this database, set the `defaultDatabaseName` attribute in the constructor properties.

By default, the cluster will not be publicly accessible.
Depending on your use case, you can make the cluster publicly accessible with the `publiclyAccessible` property.
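
For example, a sketch combining both options (the database name is illustrative):

```python
import aws_cdk.aws_ec2 as ec2

# vpc: ec2.Vpc


cluster = Cluster(self, "Redshift",
    master_user=Login(
        master_username="admin"
    ),
    vpc=vpc,
    default_database_name="analytics",  # replaces the default "default_db"
    publicly_accessible=True  # defaults to False
)
```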

## Adding a logging bucket for database audit logging to S3

Amazon Redshift logs information about connections and user activities in your database. These logs help you to monitor the database for security and troubleshooting purposes, a process called database auditing. To send these logs to an S3 bucket, specify the `loggingProperties` when creating a new cluster.

```python
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_s3 as s3


vpc = ec2.Vpc(self, "Vpc")
bucket = s3.Bucket.from_bucket_name(self, "bucket", "amzn-s3-demo-bucket")

cluster = Cluster(self, "Redshift",
    master_user=Login(
        master_username="admin"
    ),
    vpc=vpc,
    logging_properties=LoggingProperties(
        logging_bucket=bucket,
        logging_key_prefix="prefix"
    )
)
```

## Availability Zone Relocation

By using [relocation in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-recovery.html), you allow Amazon Redshift to move a cluster to another Availability Zone (AZ) without any loss of data or changes to your applications.
This feature can be applied to both new and existing clusters.

To enable this feature, set the `availabilityZoneRelocation` property to `true`.

```python
# Example automatically generated from non-compiling source. May contain errors.
import aws_cdk.aws_ec2 as ec2

# vpc: ec2.IVpc


cluster = Cluster(self, "Redshift",
    master_user=Login(
        master_username="admin"
    ),
    vpc=vpc,
    node_type=NodeType.RA3_XLPLUS,
    availability_zone_relocation=True
)
```

**Note**: The `availabilityZoneRelocation` property is only available for RA3 node types.

## Connecting

To control who can access the cluster, use the `.connections` attribute. Redshift Clusters have
a default port, so you don't need to specify the port:

```python
cluster.connections.allow_default_port_from_any_ipv4("Open to the world")
```
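
To grant narrower access than the whole internet, a sketch using a specific peer (the CIDR range is illustrative):

```python
import aws_cdk.aws_ec2 as ec2

# Allow inbound traffic on the default Redshift port from one CIDR range only.
cluster.connections.allow_default_port_from(
    ec2.Peer.ipv4("10.0.0.0/16"), "Allow from internal network")
```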

The endpoint to access your database cluster will be available as the `.clusterEndpoint` attribute:

```python
cluster.cluster_endpoint.socket_address
```

## Database Resources

This module allows for the creation of non-CloudFormation database resources such as users
and tables. This allows you to manage identities, permissions, and stateful resources
within your Redshift cluster from your CDK application.

Because these resources are not available in CloudFormation, this library leverages
[custom
resources](https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html)
to manage them. In addition to the IAM permissions required to make Redshift service
calls, the execution role for the custom resource handler requires database credentials to
create resources within the cluster.

These database credentials can be supplied explicitly through the `adminUser` properties
of the various database resource constructs. Alternatively, the credentials can be
automatically pulled from the Redshift cluster's default administrator
credentials. However, this option is only available if the password for the credentials
was generated by the CDK application (i.e., no value was provided for [the `masterPassword`
property](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-redshift.Login.html#masterpasswordspan-classapi-icon-api-icon-experimental-titlethis-api-element-is-experimental-it-may-change-without-noticespan)
of
[`Cluster.masterUser`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-redshift.Cluster.html#masteruserspan-classapi-icon-api-icon-experimental-titlethis-api-element-is-experimental-it-may-change-without-noticespan)).
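
For example, a sketch of supplying explicit admin credentials through the `adminUser` property (the secret name is hypothetical):

```python
import aws_cdk.aws_secretsmanager as secretsmanager

# An existing secret holding the cluster's admin credentials (hypothetical name).
admin_secret = secretsmanager.Secret.from_secret_name_v2(self, "AdminSecret", "redshift-admin-credentials")

User(self, "UserWithExplicitAdmin",
    cluster=cluster,
    database_name="databaseName",
    admin_user=admin_secret
)
```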

### Creating Users

Create a user within a Redshift cluster database by instantiating a `User` construct. This
will generate a username and password, store the credentials in an [AWS Secrets Manager
`Secret`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-secretsmanager.Secret.html),
and make a query to the Redshift cluster to create a new database user with the
credentials.

```python
User(self, "User",
    cluster=cluster,
    database_name="databaseName"
)
```

By default, the user credentials are encrypted with your AWS account's default Secrets
Manager encryption key. You can specify the encryption key used for this purpose by
supplying a key in the `encryptionKey` property.

```python
import aws_cdk.aws_kms as kms


encryption_key = kms.Key(self, "Key")
User(self, "User",
    encryption_key=encryption_key,
    cluster=cluster,
    database_name="databaseName"
)
```

By default, a username is automatically generated from the user construct ID and its path
in the construct tree. You can specify a particular username by providing a value for the
`username` property. Usernames must be valid identifiers; see: [Names and
identifiers](https://docs.aws.amazon.com/redshift/latest/dg/r_names.html) in the *Amazon
Redshift Database Developer Guide*.

```python
User(self, "User",
    username="myuser",
    cluster=cluster,
    database_name="databaseName"
)
```

The user password is generated by AWS Secrets Manager using the default configuration
found in
[`secretsmanager.SecretStringGenerator`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-secretsmanager.SecretStringGenerator.html),
except with password length `30` and some SQL-noncompliant characters excluded. The
plaintext for the password will never be present in the CDK application; instead, a
[CloudFormation Dynamic
Reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html)
will be used wherever the password value is required.
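
The generated credentials can be referenced through the construct's `secret` attribute; a minimal sketch:

```python
user = User(self, "User",
    cluster=cluster,
    database_name="databaseName"
)

# The username and password live in this Secrets Manager secret; the password
# is only resolved by CloudFormation at deployment time.
user_secret = user.secret
```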

You can specify characters to exclude from generated passwords by setting the `excludeCharacters` property.

```python
User(self, "User",
    cluster=cluster,
    database_name="databaseName",
    exclude_characters="\"@/\\ '`"
)
```

### Creating Tables

Create a table within a Redshift cluster database by instantiating a `Table`
construct. This will make a query to the Redshift cluster to create a new database table
with the supplied schema.

```python
Table(self, "Table",
    table_columns=[Column(name="col1", data_type="varchar(4)"), Column(name="col2", data_type="float")],
    cluster=cluster,
    database_name="databaseName"
)
```

For versions later than v2.114.1, tables can have their table name changed; for v2.114.1 and earlier this was not possible.
Therefore, changing table names is disabled for v2.114.1 and earlier.

```python
# Example automatically generated from non-compiling source. May contain errors.
Table(self, "Table",
    table_name="oldTableName",  # This value can be change for versions greater than v2.114.1
    table_columns=[Column(name="col1", data_type="varchar(4)"), Column(name="col2", data_type="float")],
    cluster=cluster,
    database_name="databaseName"
)
```

The table can be configured with a `distStyle` attribute and a `distKey` column:

```python
Table(self, "Table",
    table_columns=[Column(name="col1", data_type="varchar(4)", dist_key=True), Column(name="col2", data_type="float")
    ],
    cluster=cluster,
    database_name="databaseName",
    dist_style=TableDistStyle.KEY
)
```

The table can also be configured with a `sortStyle` attribute and `sortKey` columns:

```python
Table(self, "Table",
    table_columns=[Column(name="col1", data_type="varchar(4)", sort_key=True), Column(name="col2", data_type="float", sort_key=True)
    ],
    cluster=cluster,
    database_name="databaseName",
    sort_style=TableSortStyle.COMPOUND
)
```

Tables and their respective columns can be configured to contain comments:

```python
Table(self, "Table",
    table_columns=[Column(name="col1", data_type="varchar(4)", comment="This is a column comment"), Column(name="col2", data_type="float", comment="This is a another column comment")
    ],
    cluster=cluster,
    database_name="databaseName",
    table_comment="This is a table comment"
)
```

Table columns can be configured to use a specific compression encoding:

```python
from aws_cdk.aws_redshift_alpha import ColumnEncoding


Table(self, "Table",
    table_columns=[Column(name="col1", data_type="varchar(4)", encoding=ColumnEncoding.TEXT32K), Column(name="col2", data_type="float", encoding=ColumnEncoding.DELTA32K)
    ],
    cluster=cluster,
    database_name="databaseName"
)
```

Table columns can also contain an `id` attribute, which allows table columns to be renamed.

**NOTE** To use the `id` attribute, you must also enable the `@aws-cdk/aws-redshift:columnId` feature flag.

```python
Table(self, "Table",
    table_columns=[Column(id="col1", name="col1", data_type="varchar(4)"), Column(id="col2", name="col2", data_type="float")
    ],
    cluster=cluster,
    database_name="databaseName"
)
```
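
The flag can be set in `cdk.json`, or equivalently via the app's context; a minimal sketch:

```python
import aws_cdk as cdk

# Equivalent to adding the flag to the "context" section of cdk.json.
app = cdk.App(context={
    "@aws-cdk/aws-redshift:columnId": True
})
```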

Query execution duration is limited to 1 minute by default. You can change this by setting the `timeout` property.

Valid timeout values are between 1 second and 15 minutes.

```python
from aws_cdk import Duration


Table(self, "Table",
    table_columns=[Column(id="col1", name="col1", data_type="varchar(4)"), Column(id="col2", name="col2", data_type="float")
    ],
    cluster=cluster,
    database_name="databaseName",
    timeout=Duration.minutes(15)
)
```

### Granting Privileges

You can give a user privileges to perform certain actions on a table by using the
`Table.grant()` method.

```python
user = User(self, "User",
    cluster=cluster,
    database_name="databaseName"
)
table = Table(self, "Table",
    table_columns=[Column(name="col1", data_type="varchar(4)"), Column(name="col2", data_type="float")],
    cluster=cluster,
    database_name="databaseName"
)

table.grant(user, TableAction.DROP, TableAction.SELECT)
```

Take care when managing privileges via the CDK, as attempting to manage a user's
privileges on the same table in multiple CDK applications could lead to accidentally
overriding these permissions. Consider the following two CDK applications which both refer
to the same user and table. In application 1, the resources are created and the user is
given `INSERT` permissions on the table:

```python
database_name = "databaseName"
username = "myuser"
table_name = "mytable"

user = User(self, "User",
    username=username,
    cluster=cluster,
    database_name=database_name
)
table = Table(self, "Table",
    table_columns=[Column(name="col1", data_type="varchar(4)"), Column(name="col2", data_type="float")],
    cluster=cluster,
    database_name=database_name
)
table.grant(user, TableAction.INSERT)
```

In application 2, the resources are imported and the user is given `INSERT` permissions on
the table:

```python
database_name = "databaseName"
username = "myuser"
table_name = "mytable"

user = User.from_user_attributes(self, "User",
    username=username,
    password=SecretValue.unsafe_plain_text("NOT_FOR_PRODUCTION"),
    cluster=cluster,
    database_name=database_name
)
table = Table.from_table_attributes(self, "Table",
    table_name=table_name,
    table_columns=[Column(name="col1", data_type="varchar(4)"), Column(name="col2", data_type="float")],
    cluster=cluster,
    database_name="databaseName"
)
table.grant(user, TableAction.INSERT)
```

Both applications attempt to grant the user the appropriate privilege on the table by
submitting a `GRANT USER` SQL query to the Redshift cluster. Note that the latter of these
two calls will have no effect since the user has already been granted the privilege.

Now, if application 1 were to remove the call to `grant`, a `REVOKE USER` SQL query would be
submitted to the Redshift cluster. In general, application 1 does not know that
application 2 has also granted this permission and thus cannot decide not to issue the
revocation. This leads to the undesirable state where application 2 still contains the
call to `grant` but the user does not have the specified permission.

Note that this does not occur when duplicate privileges are granted within the same
application, as such privileges are de-duplicated before any SQL query is submitted.

## Rotating credentials

When the master password is generated and stored in AWS Secrets Manager, it can be rotated automatically:

```python
cluster.add_rotation_single_user()
```
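
The rotation interval can also be customized; a sketch rotating every 30 days (the interval is illustrative):

```python
from aws_cdk import Duration

# Rotate the generated master secret every 30 days.
cluster.add_rotation_single_user(automatically_after=Duration.days(30))
```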

A multi-user rotation scheme is also available:

```python
user = User(self, "User",
    cluster=cluster,
    database_name="databaseName"
)
cluster.add_rotation_multi_user("MultiUserRotation",
    secret=user.secret
)
```

## Adding Parameters

You can add a parameter to a parameter group with `ClusterParameterGroup.addParameter()`.

```python
from aws_cdk.aws_redshift_alpha import ClusterParameterGroup


params = ClusterParameterGroup(self, "Params",
    description="desc",
    parameters={
        "require_ssl": "true"
    }
)

params.add_parameter("enable_user_activity_logging", "true")
```

Additionally, you can add a parameter to the cluster's associated parameter group with `Cluster.addToParameterGroup()`. If the cluster does not have an associated parameter group, a new parameter group is created.

```python
import aws_cdk.aws_ec2 as ec2
import aws_cdk as cdk
# vpc: ec2.Vpc


cluster = Cluster(self, "Cluster",
    master_user=Login(
        master_username="admin",
        master_password=cdk.SecretValue.unsafe_plain_text("tooshort")
    ),
    vpc=vpc
)

cluster.add_to_parameter_group("enable_user_activity_logging", "true")
```

## Rebooting for Parameter Updates

In most cases, existing clusters [must be manually rebooted](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-parameter-groups.html) to apply parameter changes. You can automate parameter-related reboots by setting the cluster's `rebootForParameterChanges` property to `true`, or by using `Cluster.enableRebootForParameterChanges()`.

```python
import aws_cdk.aws_ec2 as ec2
import aws_cdk as cdk
# vpc: ec2.Vpc


cluster = Cluster(self, "Cluster",
    master_user=Login(
        master_username="admin",
        master_password=cdk.SecretValue.unsafe_plain_text("tooshort")
    ),
    vpc=vpc
)

cluster.add_to_parameter_group("enable_user_activity_logging", "true")
cluster.enable_reboot_for_parameter_changes()
```

## Elastic IP

If you configure your cluster to be publicly accessible, you can optionally select an *elastic IP address* to use for the external IP address. An elastic IP address is a static IP address that is associated with your AWS account. You can use an elastic IP address to connect to your cluster from outside the VPC. An elastic IP address gives you the ability to change your underlying configuration without affecting the IP address that clients use to connect to your cluster. This approach can be helpful for situations such as recovery after a failure.

```python
import aws_cdk.aws_ec2 as ec2
import aws_cdk as cdk
# vpc: ec2.Vpc


Cluster(self, "Redshift",
    master_user=Login(
        master_username="admin",
        master_password=cdk.SecretValue.unsafe_plain_text("tooshort")
    ),
    vpc=vpc,
    publicly_accessible=True,
    elastic_ip="10.123.123.255"
)
```

If the cluster is in a VPC and you want to connect to it using the private IP address from within the VPC, it is important to enable *DNS resolution* and *DNS hostnames* in the VPC configuration. If these parameters are not set, connections from within the VPC will go to the elastic IP address instead of the private IP address.

```python
import aws_cdk.aws_ec2 as ec2

vpc = ec2.Vpc(self, "VPC",
    enable_dns_support=True,
    enable_dns_hostnames=True
)
```

Note that if an existing, publicly accessible cluster's VPC configuration is changed to use *DNS hostnames* and *DNS resolution*, connections still use the elastic IP address until the cluster is resized.

### Elastic IP vs. Cluster node public IP

The elastic IP address is an external IP address for accessing the cluster outside of a VPC. It's not related to the cluster node public IP addresses and private IP addresses that are accessible via the `clusterEndpoint` property. The public and private cluster node IP addresses appear regardless of whether the cluster is publicly accessible or not. They are used only in certain circumstances to configure ingress rules on the remote host. These circumstances occur when you load data from an Amazon EC2 instance or other remote host using a Secure Shell (SSH) connection.

### Attach Elastic IP after Cluster creation

In some cases, you might want to associate the cluster with an elastic IP address or change an elastic IP address that is associated with the cluster. To attach an elastic IP address after the cluster is created, first update the cluster so that it is not publicly accessible, then make it both publicly accessible and add an elastic IP address in the same operation.
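
For illustration, a sketch of the second step; the first deployment sets `publiclyAccessible` to `false`, and the elastic IP shown is a placeholder:

```python
import aws_cdk.aws_ec2 as ec2

# vpc: ec2.Vpc


# Step 2: after deploying with publicly_accessible=False, re-enable public
# access and attach the elastic IP in the same change.
Cluster(self, "Redshift",
    master_user=Login(
        master_username="admin"
    ),
    vpc=vpc,
    publicly_accessible=True,
    elastic_ip="203.0.113.10"  # placeholder elastic IP address
)
```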

## Enhanced VPC Routing

When you use Amazon Redshift enhanced VPC routing, Amazon Redshift forces all COPY and UNLOAD traffic between your cluster and your data repositories through your virtual private cloud (VPC) based on the Amazon VPC service. By using enhanced VPC routing, you can use standard VPC features, such as VPC security groups, network access control lists (ACLs), VPC endpoints, VPC endpoint policies, internet gateways, and Domain Name System (DNS) servers, as described in the Amazon VPC User Guide. You use these features to tightly manage the flow of data between your Amazon Redshift cluster and other resources. When you use enhanced VPC routing to route traffic through your VPC, you can also use VPC flow logs to monitor COPY and UNLOAD traffic.

```python
import aws_cdk.aws_ec2 as ec2
import aws_cdk as cdk
# vpc: ec2.Vpc


Cluster(self, "Redshift",
    master_user=Login(
        master_username="admin",
        master_password=cdk.SecretValue.unsafe_plain_text("tooshort")
    ),
    vpc=vpc,
    enhanced_vpc_routing=True
)
```

If enhanced VPC routing is not enabled, Amazon Redshift routes traffic through the internet, including traffic to other services within the AWS network.

## Default IAM role

Some Amazon Redshift features require Amazon Redshift to access other AWS services on your behalf. For your Amazon Redshift clusters to act on your behalf, you supply security credentials to your clusters. The preferred method to supply security credentials is to specify an AWS Identity and Access Management (IAM) role.

When you create an IAM role and set it as the default for the cluster using the console, you don't have to provide the IAM role's Amazon Resource Name (ARN) to perform authentication and authorization.

```python
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_iam as iam
# vpc: ec2.Vpc


default_role = iam.Role(self, "DefaultRole",
    assumed_by=iam.ServicePrincipal("redshift.amazonaws.com")
)

Cluster(self, "Redshift",
    master_user=Login(
        master_username="admin"
    ),
    vpc=vpc,
    roles=[default_role],
    default_role=default_role
)
```

A default role can also be added to a cluster using the `addDefaultIamRole` method.

```python
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_iam as iam
# vpc: ec2.Vpc


default_role = iam.Role(self, "DefaultRole",
    assumed_by=iam.ServicePrincipal("redshift.amazonaws.com")
)

redshift_cluster = Cluster(self, "Redshift",
    master_user=Login(
        master_username="admin"
    ),
    vpc=vpc,
    roles=[default_role]
)

redshift_cluster.add_default_iam_role(default_role)
```

## IAM roles

Attaching IAM roles to a Redshift Cluster grants permissions to the Redshift service to perform actions on your behalf.

```python
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_iam as iam
# vpc: ec2.Vpc


role = iam.Role(self, "Role",
    assumed_by=iam.ServicePrincipal("redshift.amazonaws.com")
)
cluster = Cluster(self, "Redshift",
    master_user=Login(
        master_username="admin"
    ),
    vpc=vpc,
    roles=[role]
)
```

Additional IAM roles can be attached to a cluster using the `addIamRole` method.

```python
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_iam as iam
# vpc: ec2.Vpc


role = iam.Role(self, "Role",
    assumed_by=iam.ServicePrincipal("redshift.amazonaws.com")
)
cluster = Cluster(self, "Redshift",
    master_user=Login(
        master_username="admin"
    ),
    vpc=vpc
)
cluster.add_iam_role(role)
```

## Multi-AZ

Amazon Redshift supports [multiple Availability Zones (Multi-AZ) deployments](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-multi-az.html) for provisioned RA3 clusters.
By using Multi-AZ deployments, your Amazon Redshift data warehouse can continue operating in failure scenarios when an unexpected event happens in an Availability Zone.

To create a Multi-AZ cluster, set the `multiAz` property to `true` when creating the cluster.

```python
import aws_cdk.aws_ec2 as ec2

# vpc: ec2.IVpc


cluster = Cluster(self, "Cluster",
    master_user=Login(
        master_username="admin"
    ),
    vpc=vpc,  # 3 AZs are required for Multi-AZ
    node_type=NodeType.RA3_XLPLUS,  # must be an RA3 node type
    cluster_type=ClusterType.MULTI_NODE,  # must be MULTI_NODE
    number_of_nodes=2,  # must be 2 or more
    multi_az=True
)
```

## Resizing

As your data warehousing needs change, it's possible to resize your Redshift cluster. If the cluster was deployed via CDK,
it's important to resize it via CDK so the change is registered in the AWS CloudFormation template.
There are two types of resize operations:

* Elastic resize - Number of nodes and node type can be changed, but not at the same time. Elastic resize is the default behavior,
  as it's a fast operation and typically completes in minutes. Elastic resize is only supported on clusters of the following types:

  * dc1.large (if your cluster is in a VPC)
  * dc1.8xlarge (if your cluster is in a VPC)
  * dc2.large
  * dc2.8xlarge
  * ds2.xlarge
  * ds2.8xlarge
  * ra3.xlplus
  * ra3.4xlarge
  * ra3.16xlarge
* Classic resize - Number of nodes, node type, or both, can be changed. This operation takes longer to complete,
  but is useful when the resize operation doesn't meet the criteria of an elastic resize. If you prefer classic resizing,
  you can set the `classicResizing` flag when creating the cluster.

There are other constraints to be aware of; for example, elastic resizing does not support single-node clusters, and there are
limits on the number of nodes you can add to a cluster. See the [AWS Redshift Documentation](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-operations.html#rs-resize-tutorial) and [AWS API Documentation](https://docs.aws.amazon.com/redshift/latest/APIReference/API_ResizeCluster.html) for more details.
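
For illustration, a sketch of resizing by updating the cluster definition and redeploying (the node count is illustrative):

```python
import aws_cdk.aws_ec2 as ec2

# vpc: ec2.Vpc


# Changing number_of_nodes (or node_type) and redeploying triggers a resize.
# Elastic resize is used by default; set classic_resizing=True to force a
# classic resize.
Cluster(self, "Redshift",
    master_user=Login(
        master_username="admin"
    ),
    vpc=vpc,
    cluster_type=ClusterType.MULTI_NODE,
    node_type=NodeType.RA3_XLPLUS,
    number_of_nodes=4,  # e.g. scaled up from 2
    classic_resizing=False
)
```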

            
