Terraform AWS
The Amazon Web Services (AWS) provider is used to interact with the many resources supported by AWS. The provider
needs to be configured with the proper credentials before it can be used.
Use the navigation to the left to read about the available resources.
Example Usage
provider "aws" {
version = "~> 2.0"
region = "us-east-1"
}
Authentication
The AWS provider offers a flexible means of providing credentials for authentication. The following methods are supported,
in this order, and explained below:
Static credentials
Environment variables
Shared credentials file
EC2 Role
Static credentials
Warning: Hard-coding credentials into any Terraform configuration is not recommended, and risks secret leakage
should this file ever be committed to a public version control system.
Static credentials can be provided by adding an access_key and secret_key in-line in the AWS provider block:
Usage:
provider "aws" {
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
Environment variables
You can provide your credentials via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables,
representing your AWS Access Key and AWS Secret Key, respectively. Note that setting your AWS credentials using either
these (or legacy) environment variables will override the use of AWS_SHARED_CREDENTIALS_FILE and AWS_PROFILE . The
AWS_DEFAULT_REGION and AWS_SESSION_TOKEN environment variables are also used, if applicable:
provider "aws" {}
Usage:
$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
$ export AWS_DEFAULT_REGION="us-west-2"
$ terraform plan
Shared Credentials file
You can use an AWS credentials file to specify your credentials. The default location is $HOME/.aws/credentials on Linux
and OS X, or %USERPROFILE%\.aws\credentials for Windows users. If Terraform fails to detect credentials inline, or in the
environment, it will check this location. You can optionally specify a different location in the configuration by
providing the shared_credentials_file attribute, or in the environment with the AWS_SHARED_CREDENTIALS_FILE
variable. This method also supports a profile configuration and matching AWS_PROFILE environment variable:
Usage:
provider "aws" {
region = "us-west-2"
shared_credentials_file = "/Users/tf_user/.aws/creds"
profile = "customprofile"
}
If specifying the profile through the AWS_PROFILE environment variable, you may also need to set AWS_SDK_LOAD_CONFIG
to a truthy value (e.g. AWS_SDK_LOAD_CONFIG=1 ) for advanced AWS client configurations, such as profiles that use the
source_profile or role_arn configurations.
EC2 Role
If you're running Terraform from an EC2 instance with an IAM Instance Profile using an IAM Role, Terraform will ask the
metadata API (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-
security-credentials) endpoint for credentials.
This is the preferred approach over any other when running in EC2, as it avoids hard-coding credentials. Instead, credentials are
leased on-the-fly by Terraform, which reduces the chance of leakage.
You can provide the custom metadata API endpoint via the AWS_METADATA_URL variable which expects the endpoint URL,
including the version, and defaults to http://169.254.169.254:80/latest .
The default deadline for the EC2 metadata API endpoint is 100 milliseconds, which can be overridden by setting the
AWS_METADATA_TIMEOUT environment variable. The variable expects a positive golang Time.Duration string, which is a
sequence of decimal numbers and a unit suffix; valid suffixes are ns (nanoseconds), us (microseconds), ms (milliseconds),
s (seconds), m (minutes), and h (hours). Examples of valid inputs: 100ms , 250ms , 1s , 2.5s , 2.5m , 1m30s .
Assume role
If provided with a role ARN, Terraform will attempt to assume this role using the supplied credentials.
Usage:
provider "aws" {
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
session_name = "SESSION_NAME"
external_id = "EXTERNAL_ID"
}
}
Argument Reference
In addition to generic provider arguments (https://www.terraform.io/docs/configuration/providers.html) (e.g. alias and
version ), the following arguments are supported in the AWS provider block:
access_key - (Optional) This is the AWS access key. It can also be sourced from the AWS_ACCESS_KEY_ID environment
variable, or via a shared credentials file if profile is specified.
secret_key - (Optional) This is the AWS secret key. It can also be sourced from the AWS_SECRET_ACCESS_KEY
environment variable, or via a shared credentials file if profile is specified.
region - (Required) This is the AWS region. It must be provided, but it can also be sourced from the
AWS_DEFAULT_REGION environment variable, or via a shared credentials file if profile is specified.
profile - (Optional) This is the AWS profile name as set in the shared credentials file.
assume_role - (Optional) An assume_role block (documented below). Only one assume_role block may be in the
configuration.
endpoints - (Optional) Configuration block for customizing service endpoints. See the Custom Service Endpoints
Guide (/docs/providers/aws/guides/custom-service-endpoints.html) for more information about connecting to
alternate AWS endpoints or AWS compatible solutions.
shared_credentials_file - (Optional) This is the path to the shared credentials file. If this is not set and a profile is
specified, ~/.aws/credentials will be used.
token - (Optional) Session token for validating temporary credentials. Typically provided after successful identity
federation or Multi-Factor Authentication (MFA) login. With MFA login, this is the session token provided afterwards,
not the 6 digit MFA code used to get temporary credentials. It can also be sourced from the AWS_SESSION_TOKEN
environment variable.
max_retries - (Optional) This is the maximum number of times an API call is retried, in the case where requests are
being throttled or experiencing transient failures. The delay between the subsequent API calls increases exponentially.
allowed_account_ids - (Optional) List of allowed, white listed, AWS account IDs to prevent you from mistakenly
using an incorrect one (and potentially end up destroying a live environment). Conflicts with
forbidden_account_ids .
forbidden_account_ids - (Optional) List of forbidden, blacklisted, AWS account IDs to prevent you mistakenly using
a wrong one (and potentially end up destroying a live environment). Conflicts with allowed_account_ids .
insecure - (Optional) Explicitly allow the provider to perform "insecure" SSL requests. If omitted, default value is
false .
skip_credentials_validation - (Optional) Skip the credentials validation via the STS API. Useful for AWS API
implementations that do not have STS available or implemented.
skip_get_ec2_platforms - (Optional) Skip getting the supported EC2 platforms. Used by users that don't have
ec2:DescribeAccountAttributes permissions.
skip_region_validation - (Optional) Skip validation of provided region name. Useful for AWS-like implementations
that use their own region names or to bypass the validation for regions that aren't publicly available yet.
skip_requesting_account_id - (Optional) Skip requesting the account ID. Useful for AWS API implementations that
do not have the IAM, STS API, or metadata API. When set to true and not determined previously, returns an empty
account ID when manually constructing ARN attributes with the following:
aws_dx_hosted_private_virtual_interface_accepter resource
(/docs/providers/aws/r/dx_hosted_private_virtual_interface_accepter.html)
aws_dx_hosted_private_virtual_interface resource
(/docs/providers/aws/r/dx_hosted_private_virtual_interface.html)
aws_dx_hosted_public_virtual_interface_accepter resource
(/docs/providers/aws/r/dx_hosted_public_virtual_interface_accepter.html)
aws_dx_hosted_public_virtual_interface resource
(/docs/providers/aws/r/dx_hosted_public_virtual_interface.html)
aws_s3_account_public_access_block resource
(/docs/providers/aws/r/s3_account_public_access_block.html)
aws_ses_domain_identity_verification resource
(/docs/providers/aws/r/ses_domain_identity_verification.html)
skip_metadata_api_check - (Optional) Skip the AWS Metadata API check. Useful for AWS API implementations that
do not have a metadata API endpoint. Setting to true prevents Terraform from authenticating via the Metadata API.
You may need to use other authentication methods like static credentials, configuration variables, or environment
variables.
s3_force_path_style - (Optional) Set this to true to force the request to use path-style addressing, i.e.,
http://s3.amazonaws.com/BUCKET/KEY . By default, the S3 client will use virtual hosted bucket addressing,
http://BUCKET.s3.amazonaws.com/KEY , when possible. Specific to the Amazon S3 service.
session_name - (Optional) The session name to use when making the AssumeRole call.
external_id - (Optional) The external ID to use when making the AssumeRole call.
policy - (Optional) A more restrictive policy to apply to the temporary credentials. This gives you a way to further
restrict the permissions for the resulting temporary security credentials. You cannot use the passed policy to grant
permissions that are in excess of those allowed by the access policy of the role that is being assumed.
All other providers (environment variable, shared credentials file, ...) will try three approaches in the following order:
iam:GetUser - Typically useful for IAM Users. It also means that each user needs to be privileged to call
iam:GetUser for themselves.
sts:GetCallerIdentity - Should work for both IAM Users and federated IAM Roles, introduced in Terraform
0.6.16 .
iam:ListRoles - This is specifically useful for IdP-federated profiles which cannot use iam:GetUser . It also
means that each federated user needs to assume an IAM role which allows iam:ListRoles . Used in
Terraform 0.6.16+ . There used to be no better way to get the account ID out of the API when using a federated
account until sts:GetCallerIdentity was introduced.
Data Source: aws_acm_certificate
Use this data source to get the ARN of a certificate in AWS Certificate Manager (ACM). This lets you reference the certificate
by domain without having to hard-code the ARN as input.
Example Usage
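A minimal sketch of a lookup; the domain name and resource label below are placeholders, not values from this guide:

# Find the most recent issued certificate for a hypothetical domain.
data "aws_acm_certificate" "example" {
  domain      = "example.com"
  statuses    = ["ISSUED"]
  most_recent = true
}

The resulting ARN can then be referenced elsewhere, e.g. as "${data.aws_acm_certificate.example.arn}".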
Argument Reference
domain - (Required) The domain of the certificate to look up. If no certificate is found with this name, an error will be
returned.
key_types - (Optional) A list of key algorithms to filter certificates. By default, ACM does not return all certificate
types when searching. Valid values are RSA_1024 , RSA_2048 , RSA_4096 , EC_prime256v1 , EC_secp384r1 , and
EC_secp521r1 .
statuses - (Optional) A list of statuses on which to filter the returned list. Valid values are PENDING_VALIDATION ,
ISSUED , INACTIVE , EXPIRED , VALIDATION_TIMED_OUT , REVOKED and FAILED . If no value is specified, only
certificates in the ISSUED state are returned.
types - (Optional) A list of types on which to filter the returned list. Valid values are AMAZON_ISSUED and IMPORTED .
most_recent - (Optional) If set to true, it sorts the certificates matched by previous criteria by the NotBefore field,
returning only the most recent one. If set to false, it returns an error if more than one certificate is found. Defaults to
false.
Attributes Reference
arn - Set to the ARN of the found certificate, suitable for referencing in other resources that support ACM certificates.
Data Source: aws_acmpca_certificate_authority
Get information on an AWS Certificate Manager Private Certificate Authority (ACM PCA Certificate Authority).
Example Usage
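A minimal sketch, assuming a placeholder certificate authority ARN:

data "aws_acmpca_certificate_authority" "example" {
  arn = "arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/12345678-1234-1234-1234-123456789012"
}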
Argument Reference
The following arguments are supported:
arn - (Required) Amazon Resource Name (ARN) of the certificate authority.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
certificate - Base64-encoded certificate authority (CA) certificate. Only available after the certificate authority
certificate has been imported.
certificate_chain - Base64-encoded certificate chain that includes any intermediate certificates and chains up to
root on-premises certificate that you used to sign your private CA certificate. The chain does not include your private
CA certificate. Only available after the certificate authority certificate has been imported.
certificate_signing_request - The base64 PEM-encoded certificate signing request (CSR) for your private CA
certificate.
not_after - Date and time after which the certificate authority is not valid. Only available after the certificate
authority certificate has been imported.
not_before - Date and time before which the certificate authority is not valid. Only available after the certificate
authority certificate has been imported.
serial - Serial number of the certificate authority. Only available after the certificate authority certificate has been
imported.
tags - Specifies a key-value map of user-defined tags that are attached to the certificate authority.
Example Usage
filter {
name = "name"
values = ["myami-*"]
}
filter {
name = "root-device-type"
values = ["ebs"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
}
Argument Reference
owners - (Required) List of AMI owners to limit search. At least 1 value must be specified. Valid values: an AWS account
ID, self (the current account), or an AWS owner alias (e.g. amazon , aws-marketplace , microsoft ).
most_recent - (Optional) If more than one result is returned, use the most recent AMI.
executable_users - (Optional) Limit search to users with explicit launch permission on the image. Valid items are the
numeric account ID or self .
filter - (Optional) One or more name/value pairs to filter off of. There are several valid keys, for a full reference,
check out describe-images in the AWS CLI reference (http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-
images.html).
name_regex - (Optional) A regex string to apply to the AMI list returned by AWS. This allows more advanced filtering
not supported from the AWS API. This filtering is done locally on what AWS returns, and could have a performance
impact if the result is large. It is recommended to combine this with other options to narrow down the list AWS
returns.
NOTE: If more or less than a single match is returned by the search, Terraform will fail. Ensure that your search is
specific enough to return a single AMI ID only, or use most_recent to choose the most recent one. If you want to
match multiple AMIs, use the aws_ami_ids data source instead.
Attributes Reference
id is set to the ID of the found AMI. In addition, the following attributes are exported:
NOTE: Some values are not always set and may not be available for interpolation.
block_device_mappings.#.ebs.iops - 0 if the EBS volume is not a provisioned IOPS image, otherwise the
supported IOPS count.
block_device_mappings.#.no_device - Suppresses the specified device included in the block device mapping
of the AMI.
description - The description of the AMI that was provided during image creation.
image_owner_alias - The AWS account alias (for example, amazon , self ) or the AWS account ID of the AMI owner.
kernel_id - The kernel associated with the image, if any. Only applicable for machine images.
name - The name of the AMI that was provided during image creation.
ramdisk_id - The RAM disk associated with the image, if any. Only applicable for machine images.
root_snapshot_id - The snapshot id associated with the root device, if any (only applies to ebs root devices).
state - The current state of the AMI. If the state is available , the image is successfully registered and can be used
to launch an instance.
Example Usage
filter {
name = "name"
values = ["ubuntu/images/ubuntu-*-*-amd64-server-*"]
}
}
Argument Reference
owners - (Required) List of AMI owners to limit search. At least 1 value must be specified. Valid values: an AWS account
ID, self (the current account), or an AWS owner alias (e.g. amazon , aws-marketplace , microsoft ).
executable_users - (Optional) Limit search to users with explicit launch permission on the image. Valid items are the
numeric account ID or self .
filter - (Optional) One or more name/value pairs to filter off of. There are several valid keys, for a full reference,
check out describe-images in the AWS CLI reference (http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-
images.html).
name_regex - (Optional) A regex string to apply to the AMI list returned by AWS. This allows more advanced filtering
not supported from the AWS API. This filtering is done locally on what AWS returns, and could have a performance
impact if the result is large. It is recommended to combine this with other options to narrow down the list AWS
returns.
Attributes Reference
ids is set to the list of AMI IDs, sorted by creation time according to sort_ascending .
Data Source: aws_api_gateway_api_key
Use this data source to get the name and value of a pre-existing API Key, for example to supply credentials for a dependency
microservice.
Example Usage
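A minimal sketch, using a placeholder API Key ID:

data "aws_api_gateway_api_key" "my_api_key" {
  id = "ru3mpjgse6"
}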
Argument Reference
id - (Required) The ID of the API Key to look up.
Attributes Reference
id - Set to the ID of the API Key.
Example Usage
Argument Reference
rest_api_id - (Required) The REST API id that owns the resource. If no REST API is found, an error will be returned.
path - (Required) The full path of the resource. If no path is found, an error will be returned.
Attributes Reference
id - Set to the ID of the found Resource.
Example Usage
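A minimal sketch, assuming a REST API named "my-rest-api" already exists in the account:

data "aws_api_gateway_rest_api" "my_rest_api" {
  name = "my-rest-api"
}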
Argument Reference
name - (Required) The name of the REST API to look up. If no REST API is found with this name, an error will be
returned. If multiple REST APIs are found with this name, an error will be returned.
Attributes Reference
id - Set to the ID of the found REST API.
root_resource_id - Set to the ID of the API Gateway Resource on the found REST API where the route matches '/'.
Data Source: aws_api_gateway_vpc_link
Use this data source to get the id of a VPC Link in API Gateway. To fetch the VPC Link you must provide a name to match
against. As there is no unique name constraint on API Gateway VPC Links this data source will error if there is more than one
match.
Example Usage
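A minimal sketch, using a placeholder VPC Link name:

data "aws_api_gateway_vpc_link" "my_api_gateway_vpc_link" {
  name = "my-vpc-link"
}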
Argument Reference
name - (Required) The name of the API Gateway VPC Link to look up. If no API Gateway VPC Link is found with this
name, an error will be returned. If multiple API Gateway VPC Links are found with this name, an error will be returned.
Attributes Reference
id - Set to the ID of the found API Gateway VPC Link.
Data Source: aws_arn
Parses an Amazon Resource Name (ARN) into its constituent parts.
Example Usage
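A minimal sketch; the arn argument and the example RDS ARN below are illustrative assumptions, not values from this guide:

data "aws_arn" "db_instance" {
  arn = "arn:aws:rds:eu-west-1:123456789012:db:mysql-db"
}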
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
region - The region the resource resides in. Note that the ARNs for some resources do not require a region, so this
component might be omitted.
resource - The content of this part of the ARN varies by service. It often includes an indicator of the type of resource
(for example, an IAM user or Amazon RDS database), followed by a slash (/) or a colon (:), followed by the resource
name itself.
Data Source: aws_autoscaling_group
Use this data source to get information on an existing autoscaling group.
Example Usage
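A minimal sketch, assuming an Auto Scaling group named "foo" exists:

data "aws_autoscaling_group" "foo" {
  name = "foo"
}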
Argument Reference
name - Specify the exact name of the desired autoscaling group.
Attributes Reference
NOTE: Some values are not always set and may not be available for interpolation.
arn - The Amazon Resource Name (ARN) of the Auto Scaling group.
default_cooldown - The amount of time, in seconds, after a scaling activity completes before another scaling
activity can start.
health_check_grace_period - The amount of time, in seconds, that Amazon EC2 Auto Scaling waits before checking
the health status of an EC2 instance that has come into service.
health_check_type - The service to use for the health checks. The valid values are EC2 and ELB.
placement_group - The name of the placement group into which to launch your instances, if any. For more
information, see Placement Groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html)
in the Amazon Elastic Compute Cloud User Guide.
service_linked_role_arn - The Amazon Resource Name (ARN) of the service-linked role that the Auto Scaling
group uses to call other AWS services on your behalf.
target_group_arns - The Amazon Resource Names (ARN) of the target groups for your load balancer.
Example Usage
filter {
name = "value"
values = ["Pets"]
}
}
notifications = [
"autoscaling:EC2_INSTANCE_LAUNCH",
"autoscaling:EC2_INSTANCE_TERMINATE",
"autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
"autoscaling:EC2_INSTANCE_TERMINATE_ERROR",
]
Argument Reference
filter - (Optional) A filter used to scope the list e.g. by tags. See related docs
(http://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_Filter.html).
name - (Required) The name of the filter. The valid values are: auto-scaling-group , key , value , and
propagate-at-launch .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
This can be used both to validate an availability zone given in a variable and to split the AZ name into its component parts of
an AWS region and an AZ identifier letter. The latter may be useful e.g. for implementing a consistent subnet numbering
scheme across several regions by mapping both the region and the subnet letter to network numbers.
This is different from the aws_availability_zones (plural) data source, which provides a list of the available zones.
Example Usage
The following example shows how this data source might be used to derive VPC and subnet CIDR prefixes systematically for
an availability zone.
variable "region_number" {
default = {
us-
- east-
-1 = 1
us-
- west-
-1 = 2
us-
- west-
-2 = 3
eu-
- central-
-1 = 4
ap-
- northeast-
-1 = 5
}
}
variable "az_number" {
default = {
a = 1
b = 2
c = 3
d = 4
e = 5
f = 6
}
}
Argument Reference
The arguments of this data source act as filters for querying the available availability zones. The given filters must match
exactly one availability zone whose data will be exported as attributes.
state - (Optional) A specific availability zone state to require. May be any of "available" , "information" or
"impaired" .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
region - The region where the selected availability zone resides. This is always the region selected on the provider,
since this data source searches only within that region.
name_suffix - The part of the AZ name that appears after the region name, uniquely identifying the AZ within its
region.
This is different from the aws_availability_zone (singular) data source, which provides some details about a specific
availability zone.
Example Usage
Argument Reference
The following arguments are supported:
state - (Optional) Allows filtering the list of Availability Zones based on their current state. Can be either "available" ,
"information" , "impaired" or "unavailable" . By default the list includes a complete set of Availability Zones to
which the underlying AWS account has access, regardless of their state.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
ecs_cluster_arn - The ARN of the underlying Amazon ECS cluster used by the compute environment.
service_role - The ARN of the IAM role that allows AWS Batch to make calls to other AWS services on your behalf.
type - The type of the compute environment (for example, MANAGED or UNMANAGED ).
status - The current status of the compute environment (for example, CREATING or VALID ).
status_reason - A short, human-readable string to provide additional details about the current status of the
compute environment.
state - The state of the compute environment (for example, ENABLED or DISABLED ). If the state is ENABLED , then
the compute environment accepts jobs from a queue and can scale out automatically based on queues.
Data Source: aws_batch_job_queue
The Batch Job Queue data source allows access to details of a specific job queue within AWS Batch.
Example Usage
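A minimal sketch; the name argument and the queue name are illustrative assumptions:

data "aws_batch_job_queue" "test_queue" {
  name = "tf-test-batch-job-queue"
}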
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
status - The current status of the job queue (for example, CREATING or VALID ).
status_reason - A short, human-readable string to provide additional details about the current status of the job
queue.
state - Describes the ability of the queue to accept new jobs (for example, ENABLED or DISABLED ).
priority - The priority of the job queue. Job queues with a higher priority are evaluated first when associated with
the same compute environment.
compute_environment_order - The compute environments that are attached to the job queue and the order in
which job placement is preferred. Compute environments are selected for job placement in ascending order.
compute_environment_order.#.order - The order of the compute environment.
Example Usage
policy = <<POLICY
{
"Id": "Policy",
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetBucketAcl", "s3:GetBucketPolicy"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-billing-tf-test-bucket",
"Principal": {
"AWS": [
"${data.aws_billing_service_account.main.arn}"
]
}
},
{
"Action": [
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-billing-tf-test-bucket/*",
"Principal": {
"AWS": [
"${data.aws_billing_service_account.main.arn}"
]
}
}
]
}
POLICY
}
Attributes Reference
id - The ID of the AWS billing service account.
arn - The ARN of the AWS billing service account.
Data Source: aws_caller_identity
Use this data source to get access to the effective Account ID, User ID, and ARN in which Terraform is authorized.
Example Usage
output "account_id" {
value = "${data.aws_caller_identity.current.account_id}"
}
output "caller_arn" {
value = "${data.aws_caller_identity.current.arn}"
}
output "caller_user" {
value = "${data.aws_caller_identity.current.user_id}"
}
Argument Reference
There are no arguments available for this data source.
Attributes Reference
account_id - The AWS Account ID number of the account that owns or contains the calling entity.
Example Usage
output "canonical_user_id" {
value = "${data.aws_canonical_user_id.current.id}"
}
Argument Reference
There are no arguments available for this data source.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
display_name - The human-friendly name linked to the canonical user ID. The bucket owner's display name. NOTE:
This value (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTServiceGET.html) is only included in the response
in the US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia
Pacific (Tokyo), EU (Ireland), and South America (São Paulo) regions.
Data Source: aws_cloudformation_export
The CloudFormation Export data source allows access to stack exports specified in the Output
(http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html) section of the
CloudFormation Template using the optional Export Property.
Note: If you are trying to use a value from a CloudFormation Stack in the same Terraform run please use normal
interpolation or CloudFormation Outputs.
Example Usage
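A minimal sketch that feeds an exported subnet ID into an instance; the export name and AMI ID are placeholders:

data "aws_cloudformation_export" "subnet_id" {
  name = "mySubnetIdExportName"
}

resource "aws_instance" "web" {
  ami           = "ami-abb07bcb"
  instance_type = "t2.micro"
  subnet_id     = "${data.aws_cloudformation_export.subnet_id.value}"
}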
Argument Reference
name - (Required) The name of the export as it appears in the console or from list-exports
(http://docs.aws.amazon.com/cli/latest/reference/cloudformation/list-exports.html)
Attributes Reference
In addition to all arguments above, the following attributes are exported:
value - The value from CloudFormation export identified by the export name found from list-exports
(http://docs.aws.amazon.com/cli/latest/reference/cloudformation/list-exports.html)
Example Usage
tags = {
Name = "HelloWorld"
}
}
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
disable_rollback - Whether the rollback of the stack is disabled when stack creation fails
parameters - A map of parameters that specify input parameters for the stack.
iam_role_arn - The ARN of the IAM role used to create the stack.
timeout_in_minutes - The amount of time that can pass before the stack status becomes CREATE_FAILED
Data Source: aws_cloudhsm_v2_cluster
Use this data source to get information about a CloudHSM v2 cluster
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
The following attributes are exported:
vpc_id - The id of the VPC that the CloudHSM cluster resides in.
security_group_id - The ID of the security group associated with the CloudHSM cluster.
cluster_certificates.0.cluster_csr - The certificate signing request (CSR). Available only in UNINITIALIZED
state.
cluster_certificates.0.hsm_certificate - The HSM certificate issued (signed) by the HSM hardware.
Example Usage
policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "Put bucket policy needed for trails",
"Effect": "Allow",
"Principal": {
"AWS": "${data.aws_cloudtrail_service_account.main.arn}"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::tf-cloudtrail-logging-test-bucket/*"
},
{
"Sid": "Get bucket policy needed for trails",
"Effect": "Allow",
"Principal": {
"AWS": "${data.aws_cloudtrail_service_account.main.arn}"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::tf-cloudtrail-logging-test-bucket"
}
]
}
EOF
}
Argument Reference
region - (Optional) Name of the region whose AWS CloudTrail account ID is desired. Defaults to the region from the
AWS provider configuration.
Attributes Reference
id - The ID of the AWS CloudTrail service account in the selected region.
arn - The ARN of the AWS CloudTrail service account in the selected region.
Data Source: aws_cloudwatch_log_group
Use this data source to get information about an AWS Cloudwatch Log Group
Example Usage
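A minimal sketch; the name argument and the log group name are illustrative assumptions:

data "aws_cloudwatch_log_group" "example" {
  name = "MyImportantLogs"
}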
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
creation_time - The creation time of the log group, expressed as the number of milliseconds after Jan 1, 1970
00:00:00 UTC.
Data Source: aws_codecommit_repository
The CodeCommit Repository data source allows the ARN, Repository ID, Repository URL for HTTP and Repository URL for
SSH to be retrieved for a CodeCommit repository.
Example Usage
Argument Reference
The following arguments are supported:
repository_name - (Required) The name for the repository. This needs to be less than 100 characters.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
clone_url_http - The URL to use for cloning the repository over HTTPS.
clone_url_ssh - The URL to use for cloning the repository over SSH.
Data Source: aws_cognito_user_pools
Use this data source to get a list of Cognito user pools.
Example Usage
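A minimal sketch, using a placeholder user pool name:

data "aws_cognito_user_pools" "selected" {
  name = "name-of-your-user-pool"
}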
Argument Reference
name - (Required) Name of the Cognito user pools. Name is not a unique attribute for Cognito user pools, so multiple
pools might be returned with a given name.
Attributes Reference
ids - The list of Cognito user pool ids.
Data Source: aws_cur_report_definition
Use this data source to get information on an AWS Cost and Usage Report Definition.
NOTE: The AWS Cost and Usage Report service is only available in us-east-1 currently.
NOTE: If AWS Organizations is enabled, only the master account can use this resource.
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
time_unit - The frequency on which report data are measured and displayed.
Example Usage
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to the arguments above, the following attributes are exported:
bgp_asn - The gateway's Border Gateway Protocol (BGP) Autonomous System Number (ASN).
type - The type of customer gateway. The only type AWS supports at this time is "ipsec.1".
Data Source: aws_db_cluster_snapshot
Use this data source to get information about a DB Cluster Snapshot for use when provisioning DB clusters.
NOTE: This data source does not apply to snapshots created on DB Instances. See the aws_db_snapshot data source
(/docs/providers/aws/d/db_snapshot.html) for DB Instance snapshots.
Example Usage
lifecycle {
ignore_changes = ["snapshot_identifier"]
}
}
Argument Reference
The following arguments are supported:
most_recent - (Optional) If more than one result is returned, use the most recent Snapshot.
db_cluster_identifier - (Optional) Returns the list of snapshots created by the specific db_cluster
snapshot_type - (Optional) The type of snapshots to be returned. If you don't specify a SnapshotType value, then
both automated and manual DB cluster snapshots are returned. Shared and public DB Cluster Snapshots are not
included in the returned results by default. Possible values are, automated , manual , shared and public .
include_shared - (Optional) Set this value to true to include shared manual DB Cluster Snapshots from other AWS
accounts that this AWS account has been given permission to copy or restore, otherwise set this value to false. The
default is false .
include_public - (Optional) Set this value to true to include manual DB Cluster Snapshots that are public and can be
copied or restored by any AWS account, otherwise set this value to false. The default is false .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
availability_zones - List of EC2 Availability Zones that instances in the DB cluster snapshot can be restored in.
db_cluster_identifier - Specifies the DB cluster identifier of the DB cluster that this DB cluster snapshot was
created from.
db_cluster_snapshot_arn - The Amazon Resource Name (ARN) for the DB Cluster Snapshot.
kms_key_id - If storage_encrypted is true, the AWS KMS key identifier for the encrypted DB cluster snapshot.
port - Port that the DB cluster was listening on at the time of the snapshot.
snapshot_create_time - Time when the snapshot was taken, in Universal Coordinated Time (UTC).
source_db_cluster_snapshot_identifier - The DB Cluster Snapshot Arn that the DB Cluster Snapshot was copied
from. It only has value in case of cross customer or cross region copy.
Example Usage
List the event categories of all the RDS resources.
output "example" {
value = "${data.aws_db_event_categories.example.event_categories}"
}
output "example" {
value = "${data.aws_db_event_categories.example.event_categories}"
}
Argument Reference
The following arguments are supported:
source_type - (Optional) The type of source that will be generating the events. Valid options are db-instance, db-
security-group, db-parameter-group, db-snapshot, db-cluster or db-cluster-snapshot.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
address - The hostname of the RDS instance. See also endpoint and port .
availability_zone - Specifies the name of the Availability Zone the DB instance is located in.
backup_retention_period - Specifies the number of days for which automatic DB snapshots are retained.
db_cluster_identifier - If the DB instance is a member of a DB cluster, contains the name of the DB cluster that
the DB instance is a member of.
db_instance_class - Contains the name of the compute and memory capacity class of the DB instance.
db_name - Contains the name of the initial database of this instance that was provided at create time, if one was
specified when the DB instance was created. This same name is returned for the life of the DB instance.
db_subnet_group - Specifies the name of the subnet group associated with the DB instance.
engine - Provides the name of the database engine to be used for this DB instance.
hosted_zone_id - The canonical hosted zone ID of the DB instance (to be used in a Route 53 Alias record).
iops - Specifies the Provisioned IOPS (I/O operations per second) value.
kms_key_id - If StorageEncrypted is true, the KMS key identifier for the encrypted DB instance.
monitoring_interval - The interval, in seconds, between points when Enhanced Monitoring metrics are collected
for the DB instance.
monitoring_role_arn - The ARN for the IAM role that permits RDS to send Enhanced Monitoring metrics to
CloudWatch Logs.
option_group_memberships - Provides the list of option group memberships for this DB instance.
preferred_backup_window - Specifies the daily time range during which automated backups are created.
preferred_maintenance_window - Specifies the weekly time range during which system maintenance can occur in
UTC.
vpc_security_groups - Provides a list of VPC security group elements that the DB instance belongs to.
ca_cert_identifier - Specifies the identifier of the CA certificate for the DB instance.
Data Source: aws_db_snapshot
Use this data source to get information about a DB Snapshot for use when provisioning DB instances
NOTE: This data source does not apply to snapshots created on Aurora DB clusters. See the
aws_db_cluster_snapshot data source (/docs/providers/aws/d/db_cluster_snapshot.html) for DB Cluster snapshots.
Example Usage
lifecycle {
ignore_changes = ["snapshot_identifier"]
}
}
Argument Reference
most_recent - (Optional) If more than one result is returned, use the most recent Snapshot.
db_instance_identifier - (Optional) Returns the list of snapshots created by the specific db_instance
include_shared - (Optional) Set this value to true to include shared manual DB snapshots from other AWS accounts
that this AWS account has been given permission to copy or restore, otherwise set this value to false. The default is
false .
include_public - (Optional) Set this value to true to include manual DB snapshots that are public and can be copied
or restored by any AWS account, otherwise set this value to false. The default is false .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
availability_zone - Specifies the name of the Availability Zone the DB instance was located in at the time of the DB
snapshot.
iops - Specifies the Provisioned IOPS (I/O operations per second) value of the DB instance at the time of the
snapshot.
source_db_snapshot_identifier - The DB snapshot Arn that the DB snapshot was copied from. It only has value in
case of cross customer or cross region copy.
source_region - The region that the DB snapshot was created in or copied from.
snapshot_create_time - Provides the time when the snapshot was taken, in Universal Coordinated Time (UTC).
Data Source: aws_dx_gateway
Retrieve information about a Direct Connect Gateway.
Example Usage
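A minimal sketch, using a placeholder gateway name:

data "aws_dx_gateway" "example" {
  name = "example"
}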
Argument Reference
name - (Required) The name of the gateway to retrieve.
Attributes Reference
amazon_side_asn - The ASN on the Amazon side of the connection.
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
See the DynamoDB Table Resource (/docs/providers/aws/r/dynamodb_table.html) for details on the returned attributes -
they are identical.
Data Source: aws_ebs_default_kms_key
Use this data source to get the default EBS encryption KMS key in the current region.
Example Usage
encrypted = true
kms_key_id = "${data.aws_ebs_default_kms_key.current.key_id}"
Attributes Reference
The following attributes are exported:
key_arn - Amazon Resource Name (ARN) of the default KMS key used to encrypt an EBS volume in this region when no
key is specified in an API call that creates the volume and encryption by default is enabled.
Data Source: aws_ebs_encryption_by_default
Provides a way to check whether default EBS encryption is enabled for your AWS account in the current AWS region.
Example Usage
Attributes Reference
The following attributes are exported:
enabled - Whether or not default EBS encryption is enabled. Returns as true or false .
Data Source: aws_ebs_snapshot
Use this data source to get information about an EBS Snapshot for use when provisioning EBS Volumes
Example Usage
filter {
name = "volume-size"
values = ["40"]
}
filter {
name = "tag:Name"
values = ["Example"]
}
}
Argument Reference
The following arguments are supported:
most_recent - (Optional) If more than one result is returned, use the most recent snapshot.
owners - (Optional) Returns the snapshots owned by the specified owner id. Multiple owners can be specified.
restorable_by_user_ids - (Optional) One or more AWS accounts IDs that can create volumes from the snapshot.
filter - (Optional) One or more name/value pairs to filter off of. There are several valid keys, for a full reference,
check out describe-snapshots in the AWS CLI reference
(http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-snapshots.html).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
filter {
name = "volume-size"
values = ["40"]
}
filter {
name = "tag:Name"
values = ["Example"]
}
}
Argument Reference
The following arguments are supported:
owners - (Optional) Returns the snapshots owned by the specified owner id. Multiple owners can be specified.
restorable_by_user_ids - (Optional) One or more AWS accounts IDs that can create volumes from the snapshot.
filter - (Optional) One or more name/value pairs to filter off of. There are several valid keys, for a full reference,
check out describe-snapshots in the AWS CLI reference (http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-
snapshots.html).
Attributes Reference
ids is set to the list of EBS snapshot IDs, sorted by creation time in descending order.
Data Source: aws_ebs_volume
Use this data source to get information about an EBS volume for use in other resources.
Example Usage
filter {
name = "volume-type"
values = ["gp2"]
}
filter {
name = "tag:Name"
values = ["Example"]
}
}
Argument Reference
The following arguments are supported:
most_recent - (Optional) If more than one result is returned, use the most recent Volume.
filter - (Optional) One or more name/value pairs to filter off of. There are several valid keys, for a full reference,
check out describe-volumes in the AWS CLI reference (http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-
volumes.html).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
By Filter
By Identifier
Argument Reference
The following arguments are supported:
filter - (Optional) One or more configuration blocks containing name-value filters. Detailed below.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
amazon_side_asn - Private Autonomous System Number (ASN) for the Amazon side of a BGP session
default_route_table_association - Whether resource attachments are automatically associated with the default
association route table.
owner_id - Identifier of the AWS account that owns the EC2 Transit Gateway
Example Usage
By Filter
filter {
name = "transit-gateway-id"
values = ["tgw-12345678"]
}
}
By Identifier
Argument Reference
The following arguments are supported:
filter - (Optional) One or more configuration blocks containing name-value filters. Detailed below.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
default_association_route_table - Boolean whether this is the default association route table for the EC2 Transit
Gateway
default_propagation_route_table - Boolean whether this is the default propagation route table for the EC2
Transit Gateway
tags - Key-value tags for the EC2 Transit Gateway Route Table
Data Source: aws_ec2_transit_gateway_vpc_attachment
Get information on an EC2 Transit Gateway VPC Attachment.
Example Usage
By Filter
By Identifier
Argument Reference
The following arguments are supported:
filter - (Optional) One or more configuration blocks containing name-value filters. Detailed below.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
tags - Key-value tags for the EC2 Transit Gateway VPC Attachment
vpc_owner_id - Identifier of the AWS account that owns the EC2 VPC.
Data Source: aws_ec2_transit_gateway_vpn_attachment
Get information on an EC2 Transit Gateway VPN Attachment.
Example Usage
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to all arguments above, the following attributes are exported:
tags - Key-value tags for the EC2 Transit Gateway VPN Attachment
Data Source: aws_ecr_image
The ECR Image data source allows the details of an image with a particular tag or digest to be retrieved.
Example Usage
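A minimal sketch; the repository_name argument and the values below are illustrative assumptions:

data "aws_ecr_image" "service_image" {
  repository_name = "my/service"
  image_tag       = "latest"
}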
Argument Reference
The following arguments are supported:
image_digest - (Optional) The sha256 digest of the image manifest. At least one of image_digest or image_tag
must be specified.
image_tag - (Optional) The tag associated with this image. At least one of image_digest or image_tag must be
specified.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
image_pushed_at - The date and time, expressed as a unix timestamp, at which the current image was pushed to the
repository.
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
registered_container_instances_count - The number of registered container instances for the ECS Cluster
Example Usage
Argument Reference
The following arguments are supported:
task_definition - (Required) The ARN of the task definition which contains the container
Attributes Reference
In addition to all arguments above, the following attributes are exported:
memory_reservation - The soft limit (in MiB) of memory to reserve for the container. When system memory is under
contention, Docker attempts to keep the container memory to this soft limit
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
container_definitions = <<DEFINITION
[
{
"cpu": 128,
"environment": [{
"name": "SECRET",
"value": "KEY"
}],
"essential": true,
"image": "mongo:latest",
"memory": 128,
"memoryReservation": 64,
"name": "mongodb"
}
]
DEFINITION
}
task_definition = "${aws_ecs_task_definition.mongo.family}:${max("${aws_ecs_task_definition.mongo.revision}", "${data.aws_ecs_task_definition.mongo.revision}")}"
}
Argument Reference
The following arguments are supported:
task_definition - (Required) The family for the latest ACTIVE revision, family and revision (family:revision) for a
specific revision in the family, or the ARN of the task definition to access.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
network_mode - The Docker networking mode to use for the containers in this task.
task_role_arn - The ARN of the IAM role that containers in this task can assume
Data Source: aws_efs_file_system
Provides information about an Elastic File System (EFS).
Example Usage
variable "file_system_id" {
type = "string"
default = ""
}
Argument Reference
The following arguments are supported:
creation_token - (Optional) Restricts the list to the file system with this creation token.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
dns_name - The DNS name for the filesystem per documented convention
(http://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-dns-name.html).
Data Source: aws_efs_mount_target
Provides information about an Elastic File System Mount Target (EFS).
Example Usage
variable "mount_target_id" {
type = "string"
default = ""
}
Argument Reference
The following arguments are supported:
mount_target_id - (Required) ID of the mount target that you want to have described
Attributes Reference
In addition to all arguments above, the following attributes are exported:
file_system_arn - Amazon Resource Name of the file system for which the mount target is intended.
ip_address - Address at which the file system may be mounted via the mount target.
security_groups - List of VPC security group IDs attached to the mount target.
dns_name - The DNS name for the given subnet/AZ per documented convention
(http://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-dns-name.html).
network_interface_id - The ID of the network interface that Amazon EFS created when it created the mount target.
Data Source: aws_eip
aws_eip provides details about a specific Elastic IP.
Example Usage
Argument Reference
The arguments of this data source act as filters for querying the available Elastic IPs in the current region. The given filters
must match exactly one Elastic IP whose data will be exported as attributes.
filter - (Optional) One or more name/value pairs to use as filters. There are several valid keys, for a full reference,
check out the EC2 API Reference
(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeAddresses.html).
id - (Optional) The allocation id of the specific VPC EIP to retrieve. If a classic EIP is required, do NOT set id , only set
public_ip
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired Elastic IP
Attributes Reference
In addition to all arguments above, the following attributes are exported:
association_id - The ID representing the association of the address with an instance in a VPC.
domain - Indicates whether the address is for use in EC2-Classic (standard) or in a VPC (vpc).
id - If VPC Elastic IP, the allocation identifier. If EC2-Classic Elastic IP, the public IP address.
instance_id - The ID of the instance that the address is associated with (if any).
network_interface_owner_id - The ID of the AWS account that owns the network interface.
Note: The data source computes the public_dns and private_dns attributes according to the VPC DNS Guide
(https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-hostnames) as they are not available with
the EC2 API.
Data Source: aws_eks_cluster_auth
Get an authentication token to communicate with an EKS cluster.
Uses IAM credentials from the AWS provider to generate a temporary token that is compatible with AWS IAM Authenticator
(https://github.com/kubernetes-sigs/aws-iam-authenticator) authentication. This can be used to authenticate to an EKS
cluster or to a cluster that has the AWS IAM Authenticator server configured.
Example Usage
provider "kubernetes" {
host = "${data.aws_eks_cluster.example.endpoint}"
cluster_ca_certificate = "${base64decode(data.aws_eks_cluster.example.certificate_authority.0.data)}"
token = "${data.aws_eks_cluster_auth.example.token}"
load_config_file = false
}
Argument Reference
name - (Required) The name of the cluster
Attributes Reference
token - The token to use to authenticate with the cluster.
Data Source: aws_eks_cluster
Retrieve information about an EKS Cluster.
Example Usage
output "endpoint" {
value = "${data.aws_eks_cluster.example.endpoint}"
}
output "kubeconfig-certificate-authority-data" {
value = "${data.aws_eks_cluster.example.certificate_authority.0.data}"
}
output "identity-oidc-issuer" {
value = "${data.aws_eks_cluster.example.identity.0.oidc.0.issuer}"
}
Argument Reference
name - (Required) The name of the cluster
Attributes Reference
id - The name of the cluster
created_at - The Unix epoch time stamp in seconds for when the cluster was created.
identity - Nested attribute containing identity provider information for your cluster. Only available on Kubernetes
version 1.13 and 1.14 clusters created or upgraded on or after September 3, 2019. For an example using this
information to enable IAM Roles for Service Accounts, see the aws_eks_cluster resource documentation
(/docs/providers/aws/r/eks_cluster.html).
oidc - Nested attribute containing OpenID Connect (https://openid.net/connect/) identity provider information
for the cluster.
role_arn - The Amazon Resource Name (ARN) of the IAM role that provides permissions for the Kubernetes control
plane to make calls to AWS API operations on your behalf.
status - The status of the EKS cluster. One of CREATING , ACTIVE , DELETING , FAILED .
vpc_config - Nested attribute containing VPC configuration for the cluster.
endpoint_private_access - Indicates whether or not the Amazon EKS private API server endpoint is enabled.
endpoint_public_access - Indicates whether or not the Amazon EKS public API server endpoint is enabled.
Example Usage
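A minimal sketch; the cluster_id argument and the cluster name are illustrative assumptions:

data "aws_elasticache_cluster" "my_cluster" {
  cluster_id = "my-cluster-id"
}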
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
num_cache_nodes - The number of cache nodes that the cache cluster has.
security_group_names - List of security group names associated with this cache cluster.
security_group_ids - List VPC security groups associated with the cache cluster.
parameter_group_name - Name of the parameter group associated with this cache cluster.
maintenance_window - Specifies the weekly time range for when maintenance on the cache cluster is performed.
snapshot_window - The daily time range (in UTC) during which ElastiCache will begin taking a daily snapshot of the
cache cluster.
snapshot_retention_limit - The number of days for which ElastiCache will retain automatic cache cluster
snapshots before deleting them.
notification_topic_arn - An Amazon Resource Name (ARN) of an SNS topic that ElastiCache notifications get sent
to.
port - The port number on which each of the cache nodes will accept connections.
configuration_endpoint - (Memcached only) The configuration endpoint to allow host discovery.
cluster_address - (Memcached only) The DNS name of the cache cluster without the port appended.
cache_nodes - List of node objects including id , address , port and availability_zone . Referenceable e.g. as
${data.aws_elasticache_cluster.bar.cache_nodes.0.address}
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
auth_token_enabled - A flag that enables using an AuthToken (password) when issuing Redis commands.
number_cache_clusters - The number of cache clusters that the replication group has.
member_clusters - The identifiers of all the nodes that are part of this replication group.
snapshot_window - The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your
node group (shard).
snapshot_retention_limit - The number of days for which ElastiCache retains automatic cache cluster snapshots
before deleting them.
port - The port number on which the configuration endpoint will accept connections.
primary_endpoint_address - The endpoint of the primary node in this node group (shard).
Data Source: aws_elastic_beanstalk_application
Retrieve information about an Elastic Beanstalk Application.
Example Usage
output "arn" {
value = "${data.aws_elastic_beanstalk_application.example.arn}"
}
output "description" {
value = "${data.aws_elastic_beanstalk_application.example.description}"
}
Argument Reference
name - (Required) The name of the application
Attributes Reference
id - The name of the application
service_role - The ARN of an IAM service role under which the application version is deleted. Elastic Beanstalk must
have permission to assume this role.
delete_source_from_s3 - Specifies whether to delete a version's source bundle from S3 when the application version
is deleted.
Data Source: aws_elastic_beanstalk_hosted_zone
Use this data source to get the ID of an Elastic Beanstalk hosted zone
(http://docs.aws.amazon.com/general/latest/gr/rande.html#elasticbeanstalk_region).
Example Usage
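A minimal sketch; with no region argument set, the data source resolves the hosted zone for the provider's current region:

data "aws_elastic_beanstalk_hosted_zone" "current" {}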
Argument Reference
region - (Optional) The region you'd like the zone for. By default, fetches the current region.
Attributes Reference
id - The ID of the hosted zone.
Example Usage
Argument Reference
most_recent - (Optional) If more than one result is returned, use the most recent solution stack.
name_regex - A regex string to apply to the solution stack list returned by AWS. See Elastic Beanstalk Supported
Platforms (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html) from AWS documentation
for reference solution stack names.
NOTE: If more or less than a single match is returned by the search, Terraform will fail. Ensure that your search is specific enough to return a single solution stack, or use most_recent to choose the most recent one.
Attributes Reference
name - The name of the solution stack.
Data Source: aws_elasticsearch_domain
Use this data source to get information about an Elasticsearch Domain
Example Usage
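A minimal sketch, assuming the domain is looked up by domain_name (the value below is illustrative):
data "aws_elasticsearch_domain" "my_domain" {
  domain_name = "my-domain-name"
}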
Argument Reference
The following arguments are supported:
Attributes Reference
The following attributes are exported:
dedicated_master_enabled - Indicates whether dedicated master nodes are enabled for the cluster.
volume_size - The size of EBS volumes attached to data nodes (in GB).
iops - The baseline input/output (I/O) performance of EBS volumes attached to data nodes.
endpoint – Domain-specific endpoint used to submit index, search, and data upload requests.
cloudwatch_log_group_arn - The CloudWatch Log Group where the logs are published.
Data Source: aws_elb_hosted_zone_id
Example Usage
alias {
name = "${aws_elb.main.dns_name}"
zone_id = "${data.aws_elb_hosted_zone_id.main.id}"
evaluate_target_health = true
}
}
Argument Reference
region - (Optional) Name of the region whose AWS ELB HostedZoneId is desired. Defaults to the region from the AWS provider configuration.
Attributes Reference
id - The ID of the AWS ELB HostedZoneId in the selected region.
aws_elb
Provides information about a "classic" Elastic Load Balancer (ELB). See LB Data Source (/docs/providers/aws/d/lb.html) if you
are looking for "v2" Application Load Balancer (ALB) or Network Load Balancer (NLB).
This data source can prove useful when a module accepts an LB as an input variable and needs to, for example, determine
the security groups associated with it, etc.
Example Usage
variable "lb_name" {
type = "string"
default = ""
}
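A lookup by name that builds on the variable above might look like this sketch:
data "aws_elb" "test" {
  name = "${var.lb_name}"
}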
Argument Reference
The following arguments are supported:
Attributes Reference
See the ELB Resource (/docs/providers/aws/r/elb.html) for details on the returned attributes - they are identical.
Data Source: aws_elb_service_account
Use this data source to get the Account ID of the AWS Elastic Load Balancing Service Account
(http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#attach-bucket-policy) in a given
region for the purpose of whitelisting in S3 bucket policy.
Example Usage
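The bucket policy fragment below references data.aws_elb_service_account.main; a minimal sketch of that lookup, using the region from the provider configuration, is:
data "aws_elb_service_account" "main" {}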
policy = <<POLICY
{
"Id": "Policy",
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-elb-tf-test-bucket/AWSLogs/*",
"Principal": {
"AWS": [
"${data.aws_elb_service_account.main.arn}"
]
}
}
]
}
POLICY
}
access_logs {
bucket = "${aws_s3_bucket.elb_logs.bucket}"
interval = 5
}
listener {
instance_port = 8000
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
}
Argument Reference
region - (Optional) Name of the region whose AWS ELB account ID is desired. Defaults to the region from the AWS provider configuration.
Attributes Reference
id - The ID of the AWS ELB service account in the selected region.
arn - The ARN of the AWS ELB service account in the selected region.
Data Source: aws_glue_script
Use this data source to generate a Glue script from a Directed Acyclic Graph (DAG).
Example Usage
dag_edge {
source = "datasource0"
target = "applymapping1"
}
dag_edge {
source = "applymapping1"
target = "selectfields2"
}
dag_edge {
source = "selectfields2"
target = "resolvechoice3"
}
dag_edge {
source = "resolvechoice3"
target = "datasink4"
}
dag_node {
id = "datasource0"
node_type = "DataSource"
args {
name = "database"
value = "\"${aws_glue_catalog_database.source.name}\""
}
args {
name = "table_name"
value = "\"${aws_glue_catalog_table.source.name}\""
}
}
dag_node {
id = "applymapping1"
node_type = "ApplyMapping"
args {
name = "mapping"
value = "[(\"column1\", \"string\", \"column1\", \"string\")]"
}
}
dag_node {
id = "selectfields2"
node_type = "SelectFields"
args {
name = "paths"
value = "[\"column1\"]"
}
}
dag_node {
id = "resolvechoice3"
node_type = "ResolveChoice"
args {
name = "choice"
value = "\"MATCH_CATALOG\""
}
args {
name = "database"
value = "\"${aws_glue_catalog_database.destination.name}\""
}
args {
name = "table_name"
value = "\"${aws_glue_catalog_table.destination.name}\""
}
}
dag_node {
id = "datasink4"
node_type = "DataSink"
args {
name = "database"
value = "\"${aws_glue_catalog_database.destination.name}\""
}
args {
name = "table_name"
value = "\"${aws_glue_catalog_table.destination.name}\""
}
}
}
output "python_script" {
value = "${data.aws_glue_script.example.python_script}"
}
dag_edge {
source = "datasource0"
target = "applymapping1"
}
dag_edge {
source = "applymapping1"
target = "selectfields2"
}
dag_edge {
source = "selectfields2"
target = "resolvechoice3"
}
dag_edge {
source = "resolvechoice3"
target = "datasink4"
}
dag_node {
id = "datasource0"
node_type = "DataSource"
args {
name = "database"
value = "\"${aws_glue_catalog_database.source.name}\""
}
args {
name = "table_name"
value = "\"${aws_glue_catalog_table.source.name}\""
}
}
dag_node {
id = "applymapping1"
node_type = "ApplyMapping"
args {
name = "mappings"
value = "[(\"column1\", \"string\", \"column1\", \"string\")]"
}
}
dag_node {
id = "selectfields2"
node_type = "SelectFields"
args {
name = "paths"
value = "[\"column1\"]"
}
}
dag_node {
id = "resolvechoice3"
node_type = "ResolveChoice"
node_type = "ResolveChoice"
args {
name = "choice"
value = "\"MATCH_CATALOG\""
}
args {
name = "database"
value = "\"${aws_glue_catalog_database.destination.name}\""
}
args {
name = "table_name"
value = "\"${aws_glue_catalog_table.destination.name}\""
}
}
dag_node {
id = "datasink4"
node_type = "DataSink"
args {
name = "database"
value = "\"${aws_glue_catalog_database.destination.name}\""
}
args {
name = "table_name"
value = "\"${aws_glue_catalog_table.destination.name}\""
}
}
}
output "scala_code" {
value = "${data.aws_glue_script.example.scala_code}"
}
Argument Reference
dag_edge - (Required) A list of the edges in the DAG. Defined below.
language - (Optional) The programming language of the resulting code from the DAG. Defaults to PYTHON . Valid
values are PYTHON and SCALA .
Attributes Reference
python_script - The Python script generated from the DAG when the language argument is set to PYTHON .
scala_code - The Scala code generated from the DAG when the language argument is set to SCALA .
Data Source: aws_iam_account_alias
The IAM Account Alias data source allows access to the account alias for the effective account in which Terraform is working.
Example Usage
output "account_id" {
value = "${data.aws_iam_account_alias.current.account_alias}"
}
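The output above assumes a data block such as:
data "aws_iam_account_alias" "current" {}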
Argument Reference
There are no arguments available for this data source.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
group_name - (Required) The friendly IAM group name to match.
Attributes Reference
arn - The Amazon Resource Name (ARN) specifying the group.
Example Usage
Argument Reference
name - (Required) The friendly IAM instance profile name to match.
Attributes Reference
arn - The Amazon Resource Name (ARN) specifying the instance profile.
create_date - The string representation of the date the instance profile was created.
role_arn - The role arn associated with this instance profile.
role_name - The role name associated with this instance profile.
Data Source: aws_iam_policy_document
Generates an IAM policy document in JSON format.
This is a data source which can be used to construct a JSON representation of an IAM policy document, for use with
resources which expect policy documents, such as the aws_iam_policy resource.
For more information about building AWS IAM policy documents with Terraform, see the AWS IAM Policy Document
Guide (/docs/providers/aws/guides/iam-policy-documents.html).
data "aws_iam_policy_document" "example" {
statement {
sid = "1"
actions = [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation",
]
resources = [
"arn:aws:s3:::*",
]
}
statement {
actions = [
"s3:ListBucket",
]
resources = [
"arn:aws:s3:::${var.s3_bucket_name}",
]
condition {
test = "StringLike"
variable = "s3:prefix"
values = [
"",
"home/",
"home/&{aws:username}/",
]
}
}
statement {
actions = [
"s3:*",
]
resources = [
"arn:aws:s3:::${var.s3_bucket_name}/home/&{aws:username}",
"arn:aws:s3:::${var.s3_bucket_name}/home/&{aws:username}/*",
]
}
}
Using this data source to generate policy documents is optional. It is also valid to use literal JSON strings within your
configuration, or to use the file interpolation function to read a raw JSON policy document from a file.
Argument Reference
The following arguments are supported:
source_json (Optional) - An IAM policy document to import as a base for the current policy document. Statements
with non-blank sid s in the current policy document will overwrite statements with the same sid in the source json.
Statements without an sid cannot be overwritten.
override_json (Optional) - An IAM policy document to import and override the current policy document. Statements
with non-blank sid s in the override document will overwrite statements with the same sid in the current document.
Statements without an sid cannot be overwritten.
statement (Optional) - A nested configuration block (described below) configuring one statement to be included in the policy document.
version (Optional) - IAM policy document version. Valid values: 2008-10-17 , 2012-10-17 . Defaults to 2012-10-17 . For more information, see the AWS IAM User Guide (https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_version.html).
Each document configuration may have one or more statement blocks, which each accept the following arguments:
effect (Optional) - Either "Allow" or "Deny", to specify whether this statement allows or denies the given actions. The
default is "Allow".
actions (Optional) - A list of actions that this statement either allows or denies. For example,
["ec2:RunInstances", "s3:*"] .
not_actions (Optional) - A list of actions that this statement does not apply to. Used to apply a policy statement to all
actions except those listed.
resources (Optional) - A list of resource ARNs that this statement applies to. This is required by AWS if used for an
IAM policy.
not_resources (Optional) - A list of resource ARNs that this statement does not apply to. Used to apply a policy
statement to all resources except those listed.
principals (Optional) - A nested configuration block (described below) specifying a resource (or resource pattern) to which this statement applies.
not_principals (Optional) - Like principals except gives resources that the statement does not apply to.
condition (Optional) - A nested configuration block (described below) that defines a further, possibly-service-specific condition that constrains whether this statement applies.
Each policy may have either zero or more principals blocks or zero or more not_principals blocks, both of which each
accept the following arguments:
type (Required) The type of principal. For AWS ARNs this is "AWS". For AWS services (e.g. Lambda), this is "Service".
identifiers (Required) List of identifiers for principals. When type is "AWS", these are IAM user or role ARNs. When type is "Service", these are AWS Service roles e.g. lambda.amazonaws.com .
Each policy statement may have zero or more condition blocks, which each accept the following arguments:
values (Required) The values to evaluate the condition against. If multiple values are provided, the condition matches
if at least one of them applies. (That is, the tests are combined with the "OR" boolean operation.)
When multiple condition blocks are provided, they must all evaluate to true for the policy statement to apply. (In other
words, the conditions are combined with the "AND" boolean operation.)
Wildcard Principal
In order to define a wildcard principal (a.k.a. anonymous user) use type = "*" and identifiers = ["*"] . In that case the rendered json will contain "Principal": "*" . Note that even though the IAM Documentation (https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) states that "Principal": "*" and "Principal": {"AWS": "*"} are equivalent, those principals have different behavior for IAM Role Trust Policy. Therefore Terraform will normalize the principal field only in the above-mentioned case, and principals like type = "AWS" and identifiers = ["*"] will be rendered as "Principal": {"AWS": "*"} .
Attributes Reference
The following attribute is exported:
principals {
type = "Service"
identifiers = ["firehose.amazonaws.com"]
}
principals {
type = "AWS"
identifiers = ["${var.trusted_role_arn}"]
}
}
}
statement {
sid = "SidToOverwrite"
actions = ["s3:*"]
resources = ["*"]
}
}
statement {
sid = "SidToOverwrite"
actions = ["s3:*"]
resources = [
"arn:aws:s3:::somebucket",
"arn:aws:s3:::somebucket/*",
]
}
}
actions = ["s3:*"]
resources = ["*"]
}
}
statement {
actions = ["ec2:*"]
resources = ["*"]
}
statement {
sid = "SidToOverwrite"
actions = ["s3:*"]
resources = [
"arn:aws:s3:::somebucket",
"arn:aws:s3:::somebucket/*",
]
}
}
data.aws_iam_policy_document.source_json_example.json will evaluate to:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    },
    {
      "Sid": "SidToOverwrite",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::somebucket/*",
        "arn:aws:s3:::somebucket"
      ]
    }
  ]
}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    },
    {
      "Sid": "SidToOverwrite",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
You can also combine source_json and override_json in the same document.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OverridePlaceholder",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "*"
    }
  ]
}
aws_iam_policy
This data source can be used to fetch information about a specific IAM policy.
Example Usage
Argument Reference
arn - (Required) ARN of the IAM policy.
Attributes Reference
name - The name of the IAM policy.
Example Usage
Argument Reference
name - (Required) The friendly IAM role name to match.
Attributes Reference
id - The friendly IAM role name to match.
permissions_boundary - The ARN of the policy that is used to set the permissions boundary for the role.
Example Usage
listener {
instance_port = 8000
instance_protocol = "https"
lb_port = 443
lb_protocol = "https"
ssl_certificate_id = "${data.aws_iam_server_certificate.my-domain.arn}"
}
}
Argument Reference
name_prefix - prefix of cert to filter by
latest - sort results by expiration date. Returns the certificate with the expiration date furthest in the future.
Attributes Reference
arn is set to the ARN of the IAM Server Certificate
expiration_date is set to the expiration date of the IAM Server Certificate
upload_date is the date when the server certificate was uploaded
certificate_body is the public key certificate (PEM-encoded). This is useful when configuring back-end instance authentication (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html) policy for load balancer
certificate_chain is the public key certificate chain (PEM-encoded) if it exists, empty otherwise
Import
The terraform import function will read in certificate body, certificate chain (if it exists), id, name, path, and arn. It will not retrieve the private key, which is not available through the AWS API.
Data Source: aws_iam_user
This data source can be used to fetch information about a specific IAM user. By using this data source, you can reference IAM user properties without having to hard code ARNs or unique IDs as input.
Example Usage
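A minimal sketch, looking the user up by its friendly name (the value below is illustrative):
data "aws_iam_user" "example" {
  user_name = "an_example_user_name"
}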
Argument Reference
user_name - (Required) The friendly IAM user name to match.
Attributes Reference
arn - The Amazon Resource Name (ARN) assigned by AWS for this user.
permissions_boundary - The ARN of the policy that is used to set the permissions boundary for the user.
Data Source: aws_inspector_rules_packages
Example Usage
rules_package_arns = ["${data.aws_inspector_rules_packages.rules.arns}"]
}
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arns - A list of the AWS Inspector Rules Packages arns available in the AWS region.
Data Source: aws_instance
Use this data source to get the ID of an Amazon EC2 Instance for use in other resources.
Example Usage
filter {
name = "image-id"
values = ["ami-xxxxxxxx"]
}
filter {
name = "tag:Name"
values = ["instance-name-tag"]
}
}
Argument Reference
instance_id - (Optional) Specify the exact Instance ID with which to populate the data source.
instance_tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired Instance.
filter - (Optional) One or more name/value pairs to use as filters. There are several valid keys; for a full reference, check out describe-instances in the AWS CLI reference (http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html).
get_password_data - (Optional) If true, wait for password data to become available and retrieve it. Useful for getting
the administrator password for instances running Microsoft Windows. The password data is exported to the
password_data attribute. See GetPasswordData
(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_GetPasswordData.html) for more information.
get_user_data - (Optional) Retrieve Base64 encoded User Data contents into the user_data_base64 attribute. A
SHA-1 hash of the User Data contents will always be present in the user_data attribute. Defaults to false .
NOTE: If anything other than a single match is returned by the search, Terraform will fail. Ensure that your search is specific enough to return a single Instance ID only.
Attributes Reference
id is set to the ID of the found Instance. In addition, the following attributes are exported:
NOTE: Some values are not always set and may not be available for interpolation.
associate_public_ip_address - Whether the Instance is associated with a public IP address (Boolean).
iops - 0 If the EBS volume is not a provisioned IOPS image, otherwise the supported IOPS count.
kms_key_arn - Amazon Resource Name (ARN) of KMS Key, if EBS volume is encrypted.
no_device - Whether the specified device included in the device mapping was suppressed or not (Boolean).
iam_instance_profile - The name of the instance profile associated with the Instance.
ipv6_addresses - The IPv6 addresses associated to the Instance, if applicable. NOTE: Unlike the IPv4 address, this
doesn't change if you attach an EIP to the instance.
instance_state - The state of the instance. One of: pending , running , shutting-down , terminated ,
stopping , stopped . See Instance Lifecycle (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-
lifecycle.html) for more information.
monitoring - Whether detailed monitoring is enabled or disabled for the Instance (Boolean).
network_interface_id - The ID of the network interface that was created with the Instance.
password_data - Base-64 encoded encrypted password data for the instance. Useful for getting the administrator
password for instances running Microsoft Windows. This attribute is only exported if get_password_data is true. See
GetPasswordData (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_GetPasswordData.html) for more
information.
private_dns - The private DNS name assigned to the Instance. Can only be used inside the Amazon EC2, and only
available if you've enabled DNS hostnames for your VPC.
public_dns - The public DNS name assigned to the Instance. For EC2-VPC, this is only available if you've enabled DNS
hostnames for your VPC.
public_ip - The public IP address assigned to the Instance, if applicable. NOTE: If you are using an aws_eip
(/docs/providers/aws/r/eip.html) with your instance, you should refer to the EIP's address directly and not use
public_ip , as this field will change after the EIP is attached.
iops - 0 If the volume is not a provisioned IOPS image, otherwise the supported IOPS count.
kms_key_arn - Amazon Resource Name (ARN) of KMS Key, if EBS volume is encrypted.
user_data_base64 - Base64 encoded contents of User Data supplied to the Instance. Valid UTF-8 contents can be decoded with the base64decode function (/docs/configuration/functions/base64decode.html). This attribute is only exported if get_user_data is true.
host_id - The Id of the dedicated host the instance will be assigned to.
Note: It's strongly discouraged to use this data source for querying ephemeral instances (e.g. managed via autoscaling
group), as the output may change at any time and you'd need to re-run apply every time an instance comes up or dies.
Example Usage
filter {
name = "instance.group-id"
values = ["sg-12345678"]
}
Argument Reference
instance_tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on desired instances.
instance_state_names - (Optional) A list of instance states that should be applicable to the desired instances. The
permitted values are: pending, running, shutting-down, stopped, stopping, terminated . The default value is
running .
filter - (Optional) One or more name/value pairs to use as filters. There are several valid keys; for a full reference, check out describe-instances in the AWS CLI reference (http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html).
Attributes Reference
ids - IDs of instances found through the filter
Example Usage
variable "vpc_id" {}
Argument Reference
The arguments of this data source act as filters for querying the available Internet Gateway in the current region. The given filters must match exactly one Internet Gateway whose data will be exported as attributes.
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired Internet Gateway.
More complex filters can be expressed using one or more filter sub-blocks, which take the following arguments:
name - (Required) The name of the field to filter by, as defined by the underlying AWS API (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInternetGateways.html).
values - (Required) Set of values that are accepted for the given field. An Internet Gateway will be selected if any one of the given values matches.
Attributes Reference
All of the argument attributes except the filter block are also exported as result attributes. This data source will complete the data by populating any fields that are not included in the configuration with the data for the selected Internet Gateway.
attachments are also exported with the following attributes, when relevant. Each attachment supports the following:
owner_id - The ID of the AWS account that owns the internet gateway.
state - The current state of the attachment between the gateway and the VPC. Present only if a VPC is attached
Data Source: aws_iot_endpoint
Example Usage
spec {
container {
image = "gcr.io/my-project/image-name"
name = "image-name"
env = [
{
name = "IOT_ENDPOINT"
value = "${data.aws_iot_endpoint.example.endpoint_address}"
},
]
}
}
}
Argument Reference
endpoint_type - (Optional) Endpoint type. Valid values: iot:CredentialProvider , iot:Data , iot:Data-ATS ,
iot:Job .
Attributes Reference
endpoint_address - The endpoint based on endpoint_type :
No endpoint_type : Either iot:Data or iot:Data-ATS depending on region
(https://aws.amazon.com/blogs/iot/aws-iot-core-ats-endpoints/)
iot:CredentialsProvider : IDENTIFIER.credentials.iot.REGION.amazonaws.com
iot:Data : IDENTIFIER.iot.REGION.amazonaws.com
iot:Data-ATS : IDENTIFIER-ats.iot.REGION.amazonaws.com
iot:Job : IDENTIFIER.jobs.iot.REGION.amazonaws.com
Data Source: aws_ip_ranges
Use this data source to get the IP ranges of various AWS products and services. For more information about the contents of this data source and required JSON syntax if referencing a custom URL, see the AWS IP Address Ranges documentation (https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html).
Example Usage
ingress {
from_port = "443"
to_port = "443"
protocol = "tcp"
cidr_blocks = ["${data.aws_ip_ranges.european_ec2.cidr_blocks}"]
ipv6_cidr_blocks = ["${data.aws_ip_ranges.european_ec2.ipv6_cidr_blocks}"]
}
tags = {
CreateDate = "${data.aws_ip_ranges.european_ec2.create_date}"
SyncToken = "${data.aws_ip_ranges.european_ec2.sync_token}"
}
}
Argument Reference
regions - (Optional) Filter IP ranges by regions (or include all regions, if omitted). Valid items are global (for
cloudfront ) as well as all AWS regions (e.g. eu-central-1 )
services - (Required) Filter IP ranges by services. Valid items are amazon (for amazon.com), cloudfront ,
codebuild , ec2 , route53 , route53_healthchecks and S3 .
NOTE: If the specified combination of regions and services does not yield any CIDR blocks, Terraform will fail.
url - (Optional) Custom URL for source JSON file. Syntax must match the AWS IP Address Ranges documentation (https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html). Defaults to https://ip-ranges.amazonaws.com/ip-ranges.json .
Attributes Reference
cidr_blocks - The lexically ordered list of CIDR blocks.
sync_token - The publication time of the IP ranges, in Unix epoch time format (e.g. 1470267965 ).
Data Source: aws_kinesis_stream
Use this data source to get information about a Kinesis Stream for use in other resources.
Example Usage
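A minimal sketch, looking the stream up by name (the value below is illustrative):
data "aws_kinesis_stream" "stream" {
  name = "stream-name"
}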
Argument Reference
name - (Required) The name of the Kinesis Stream.
Attributes Reference
id is set to the Amazon Resource Name (ARN) of the Kinesis Stream. In addition, the following attributes are exported:
arn - The Amazon Resource Name (ARN) of the Kinesis Stream (same as id).
creation_timestamp - The approximate UNIX timestamp that the stream was created.
status - The current status of the stream. The stream status is one of CREATING, DELETING, ACTIVE, or UPDATING.
retention_period - Length of time (in hours) data records are accessible after they are added to the stream.
open_shards - The list of shard ids in the OPEN state. See Shard State
(https://docs.aws.amazon.com/streams/latest/dev/kinesis-using-sdk-java-after-resharding.html#kinesis-using-sdk-
java-resharding-data-routing) for more.
closed_shards - The list of shard ids in the CLOSED state. See Shard State
(https://docs.aws.amazon.com/streams/latest/dev/kinesis-using-sdk-java-after-resharding.html#kinesis-using-sdk-
java-resharding-data-routing) for more.
shard_level_metrics - A list of shard-level CloudWatch metrics which are enabled for the stream. See Monitoring
with CloudWatch (https://docs.aws.amazon.com/streams/latest/dev/monitoring-with-cloudwatch.html) for more.
Example Usage
Argument Reference
name - (Required) The display name of the alias. The name must start with the word "alias" followed by a forward
slash (alias/)
Attributes Reference
arn - The Amazon Resource Name (ARN) of the key alias.
Note: All arguments including the plaintext will be stored in the raw state as plain-text. Read more about sensitive data in state (/docs/state/sensitive-data.html).
Example Usage
plaintext = <<EOF
{
"client_id": "e587dbae22222f55da22",
"client_secret": "8289575d00000ace55e1815ec13673955721b8a5"
}
EOF
}
Argument Reference
The following arguments are supported:
plaintext - (Required) Data to be encrypted. Note that this may show up in logs, and it will be stored in the state file.
key_id - (Required) Globally unique key ID for the customer master key.
Attributes Reference
All of the argument attributes are also exported as result attributes.
Example Usage
Argument Reference
key_id - (Required) Key identifier, which can be in one of the following formats:
Key ID. E.g: 1234abcd-12ab-34cd-56ef-1234567890ab
Attributes Reference
id : The globally unique identifier for the key
aws_account_id : The twelve-digit account ID of the AWS account that owns the key
creation_date : The date and time when the key was created
deletion_date : The date and time after which AWS KMS deletes the key. This value is present only when key_state
is PendingDeletion , otherwise this value is 0
description : The description of the key.
enabled : Specifies whether the key is enabled. When key_state is Enabled this value is true, otherwise it is false
expiration_model : Specifies whether the Key's key material expires. This value is present only when origin is EXTERNAL , otherwise this value is empty
origin : When this value is AWS_KMS , AWS KMS created the key material. When this value is EXTERNAL , the key
material was imported from your existing key management infrastructure or the CMK lacks key material
valid_to : The time at which the imported key material expires. This value is present only when origin is
EXTERNAL and whose expiration_model is KEY_MATERIAL_EXPIRES , otherwise this value is 0
Data Source: aws_kms_secret
WARNING: This data source was removed in version 2.0.0 of the Terraform AWS Provider. You can migrate existing
con gurations to the aws_kms_secrets data source (/docs/providers/aws/d/kms_secrets.html) following instructions
available in the Version 2 Upgrade Guide (/docs/providers/aws/guides/version-2-upgrade.html#data-source-
aws_kms_secret).
Data Source: aws_kms_secrets
Decrypt multiple secrets from data encrypted with the AWS KMS service.
NOTE: Using this data provider will allow you to conceal secret data within your resource definitions but does not take care of protecting that data in all Terraform logging and state output. Please take care to secure your secret data beyond just the Terraform configuration.
Example Usage
If you do not already have a CiphertextBlob from encrypting a KMS secret, you can use the below commands to obtain one using the AWS CLI kms encrypt (https://docs.aws.amazon.com/cli/latest/reference/kms/encrypt.html) command. This requires you to have your AWS CLI set up correctly and to replace the --key-id with your own. Alternatively you can use --plaintext 'password' instead of reading from a file.
If you have a newline character at the end of your file, it will be decrypted with this newline character intact. For most use cases this is undesirable and leads to incorrect passwords or invalid values, as well as possible changes in the plan. Be sure to use echo -n if necessary.
That encrypted output can now be inserted into Terraform configurations without exposing the plaintext secret directly.
data "aws_kms_secrets" "example" {
secret {
name = "master_password"
payload = "AQECAHgaPa0J8WadplGCqqVAr4HNvDaFSQ+NaiwIBhmm6qDSFwAAAGIwYAYJKoZIhvcNAQcGoFMwUQIBADBMBgkqhk
iG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDI+LoLdvYv8l41OhAAIBEIAfx49FFJCLeYrkfMfAw6XlnxP23MmDBdqP8dPp28OoAQ=="
context = {
foo = "bar"
}
}
secret {
name = "master_username"
payload = "AQECAHgaPa0J8WadplGCqqVAr4HNvDaFSQ+NaiwIBhmm6qDSFwAAAGIwYAYJKoZIhvcNAQcGoFMwUQIBADBMBgkqhk
iG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDI+LoLdvYv8l41OhAAIBEIAfx49FFJCLeYrkfMfAw6XlnxP23MmDBdqP8dPp28OoAQ=="
}
}
master_password = "${data.aws_kms_secrets.example.plaintext["master_password"]}"
master_username = "${data.aws_kms_secrets.example.plaintext["master_username"]}"
}
Argument Reference
The following arguments are supported:
secret - (Required) One or more encrypted payload definitions from the KMS service. See the Secret Definitions below.
Secret Definitions
Each secret supports the following arguments:
name - (Required) The name to export this secret under in the attributes.
payload - (Required) Base64 encoded payload, as returned from a KMS encrypt operation.
context - (Optional) An optional mapping that makes up the Encryption Context for the secret.
For more information on context and grant_tokens see the KMS Concepts
(https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html)
Attributes Reference
In addition to all arguments above, the following attributes are exported:
plaintext - Map containing each secret name as the key with its decrypted plaintext value
aws_lambda_function
Provides information about a Lambda Function.
Example Usage
variable "function_name" {
type = "string"
}
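A lookup that builds on the variable above might look like this sketch (function_name is assumed to be the lookup key):
data "aws_lambda_function" "existing" {
  function_name = "${var.function_name}"
}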
Argument Reference
The following arguments are supported:
qualifier - (Optional) Alias name or version number of the lambda function. e.g. $LATEST , my-alias , or 1
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arn - Unqualified (no :QUALIFIER or :VERSION suffix) Amazon Resource Name (ARN) identifying your Lambda Function. See also qualified_arn .
invoke_arn - The ARN to be used for invoking Lambda Function from API Gateway.
qualified_arn - Qualified ( :QUALIFIER or :VERSION suffix) Amazon Resource Name (ARN) identifying your Lambda Function. See also arn .
reserved_concurrent_executions - The amount of reserved concurrent executions for this lambda function or -1
if unreserved.
timeout - The function execution time at which Lambda should terminate the function.
Example Usage
input = <<JSON
{
"key1": "value1",
"key2": "value2"
}
JSON
}
output "result" {
description = "String result of Lambda execution"
value = "${data.aws_lambda_invocation.example.result}"
}
output "result_entry_tf011" {
value = "${data.aws_lambda_invocation.example.result_map["key1"]}"
}
output "result_entry_tf012" {
value = jsondecode(data.aws_lambda_invocation.example.result)["key1"]
}
Argument Reference
function_name - (Required) The name of the lambda function.
input - (Required) A string in JSON format that is passed as payload to the lambda function.
qualifier - (Optional) The qualifier (a.k.a. version) of the lambda function. Defaults to $LATEST .
Attributes Reference
result - String result of the lambda function invocation.
result_map - This field is set only if result is a map of primitive types, where the map is string keys and string values. In Terraform 0.12 and later, use the jsondecode() function (/docs/configuration/functions/jsondecode.html) with the result attribute instead to convert the result to all supported native Terraform types.
aws_lambda_layer_version
Provides information about a Lambda Layer Version.
Example Usage
variable "layer_name" {
type = "string"
}
Argument Reference
The following arguments are supported:
version - (Optional) Specific layer version. Conflicts with compatible_runtime . If omitted, the latest available layer version will be used.
compatible_runtime (Optional) Specific runtime the layer version must support. Conflicts with version . If specified, the latest available layer version supporting the provided runtime will be used.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
license_info - License info associated with the specific Lambda Layer version.
arn - The Amazon Resource Name (ARN) of the Lambda Layer with version.
layer_arn - The Amazon Resource Name (ARN) of the Lambda Layer without version.
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
key_name - The Key Name that should be used for the instance.
vpc_classic_link_security_groups - The IDs of one or more Security Groups for the specified ClassicLink-enabled VPC.
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
block_device_mappings - Specify volumes to attach to the instance besides the volumes specified by the AMI.
credit_specification - Customize the credit specification of the instance. See Credit Specification below for more details.
elastic_gpu_specifications - The elastic GPU to attach to the instance. See Elastic GPU below for more details.
iam_instance_profile - The IAM Instance Profile to launch the instance with. See Instance Profile below for more details.
instance_market_options - The market (purchasing) option for the instance. See below for details.
network_interfaces - Customize network interfaces to be attached at instance boot time. See Network Interfaces
below for more details.
security_group_names - A list of security group names to associate with. If you are creating Instances in a VPC, use
vpc_security_group_ids instead.
user_data - The Base64-encoded user data to provide when launching the instance.
Data Source: aws_lb
This data source can prove useful when a module accepts an LB as an input variable and needs to, for example, determine
the security groups associated with it, etc.
Example Usage
variable "lb_arn" {
type = "string"
default = ""
}
variable "lb_name" {
type = "string"
default = ""
}
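A lookup that builds on the variables above might look like this sketch:
data "aws_lb" "test" {
  arn  = "${var.lb_arn}"
  name = "${var.lb_name}"
}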
Argument Reference
The following arguments are supported:
NOTE: When both arn and name are specified, arn takes precedence.
Attributes Reference
See the LB Resource (/docs/providers/aws/r/lb.html) for details on the returned attributes - they are identical.
Data Source: aws_lb_listener
This data source can prove useful when a module accepts an LB Listener as an input variable and needs to know the LB it is
attached to, or other information specific to the listener in question.
Example Usage
variable "listener_arn" {
type = "string"
}
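A lookup by ARN that builds on the variable above might look like this sketch:
data "aws_lb_listener" "listener" {
  arn = "${var.listener_arn}"
}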
Argument Reference
The following arguments are supported:
arn - (Optional) The arn of the listener. Required if load_balancer_arn and port are not set.
load_balancer_arn - (Optional) The arn of the load balancer. Required if arn is not set.
port - (Optional) The port of the listener. Required if arn is not set.
Attributes Reference
See the LB Listener Resource (/docs/providers/aws/r/lb_listener.html) for details on the returned attributes - they are
identical.
Data Source: aws_lb_target_group
This data source can prove useful when a module accepts an LB Target Group as an input variable and needs to know its
attributes. It can also be used to get the ARN of an LB Target Group for use in other resources, given LB Target Group name.
Example Usage
variable "lb_tg_arn" {
type = "string"
default = ""
}
variable "lb_tg_name" {
type = "string"
default = ""
}
Argument Reference
The following arguments are supported:
NOTE: When both arn and name are specified, arn takes precedence.
Attributes Reference
See the LB Target Group Resource (/docs/providers/aws/r/lb_target_group.html) for details on the returned attributes - they
are identical.
Data Source: aws_mq_broker
Provides information about a MQ Broker.
Example Usage
variable "broker_id" {
type = "string"
default = ""
}
variable "broker_name" {
type = "string"
default = ""
}
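Lookups that build on the variables above might look like this sketch:
data "aws_mq_broker" "by_id" {
  broker_id = "${var.broker_id}"
}

data "aws_mq_broker" "by_name" {
  broker_name = "${var.broker_name}"
}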
Argument Reference
The following arguments are supported:
Attributes Reference
See the aws_mq_broker resource (/docs/providers/aws/r/mq_broker.html) for details on the returned attributes. They are identical except for the user password, which is not returned when describing a broker.
Data Source: aws_msk_cluster
Get information on an Amazon MSK Cluster.
Example Usage
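A minimal sketch, assuming the cluster is looked up by its name (the value below is illustrative):
data "aws_msk_cluster" "example" {
  cluster_name = "example"
}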
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to all arguments above, the following attributes are exported:
bootstrap_brokers - A comma separated list of one or more hostname:port pairs of Kafka brokers suitable to bootstrap connectivity to the Kafka cluster.
bootstrap_brokers_tls - A comma separated list of one or more DNS names (or IPs) and TLS port pairs of Kafka brokers suitable to bootstrap connectivity to the Kafka cluster.
zookeeper_connect_string - A comma separated list of one or more IP:port pairs to use to connect to the Apache
Zookeeper cluster.
Data Source: aws_msk_configuration
Get information on an Amazon MSK Configuration.
Example Usage
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to all arguments above, the following attributes are exported:
kafka_versions - List of Apache Kafka versions which can use this configuration.
Example Usage
variable "subnet_id" {}
tags = {
Name = "gw NAT"
}
}
Argument Reference
The arguments of this data source act as filters for querying the available Nat Gateways in the current region. The given filters must match exactly one Nat Gateway whose data will be exported as attributes.
subnet_id - (Optional) The id of subnet that the Nat Gateway resides in.
vpc_id - (Optional) The id of the VPC that the Nat Gateway resides in.
state - (Optional) The state of the NAT gateway (pending | failed | available | deleting | deleted ).
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired Nat Gateway.
More complex filters can be expressed using one or more filter sub-blocks, which take the following arguments:
name - (Required) The name of the field to filter by, as defined by the underlying AWS API (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeNatGateways.html).
values - (Required) Set of values that are accepted for the given field. A Nat Gateway will be selected if any one of the given values matches.
Attributes Reference
All of the argument attributes except the filter block are also exported as result attributes. This data source will complete the data by populating any fields that are not included in the configuration with the data for the selected Nat Gateway.
addresses are also exported with the following attributes, when relevant. Each address supports the following:
Data Source: aws_network_acls
Example Usage
The following shows outputting all network ACL ids in a VPC.
output "example" {
value = "${data.aws_network_acls.example.ids}"
}
The following example retrieves a list of all network ACL ids in a VPC with a custom tag of Tier set to a value of "Private".
tags = {
Tier = "Private"
}
}
The following example retrieves a network ACL id in a VPC which is associated with a specific subnet.
filter {
name = "association.subnet-id"
values = ["${aws_subnet.test.id}"]
}
}
Argument Reference
vpc_id - (Optional) The VPC ID that you want to filter from.
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired network ACLs.
More complex filters can be expressed using one or more filter sub-blocks, which take the following arguments:
name - (Required) The name of the field to filter by, as defined by the underlying AWS API (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeNetworkAcls.html).
values - (Required) Set of values that are accepted for the given field. A VPC will be selected if any one of the given values matches.
Attributes Reference
ids - A list of all the network ACL ids found. This data source will fail if none are found.
aws_network_interface
Use this data source to get information about a Network Interface.
Example Usage
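A minimal sketch, looking the interface up by its id (the value below is illustrative):
data "aws_network_interface" "bar" {
  id = "eni-01234567"
}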
Argument Reference
The following arguments are supported:
filter – (Optional) One or more name/value pairs to filter off of. There are several valid keys; for a full reference, check out describe-network-interfaces (https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-network-interfaces.html) in the AWS CLI reference.
Attributes Reference
See the Network Interface (/docs/providers/aws/r/network_interface.html) for details on the returned attributes.
association - The association information for an Elastic IP address (IPv4) associated with the network interface. See supported fields below.
private_ip - The private IPv4 address of the network interface within the subnet.
private_ips - The private IPv4 addresses associated with the network interface.
requester_id - The ID of the entity that launched the instance on your behalf.
security_groups - The list of security groups for the network interface.
association
public_ip - The address of the Elastic IP address bound to the network interface.
Import
Elastic Network Interfaces can be imported using the id , e.g.
Data Source: aws_network_interfaces
Example Usage
The following shows outputting all network interface ids in a region.
output "example" {
value = "${data.aws_network_interfaces.example.ids}"
}
The following example retrieves a list of all network interface ids with a custom tag of Name set to a value of test .
output "example1" {
value = "${data.aws_network_interfaces.example.ids}"
}
The following example retrieves network interface ids which are associated with a specific subnet.
output "example" {
value = "${data.aws_network_interfaces.example.ids}"
}
Argument Reference
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired network interfaces.
More complex filters can be expressed using one or more filter sub-blocks, which take the following arguments:
name - (Required) The name of the field to filter by, as defined by the underlying AWS API (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeNetworkInterfaces.html).
values - (Required) Set of values that are accepted for the given field.
Attributes Reference
ids - A list of all the network interface ids found. This data source will fail if none are found.
Data Source: aws_organizations_organization
Get information about the organization that the user's account belongs to
Example Usage
output "account_ids" {
  value = data.aws_organizations_organization.example.accounts[*].id
}
policy = "${data.aws_iam_policy_document.sns_topic_policy.json}"
}
actions = [
"SNS:Subscribe",
"SNS:Publish",
]
condition {
test = "StringEquals"
variable = "aws:PrincipalOrgID"
values = [
"${data.aws_organizations_organization.example.id}",
]
}
principals {
type = "AWS"
identifiers = ["*"]
}
resources = [
"${aws_sns_topic.sns_topic.arn}",
]
}
}
Argument Reference
There are no arguments available for this data source.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
master_account_arn - The Amazon Resource Name (ARN) of the account that is designated as the master account
for the organization.
master_account_email - The email address that is associated with the AWS account that is designated as the master
account for the organization.
accounts - List of organization accounts including the master account. For a list excluding the master account, see
the non_master_accounts attribute. All elements have these attributes:
arn - ARN of the account
aws_service_access_principals - A list of AWS service principal names that have integration enabled with your
organization. Organization must have feature_set set to ALL . For additional information, see the AWS
Organizations User Guide
(https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services.html).
enabled_policy_types - A list of Organizations policy types that are enabled in the Organization Root. Organization
must have feature_set set to ALL . For additional information about valid policy types (e.g.
SERVICE_CONTROL_POLICY ), see the AWS Organizations API Reference
(https://docs.aws.amazon.com/organizations/latest/APIReference/API_EnablePolicyType.html).
non_master_accounts - List of organization accounts excluding the master account. For a list including the master
account, see the accounts attribute. All elements have these attributes:
arn - ARN of the account
policy_types - List of policy types enabled for this root. All elements have these attributes:
status - The status of the policy type as it relates to the associated root
Data Source: aws_partition
Use this data source to look up information about the current AWS partition in which Terraform is working.
Example Usage
actions = [
"s3:ListBucket",
]
resources = [
"arn:${data.aws_partition.current.partition}:s3:::my-bucket",
]
}
}
Argument Reference
There are no arguments available for this data source.
Attributes Reference
partition is set to the identifier of the current partition.
dns_suffix is set to the base DNS domain name for the current partition (e.g. amazonaws.com in AWS Commercial,
amazonaws.com.cn in AWS China).
Data Source: aws_prefix_list
aws_prefix_list provides details about a specific prefix list (PL) in the current region.
This can be used both to validate a prefix list given in a variable and to obtain the CIDR blocks (IP address ranges) for the associated AWS service. The latter may be useful e.g. for adding network ACL rules.
Example Usage
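A minimal sketch, assuming the prefix list is matched by name (the service name below is illustrative):
data "aws_prefix_list" "s3" {
  name = "com.amazonaws.us-west-2.s3"
}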
Argument Reference
The arguments of this data source act as filters for querying the available prefix lists. The given filters must match exactly one prefix list whose data will be exported as attributes.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Data Source: aws_pricing_product
Example Usage
filters {
field = "instanceType"
value = "c5.xlarge"
}
filters {
field = "operatingSystem"
value = "Linux"
}
filters {
field = "location"
value = "US East (N. Virginia)"
}
filters {
field = "preInstalledSw"
value = "NA"
}
filters {
field = "licenseModel"
value = "No License required"
}
filters {
field = "tenancy"
value = "Shared"
}
}
data "aws_pricing_product" "example" {
service_code = "AmazonRedshift"
filters {
field = "instanceType"
value = "ds1.xlarge"
}
filters {
field = "location"
value = "US East (N. Virginia)"
}
}
Argument Reference
service_code - (Required) The code of the service. Available service codes can be fetched using the DescribeServices pricing API call.
filters - (Required) A list of filters. Passed directly to the API (see GetProducts API reference). These filters must describe a single product; this resource will fail if more than one product is returned by the API.
filters
field (Required) The product attribute name that you want to filter on.
value (Required) The product attribute value that you want to filter on.
Attributes Reference
result - Set to the product returned from the API.
Data Source: aws_ram_resource_share
aws_ram_resource_share retrieves information about a RAM Resource Share.
Example Usage
Search by filters
filter {
name = "NameOfTag"
values = ["exampleNameTagValue"]
}
}
Argument Reference
The following arguments are supported:
resource_owner (Required) The owner of the resource share. Valid values are SELF or OTHER-ACCOUNTS
filter - (Optional) A filter used to scope the list, e.g. by tags. See related docs (https://docs.aws.amazon.com/ram/latest/APIReference/API_TagFilter.html).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
See the RDS Cluster Resource (/docs/providers/aws/r/rds_cluster.html) for details on the returned attributes - they are
identical.
Data Source: aws_redshift_cluster
Provides details about a specific Redshift cluster.
Example Usage
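The Firehose configuration fragment below references data.aws_redshift_cluster.test_cluster; a minimal sketch of that lookup (the cluster identifier is illustrative) is:
data "aws_redshift_cluster" "test_cluster" {
  cluster_identifier = "test-cluster"
}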
s3_configuration {
role_arn = "${aws_iam_role.firehose_role.arn}"
bucket_arn = "${aws_s3_bucket.bucket.arn}"
buffer_size = 10
buffer_interval = 400
compression_format = "GZIP"
}
redshift_configuration {
role_arn = "${aws_iam_role.firehose_role.arn}"
cluster_jdbcurl = "jdbc:redshift://${data.aws_redshift_cluster.test_cluster.endpoint}/${data.aws_r
edshift_cluster.test_cluster.database_name}"
username = "testuser"
password = "T3stPass"
data_table_name = "test-table"
copy_options = "delimiter '|'"
data_table_columns = "test-col"
}
}
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to all arguments above, the following attributes are exported:
allow_version_upgrade - Whether major version upgrades can be applied during maintenance period
cluster_parameter_group_name - The name of the parameter group to be associated with this cluster
cluster_subnet_group_name - The name of a cluster subnet group to be associated with this cluster
s3_key_prefix - The folder inside the S3 bucket where the log files are stored
vpc_security_group_ids - The VPC security group Ids associated with the cluster
Data Source: aws_redshift_service_account
Use this data source to get the Account ID of the AWS Redshift Service Account
(http://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-enable-logging) in a given region for the
purpose of allowing Redshift to store audit data in S3.
Example Usage
policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "Put bucket policy needed for audit logging",
"Effect": "Allow",
"Principal": {
"AWS": "${data.aws_redshift_service_account.main.arn}"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::tf-redshift-logging-test-bucket/*"
},
{
"Sid": "Get bucket policy needed for audit logging ",
"Effect": "Allow",
"Principal": {
"AWS": "${data.aws_redshift_service_account.main.arn}"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::tf-redshift-logging-test-bucket"
}
]
}
EOF
}
Argument Reference
region - (Optional) Name of the region whose AWS Redshift account ID is desired. Defaults to the region from the AWS provider configuration.
Attributes Reference
id - The ID of the AWS Redshift service account in the selected region.
arn - The ARN of the AWS Redshift service account in the selected region.
Data Source: aws_region
aws_region provides details about a specific AWS region.
As well as validating a given region name this resource can be used to discover the name of the region configured within the provider. The latter can be useful in a child module which is inheriting an AWS provider configuration from its parent module.
Example Usage
The following example shows how the resource might be used to obtain the name of the AWS region configured on the provider.
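A minimal sketch:
data "aws_region" "current" {}

output "region_name" {
  value = "${data.aws_region.current.name}"
}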
Argument Reference
The arguments of this data source act as filters for querying the available regions. The given filters must match exactly one region whose data will be exported as attributes.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
This data source allows one to find a list of name servers associated with a specific delegation set.
Example Usage
The following example shows how to get a delegation set from its id.
Argument Reference
id - (Required) The Hosted Zone id of the desired delegation set.
name_servers - The list of DNS name servers for the delegation set.
Data Source: aws_route53_resolver_rule
aws_route53_resolver_rule provides details about a specific Route53 Resolver rule.
Example Usage
The following example shows how to get a Route53 Resolver rule based on its associated domain name and rule type.
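A sketch of such a lookup (the domain name below is illustrative):
data "aws_route53_resolver_rule" "example" {
  domain_name = "subdomain.example.com"
  rule_type   = "FORWARD"
}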
Argument Reference
The arguments of this data source act as filters for querying the available resolver rules in the current region. The given filters must match exactly one resolver rule whose data will be exported as attributes.
domain_name - (Optional) The domain name the desired resolver rule forwards DNS queries for. Conflicts with resolver_rule_id .
name - (Optional) The friendly name of the desired resolver rule. Conflicts with resolver_rule_id .
resolver_endpoint_id (Optional) The ID of the outbound resolver endpoint of the desired resolver rule. Conflicts with resolver_rule_id .
resolver_rule_id (Optional) The ID of the desired resolver rule. Conflicts with domain_name , name , resolver_endpoint_id and rule_type .
rule_type - (Optional) The rule type of the desired resolver rule. Valid values are FORWARD , SYSTEM and RECURSIVE . Conflicts with resolver_rule_id .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arn - The ARN (Amazon Resource Name) for the resolver rule.
owner_id - When a rule is shared with another AWS account, the account ID of the account that the rule is shared
with.
share_status - Whether the rule is shared and, if so, whether the current account is sharing the rule with another
account, or another account is sharing the rule with the current account. Values are NOT_SHARED , SHARED_BY_ME or
SHARED_WITH_ME
Example Usage
The following example shows how to get Route53 Resolver rules based on tags.
Argument Reference
The arguments of this data source act as filters for querying the available resolver rules in the current region.
owner_id (Optional) When the desired resolver rules are shared with another AWS account, the account ID of the
account that the rules are shared with.
resolver_endpoint_id (Optional) The ID of the outbound resolver endpoint for the desired resolver rules.
rule_type (Optional) The rule type of the desired resolver rules. Valid values are FORWARD , SYSTEM and
RECURSIVE .
share_status (Optional) Whether the desired resolver rules are shared and, if so, whether the current account is
sharing the rules with another account, or another account is sharing the rules with the current account. Values are
NOT_SHARED , SHARED_BY_ME or SHARED_WITH_ME
Attributes Reference
The following attributes are exported:
This data source allows one to find a Hosted Zone ID given a Hosted Zone name and certain search criteria.
Example Usage
The following example shows how to get a Hosted Zone from its name and from this data how to create a Record Set.
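A sketch of that pattern (zone and record names are illustrative):
data "aws_route53_zone" "selected" {
  name         = "test.com."
  private_zone = true
}

resource "aws_route53_record" "www" {
  zone_id = "${data.aws_route53_zone.selected.zone_id}"
  name    = "www.${data.aws_route53_zone.selected.name}"
  type    = "A"
  ttl     = "300"
  records = ["10.0.0.1"]
}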
Argument Reference
The arguments of this data source act as filters for querying the available Hosted Zone. You have to use zone_id or name , not both of them. The given filter must match exactly one Hosted Zone. If you use the name field for a private Hosted Zone, you need to set the private_zone field to true .
name - (Optional) The Hosted Zone name of the desired Hosted Zone.
private_zone - (Optional) Used with name eld to get a private Hosted Zone.
vpc_id - (Optional) Used with name eld to get a private Hosted Zone associated with the vpc_id (in this case,
private_zone is not mandatory).
tags - (Optional) Used with name eld. A mapping of tags, each pair of which must exactly match a pair on the
desired Hosted Zone.
Attributes Reference
All of the argument attributes are also exported as result attributes. This data source will complete the data by populating
any fields that are not included in the configuration with the data for the selected Hosted Zone.
name_servers - The list of DNS name servers for the Hosted Zone.
linked_service_principal - The service that created the Hosted Zone (e.g. servicediscovery.amazonaws.com ).
linked_service_description - The description provided by the service that created the Hosted Zone (e.g.
arn:aws:servicediscovery:us-east-1:1234567890:namespace/ns-xxxxxxxxxxxxxxxx ).
Data Source: aws_route
aws_route provides details about a specific Route.
This resource can prove useful when finding the resource associated with a CIDR. For example, finding the peering
connection associated with a CIDR value.
Example Usage
The following example shows how one might use a CIDR value to find a network interface id and use this to create a data
source of that network interface.
variable "subnet_id" {}
Argument Reference
The arguments of this data source act as filters for querying the available Route in the current region. The given filters must
match exactly one Route whose data will be exported as attributes.
route_table_id - (Required) The id of the specific Route Table containing the Route entry.
destination_cidr_block - (Optional) The CIDR block of the Route belonging to the Route Table.
destination_ipv6_cidr_block - (Optional) The IPv6 CIDR block of the Route belonging to the Route Table.
egress_only_gateway_id - (Optional) The Egress Only Gateway ID of the Route belonging to the Route Table.
gateway_id - (Optional) The Gateway ID of the Route belonging to the Route Table.
instance_id - (Optional) The Instance ID of the Route belonging to the Route Table.
nat_gateway_id - (Optional) The NAT Gateway ID of the Route belonging to the Route Table.
transit_gateway_id - (Optional) The EC2 Transit Gateway ID of the Route belonging to the Route Table.
vpc_peering_connection_id - (Optional) The VPC Peering Connection ID of the Route belonging to the Route Table.
network_interface_id - (Optional) The Network Interface ID of the Route belonging to the Route Table.
Attributes Reference
All of the argument attributes are also exported as result attributes when there is data available. For example, the
vpc_peering_connection_id field will be empty when the route is attached to a Network Interface.
Data Source: aws_route_table
aws_route_table provides details about a specific Route Table.
This resource can prove useful when a module accepts a Subnet id as an input variable and needs to, for example, add a
route in the Route Table.
Example Usage
The following example shows how one might accept a Route Table id as a variable and use this data source to obtain the
data necessary to create a route.
variable "subnet_id" {}
Argument Reference
The arguments of this data source act as filters for querying the available Route Table in the current region. The given filters
must match exactly one Route Table whose data will be exported as attributes.
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired Route Table.
vpc_id - (Optional) The id of the VPC that the desired Route Table belongs to.
subnet_id - (Optional) The id of a Subnet which is connected to the Route Table (not exported if not given as a
parameter).
More complex lters can be expressed using one or more filter sub-blocks, which take the following arguments:
name - (Required) The name of the field to filter by, as defined by the underlying AWS API
(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRouteTables.html).
values - (Required) Set of values that are accepted for the given field. A Route Table will be selected if any one of the
given values matches.
Attributes Reference
All of the argument attributes except filter and subnet_id blocks are also exported as result attributes. This data
source will complete the data by populating any fields that are not included in the configuration with the data for the
selected Route Table. In addition the following attributes are exported:
owner_id - The ID of the AWS account that owns the route table
routes are also exported with the following attributes, when relevant. Each route supports the following:
Data Source: aws_route_tables
Example Usage
The following adds a route for a particular cidr block to every (private kops) route table in a specified vpc to use a particular
vpc peering connection.
data "aws_route_tables" "rts" {
  # data source name and the vpc_id variable are illustrative
  vpc_id = "${var.vpc_id}"

  filter {
    name   = "tag:kubernetes.io/kops/role"
    values = ["private*"]
  }
}
Argument Reference
filter - (Optional) Custom filter block as described below.
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired route tables.
More complex filters can be expressed using one or more filter sub-blocks, which take the following arguments:
name - (Required) The name of the field to filter by, as defined by the underlying AWS API
(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRouteTables.html).
values - (Required) Set of values that are accepted for the given field. A Route Table will be selected if any one of the
given values matches.
Attributes Reference
ids - A list of all the route table ids found. This data source will fail if none are found.
Data Source: aws_s3_bucket
Provides details about a specific S3 bucket.
This resource may prove useful when setting up a Route53 record, or an origin for a CloudFront Distribution.
Example Usage
Route53 Record
alias {
name = "${data.aws_s3_bucket.selected.website_domain}"
zone_id = "${data.aws_s3_bucket.selected.hosted_zone_id}"
}
}
CloudFront Origin
Argument Reference
The following arguments are supported:
bucket - (Required) The name of the bucket
Attribute Reference
In addition to all arguments above, the following attributes are exported:
bucket_regional_domain_name - The bucket region-specific domain name. The bucket domain name including the
region name; please refer here (https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) for the format.
Note: AWS CloudFront allows specifying an S3 region-specific endpoint when creating an S3 origin; this prevents redirect
issues (https://forums.aws.amazon.com/thread.jspa?threadID=216814) from CloudFront to the S3 Origin URL.
website_endpoint - The website endpoint, if the bucket is configured with a website. If not, this will be an empty
string.
website_domain - The domain of the website endpoint, if the bucket is configured with a website. If not, this will be
an empty string. This is used to create Route 53 alias records.
Data Source: aws_s3_bucket_object
The S3 object data source allows access to the metadata and optionally (see below) the content of an object stored inside an
S3 bucket.
Note: The content of an object ( body field) is available only for objects which have a human-readable Content-Type
( text/* and application/json ). This is to prevent printing unsafe characters and potentially downloading large
amounts of data which would be thrown away in favour of metadata.
Example Usage
The following example retrieves a text object (which must have a Content-Type value starting with text/ ) and uses it as
the user_data for an EC2 instance:
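A minimal sketch of that first example (bucket, key, and AMI ID are placeholders):
data "aws_s3_bucket_object" "bootstrap_script" {
  # bucket and key are placeholders
  bucket = "ourcorp-deploy-config"
  key    = "ec2-bootstrap-script.sh"
}

resource "aws_instance" "example" {
  instance_type = "t2.micro"
  ami           = "ami-2757f631" # placeholder AMI ID
  user_data     = "${data.aws_s3_bucket_object.bootstrap_script.body}"
}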
The following, more-complex example retrieves only the metadata for a zip file stored in S3, which is then used to pass the
most recent version_id to AWS Lambda for use as a function implementation. More information about Lambda functions
is available in the documentation for aws_lambda_function (/docs/providers/aws/r/lambda_function.html).
Argument Reference
The following arguments are supported:
bucket - (Required) The name of the bucket to read the object from
key - (Required) The full path to the object inside the bucket
version_id - (Optional) Specific version ID of the object returned (defaults to latest version)
Attributes Reference
In addition to all arguments above, the following attributes are exported:
body - Object data (see limitations above to understand cases in which this field is actually available)
content_encoding - Specifies what content encodings have been applied to the object and thus what decoding
mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
content_type - A standard MIME type describing the format of the object data.
etag - ETag (https://en.wikipedia.org/wiki/HTTP_ETag) generated for the object (an MD5 sum of the object content in
case it's not encrypted)
expiration - If the object expiration is configured (see object lifecycle management
(http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html)), the eld includes this header. It
includes the expiry-date and rule-id key value pairs providing object expiration information. The value of the rule-id is
URL encoded.
expires - The date and time at which the object is no longer cacheable.
last_modified - Last modified date of the object in RFC1123 format (e.g. Mon, 02 Jan 2006 15:04:05 MST )
object_lock_retain_until_date - The date and time when this object's object lock will expire.
server_side_encryption - If the object is stored using server-side encryption (KMS or Amazon S3-managed
encryption key), this field includes the chosen encryption and algorithm used.
sse_kms_key_id - If present, specifies the ID of the Key Management Service (KMS) master encryption key that was
used for the object.
website_redirect_location - If the bucket is configured as a website, redirects requests for this object to another
object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.
Note: Terraform ignores all leading / s in the object's key and treats multiple / s in the rest of the object's key as a
single / , so values of /index.html and index.html correspond to the same S3 object as do
first//second///third// and first/second/third/ .
Data Source: aws_s3_bucket_objects
NOTE on max_keys : Retrieving very large numbers of keys can adversely affect Terraform's performance.
The bucket-objects data source returns keys (i.e., file names) and other metadata about objects in an S3 bucket.
Example Usage
The following example retrieves a list of all object keys in an S3 bucket and creates corresponding Terraform object data
sources:
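A minimal sketch of that pattern (the bucket name is a placeholder):
data "aws_s3_bucket_objects" "my_objects" {
  bucket = "ourcorp" # placeholder bucket name
}

data "aws_s3_bucket_object" "object_info" {
  count  = "${length(data.aws_s3_bucket_objects.my_objects.keys)}"
  key    = "${element(data.aws_s3_bucket_objects.my_objects.keys, count.index)}"
  bucket = "${data.aws_s3_bucket_objects.my_objects.bucket}"
}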
Argument Reference
The following arguments are supported:
prefix - (Optional) Limits results to object keys with this prefix (Default: none)
encoding_type - (Optional) Encodes keys using this method (Default: none; besides none, only "url" can be used)
start_after - (Optional) Returns key names lexicographically after a specific object key in your bucket (Default:
none; S3 lists object keys in UTF-8 character encoding in lexicographical order)
fetch_owner - (Optional) Boolean specifying whether to populate the owner list (Default: false)
Attributes Reference
In addition to all arguments above, the following attributes are exported:
common_prefixes - List of any keys between prefix and the next occurrence of delimiter (i.e., similar to
subdirectories of the prefix "directory"); the list is only returned when you specify delimiter
owners - List of strings representing object owner IDs (see fetch_owner above)
Data Source: aws_secretsmanager_secret
Retrieve metadata information about a Secrets Manager secret. To retrieve a secret value, see the
aws_secretsmanager_secret_version data source (/docs/providers/aws/d/secretsmanager_secret_version.html).
Example Usage
ARN
Name
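Minimal sketches of both lookups (the ARN and name values are placeholders; the name argument is assumed to be supported alongside arn):
data "aws_secretsmanager_secret" "by-arn" {
  arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456" # placeholder ARN
}

data "aws_secretsmanager_secret" "by-name" {
  name = "example" # placeholder secret name
}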
Argument Reference
arn - (Optional) The Amazon Resource Name (ARN) of the secret to retrieve.
Attributes Reference
arn - The Amazon Resource Name (ARN) of the secret.
kms_key_id - The Key Management Service (KMS) Customer Master Key (CMK) associated with the secret.
rotation_lambda_arn - Rotation Lambda function Amazon Resource Name (ARN) if rotation is enabled.
Data Source: aws_secretsmanager_secret_version
Retrieve information about a Secrets Manager secret version, including its secret value.
Example Usage
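A minimal sketch of retrieving the current version of a secret (the secret name is a placeholder); the output below then decodes a JSON key from the returned secret string:
data "aws_secretsmanager_secret_version" "example" {
  secret_id = "example" # placeholder secret name
}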
output "example" {
value = jsondecode(data.aws_secretsmanager_secret_version
aws_secretsmanager_secret_version.example
example.secret_string
secret_string)["key1"]
}
Argument Reference
secret_id - (Required) Specifies the secret containing the version that you want to retrieve. You can specify either
the Amazon Resource Name (ARN) or the friendly name of the secret.
version_id - (Optional) Specifies the unique identifier of the version of the secret that you want to retrieve.
Overrides version_stage .
version_stage - (Optional) Specifies the secret version that you want to retrieve by the staging label attached to the
version. Defaults to AWSCURRENT .
Attributes Reference
arn - The ARN of the secret.
secret_string - The decrypted part of the protected secret information that was originally provided as a string.
secret_binary - The decrypted part of the protected secret information that was originally provided as a binary.
Base64 encoded.
Data Source: aws_security_group
aws_security_group provides details about a specific Security Group.
This resource can prove useful when a module accepts a Security Group id as an input variable and needs to, for example,
determine the id of the VPC that the security group belongs to.
Example Usage
The following example shows how one might accept a Security Group id as a variable and use this data source to obtain the
data necessary to create a subnet.
variable "security_group_id" {}
Argument Reference
The arguments of this data source act as filters for querying the available security group in the current region. The given
filters must match exactly one security group whose data will be exported as attributes.
name - (Optional) The name that the desired security group must have.
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired security group.
vpc_id - (Optional) The id of the VPC that the desired security group belongs to.
More complex filters can be expressed using one or more filter sub-blocks, which take the following arguments:
name - (Required) The name of the field to filter by, as defined by the underlying AWS API
(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSecurityGroups.html).
values - (Required) Set of values that are accepted for the given field. A Security Group will be selected if any one of
the given values matches.
Attributes Reference
All of the argument attributes except filter blocks are also exported as result attributes. This data source will complete
the data by populating any fields that are not included in the configuration with the data for the selected Security Group.
Data Source: aws_security_groups
Example Usage
data "aws_security_groups" "test" {
  # data source name is illustrative
  filter {
    name   = "vpc-id"
    values = ["${var.vpc_id}"]
  }
}
Argument Reference
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired security groups.
filter - (Optional) One or more name/value pairs to use as filters. There are several valid keys; for a full reference,
check out describe-security-groups in the AWS CLI reference
(https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-security-groups.html).
Attributes Reference
ids - IDs of the matched security groups.
vpc_ids - The VPC IDs of the matched security groups. The data source's tag or filter will span VPCs unless the
vpc-id filter is also used.
Data Source: aws_servicequotas_service
Retrieve information about a Service Quotas Service.
Example Usage
Argument Reference
service_name - (Required) Service name to lookup within Service Quotas. Available values can be found with the AWS
CLI service-quotas list-services command (https://docs.aws.amazon.com/cli/latest/reference/service-quotas/list-
services.html).
Attributes Reference
id - Code of the service.
Data Source: aws_servicequotas_service_quota
Retrieve information about a Service Quota.
Example Usage
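A minimal sketch (the service and quota codes are placeholders):
data "aws_servicequotas_service_quota" "by_quota_code" {
  # quota_code and service_code are placeholders
  quota_code   = "L-F678F1CE"
  service_code = "vpc"
}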
Argument Reference
service_code - (Required) Service code for the quota. Available values can be found with the
aws_servicequotas_service data source (/docs/providers/aws/d/servicequotas_service.html) or AWS CLI service-
quotas list-services command (https://docs.aws.amazon.com/cli/latest/reference/service-quotas/list-services.html).
quota_code - (Optional) Quota code within the service. When configured, the data source directly looks up the service
quota. Available values can be found with the AWS CLI service-quotas list-service-quotas command
(https://docs.aws.amazon.com/cli/latest/reference/service-quotas/list-service-quotas.html).
quota_name - (Optional) Quota name within the service. When configured, the data source searches through all
service quotas to find the matching quota name. Available values can be found with the AWS CLI service-quotas list-
service-quotas command (https://docs.aws.amazon.com/cli/latest/reference/service-quotas/list-service-quotas.html).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
global_quota - Whether the service quota is global for the AWS account.
Data Source: aws_sns_topic
Example Usage
Argument Reference
name - (Required) The friendly name of the topic to match.
Attributes Reference
arn - Set to the ARN of the found topic, suitable for referencing in other resources that support SNS topics.
Data Source: aws_sqs_queue
Use this data source to get the ARN and URL of queue in AWS Simple Queue Service (SQS). By using this data source, you can
reference SQS queues without having to hardcode the ARNs as input.
Example Usage
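A minimal sketch (the queue name is a placeholder):
data "aws_sqs_queue" "example" {
  name = "queue" # placeholder queue name
}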
Argument Reference
name - (Required) The name of the queue to match.
Attributes Reference
arn - The Amazon Resource Name (ARN) of the queue.
Data Source: aws_ssm_document
Example Usage
To get the contents of the document owned by AWS.
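A minimal sketch (the document name is a placeholder for any AWS-owned document); the output below then exposes its content:
data "aws_ssm_document" "foo" {
  name            = "AWS-GatherSoftwareInventory" # placeholder document name
  document_format = "YAML"
}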
output "content" {
value = "${data.aws_ssm_document.foo.content}"
}
Argument Reference
The following arguments are supported:
document_format - (Optional) Returns the document in the specified format. The document format can be either
JSON or YAML. JSON is the default format.
document_version - (Optional) The document version for which you want information.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Data Source: aws_ssm_parameter
Example Usage
Note: The unencrypted value of a SecureString will be stored in the raw state as plain-text. Read more about sensitive
data in state (/docs/state/sensitive-data.html).
Note: The data source is currently following the behavior of the SSM API (https://docs.aws.amazon.com/sdk-for-
go/api/service/ssm/#Parameter) to return a string value, regardless of parameter type. For type StringList , we can
use the built-in split() (https://www.terraform.io/docs/configuration/functions/split.html) function to get values in a list.
Example: split(",", data.aws_ssm_parameter.subnets.value)
Argument Reference
The following arguments are supported:
type - The type of the parameter. Valid types are String , StringList and SecureString .
Data Source: aws_storagegateway_local_disk
Example Usage
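A minimal sketch (the disk path is a placeholder and the gateway is assumed to be managed elsewhere in the configuration):
data "aws_storagegateway_local_disk" "example" {
  disk_path   = "/dev/xvdb" # placeholder device path
  gateway_arn = "${aws_storagegateway_gateway.example.arn}"
}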
Argument Reference
gateway_arn - (Required) The Amazon Resource Name (ARN) of the gateway.
disk_node - (Optional) The device node of the local disk to retrieve. For example, /dev/sdb .
disk_path - (Optional) The device path of the local disk to retrieve. For example, /dev/xvdb or /dev/nvme1n1 .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Data Source: aws_subnet
aws_subnet provides details about a specific VPC subnet.
This resource can prove useful when a module accepts a subnet id as an input variable and needs to, for example, determine
the id of the VPC that the subnet belongs to.
Example Usage
The following example shows how one might accept a subnet id as a variable and use this data source to obtain the data
necessary to create a security group that allows connections from hosts in that subnet.
variable "subnet_id" {}
resource "aws_security_group" "subnet" {
  # resource name is illustrative
  vpc_id = "${data.aws_subnet.selected.vpc_id}"

  ingress {
    cidr_blocks = ["${data.aws_subnet.selected.cidr_block}"]
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
  }
}
Argument Reference
The arguments of this data source act as filters for querying the available subnets in the current region. The given filters
must match exactly one subnet whose data will be exported as attributes.
availability_zone - (Optional) The availability zone where the subnet must reside.
default_for_az - (Optional) Boolean constraint for whether the desired subnet must be the default subnet for its
associated availability zone.
state - (Optional) The state that the desired subnet must have.
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired subnet.
vpc_id - (Optional) The id of the VPC that the desired subnet belongs to.
More complex lters can be expressed using one or more filter sub-blocks, which take the following arguments:
name - (Required) The name of the field to filter by, as defined by the underlying AWS API
(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSubnets.html). For example, if matching
against tag Name , use:
values - (Required) Set of values that are accepted for the given field. A subnet will be selected if any one of the given
values matches.
Attributes Reference
All of the argument attributes except filter blocks are also exported as result attributes. This data source will complete
the data by populating any fields that are not included in the configuration with the data for the selected subnet.
Data Source: aws_subnet_ids
This resource can be useful for getting back a list of subnet ids for a vpc.
Example Usage
The following shows outputting all cidr blocks for every subnet id in a vpc.
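A minimal sketch of the lookups that feed the output below (assuming a vpc_id variable):
data "aws_subnet_ids" "example" {
  vpc_id = "${var.vpc_id}"
}

data "aws_subnet" "example" {
  count = "${length(data.aws_subnet_ids.example.ids)}"
  id    = "${element(data.aws_subnet_ids.example.ids, count.index)}"
}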
output "subnet_cidr_blocks" {
value = ["${data.aws_subnet.example.*.cidr_block}"]
}
The following example retrieves a list of all subnets in a VPC with a custom tag of Tier set to a value of "Private" so that the
aws_instance resource can loop through the subnets, putting instances across availability zones.
tags = {
Tier = "Private"
}
}
Argument Reference
vpc_id - (Required) The VPC ID that you want to lter from.
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired subnets.
More complex filters can be expressed using one or more filter sub-blocks, which take the following arguments:
name - (Required) The name of the field to filter by, as defined by the underlying AWS API
(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSubnets.html). For example, if matching
against tag Name , use:
values - (Required) Set of values that are accepted for the given field. Subnet IDs will be selected if any one of the
given values match.
Attributes Reference
ids - A set of all the subnet ids found. This data source will fail if none are found.
Data Source: aws_transfer_server
Use this data source to get the ARN of an AWS Transfer Server for use in other resources.
Example Usage
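A minimal sketch (the server ID is a placeholder):
data "aws_transfer_server" "example" {
  server_id = "s-1234567" # placeholder server ID
}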
Argument Reference
server_id - (Required) ID for an SFTP server.
Attributes Reference
arn - Amazon Resource Name (ARN) of Transfer Server
identity_provider_type - The mode of authentication enabled for this service. The default value is
SERVICE_MANAGED , which allows you to store and access SFTP user credentials within the service. API_GATEWAY
indicates that user authentication requires a call to an API Gateway endpoint URL provided by you to integrate an
identity provider of your choice.
invocation_role - Amazon Resource Name (ARN) of the IAM role used to authenticate the user account with an
identity_provider_type of API_GATEWAY .
logging_role - Amazon Resource Name (ARN) of an IAM role that allows the service to write your SFTP users’ activity
to your Amazon CloudWatch logs for monitoring and auditing purposes.
url - URL of the service endpoint used to authenticate users with an identity_provider_type of API_GATEWAY .
Data Source: aws_vpc_dhcp_options
Retrieve information about an EC2 DHCP Options configuration.
Example Usage
Lookup by Filter
data "aws_vpc_dhcp_options" "example" {
  # data source name is illustrative
  filter {
    name   = "value"
    values = ["example.com"]
  }
}
Argument Reference
dhcp_options_id - (Optional) The EC2 DHCP Options ID.
filter - (Optional) Custom filter block as described below.
For more information about filtering, see the EC2 API documentation
(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeDhcpOptions.html).
Attributes Reference
dhcp_options_id - EC2 DHCP Options ID
domain_name - The suffix domain name to use when resolving non-Fully Qualified Domain Names, e.g. the search
value in the /etc/resolv.conf file.
netbios_node_type - The NetBIOS node type (1, 2, 4, or 8). For more information about these node types, see RFC
2132 (http://www.ietf.org/rfc/rfc2132.txt).
owner_id - The ID of the AWS account that owns the DHCP options set.
Data Source: aws_vpc_endpoint
The VPC Endpoint data source provides details about a specific VPC endpoint.
Example Usage
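A minimal sketch (the service name is a placeholder for a Gateway endpoint):
data "aws_vpc_endpoint" "s3" {
  vpc_id       = "${var.vpc_id}"
  service_name = "com.amazonaws.us-west-2.s3" # placeholder service name
}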
Argument Reference
The arguments of this data source act as filters for querying the available VPC endpoints. The given filters must match
exactly one VPC endpoint whose data will be exported as attributes.
service_name - (Optional) The AWS service name of the specific VPC Endpoint to retrieve.
vpc_id - (Optional) The ID of the VPC in which the specific VPC Endpoint is used.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
cidr_blocks - The list of CIDR blocks for the exposed AWS service. Applicable for endpoints of type Gateway .
dns_entry - The DNS entries for the VPC Endpoint. Applicable for endpoints of type Interface . DNS blocks are
documented below.
network_interface_ids - One or more network interfaces for the VPC Endpoint. Applicable for endpoints of type
Interface .
owner_id - The ID of the AWS account that owns the VPC endpoint.
policy - The policy document associated with the VPC Endpoint. Applicable for endpoints of type Gateway .
prefix_list_id - The prefix list ID of the exposed AWS service. Applicable for endpoints of type Gateway .
private_dns_enabled - Whether or not the VPC is associated with a private hosted zone - true or false .
Applicable for endpoints of type Interface .
requester_managed - Whether or not the VPC Endpoint is being managed by its service - true or false .
route_table_ids - One or more route tables associated with the VPC Endpoint. Applicable for endpoints of type
Gateway .
security_group_ids - One or more security groups associated with the network interfaces. Applicable for endpoints
of type Interface .
subnet_ids - One or more subnets in which the VPC Endpoint is located. Applicable for endpoints of type
Interface .
Data Source: aws_vpc_endpoint_service
Example Usage
AWS service usage:
Argument Reference
The arguments of this data source act as filters for querying the available VPC endpoint services. The given filters must
match exactly one VPC endpoint service whose data will be exported as attributes.
service_name - (Optional) The service name that can be specified when creating a VPC endpoint.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
acceptance_required - Whether or not VPC endpoint connection requests to the service must be accepted by the
service owner - true or false .
manages_vpc_endpoints - Whether or not the service manages its VPC endpoints - true or false .
vpc_endpoint_policy_supported - Whether or not the service supports endpoint policies - true or false .
Data Source: aws_vpc
aws_vpc provides details about a specific VPC.
This resource can prove useful when a module accepts a vpc id as an input variable and needs to, for example, determine
the CIDR block of that VPC.
Example Usage
The following example shows how one might accept a VPC id as a variable and use this data source to obtain the data
necessary to create a subnet within it.
variable "vpc_id" {}
Argument Reference
The arguments of this data source act as filters for querying the available VPCs in the current region. The given filters must
match exactly one VPC whose data will be exported as attributes.
default - (Optional) Boolean constraint on whether the desired VPC is the default VPC for the region.
state - (Optional) The current state of the desired VPC. Can be either "pending" or "available" .
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired VPC.
More complex lters can be expressed using one or more filter sub-blocks, which take the following arguments:
name - (Required) The name of the field to filter by, as defined by the underlying AWS API
(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcs.html).
values - (Required) Set of values that are accepted for the given field. A VPC will be selected if any one of the given
values matches.
Attributes Reference
All of the argument attributes except filter blocks are also exported as result attributes. This data source will complete
the data by populating any fields that are not included in the configuration with the data for the selected VPC.
instance_tenancy - The allowed tenancy of instances launched into the selected VPC. May be any of "default" ,
"dedicated" , or "host" .
main_route_table_id - The ID of the main route table associated with this VPC.
Data Source: aws_vpc_peering_connection
The VPC Peering Connection data source provides details about a specific VPC peering connection.
Example Usage
Argument Reference
The arguments of this data source act as filters for querying the available VPC peering connection. The given filters must
match exactly one VPC peering connection whose data will be exported as attributes.
status - (Optional) The status of the specific VPC Peering Connection to retrieve.
vpc_id - (Optional) The ID of the requester VPC of the specific VPC Peering Connection to retrieve.
owner_id - (Optional) The AWS account ID of the owner of the requester VPC of the specific VPC Peering Connection
to retrieve.
cidr_block - (Optional) The CIDR block of the requester VPC of the specific VPC Peering Connection to retrieve.
region - (Optional) The region of the requester VPC of the specific VPC Peering Connection to retrieve.
peer_vpc_id - (Optional) The ID of the accepter VPC of the specific VPC Peering Connection to retrieve.
peer_owner_id - (Optional) The AWS account ID of the owner of the accepter VPC of the specific VPC Peering
Connection to retrieve.
peer_cidr_block - (Optional) The CIDR block of the accepter VPC of the specific VPC Peering Connection to retrieve.
peer_region - (Optional) The region of the accepter VPC of the specific VPC Peering Connection to retrieve.
filter - (Optional) Custom filter block as described below.
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired VPC Peering
Connection.
More complex filters can be expressed using one or more filter sub-blocks, which take the following arguments:
name - (Required) The name of the field to filter by, as defined by the underlying AWS API
(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcPeeringConnections.html).
values - (Required) Set of values that are accepted for the given field. A VPC Peering Connection will be selected if
any one of the given values matches.
Attributes Reference
All of the argument attributes except filter are also exported as result attributes.
allow_remote_vpc_dns_resolution - Indicates whether a local VPC can resolve public DNS hostnames to private IP
addresses when queried from instances in a peer VPC.
allow_classic_link_to_remote_vpc - Indicates whether a local ClassicLink connection can communicate with the
peer VPC over the VPC peering connection.
allow_vpc_to_remote_classic_link - Indicates whether a local VPC can communicate with a ClassicLink connection
in the peer VPC over the VPC peering connection.
Data Source: aws_vpcs
This resource can be useful for getting back a list of VPC Ids for a region.
The following example retrieves a list of VPC Ids with a custom tag of service set to a value of "production".
Example Usage
The following shows outputting all VPC Ids.
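A minimal sketch of the data source that the outputs below refer to (the tag filter matches the production example above):
data "aws_vpcs" "foo" {
  tags = {
    service = "production"
  }
}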
output "foo" {
value = "${data.aws_vpcs.foo.ids}"
}
An example use case would be to interpolate the aws_vpcs output into the count of an aws_flow_log resource.
output "foo" {
value = "${data.aws_vpcs.foo.ids}"
}
Argument Reference
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired vpcs.
More complex filters can be expressed using one or more filter sub-blocks, which take the following arguments:
name - (Required) The name of the field to filter by, as defined by the underlying AWS API
(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcs.html).
values - (Required) Set of values that are accepted for the given field. A VPC will be selected if any one of the given
values matches.
Attributes Reference
ids - A list of all the VPC Ids found. This data source will fail if none are found.
Data Source: aws_vpn_gateway
The VPN Gateway data source provides details about a specific VPN gateway.
Example Usage
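A minimal sketch of the lookup that the output below refers to (the tag value is a placeholder):
data "aws_vpn_gateway" "selected" {
  filter {
    name   = "tag:Name"
    values = ["vpn-gw"] # placeholder tag value
  }
}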
output "vpn_gateway_id" {
value = "${data.aws_vpn_gateway.selected.id}"
}
Argument Reference
The arguments of this data source act as filters for querying the available VPN gateways. The given filters must match exactly
one VPN gateway whose data will be exported as attributes.
availability_zone - (Optional) The Availability Zone of the specific VPN Gateway to retrieve.
attached_vpc_id - (Optional) The ID of a VPC attached to the specific VPN Gateway to retrieve.
tags - (Optional) A mapping of tags, each pair of which must exactly match a pair on the desired VPN Gateway.
amazon_side_asn - (Optional) The Autonomous System Number (ASN) for the Amazon side of the specific VPN
Gateway to retrieve.
More complex filters can be expressed using one or more filter sub-blocks, which take the following arguments:
name - (Required) The name of the field to filter by, as defined by the underlying AWS API
(http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpnGateways.html).
values - (Required) Set of values that are accepted for the given field. A VPN Gateway will be selected if any one of
the given values matches.
Attributes Reference
All of the argument attributes are also exported as result attributes.
Data Source: aws_waf_ipset
aws_waf_ipset retrieves a WAF IP Set Resource ID.
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
name - (Required) The name of the WAF Regional rate based rule.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
The following attributes are exported:
compute_type
root_storage
user_storage
Custom Service Endpoints
This guide outlines how to get started with customizing endpoints, the available endpoint configurations, and offers
example configurations for working with certain local development and testing solutions.
NOTE: Support for connecting the Terraform AWS Provider with custom endpoints and AWS compatible solutions is
offered as best effort. Individual Terraform resources may require compatibility updates to work in certain
environments. Integration testing by HashiCorp during provider changes is exclusively done against default AWS
endpoints at this time.
LocalStack
provider "aws" {
endpoints {
dynamodb = "http://localhost:4569"
s3 = "http://localhost:4572"
}
}
If multiple, different Terraform AWS Provider configurations are required, see the Terraform documentation on multiple
provider instances (https://www.terraform.io/docs/configuration/providers.html#alias-multiple-provider-instances) for
additional information about the alias provider configuration and its usage.
acm
acmpca
amplify
apigateway
applicationautoscaling
applicationinsights
appmesh
appstream
appsync
athena
autoscaling
autoscalingplans
backup
batch
budgets
cloud9
cloudformation
cloudfront
cloudhsm
cloudsearch
cloudtrail
cloudwatch
cloudwatchevents
cloudwatchlogs
codebuild
codecommit
codedeploy
codepipeline
cognitoidentity
cognitoidp
configservice
cur
datapipeline
datasync
dax
devicefarm
directconnect
dlm
dms
docdb
ds
dynamodb
ec2
ecr
ecs
efs
eks
elasticache
elasticbeanstalk
elastictranscoder
elb
emr
es
firehose
fms
forecast
fsx
gamelift
glacier
globalaccelerator
glue
guardduty
iam
inspector
iot
iotanalytics
iotevents
kafka
kinesis
kinesisanalytics
kinesisvideo
kms
lakeformation
lambda
lexmodels
licensemanager
lightsail
macie
managedblockchain
mediaconnect
mediaconvert
medialive
mediapackage
mediastore
mediastoredata
mq
neptune
opsworks
organizations
personalize
pinpoint
pricing
qldb
quicksight
ram
rds
redshift
resourcegroups
route53
route53resolver
s3
s3control
sagemaker
sdb
secretsmanager
securityhub
serverlessrepo
servicecatalog
servicediscovery
servicequotas
ses
shield
sns
sqs
ssm
stepfunctions
storagegateway
sts
swf
transfer
waf
wafregional
worklink
workspaces
xray
NOTE: This information is not intended to be exhaustive for all local AWS compatible solutions or necessarily
authoritative configurations for those documented. Check the documentation for each of these solutions for the most
up to date information.
DynamoDB Local
The Amazon DynamoDB service offers a downloadable version for writing and testing applications without accessing the
DynamoDB web service. For more information about this solution, see the DynamoDB Local documentation in the Amazon
DynamoDB Developer Guide
(https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html).
provider "aws" {
access_key = "mock_access_key"
region = "us-east-1"
secret_key = "mock_secret_key"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
dynamodb = "http://localhost:8000"
}
}
LocalStack
LocalStack (https://localstack.cloud/) provides an easy-to-use test/mocking framework for developing Cloud applications.
provider "aws" {
  access_key                  = "mock_access_key"
  region                      = "us-east-1"
  secret_key                  = "mock_secret_key"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
apigateway = "http://localhost:4567"
cloudformation = "http://localhost:4581"
cloudwatch = "http://localhost:4582"
dynamodb = "http://localhost:4569"
es = "http://localhost:4578"
firehose = "http://localhost:4573"
iam = "http://localhost:4593"
kinesis = "http://localhost:4568"
lambda = "http://localhost:4574"
route53 = "http://localhost:4580"
redshift = "http://localhost:4577"
s3 = "http://localhost:4572"
secretsmanager = "http://localhost:4584"
ses = "http://localhost:4579"
sns = "http://localhost:4575"
sqs = "http://localhost:4576"
ssm = "http://localhost:4583"
stepfunctions = "http://localhost:4585"
sts = "http://localhost:4592"
}
}
Getting Started with AWS EKS
The Amazon Web Services EKS service allows for simplified management of Kubernetes (https://kubernetes.io/) servers.
While the service itself is quite simple from an operator perspective, understanding how it interconnects with other pieces of
the AWS service universe and how to configure local Kubernetes clients to manage clusters can be helpful.
While the EKS User Guide (https://docs.aws.amazon.com/eks/latest/userguide/) provides much of the up-to-date
information about getting started with the service from a generic standpoint, this guide provides a Terraform configuration
based introduction.
This guide will show how to deploy a sample architecture using Terraform. The guide assumes some basic familiarity with
Kubernetes but does not assume any pre-existing deployment. It also assumes that you are familiar with the usual
Terraform plan/apply workflow; if you're new to Terraform itself, refer first to the Getting Started guide (/intro/getting-
started/install.html).
It is worth noting that there are other valid ways to use these services and resources that make different tradeoffs. We
encourage readers to consult the official documentation for the respective services and resources for additional context and
best-practices. This guide can still serve as an introduction to the main resources associated with these services, even if you
choose a di erent architecture.
Guide Overview
Preparation
Kubernetes Masters
EKS Master Cluster IAM Role
Guide Overview
Warning: Following this guide will create objects in your AWS account that will cost you money against your AWS bill.
AutoScaling Group containing 2 m4.large instances based on the latest EKS Amazon Linux 2 AMI: Operator managed
Kubernetes worker nodes for running Kubernetes service deployments
Associated VPC, Internet Gateway, Security Groups, and Subnets: Operator managed networking resources for the EKS
Cluster and worker node instances
Associated IAM Roles and Policies: Operator managed access resources for EKS and worker node instances
Preparation
In order to follow this guide you will need an AWS account and to have Terraform installed. Configure your credentials
(/docs/providers/aws/index.html#authentication) so that Terraform is able to act on your behalf.
For simplicity here, we will assume you are already using a set of IAM credentials with suitable access to create AutoScaling,
EC2, EKS, and IAM resources. If you are not sure and are working in an AWS account used only for development, the
simplest approach to get started is to use credentials with full administrative access to the target AWS account.
If you are planning to locally use the standard Kubernetes client, kubectl , it must be at least version 1.10 to support exec
authentication with usage of aws-iam-authenticator . For additional information about installation and configuration of
these applications, see their official documentation.
Relevant Links:
NOTE: We recommend using this guide to build a separate Terraform configuration (for easy tear down) and, more
importantly, running it in a separate AWS account from your production infrastructure. While it is self-contained and
should not affect existing infrastructure, it's always best to be cautious!
NOTE: If you would rather see the full sample Terraform configuration for this guide rather than the individual pieces, it
can be found at: https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/eks-getting-
started (https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/eks-getting-started)
variable "cluster-name" {
default = "terraform-eks-demo"
type = "string"
}
NOTE: The usage of the specific kubernetes.io/cluster/* resource tags below are required for EKS and Kubernetes
to discover and manage networking resources.
The below will create a 10.0.0.0/16 VPC, two 10.0.X.0/24 subnets, an internet gateway, and set up the subnet routing to route
external traffic through the internet gateway:
data "aws_availability_zones" "available" {}
tags = "${
map(
"Name", "terraform-
- eks-
- demo-
- node",
"kubernetes.io
io/
/ cluster/
/ ${var.cluster
cluster-
- name}", "shared",
)
}"
}
availability_zone = "${data.aws_availability_zones.available.names[count.index]}"
cidr_block = "10.0.${count.index}.0/24"
vpc_id = "${aws_vpc.demo.id}"
tags = "${
map(
"Name", "terraform-
- eks-
- demo-
- node",
"kubernetes.io
io/
/ cluster/
/ ${var.cluster
cluster-
- name}", "shared",
)
}"
}
resource "aws_internet_gateway" "demo" {
  vpc_id = "${aws_vpc.demo.id}"

  tags = {
    Name = "terraform-eks-demo"
  }
}
resource "aws_route_table" "demo" {
  vpc_id = "${aws_vpc.demo.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.demo.id}"
  }
}
subnet_id = "${aws_subnet.demo.*.id[count.index]}"
route_table_id = "${aws_route_table.demo.id}"
}
Kubernetes Masters
This is where the EKS service comes into play. It requires a few operator managed resources beforehand so that Kubernetes
can properly manage other AWS services as well as allow inbound networking communication from your local workstation (if
desired) and worker nodes.
The below is an example IAM role and policy to allow the EKS service to manage or retrieve data from other AWS services. It
is also possible to create these policies with the aws_iam_policy_document data source
(/docs/providers/aws/d/iam_policy_document.html)
For the latest required policy, see the EKS User Guide (https://docs.aws.amazon.com/eks/latest/userguide/).
resource "aws_iam_role" "demo-cluster" {
  name = "terraform-eks-demo-cluster" # role name is illustrative

  assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
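The depends_on references in the cluster resource later in this guide assume IAM role policy attachments along these lines (a sketch using the standard Amazon-managed EKS policies):
resource "aws_iam_role_policy_attachment" "demo-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = "${aws_iam_role.demo-cluster.name}"
}

resource "aws_iam_role_policy_attachment" "demo-cluster-AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = "${aws_iam_role.demo-cluster.name}"
}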
This security group controls networking access to the Kubernetes masters. We will later configure this with an ingress rule to
allow traffic from the worker nodes.
resource "aws_security_group" "demo-cluster" {
name = "terraform-eks-demo-cluster"
description = "Cluster communication with worker nodes"
vpc_id = "${aws_vpc.demo.id}"
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "terraform-eks-demo"
}
}
This resource is the actual Kubernetes master cluster. It can take a few minutes to provision in AWS.
resource "aws_eks_cluster" "demo" {
  name     = "${var.cluster-name}"
  role_arn = "${aws_iam_role.demo-cluster.arn}"

  vpc_config {
security_group_ids = ["${aws_security_group.demo-cluster.id}"]
subnet_ids = ["${aws_subnet.demo.*.id}"]
}
depends_on = [
"aws_iam_role_policy_attachment.demo-cluster-AmazonEKSClusterPolicy",
"aws_iam_role_policy_attachment.demo-cluster-AmazonEKSServicePolicy",
]
}
This section only provides some example methods for configuring kubectl to communicate with EKS servers.
Managing Kubernetes clients and configurations is outside the scope of this guide.
If you are planning on using kubectl to manage the Kubernetes cluster, now might be a great time to configure your client.
After configuration, you can verify cluster access via kubectl version displaying server version information in addition to
local client version information.
If you would rather update your configuration manually, the below Terraform output generates a sample kubectl
configuration to connect to your cluster. This can be placed into a Kubernetes configuration file, e.g. ~/.kube/config
locals {
kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
server: ${aws_eks_cluster.demo.endpoint}
certificate-authority-data: ${aws_eks_cluster.demo.certificate_authority.0.data}
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: aws
name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
command: aws-iam-authenticator
args:
- "token"
- "-i"
- "${var.cluster-name}"
KUBECONFIG
}
output "kubeconfig" {
value = "${local.kubeconfig}"
}
The below is an example IAM role and policy to allow the worker nodes to manage or retrieve data from other AWS services.
It is used by Kubernetes to allow worker nodes to join the cluster. It is also possible to create these policies with the
aws_iam_policy_document data source (/docs/providers/aws/d/iam_policy_document.html)
For the latest required policy, see the EKS User Guide (https://docs.aws.amazon.com/eks/latest/userguide/).
resource "aws_iam_role" "demo-node" {
  name = "terraform-eks-demo-node" # role name is illustrative

  assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
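The worker nodes also need the standard Amazon-managed worker policies attached, plus an instance profile for the launch configuration to reference; a sketch (resource names are illustrative):
resource "aws_iam_role_policy_attachment" "demo-node-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = "${aws_iam_role.demo-node.name}"
}

resource "aws_iam_role_policy_attachment" "demo-node-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = "${aws_iam_role.demo-node.name}"
}

resource "aws_iam_role_policy_attachment" "demo-node-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = "${aws_iam_role.demo-node.name}"
}

resource "aws_iam_instance_profile" "demo-node" {
  name = "terraform-eks-demo"
  role = "${aws_iam_role.demo-node.name}"
}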
This security group controls networking access to the Kubernetes worker nodes.
resource "aws_security_group" "demo-node" {
name = "terraform-eks-demo-node"
description = "Security group for all nodes in the cluster"
vpc_id = "${aws_vpc.demo.id}"
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = "${
map(
"Name", "terraform-
- eks-
- demo-
- node",
"kubernetes.io
io/
/ cluster/
/ ${var.cluster
cluster-
- name}", "owned",
)
}"
}
Now that we have a way to know where traffic from the worker nodes is coming from, we can allow the worker nodes
networking access to the EKS master cluster.
resource "aws_security_group_rule" "demo-cluster-ingress-node-https" {
description = "Allow pods to communicate with the cluster API Server"
from_port = 443
protocol = "tcp"
security_group_id = "${aws_security_group.demo-cluster.id}"
source_security_group_id = "${aws_security_group.demo-node.id}"
to_port = 443
type = "ingress"
}
Now we have everything in place to create and manage EC2 instances that will serve as our worker nodes in the Kubernetes
cluster. This setup utilizes an EC2 AutoScaling Group (ASG) rather than manually working with EC2 instances. This offers
flexibility to scale up and down the worker nodes on demand when used in conjunction with AutoScaling policies (not
implemented here).
First, let us create a data source to fetch the latest Amazon Machine Image (AMI) that Amazon provides with an EKS
compatible Kubernetes baked in. It will filter for and select an AMI compatible with the specific Kubernetes version being
deployed.
  most_recent = true
  owners      = ["602401143452"] # Amazon EKS AMI account ID
}
Next, let's create an AutoScaling Launch Configuration that uses all our prerequisite resources to define how to create EC2
instances using them.
data "aws_region" "current" {}
locals {
  demo-node-userdata = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.demo.endpoint}' --b64-cluster-ca '${aws_eks_cluster.demo.certificate_authority.0.data}' '${var.cluster-name}'
USERDATA
}
lifecycle {
create_before_destroy = true
}
}
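The lifecycle block above belongs to a launch configuration along these lines (a sketch; it assumes the AMI data source is named eks-worker and uses the instance profile sketched earlier):
resource "aws_launch_configuration" "demo" {
  associate_public_ip_address = true
  iam_instance_profile        = "${aws_iam_instance_profile.demo-node.name}"
  image_id                    = "${data.aws_ami.eks-worker.id}" # assumes the AMI data source is named "eks-worker"
  instance_type               = "m4.large"
  name_prefix                 = "terraform-eks-demo"
  security_groups             = ["${aws_security_group.demo-node.id}"]
  user_data_base64            = "${base64encode(local.demo-node-userdata)}"

  lifecycle {
    create_before_destroy = true
  }
}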
Finally, we create an AutoScaling Group that actually launches EC2 instances based on the AutoScaling Launch Configuration.
NOTE: The usage of the specific kubernetes.io/cluster/* resource tag below is required for EKS and Kubernetes to
discover and manage compute resources.
resource "aws_autoscaling_group" "demo" {
desired_capacity = 2
launch_configuration = "${aws_launch_configuration.demo.id}"
max_size = 2
min_size = 1
name = "terraform-eks-demo"
vpc_zone_identifier = ["${aws_subnet.demo.*.id}"]
tag {
key = "Name"
value = "terraform-eks-demo"
propagate_at_launch = true
}
tag {
key = "kubernetes.io/cluster/${var.cluster-name}"
value = "owned"
propagate_at_launch = true
}
}
NOTE: At this point, your Kubernetes cluster will have running masters and worker nodes, however, the worker nodes
will not be able to join the Kubernetes cluster quite yet. The next section has the required Kubernetes configuration to
enable the worker nodes to join the cluster.
While managing Kubernetes cluster and client configurations is beyond the scope of this guide, we provide an
example of how to apply the required Kubernetes ConfigMap (http://kubernetes.io/docs/user-guide/configmap/) via
kubectl below for completeness. See also the Configuring kubectl for EKS section.
The EKS service does not provide a cluster-level API parameter or resource to automatically configure the underlying
Kubernetes cluster to allow worker nodes to join the cluster via AWS IAM role authentication.
locals {
config_map_aws_auth = <<CONFIGMAPAWSAUTH
apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
- rolearn: ${aws_iam_role.demo-node.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
CONFIGMAPAWSAUTH
}
output "config_map_aws_auth" {
value = "${local.config_map_aws_auth}"
}
Run terraform output config_map_aws_auth and save the configuration into a file, e.g.
config_map_aws_auth.yaml
You can verify the worker nodes are joining the cluster via: kubectl get nodes --watch
The example policy documents and resources in this guide are for illustrative purposes only. Full documentation about the
IAM policy format and supported elements can be found in the AWS IAM User Guide
(https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html).
NOTE: Some AWS services only allow a subset of the policy elements or policy variables. For more information, see the
AWS User Guide for the service you are configuring.
The recommended approach to building AWS IAM policy documents within Terraform is the highly customizable
aws_iam_policy_document data source. A short list of benefits over other methods includes:
Native Terraform configuration - no need to worry about JSON formatting or syntax
Policy layering - create policy documents that combine and/or overwrite other policy documents
Additional methods are available, such as single-line string syntax, the file() interpolation function, and the template_file data
source; however, their usage is discouraged due to their complexity.
For complete implementation information and examples, see the aws_iam_policy_document data source
documentation (/docs/providers/aws/d/iam_policy_document.html).
policy = "${data.aws_iam_policy_document.example.json}"
}
For example:
resource "aws_iam_policy" "example" {
  # ... other configuration ...

  policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
}
POLICY
}
Other Configuration Method Examples
These other configuration methods are provided only for reference and not meant to be an authoritative source of
information.
For example:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  }
}
Those contents can be read into the Terraform configuration via:
policy = "${file("policy.json")}"
}
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "*",
    "Resource": "${resource}"
  }
}
Those contents can be read and interpolated into the Terraform configuration via:
vars = {
resource = "${aws_vpc.example.arn}"
}
}
policy = "${data.template_file.example.rendered}"
}
Serverless Applications with AWS Lambda and API
Gateway
Serverless computing is a cloud computing model in which a cloud provider automatically manages the provisioning and
allocation of compute resources. This contrasts with traditional cloud computing where the user is responsible for directly
managing virtual servers.
A popular approach to running "serverless" web applications is to implement the application functionality as one or more
functions in AWS Lambda (https://aws.amazon.com/lambda/) and then expose these for public consumption using Amazon
API Gateway (https://aws.amazon.com/api-gateway/).
This guide will show how to deploy such an architecture using Terraform. The guide assumes some basic familiarity with
Lambda and API Gateway but does not assume any pre-existing deployment. It also assumes that you are familiar with the
usual Terraform plan/apply workflow; if you're new to Terraform itself, refer first to the Getting Started guide (/intro/getting-
started/install.html).
This is a slightly-opinionated guide, which chooses to ignore the built-in versioning and staged deployment mechanisms in
AWS Lambda and API Gateway. In many cases these features are not necessary when using Terraform because changes can
be tracked and deployed by keeping the Terraform configuration in a version-control repository. It also uses API Gateway in
a very simple way, proxying all requests to a single AWS Lambda function that is expected to contain its own request routing
logic.
As usual, there are other valid ways to use these services that make different tradeoffs. We encourage readers to consult the
official documentation for the respective services for additional context and best-practices. This guide can still serve as an
introduction to the main resources associated with these services, even if you choose a di erent architecture.
Preparation
In order to follow this guide you will need an AWS account and to have Terraform installed. Configure your credentials
(/docs/providers/aws/index.html#authentication) so that Terraform is able to act on your behalf.
For simplicity here we will assume you are already using a set of IAM credentials with suitable access to create Lambda
functions and work with API Gateway. If you aren't sure and are working in an AWS account used only for development, the
simplest approach to get started is to use credentials with full administrative access to the target AWS account.
In the following section we will manually emulate an automatic build process using the zip command line tool and the AWS
CLI (https://aws.amazon.com/cli/). The latter must also have access to your AWS credentials, and the easiest way to achieve
this is to provide them via environment variables so that they can be used by both the AWS CLI and Terraform.
Warning: Following this tutorial will create objects in your AWS account that will cost you money against your AWS bill.
For the sake of this tutorial we will perform these build steps manually and build a very simple AWS Lambda function. Start
by creating a new directory called example that will be used to create the archive, and place in it a single source file. We will
use the JavaScript runtime in this example, so our file is called main.js and will contain the following source code:
'use strict';
The above is the simplest possible Lambda function for use with API Gateway, returning a hard-coded "Hello world!"
response in the object structure that API Gateway expects.
From your command prompt, change to the directory containing that file and add it to a zip file in the parent directory:
$ cd example
$ zip ../example.zip main.js
adding: main.js (deflated 33%)
$ cd ..
In a real build and deploy scenario we would have an S3 bucket set aside for staging our archive and would use this to "hand
o " these artifacts between the build and deploy process. For the sake of this tutorial we will create a temporary S3 bucket
using the AWS CLI. S3 bucket names are globally unique, so you may need to change the --bucket= argument in the
following example and substitute your new bucket name throughout the rest of this tutorial.
You can now upload your build artifact into this S3 bucket:
A version number is included in the object path to identify this build. Later we will demonstrate deploying a new version,
which will create another separate object.
provider "aws" {
region = "us-east-1"
}
s3_bucket = "terraform-serverless-example"
s3_key = "v1.0.0/example.zip"
handler = "main.handler"
runtime = "nodejs8.10"
role = "${aws_iam_role.lambda_exec.arn}"
}
assume_role_policy = <<EOF
<<
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
Each Lambda function must have an associated IAM role which dictates what access it has to other AWS services. The above
configuration specifies a role with no access policy, effectively giving the function no access to any AWS services, since our
example application requires no such access.
Before you can work with a new configuration directory, it must be initialized using terraform init , which in this case will
install the AWS provider:
$ terraform init
# ...
# ...
$ terraform apply
# ....
aws_iam_role.lambda_exec: Creating...
arn: "" => "<computed>"
assume_role_policy: "" => "{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Act
ion\": \"sts:AssumeRole\",\n \"Principal\": {\n \"Service\": \"lambda.amazonaws.com\"\n
},\n \"Effect\": \"Allow\",\n \"Sid\": \"\"\n }\n ]\n}\n"
create_date: "" => "<computed>"
force_detach_policies: "" => "false"
name: "" => "serverless_example_lambda"
path: "" => "/"
unique_id: "" => "<computed>"
aws_iam_role.lambda_exec: Creation complete after 1s (ID: serverless_example_lambda)
aws_lambda_function.example: Creating...
arn: "" => "<computed>"
function_name: "" => "ServerlessExample"
handler: "" => "main.handler"
invoke_arn: "" => "<computed>"
last_modified: "" => "<computed>"
memory_size: "" => "128"
publish: "" => "false"
qualified_arn: "" => "<computed>"
role: "" => "arn:aws:iam::123456:role/serverless_example_lambda"
runtime: "" => "nodejs8.10"
s3_bucket: "" => "terraform-serverless-example"
s3_key: "" => "v1.0.0/example.zip"
source_code_hash: "" => "<computed>"
timeout: "" => "3"
tracing_config.#: "" => "<computed>"
version: "" => "<computed>"
aws_lambda_function.example: Still creating... (10s elapsed)
aws_lambda_function.example: Creation complete after 11s (ID: ServerlessExample)
After the function is created successfully, try invoking it using the AWS CLI:
$ aws lambda invoke --region=us-east-1 --function-name=ServerlessExample output.txt
{"StatusCode": 200}
$ cat output.txt
{
"statusCode":200,
"headers":{
"Content-Type":"text/html; charset=utf-8"
},
"body":"<p>Hello world!</p>"
}
With the function working as expected, the next step is to create the API Gateway REST API that will provide access to it.
Create a new file api_gateway.tf in the same directory as our lambda.tf from the previous step. First, configure the
root "REST API" object, as follows:
The "REST API" is the container for all of the other API Gateway objects we will create.
All incoming requests to API Gateway must match with a configured resource and method in order to be handled. Append
the following to the lambda.tf file to define a single proxy resource:
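A sketch of that configuration follows; the resource names ("proxy") are illustrative, while the path_part and http_method values are the ones described below:
resource "aws_api_gateway_resource" "proxy" {
  rest_api_id = "${aws_api_gateway_rest_api.example.id}"
  parent_id   = "${aws_api_gateway_rest_api.example.root_resource_id}"
  path_part   = "{proxy+}"
}

resource "aws_api_gateway_method" "proxy" {
  rest_api_id   = "${aws_api_gateway_rest_api.example.id}"
  resource_id   = "${aws_api_gateway_resource.proxy.id}"
  http_method   = "ANY"
  authorization = "NONE"
}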
The special path_part value "{proxy+}" activates proxy behavior, which means that this resource will match any request
path. Similarly, the aws_api_gateway_method block uses a http_method of "ANY" , which allows any request method to
be used. Taken together, this means that all incoming requests will match this resource.
Each method on an API gateway resource has an integration which specifies where incoming requests are routed. Add the
following configuration to specify that requests to this method should be sent to the Lambda function defined earlier:
integration_http_method = "POST"
type = "AWS_PROXY"
uri = "${aws_lambda_function.example.invoke_arn}"
}
The AWS_PROXY integration type causes API gateway to call into the API of another AWS service. In this case, it will call the
AWS Lambda API to create an "invocation" of the Lambda function.
Unfortunately the proxy resource cannot match an empty path at the root of the API. To handle that, a similar configuration
must be applied to the root resource that is built in to the REST API object:
integration_http_method = "POST"
type = "AWS_PROXY"
uri = "${aws_lambda_function.example.invoke_arn}"
}
Finally, you need to create an API Gateway "deployment" in order to activate the configuration and expose the API at a URL
that can be used for testing:
resource "aws_api_gateway_deployment" "example" {
  rest_api_id = "${aws_api_gateway_rest_api.example.id}"
  stage_name  = "test"
}
With all of the above configuration changes in place, run terraform apply again to create these new objects:
$ terraform apply
# ...
aws_api_gateway_rest_api.example: Creating...
created_date: "" => "<computed>"
description: "" => "Terraform Serverless Application Example"
name: "" => "ServerlessExample"
root_resource_id: "" => "<computed>"
aws_api_gateway_rest_api.example: Creation complete after 1s (ID: bkqhuuz8r8)
# ...etc, etc...
After the creation steps are complete, the new objects will be visible in the API Gateway console
(https://console.aws.amazon.com/apigateway/home?region=us-east-1).
The integration with the Lambda function is not functional yet because API Gateway does not have the necessary access to
invoke the function. The next step will address this, making the application fully-functional.
source_arn = "${aws_api_gateway_rest_api.example.execution_arn}/*/*"
}
In order to test the created API you will need to access its test URL. To make this easier to access, add the following output to
api_gateway.tf :
output "base_url" {
value = "${aws_api_gateway_deployment.example.invoke_url}"
}
# ...
aws_lambda_permission.apigw: Creating...
statement_id: "" => "AllowAPIGatewayInvoke"
action: "" => "lambda:InvokeFunction"
function_name: "" => "ServerlessExample"
# ...
aws_lambda_permission.apigw: Creation complete after 1s
Outputs:
base_url = https://bkqhuuz8r8.execute-api.us-east-1.amazonaws.com/test
Load the URL given in the output from your run in your favorite web browser. If everything has worked, you will see the text
"Hello world!". This message is being returned from the Lambda function code uploaded earlier, via the API Gateway
endpoint.
This is a good milestone! The first version of the application is deployed and accessible. Next we will see how to deploy a
new version of the application.
Returning to the example directory containing the main.js from earlier, update the source code to change the message.
For example:
'use strict';
Update the zip file and upload a new version to the artifact S3 bucket:
$ cd example
$ zip ../example.zip main.js
updating: main.js (deflated 33%)
$ cd ..
$ aws s3 cp example.zip s3://terraform-serverless-example/v1.0.1/example.zip
Notice that a different version number was used in the S3 object path, so the previous archive is retained. In order to allow
easy switching between versions you can define a variable to allow the version number to be chosen dynamically. Add the
following to lambda.tf :
variable "app_version" {
}
Then locate the aws_lambda_function resource defined earlier and change its s3_key argument to include the version
variable:
s3_bucket = "terraform-serverless-example"
s3_key = "v${var.app_version}/example.zip"
Now run terraform apply again, this time passing the new version number via the variable (for example, terraform apply -var="app_version=1.0.1" ); the plan will show the function being updated in place:
~ aws_lambda_function.example
s3_key: "v1.0.0/example.zip" => "v1.0.1/example.zip"
# ...
After the change has been applied, visit the test URL again and you should see the updated greeting message.
Because both versions of the artifact remain in S3, you can roll back by running terraform apply again with the previous
version number (for example, -var="app_version=1.0.0" ). After this apply completes, the test URL will return the original message again.
Conclusion
In this guide you created an AWS Lambda function that produces a result compatible with Amazon API Gateway proxy
resources and then configured API Gateway.
Although the AWS Lambda function used in this guide is very simple, in more practical applications it is possible to use
helper libraries to map API Gateway proxy requests to standard HTTP application APIs in various languages, such as Python's
WSGI (https://pypi.python.org/pypi/aws-wsgi/0.0.6) or the NodeJS Express Framework (https://github.com/awslabs/aws-
serverless-express).
When combined with an automated build process running in a CI system, Terraform can help to deploy applications as AWS
Lambda functions, with suitable IAM policies to connect with other AWS services for persistent storage, access to secrets,
etc.
Cleaning Up
Once you are finished with this guide, you can destroy the example objects with Terraform. Since our configuration requires
a version number as an input variable, provide a placeholder value when destroying (for example, terraform destroy -var="app_version=0.0.0" ).
Since the artifact zip files and the S3 bucket itself were created outside of Terraform, they must also be cleaned up outside
of Terraform. This can be done via the S3 console (https://s3.console.aws.amazon.com/s3/home). Note that all of the objects
in the bucket must be deleted before the bucket itself can be deleted.
Further Reading
The following Terraform resource types are used in this tutorial:
aws_lambda_function (/docs/providers/aws/r/lambda_function.html)
aws_lambda_permission (/docs/providers/aws/r/lambda_permission.html)
aws_api_gateway_rest_api (/docs/providers/aws/r/api_gateway_rest_api.html)
aws_api_gateway_resource (/docs/providers/aws/r/api_gateway_resource.html)
aws_api_gateway_method (/docs/providers/aws/r/api_gateway_method.html)
aws_api_gateway_integration (/docs/providers/aws/r/api_gateway_integration.html)
aws_iam_role (/docs/providers/aws/r/iam_role.html)
The reference page for each resource type provides full details on all of its supported arguments and exported attributes.
Custom Domain Names and TLS Certificates
For the sake of example, this guide uses the test URLs offered by default by API Gateway. In practice, most applications will
be deployed at a custom hostname.
To use a custom domain name you must first register that domain and configure DNS hosting for it. You must also either
create an Amazon Certificate Manager (https://aws.amazon.com/certificate-manager/) certificate or register a TLS certificate
with a third-party certificate authority.
Configuring the domain name is beyond the scope of this tutorial, but if you already have a hostname and TLS certificate
you wish to use then you can register it with API Gateway using the aws_api_gateway_domain_name
(/docs/providers/aws/r/api_gateway_domain_name.html) resource type.
A registered domain name is then mapped to a particular "REST API" object using aws_api_gateway_base_path_mapping
(/docs/providers/aws/r/api_gateway_base_path_mapping.html). The configured domain name then becomes an alias for a
particular deployment stage.
Due to API Gateway's staged deployment model, if you do need to make changes to the API Gateway configuration you must
explicitly request that it be re-deployed by "tainting" the deployment resource:
$ terraform taint aws_api_gateway_deployment.example
This command flags that this object must be re-created in the next Terraform plan, so a subsequent terraform apply will
then replace the deployment and thus activate the latest configuration changes.
Please note that this "re-deployment" will cause some downtime, since Terraform will need to delete the stage and
associated deployment before re-creating it. Downtime can be avoided by triggering the deployment action via the API
Gateway console, outside of Terraform. The approach covered in this guide intentionally minimizes the need to amend the
API Gateway configuration over time to mitigate this limitation. Better support for this workflow will be added to Terraform's
AWS provider in a future release.
Terraform AWS Provider Version 2 Upgrade Guide
Version 2.0.0 of the AWS provider for Terraform is a major release and includes some changes that you will need to consider
when upgrading. This guide is intended to help with that process and focuses only on changes from version 1.60.0 to version
2.0.0.
Most of the changes outlined in this guide have been previously marked as deprecated in the Terraform plan/apply output
throughout previous provider releases. These changes, such as deprecation notices, can always be found in the Terraform
AWS Provider CHANGELOG (https://github.com/terraform-providers/terraform-provider-aws/blob/master/CHANGELOG.md).
Upgrade topics:
Resource: aws_api_gateway_api_key
Resource: aws_api_gateway_integration
Resource: aws_api_gateway_integration_response
Resource: aws_api_gateway_method
Resource: aws_api_gateway_method_response
Resource: aws_appautoscaling_policy
Resource: aws_autoscaling_policy
Resource: aws_batch_compute_environment
Resource: aws_cloudfront_distribution
Resource: aws_cognito_user_pool
Resource: aws_dx_lag
Resource: aws_ecs_service
Resource: aws_elasticache_cluster
Resource: aws_iam_user_login_profile
Resource: aws_instance
Resource: aws_lambda_function
Resource: aws_lambda_layer_version
Resource: aws_network_acl
Resource: aws_redshift_cluster
Resource: aws_route_table
Resource: aws_route53_record
Resource: aws_route53_zone
Resource: aws_wafregional_byte_match_set
Before upgrading to version 2.0.0 or later, it is recommended to upgrade to the most recent 1.X version of the provider
(version 1.60.0) and ensure that your environment successfully runs terraform plan
(https://www.terraform.io/docs/commands/plan.html) without unexpected changes or deprecation notices.
provider "aws" {
provider "aws" {
The provider will now return an error to ensure operators understand the implications of the missing AWS account ID in the
provider.
provider "aws" {
skip_requesting_account_id = true
}
Data Source: aws_kms_secret
As an example, let's take the below sample configuration and migrate it.
name = "master_password"
payload = "AQEC..."
}
secret {
name = "master_username"
payload = "AQEC..."
}
}
master_password = "${data.aws_kms_secret.example.master_password}"
master_username = "${data.aws_kms_secret.example.master_username}"
}
Notice that the aws_kms_secret data source previously was taking the two secret configuration block name arguments
and generating those as attribute names ( master_password and master_username in this case). To remove the
incompatible behavior, this updated version of the data source provides the decrypted value of each of those secret
configuration block name arguments within a map attribute named plaintext .
name = "master_password"
payload = "AQEC..."
}
secret {
name = "master_username"
payload = "AQEC..."
}
}
master_password = "${data.aws_kms_secrets.example.plaintext["master_password"]}"
master_username = "${data.aws_kms_secrets.example.plaintext["master_username"]}"
}
Resource: aws_api_gateway_api_key
The deprecated stage_key argument has been removed. Associate API keys with API stages using the following resources instead:
aws_api_gateway_usage_plan (/docs/providers/aws/r/api_gateway_usage_plan.html)
aws_api_gateway_usage_plan_key (/docs/providers/aws/r/api_gateway_usage_plan_key.html)
resource "aws_api_gateway_api_key" "example" {
  name = "example"

  stage_key {
    rest_api_id = "${aws_api_gateway_rest_api.example.id}"
    stage_name  = "${aws_api_gateway_deployment.example.stage_name}"
  }
}
An updated configuration:
resource "aws_api_gateway_usage_plan" "example" {
  name = "example"

  api_stages {
    api_id = "${aws_api_gateway_rest_api.example.id}"
    stage  = "${aws_api_gateway_deployment.example.stage_name}"
  }
}
Resource: aws_api_gateway_integration
request_parameters_in_json = <<PARAMS
{
"integration.request.header.X-Authorization": "'static'"
}
PARAMS
}
request_parameters = {
"integration.request.header.X-Authorization" = "'static'"
}
}
Resource: aws_api_gateway_integration_response
response_parameters_in_json = <<PARAMS
{
"method.response.header.Content-Type": "integration.response.body.type"
}
PARAMS
}
response_parameters = {
"method.response.header.Content-Type" = "integration.response.body.type"
}
}
Resource: aws_api_gateway_method
request_parameters_in_json = <<PARAMS
{
"method.request.header.Content-Type": false,
"method.request.querystring.page": true
}
PARAMS
}
request_parameters = {
"method.request.header.Content-Type" = false
"method.request.querystring.page" = true
}
}
Resource: aws_api_gateway_method_response
response_parameters_in_json = <<PARAMS
{
"method.response.header.Content-Type": true
}
PARAMS
}
response_parameters = {
"method.response.header.Content-Type" = true
}
}
Resource: aws_appautoscaling_policy
Argument Removals
The following arguments have been moved into a nested argument named step_scaling_policy_configuration :
adjustment_type
cooldown
metric_aggregation_type
min_adjustment_magnitude
step_adjustment
adjustment_type = "ChangeInCapacity"
cooldown = 60
metric_aggregation_type = "Maximum"
step_adjustment {
metric_interval_upper_bound = 0
scaling_adjustment = -1
}
}
step_scaling_policy_configuration {
adjustment_type = "ChangeInCapacity"
cooldown = 60
metric_aggregation_type = "Maximum"
step_adjustment {
metric_interval_upper_bound = 0
scaling_adjustment = -1
}
}
}
Resource: aws_autoscaling_policy
min_adjustment_step = 2
}
min_adjustment_magnitude = 2
}
Resource: aws_batch_compute_environment
Resource: aws_cloudfront_distribution
cache_behavior Argument Removal
Switch your Terraform configuration to the ordered_cache_behavior argument instead. It behaves similarly to the previous
cache_behavior argument; however, the ordering of the configurations in Terraform is now reflected in the distribution,
where previously it was indeterminate.
cache_behavior {
  # ... other configuration ...
}
ordered_cache_behavior {
  # ... other configuration ...
}
Resource: aws_cognito_user_pool
Resource: aws_dx_lag
aws_dx_connection (/docs/providers/aws/r/dx_connection.html)
aws_dx_connection_association (/docs/providers/aws/r/dx_connection_association.html)
Resource: aws_ecs_service
placement_strategy {
  # ... other configuration ...
}
ordered_placement_strategy {
  # ... other configuration ...
}
reference_name = "example"
}
creation_token = "example"
}
Resource: aws_elasticache_cluster
Resource: aws_iam_user_login_profile
Resource: aws_instance
Resource: aws_lambda_function
Previously reserved_concurrent_executions accepted 0 and below for unreserved concurrency, which means it was
not previously possible to disable invocations. The argument now differentiates between a new value for unreserved
concurrency ( -1 ) and disabling Lambda invocations ( 0 ). If previously configuring this value to 0 for unreserved
concurrency, update the configured value to -1 or the resource will disable Lambda Function invocations on update. If
previously unconfigured, the argument does not require any changes.
Resource: aws_lambda_layer_version
Resource: aws_network_acl
subnet_id = "subnet-12345678"
}
subnet_ids = ["subnet-12345678"]
}
Resource: aws_redshift_cluster
Argument Removals
The following arguments have been moved into a nested argument named logging :
bucket_name
s3_key_prefix
bucket_name = "example"
enable_logging = true
s3_key_prefix = "example"
}
logging {
bucket_name = "example"
enable = true
s3_key_prefix = "example"
}
}
Resource: aws_route_table
Import Change
Previously, importing this resource resulted in an aws_route resource for each route, in addition to the aws_route_table ,
in the Terraform state. Support for importing aws_route resources has been added and importing this resource only adds
the aws_route_table resource, with in-line routes, to the state.
Resource: aws_route53_record
For example, if the www.example.com Route 53 Record in the example.com Route 53 Hosted Zone existed previously and
this new Terraform con guration was introduced:
name = "www.example.com"
}
During resource creation in version 1.X and prior, it would silently perform an UPSERT changeset to the existing Route 53
Record and not report back an error. In version 2.0.0 of the Terraform AWS Provider, the resource now performs a CREATE
changeset, which will error for existing Route 53 Records.
The allow_overwrite argument provides a workaround to keep the old behavior, but most existing workflows should be
updated to perform a terraform import command like the following instead:
Resource: aws_route53_zone
vpc_id = "..."
}
vpc {
vpc_id = "..."
}
}
Resource: aws_wafregional_byte_match_set
byte_match_tuple {
  # ... other configuration ...
}
byte_match_tuples {
  # ... other configuration ...
}
Terraform AWS Provider Version 3 Upgrade Guide
NOTE: This upgrade guide is a work in progress and will not be completed until the release of version 3.0.0 of the
provider in the coming months. Many of the topics discussed, except for the actual provider upgrade, can be performed
using the most recent 2.X version of the provider.
Version 3.0.0 of the AWS provider for Terraform is a major release and includes some changes that you will need to consider
when upgrading. This guide is intended to help with that process and focuses only on changes from version 2.X to version
3.0.0.
Most of the changes outlined in this guide have been previously marked as deprecated in the Terraform plan/apply output
throughout previous provider releases. These changes, such as deprecation notices, can always be found in the Terraform
AWS Provider CHANGELOG (https://github.com/terraform-providers/terraform-provider-aws/blob/master/CHANGELOG.md).
Upgrade topics:
Resource: aws_emr_cluster
WARNING: This topic is placeholder documentation until version 3.0.0 is released in the coming months.
Before upgrading to version 3.0.0, it is recommended to upgrade to the most recent 2.X version of the provider and
ensure that your environment successfully runs terraform plan
(https://www.terraform.io/docs/commands/plan.html) without unexpected changes or deprecation notices.
provider "aws" {
Resource: aws_emr_cluster
core_instance_count = 2
}
core_instance_group {
instance_count = 2
}
}
core_instance_type = "m4.large"
}
core_instance_group {
instance_type = "m4.large"
}
}
instance_group {
instance_role = "MASTER"
instance_type = "m4.large"
}
instance_group {
instance_count = 1
instance_role = "CORE"
instance_type = "c4.large"
}
instance_group {
instance_count = 2
instance_role = "TASK"
instance_type = "c4.xlarge"
}
}
master_instance_group {
instance_type = "m4.large"
}
core_instance_group {
instance_count = 1
instance_type = "c4.large"
}
}
master_instance_type = "m4.large"
}
master_instance_group {
instance_type = "m4.large"
}
}
Resource: aws_acm_certificate
The ACM certificate resource allows requesting and management of certificates from the Amazon Certificate Manager.
It deals with requesting certificates and managing their attributes and life-cycle. This resource does not deal with validation
of a certificate but can provide inputs for other resources implementing the validation. It does not wait for a certificate to be
issued. Use an aws_acm_certificate_validation (/docs/providers/aws/r/acm_certificate_validation.html) resource for this.
Domain validation through E-Mail is also supported but should be avoided as it requires a manual step outside of Terraform.
Example Usage
resource "aws_acm_certificate" "cert" {
  domain_name       = "example.com"
  validation_method = "DNS"

  tags = {
    Environment = "test"
  }

  lifecycle {
    create_before_destroy = true
  }
}
subject {
common_name = "example.com"
organization = "ACME Examples, Inc"
}
validity_period_hours = 12
allowed_uses = [
"key_encipherment",
"digital_signature",
"server_auth",
]
}
Argument Reference
The following arguments are supported:
subject_alternative_names - (Optional) A list of domains that should be SANs in the issued certificate
validation_method - (Required) Which method to use for validation. DNS or EMAIL are valid, NONE can be
used for certificates that were imported into ACM and then into Terraform.
options - (Optional) Configuration block used to set certificate options. Detailed below.
tags - (Optional) A mapping of tags to assign to the resource.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
domain_name - The domain name for which the certificate is issued
domain_validation_options - A list of attributes to feed into other resources to complete certificate validation. Can
have more than one element, e.g. if SANs are defined. Only set if DNS -validation was used.
validation_emails - A list of addresses that received a validation E-Mail. Only set if EMAIL -validation was used.
resource_record_name - The name of the DNS record to create to validate the certificate
Import
Certificates can be imported using their ARN, e.g.
Resource: aws_acm_certificate_validation
WARNING: This resource implements a part of the validation workflow. It does not represent a real-world entity in AWS,
therefore changing or deleting this resource on its own has no immediate effect.
Example Usage
certificate_arn = "${aws_acm_certificate_validation.cert.certificate_arn}"
}
validation_record_fqdns = [
"${aws_route53_record.cert_validation.fqdn}",
"${aws_route53_record.cert_validation_alt1.fqdn}",
"${aws_route53_record.cert_validation_alt2.fqdn}",
]
}
certificate_arn = "${aws_acm_certificate_validation.cert.certificate_arn}"
}
Email Validation
In this situation, the resource is simply a waiter for manual email approval of ACM certificates.
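A minimal sketch of that pattern, assuming a certificate for example.com requested with EMAIL validation:
resource "aws_acm_certificate" "cert" {
  domain_name       = "example.com"
  validation_method = "EMAIL"
}

resource "aws_acm_certificate_validation" "example" {
  certificate_arn = "${aws_acm_certificate.cert.arn}"
}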
Argument Reference
The following arguments are supported:
certificate_arn - (Required) The ARN of the certificate that is being validated.
validation_record_fqdns - (Optional) List of FQDNs that implement the validation. Only valid for DNS validation
method ACM certificates. If this is set, the resource can implement additional sanity checks and has an explicit
dependency on the resource that is implementing the validation.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
acm_certificate_validation provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
create - (Default 45m ) How long to wait for a certificate to be issued.
Resource: aws_acmpca_certificate_authority
Provides a resource to manage AWS Certificate Manager Private Certificate Authorities (ACM PCA Certificate Authorities).
NOTE: Creating this resource will leave the certificate authority in a PENDING_CERTIFICATE status, which means it
cannot yet issue certificates. To complete this setup, you must fully sign the certificate authority CSR available in the
certificate_signing_request attribute and import the signed certificate outside of Terraform. Terraform can
support another resource to manage that workflow automatically in the future.
Example Usage
Basic
resource "aws_acmpca_certificate_authority" "example" {
  certificate_authority_configuration {
    key_algorithm     = "RSA_4096"
    signing_algorithm = "SHA512WITHRSA"

    subject {
      common_name = "example.com"
    }
  }

  permanent_deletion_time_in_days = 7
}
resources = [
"${aws_s3_bucket.example.arn}",
"${aws_s3_bucket.example.arn}/*",
]
principals {
identifiers = ["acm-pca.amazonaws.com"]
type = "Service"
}
}
}
subject {
common_name = "example.com"
}
}
revocation_configuration {
crl_configuration {
custom_cname = "crl.example.com"
enabled = true
expiration_in_days = 7
s3_bucket_name = "${aws_s3_bucket.example.id}"
}
}
depends_on = ["aws_s3_bucket_policy.example"]
}
Argument Reference
The following arguments are supported:
certificate_authority_configuration - (Required) Nested argument containing algorithms and certificate
subject information. Defined below.
enabled - (Optional) Whether the certificate authority is enabled or disabled. Defaults to true .
revocation_configuration - (Optional) Nested argument containing revocation configuration. Defined below.
tags - (Optional) Specifies a key-value map of user-defined tags that are attached to the certificate authority.
type - (Optional) The type of the certificate authority. Defaults to SUBORDINATE . Valid values: ROOT and
SUBORDINATE .
permanent_deletion_time_in_days - (Optional) The number of days to make a CA restorable after it has been
deleted; must be between 7 and 30 days, with a default of 30 days.
signing_algorithm - (Required) Name of the algorithm your private CA uses to sign certificate requests. Valid values
can be found in the ACM PCA Documentation (https://docs.aws.amazon.com/acm-
pca/latest/APIReference/API_CertificateAuthorityConfiguration.html).
subject - (Required) Nested argument that contains X.500 distinguished name information. At least one nested
attribute must be specified.
subject
Contains information about the certificate subject. Identifies the entity that owns or controls the public key in the certificate.
The entity can be a user, computer, device, or service.
common_name - (Optional) Fully qualified domain name (FQDN) associated with the certificate subject.
country - (Optional) Two digit code that specifies the country in which the certificate subject is located.
generation_qualifier - (Optional) Typically a qualifier appended to the name of an individual. Examples include Jr.
for junior, Sr. for senior, and III for third.
initials - (Optional) Concatenation that typically contains the first letter of the given_name , the first letter of the
middle name if one exists, and the first letter of the surname .
locality - (Optional) The locality (such as a city or town) in which the certificate subject is located.
organization - (Optional) Legal name of the organization with which the certificate subject is affiliated.
organizational_unit - (Optional) A subdivision or unit of the organization (such as sales or finance) with which the
certificate subject is affiliated.
pseudonym - (Optional) Typically a shortened version of a longer given_name . For example, Jonathan is often
shortened to John. Elizabeth is often shortened to Beth, Liz, or Eliza.
state - (Optional) State in which the subject of the certificate is located.
surname - (Optional) Family name. In the US and the UK for example, the surname of an individual is ordered last. In
Asian cultures the surname is typically ordered first.
title - (Optional) A title such as Mr. or Ms. which is prepended to the name to refer formally to the certificate
subject.
revocation_configuration
crl_configuration - (Optional) Nested argument containing configuration of the certificate revocation list (CRL), if
any, maintained by the certificate authority. Defined below.
crl_configuration
custom_cname - (Optional) Name inserted into the certificate CRL Distribution Points extension that enables the use
of an alias for the CRL distribution point. Use this value if you don't want the name of your S3 bucket to be public.
enabled - (Optional) Boolean value that specifies whether certificate revocation lists (CRLs) are enabled. Defaults to
false .
expiration_in_days - (Required) Number of days until a certificate expires. Must be between 1 and 5000.
s3_bucket_name - (Optional) Name of the S3 bucket that contains the CRL. If you do not provide a value for the
custom_cname argument, the name of your S3 bucket is placed into the CRL Distribution Points extension of the
issued certificate. You must specify a bucket policy that allows ACM PCA to write the CRL to your bucket.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
certificate - Base64-encoded certificate authority (CA) certificate. Only available after the certificate authority
certificate has been imported.
certificate_chain - Base64-encoded certificate chain that includes any intermediate certificates and chains up to
root on-premises certificate that you used to sign your private CA certificate. The chain does not include your private
CA certificate. Only available after the certificate authority certificate has been imported.
certificate_signing_request - The base64 PEM-encoded certificate signing request (CSR) for your private CA
certificate.
not_after - Date and time after which the certificate authority is not valid. Only available after the certificate
authority certificate has been imported.
not_before - Date and time before which the certificate authority is not valid. Only available after the certificate
authority certificate has been imported.
serial - Serial number of the certificate authority. Only available after the certificate authority certificate has been
imported.
Timeouts
aws_acmpca_certificate_authority provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
create - (Default 1m ) How long to wait for a certificate authority to be created.
Import
aws_acmpca_certificate_authority can be imported by using the certificate authority Amazon Resource Name (ARN),
e.g.
Resource: aws_ami_copy
If the source AMI has associated EBS snapshots, those will also be duplicated along with the AMI.
This is useful for taking a single AMI provisioned in one region and making it available in another for a multi-region
deployment.
Copying an AMI can take several minutes. The creation of this resource will block until the new AMI is available for use on
new instances.
Example Usage
resource "aws_ami_copy" "example" {
  name              = "terraform-example"
  source_ami_id     = "ami-xxxxxxxx"
  source_ami_region = "us-west-1"
  tags = {
    Name = "HelloWorld"
  }
}
Argument Reference
The following arguments are supported:
source_ami_id - (Required) The id of the AMI to copy. This id must be valid in the region given by
source_ami_region .
source_ami_region - (Required) The region from which the AMI will be copied. This may be the same as the AWS
provider region in order to create a copy within the same region.
encrypted - (Optional) Specifies whether the destination snapshots of the copied image should be encrypted.
Defaults to false
kms_key_id - (Optional) The full ARN of the KMS Key to use when encrypting the snapshots of an image during a copy
operation. If not specified, then the default AWS KMS Key will be used
This resource also exposes the full set of arguments from the aws_ami (/docs/providers/aws/r/ami.html) resource.
Timeouts
The timeouts block allows you to specify timeouts
(https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
This resource also exports a full set of attributes corresponding to the arguments of the aws_ami
(/docs/providers/aws/r/ami.html) resource, allowing the properties of the created AMI to be used elsewhere in the
configuration.
Resource: aws_ami_from_instance
The "AMI from instance" resource allows the creation of an Amazon Machine Image (AMI) modelled after an existing EBS-
backed EC2 instance.
The created AMI will refer to implicitly-created snapshots of the instance's EBS volumes and mimic its assigned block device
configuration at the time the resource is created.
This resource is best applied to an instance that is stopped when this instance is created, so that the contents of the created
image are predictable. When applied to an instance that is running, the instance will be stopped before taking the snapshots
and then started back up again, resulting in a period of downtime.
Note that the source instance is inspected only at the initial creation of this resource. Ongoing updates to the referenced
instance will not be propagated into the generated AMI. Users may taint or otherwise recreate the resource in order to
produce a fresh snapshot.
Example Usage
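A minimal example, assuming an existing stopped instance (the instance ID below is a placeholder):
resource "aws_ami_from_instance" "example" {
  name               = "terraform-example"
  source_instance_id = "i-xxxxxxxx"
}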
Argument Reference
The following arguments are supported:
source_instance_id - (Required) The id of the instance to use as the basis of the AMI.
snapshot_without_reboot - (Optional) Boolean that overrides the behavior of stopping the instance before
snapshotting. This is risky since it may cause a snapshot of an inconsistent filesystem state, but can be used to avoid
downtime if the user otherwise guarantees that no filesystem writes will be underway at the time of snapshot.
Timeouts
The timeouts block allows you to specify timeouts
(https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
This resource also exports a full set of attributes corresponding to the arguments of the aws_ami resource, allowing the
properties of the created AMI to be used elsewhere in the configuration.
Resource: aws_ami
The AMI resource allows the creation and management of a completely-custom Amazon Machine Image (AMI).
If you just want to duplicate an existing AMI, possibly copying it to another region, it's better to use aws_ami_copy instead.
If you just want to share an existing AMI with another AWS account, it's better to use aws_ami_launch_permission instead.
Example Usage
resource "aws_ami" "example" {
  name                = "terraform-example"
  virtualization_type = "hvm"
  root_device_name    = "/dev/xvda"

  ebs_block_device {
    device_name = "/dev/xvda"
    snapshot_id = "snap-xxxxxxxx"
    volume_size = 8
  }
}
Argument Reference
The following arguments are supported:
ena_support - (Optional) Specifies whether enhanced networking with ENA is enabled. Defaults to false .
root_device_name - (Optional) The name of the root device (for example, /dev/sda1 , or /dev/xvda ).
virtualization_type - (Optional) Keyword to choose what virtualization mode created instances will use. Can be
either "paravirtual" (the default) or "hvm". The choice of virtualization type changes the set of further arguments that
are required, as described below.
ebs_block_device - (Optional) Nested block describing an EBS block device that should be attached to created
instances. The structure of this block is described below.
ephemeral_block_device - (Optional) Nested block describing an ephemeral block device that should be attached to
created instances. The structure of this block is described below.
image_location - (Required) Path to an S3 object containing an image manifest, e.g. created by the ec2-upload-
bundle command in the EC2 command line tools.
kernel_id - (Required) The id of the kernel image (AKI) that will be used as the paravirtual kernel in created
instances.
ramdisk_id - (Optional) The id of an initrd image (ARI) that will be used when booting the created instances.
sriov_net_support - (Optional) When set to "simple" (the default), enables enhanced networking for created
instances. No other value is supported at this time.
device_name - (Required) The path at which the device is exposed to created instances.
delete_on_termination - (Optional) Boolean controlling whether the EBS volumes created to support each created
instance will be deleted once that instance is terminated.
encrypted - (Optional) Boolean controlling whether the created EBS volumes will be encrypted. Can't be used with
snapshot_id .
iops - (Required only when volume_type is "io1") Number of I/O operations per second the created volumes will
support.
snapshot_id - (Optional) The id of an EBS snapshot that will be used to initialize the created EBS volumes. If set, the
volume_size attribute must be at least as large as the referenced snapshot.
volume_size - (Required unless snapshot_id is set) The size of created volumes in GiB. If snapshot_id is set and
volume_size is omitted then the volume will have the same size as the selected snapshot.
volume_type - (Optional) The type of EBS volume to create. Can be one of "standard" (the default), "io1" or "gp2".
kms_key_id - (Optional) The full ARN of the AWS Key Management Service (AWS KMS) CMK to use when encrypting
the snapshots of an image during a copy operation. This parameter is only required if you want to use a non-default
CMK; if this parameter is not specified, the default CMK for EBS is used
device_name - (Required) The path at which the device is exposed to created instances.
virtual_name - (Required) A name for the ephemeral device, of the form "ephemeralN" where N is a volume number
starting from zero.
Timeouts
The timeouts block allows you to specify timeouts
(https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
root_snapshot_id - The Snapshot ID for the root volume (for EBS-backed AMIs)
Import
aws_ami can be imported using the ID of the AMI, e.g.
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Note: As there is no API method for deleting account settings or resetting it to defaults, destroying this resource will
keep your account settings intact
Example Usage
resource "aws_api_gateway_account" "demo" {
cloudwatch_role_arn = "${aws_iam_role.cloudwatch.arn}"
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents",
"logs:GetLogEvents",
"logs:FilterLogEvents"
],
"Resource": "*"
}
]
}
EOF
}
Argument Reference
The following argument is supported:
cloudwatch_role_arn - (Optional) The ARN of an IAM role for CloudWatch (to allow logging & monitoring). See more
in AWS Docs (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-stage-settings.html#how-to-
stage-settings-console). Logging & monitoring can be enabled/disabled and otherwise tuned on the API Gateway Stage
level.
Attribute Reference
The following attribute is exported:
burst_limit - The absolute maximum number of times API Gateway allows the API to be called per second (RPS).
rate_limit - The number of times API Gateway allows the API to be called per second on average (RPS).
Import
API Gateway Accounts can be imported using the word api-gateway-account , e.g.
NOTE: Since the API Gateway usage plans feature was launched on August 11, 2016, usage plans are now required to
associate an API key with an API stage.
Example Usage
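A minimal example (the resource and key names are illustrative):
resource "aws_api_gateway_api_key" "MyDemoApiKey" {
  name = "demo"
}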
Argument Reference
The following arguments are supported:
enabled - (Optional) Specifies whether the API key can be used by callers. Defaults to true .
value - (Optional) The value of the API key. If not specified, it will be automatically generated by AWS on creation.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Import
API Gateway Keys can be imported using the id , e.g.
Resource: aws_api_gateway_authorizer
Provides an API Gateway Authorizer.
Example Usage
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "lambda:InvokeFunction",
"Effect": "Allow",
"Resource": "${aws_lambda_function.authorizer.arn}"
}
]
}
EOF
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
source_code_hash = "${filebase64sha256("lambda-
- function.zip
zip")}"
}
Argument Reference
The following arguments are supported:
authorizer_uri - (Optional, required for type TOKEN / REQUEST ) The authorizer's Uniform Resource Identifier (URI). This must be a well-formed
Lambda function URI in the form of arn:aws:apigateway:{region}:lambda:path/{service_api} , e.g. arn:aws:apigateway:us-west-
2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-west-2:012345678912:function:my-function/invocations
identity_source - (Optional) The source of the identity in an incoming request. Defaults to method.request.header.Authorization . For
REQUEST type, this may be a comma-separated list of values, including headers, query string parameters and stage variables - e.g.
"method.request.header.SomeHeaderName,method.request.querystring.SomeQueryStringName,stageVariables.SomeStageVariableName"
type - (Optional) The type of the authorizer. Possible values are TOKEN for a Lambda function using a single authorization token submitted in a
custom header, REQUEST for a Lambda function using incoming request parameters, or COGNITO_USER_POOLS for using an Amazon Cognito user
pool. Defaults to TOKEN .
authorizer_credentials - (Optional) The credentials required for the authorizer. To specify an IAM Role for API Gateway to assume, use the
IAM Role ARN.
authorizer_result_ttl_in_seconds - (Optional) The TTL of cached authorizer results in seconds. Defaults to 300 .
identity_validation_expression - (Optional) A validation expression for the incoming identity. For TOKEN type, this value should be a regular
expression. The incoming token from the client is matched against this expression, and will proceed if the token matches. If the token doesn't
match, the client receives a 401 Unauthorized response.
provider_arns - (Optional, required for type COGNITO_USER_POOLS ) A list of the Amazon Cognito user pool ARNs. Each element is of this
format: arn:aws:cognito-idp:{region}:{account_id}:userpool/{user_pool_id} .
Resource: aws_api_gateway_base_path_mapping
Connects a custom domain name registered via aws_api_gateway_domain_name with a deployed API so that its methods
can be called via the custom domain name.
Example Usage
rest_api_id = "${aws_api_gateway_rest_api.MyDemoAPI.id}"
stage_name = "live"
}
certificate_name = "example-api"
certificate_body = "${file("${path.module
module}/
/ example.com
com/
/ example.crt
crt")}"
certificate_chain = "${file("${path.module
module}/
/ example.com
com/
/ ca.crt
crt")}"
certificate_private_key = "${file("${path.module
module}/
/ example.com
com/
/ example.key
key")}"
}
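A mapping that exposes the "live" stage of the deployment above under the registered domain name might then look like the following (resource names are illustrative; base_path is omitted so the API is exposed at the root of the domain):
resource "aws_api_gateway_base_path_mapping" "test" {
  api_id      = "${aws_api_gateway_rest_api.MyDemoAPI.id}"
  stage_name  = "${aws_api_gateway_deployment.example.stage_name}"
  domain_name = "${aws_api_gateway_domain_name.example.domain_name}"
}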
Argument Reference
The following arguments are supported:
domain_name - (Required) The already-registered domain name to connect the API to.
stage_name - (Optional) The name of a specific deployment stage to expose at the given path. If omitted, callers may
select any stage by including its name as a path element after the base path.
base_path - (Optional) Path segment that must be prepended to the path when accessing the API via this mapping. If
omitted, the API is exposed at the root of the given domain.
Import
aws_api_gateway_base_path_mapping can be imported by using the domain name and base path, e.g.
Otherwise:
Resource: aws_api_gateway_client_certificate
Example Usage
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to all arguments above, the following attributes are exported:
created_date - The date when the client certificate was created.
expiration_date - The date when the client certificate will expire.
Import
API Gateway Client Certificates can be imported using the id, e.g.
Resource: aws_api_gateway_deployment
Note: Depends on having aws_api_gateway_integration inside your rest api (which in turn depends on
aws_api_gateway_method ). To avoid race conditions you might need to add an explicit depends_on =
["aws_api_gateway_integration.name"] .
Example Usage
rest_api_id = "${aws_api_gateway_rest_api.MyDemoAPI.id}"
stage_name = "test"
variables = {
"answer" = "42"
}
}
Argument Reference
The following arguments are supported:
rest_api_id - (Required) The ID of the associated REST API
stage_name - (Optional) The name of the stage. If the specified stage already exists, it will be updated to point to the
new deployment. If the stage does not exist, a new one will be created and point to this deployment.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
invoke_url - The URL to invoke the API pointing to the stage, e.g. https://z4675bid1j.execute-api.eu-west-
2.amazonaws.com/prod
Resource: aws_api_gateway_documentation_part
Example Usage
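A sketch of a documentation part attached to a method (the names and values are illustrative):
resource "aws_api_gateway_documentation_part" "example" {
  location {
    type   = "METHOD"
    method = "GET"
    path   = "/example"
  }

  properties  = "{\"description\":\"Example description\"}"
  rest_api_id = "${aws_api_gateway_rest_api.example.id}"
}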
Argument Reference
The following argument is supported:
location - (Required) The location of the targeted API entity of the to-be-created documentation part. See below.
properties - (Required) A content map of API-specific key-value pairs describing the targeted API entity. The map
must be encoded as a JSON string, e.g., "{ \"description\": \"The API does ...\" }". Only Swagger-compliant key-value
pairs can be exported and, hence, published.
Nested fields
location
See supported entity types for each field in the official docs (https://docs.aws.amazon.com/apigateway/api-
reference/resource/documentation-part/).
method - (Optional) The HTTP verb of a method. The default value is * for any method.
path - (Optional) The URL path of the target. The default value is / for the root resource.
status_code - (Optional) The HTTP status code of a response. The default value is * for any status code.
type - (Required) The type of API entity to which the documentation content applies. e.g. API , METHOD or
REQUEST_BODY
Attribute Reference
The following attribute is exported in addition to the arguments listed above:
Import
API Gateway documentation_parts can be imported using REST-API-ID/DOC-PART-ID , e.g.
Resource: aws_api_gateway_documentation_version
Example Usage
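A sketch of a documentation version, together with the start of the documentation part it depends on (names and values are illustrative; the part's remaining arguments follow below):
resource "aws_api_gateway_documentation_version" "example" {
  version     = "example_version"
  rest_api_id = "${aws_api_gateway_rest_api.example.id}"
  description = "Example description"

  depends_on = ["aws_api_gateway_documentation_part.example"]
}

resource "aws_api_gateway_documentation_part" "example" {
  location {
    type = "API"
  }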
properties = "{\"description\":\"Example\"}"
rest_api_id = "${aws_api_gateway_rest_api.example.id}"
}
Argument Reference
The following argument is supported:
Attribute Reference
The arguments listed above are all exported as attributes.
Import
API Gateway documentation versions can be imported using REST-API-ID/VERSION , e.g.
This resource just establishes ownership of and the TLS settings for a particular domain name. An API can be attached to a
particular path under the registered domain name using the aws_api_gateway_base_path_mapping resource
(/docs/providers/aws/r/api_gateway_base_path_mapping.html).
API Gateway domains can be defined as either 'edge-optimized' or 'regional'. In an edge-optimized configuration, API
Gateway internally creates and manages a CloudFront distribution to route requests on the given hostname. In addition to
this resource it's necessary to create a DNS record corresponding to the given domain name which is an alias (either
Route53 alias or traditional CNAME) to the Cloudfront domain name exported in the cloudfront_domain_name attribute.
In a regional configuration, API Gateway does not create a CloudFront distribution to route requests to the API, though a
distribution can be created if needed. In either case, it is necessary to create a DNS record corresponding to the given
domain name which is an alias (either Route53 alias or traditional CNAME) to the regional domain name exported in the
regional_domain_name attribute.
Note: API Gateway requires the use of AWS Certificate Manager (ACM) certificates instead of Identity and Access
Management (IAM) certificates in regions that support ACM. Regions that support ACM can be found in the Regions and
Endpoints Documentation (https://docs.aws.amazon.com/general/latest/gr/rande.html#acm_region). To import an
existing private key and certificate into ACM or request an ACM certificate, see the aws_acm_certificate resource
(/docs/providers/aws/r/acm_certificate.html).
Note: All arguments including the private key will be stored in the raw state as plain-text. Read more about sensitive
data in state (/docs/state/sensitive-data.html).
Example Usage
alias {
evaluate_target_health = true
name = "${aws_api_gateway_domain_name.example.cloudfront_domain_name}"
zone_id = "${aws_api_gateway_domain_name.example.cloudfront_zone_id}"
}
}
certificate_name = "example-api"
certificate_body = "${file("${path.module
module}/
/ example.com
com/
/ example.crt
crt")}"
certificate_chain = "${file("${path.module
module}/
/ example.com
com/
/ ca.crt
crt")}"
certificate_private_key = "${file("${path.module
module}/
/ example.com
com/
/ example.key
key")}"
}
name = "${aws_api_gateway_domain_name.example.domain_name}"
type = "A"
alias {
name = "${aws_api_gateway_domain_name.example.cloudfront_domain_name}"
zone_id = "${aws_api_gateway_domain_name.example.cloudfront_zone_id}"
evaluate_target_health = true
}
}
endpoint_configuration {
types = ["REGIONAL"]
}
}
alias {
evaluate_target_health = true
name = "${aws_api_gateway_domain_name.example.regional_domain_name}"
zone_id = "${aws_api_gateway_domain_name.example.regional_zone_id}"
}
}
endpoint_configuration {
types = ["REGIONAL"]
}
}
alias {
evaluate_target_health = true
name = "${aws_api_gateway_domain_name.example.regional_domain_name}"
zone_id = "${aws_api_gateway_domain_name.example.regional_zone_id}"
}
}
Argument Reference
The following arguments are supported:
endpoint_configuration - (Optional) Configuration block defining API endpoint information including type. Defined
below.
security_policy - (Optional) The Transport Layer Security (TLS) version + cipher suite for this DomainName. The
valid values are TLS_1_0 and TLS_1_2 . Must be configured to perform drift detection.
When referencing an AWS-managed certificate, the following arguments are supported:
certificate_arn - (Optional) The ARN for an AWS-managed certificate. AWS Certificate Manager is the only
supported source. Used when an edge-optimized domain name is desired. Conflicts with certificate_name ,
certificate_body , certificate_chain , certificate_private_key , regional_certificate_arn , and
regional_certificate_name .
regional_certificate_arn - (Optional) The ARN for an AWS-managed certificate. AWS Certificate Manager is the
only supported source. Used when a regional domain name is desired. Conflicts with certificate_arn ,
certificate_name , certificate_body , certificate_chain , and certificate_private_key .
certificate_name - (Optional) The unique name to use when registering this certificate as an IAM server certificate.
Conflicts with certificate_arn , regional_certificate_arn , and regional_certificate_name . Required if
certificate_arn is not set.
certificate_body - (Optional) The certificate issued for the domain name being registered, in PEM format. Only
valid for EDGE endpoint configuration type. Conflicts with certificate_arn , regional_certificate_arn , and
regional_certificate_name .
certificate_chain - (Optional) The certificate for the CA that issued the certificate, along with any intermediate CA
certificates required to create an unbroken chain to a certificate trusted by the intended API clients. Only valid for
EDGE endpoint configuration type. Conflicts with certificate_arn , regional_certificate_arn , and
regional_certificate_name .
certificate_private_key - (Optional) The private key associated with the domain certificate given in
certificate_body . Only valid for EDGE endpoint configuration type. Conflicts with certificate_arn ,
regional_certificate_arn , and regional_certificate_name .
regional_certificate_name - (Optional) The user-friendly name of the certificate that will be used by regional
endpoint for this domain name. Conflicts with certificate_arn , certificate_name , certificate_body ,
certificate_chain , and certificate_private_key .
endpoint_configuration
types - (Required) A list of endpoint types. This resource currently only supports managing a single value. Valid
values: EDGE or REGIONAL . If unspecified, defaults to EDGE . Must be declared as REGIONAL in non-Commercial
partitions. Refer to the documentation (https://docs.aws.amazon.com/apigateway/latest/developerguide/create-
regional-api.html) for more information on the difference between edge-optimized and regional APIs.
Attributes Reference
In addition to the arguments, the following attributes are exported:
certificate_upload_date - The upload date associated with the domain certificate.
cloudfront_domain_name - The hostname created by Cloudfront to represent the distribution that implements this
domain name mapping.
cloudfront_zone_id - For convenience, the hosted zone ID ( Z2FDTNDATAQYW2 ) that can be used to create a Route53
alias record for the distribution.
regional_zone_id - The hosted zone ID that can be used to create a Route53 alias record for the regional endpoint.
Import
API Gateway domain names can be imported using their name , e.g.
Example Usage
resource "aws_api_gateway_gateway_response" "test" {
  rest_api_id   = "${aws_api_gateway_rest_api.main.id}"
  status_code   = "401"
  response_type = "UNAUTHORIZED"

  response_templates = {
    "application/json" = "{'message':$context.error.messageString}"
  }

  response_parameters = {
    "gatewayresponse.header.Authorization" = "'Basic'"
  }
}
Argument Reference
The following arguments are supported:
response_templates - (Optional) A map specifying the templates used to transform the response body.
response_parameters - (Optional) A map specifying the parameters (paths, query strings and headers) of the
Gateway Response.
Import
aws_api_gateway_gateway_response can be imported using REST-API-ID/RESPONSE-TYPE , e.g.
Example Usage
request_parameters = {
"integration.request.header.X-Authorization" = "'static'"
}
request_templates = {
"application/xml" = <<EOF
{
"body" : $input.json('$')
}
EOF
}
}
Lambda integration
variable "myregion" {}
variable "myregion" {}
variable "accountId" {}
source_arn = "arn:aws:execute-api:${var.myregion}:${var.accountId}:${aws_api_gateway_rest_api.api.id}/*
/${aws_api_gateway_method.method.http_method}${aws_api_gateway_resource.resource.path}"
}
source_code_hash = "${filebase64sha256("lambda.zip
zip")}"
}
VPC Link
variable "name" {}
variable "subnet_id" {}
request_models = {
"application/json" = "Error"
}
}
resource "aws_api_gateway_integration" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
resource_id = "${aws_api_gateway_resource.test.id}"
http_method = "${aws_api_gateway_method.test.http_method}"
request_templates = {
"application/json" = ""
"application/xml" = "#set($inputRoot = $input.path('$'))\n{ }"
}
request_parameters = {
"integration.request.header.X-Authorization" = "'static'"
"integration.request.header.X-Foo" = "'Bar'"
}
type = "HTTP"
uri = "https://www.google.de"
integration_http_method = "GET"
passthrough_behavior = "WHEN_NO_MATCH"
content_handling = "CONVERT_TO_TEXT"
connection_type = "VPC_LINK"
connection_id = "${aws_api_gateway_vpc_link.test.id}"
}
Argument Reference
The following arguments are supported:
http_method - (Required) The HTTP method ( GET , POST , PUT , DELETE , HEAD , OPTIONS , ANY ) when calling the
associated resource.
integration_http_method - (Optional) The integration HTTP method ( GET , POST , PUT , DELETE , HEAD , OPTIONS ,
ANY , PATCH ) specifying how API Gateway will interact with the back end. Required if type is AWS , AWS_PROXY ,
HTTP or HTTP_PROXY . Not all methods are compatible with all AWS integrations. e.g. Lambda function can only be
invoked (https://github.com/awslabs/aws-apigateway-importer/issues/9#issuecomment-129651005) via POST .
connection_id - (Optional) The id of the VpcLink used for the integration. Required if connection_type is
VPC_LINK
uri - (Optional) The input's URI. Required if type is AWS , AWS_PROXY , HTTP or HTTP_PROXY . For HTTP
integrations, the URI must be a fully formed, encoded HTTP(S) URL according to the RFC-3986 specification. For AWS
integrations, the URI should be of the form arn:aws:apigateway:{region}:{subdomain.service|service}:
{path|action}/{service_api} . region , subdomain and service are used to determine the right endpoint. e.g.
arn:aws:apigateway:eu-west-1:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-west-
1:012345678901:function:my-func/invocations
credentials - (Optional) The credentials required for the integration. For AWS integrations, 2 options are available.
To specify an IAM Role for Amazon API Gateway to assume, use the role's ARN. To require that the caller's identity be
passed through from the request, specify the string arn:aws:iam::\*:user/\* .
request_parameters - (Optional) A map of request query string parameters and headers that should be passed to
the backend responder. For example: request_parameters = { "integration.request.header.X-Some-Other-
Header" = "method.request.header.X-Some-Header" }
content_handling - (Optional) Specifies how to handle request payload content type conversions. Supported values
are CONVERT_TO_BINARY and CONVERT_TO_TEXT . If this property is not defined, the request payload will be passed
through from the method request to integration request without modification, provided that the
passthroughBehaviors is configured to support payload pass-through.
timeout_milliseconds - (Optional) Custom timeout between 50 and 29,000 milliseconds. The default value is 29,000
milliseconds.
Import
aws_api_gateway_integration can be imported using REST-API-ID/RESOURCE-ID/HTTP-METHOD , e.g.
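For example, with placeholder REST API, resource and method values:
$ terraform import aws_api_gateway_integration.example 12345abcde/67890fghij/GET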
Note: Depends on having aws_api_gateway_integration inside your rest api. To ensure this you might need to add
an explicit depends_on for clean runs.
Example Usage
resource "aws_api_gateway_rest_api" "MyDemoAPI" {
name = "MyDemoAPI"
description = "This is my API for demonstration purposes"
}
response_templates = {
"application/xml" = <<EOF
#set($inputRoot = $input.path('$'))
<?xml version="1.0" encoding="UTF-8"?>
<message>
$inputRoot.body
</message>
EOF
}
}
Argument Reference
The following arguments are supported:
http_method - (Required) The HTTP method ( GET , POST , PUT , DELETE , HEAD , OPTIONS , ANY )
selection_pattern - (Optional) Specifies the regular expression pattern used to choose an integration response
based on the response from the backend. Setting this to - makes the integration the default one. If the backend is an
AWS Lambda function, the AWS Lambda function error header is matched. For all other HTTP and AWS backends, the
HTTP status code is matched.
response_templates - (Optional) A map specifying the templates used to transform the integration response body
response_parameters - (Optional) A map of response parameters that can be read from the backend response. For
example: response_parameters = { "method.response.header.X-Some-Header" =
"integration.response.header.X-Some-Other-Header" }
content_handling - (Optional) Specifies how to handle request payload content type conversions. Supported values
are CONVERT_TO_BINARY and CONVERT_TO_TEXT . If this property is not defined, the response payload will be passed
through from the integration response to the method response without modification.
Import
aws_api_gateway_integration_response can be imported using REST-API-ID/RESOURCE-ID/HTTP-METHOD/STATUS-
CODE , e.g.
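For example, with placeholder IDs and status code:
$ terraform import aws_api_gateway_integration_response.example 12345abcde/67890fghij/GET/200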
Example Usage
request_parameters = {
"method.request.path.proxy" = true
}
}
Argument Reference
The following arguments are supported:
http_method - (Required) The HTTP Method ( GET , POST , PUT , DELETE , HEAD , OPTIONS , ANY )
authorization - (Required) The type of authorization used for the method ( NONE , CUSTOM , AWS_IAM ,
COGNITO_USER_POOLS )
authorizer_id - (Optional) The authorizer id to be used when the authorization is CUSTOM or COGNITO_USER_POOLS
authorization_scopes - (Optional) The authorization scopes used when the authorization is COGNITO_USER_POOLS
request_models - (Optional) A map of the API models used for the request's content type where key is the content
type (e.g. application/json ) and value is either Error , Empty (built-in models) or aws_api_gateway_model 's
name .
request_parameters - (Optional) A map of request parameters (from the path, query string and headers) that should
be passed to the integration. The boolean value indicates whether the parameter is required ( true ) or optional
( false ). For example: request_parameters = {"method.request.header.X-Some-Header" = true
"method.request.querystring.some-query-param" = true} would de ne that the header X-Some-Header and
the query string some-query-param must be provided in the request.
Import
aws_api_gateway_method can be imported using REST-API-ID/RESOURCE-ID/HTTP-METHOD , e.g.
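For example, with placeholder IDs:
$ terraform import aws_api_gateway_method.example 12345abcde/67890fghij/GET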
Example Usage
Argument Reference
The following arguments are supported:
http_method - (Required) The HTTP Method ( GET , POST , PUT , DELETE , HEAD , OPTIONS , ANY )
response_models - (Optional) A map of the API models used for the response's content type
response_parameters - (Optional) A map of response parameters that can be sent to the caller. For example:
response_parameters = { "method.response.header.X-Some-Header" = true } would define that the header
X-Some-Header can be provided on the response.
Import
aws_api_gateway_method_response can be imported using REST-API-ID/RESOURCE-ID/HTTP-METHOD/STATUS-CODE , e.g.
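For example, with placeholder IDs:
$ terraform import aws_api_gateway_method_response.example 12345abcde/67890fghij/GET/200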
Example Usage
resource "aws_api_gateway_method_settings" "s" {
rest_api_id = "${aws_api_gateway_rest_api.test.id}"
stage_name = "${aws_api_gateway_stage.test.stage_name}"
method_path = "${aws_api_gateway_resource.test.path_part}/${aws_api_gateway_method.test.http_method}"
settings {
metrics_enabled = true
logging_level = "INFO"
}
}
request_templates = {
"application/xml" = <<EOF
{
"body" : $input.json('$')
}
EOF
}
}
Argument Reference
The following arguments are supported:
settings
metrics_enabled - (Optional) Specifies whether Amazon CloudWatch metrics are enabled for this method.
logging_level - (Optional) Specifies the logging level for this method, which affects the log entries pushed to
Amazon CloudWatch Logs. The available levels are OFF , ERROR , and INFO .
data_trace_enabled - (Optional) Specifies whether data trace logging is enabled for this method, which affects the
log entries pushed to Amazon CloudWatch Logs.
caching_enabled - (Optional) Specifies whether responses should be cached and returned for requests. A cache
cluster must be enabled on the stage for responses to be cached.
cache_ttl_in_seconds - (Optional) Specifies the time to live (TTL), in seconds, for cached responses. The higher the
TTL, the longer the response will be cached.
Example Usage
schema = <<EOF
{
"type": "object"
}
EOF
}
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
aws_api_gateway_model can be imported using REST-API-ID/NAME , e.g.
$ terraform import aws_api_gateway_model.example 12345abcde/example
Resource: aws_api_gateway_request_validator
Manages an API Gateway Request Validator.
Example Usage
Argument Reference
The following argument is supported:
Attribute Reference
The following attribute is exported in addition to the arguments listed above:
Import
aws_api_gateway_request_validator can be imported using REST-API-ID/REQUEST-VALIDATOR-ID , e.g.
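For example, with placeholder IDs:
$ terraform import aws_api_gateway_request_validator.example 12345abcde/67890fghij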
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
path - The complete path for this API resource, including all parent paths.
Import
aws_api_gateway_resource can be imported using REST-API-ID/RESOURCE-ID , e.g.
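For example, with placeholder IDs:
$ terraform import aws_api_gateway_resource.example 12345abcde/67890fghij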
Example Usage
Basic
endpoint_configuration {
types = ["REGIONAL"]
}
}
Argument Reference
The following arguments are supported:
endpoint_configuration - (Optional) Nested argument defining API endpoint configuration including endpoint type.
Defined below.
binary_media_types - (Optional) The list of binary media types supported by the RestApi. By default, the RestApi
supports only UTF-8-encoded text payloads.
minimum_compression_size - (Optional) Minimum response size to compress for the REST API. Integer between -1
and 10485760 (10MB). Setting a value greater than -1 will enable compression, -1 disables compression (default).
body - (Optional) An OpenAPI specification that defines the set of routes and integrations to create as part of the
REST API.
policy - (Optional) JSON formatted policy document that controls access to the API Gateway. For more information
about building AWS IAM policy documents with Terraform, see the AWS IAM Policy Document Guide
(/docs/providers/aws/guides/iam-policy-documents.html)
api_key_source - (Optional) The source of the API key for requests. Valid values are HEADER (default) and
AUTHORIZER.
Note: If the body argument is provided, the OpenAPI specification will be used to configure the resources, methods and
integrations for the Rest API. If this argument is provided, the following resources should not be managed as separate ones,
as updates may cause manual resource updates to be overwritten:
aws_api_gateway_resource
aws_api_gateway_method
aws_api_gateway_method_response
aws_api_gateway_method_settings
aws_api_gateway_integration
aws_api_gateway_integration_response
aws_api_gateway_gateway_response
aws_api_gateway_model
endpoint_configuration
types - (Required) A list of endpoint types. This resource currently only supports managing a single value. Valid
values: EDGE , REGIONAL or PRIVATE . If unspecified, defaults to EDGE . Must be declared as REGIONAL in non-
Commercial partitions. Refer to the documentation
(https://docs.aws.amazon.com/apigateway/latest/developerguide/create-regional-api.html) for more information on
the difference between edge-optimized and regional APIs.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
aws_api_gateway_rest_api can be imported by using the REST API ID, e.g.
$ terraform import aws_api_gateway_rest_api.example 12345abcde
NOTE: Resource import does not currently support the body attribute.
Resource: aws_api_gateway_stage
Provides an API Gateway Stage.
Example Usage
settings {
metrics_enabled = true
logging_level = "INFO"
}
}
variable "stage_name" {
default = "example"
type = "string"
}
name = "${var.stage_name}"
Argument Reference
The following arguments are supported:
access_log_settings - (Optional) Enables access logs for the API stage. Detailed below.
cache_cluster_enabled - (Optional) Specifies whether a cache cluster is enabled for the stage
cache_cluster_size - (Optional) The size of the cache cluster for the stage, if enabled. Allowed values include 0.5 ,
1.6 , 6.1 , 13.5 , 28.4 , 58.2 , 118 and 237 .
client_certificate_id - (Optional) The identifier of a client certificate for the stage.
xray_tracing_enabled - (Optional) Whether active tracing with X-ray is enabled. Defaults to false .
Nested Blocks
access_log_settings
destination_arn - (Required) ARN of the log group to send the logs to. Automatically removes trailing :* if present.
format - (Required) The formatting and values recorded in the logs. For more information on configuring the log
format rules visit the AWS documentation (https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-
logging.html)
Attribute Reference
In addition to all arguments above, the following attributes are exported:
invoke_url - The URL to invoke the API pointing to the stage, e.g. https://z4675bid1j.execute-api.eu-west-
2.amazonaws.com/prod
Import
aws_api_gateway_stage can be imported using REST-API-ID/STAGE-NAME , e.g.
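For example, with a placeholder REST API ID and stage name:
$ terraform import aws_api_gateway_stage.example 12345abcde/example-stage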
Example Usage
api_stages {
api_id = "${aws_api_gateway_rest_api.myapi.id}"
stage = "${aws_api_gateway_deployment.dev.stage_name}"
}
api_stages {
api_id = "${aws_api_gateway_rest_api.myapi.id}"
stage = "${aws_api_gateway_deployment.prod.stage_name}"
}
quota_settings {
limit = 20
offset = 2
period = "WEEK"
}
throttle_settings {
burst_limit = 5
rate_limit = 10
}
}
Argument Reference
The API Gateway Usage Plan argument layout is a structure composed of several sub-resources - these resources are laid
out below.
Top-Level Arguments
name - (Required) The name of the usage plan.
product_code - (Optional) The AWS Marketplace product identifier to associate with the usage plan as a SaaS product
on AWS Marketplace.
stage (Required) - API stage name of the associated API stage in a usage plan.
limit (Optional) - The maximum number of requests that can be made in a given time period.
offset (Optional) - The number of requests subtracted from the given limit in the initial time period.
period (Optional) - The time period in which the limit applies. Valid values are "DAY", "WEEK" or "MONTH".
burst_limit (Optional) - The API request burst limit, the maximum rate limit over a time ranging from one to a few
seconds, depending upon whether the underlying token bucket is at its full capacity.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
product_code - The AWS Marketplace product identifier to associate with the usage plan as a SaaS product on AWS
Marketplace.
Import
AWS API Gateway Usage Plan can be imported using the id , e.g.
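For example, with a placeholder usage plan ID:
$ terraform import aws_api_gateway_usage_plan.example abcde1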
Example Usage
api_stages {
api_id = "${aws_api_gateway_rest_api.test.id}"
stage = "${aws_api_gateway_deployment.foo.stage_name}"
}
}
Argument Reference
The following arguments are supported:
key_type - (Required) The type of the API key resource. Currently, the valid key type is API_KEY.
usage_plan_id - (Required) The ID of the usage plan resource that the key will be associated with.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
subnet_mapping {
subnet_id = "12345"
}
}
Argument Reference
The following arguments are supported:
name - (Required) The name used to label and identify the VPC link.
target_arns - (Required, ForceNew) The list of network load balancer arns in the VPC targeted by the VPC link.
Currently AWS only supports 1 target.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
API Gateway VPC Link can be imported using the id , e.g.
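For example, with a placeholder VPC link ID:
$ terraform import aws_api_gateway_vpc_link.example ab1cde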
Example Usage
target_tracking_scaling_policy_configuration {
predefined_metric_specification {
predefined_metric_type = "DynamoDBReadCapacityUtilization"
}
target_value = 70
}
}
step_scaling_policy_configuration {
adjustment_type = "ChangeInCapacity"
cooldown = 60
metric_aggregation_type = "Maximum"
step_adjustment {
metric_interval_upper_bound = 0
scaling_adjustment = -1
}
}
}
Preserve desired count when updating an autoscaled ECS Service
lifecycle {
ignore_changes = ["desired_count"]
}
}
target_tracking_scaling_policy_configuration {
predefined_metric_specification {
predefined_metric_type = "RDSReaderAverageCPUUtilization"
}
target_value = 75
scale_in_cooldown = 300
scale_out_cooldown = 300
}
}
Argument Reference
The following arguments are supported:
policy_type - (Optional) For DynamoDB, only TargetTrackingScaling is supported. For Amazon ECS, Spot Fleet, and Amazon RDS, both
StepScaling and TargetTrackingScaling are supported. For any other service, only StepScaling is supported. Defaults to StepScaling .
resource_id - (Required) The resource type and unique identifier string for the resource associated with the scaling policy. Documentation can be found
in the ResourceId parameter at: AWS Application Auto Scaling API Reference
(http://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_RegisterScalableTarget.html#API_RegisterScalableTarget_RequestParameters)
scalable_dimension - (Required) The scalable dimension of the scalable target. Documentation can be found in the ScalableDimension parameter at:
AWS Application Auto Scaling API Reference
(http://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_RegisterScalableTarget.html#API_RegisterScalableTarget_RequestParameters)
service_namespace - (Required) The AWS service namespace of the scalable target. Documentation can be found in the ServiceNamespace parameter
at: AWS Application Auto Scaling API Reference
(http://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_RegisterScalableTarget.html#API_RegisterScalableTarget_RequestParameters)
step_scaling_policy_configuration - (Optional) Step scaling policy configuration, requires policy_type = "StepScaling" (default). See supported
fields below.
Nested fields
step_scaling_policy_configuration
adjustment_type - (Required) Specifies whether the adjustment is an absolute number or a percentage of the current capacity. Valid values are
ChangeInCapacity , ExactCapacity , and PercentChangeInCapacity .
cooldown - (Required) The amount of time, in seconds, after a scaling activity completes and before the next scaling activity can start.
metric_aggregation_type - (Optional) The aggregation type for the policy's metrics. Valid values are "Minimum", "Maximum", and "Average". Without a
value, AWS will treat the aggregation type as "Average".
min_adjustment_magnitude - (Optional) The minimum number to adjust your scalable dimension as a result of a scaling activity. If the adjustment type
is PercentChangeInCapacity, the scaling policy changes the scalable dimension of the scalable target by this amount.
step_adjustment - (Optional) A set of adjustments that manage scaling. These have the following structure:
step_scaling_policy_configuration {
step_adjustment {
metric_interval_lower_bound = 1.0
metric_interval_upper_bound = 2.0
scaling_adjustment = -1
}
step_adjustment {
metric_interval_lower_bound = 2.0
metric_interval_upper_bound = 3.0
scaling_adjustment = 1
}
}
}
metric_interval_lower_bound - (Optional) The lower bound for the difference between the alarm threshold and the CloudWatch metric. Without a
value, AWS will treat this bound as negative infinity.
metric_interval_upper_bound - (Optional) The upper bound for the difference between the alarm threshold and the CloudWatch metric. Without a
value, AWS will treat this bound as infinity. The upper bound must be greater than the lower bound.
scaling_adjustment - (Required) The number of members by which to scale, when the adjustment bounds are breached. A positive value scales up. A
negative value scales down.
target_tracking_scaling_policy_configuration
disable_scale_in - (Optional) Indicates whether scale in by the target tracking policy is disabled. If the value is true, scale in is disabled and the target
tracking policy won't remove capacity from the scalable resource. Otherwise, scale in is enabled and the target tracking policy can remove capacity from
the scalable resource. The default value is false .
scale_in_cooldown - (Optional) The amount of time, in seconds, after a scale in activity completes before another scale in activity can start.
scale_out_cooldown - (Optional) The amount of time, in seconds, after a scale out activity completes before another scale out activity can start.
customized_metric_specification - (Optional) A custom CloudWatch metric. Documentation can be found at: AWS Customized Metric Specification
(https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_CustomizedMetricSpecification.html). See supported fields below.
predefined_metric_specification
predefined_metric_type - (Required) The metric type.
Attribute Reference
arn - The ARN assigned by AWS to the scaling policy.
Import
Application AutoScaling Policy can be imported using the service-namespace , resource-id , scalable-dimension and policy-name separated by / .
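A sketch of the command, using illustrative ECS placeholder values for the namespace, resource ID, dimension and policy name:
$ terraform import aws_appautoscaling_policy.example ecs/service/example-cluster/example-service/ecs:service:DesiredCount/scale-down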
Example Usage
scalable_target_action {
min_capacity = 1
max_capacity = 200
}
}
scalable_target_action {
min_capacity = 1
max_capacity = 10
}
}
Argument Reference
The following arguments are supported:
service_namespace - (Required) The namespace of the AWS service. Documentation can be found in the parameter at: AWS
Application Auto Scaling API Reference
(https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_PutScheduledAction.html#ApplicationAutoScaling-
PutScheduledAction-request-ServiceNamespace) Example: ecs
resource_id - (Required) The identifier of the resource associated with the scheduled action. Documentation can be found in
the parameter at: AWS Application Auto Scaling API Reference
(https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_PutScheduledAction.html#ApplicationAutoScaling-
PutScheduledAction-request-ResourceId)
scalable_dimension - (Optional) The scalable dimension. Documentation can be found in the parameter at: AWS Application
Auto Scaling API Reference
(https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_PutScheduledAction.html#ApplicationAutoScaling-
PutScheduledAction-request-ScalableDimension) Example: ecs:service:DesiredCount
scalable_target_action - (Optional) The new minimum and maximum capacity. You can set both values or just one. See
below
schedule - (Optional) The schedule for this action. The following formats are supported: At expressions - at(yyyy-mm-
ddThh:mm:ss), Rate expressions - rate(value unit), Cron expressions - cron(fields). In UTC (see the sketch after this list). Documentation can be found in the
parameter at: AWS Application Auto Scaling API Reference
(https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_PutScheduledAction.html#ApplicationAutoScaling-
PutScheduledAction-request-Schedule)
start_time - (Optional) The date and time for the scheduled action to start. Specify the following format: 2006-01-
02T15:04:05Z
end_time - (Optional) The date and time for the scheduled action to end. Specify the following format: 2006-01-02T15:04:05Z
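An illustrative sketch of the schedule syntax; all names and values below are placeholders rather than examples taken from this page:
resource "aws_appautoscaling_scheduled_action" "scale_up_weekday_mornings" {
  name               = "scale-up-weekday-mornings"
  service_namespace  = "ecs"
  resource_id        = "service/example-cluster/example-service"
  scalable_dimension = "ecs:service:DesiredCount"

  # Cron expression, evaluated in UTC: 08:00 every Monday through Friday.
  schedule = "cron(0 8 ? * MON-FRI *)"

  scalable_target_action {
    min_capacity = 4
    max_capacity = 10
  }
}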
Scalable Target Action Arguments
max_capacity - (Optional) The maximum capacity.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
max_capacity - (Required) The max capacity of the scalable target.
resource_id - (Required) The resource type and unique identifier string for the resource associated with the scaling policy. Documentation can be
found in the ResourceId parameter at: AWS Application Auto Scaling API Reference
(https://docs.aws.amazon.com/autoscaling/application/APIReference/API_RegisterScalableTarget.html#API_RegisterScalableTarget_RequestParameters)
role_arn - (Optional) The ARN of the IAM role that allows Application AutoScaling to modify your scalable target on your behalf.
scalable_dimension - (Required) The scalable dimension of the scalable target. Documentation can be found in the ScalableDimension
parameter at: AWS Application Auto Scaling API Reference
(https://docs.aws.amazon.com/autoscaling/application/APIReference/API_RegisterScalableTarget.html#API_RegisterScalableTarget_RequestParameters)
service_namespace - (Required) The AWS service namespace of the scalable target. Documentation can be found in the ServiceNamespace
parameter at: AWS Application Auto Scaling API Reference
(https://docs.aws.amazon.com/autoscaling/application/APIReference/API_RegisterScalableTarget.html#API_RegisterScalableTarget_RequestParameters)
Import
Application AutoScaling Target can be imported using the service-namespace , resource-id and scalable-dimension separated by / .
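A sketch of the command, using illustrative ECS placeholder values:
$ terraform import aws_appautoscaling_target.example ecs/service/example-cluster/example-service/ecs:service:DesiredCount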
Example Usage
listener {
instance_port = 8000
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
}
Argument Reference
The following arguments are supported:
load_balancer - (Required) The name of load balancer to which the policy should be attached.
lb_port - (Required) The load balancer port to which the policy should be applied. This must be an active listener on
the load balancer.
cookie_name - (Required) The application cookie whose lifetime the ELB's cookie should follow.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
cookie_name - The application cookie whose lifetime the ELB's cookie should follow.
Import
Application cookie stickiness policies can be imported using the ELB name, port, and policy name separated by colons ( : ),
e.g.
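For example, with a placeholder load balancer name, port and policy name:
$ terraform import aws_app_cookie_stickiness_policy.example my-elb:80:my-policy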
Example Usage
Basic
Egress Filter
spec {
egress_filter {
type = "ALLOW_ALL"
}
}
}
Argument Reference
The following arguments are supported:
egress_filter - (Optional) The egress filter rules for the service mesh.
type - (Optional) The egress filter type. By default, the type is DROP_ALL . Valid values are ALLOW_ALL and
DROP_ALL .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
App Mesh service meshes can be imported using the name , e.g.
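For example, with a placeholder mesh name:
$ terraform import aws_appmesh_mesh.example simpleapp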
Example Usage
HTTP Routing
spec {
http_route {
match {
prefix = "/"
}
action {
weighted_target {
virtual_node = "${aws_appmesh_virtual_node.serviceb1.name}"
weight = 90
}
weighted_target {
virtual_node = "${aws_appmesh_virtual_node.serviceb2.name}"
weight = 10
}
}
}
}
}
TCP Routing
resource "aws_appmesh_route" "serviceb" {
name = "serviceB-route"
mesh_name = "${aws_appmesh_mesh.simple.id}"
virtual_router_name = "${aws_appmesh_virtual_router.serviceb.name}"
spec {
tcp_route {
action {
weighted_target {
virtual_node = "${aws_appmesh_virtual_node.serviceb1.name}"
weight = 100
}
}
}
}
}
Argument Reference
The following arguments are supported:
mesh_name - (Required) The name of the service mesh in which to create the route.
virtual_router_name - (Required) The name of the virtual router in which to create the route.
weighted_target - (Required) The targets that traffic is routed to when a request matches the route. You can specify
one or more targets and their relative weights with which to distribute traffic.
prefix - (Required) Specifies the path with which to match requests. This parameter must always start with /, which
by itself matches all requests to the virtual router service name.
The weighted_target object supports the following:
virtual_node - (Required) The virtual node to associate with the weighted target.
weight - (Required) The relative weight of the weighted target. An integer between 0 and 100.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
App Mesh virtual routes can be imported using mesh_name and virtual_router_name together with the route's name ,
e.g.
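For example, with placeholder mesh, virtual router and route names:
$ terraform import aws_appmesh_route.serviceb simpleapp/serviceB/serviceB-route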
Breaking Changes
Because of backward incompatible API changes (read here (https://github.com/awslabs/aws-app-mesh-
examples/issues/92)), aws_appmesh_virtual_node resource definitions created with provider versions earlier than v2.3.0
will need to be modified:
Replace the backends attribute of the spec object with one or more backend configuration blocks, setting
virtual_service_name to the name of the service.
The Terraform state associated with existing resources will automatically be migrated.
Example Usage
Basic
spec {
backend {
virtual_service {
virtual_service_name = "servicea.simpleapp.local"
}
}
listener {
port_mapping {
port = 8080
protocol = "http"
}
}
service_discovery {
dns {
hostname = "serviceb.simpleapp.local"
}
}
}
}
AWS Cloud Map Service Discovery
spec {
backend {
virtual_service {
virtual_service_name = "servicea.simpleapp.local"
}
}
listener {
port_mapping {
port = 8080
protocol = "http"
}
}
service_discovery {
aws_cloud_map {
attributes = {
stack = "blue"
}
service_name = "serviceb1"
namespace_name = "${aws_service_discovery_http_namespace.example.name}"
}
}
}
}
spec {
backend {
virtual_service {
virtual_service_name = "servicea.simpleapp.local"
}
}
listener {
port_mapping {
port = 8080
protocol = "http"
}
health_check {
protocol = "http"
path = "/ping"
healthy_threshold = 2
unhealthy_threshold = 2
timeout_millis = 2000
interval_millis = 5000
}
}
service_discovery {
dns {
hostname = "serviceb.simpleapp.local"
}
}
}
}
Logging
resource "aws_appmesh_virtual_node" "serviceb1" {
name = "serviceBv1"
mesh_name = "${aws_appmesh_mesh.simple.id}"
spec {
backend {
virtual_service {
virtual_service_name = "servicea.simpleapp.local"
}
}
listener {
port_mapping {
port = 8080
protocol = "http"
}
}
service_discovery {
dns {
hostname = "serviceb.simpleapp.local"
}
}
logging {
access_log {
file {
path = "/dev/stdout"
}
}
}
}
}
Argument Reference
The following arguments are supported:
mesh_name - (Required) The name of the service mesh in which to create the virtual node.
backend - (Optional) The backends to which the virtual node is expected to send outbound traffic.
listener - (Optional) The listeners from which the virtual node is expected to receive inbound traffic.
logging - (Optional) The inbound and outbound access logging information for the virtual node.
service_discovery - (Optional) The service discovery information for the virtual node.
The backend object supports the following:
virtual_service - (Optional) Specifies a virtual service to use as a backend for a virtual node.
virtual_service_name - (Required) The name of the virtual service that is acting as a virtual node backend.
access_log - (Optional) The access log configuration for a virtual node.
file - (Optional) The file object to send virtual node access logs to.
path - (Required) The file path to write access logs to. You can use /dev/stdout to send access logs to standard out.
aws_cloud_map - (Optional) Specifies any AWS Cloud Map information for the virtual node.
dns - (Optional) Specifies the DNS service name for the virtual node.
attributes - (Optional) A string map that contains attributes with values that you can use to filter instances by any
custom attribute that you specified when you registered the instance. Only instances that match all of the specified
key/value pairs will be returned.
namespace_name - (Required) The name of the AWS Cloud Map namespace to use. Use the
aws_service_discovery_http_namespace (/docs/providers/aws/r/service_discovery_http_namespace.html)
resource to configure a Cloud Map namespace.
service_name - (Required) The name of the AWS Cloud Map service to use. Use the
aws_service_discovery_service (/docs/providers/aws/r/service_discovery_service.html) resource to configure a
Cloud Map service.
hostname - (Required) The DNS host name for your virtual node.
protocol - (Required) The protocol used for the port mapping. Valid values are http and tcp .
healthy_threshold - (Required) The number of consecutive successful health checks that must occur before
declaring listener healthy.
interval_millis - (Required) The time period in milliseconds between each health check execution.
protocol - (Required) The protocol for the health check request. Valid values are http and tcp .
timeout_millis - (Required) The amount of time to wait when receiving a response from the health check, in
milliseconds.
unhealthy_threshold - (Required) The number of consecutive failed health checks that must occur before declaring
a virtual node unhealthy.
path - (Optional) The destination path for the health check request. This is only required if the specified protocol is
http .
port - (Optional) The destination port for the health check request. This port must match the port defined in the
port_mapping for the listener.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
App Mesh virtual nodes can be imported using mesh_name together with the virtual node's name , e.g.
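For example, with placeholder mesh and virtual node names:
$ terraform import aws_appmesh_virtual_node.serviceb1 simpleapp/serviceBv1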
Breaking Changes
Because of backward incompatible API changes (read here (https://github.com/awslabs/aws-app-mesh-examples/issues/92)
and here (https://github.com/awslabs/aws-app-mesh-examples/issues/94)), aws_appmesh_virtual_router resource
definitions created with provider versions earlier than v2.3.0 will need to be modified:
Remove the service_names attribute from the spec argument. AWS has created an aws_appmesh_virtual_service
resource for each of the service names. These resources can be imported using terraform import .
The Terraform state associated with existing resources will automatically be migrated.
Example Usage
spec {
listener {
port_mapping {
port = 8080
protocol = "http"
}
}
}
}
Argument Reference
The following arguments are supported:
mesh_name - (Required) The name of the service mesh in which to create the virtual router.
listener - (Required) The listeners that the virtual router is expected to receive inbound traffic from. Currently only
one listener is supported per virtual router.
The listener object supports the following:
protocol - (Required) The protocol used for the port mapping. Valid values are http and tcp .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
App Mesh virtual routers can be imported using mesh_name together with the virtual router's name , e.g.
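For example, with placeholder mesh and virtual router names:
$ terraform import aws_appmesh_virtual_router.serviceb simpleapp/serviceB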
Example Usage
spec {
provider {
virtual_node {
virtual_node_name = "${aws_appmesh_virtual_node.serviceb1.name}"
}
}
}
}
spec {
provider {
virtual_router {
virtual_router_name = "${aws_appmesh_virtual_router.serviceb.name}"
}
}
}
}
Argument Reference
The following arguments are supported:
mesh_name - (Required) The name of the service mesh in which to create the virtual service.
provider - (Optional) The App Mesh object that is acting as the provider for a virtual service. You can specify a single
virtual node or virtual router.
virtual_node_name - (Required) The name of the virtual node that is acting as a service provider.
virtual_router_name - (Required) The name of the virtual router that is acting as a service provider.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
App Mesh virtual services can be imported using mesh_name together with the virtual service's name , e.g.
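For example, with placeholder mesh and virtual service names:
$ terraform import aws_appmesh_virtual_service.servicea simpleapp/servicea.simpleapp.local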
Example Usage
Argument Reference
The following arguments are supported:
expires - (Optional) RFC3339 string representation of the expiry date. Rounded down to nearest hour. By default, it
is 7 days from the date of creation.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
aws_appsync_api_key can be imported using the AppSync API ID and key separated by : , e.g.
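For example, with a placeholder API ID and key value:
$ terraform import aws_appsync_api_key.example abcdef123456:da2-examplekey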
Example Usage
attribute {
name = "UserId"
type = "S"
}
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "appsync.amazonaws.com"
},
"Effect": "Allow"
}
]
}
EOF
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"dynamodb:*"
],
"Effect": "Allow",
"Resource": [
"${aws_dynamodb_table.example.arn}"
]
}
]
}
EOF
}
dynamodb_config {
table_name = "${aws_dynamodb_table.example.name}"
}
}
Argument Reference
The following arguments are supported:
api_id - (Required) The API ID for the GraphQL API for the DataSource.
type - (Required) The type of the DataSource. Valid values: AWS_LAMBDA , AMAZON_DYNAMODB ,
AMAZON_ELASTICSEARCH , HTTP , NONE .
service_role_arn - (Optional) The IAM service role ARN for the data source.
dynamodb_config
The following arguments are supported:
region - (Optional) AWS region of the DynamoDB table. Defaults to current region.
use_caller_credentials - (Optional) Set to true to use Amazon Cognito credentials with this data source.
elasticsearch_config
The following arguments are supported:
http_config
The following arguments are supported:
lambda_config
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
aws_appsync_datasource can be imported with their api_id , a hyphen, and name , e.g.
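For example, with a placeholder API ID and data source name:
$ terraform import aws_appsync_datasource.example abcdef123456-example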
Example Usage
resource "aws_appsync_graphql_api" "test" {
authentication_type = "API_KEY"
name = "tf-example"
schema = <<EOF
type Mutation {
putPost(id: ID!, title: String!): Post
}
type Post {
id: ID!
title: String!
}
type Query {
singlePost(id: ID!): Post
}
schema {
query: Query
mutation: Mutation
}
EOF
}
http_config {
endpoint = "http://example.com"
}
}
name - (Required) The Function name. The function name does not have to be unique.
request_mapping_template - (Required) The Function request mapping template. Functions support only the 2018-
05-29 version of the request mapping template.
function_version - (Optional) The version of the request mapping template. Currently the supported value is
2018-05-29 .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
aws_appsync_function can be imported using the AppSync API ID and Function ID separated by - , e.g.
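For example, with a placeholder API ID and Function ID:
$ terraform import aws_appsync_function.example abcdef123456-fedcba987654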
Example Usage
user_pool_config {
aws_region = "${data.aws_region.current.name}"
default_action = "DENY"
user_pool_id = "${aws_cognito_user_pool.example.id}"
}
}
With Schema
resource "aws_appsync_graphql_api" "example" {
authentication_type = "AWS_IAM"
name = "example"
schema = <<EOF
schema {
query: Query
}
type Query {
test: Int
}
EOF
}
openid_connect_config {
issuer = "https://example.com"
}
}
additional_authentication_provider {
authentication_type = "AWS_IAM"
}
}
Enabling Logging
resource "aws_iam_role" "example" {
name = "example"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "appsync.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
log_config {
cloudwatch_logs_role_arn = "${aws_iam_role.example.arn}"
field_log_level = "ERROR"
}
}
Argument Reference
The following arguments are supported:
log_config - (Optional) Nested argument containing logging configuration. Defined below.
openid_connect_config - (Optional) Nested argument containing OpenID Connect configuration. Defined below.
user_pool_config - (Optional) The Amazon Cognito User Pool configuration. Defined below.
schema - (Optional) The schema definition, in GraphQL schema language format. Terraform cannot perform drift
detection of this configuration.
cloudwatch_logs_role_arn - (Required) Amazon Resource Name of the service role that AWS AppSync will assume
to publish to Amazon CloudWatch logs in your account.
field_log_level - (Required) Field logging level. Valid values: ALL , ERROR , NONE .
additional_authentication_provider
The following arguments are supported:
openid_connect_config - (Optional) Nested argument containing OpenID Connect configuration. Defined below.
user_pool_config - (Optional) The Amazon Cognito User Pool configuration. Defined below.
openid_connect_config
The following arguments are supported:
issuer - (Required) Issuer for the OpenID Connect configuration. The issuer returned by discovery MUST exactly
match the value of iss in the ID Token.
client_id - (Optional) Client identifier of the Relying party at the OpenID identity provider. This identifier is typically
obtained when the Relying party is registered with the OpenID identity provider. You can specify a regular expression
so that AWS AppSync can validate against multiple client identifiers at a time.
iat_ttl - (Optional) Number of milliseconds a token is valid after being issued to a user.
user_pool_config
The following arguments are supported:
default_action - (Required only if Cognito is used as the default auth provider) The action that you want your
GraphQL API to take when a request that uses Amazon Cognito User Pool authentication doesn't match the Amazon
Cognito User Pool configuration. Valid: ALLOW and DENY
app_id_client_regex - (Optional) A regular expression for validating the incoming Amazon Cognito User Pool app
client ID.
aws_region - (Optional) The AWS region in which the user pool was created.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - API ID
uris - Map of URIs associated with the API. e.g. uris["GRAPHQL"] = https://ID.appsync-
api.REGION.amazonaws.com/graphql
Import
AppSync GraphQL API can be imported using the GraphQL API ID, e.g.
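For example, with a placeholder GraphQL API ID:
$ terraform import aws_appsync_graphql_api.example abcdef123456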
Example Usage
schema = <<EOF
type Mutation {
putPost(id: ID!, title: String!): Post
}
type Post {
id: ID!
title: String!
}
type Query {
singlePost(id: ID!): Post
}
schema {
query: Query
mutation: Mutation
}
EOF
}
http_config {
endpoint = "http://example.com"
}
}
request_template = <<EOF
{
"version": "2018-05-29",
"method": "GET",
"resourcePath": "/",
"params":{
"headers": $utils.http.copyheaders($ctx.request.headers)
"headers": $utils.http.copyheaders($ctx.request.headers)
}
}
EOF
response_template = <<EOF
#if($ctx.result.statusCode == 200)
$ctx.result.body
#else
$utils.appendError($ctx.result.body, $ctx.result.statusCode)
#end
EOF
}
Argument Reference
The following arguments are supported:
type - (Required) The type name from the schema defined in the GraphQL API.
field - (Required) The field name from the schema defined in the GraphQL API.
request_template - (Required) The request mapping template for UNIT resolver or 'before mapping template' for
PIPELINE resolver.
response_template - (Required) The response mapping template for UNIT resolver or 'after mapping template' for
PIPELINE resolver.
kind - (Optional) The resolver type. Valid values are UNIT and PIPELINE .
Import
aws_appsync_resolver can be imported with their api_id , a hyphen, type , a hyphen, and field , e.g.
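For example, with a placeholder API ID and the type and field names used in the snippet above:
$ terraform import aws_appsync_resolver.example abcdef123456-Query-singlePost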
Example Usage
Argument Reference
The following arguments are supported:
bucket - (Required) Name of s3 bucket to save the results of the query execution.
encryption_configuration - (Optional) The encryption key block AWS Athena uses to decrypt the data in S3, such
as an AWS Key Management Service (AWS KMS) key. An encryption_configuration block is documented below.
force_destroy - (Optional, Default: false) A boolean that indicates all tables should be deleted from the database so
that the database can be destroyed without error. The tables are not recoverable.
kms_key - (Optional) The KMS key ARN or ID; required for key types SSE_KMS and CSE_KMS .
NOTE: When Athena queries are executed, result files may be created in the specified bucket. Consider using
force_destroy on the bucket too in order to avoid any problems when destroying the bucket.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
configuration {
result_configuration {
encryption_configuration {
encryption_option = "SSE_KMS"
kms_key_arn = "${aws_kms_key.test.arn}"
}
}
}
}
Argument Reference
The following arguments are supported:
name - (Required) The plain language name for the query. Maximum length of 128.
workgroup - (Optional) The workgroup to which the query belongs. Defaults to primary
query - (Required) The text of the query itself. In other words, all query statements. Maximum length of 262144.
description - (Optional) A brief explanation of the query. Maximum length of 1024.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Athena Named Query can be imported using the query ID, e.g.
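For example, with a placeholder query ID:
$ terraform import aws_athena_named_query.example 0123456789abcdef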
Example Usage
configuration {
enforce_workgroup_configuration = true
publish_cloudwatch_metrics_enabled = true
result_configuration {
output_location = "s3://{aws_s3_bucket.example.bucket}/output/"
encryption_configuration {
encryption_option = "SSE_KMS"
kms_key_arn = "${aws_kms_key.example.arn}"
}
}
}
}
Argument Reference
The following arguments are supported:
configuration - (Optional) Configuration block with various settings for the workgroup. Documented below.
state - (Optional) State of the workgroup. Valid values are DISABLED or ENABLED . Defaults to ENABLED .
bytes_scanned_cutoff_per_query - (Optional) Integer for the upper data usage limit (cutoff) for the amount of
bytes a single query in a workgroup is allowed to scan. Must be at least 10485760 .
enforce_workgroup_configuration - (Optional) Boolean whether the settings for the workgroup override client-side
settings. For more information, see Workgroup Settings Override Client-Side Settings
(https://docs.aws.amazon.com/athena/latest/ug/workgroups-settings-override.html). Defaults to true .
publish_cloudwatch_metrics_enabled - (Optional) Boolean whether Amazon CloudWatch metrics are enabled for
the workgroup. Defaults to true .
result_configuration - (Optional) Con guration block with result settings. Documented below.
The result_configuration configuration block within the configuration supports the following arguments:
encryption_configuration - (Optional) Configuration block with encryption settings. Documented below.
output_location - (Optional) The location in Amazon S3 where your query results are stored, such as
s3://path/to/query/bucket/ . For more information, see Queries and Query Result Files
(https://docs.aws.amazon.com/athena/latest/ug/querying.html).
The encryption_configuration configuration block within the result_configuration of the configuration supports
the following arguments:
encryption_option - (Required) Indicates whether Amazon S3 server-side encryption with Amazon S3-managed keys
(SSE-S3), server-side encryption with KMS-managed keys (SSE-KMS), or client-side encryption with KMS-managed keys
(CSE-KMS) is used. If a query runs in a workgroup and the workgroup overrides client-side settings, then the
workgroup's setting for encryption is used. It specifies whether query results must be encrypted, for all queries that
run in this workgroup.
kms_key_arn - (Optional) For SSE-KMS and CSE-KMS, this is the KMS key Amazon Resource Name (ARN).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Athena Workgroups can be imported using their name, e.g.
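For example, with a placeholder workgroup name:
$ terraform import aws_athena_workgroup.example example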
NOTE on AutoScaling Groups and ASG Attachments: Terraform currently provides both a standalone ASG Attachment
resource (describing an ASG attached to an ELB), and an AutoScaling Group resource
(/docs/providers/aws/r/autoscaling_group.html) with load_balancers defined in-line. At this time you cannot use an
ASG with in-line load balancers in conjunction with an ASG Attachment resource. Doing so will cause a conflict and will
overwrite attachments.
Example Usage
Argument Reference
The following arguments are supported:
Example Usage
resource "aws_placement_group" "test" {
name = "test"
strategy = "cluster"
}
initial_lifecycle_hook {
name = "foobar"
default_result = "CONTINUE"
heartbeat_timeout = 2000
lifecycle_transition = "autoscaling:EC2_INSTANCE_LAUNCHING"
notification_metadata = <<EOF
{
"foo": "bar"
}
EOF
notification_target_arn = "arn:aws:sqs:us-east-1:444455556666:queue1*"
role_arn = "arn:aws:iam::123456789012:role/S3Access"
}
tag {
key = "foo"
value = "bar"
propagate_at_launch = true
}
timeouts {
delete = "15m"
}
tag {
key = "lorem"
value = "ipsum"
propagate_at_launch = false
}
}
launch_template {
id = "${aws_launch_template.foobar.id}"
version = "$Latest"
}
}
mixed_instances_policy {
launch_template {
launch_template_specification {
launch_template_id = "${aws_launch_template.example.id}"
}
override {
instance_type = "c4.large"
}
override {
instance_type = "c3.large"
}
}
}
}
Interpolated tags
variable "extra_tags" {
default = [
{
key = "Foo"
value = "Bar"
propagate_at_launch = true
},
{
key = "Baz"
value = "Bam"
propagate_at_launch = true
},
]
}
tags = [
{
key = "explicit1"
value = "value1"
propagate_at_launch = true
},
{
key = "explicit2"
value = "value2"
propagate_at_launch = true
},
]
tags = ["${concat(
list(
map("key", "interpolation1", "value", "value3", "propagate_at_launch", true),
map("key", "interpolation2", "value", "value4", "propagate_at_launch", true)
),
var.extra_tags)
}"]
}
Argument Reference
The following arguments are supported:
name - (Optional) The name of the auto scaling group. By default generated by Terraform.
name_prefix - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with name .
min_size - (Required) The minimum size of the auto scale group. (See also Waiting for Capacity below.)
availability_zones - (Required only for EC2-Classic) A list of one or more availability zones for the group. This
parameter should not be specified when using vpc_zone_identifier .
default_cooldown - (Optional) The amount of time, in seconds, after a scaling activity completes before another
scaling activity can start.
launch_template - (Optional) Nested argument with Launch template specification to use to launch instances.
Defined below.
mixed_instances_policy (Optional) Configuration block containing settings to define launch targets for Auto Scaling
groups. Defined below.
health_check_grace_period - (Optional, Default: 300) Time (in seconds) after instance comes into service before
checking health.
desired_capacity - (Optional) The number of Amazon EC2 instances that should be running in the group. (See also
Waiting for Capacity below.)
force_delete - (Optional) Allows deleting the autoscaling group without waiting for all instances in the pool to
terminate. You can force an autoscaling group to delete even if it's in the process of scaling a resource. Normally,
Terraform drains all the instances before deleting the group. This bypasses that behavior and potentially leaves
resources dangling.
load_balancers (Optional) A list of elastic load balancer names to add to the autoscaling group names. Only valid for
classic load balancers. For ALBs, use target_group_arns instead.
target_group_arns (Optional) A list of aws_alb_target_group ARNs, for use with Application or Network Load
Balancing.
termination_policies (Optional) A list of policies to decide how the instances in the auto scale group should be
terminated. The allowed values are OldestInstance , NewestInstance , OldestLaunchConfiguration ,
ClosestToNextInstanceHour , OldestLaunchTemplate , AllocationStrategy , Default .
suspended_processes - (Optional) A list of processes to suspend for the AutoScaling Group. The allowed values are
Launch , Terminate , HealthCheck , ReplaceUnhealthy , AZRebalance , AlarmNotification ,
ScheduledActions , AddToLoadBalancer . Note that if you suspend either the Launch or Terminate process types,
it can prevent your autoscaling group from functioning properly.
placement_group (Optional) The name of the placement group into which you'll launch your instances, if any.
metrics_granularity - (Optional) The granularity to associate with the metrics to collect. The only valid value is
1Minute . Default is 1Minute .
enabled_metrics - (Optional) A list of metrics to collect. The allowed values are GroupMinSize , GroupMaxSize ,
GroupDesiredCapacity , GroupInServiceInstances , GroupPendingInstances , GroupStandbyInstances ,
GroupTerminatingInstances , GroupTotalInstances .
min_elb_capacity - (Optional) Setting this causes Terraform to wait for this number of instances from this
autoscaling group to show up healthy in the ELB only on creation. Updates will not wait on ELB instance number
changes. (See also Waiting for Capacity below.)
wait_for_elb_capacity - (Optional) Setting this will cause Terraform to wait for exactly this number of healthy
instances from this autoscaling group in all attached load balancers on both create and update operations. (Takes
precedence over min_elb_capacity behavior.) (See also Waiting for Capacity below.)
protect_from_scale_in (Optional) Allows setting instance protection. The autoscaling group will not select instances
with this setting for termination during scale in events.
service_linked_role_arn (Optional) The ARN of the service-linked role that the ASG will use to call other AWS
services
launch_template
name - (Optional) The name of the launch template. Conflicts with id .
version - (Optional) Template version. Can be version number, $Latest , or $Default . (Default: $Default ).
mixed_instances_policy
instances_distribution - (Optional) Nested argument containing settings on how to mix on-demand and Spot
instances in the Auto Scaling group. Defined below.
launch_template - (Required) Nested argument containing launch template settings along with the overrides to
specify multiple instance types. Defined below.
mixed_instances_policy instances_distribution
on_demand_allocation_strategy - (Optional) Strategy to use when launching on-demand instances. Valid values:
prioritized . Default: prioritized .
on_demand_base_capacity - (Optional) Absolute minimum amount of desired capacity that must be fulfilled by on-
demand instances. Default: 0 .
spot_allocation_strategy - (Optional) How to allocate capacity across the Spot pools. Valid values: lowest-
price . Default: lowest-price .
spot_instance_pools - (Optional) Number of Spot pools per availability zone to allocate capacity. EC2 Auto Scaling
selects the cheapest Spot pools and evenly allocates Spot capacity across the number of Spot pools that you specify.
Default: 2 .
spot_max_price - (Optional) Maximum price per unit hour that the user is willing to pay for the Spot instances.
Default: an empty string which means the on-demand price.
mixed_instances_policy launch_template
launch_template_specification - (Required) Nested argument defines the Launch Template. Defined below.
override - (Optional) List of nested arguments provides the ability to specify multiple instance types. This will
override the same parameter in the launch template. For on-demand instances, Auto Scaling considers the order of
preference of instance types to launch based on the order specified in the overrides list. Defined below.
launch_template_id - (Optional) The ID of the launch template. Conflicts with launch_template_name .
launch_template_name - (Optional) The name of the launch template. Conflicts with launch_template_id .
version - (Optional) Template version. Can be version number, $Latest , or $Default . (Default: $Default ).
propagate_at_launch - (Required) Enables propagation of the tag to Amazon EC2 instances launched via this ASG
To declare multiple tags additional tag blocks can be specified. Alternatively the tags attributes can be used, which
accepts a list of maps containing the above field names as keys and their respective values. This allows the construction of
dynamic lists of tags which is not possible using the single tag attribute. tag and tags are mutually exclusive, only one of
them can be specified.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
default_cooldown - Time between a scaling activity and the succeeding scaling activity.
health_check_grace_period - Time after instance comes into service before checking health.
desired_capacity - The number of Amazon EC2 instances that should be running in the group.
load_balancers (Optional) The load balancer names associated with the autoscaling group.
target_group_arns (Optional) list of Target Group ARNs that apply to this AutoScaling Group
NOTE: Terraform has two ways you can add lifecycle hooks - via the initial_lifecycle_hook attribute from
this resource, or via the separate aws_autoscaling_lifecycle_hook
(/docs/providers/aws/r/autoscaling_lifecycle_hooks.html) resource. initial_lifecycle_hook exists here because any
lifecycle hooks added with aws_autoscaling_lifecycle_hook will not be added until the autoscaling group has been
created, and depending on your capacity settings, after the initial instances have been launched, creating unintended
behavior. If you need hooks to run on all instances, add them with initial_lifecycle_hook here, but take care to
not duplicate these hooks in aws_autoscaling_lifecycle_hook .
Timeouts
autoscaling_group provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration
options:
delete - (Default 10 minutes ) Used for destroying ASG.
On ASG Update, changes to these values also take time to result in the target number of instances providing service.
Terraform provides two mechanisms to help consistently manage ASG scale up time across dependent resources.
The first is default behavior. Terraform waits after ASG creation for min_size (or desired_capacity , if specified) healthy
instances to show up in the ASG before continuing.
If min_size or desired_capacity are changed in a subsequent update, Terraform will also wait for the correct number of
healthy instances before continuing.
Terraform considers an instance "healthy" when the ASG reports HealthStatus: "Healthy" and LifecycleState:
"InService" . See the AWS AutoScaling Docs
(https://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroupLifecycle.html) for more information
on an ASG's lifecycle.
Terraform will wait for healthy instances for up to wait_for_capacity_timeout . If ASG creation is taking more than a few
minutes, it's worth investigating for scaling activity errors, which can be caused by problems with the selected Launch
Configuration.
The second mechanism is optional, and affects ASGs with attached ELBs specified via the load_balancers attribute or with
ALBs specified with target_group_arns .
The min_elb_capacity parameter causes Terraform to wait for at least the requested number of instances to show up
"InService" in all attached ELBs during ASG creation. It has no effect on ASG updates.
If wait_for_elb_capacity is set, Terraform will wait for exactly that number of Instances to be "InService" in all
attached ELBs on both creation and updates.
These parameters can be used to ensure that service is being provided before Terraform moves on. If new instances don't
pass the ELB's health checks for any reason, the Terraform apply will time out, and the ASG will be marked as tainted (i.e.
marked to be destroyed in a follow up run).
As with ASG Capacity, Terraform will wait for up to wait_for_capacity_timeout for the proper number of instances to be
healthy.
Import
AutoScaling Groups can be imported using the name , e.g.
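For example, assuming a resource address aws_autoscaling_group.web managing a group named web-asg (both names are illustrative):
$ terraform import aws_autoscaling_group.web web-asg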
NOTE: Terraform offers two ways to add lifecycle hooks - via the initial_lifecycle_hook attribute from
the aws_autoscaling_group (/docs/providers/aws/r/autoscaling_group.html) resource, or via this one. Hooks added
via this resource will not be added until the autoscaling group has been created, and depending on your capacity
(/docs/providers/aws/r/autoscaling_group.html#waiting-for-capacity) settings, after the initial instances have been
launched, creating unintended behavior. If you need hooks to run on all instances, add them with
initial_lifecycle_hook in aws_autoscaling_group (/docs/providers/aws/r/autoscaling_group.html), but take care
to not duplicate those hooks with this resource.
Example Usage
tag {
key = "Foo"
value = "foo-bar"
propagate_at_launch = true
}
}
notification_metadata = <<EOF
{
"foo": "bar"
}
EOF
notification_target_arn = "arn:aws:sqs:us-east-1:444455556666:queue1*"
role_arn = "arn:aws:iam::123456789012:role/S3Access"
}
Argument Reference
The following arguments are supported:
name - (Required) The name of the lifecycle hook.
autoscaling_group_name - (Required) The name of the Auto Scaling group to which you want to assign the lifecycle
hook
default_result - (Optional) Defines the action the Auto Scaling group should take when the lifecycle hook timeout
elapses or if an unexpected failure occurs. The value for this parameter can be either CONTINUE or ABANDON. The
default value for this parameter is ABANDON.
heartbeat_timeout - (Optional) Defines the amount of time, in seconds, that can elapse before the lifecycle hook
times out. When the lifecycle hook times out, Auto Scaling performs the action defined in the DefaultResult parameter
lifecycle_transition - (Required) The instance state to which you want to attach the lifecycle hook. For a list of
lifecycle hook types, see describe-lifecycle-hook-types
(https://docs.aws.amazon.com/cli/latest/reference/autoscaling/describe-lifecycle-hook-types.html#examples)
notification_metadata - (Optional) Contains additional information that you want to include any time Auto Scaling
sends a message to the notification target.
notification_target_arn - (Optional) The ARN of the notification target that Auto Scaling will use to notify you
when an instance is in the transition state for the lifecycle hook. This ARN target can be either an SQS queue or an SNS
topic.
role_arn - (Optional) The ARN of the IAM role that allows the Auto Scaling group to publish to the specified
notification target.
Import
AutoScaling Lifecycle Hooks can be imported using the autoscaling_group_name and name separated by / .
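For example, assuming a group named asg-name with a hook named lifecycle-hook-name (both names are illustrative):
$ terraform import aws_autoscaling_lifecycle_hook.test asg-name/lifecycle-hook-name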
Example Usage
Basic usage:
notifications = [
"autoscaling:EC2_INSTANCE_LAUNCH",
"autoscaling:EC2_INSTANCE_TERMINATE",
"autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
"autoscaling:EC2_INSTANCE_TERMINATE_ERROR",
]
topic_arn = "${aws_sns_topic.example.arn}"
}
Argument Reference
The following arguments are supported:
notifications - (Required) A list of Notification Types that trigger notifications. Acceptable values are documented
in the AWS documentation here
(https://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_NotificationConfiguration.html)
topic_arn - (Required) The Topic ARN for notifications to be sent through
Attributes Reference
In addition to all arguments above, the following attributes are exported:
group_names
notifications
topic_arn
Resource: aws_autoscaling_policy
Provides an AutoScaling Scaling Policy resource.
NOTE: You may want to omit desired_capacity attribute from attached aws_autoscaling_group when using
autoscaling policies. It's good practice to pick either manual
(https://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-manual-scaling.html) or dynamic
(https://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-scale-based-on-demand.html) (policy-based)
scaling.
Example Usage
Argument Reference
The following arguments are supported:
adjustment_type - (Optional) Specifies whether the adjustment is an absolute number or a percentage of the
current capacity. Valid values are ChangeInCapacity , ExactCapacity , and PercentChangeInCapacity .
policy_type - (Optional) The policy type, either "SimpleScaling", "StepScaling" or "TargetTrackingScaling". If this value
isn't provided, AWS will default to "SimpleScaling."
estimated_instance_warmup - (Optional) The estimated time, in seconds, until a newly launched instance will
contribute CloudWatch metrics. Without a value, AWS will default to the group's specified cooldown period.
scaling_adjustment - (Optional) The number of instances by which to scale. adjustment_type determines the
interpretation of this number (e.g., as an absolute number or as a percentage of the existing Auto Scaling group size). A
positive increment adds to the current capacity and a negative value removes from the current capacity.
metric_aggregation_type - (Optional) The aggregation type for the policy's metrics. Valid values are "Minimum",
"Maximum", and "Average". Without a value, AWS will treat the aggregation type as "Average".
step_adjustments - (Optional) A set of adjustments that manage group scaling. These have the following structure:
step_adjustment {
scaling_adjustment = -1
metric_interval_lower_bound = 1.0
metric_interval_upper_bound = 2.0
}
step_adjustment {
scaling_adjustment = 1
metric_interval_lower_bound = 2.0
metric_interval_upper_bound = 3.0
}
scaling_adjustment - (Required) The number of members by which to scale, when the adjustment bounds are
breached. A positive value scales up. A negative value scales down.
metric_interval_lower_bound - (Optional) The lower bound for the difference between the alarm threshold and
the CloudWatch metric. Without a value, AWS will treat this bound as infinity.
metric_interval_upper_bound - (Optional) The upper bound for the difference between the alarm threshold and
the CloudWatch metric. Without a value, AWS will treat this bound as infinity. The upper bound must be greater than
the lower bound.
target_tracking_configuration - (Optional) A target tracking policy. These have the following structure:
target_tracking_configuration {
predefined_metric_specification {
predefined_metric_type = "ASGAverageCPUUtilization"
}
target_value = 40.0
}
target_tracking_configuration {
customized_metric_specification {
metric_dimension {
name = "fuga"
value = "fuga"
}
metric_name = "hoge"
namespace = "hoge"
statistic = "Average"
}
target_value = 40.0
}
disable_scale_in - (Optional, Default: false) Indicates whether scale in by the target tracking policy is disabled.
resource_label - (Optional) Identifies the resource associated with the metric type.
customized_metric_specification
The following arguments are supported:
metric_dimension
Attribute Reference
arn - The ARN assigned by AWS to the scaling policy.
Import
AutoScaling scaling policies can be imported using the autoscaling_group_name and name separated by / .
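For example, assuming a group named asg-name with a policy named policy-name (both names are illustrative):
$ terraform import aws_autoscaling_policy.test asg-name/policy-name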
Example Usage
Argument Reference
The following arguments are supported:
autoscaling_group_name - (Required) The name or Amazon Resource Name (ARN) of the Auto Scaling group.
start_time - (Optional) The time for this action to start, in "YYYY-MM-DDThh:mm:ssZ" format in UTC/GMT only (for
example, 2014-06-01T00:00:00Z ). If you try to schedule your action in the past, Auto Scaling returns an error message.
end_time - (Optional) The time for this action to end, in "YYYY-MM-DDThh:mm:ssZ" format in UTC/GMT only (for
example, 2014-06-01T00:00:00Z ). If you try to schedule your action in the past, Auto Scaling returns an error message.
recurrence - (Optional) The time when recurring future actions will start. Start time is specified by the user following
the Unix cron syntax format.
min_size - (Optional) The minimum size for the Auto Scaling group. Default 0. Set to -1 if you don't want to change
the minimum size at the scheduled time.
max_size - (Optional) The maximum size for the Auto Scaling group. Default 0. Set to -1 if you don't want to change
the maximum size at the scheduled time.
desired_capacity - (Optional) The number of EC2 instances that should be running in the group. Default 0. Set to -1
if you don't want to change the desired capacity at the scheduled time.
NOTE: When start_time and end_time are specified with recurrence , they form the boundaries of when the
recurring action will start and stop.
Attribute Reference
arn - The ARN assigned by AWS to the autoscaling schedule.
Import
AutoScaling ScheduledAction can be imported using the auto-scaling-group-name and scheduled-action-name , e.g.
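For example, assuming a group named auto-scaling-group-name with an action named scheduled-action-name (both names are illustrative):
$ terraform import aws_autoscaling_schedule.resource-name auto-scaling-group-name/scheduled-action-name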
Example Usage
rule {
rule_name = "tf_example_backup_rule"
target_vault_name = "${aws_backup_vault.test.name}"
schedule = "cron(0 12 * * ? *)"
}
}
Argument Reference
The following arguments are supported:
rule - (Required) A rule object that specifies a scheduled task that is used to back up a selection of resources.
tags - (Optional) Metadata that you can assign to help organize the plans you create.
Rule Arguments
For rule the following attributes are supported:
target_vault_name (Required) - The name of a logical container where backups are stored.
schedule (Optional) - A CRON expression specifying when AWS Backup initiates a backup job.
completion_window (Optional) - The amount of time AWS Backup attempts a backup before canceling the job and
returning an error.
lifecycle (Optional) - The lifecycle defines when a protected resource is transitioned to cold storage and when it
expires. Fields documented below.
recovery_point_tags (Optional) - Metadata that you can assign to help organize the resources that you create.
Lifecycle Arguments
For lifecycle the following attributes are supported:
cold_storage_after - (Optional) Specifies the number of days after creation that a recovery point is moved to cold
storage.
delete_after (Optional) - Specifies the number of days after creation that a recovery point is deleted. Must be 90
days greater than cold_storage_after .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
version - Unique, randomly generated, Unicode, UTF-8 encoded string that serves as the version ID of the backup
plan.
Resource: aws_backup_selection
Manages selection conditions for AWS Backup plan resources.
Example Usage
IAM Role
For more information about creating and managing IAM Roles for backups and restores, see the AWS Backup
Developer Guide (https://docs.aws.amazon.com/aws-backup/latest/devguide/iam-service-roles.html).
The below example creates an IAM role with the default managed IAM Policy for allowing AWS Backup to create backups.
iam_role_arn = "${aws_iam_role.example.arn}"
}
selection_tag {
type = "STRINGEQUALS"
key = "foo"
value = "bar"
}
}
resources = [
"${aws_db_instance.example.arn}",
"${aws_ebs_volume.example.arn}",
"${aws_efs_file_system.example.arn}",
]
}
Argument Reference
The following arguments are supported:
plan_id - (Required) The backup plan ID to be associated with the selection of resources.
iam_role_arn - (Required) The ARN of the IAM role that AWS Backup uses to authenticate when restoring and
backing up the target resource. See the AWS Backup Developer Guide (https://docs.aws.amazon.com/aws-
backup/latest/devguide/access-control.html#managed-policies) for additional information about using AWS managed
policies or creating custom policies attached to the IAM role.
selection_tag - (Optional) Tag-based conditions used to specify a set of resources to assign to a backup plan.
resources - (Optional) An array of strings that either contain Amazon Resource Names (ARNs) or match patterns of
resources to assign to a backup plan.
type - (Required) An operation, such as StringEquals , that is applied to a key-value pair used to filter resources in a
selection.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Backup selection can be imported using the plan_id and id separated by | .
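For example, assuming a plan ID plan-id and a selection ID selection-id (both values are illustrative):
$ terraform import aws_backup_selection.example "plan-id|selection-id"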
Example Usage
Argument Reference
The following arguments are supported:
tags - (Optional) Metadata that you can assign to help organize the resources that you create.
kms_key_arn - (Optional) The server-side encryption key that is used to protect your backups.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
recovery_points - The number of recovery points that are stored in a backup vault.
Import
Backup vault can be imported using the name , e.g.
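For example, assuming a vault named example-backup-vault (the name is illustrative):
$ terraform import aws_backup_vault.example example-backup-vault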
For information about AWS Batch, see What is AWS Batch? (http://docs.aws.amazon.com/batch/latest/userguide/what-is-
batch.html) . For information about compute environment, see Compute Environments
(http://docs.aws.amazon.com/batch/latest/userguide/compute_environments.html) .
Note: To prevent a race condition during environment deletion, make sure to set depends_on to the related
aws_iam_role_policy_attachment ; otherwise, the policy may be destroyed too soon and the compute environment
will then get stuck in the DELETING state, see Troubleshooting AWS Batch
(http://docs.aws.amazon.com/batch/latest/userguide/troubleshooting.html) .
Example Usage
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
]
}
EOF
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "batch.amazonaws.com"
}
}
]
}
EOF
}
compute_resources {
instance_role = "${aws_iam_instance_profile.ecs_instance_role.arn}"
instance_type = [
"c4.large",
]
max_vcpus = 16
min_vcpus = 0
security_group_ids = [
"${aws_security_group.sample.id}",
]
subnets = [
"${aws_subnet.sample.id}",
]
type = "EC2"
}
service_role = "${aws_iam_role.aws_batch_service_role.arn}"
type = "MANAGED"
depends_on = ["aws_iam_role_policy_attachment.aws_batch_service_role"]
}
Argument Reference
compute_environment_name - (Required) The name for your compute environment. Up to 128 letters (uppercase and
lowercase), numbers, and underscores are allowed.
compute_resources - (Optional) Details of the compute resources managed by the compute environment. This
parameter is required for managed compute environments. See details below.
service_role - (Required) The full Amazon Resource Name (ARN) of the IAM role that allows AWS Batch to make
calls to other AWS services on your behalf.
state - (Optional) The state of the compute environment. If the state is ENABLED , then the compute environment
accepts jobs from a queue and can scale out automatically based on queues. Valid items are ENABLED or DISABLED .
Defaults to ENABLED .
type - (Required) The type of the compute environment. Valid items are MANAGED or UNMANAGED .
bid_percentage - (Optional) Integer of minimum percentage that a Spot Instance price must be when compared with
the On-Demand price for that instance type before instances are launched. For example, if your bid percentage is 20%
( 20 ), then the Spot price must be below 20% of the current On-Demand price for that EC2 instance. This parameter is
required for SPOT compute environments.
desired_vcpus - (Optional) The desired number of EC2 vCPUS in the compute environment.
ec2_key_pair - (Optional) The EC2 key pair that is used for instances launched in the compute environment.
image_id - (Optional) The Amazon Machine Image (AMI) ID used for instances launched in the compute environment.
instance_role - (Required) The Amazon ECS instance role applied to Amazon EC2 instances in a compute
environment.
launch_template - (Optional) The launch template to use for your compute resources. See details below.
max_vcpus - (Required) The maximum number of EC2 vCPUs that an environment can reach.
min_vcpus - (Required) The minimum number of EC2 vCPUs that an environment should maintain.
security_group_ids - (Required) A list of EC2 security groups that are associated with instances launched in the
compute environment.
spot_iam_fleet_role - (Optional) The Amazon Resource Name (ARN) of the Amazon EC2 Spot Fleet IAM role applied
to a SPOT compute environment. This parameter is required for SPOT compute environments.
subnets - (Required) A list of VPC subnets into which the compute resources are launched.
tags - (Optional) Key-value pair tags to be applied to resources that are launched in the compute environment.
type - (Required) The type of compute environment. Valid items are EC2 or SPOT .
launch_template
launch_template supports the following:
launch_template_id - (Optional) ID of the launch template. You must specify either the launch template ID or launch
template name in the request, but not both.
version - (Optional) The version number of the launch template. Default: The default version of the launch template.
Attributes Reference
arn - The Amazon Resource Name (ARN) of the compute environment.
ecs_cluster_arn - The Amazon Resource Name (ARN) of the underlying Amazon ECS cluster used by the compute
environment.
status - The current status of the compute environment (for example, CREATING or VALID).
status_reason - A short, human-readable string to provide additional details about the current status of the
compute environment.
Resource: aws_batch_job_definition
Provides a Batch Job Definition resource.
Example Usage
container_properties = <<CONTAINER_PROPERTIES
{
"command": ["ls", "-la"],
"image": "busybox",
"memory": 1024,
"vcpus": 1,
"volumes": [
{
"host": {
"sourcePath": "/tmp"
},
"name": "tmp"
}
],
"environment": [
{"name": "VARNAME", "value": "VARVAL"}
],
"mountPoints": [
{
"sourceVolume": "tmp",
"containerPath": "/tmp",
"readOnly": false
}
],
"ulimits": [
{
"hardLimit": 1024,
"name": "nofile",
"softLimit": 1024
}
]
}
CONTAINER_PROPERTIES
}
Argument Reference
The following arguments are supported:
parameters - (Optional) Specifies the parameter substitution placeholders to set in the job definition.
retry_strategy - (Optional) Specifies the retry strategy to use for failed jobs that are submitted with this job
definition. Maximum number of retry_strategy is 1 . Defined below.
timeout - (Optional) Specifies the timeout for jobs so that if a job runs longer, AWS Batch terminates the job.
Maximum number of timeout is 1 . Defined below.
retry_strategy
retry_strategy supports the following:
attempts - (Optional) The number of times to move a job to the RUNNABLE status. You may specify between 1 and
10 attempts.
timeout
timeout supports the following:
attempt_duration_seconds - (Optional) The time duration in seconds after which AWS Batch terminates your jobs if
they have not finished. The minimum value for the timeout is 60 seconds.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
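A minimal job queue configuration might look like the following sketch; the queue name and the referenced compute environment ( aws_batch_compute_environment.sample ) are illustrative:
resource "aws_batch_job_queue" "example" {
  name     = "tf-example-job-queue"
  state    = "ENABLED"
  priority = 1

  compute_environments = [
    "${aws_batch_compute_environment.sample.arn}", # illustrative reference
  ]
}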
Argument Reference
The following arguments are supported:
compute_environments - (Required) Specifies the set of compute environments mapped to a job queue and their
order. The position of the compute environments in the list will dictate the order. You can associate up to 3 compute
environments with a job queue.
priority - (Required) The priority of the job queue. Job queues with a higher priority are evaluated first when
associated with the same compute environment.
state - (Required) The state of the job queue. Must be one of: ENABLED or DISABLED
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
cost_filters = {
Service = "Amazon Elastic Compute Cloud - Compute"
}
notification {
comparison_operator = "GREATER_THAN"
threshold = 100
threshold_type = "PERCENTAGE"
notification_type = "FORECASTED"
subscriber_email_addresses = ["test@example.com"]
}
}
budget_type = "COST"
limit_amount = "100"
limit_unit = "USD"
}
budget_type = "USAGE"
limit_amount = "3"
limit_unit = "GB"
}
Argument Reference
For more detailed documentation about each argument, refer to the AWS official documentation
(http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/data-type-budget.html).
account_id - (Optional) The ID of the target account for budget. Will use current user's account_id by default if
omitted.
name_prefix - (Optional) The prefix of the name of a budget. Unique within accounts.
cost_types - (Optional) Object containing CostTypes: the types of cost included in a budget, such as tax and
subscriptions.
limit_amount - (Required) The amount of cost or usage being measured for a budget.
limit_unit - (Required) The unit of measurement used for the budget forecast, actual spend, or budget threshold,
such as dollars or GB. See Spend (http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/data-type-
spend.html) documentation.
time_period_end - (Optional) The end of the time period covered by the budget. There are no restrictions on the
end date. Format: 2017-01-01_12:00 .
time_period_start - (Required) The start of the time period covered by the budget. The start date must come
before the end date. Format: 2017-01-01_12:00 .
time_unit - (Required) The length of time until a budget resets the actual and forecasted spend. Valid values:
MONTHLY , QUARTERLY , ANNUALLY .
notification - (Optional) Object containing Budget Notifications. Can be used multiple times to define more than
one budget notification.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - id of resource.
CostTypes
Valid keys for cost_types parameter.
include_credit - A boolean value whether to include credits in the cost budget. Defaults to true
include_other_subscription - A boolean value whether to include other subscription costs in the cost budget.
Defaults to true
include_recurring - A boolean value whether to include recurring costs in the cost budget. Defaults to true
include_refund - A boolean value whether to include refunds in the cost budget. Defaults to true
include_subscription - A boolean value whether to include subscriptions in the cost budget. Defaults to true
include_support - A boolean value whether to include support costs in the cost budget. Defaults to true
include_tax - A boolean value whether to include tax in the cost budget. Defaults to true
include_upfront - A boolean value whether to include upfront costs in the cost budget. Defaults to true
use_amortized - Specifies whether a budget uses the amortized rate. Defaults to false
use_blended - A boolean value whether to use blended costs in the cost budget. Defaults to false
CostFilters
Valid keys for cost_filters parameter vary depending on the budget_type value.
cost
AZ
LinkedAccount
Operation
PurchaseType
Service
TagKeyValue
usage
AZ
LinkedAccount
Operation
PurchaseType
UsageType:<service name>
TagKeyValue
BudgetNotification
Valid keys for notification parameter.
comparison_operator - (Required) Comparison operator to use to evaluate the condition. Can be LESS_THAN ,
EQUAL_TO or GREATER_THAN .
notification_type - (Required) What kind of budget value to notify on. Can be ACTUAL or FORECASTED
Import
Budgets can be imported using AccountID:BudgetName , e.g.
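For example, assuming account ID 123456789012 and a budget named myBudget (both values are illustrative):
$ terraform import aws_budgets_budget.myBudget 123456789012:myBudget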
Example Usage
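A minimal environment might look like the following sketch; the resource name, environment name, and instance type are illustrative:
resource "aws_cloud9_environment_ec2" "example" {
  name          = "example-env" # illustrative name
  instance_type = "t2.micro"
}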
Argument Reference
The following arguments are supported:
instance_type - (Required) The type of instance to connect to the environment, e.g. t2.micro .
automatic_stop_time_minutes - (Optional) The number of minutes until the running instance is shut down after the
environment has last been used.
owner_arn - (Optional) The ARN of the environment owner. This can be ARN of any AWS IAM principal. Defaults to the
environment's creator.
subnet_id - (Optional) The ID of the subnet in Amazon VPC that AWS Cloud9 will use to communicate with the
Amazon EC2 instance.
Attributes Reference
In addition to the arguments listed above, the following attributes are exported:
Example Usage
parameters = {
VPCCidr = "10.0.0.0/16"
}
template_body = <<STACK
{
"Parameters" : {
"VPCCidr" : {
"Type" : "String",
"Default" : "10.0.0.0/16",
"Description" : "Enter the CIDR block for the VPC. Default is 10.0.0.0/16."
}
},
"Resources" : {
"myVpc": {
"Type" : "AWS::EC2::VPC",
"Properties" : {
"CidrBlock" : { "Ref" : "VPCCidr" },
"Tags" : [
{"Key": "Name", "Value": "Primary_CF_VPC"}
]
}
}
}
}
STACK
}
Argument Reference
The following arguments are supported:
template_body - (Optional) Structure containing the template body (max size: 51,200 bytes).
template_url - (Optional) Location of a file containing the template body (max size: 460,800 bytes).
disable_rollback - (Optional) Set to true to disable rollback of the stack if stack creation failed. Conflicts with
on_failure .
notification_arns - (Optional) A list of SNS topic ARNs to publish stack related events.
on_failure - (Optional) Action to be taken if stack creation fails. This must be one of: DO_NOTHING , ROLLBACK , or
DELETE . Conflicts with disable_rollback .
parameters - (Optional) A map of Parameter structures that specify input parameters for the stack.
policy_body - (Optional) Structure containing the stack policy body. Conflicts w/ policy_url .
policy_url - (Optional) Location of a file containing the stack policy. Conflicts w/ policy_body .
iam_role_arn - (Optional) The ARN of an IAM role that AWS CloudFormation assumes to create the stack. If you don't
specify a value, AWS CloudFormation uses the role that was previously associated with the stack. If no role is available,
AWS CloudFormation uses a temporary session that is generated from your user credentials.
timeout_in_minutes - (Optional) The amount of time that can pass before the stack status becomes
CREATE_FAILED .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Cloudformation Stacks can be imported using the name , e.g.
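For example, assuming a stack named networking-stack (the name is illustrative):
$ terraform import aws_cloudformation_stack.stack networking-stack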
Timeouts
aws_cloudformation_stack provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration
options:
NOTE: All template parameters, including those with a Default , must be configured or ignored with the lifecycle
configuration block ignore_changes argument.
NOTE: All NoEcho template parameters must be ignored with the lifecycle configuration block ignore_changes
argument.
Example Usage
principals {
identifiers = ["cloudformation.amazonaws.com"]
type = "Service"
}
}
}
parameters = {
VPCCidr = "10.0.0.0/16"
}
template_body = <<TEMPLATE
{
"Parameters" : {
"VPCCidr" : {
"Type" : "String",
"Default" : "10.0.0.0/16",
"Description" : "Enter the CIDR block for the VPC. Default is 10.0.0.0/16."
}
},
"Resources" : {
"myVpc": {
"Type" : "AWS::EC2::VPC",
"Properties" : {
"CidrBlock" : { "Ref" : "VPCCidr" },
"Tags" : [
{"Key": "Name", "Value": "Primary_CF_VPC"}
]
}
}
}
}
TEMPLATE
}
Argument Reference
The following arguments are supported:
administration_role_arn - (Required) Amazon Resource Name (ARN) of the IAM Role in the administrator
account.
name - (Required) Name of the Stack Set. The name must be unique in the region where you create your Stack Set.
The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphabetic
character and cannot be longer than 128 characters.
execution_role_name - (Optional) Name of the IAM Role in all target accounts for Stack Set operations. Defaults to
AWSCloudFormationStackSetExecutionRole .
parameters - (Optional) Key-value map of input parameters for the Stack Set template. All template parameters,
including those with a Default , must be configured or ignored with lifecycle configuration block
ignore_changes argument. All NoEcho template parameters must be ignored with the lifecycle configuration
block ignore_changes argument.
tags - (Optional) Key-value map of tags to associate with this Stack Set and the Stacks created from it. AWS
CloudFormation also propagates these tags to supported resources that are created in the Stacks. A maximum
number of 50 tags can be specified.
template_body - (Optional) String containing the CloudFormation template body. Maximum size: 51,200 bytes.
Conflicts with template_url .
template_url - (Optional) String containing the location of a file containing the CloudFormation template body. The
URL must point to a template that is located in an Amazon S3 bucket. Maximum location file size: 460,800 bytes.
Conflicts with template_body .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
CloudFormation Stack Sets can be imported using the name , e.g.
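For example, assuming a Stack Set named example (the name is illustrative):
$ terraform import aws_cloudformation_stack_set.example example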
NOTE: All target accounts must have an IAM Role created that matches the name of the execution role configured in the
Stack Set (the execution_role_name argument in the aws_cloudformation_stack_set resource) in a trust
relationship with the administrative account or administration IAM Role. The execution role must have appropriate
permissions to manage resources defined in the template along with those required for Stack Sets to operate. See the
AWS CloudFormation User Guide (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-
prereqs.html) for more details.
NOTE: To retain the Stack during Terraform resource destroy, ensure retain_stack = true has been successfully
applied into the Terraform state first. This must be completed before an apply that would destroy the resource.
Example Usage
principals {
identifiers = ["${aws_iam_role.AWSCloudFormationStackSetAdministrationRole.arn}"]
type = "AWS"
}
}
}
effect = "Allow"
resources = ["*"]
}
}
Argument Reference
The following arguments are supported:
account_id - (Optional) Target AWS Account ID to create a Stack based on the Stack Set. Defaults to current account.
parameter_overrides - (Optional) Key-value map of input parameters to override from the Stack Set for this
Instance.
region - (Optional) Target AWS Region to create a Stack based on the Stack Set. Defaults to current region.
retain_stack - (Optional) During Terraform resource destroy, remove Instance from Stack Set while keeping the
Stack and its associated resources. Must be enabled in Terraform state before destroy operation to take effect. You
cannot reassociate a retained Stack or add an existing, saved Stack to a new Stack Set. Defaults to false .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - Stack Set name, target AWS account ID, and target AWS region separated by commas ( , )
Timeouts
aws_cloudformation_stack_set_instance provides the following Timeouts
(/docs/configuration/resources.html#timeouts) configuration options:
Import
CloudFormation Stack Set Instances can be imported using the Stack Set name, target AWS account ID, and target AWS
region separated by commas ( , ) e.g.
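For example, assuming a Stack Set named example deployed to account 123456789012 in us-east-1 (all values are illustrative):
$ terraform import aws_cloudformation_stack_set_instance.example example,123456789012,us-east-1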
For information about CloudFront distributions, see the Amazon CloudFront Developer Guide
(http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html). For specific information about creating
CloudFront web distributions, see the POST Distribution
(https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateDistribution.html) page in the Amazon CloudFront API
Reference.
NOTE: CloudFront distributions take about 15 minutes to reach a deployed state after creation or modification. During this time, deletes to
resources will be blocked. If you need to delete a distribution that is enabled and you do not want to wait, you need to use the
retain_on_delete flag.
Example Usage
The following example creates a CloudFront distribution with an S3 origin.
tags = {
Name = "My bucket"
}
}
locals {
s3_origin_id = "myS3Origin"
}
s3_origin_config {
origin_access_identity = "origin-access-identity/cloudfront/ABCDEFG1234567"
}
}
enabled = true
is_ipv6_enabled = true
comment = "Some comment"
default_root_object = "index.html"
logging_config {
include_cookies = false
bucket = "mylogs.s3.amazonaws.com"
prefix = "myprefix"
}
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "${local.s3_origin_id}"
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}
ordered_cache_behavior {
path_pattern = "/content/immutable/*"
allowed_methods = ["GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD", "OPTIONS"]
target_origin_id = "${local.s3_origin_id}"
forwarded_values {
query_string = false
headers = ["Origin"]
cookies {
forward = "none"
}
}
min_ttl = 0
default_ttl = 86400
max_ttl = 31536000
compress = true
viewer_protocol_policy = "redirect-to-https"
}
ordered_cache_behavior {
path_pattern = "/content/*"
allowed_methods = ["GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "${local.s3_origin_id}"
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
compress = true
viewer_protocol_policy = "redirect-to-https"
}
price_class = "PriceClass_200"
restrictions {
geo_restriction {
restriction_type = "whitelist"
locations = ["US", "CA", "GB", "DE"]
}
}
tags = {
Environment = "production"
}
viewer_certificate {
cloudfront_default_certificate = true
}
}
The following example creates a CloudFront distribution with an origin group for failover routing:
failover_criteria {
status_codes = [403, 404, 500, 502]
}
member {
origin_id = "primaryS3"
}
member {
origin_id = "failoverS3"
}
}
origin {
domain_name = "${aws_s3_bucket.primary.bucket_regional_domain_name}"
origin_id = "primaryS3"
s3_origin_config {
origin_access_identity = "${aws_cloudfront_origin_access_identity.default.cloudfront_access_identity_path}"
}
}
origin {
domain_name = "${aws_s3_bucket.failover.bucket_regional_domain_name}"
origin_id = "failoverS3"
s3_origin_config {
origin_access_identity = "${aws_cloudfront_origin_access_identity.default.cloudfront_access_identity_path}"
}
}
default_cache_behavior {
target_origin_id = "groupS3"
}
Argument Reference
The CloudFront distribution argument layout is a complex structure composed of several sub-resources - these resources are laid out
below.
Top-Level Arguments
aliases (Optional) - Extra CNAMEs (alternate domain names), if any, for this distribution.
comment (Optional) - Any comments you want to include about the distribution.
custom_error_response (Optional) - One or more custom error response elements (multiples allowed).
default_cache_behavior (Required) - The default cache behavior for this distribution (maximum one).
default_root_object (Optional) - The object that you want CloudFront to return (for example, index.html) when an end user
requests the root URL.
enabled (Required) - Whether the distribution is enabled to accept end user requests for content.
http_version (Optional) - The maximum HTTP version to support on the distribution. Allowed values are http1.1 and http2 .
The default is http2 .
logging_config (Optional) - The logging configuration that controls how logs are written to your distribution (maximum one).
ordered_cache_behavior (Optional) - An ordered list of cache behaviors resource for this distribution. List from top to bottom in
order of precedence. The topmost cache behavior will have precedence 0.
origin (Required) - One or more origins for this distribution (multiples allowed).
origin_group (Optional) - One or more origin_group for this distribution (multiples allowed).
price_class (Optional) - The price class for this distribution. One of PriceClass_All , PriceClass_200 , PriceClass_100
restrictions (Required) - The restriction configuration for this distribution (maximum one).
viewer_certificate (Required) - The SSL configuration for this distribution (maximum one).
web_acl_id (Optional) - If you're using AWS WAF to filter CloudFront requests, the Id of the AWS WAF web ACL that is associated
with the distribution. The WAF Web ACL must exist in the WAF Global (CloudFront) region and the credentials configuring this
argument must have waf:GetWebACL permissions assigned.
retain_on_delete (Optional) - Disables the distribution instead of deleting it when destroying the resource through Terraform. If
this is set, the distribution needs to be deleted manually afterwards. Default: false .
wait_for_deployment (Optional) - If enabled, the resource will wait for the distribution status to change from InProgress to
Deployed . Setting this to false will skip the process. Default: true .
allowed_methods (Required) - Controls which HTTP methods CloudFront processes and forwards to your Amazon S3 bucket or
your custom origin.
cached_methods (Required) - Controls whether CloudFront caches the response to requests using the specified HTTP methods.
compress (Optional) - Whether you want CloudFront to automatically compress content for web requests that include Accept-
Encoding: gzip in the request header (default: false ).
default_ttl (Optional) - The default amount of time (in seconds) that an object is in a CloudFront cache before CloudFront
forwards another request in the absence of a Cache-Control max-age or Expires header. Defaults to 1 day.
field_level_encryption_id (Optional) - Field level encryption configuration ID
forwarded_values (Required) - The forwarded values configuration that specifies how CloudFront handles query strings, cookies
and headers (maximum one).
lambda_function_association (Optional) - A config block that triggers a lambda function with specific actions. Defined below,
maximum 4.
max_ttl (Optional) - The maximum amount of time (in seconds) that an object is in a CloudFront cache before CloudFront
forwards another request to your origin to determine whether the object has been updated. Only effective in the presence of
Cache-Control max-age , Cache-Control s-maxage , and Expires headers. Defaults to 365 days.
min_ttl (Optional) - The minimum amount of time that you want objects to stay in CloudFront caches before CloudFront queries
your origin to see whether the object has been updated. Defaults to 0 seconds.
path_pattern (Required) - The pattern (for example, images/*.jpg) that specifies which requests you want this cache behavior
to apply to.
smooth_streaming (Optional) - Indicates whether you want to distribute media files in Microsoft Smooth Streaming format using
the origin that is associated with this cache behavior.
target_origin_id (Required) - The value of ID for the origin that you want CloudFront to route requests to when a request
matches the path pattern either for a cache behavior or for the default cache behavior.
trusted_signers (Optional) - The AWS accounts, if any, that you want to allow to create signed URLs for private content.
viewer_protocol_policy (Required) - Use this element to specify the protocol that users can use to access the files in the origin
specified by TargetOriginId when a request matches the path pattern in PathPattern. One of allow-all , https-only , or
redirect-to-https .
cookies (Required) - The forwarded values cookies that specifies how CloudFront handles cookies (maximum one).
headers (Optional) - Specifies the Headers, if any, that you want CloudFront to vary upon for this cache behavior. Specify * to
include all headers.
query_string (Required) - Indicates whether you want CloudFront to forward query strings to the origin that is associated with
this cache behavior.
query_string_cache_keys (Optional) - When specified, along with a value of true for query_string , all query strings are
forwarded, however only the query string keys listed in this argument are cached. When omitted with a value of true for
query_string , all query string keys are cached.
Lambda@Edge allows you to associate an AWS Lambda Function with a predefined event. You can associate a single function per event
type. See What is Lambda@Edge (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/what-is-lambda-at-edge.html)
for more information.
ordered_cache_behavior {
lambda_function_association {
event_type = "viewer-request"
lambda_arn = "${aws_lambda_function.example.qualified_arn}"
include_body = false
}
}
}
event_type (Required) - The specific event to trigger this function. Valid values: viewer-request , origin-request , viewer-
response , origin-response
include_body (Optional) - When set to true it exposes the request body to the lambda function. Defaults to false. Valid values:
true , false .
Cookies Arguments
forward (Required) - Specifies whether you want CloudFront to forward cookies to the origin that is associated with this cache
behavior. You can specify all , none or whitelist . If whitelist , you must include the subsequent whitelisted_names
whitelisted_names (Optional) - If you have specified whitelist to forward , the whitelisted cookies that you want CloudFront
to forward to your origin.
error_caching_min_ttl (Optional) - The minimum amount of time you want HTTP error codes to stay in CloudFront caches
before CloudFront queries your origin to see whether the object has been updated.
error_code (Required) - The 4xx or 5xx HTTP status code that you want to customize.
response_code (Optional) - The HTTP status code that you want CloudFront to return with the custom error page to the viewer.
response_page_path (Optional) - The path of the custom error page (for example, /custom_404.html ).
The arguments for default_cache_behavior are the same as for ordered_cache_behavior , except for the path_pattern argument
is not required.
bucket (Required) - The Amazon S3 bucket to store the access logs in, for example, myawslogbucket.s3.amazonaws.com .
include_cookies (Optional) - Specifies whether you want CloudFront to include cookies in access logs (default: false ).
prefix (Optional) - An optional string that you want CloudFront to prefix to the access log filenames for this distribution, for
example, myprefix/ .
Origin Arguments
custom_origin_config - The CloudFront custom origin configuration information. If an S3 origin is required, use
s3_origin_config instead.
domain_name (Required) - The DNS domain name of either the S3 bucket, or web site of your custom origin.
custom_header (Optional) - One or more sub-resources with name and value parameters that specify header data that will be
sent to the origin (multiples allowed).
origin_path (Optional) - An optional element that causes CloudFront to request your content from a directory in your Amazon S3
bucket or your custom origin.
s3_origin_config - The CloudFront S3 origin configuration information. If a custom origin is required, use
custom_origin_config instead.
http_port (Required) - The HTTP port the custom origin listens on.
https_port (Required) - The HTTPS port the custom origin listens on.
origin_protocol_policy (Required) - The origin protocol policy to apply to your origin. One of http-only , https-only , or
match-viewer .
origin_ssl_protocols (Required) - The SSL/TLS protocols that you want CloudFront to use when communicating with your
origin over HTTPS. A list of one or more of SSLv3 , TLSv1 , TLSv1.1 , and TLSv1.2 .
origin_keepalive_timeout - (Optional) The Custom KeepAlive timeout, in seconds. By default, AWS enforces a limit of 60 . But
you can request an increase
(http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html#request-
custom-request-timeout).
origin_read_timeout - (Optional) The Custom Read timeout, in seconds. By default, AWS enforces a limit of 60 . But you can
request an increase
(http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html#request-
custom-request-timeout).
failover_criteria (Required) - The failover criteria for when to failover to the secondary origin
member (Required) - Ordered member configuration blocks assigned to the origin group, where the first member is the primary
origin. Minimum 2.
status_codes (Required) - A list of HTTP status codes for the origin group
Member Arguments
Restrictions Arguments
The restrictions sub-resource takes another single sub-resource named geo_restriction (see the example for usage).
The arguments of geo_restriction are:
restriction_type (Required) - The method that you want to use to restrict distribution of your content by country: none ,
whitelist , or blacklist .
acm_certificate_arn - The ARN of the AWS Certificate Manager (https://aws.amazon.com/certificate-manager/) certificate that
you wish to use with this distribution. Specify this, cloudfront_default_certificate , or iam_certificate_id . The ACM
certificate must be in US-EAST-1.
cloudfront_default_certificate - true if you want viewers to use HTTPS to request your objects and you're using the
CloudFront domain name for your distribution. Specify this, acm_certificate_arn , or iam_certificate_id .
iam_certificate_id - The IAM certificate identifier of the custom viewer certificate for this distribution if you are using a custom
domain. Specify this, acm_certificate_arn , or cloudfront_default_certificate .
minimum_protocol_version - The minimum version of the SSL protocol that you want CloudFront to use for HTTPS connections.
One of SSLv3 , TLSv1 , TLSv1_2016 , TLSv1.1_2016 or TLSv1.2_2018 . Default: TLSv1 . NOTE: If you are using a custom
certificate (specified with acm_certificate_arn or iam_certificate_id ), and have specified sni-only in
ssl_support_method , TLSv1 or later must be specified. If you have specified vip in ssl_support_method , only SSLv3 or
TLSv1 can be specified. If you have specified cloudfront_default_certificate , TLSv1 must be specified.
ssl_support_method : Specifies how you want CloudFront to serve HTTPS requests. One of vip or sni-only . Required if you
specify acm_certificate_arn or iam_certificate_id . NOTE: vip causes CloudFront to use a dedicated IP address and may
incur extra charges.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
arn - The ARN (Amazon Resource Name) for the distribution. For example:
arn:aws:cloudfront::123456789012:distribution/EDFDVBD632BHDS5, where 123456789012 is your AWS account ID.
caller_reference - Internal value used by CloudFront to allow future updates to the distribution configuration.
status - The current status of the distribution. Deployed if the distribution's information is fully propagated throughout the
Amazon CloudFront system.
active_trusted_signers - The key pair IDs that CloudFront is aware of for each trusted signer, if the distribution is set up to
serve private content with signed URLs.
domain_name - The domain name corresponding to the distribution. For example: d604721fxaaqy9.cloudfront.net .
last_modified_time - The date and time the distribution was last modified.
etag - The current version of the distribution's information. For example: E2QWRUHAPOMQZL .
hosted_zone_id - The CloudFront Route 53 zone ID that can be used to route an Alias Resource Record Set
(http://docs.aws.amazon.com/Route53/latest/APIReference/CreateAliasRRSAPI.html) to. This attribute is simply an alias for the zone
ID Z2FDTNDATAQYW2 .
Import
Cloudfront Distributions can be imported using the id , e.g.
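For example, assuming a distribution with the ID E74FTE3EXAMPLE (the ID is illustrative):
$ terraform import aws_cloudfront_distribution.distribution E74FTE3EXAMPLE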
For information about CloudFront distributions, see the Amazon CloudFront Developer Guide
(http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html). For more information on
generating origin access identities, see Using an Origin Access Identity to Restrict Access to Your Amazon S3 Content
(http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html).
Example Usage
The following example creates a CloudFront origin access identity.
Argument Reference
comment (Optional) - An optional comment for the origin access identity.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
caller_reference - Internal value used by CloudFront to allow future updates to the origin access identity.
cloudfront_access_identity_path - A shortcut to the full path for the origin access identity to use in CloudFront,
see below.
etag - The current version of the origin access identity's information. For example: E2QWRUHAPOMQZL .
iam_arn - A pre-generated ARN for use in S3 bucket policies (see below). Example:
arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2QWRUHAPOMQZL .
s3_canonical_user_id - The Amazon S3 canonical user ID for the origin access identity, which you use when giving
the origin access identity read permission to an object in Amazon S3.
s3_origin_config {
origin_access_identity = "${aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path}"
}
principals {
type = "AWS"
identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
}
}
statement {
actions = ["s3:ListBucket"]
resources = ["${aws_s3_bucket.example.arn}"]
principals {
type = "AWS"
identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
}
}
}
Import
Cloudfront Origin Access Identities can be imported using the id , e.g.
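For example, assuming an origin access identity with the ID E74FTE3AEXAMPLE (the ID is illustrative):
$ terraform import aws_cloudfront_origin_access_identity.origin_access E74FTE3AEXAMPLE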
Example Usage
The following example creates a CloudFront public key.
Argument Reference
The following arguments are supported:
encoded_key - (Required) The encoded public key that you want to add to CloudFront to use with features like field-
level encryption.
name - (Optional) The name for the public key. By default generated by Terraform.
name_prefix - (Optional) The name for the public key. Conflicts with name .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
caller_reference - Internal value used by CloudFront to allow future updates to the public key configuration.
etag - The current version of the public key. For example: E2QWRUHAPOMQZL .
For information about CloudHSM v2, see the AWS CloudHSM User Guide
(https://docs.aws.amazon.com/cloudhsm/latest/userguide/introduction.html) and the Amazon CloudHSM API Reference
(https://docs.aws.amazon.com/cloudhsm/latest/APIReference/Welcome.html).
NOTE: CloudHSM can take up to several minutes to be set up. Practically no attribute other than tags can be updated
after creation. If you need to delete a cluster, you have to remove its HSM modules first. To initialize the cluster, you have
to add an HSM instance to the cluster, then sign the CSR and upload it.
Example Usage
The following example creates a CloudHSM cluster.
provider "aws" {
region = "${var.aws_region}"
}
tags = {
Name = "example-aws_cloudhsm_v2_cluster"
}
}
tags = {
Name = "example-aws_cloudhsm_v2_cluster"
}
}
tags = {
Name = "example-aws_cloudhsm_v2_cluster"
}
}
Argument Reference
The following arguments are supported:
hsm_type - (Required) The type of HSM module in the cluster. Currently, only hsm1.medium is supported.
Attributes Reference
The following attributes are exported:
vpc_id - The id of the VPC that the CloudHSM cluster resides in.
security_group_id - The ID of the security group associated with the CloudHSM cluster.
cluster_certificates.0.cluster_csr - The certificate signing request (CSR). Available only in UNINITIALIZED
state after an HSM instance is added to the cluster.
cluster_certificates.0.hsm_certificate - The HSM certificate issued (signed) by the HSM hardware.
Example Usage
The following example creates an HSM module in the CloudHSM cluster.
Argument Reference
The following arguments are supported:
cluster_id - (Required) The ID of Cloud HSM v2 cluster to which HSM will be added.
availability_zone - (Optional) The IDs of AZ in which HSM module will be located. Do not use together with
subnet_id.
ip_address - (Optional) The IP address of HSM module. Must be within the CIDR of selected subnet.
Attributes Reference
The following attributes are exported:
NOTE: For a multi-region trail, this resource must be in the home region of the trail.
NOTE: For an organization trail, this resource must be in the master account of the organization.
Example Usage
Basic
Enable CloudTrail to capture all compatible management events in region. For capturing events from services like IAM,
include_global_service_events must be enabled.
data "aws_caller_identity" "current" {}
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AWSCloudTrailAclCheck",
"Effect": "Allow",
"Principal": {
"Service": "cloudtrail.amazonaws.com"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::tf-test-trail"
},
{
"Sid": "AWSCloudTrailWrite",
"Effect": "Allow",
"Principal": {
"Service": "cloudtrail.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::tf-test-trail/prefix/AWSLogs/${data.aws_caller_identity.current.acc
ount_id}/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
}
]
}
POLICY
}
event_selector {
read_write_type = "All"
include_management_events = true
data_resource {
type = "AWS::Lambda::Function"
values = ["arn:aws:lambda"]
}
}
}
event_selector {
read_write_type = "All"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["arn:aws:s3:::"]
}
}
}
event_selector {
read_write_type = "All"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["${data.aws_s3_bucket.important-bucket.arn}/"]
}
}
}
Argument Reference
The following arguments are supported:
s3_bucket_name - (Required) Specifies the name of the S3 bucket designated for publishing log files.
s3_key_prefix - (Optional) Specifies the S3 key prefix that follows the name of the bucket you have designated for
log file delivery.
cloud_watch_logs_role_arn - (Optional) Specifies the role for the CloudWatch Logs endpoint to assume to write to
a user's log group.
cloud_watch_logs_group_arn - (Optional) Specifies a log group name using an Amazon Resource Name (ARN), that
represents the log group to which CloudTrail logs will be delivered.
enable_logging - (Optional) Enables logging for the trail. Defaults to true . Setting this to false will pause logging.
include_global_service_events - (Optional) Specifies whether the trail is publishing events from global services
such as IAM to the log files. Defaults to true .
is_multi_region_trail - (Optional) Specifies whether the trail is created in the current region or in all regions.
Defaults to false .
is_organization_trail - (Optional) Specifies whether the trail is an AWS Organizations trail. Organization trails log
events for the master account and all member accounts. Can only be created in the organization master account.
Defaults to false .
sns_topic_name - (Optional) Specifies the name of the Amazon SNS topic defined for notification of log file delivery.
event_selector - (Optional) Specifies an event selector for enabling data event logging. Fields documented below.
Please note the CloudTrail limits (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/WhatIsCloudTrail-
Limits.html) when configuring these.
read_write_type (Optional) - Specify if you want your trail to log read-only events, write-only events, or all. By
default, the value is All. You can specify only the following value: "ReadOnly", "WriteOnly", "All". Defaults to All .
include_management_events (Optional) - Specify if you want your event selector to include management events for
your trail.
type (Required) - The resource type in which you want to log data events. You can specify only the following values:
"AWS::S3::Object", "AWS::Lambda::Function"
values (Required) - A list of ARNs for the specified S3 buckets and object prefixes.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Import
Cloudtrails can be imported using the name , e.g.
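For example, assuming a trail named my-sample-trail (the name is illustrative):
$ terraform import aws_cloudtrail.sample my-sample-trail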
Example Usage
dashboard_body = <<EOF
{
"widgets": [
{
"type":"metric",
"x":0,
"y":0,
"width":12,
"height":6,
"properties":{
"metrics":[
[
"AWS/EC2",
"CPUUtilization",
"InstanceId",
"i-012345"
]
],
"period":300,
"stat":"Average",
"region":"us-east-1",
"title":"EC2 Instance CPU"
}
},
{
"type":"text",
"x":0,
"y":7,
"width":3,
"height":3,
"properties":{
"markdown":"Hello world"
}
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
dashboard_name - (Required) The name of the dashboard.
dashboard_body - (Required) The detailed information about the dashboard, including what widgets are included and
their location on the dashboard. You can read more about the body structure in the documentation
(https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/CloudWatch-Dashboard-Body-
Structure.html).
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Import
CloudWatch dashboards can be imported using the dashboard_name , e.g.
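For example, assuming a dashboard named my-dashboard managed as aws_cloudwatch_dashboard.main (illustrative names):
$ terraform import aws_cloudwatch_dashboard.main my-dashboard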
Example Usage
Account Access
Organization Access
condition {
key = "aws:PrincipalOrgID"
type = "StringEquals"
value = "${aws_organizations_organization.example.id}"
}
}
Argument Reference
The following arguments are supported:
principal - (Required) The 12-digit AWS account ID that you are permitting to put events to your default event bus.
Specify * to permit any account to put events to your default event bus, optionally limited by condition .
statement_id - (Required) An identifier string for the external account that you are granting permissions to.
action - (Optional) The action that you are enabling the other account to perform. Defaults to events:PutEvents .
condition - (Optional) Configuration block to limit the event bus permissions you are granting to only accounts that
fulfill the condition. Specified below.
condition
key - (Required) Key for the condition. Valid values: aws:PrincipalOrgID .
type - (Required) Type of condition. Valid values: StringEquals .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
CloudWatch Events permissions can be imported using the statement ID, e.g.
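For example, assuming a permission with the statement ID DevAccountAccess managed as aws_cloudwatch_event_permission.DevAccountAccess (illustrative names):
$ terraform import aws_cloudwatch_event_permission.DevAccountAccess DevAccountAccess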
Example Usage
event_pattern = <<PATTERN
{
"detail-type": [
"AWS Console Sign In via CloudTrail"
]
}
PATTERN
}
principals {
type = "Service"
identifiers = ["events.amazonaws.com"]
}
resources = ["${aws_sns_topic.aws_logins.arn}"]
}
}
Argument Reference
The following arguments are supported:
name - (Optional) The rule's name. By default generated by Terraform.
schedule_expression - (Required, if event_pattern isn't specified) The scheduling expression. For example,
cron(0 20 * * ? *) or rate(5 minutes) .
event_pattern - (Required, if schedule_expression isn't specified) The event pattern described as a JSON object. See full
documentation of CloudWatch Events and Event Patterns
(http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CloudWatchEventsandEventPatterns.html)
for details.
role_arn - (Optional) The Amazon Resource Name (ARN) associated with the role that is used for target invocation.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
CloudWatch Event Rules can be imported using the name , e.g.
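For example, assuming a rule named capture-console-sign-in managed as aws_cloudwatch_event_rule.console (illustrative names):
$ terraform import aws_cloudwatch_event_rule.console capture-console-sign-in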
Example Usage
run_command_targets {
key = "tag:Name"
values = ["FooBar"]
}
run_command_targets {
key = "InstanceIds"
values = ["i-162058cd308bffec2"]
}
}
event_pattern = <<PATTERN
{
"source": [
"aws.autoscaling"
],
"detail-type": [
"EC2 Instance Launch Successful",
"EC2 Instance Terminate Successful",
"EC2 Instance Launch Unsuccessful",
"EC2 Instance Terminate Unsuccessful"
]
}
PATTERN
}
principals {
type = "Service"
identifiers = ["events.amazonaws.com"]
}
}
}
condition {
test = "StringEquals"
variable = "ec2:ResourceTag/Terminate"
values = ["*"]
}
}
statement {
effect = "Allow"
actions = ["ssm:SendCommand"]
resources = ["${aws_ssm_document.stop_instance.arn}"]
}
}
content = <<DOC
{
"schemaVersion": "1.2",
"description": "Stop an instance",
"parameters": {
},
"runtimeConfig": {
"aws:runShellScript": {
"properties": [
{
"id": "0.aws:runShellScript",
"runCommand": ["halt"]
}
]
}
}
}
DOC
}
run_command_targets {
key = "tag:Terminate"
values = ["midnight"]
}
}
run_command_targets {
key = "tag:Terminate"
values = ["midnight"]
}
}
Example ECS Run Task with Role and Task Override Usage
assume_role_policy = <<DOC
{
"Version": "2012-10-17",
"Statement": [
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
DOC
}
policy = <<DOC
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "ecs:RunTask",
"Resource": "${replace(aws_ecs_task_definition.task_name.arn, "/:\\d+$/", ":*")}"
}
]
}
DOC
}
ecs_target {
task_count = 1
task_definition_arn = "${aws_ecs_task_definition.task_name.arn}"
}
input = <<DOC
{
"containerOverrides": [
{
"name": "name-of-container-to-override",
"command": ["bin/console", "scheduled-task"]
}
]
}
DOC
}
Argument Reference
Note: In order to have your AWS Lambda function or SNS topic invoked by a CloudWatch Events rule, you
must set up the right permissions using aws_lambda_permission
(https://www.terraform.io/docs/providers/aws/r/lambda_permission.html) or aws_sns_topic.policy
(https://www.terraform.io/docs/providers/aws/r/sns_topic.html#policy). More info here
(https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/resource-based-policies-cwe.html).
rule - (Required) The name of the rule you want to add targets to.
target_id - (Optional) The unique target assignment ID. If missing, will generate a random, unique id.
arn - (Required) The Amazon Resource Name (ARN) of the target.
input_path - (Optional) The value of the JSONPath (http://goessner.net/articles/JsonPath/) that is used for extracting
part of the matched event when passing it to the target.
role_arn - (Optional) The Amazon Resource Name (ARN) of the IAM role to be used for this target when the rule is
triggered. Required if ecs_target is used.
run_command_targets - (Optional) Parameters used when you are using the rule to invoke Amazon EC2 Run
Command. Documented below. A maximum of 5 are allowed.
ecs_target - (Optional) Parameters used when you are using the rule to invoke Amazon ECS Task. Documented
below. A maximum of 1 is allowed.
batch_target - (Optional) Parameters used when you are using the rule to invoke an Amazon Batch Job.
Documented below. A maximum of 1 is allowed.
kinesis_target - (Optional) Parameters used when you are using the rule to invoke an Amazon Kinesis Stream.
Documented below. A maximum of 1 is allowed.
sqs_target - (Optional) Parameters used when you are using the rule to invoke an Amazon SQS Queue.
Documented below. A maximum of 1 is allowed.
input_transformer - (Optional) Parameters used when you are providing a custom input to a target based on
certain event data.
values - (Required) If Key is tag:tag-key , Values is a list of tag values. If Key is InstanceIds , Values is a list of
Amazon EC2 instance IDs.
group - (Optional) Specifies an ECS task group for the task. The maximum length is 255 characters.
launch_type - (Optional) Specifies the launch type on which your task is running. The launch type that you specify
here must match one of the launch type (compatibilities) of the target task. Valid values are EC2 or FARGATE.
network_configuration - (Optional) Use this if the ECS task uses the awsvpc network mode. This specifies the VPC
subnets and security groups associated with the task, and whether a public IP address is to be used. Required if
launch_type is FARGATE because the awsvpc mode is required for Fargate tasks.
platform_version - (Optional) Specifies the platform version for the task. Specify only the numeric portion of the
platform version, such as 1.1.0. This is used only if LaunchType is FARGATE. For more information about valid platform
versions, see AWS Fargate Platform Versions
(http://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html).
task_count - (Optional) The number of tasks to create based on the TaskDefinition. The default is 1.
task_definition_arn - (Required) The ARN of the task definition to use if the event target is an Amazon ECS cluster.
security_groups - (Optional) The security groups associated with the task or service. If you do not specify a security
group, the default security group for the VPC is used.
assign_public_ip - (Optional) Assign a public IP address to the ENI (Fargate launch type only). Valid values are
true or false . Default false .
job_definition - (Required) The ARN or name of the job definition to use if the event target is an AWS Batch job.
This job definition must already exist.
job_name - (Required) The name to use for this execution of the job, if the target is an AWS Batch job.
array_size - (Optional) The size of the array, if this is an array batch job. Valid values are integers between 2 and
10,000.
job_attempts - (Optional) The number of times to attempt to retry, if the job fails. Valid values are 1 to 10.
partition_key_path - (Optional) The JSON path to be extracted from the event and used as the partition key.
input_paths - (Optional) Key value pairs specified in the form of JSONPath (for example, time = $.time)
Import
CloudWatch Event Targets can be imported using the event_rule and target_id separated by / .
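For example, assuming a rule named rule-name with a target ID target-id (illustrative values):
$ terraform import aws_cloudwatch_event_target.test-event-target rule-name/target-id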
Example Usage
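A minimal sketch of a log destination; the name argument and the referenced IAM role and Kinesis stream are assumptions defined elsewhere in the configuration (all names are illustrative):
resource "aws_cloudwatch_log_destination" "test_destination" {
  name       = "test_destination"
  role_arn   = "${aws_iam_role.iam_for_cloudwatch.arn}"
  target_arn = "${aws_kinesis_stream.kinesis_for_cloudwatch.arn}"
}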
Argument Reference
The following arguments are supported:
role_arn - (Required) The ARN of an IAM role that grants Amazon CloudWatch Logs permissions to put data into the
target
target_arn - (Required) The ARN of the target Amazon Kinesis stream or Amazon Lambda resource for the
destination
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arn - The Amazon Resource Name (ARN) specifying the log destination.
Import
CloudWatch Logs destinations can be imported using the name , e.g.
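For example, assuming a destination named test_destination (illustrative):
$ terraform import aws_cloudwatch_log_destination.test_destination test_destination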
Example Usage
principals {
type = "AWS"
identifiers = [
"123456789012",
]
}
actions = [
"logs:PutSubscriptionFilter",
]
resources = [
"${aws_cloudwatch_log_destination.test_destination.arn}",
]
}
}
Argument Reference
The following arguments are supported:
Import
CloudWatch Logs destination policies can be imported using the destination_name , e.g.
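For example, assuming a destination named test_destination (illustrative):
$ terraform import aws_cloudwatch_log_destination_policy.test_destination_policy test_destination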
Example Usage
tags = {
Environment = "production"
Application = "serviceA"
}
}
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The name of the log group. If omitted, Terraform will assign a random, unique
name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts
with name .
retention_in_days - (Optional) Specifies the number of days you want to retain log events in the specified log
group.
kms_key_id - (Optional) The ARN of the KMS Key to use when encrypting log data. Please note, after the AWS KMS
CMK is disassociated from the log group, AWS CloudWatch Logs stops encrypting newly ingested data for the log
group. All previously ingested data remains encrypted, and AWS CloudWatch Logs requires permissions for the CMK
whenever the encrypted data is requested.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arn - The Amazon Resource Name (ARN) specifying the log group.
Import
Cloudwatch Log Groups can be imported using the name , e.g.
$ terraform import aws_cloudwatch_log_group.test_group yada
Resource: aws_cloudwatch_log_metric_filter
Provides a CloudWatch Log Metric Filter resource.
Example Usage
metric_transformation {
name = "EventCount"
namespace = "YourNamespace"
value = "1"
}
}
Argument Reference
The following arguments are supported:
log_group_name - (Required) The name of the log group to associate the metric filter with.
metric_transformation - (Required) A block defining a collection of information needed to define how metric data
gets emitted. See below.
name - (Required) The name of the CloudWatch metric to which the monitored log information should be published
(e.g. ErrorCount )
value - (Required) What to publish to the metric. For example, if you're counting the occurrences of a particular term
like "Error", the value will be "1" for each occurrence. If you're counting the bytes transferred the published value will
be the value in the log event.
default_value - (Optional) The value to emit when a filter pattern does not match a log event.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
resources = ["arn:aws:logs:*"]
principals {
identifiers = ["es.amazonaws.com"]
type = "Service"
}
}
}
resources = ["arn:aws:logs:*:*:log-group:/aws/route53/*"]
principals {
identifiers = ["route53.amazonaws.com"]
type = "Service"
}
}
}
Argument Reference
The following arguments are supported:
policy_document - (Required) Details of the resource policy, including the identity of the principal that is enabled to
put logs to this account. This is formatted as a JSON string. Maximum length of 5120 characters.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
CloudWatch log resource policies can be imported using the policy name, e.g.
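For example, assuming a policy named MyPolicy (illustrative):
$ terraform import aws_cloudwatch_log_resource_policy.MyPolicy MyPolicy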
Example Usage
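A minimal sketch, assuming the parent log group is also managed by Terraform (names are illustrative):
resource "aws_cloudwatch_log_group" "yada" {
  name = "Yada"
}

resource "aws_cloudwatch_log_stream" "foo" {
  name           = "SampleLogStream1234"
  log_group_name = "${aws_cloudwatch_log_group.yada.name}"
}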
Argument Reference
The following arguments are supported:
name - (Required) The name of the log stream. Must not be longer than 512 characters and must not contain :
log_group_name - (Required) The name of the log group under which the log stream is to be created.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arn - The Amazon Resource Name (ARN) specifying the log stream.
Resource: aws_cloudwatch_log_subscription_filter
Provides a CloudWatch Logs subscription filter resource.
Example Usage
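A minimal sketch that subscribes a log group to a Kinesis stream; the name argument and the referenced IAM role and Kinesis stream are assumptions defined elsewhere (all names and the filter pattern are illustrative):
resource "aws_cloudwatch_log_subscription_filter" "test_lambdafunction_logfilter" {
  name            = "test_lambdafunction_logfilter"
  role_arn        = "${aws_iam_role.iam_for_cloudwatch.arn}"
  log_group_name  = "/aws/lambda/example_lambda_name"
  filter_pattern  = "logtype test"
  destination_arn = "${aws_kinesis_stream.test_logstream.arn}"
  distribution    = "Random"
}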
Argument Reference
The following arguments are supported:
destination_arn - (Required) The ARN of the destination to deliver matching log events to. Kinesis stream or
Lambda function ARN.
filter_pattern - (Required) A valid CloudWatch Logs filter pattern for subscribing to a filtered stream of log events.
log_group_name - (Required) The name of the log group to associate the subscription filter with
role_arn - (Optional) The ARN of an IAM role that grants Amazon CloudWatch Logs permissions to deliver ingested
log events to the destination. If you use Lambda as a destination, you should skip this argument and use
aws_lambda_permission resource for granting access from CloudWatch logs to the destination Lambda function.
distribution - (Optional) The method used to distribute log data to the destination. By default log data is grouped
by log stream, but the grouping can be set to random for a more even distribution. This property is only applicable
when the destination is an Amazon Kinesis stream. Valid values are "Random" and "ByLogStream".
Attributes Reference
No extra attributes are exported.
Import
CloudWatch Logs subscription filter can be imported using the log group name and subscription filter name separated by
|.
$ terraform import aws_cloudwatch_log_subscription_filter.test_lambdafunction_logfilter /aws/lambda/examp
le_lambda_name|test_lambdafunction_logfilter
Resource: aws_cloudwatch_metric_alarm
Provides a CloudWatch Metric Alarm resource.
Example Usage
dimensions = {
AutoScalingGroupName = "${aws_autoscaling_group.bar.name}"
}
metric_query {
id = "e1"
expression = "m2/m1*100"
label = "Error Rate"
return_data = "true"
}
metric_query {
id = "m1"
metric {
metric_name = "RequestCount"
namespace = "AWS/ApplicationELB"
period = "120"
stat = "Sum"
unit = "Count"
dimensions = {
LoadBalancer = "app/web"
}
}
}
metric_query {
id = "m2"
metric {
metric_name = "HTTPCode_ELB_5XX_Count"
namespace = "AWS/ApplicationELB"
period = "120"
stat = "Sum"
unit = "Count"
dimensions = {
LoadBalancer = "app/web"
}
}
}
}
NOTE: You cannot create a metric alarm consisting of both statistic and extended_statistic parameters. You
must choose one or the other.
Argument Reference
See related part of AWS Docs
(https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricAlarm.html) for details about valid
values.
alarm_name - (Required) The descriptive name for the alarm. This name must be unique within the user's AWS
account
comparison_operator - (Required) The arithmetic operation to use when comparing the specified Statistic and
Threshold. The specified Statistic value is used as the first operand. Either of the following is supported:
GreaterThanOrEqualToThreshold , GreaterThanThreshold , LessThanThreshold ,
LessThanOrEqualToThreshold .
evaluation_periods - (Required) The number of periods over which data is compared to the specified threshold.
metric_name - (Optional) The name for the alarm's associated metric. See docs for supported metrics
(https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html).
namespace - (Optional) The namespace for the alarm's associated metric. See docs for the list of namespaces
(https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html). See docs for
supported metrics
(https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html).
period - (Optional) The period in seconds over which the specified statistic is applied.
statistic - (Optional) The statistic to apply to the alarm's associated metric. Either of the following is supported:
SampleCount , Average , Sum , Minimum , Maximum
threshold - (Required) The value against which the specified statistic is compared.
actions_enabled - (Optional) Indicates whether or not actions should be executed during any changes to the alarm's
state. Defaults to true .
alarm_actions - (Optional) The list of actions to execute when this alarm transitions into an ALARM state from any
other state. Each action is specified as an Amazon Resource Name (ARN).
datapoints_to_alarm - (Optional) The number of datapoints that must be breaching to trigger the alarm.
dimensions - (Optional) The dimensions for the alarm's associated metric. For the list of available dimensions see the
AWS documentation here
(http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html).
insufficient_data_actions - (Optional) The list of actions to execute when this alarm transitions into an
INSUFFICIENT_DATA state from any other state. Each action is specified as an Amazon Resource Name (ARN).
ok_actions - (Optional) The list of actions to execute when this alarm transitions into an OK state from any other
state. Each action is specified as an Amazon Resource Name (ARN).
extended_statistic - (Optional) The percentile statistic for the metric associated with the alarm. Specify a value
between p0.0 and p100.
treat_missing_data - (Optional) Sets how this alarm is to handle missing data points. The following values are
supported: missing , ignore , breaching and notBreaching . Defaults to missing .
evaluate_low_sample_count_percentiles - (Optional) Used only for alarms based on percentiles. If you specify
ignore , the alarm state will not change during periods with too few data points to be statistically significant. If you
specify evaluate or omit this parameter, the alarm will always be evaluated and possibly change state no matter how
many data points are available. The following values are supported: ignore , and evaluate .
metric_query (Optional) Enables you to create an alarm based on a metric math expression. You may specify at most
20.
NOTE: If you specify at least one metric_query , you may not specify a metric_name , namespace , period or
statistic . If you do not specify a metric_query , you must specify each of these (although you may use
extended_statistic instead of statistic ).
Nested fields
metric_query
id - (Required) A short name used to tie this object to the results in the response. If you are performing math
expressions on this set of data, this name represents that data and can serve as a variable in the mathematical
expression. The valid characters are letters, numbers, and underscore. The first character must be a lowercase letter.
expression - (Optional) The math expression to be performed on the returned data, if this object is performing a
math expression. This expression can use the id of the other metrics to refer to those metrics, and can also use the id
of other expressions to use the result of those expressions. For more information about metric math expressions, see
Metric Math Syntax and Functions in the Amazon CloudWatch User Guide
(https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html#metric-math-syntax).
label - (Optional) A human-readable label for this metric or expression. This is especially useful if this is an
expression, so that you know what the value represents.
return_data (Optional) Specify exactly one metric_query to be true to use that metric_query result as the
alarm.
metric (Optional) The metric to be returned, along with statistics, period, and units. Use this parameter only if this
object is retrieving a metric and not performing a math expression on returned data.
metric
dimensions - (Optional) The dimensions for this metric. For the list of available dimensions see the AWS
documentation here
(http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html).
metric_name - (Required) The name for this metric. See docs for supported metrics
(https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html).
namespace - (Required) The namespace for this metric. See docs for the list of namespaces
(https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html). See docs for
supported metrics
(https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html).
period - (Required) The period in seconds over which the specified stat is applied.
stat - (Required) The statistic to apply to this metric. Either of the following is supported: SampleCount , Average ,
Sum , Minimum , Maximum
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
CloudWatch Metric Alarms can be imported using the alarm_name , e.g.
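For example, assuming an alarm named alarm-12345 (illustrative):
$ terraform import aws_cloudwatch_metric_alarm.test alarm-12345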
Example Usage
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "codebuild.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": [
"*"
],
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterface",
"ec2:DescribeDhcpOptions",
"ec2:DescribeDhcpOptions",
"ec2:DescribeNetworkInterfaces",
"ec2:DeleteNetworkInterface",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeVpcs"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterfacePermission"
],
"Resource": [
"arn:aws:ec2:us-east-1:123456789012:network-interface/*"
],
"Condition": {
"StringEquals": {
"ec2:Subnet": [
"${aws_subnet.example1.arn}",
"${aws_subnet.example2.arn}"
],
"ec2:AuthorizedService": "codebuild.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"${aws_s3_bucket.example.arn}",
"${aws_s3_bucket.example.arn}/*"
]
}
]
}
POLICY
}
artifacts {
type = "NO_ARTIFACTS"
}
cache {
type = "S3"
location = "${aws_s3_bucket.example.bucket}"
}
environment {
compute_type = "BUILD_GENERAL1_SMALL"
image = "aws/codebuild/standard:1.0"
type = "LINUX_CONTAINER"
image_pull_credentials_type = "CODEBUILD"
environment_variable {
name = "SOME_KEY1"
value = "SOME_VALUE1"
}
environment_variable {
name = "SOME_KEY2"
value = "SOME_VALUE2"
type = "PARAMETER_STORE"
}
}
logs_config {
cloudwatch_logs {
group_name = "log-group"
stream_name = "log-stream"
}
s3_logs {
status = "ENABLED"
location = "${aws_s3_bucket.example.id}/build-log"
}
}
source {
type = "GITHUB"
location = "https://github.com/mitchellh/packer.git"
git_clone_depth = 1
}
vpc_config {
vpc_id = "${aws_vpc.example.id}"
subnets = [
"${aws_subnet.example1.id}",
"${aws_subnet.example2.id}",
]
security_group_ids = [
"${aws_security_group.example1.id}",
"${aws_security_gorup.example2.id}",
]
}
tags = {
Environment = "Test"
}
}
artifacts {
type = "NO_ARTIFACTS"
}
cache {
type = "LOCAL"
modes = ["LOCAL_DOCKER_LAYER_CACHE", "LOCAL_SOURCE_CACHE"]
}
environment {
compute_type = "BUILD_GENERAL1_SMALL"
image = "aws/codebuild/standard:1.0"
type = "LINUX_CONTAINER"
image_pull_credentials_type = "CODEBUILD"
environment_variable {
name = "SOME_KEY1"
value = "SOME_VALUE1"
}
}
source {
type = "GITHUB"
location = "https://github.com/mitchellh/packer.git"
git_clone_depth = 1
}
tags = {
Environment = "Test"
}
}
Argument Reference
The following arguments are supported:
artifacts - (Required) Information about the project's build output artifacts. Artifact blocks are documented below.
environment - (Required) Information about the project's build environment. Environment blocks are documented
below.
source - (Required) Information about the project's input source code. Source blocks are documented below.
badge_enabled - (Optional) Generates a publicly-accessible URL for the project's build badge. Available as badge_url
attribute when enabled.
build_timeout - (Optional) How long in minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait until timing out
any related build that does not get marked as completed. The default is 60 minutes.
cache - (Optional) Information about the cache storage for the project. Cache blocks are documented below.
encryption_key - (Optional) The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used
for encrypting the build project's build output artifacts.
logs_config - (Optional) Configuration for the builds to store log data to CloudWatch or S3.
service_role - (Required) The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM)
role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
vpc_config - (Optional) Configuration for the builds to run inside a VPC. VPC config blocks are documented below.
secondary_artifacts - (Optional) A set of secondary artifacts to be used inside the build. Secondary artifacts blocks
are documented below.
secondary_sources - (Optional) A set of secondary sources to be used inside the build. Secondary sources blocks are
documented below.
type - (Required) The build output artifact's type. Valid values for this parameter are: CODEPIPELINE ,
NO_ARTIFACTS or S3 .
artifact_identifier - (Optional) The artifact identifier. Must be the same specified inside AWS CodeBuild
buildspec.
encryption_disabled - (Optional) If set to true, output artifacts will not be encrypted. If type is set to
NO_ARTIFACTS then this value will be ignored. Defaults to false .
override_artifact_name (Optional) If set to true, a name specified in the build spec file overrides the artifact name.
location - (Optional) Information about the build output artifact location. If type is set to CODEPIPELINE or
NO_ARTIFACTS then this value will be ignored. If type is set to S3 , this is the name of the output bucket.
name - (Optional) The name of the project. If type is set to S3 , this is the name of the output artifact object
namespace_type - (Optional) The namespace to use in storing build artifacts. If type is set to S3 , then valid values
for this parameter are: BUILD_ID or NONE .
packaging - (Optional) The type of build output artifact to create. If type is set to S3 , valid values for this parameter
are: NONE or ZIP
path - (Optional) If type is set to S3 , this is the path to the output artifact
type - (Optional) The type of storage that will be used for the AWS CodeBuild project cache. Valid values: NO_CACHE ,
LOCAL , and S3 . Defaults to NO_CACHE .
location - (Required when cache type is S3 ) The location where the AWS CodeBuild project stores cached
resources. For type S3 the value must be a valid S3 bucket name/prefix.
modes - (Required when cache type is LOCAL ) Specifies settings that AWS CodeBuild uses to store and reuse build
dependencies. Valid values: LOCAL_SOURCE_CACHE , LOCAL_DOCKER_LAYER_CACHE , and LOCAL_CUSTOM_CACHE
compute_type - (Required) Information about the compute resources the build project will use. Available values for
this parameter are: BUILD_GENERAL1_SMALL , BUILD_GENERAL1_MEDIUM or BUILD_GENERAL1_LARGE .
BUILD_GENERAL1_SMALL is only valid if type is set to LINUX_CONTAINER
image - (Required) The Docker image to use for this build project. Valid values include Docker images provided by
CodeBuild (https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html) (e.g
aws/codebuild/standard:2.0 ), Docker Hub images (https://hub.docker.com/) (e.g.
hashicorp/terraform:latest ), and full Docker repository URIs such as those for ECR (e.g.
137112412989.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest ).
type - (Required) The type of build environment to use for related builds. Available values are: LINUX_CONTAINER or
WINDOWS_CONTAINER .
image_pull_credentials_type - (Optional) The type of credentials AWS CodeBuild uses to pull images in your build.
Available values for this parameter are CODEBUILD or SERVICE_ROLE . When you use a cross-account or private
registry image, you must use SERVICE_ROLE credentials. When you use an AWS CodeBuild curated image, you must
use CODEBUILD credentials. Defaults to CODEBUILD .
environment_variable - (Optional) A set of environment variables to make available to builds for this build project.
privileged_mode - (Optional) If set to true, enables running the Docker daemon inside a Docker container. Defaults
to false .
certificate - (Optional) The ARN of the S3 bucket, path prefix and object key that contains the PEM-encoded
certificate.
registry_credential - (Optional) Information about credentials for access to a private Docker registry. Registry
Credential config blocks are documented below.
type - (Optional) The type of environment variable. Valid values: PARAMETER_STORE , PLAINTEXT .
cloudwatch_logs - (Optional) Configuration for the builds to store logs to CloudWatch
s3_logs - (Optional) Configuration for the builds to store logs to S3.
status - (Optional) Current status of logs in CloudWatch Logs for a build project. Valid values: ENABLED , DISABLED .
Defaults to ENABLED .
status - (Optional) Current status of logs in S3 for a build project. Valid values: ENABLED , DISABLED . Defaults to
DISABLED .
location - (Optional) The name of the S3 bucket and the path prefix for S3 logs. Must be set if status is ENABLED ,
otherwise it must be empty.
encryption_disabled - (Optional) Set to true if you do not want S3 logs encrypted. Defaults to false .
source supports the following:
type - (Required) The type of repository that contains the source code to be built. Valid values for this parameter are:
CODECOMMIT , CODEPIPELINE , GITHUB , GITHUB_ENTERPRISE , BITBUCKET , S3 or NO_SOURCE .
auth - (Optional) Information about the authorization settings for AWS CodeBuild to access the source code to be
built. Auth blocks are documented below.
buildspec - (Optional) The build spec declaration to use for this build project's related builds. This must be set when
type is NO_SOURCE .
location - (Optional) The location of the source code from git or s3.
report_build_status - (Optional) Set to true to report the status of a build's start and finish to your source
provider. This option is only valid when the type is BITBUCKET or GITHUB .
type - (Required) The authorization type to use. The only valid value is OAUTH
resource - (Optional) The resource value that applies to the specified authorization type.
credential - (Required) The Amazon Resource Name (ARN) or name of credentials created using AWS Secrets
Manager.
credential_provider - (Required) The service that created the credentials to access a private Docker registry. The
valid value, SECRETS_MANAGER, is for AWS Secrets Manager.
type - (Required) The build output artifact's type. Valid values for this parameter are: CODEPIPELINE ,
NO_ARTIFACTS or S3 .
artifact_identifier - (Required) The artifact identifier. Must be the same specified inside AWS CodeBuild
buildspec.
encryption_disabled - (Optional) If set to true, output artifacts will not be encrypted. If type is set to
NO_ARTIFACTS then this value will be ignored. Defaults to false .
override_artifact_name (Optional) If set to true, a name specified in the build spec file overrides the artifact name.
location - (Optional) Information about the build output artifact location. If type is set to CODEPIPELINE or
NO_ARTIFACTS then this value will be ignored. If type is set to S3 , this is the name of the output bucket. If path is
not also specified, then location can also specify the path of the output artifact in the output bucket.
name - (Optional) The name of the project. If type is set to S3 , this is the name of the output artifact object
namespace_type - (Optional) The namespace to use in storing build artifacts. If type is set to S3 , then valid values
for this parameter are: BUILD_ID or NONE .
packaging - (Optional) The type of build output artifact to create. If type is set to S3 , valid values for this parameter
are: NONE or ZIP
path - (Optional) If type is set to S3 , this is the path to the output artifact
type - (Required) The type of repository that contains the source code to be built. Valid values for this parameter are:
CODECOMMIT , CODEPIPELINE , GITHUB , GITHUB_ENTERPRISE , BITBUCKET or S3 .
source_identifier - (Required) The source identifier. Source data will be put inside a folder named as this
parameter inside AWS CodeBuild source directory
auth - (Optional) Information about the authorization settings for AWS CodeBuild to access the source code to be
built. Auth blocks are documented below.
buildspec - (Optional) The build spec declaration to use for this build project's related builds.
location - (Optional) The location of the source code from git or s3.
report_build_status - (Optional) Set to true to report the status of a build's start and finish to your source
provider. This option is only valid when your source provider is GITHUB , BITBUCKET , or GITHUB_ENTERPRISE .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The name (if imported via name ) or ARN (if created via Terraform or imported via ARN) of the CodeBuild project.
Import
CodeBuild Project can be imported using the name , e.g.
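For example, assuming a project named example-project (illustrative):
$ terraform import aws_codebuild_project.example example-project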
Example Usage
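A minimal sketch for a GitHub personal access token; the server_type argument is an assumption not documented above, and the token value is a placeholder (do not commit real tokens):
resource "aws_codebuild_source_credential" "example" {
  auth_type   = "PERSONAL_ACCESS_TOKEN"
  server_type = "GITHUB"
  token       = "example"
}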
Argument Reference
The following arguments are supported:
auth_type - (Required) The type of authentication used to connect to a GitHub, GitHub Enterprise, or Bitbucket
repository. An OAUTH connection is not supported by the API.
token - (Required) For GitHub or GitHub Enterprise , this is the personal access token. For Bitbucket , this is the
app password.
user_name - (Optional) The Bitbucket username when the authType is BASIC_AUTH . This parameter is not valid for
other types of source providers or connections.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Note: The AWS account that Terraform uses to create this resource must have authorized CodeBuild to access
Bitbucket/GitHub's OAuth API in each applicable region. This is a manual step that must be done before creating
webhooks with this resource. If OAuth is not configured, AWS will return an error similar to
ResourceNotFoundException: Could not find access token for server type github. More information can
be found in the CodeBuild User Guide for Bitbucket (https://docs.aws.amazon.com/codebuild/latest/userguide/sample-
bitbucket-pull-request.html) and GitHub (https://docs.aws.amazon.com/codebuild/latest/userguide/sample-github-pull-
request.html).
Note: Further managing the automatically created Bitbucket/GitHub webhook with the
bitbucket_hook / github_repository_webhook resource is only possible with importing that resource after creation
of the aws_codebuild_webhook resource. The CodeBuild API does not ever provide the secret attribute for the
aws_codebuild_webhook resource in this scenario.
filter_group {
filter {
type = "EVENT"
pattern = "PUSH"
}
filter {
type = "HEAD_REF"
pattern = "master"
}
}
}
GitHub Enterprise
When working with GitHub Enterprise (https://enterprise.github.com/) source CodeBuild webhooks, the GHE repository
webhook must be separately managed (e.g. manually or with the github_repository_webhook resource).
More information creating webhooks with GitHub Enterprise can be found in the CodeBuild User Guide
(https://docs.aws.amazon.com/codebuild/latest/userguide/sample-github-enterprise.html).
configuration {
url = "${aws_codebuild_webhook.example.payload_url}"
secret = "${aws_codebuild_webhook.example.secret}"
content_type = "json"
insecure_ssl = false
}
}
Argument Reference
The following arguments are supported:
branch_filter - (Optional) A regular expression used to determine which branches get built. Default is all branches
are built. It is recommended to use filter_group over branch_filter .
filter_group - (Optional) Information about the webhook's trigger. Filter group blocks are documented below.
filter - (Required) A webhook filter for the group. Filter blocks are documented below.
type - (Required) The webhook filter group's type. Valid values for this parameter are: EVENT , BASE_REF ,
HEAD_REF , ACTOR_ACCOUNT_ID , FILE_PATH . At least one filter group must specify EVENT as its type.
pattern - (Required) For a filter that uses EVENT type, a comma-separated string that specifies one event: PUSH ,
PULL_REQUEST_CREATED , PULL_REQUEST_UPDATED , PULL_REQUEST_REOPENED . PULL_REQUEST_MERGED works with
GitHub & GitHub Enterprise only. For a filter that uses any of the other filter types, a regular expression.
exclude_matched_pattern - (Optional) If set to true , the specified filter does not trigger a build. Defaults to
false .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
secret - The secret token of the associated repository. Not returned by the CodeBuild API for all source types.
Note: The secret attribute is only set on resource creation, so if the secret is manually rotated, Terraform will not pick
up the change on subsequent runs. In that case, the webhook resource should be tainted and re-created to get the
secret back in sync.
Import
CodeBuild Webhooks can be imported using the CodeBuild Project name, e.g.
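For example, assuming a CodeBuild project named MyProjectName (illustrative):
$ terraform import aws_codebuild_webhook.example MyProjectName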
NOTE on CodeCommit Availability: CodeCommit is not yet rolled out in all regions - available regions are listed in the
AWS Docs (https://docs.aws.amazon.com/general/latest/gr/rande.html#codecommit_region).
Example Usage
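A minimal sketch (names are illustrative):
resource "aws_codecommit_repository" "test" {
  repository_name = "MyTestRepository"
  description     = "This is the Sample App Repository"
}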
Argument Reference
The following arguments are supported:
repository_name - (Required) The name for the repository. This needs to be less than 100 characters.
description - (Optional) The description of the repository. This needs to be less than 1000 characters.
default_branch - (Optional) The default branch of the repository. The branch specified here needs to exist.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
clone_url_http - The URL to use for cloning the repository over HTTPS.
clone_url_ssh - The URL to use for cloning the repository over SSH.
Import
Codecommit repository can be imported using repository name, e.g.
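For example, assuming an existing repository named ExistingRepo (illustrative):
$ terraform import aws_codecommit_repository.imported ExistingRepo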
NOTE on CodeCommit: CodeCommit is not yet rolled out in all regions - available regions are listed in the AWS Docs
(https://docs.aws.amazon.com/general/latest/gr/rande.html#codecommit_region).
Example Usage
trigger {
name = "all"
events = ["all"]
destination_arn = "${aws_sns_topic.test.arn}"
}
}
Argument Reference
The following arguments are supported:
repository_name - (Required) The name for the repository. This needs to be less than 100 characters.
destination_arn - (Required) The ARN of the resource that is the target for a trigger. For example, the ARN of a topic
in Amazon Simple Notification Service (SNS).
custom_data - (Optional) Any custom data associated with the trigger that will be included in the information sent to
the target of the trigger.
branches - (Optional) The branches that will be included in the trigger configuration. If no branches are specified, the
trigger will apply to all branches.
events - (Required) The repository events that will cause the trigger to run actions in another service, such as sending
a notification through Amazon Simple Notification Service (SNS). If no events are specified, the trigger will run for all
repository events. Event types include: all , updateReference , createReference , deleteReference .
Resource: aws_codedeploy_app
Provides a CodeDeploy application to be used as a basis for deployments.
Example Usage
ECS Application
Lambda Application
Server Application
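A minimal sketch of a Server application; the name argument is an assumption not documented above (illustrative):
resource "aws_codedeploy_app" "example" {
  compute_platform = "Server"
  name             = "example"
}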
Argument Reference
The following arguments are supported:
compute_platform - (Optional) The compute platform can either be ECS , Lambda , or Server . Default is Server .
Attribute Reference
The following attributes are exported:
Import
CodeDeploy Applications can be imported using the name , e.g.
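For example, assuming an application named my-application (illustrative):
$ terraform import aws_codedeploy_app.example my-application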
Example Usage
Server Usage
minimum_healthy_hosts {
type = "HOST_COUNT"
value = 2
}
}
ec2_tag_filter {
key = "filterkey"
type = "KEY_AND_VALUE"
value = "filtervalue"
}
trigger_configuration {
trigger_events = ["DeploymentFailure"]
trigger_name = "foo-trigger"
trigger_target_arn = "foo-topic-arn"
}
auto_rollback_configuration {
enabled = true
events = ["DEPLOYMENT_FAILURE"]
}
alarm_configuration {
alarms = ["my-alarm-name"]
enabled = true
}
}
Lambda Usage
resource "aws_codedeploy_deployment_config" "foo" {
deployment_config_name = "test-deployment-config"
compute_platform = "Lambda"
traffic_routing_config {
type = "TimeBasedLinear"
time_based_linear {
interval = 10
percentage = 10
}
}
}
auto_rollback_configuration {
enabled = true
events = ["DEPLOYMENT_STOP_ON_ALARM"]
}
alarm_configuration {
alarms = ["my-alarm-name"]
enabled = true
}
}
Argument Reference
The following arguments are supported:
compute_platform - (Optional) The compute platform can be Server , Lambda , or ECS . Default is Server .
traffic_routing_config - (Optional) A traffic_routing_config block. Traffic Routing Config is documented below.
value - (Required) The value when the type is FLEET_PERCENT represents the minimum number of healthy instances
as a percentage of the total number of instances in the deployment. If you specify FLEET_PERCENT, at the start of the
deployment, AWS CodeDeploy converts the percentage to the equivalent number of instances and rounds up fractional
instances. When the type is HOST_COUNT , the value represents the minimum number of healthy instances as an
absolute value.
The traffic_routing_config block supports the following:
type - (Optional) Type of traffic routing config. One of TimeBasedCanary , TimeBasedLinear , AllAtOnce .
time_based_canary - (Optional) The time based canary configuration information. If type is TimeBasedLinear ,
use time_based_linear instead.
time_based_linear - (Optional) The time based linear configuration information. If type is TimeBasedCanary , use
time_based_canary instead.
interval - (Optional) The number of minutes between the first and second traffic shifts of a TimeBasedCanary
deployment.
percentage - (Optional) The percentage of traffic to shift in the first increment of a TimeBasedCanary deployment.
interval - (Optional) The number of minutes between each incremental traffic shift of a TimeBasedLinear
deployment.
percentage - (Optional) The percentage of traffic that is shifted at the start of each increment of a TimeBasedLinear
deployment.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
CodeDeploy Deployment Configurations can be imported using the deployment_config_name , e.g.
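For example, assuming a deployment configuration named test-deployment-config (illustrative):
$ terraform import aws_codedeploy_deployment_config.example test-deployment-config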
Example Usage
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "codedeploy.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
ec2_tag_set {
ec2_tag_filter {
key = "filterkey1"
type = "KEY_AND_VALUE"
value = "filtervalue"
}
ec2_tag_filter {
key = "filterkey2"
type = "KEY_AND_VALUE"
value = "filtervalue"
}
}
trigger_configuration {
trigger_events = ["DeploymentFailure"]
trigger_name = "example-trigger"
trigger_target_arn = "${aws_sns_topic.example.arn}"
}
auto_rollback_configuration {
enabled = true
events = ["DEPLOYMENT_FAILURE"]
}
alarm_configuration {
alarms = ["my-alarm-name"]
enabled = true
}
}
auto_rollback_configuration {
enabled = true
events = ["DEPLOYMENT_FAILURE"]
}
blue_green_deployment_config {
deployment_ready_option {
action_on_timeout = "CONTINUE_DEPLOYMENT"
}
terminate_blue_instances_on_deployment_success {
action = "TERMINATE"
termination_wait_time_in_minutes = 5
}
}
deployment_style {
deployment_option = "WITH_TRAFFIC_CONTROL"
deployment_type = "BLUE_GREEN"
}
ecs_service {
cluster_name = "${aws_ecs_cluster.example.name}"
service_name = "${aws_ecs_service.example.name}"
}
load_balancer_info {
target_group_pair_info {
prod_traffic_route {
listener_arns = ["${aws_lb_listener.example.arn}"]
}
target_group {
name = "${aws_lb_target_group.blue.name}"
}
target_group {
name = "${aws_lb_target_group.green.name}"
}
}
}
}
deployment_style {
deployment_option = "WITH_TRAFFIC_CONTROL"
deployment_type = "BLUE_GREEN"
}
load_balancer_info {
elb_info {
name = "${aws_elb.example.name}"
}
}
blue_green_deployment_config {
deployment_ready_option {
action_on_timeout = "STOP_DEPLOYMENT"
wait_time_in_minutes = 60
}
green_fleet_provisioning_option {
action = "DISCOVER_EXISTING"
}
terminate_blue_instances_on_deployment_success {
action = "KEEP_ALIVE"
}
}
}
Argument Reference
The following arguments are supported:
alarm_configuration - (Optional) Configuration block of alarms associated with the deployment group
(documented below).
auto_rollback_configuration - (Optional) Configuration block of the automatic rollback configuration associated
with the deployment group (documented below).
blue_green_deployment_config - (Optional) Configuration block of the blue/green deployment options for a
deployment group (documented below).
deployment_config_name - (Optional) The name of the group's deployment config. The default is
"CodeDeployDefault.OneAtATime".
deployment_style - (Optional) Configuration block of the type of deployment, either in-place or blue/green, you
want to run and whether to route deployment traffic behind a load balancer (documented below).
ec2_tag_filter - (Optional) Tag filters associated with the deployment group. See the AWS docs for details.
ec2_tag_set - (Optional) Configuration block(s) of Tag filters associated with the deployment group, which are also
referred to as tag groups (documented below). See the AWS docs for details.
ecs_service - (Optional) Configuration block(s) of the ECS services for a deployment group (documented below).
load_balancer_info - (Optional) Single configuration block of the load balancer to use in a blue/green deployment
(documented below).
on_premises_instance_tag_filter - (Optional) On premise tag filters associated with the group. See the AWS docs
for details.
trigger_configuration - (Optional) Configuration block(s) of the triggers for the deployment group (documented
below).
alarms - (Optional) A list of alarms configured for the deployment group. A maximum of 10 alarms can be added to a
deployment group.
enabled - (Optional) Indicates whether the alarm configuration is enabled. This option is useful when you want to
temporarily deactivate alarm monitoring for a deployment group without having to add the same alarms again later.
ignore_poll_alarm_failure - (Optional) Indicates whether a deployment should continue if information about the
current state of alarms cannot be retrieved from CloudWatch. The default value is false .
true : The deployment will proceed even if alarm status information can't be retrieved.
false : The deployment will stop if alarm status information can't be retrieved.
enabled - (Optional) Indicates whether a defined automatic rollback configuration is currently enabled for this
Deployment Group. If you enable automatic rollback, you must specify at least one event type.
events - (Optional) The event type or types that trigger a rollback. Supported types are DEPLOYMENT_FAILURE and
DEPLOYMENT_STOP_ON_ALARM .
deployment_ready_option - (Optional) Information about the action to take when newly provisioned instances are
ready to receive traffic in a blue/green deployment (documented below).
green_fleet_provisioning_option - (Optional) Information about how instances are provisioned for a replacement
environment in a blue/green deployment (documented below).
You can configure how traffic is rerouted to instances in a replacement environment in a blue/green deployment.
deployment_ready_option supports the following:
action_on_timeout - (Optional) When to reroute traffic from an original environment to a replacement environment
in a blue/green deployment.
CONTINUE_DEPLOYMENT : Register new instances with the load balancer immediately after the new application
revision is installed on the instances in the replacement environment.
STOP_DEPLOYMENT : Do not register new instances with load balancer unless traffic is rerouted manually. If traffic
is not rerouted manually before the end of the specified wait period, the deployment status is changed to
Stopped.
wait_time_in_minutes - (Optional) The number of minutes to wait before the status of a blue/green deployment
changed to Stopped if rerouting is not started manually. Applies only to the STOP_DEPLOYMENT option for
action_on_timeout .
You can configure how instances will be added to the replacement environment in a blue/green deployment.
green_fleet_provisioning_option supports the following:
COPY_AUTO_SCALING_GROUP : Use settings from a specified Auto Scaling group to define and create instances in
a new Auto Scaling group. Exactly one Auto Scaling group must be specified when selecting
COPY_AUTO_SCALING_GROUP . Use autoscaling_groups to specify the Auto Scaling group.
You can configure how instances in the original environment are terminated when a blue/green deployment is successful.
terminate_blue_instances_on_deployment_success supports the following:
action - (Optional) The action to take on instances in the original environment after a successful blue/green
deployment.
TERMINATE : Instances are terminated after a specified wait time.
KEEP_ALIVE : Instances are left running after they are deregistered from the load balancer and removed from
the deployment group.
termination_wait_time_in_minutes - (Optional) The number of minutes to wait after a successful blue/green
deployment before terminating instances from the original environment.
deployment_option - (Optional) Indicates whether to route deployment traffic behind a load balancer. Valid Values
are WITH_TRAFFIC_CONTROL or WITHOUT_TRAFFIC_CONTROL .
deployment_type - (Optional) Indicates whether to run an in-place deployment or a blue/green deployment. Valid
Values are IN_PLACE or BLUE_GREEN .
type - (Optional) The type of the tag filter, either KEY_ONLY , VALUE_ONLY , or KEY_AND_VALUE .
Multiple occurrences of ec2_tag_filter are allowed, where any instance that matches to at least one of the tag filters is
selected.
elb_info - (Optional) The Classic Elastic Load Balancer to use in a deployment. Conflicts with target_group_info
and target_group_pair_info .
target_group_info - (Optional) The (Application/Network Load Balancer) target group to use in a deployment.
Conflicts with elb_info and target_group_pair_info .
target_group_pair_info - (Optional) The (Application/Network Load Balancer) target group pair to use in a
deployment. Conflicts with elb_info and target_group_info .
name - (Optional) The name of the load balancer that will be used to route traffic from original instances to
replacement instances in a blue/green deployment. For in-place deployments, the name of the load balancer that
instances are deregistered from so they are not serving traffic during a deployment, and then re-registered with after
the deployment completes.
name - (Optional) The name of the target group that instances in the original environment are deregistered from, and
instances in the replacement environment registered with. For in-place deployments, the name of the target group
that instances are deregistered from, so they are not serving traffic during a deployment, and then re-registered with
after the deployment completes.
prod_traffic_route - (Required) Configuration block for the production traffic route (documented below).
target_group - (Required) Configuration blocks for a target group within a target group pair (documented below).
test_traffic_route - (Optional) Configuration block for the test traffic route (documented below).
listener_arns - (Required) List of Amazon Resource Names (ARNs) of the load balancer listeners.
listener_arns - (Required) List of Amazon Resource Names (ARNs) of the load balancer listeners.
type - (Optional) The type of the tag filter, either KEY_ONLY , VALUE_ONLY , or KEY_AND_VALUE .
trigger_events - (Required) The event type or types for which notifications are triggered. Some values that are
supported: DeploymentStart , DeploymentSuccess , DeploymentFailure , DeploymentStop ,
DeploymentRollback , InstanceStart , InstanceSuccess , InstanceFailure . See the CodeDeploy
documentation (http://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-sns-event-notifications-create-
trigger.html) for all possible values.
trigger_target_arn - (Required) The ARN of the SNS topic through which notifications are sent.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
CodeDeploy Deployment Groups can be imported by their app_name , a colon, and deployment_group_name , e.g.
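For example, assuming an application named example-app with a deployment group named example-group (illustrative):
$ terraform import aws_codedeploy_deployment_group.example example-app:example-group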
NOTE on aws_codepipeline : the GITHUB_TOKEN environment variable must be set if the GitHub provider is
specified.
Example Usage
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "codepipeline.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion",
"s3:GetBucketVersioning",
"s3:PutObject"
],
"Resource": [
"${aws_s3_bucket.codepipeline_bucket.arn}",
"${aws_s3_bucket.codepipeline_bucket.arn}/*"
]
},
{
"Effect": "Allow",
"Action": [
"codebuild:BatchGetBuilds",
"codebuild:StartBuild"
],
"Resource": "*"
}
]
}
EOF
}
artifact_store {
location = "${aws_s3_bucket.codepipeline_bucket.bucket}"
type = "S3"
encryption_key {
id = "${data.aws_kms_alias.s3kmskey.arn}"
type = "KMS"
}
}
stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "ThirdParty"
provider = "GitHub"
version = "1"
output_artifacts = ["source_output"]
configuration = {
Owner = "my-organization"
Repo = "test"
Branch = "master"
}
}
}
stage {
name = "Build"
action {
name = "Build"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
input_artifacts = ["source_output"]
output_artifacts = ["build_output"]
version = "1"
configuration = {
ProjectName = "test"
}
}
}
stage {
name = "Deploy"
action {
name = "Deploy"
category = "Deploy"
owner = "AWS"
provider = "CloudFormation"
input_artifacts = ["build_output"]
version = "1"
configuration = {
ActionMode = "REPLACE_ON_FAILURE"
Capabilities = "CAPABILITY_AUTO_EXPAND,CAPABILITY_IAM"
OutputFileName = "CreateStackOutput.json"
StackName = "MyStack"
TemplatePath = "build_output::sam-templated.yaml"
}
}
}
}
Argument Reference
The following arguments are supported:
role_arn - (Required) A service role Amazon Resource Name (ARN) that grants AWS CodePipeline permission to
make calls to AWS services on your behalf.
stage (Minimum of at least two stage blocks is required) A stage block. Stages are documented below.
location - (Required) The location where AWS CodePipeline stores artifacts for a pipeline, such as an S3 bucket.
encryption_key - (Optional) The encryption key block AWS CodePipeline uses to encrypt the data in the artifact
store, such as an AWS Key Management Service (AWS KMS) key. If you don't specify a key, AWS CodePipeline uses the
default key for Amazon Simple Storage Service (Amazon S3). An encryption_key block is documented below.
An encryption_key block supports the following arguments:
action - (Required) The action(s) to include in the stage. De ned as an action block below
category - (Required) A category defines what kind of action can be taken in the stage, and constrains the provider
type for the action. Possible values are Approval , Build , Deploy , Invoke , Source and Test .
owner - (Required) The creator of the action being called. Possible values are AWS , Custom and ThirdParty .
provider - (Required) The provider of the service being called by the action. Valid providers are determined by the
action category. For example, an action in the Deploy category type might have a provider of AWS CodeDeploy, which
would be specified as CodeDeploy.
configuration - (Optional) A Map of the action declaration's configuration. Find out more about configuring action
configurations in the Reference Pipeline Structure documentation
(http://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements).
output_artifacts - (Optional) A list of artifact names to output. Output artifact names must be unique within a
pipeline.
role_arn - (Optional) The ARN of the IAM service role that will perform the declared action. This is assumed through
the roleArn for the pipeline.
Note: The input artifact of an action must exactly match the output artifact declared in a preceding action, but the input
artifact does not have to be the next action in strict sequence from the action that provided the output artifact. Actions
in parallel can declare different output artifacts, which are in turn consumed by different following actions.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
artifact_store {
location = "${aws_s3_bucket.bar.bucket}"
type = "S3"
encryption_key {
id = "${data.aws_kms_alias.s3kmskey.arn}"
type = "KMS"
}
}
stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "ThirdParty"
provider = "GitHub"
version = "1"
output_artifacts = ["test"]
configuration = {
Owner = "my-organization"
Repo = "test"
Branch = "master"
}
}
}
stage {
name = "Build"
action {
name = "Build"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
input_artifacts = ["test"]
version = "1"
configuration = {
ProjectName = "test"
}
}
}
}
}
locals {
webhook_secret = "super-secret"
}
authentication_configuration {
secret_token = "${local.webhook_secret}"
}
filter {
json_path = "$.ref"
match_equals = "refs/heads/{Branch}"
}
}
name = "web"
configuration {
url = "${aws_codepipeline_webhook.bar.url}"
content_type = "json"
insecure_ssl = true
secret = "${local.webhook_secret}"
}
events = ["push"]
}
Argument Reference
The following arguments are supported:
authentication_configuration - (Optional) An auth block. Required for IP and GITHUB_HMAC . Auth blocks are
documented below.
filter (Required) One or more filter blocks. Filter blocks are documented below.
target_action - (Required) The name of the action in a pipeline you want to connect to the webhook. The action
must be from the source (first) stage of the pipeline.
secret_token - (Optional) The shared secret for the GitHub repository webhook. Set this as secret in your
github_repository_webhook 's configuration block. Required for GITHUB_HMAC .
match_equals - (Required) The value to match on (e.g. refs/heads/{Branch} ). See AWS docs
(https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_WebhookFilterRule.html) for details.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
url - The CodePipeline webhook's URL. POST events to this endpoint to trigger the target.
Import
CodePipeline Webhooks can be imported by their ARN, e.g.
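For example (the webhook ARN below is a placeholder):
$ terraform import aws_codepipeline_webhook.example arn:aws:codepipeline:us-west-2:123456789012:webhook:example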
Example Usage
cognito_identity_providers {
client_id = "6lhlkkfbfb4q5kpp90urffae"
provider_name = "cognito-idp.us-east-1.amazonaws.com/us-east-1_Tv0493apJ"
server_side_token_check = false
}
cognito_identity_providers {
client_id = "7kodkvfqfb4qfkp39eurffae"
provider_name = "cognito-idp.us-east-1.amazonaws.com/eu-west-1_Zr231apJu"
server_side_token_check = false
}
supported_login_providers = {
"graph.facebook.com" = "7346241598935552"
"accounts.google.com" = "123456789012.apps.googleusercontent.com"
}
saml_provider_arns = ["${aws_iam_saml_provider.default.arn}"]
openid_connect_provider_arns = ["arn:aws:iam::123456789012:oidc-provider/foo.example.com"]
}
Argument Reference
The Cognito Identity Pool argument layout is a structure composed of several sub-resources - these resources are laid out
below.
allow_unauthenticated_identities (Required) - Whether the identity pool supports unauthenticated logins or not.
developer_provider_name (Optional) - The "domain" by which Cognito will refer to your users. This name acts as a
placeholder that allows your backend and the Cognito service to communicate about the developer provider.
cognito_identity_providers (Optional) - An array of Amazon Cognito Identity user pools and their client IDs.
supported_login_providers (Optional) - Key-Value pairs mapping provider names to provider app IDs.
client_id (Optional) - The client ID for the Amazon Cognito Identity User Pool.
provider_name (Optional) - The provider name for an Amazon Cognito Identity User Pool.
server_side_token_check (Optional) - Whether server-side token validation is enabled for the identity provider’s
token or not.
Attributes Reference
In addition to the arguments, which are exported, the following attributes are exported:
Import
Cognito Identity Pool can be imported using the name, e.g.
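For example (the identity pool ID below is a placeholder in the same format used elsewhere on this page):
$ terraform import aws_cognito_identity_pool.main us-east-1:12345678-dead-beef-cafe-123456790ab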
Example Usage
supported_login_providers = {
"graph.facebook.com" = "7346241598935555"
}
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "cognito-identity.amazonaws.com"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"cognito-identity.amazonaws.com:aud": "${aws_cognito_identity_pool.main.id}"
},
"ForAnyValue:StringLike": {
"cognito-identity.amazonaws.com:amr": "authenticated"
}
}
}
]
}
EOF
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"mobileanalytics:PutEvents",
"cognito-sync:*",
"cognito-sync:*",
"cognito-identity:*"
],
"Resource": [
"*"
]
}
]
}
EOF
}
role_mapping {
identity_provider = "graph.facebook.com"
ambiguous_role_resolution = "AuthenticatedRole"
type = "Rules"
mapping_rule {
claim = "isAdmin"
match_type = "Equals"
role_arn = "${aws_iam_role.authenticated.arn}"
value = "paid"
}
}
roles = {
"authenticated" = "${aws_iam_role.authenticated.arn}"
}
}
Argument Reference
The Cognito Identity Pool Roles Attachment argument layout is a structure composed of several sub-resources - these
resources are laid out below.
roles (Required) - The map of roles associated with this pool. For a given role, the key will be either "authenticated"
or "unauthenticated" and the value will be the Role ARN.
Role Mappings
identity_provider (Required) - A string identifying the identity provider, for example, "graph.facebook.com" or
"cognito-idp.us-east-1.amazonaws.com/us-east-1_abcdefghi:app_client_id".
ambiguous_role_resolution (Optional) - Specifies the action to be taken if either no rules match the claim value for
the Rules type, or there is no cognito:preferred_role claim and there are multiple cognito:roles matches for the Token
type. Required if you specify Token or Rules as the Type.
mapping_rule (Optional) - The Rules Configuration to be used for mapping users to roles. You can specify up to 25
rules per identity provider. Rules are evaluated in order. The first one to match specifies the role.
claim (Required) - The claim name that must be present in the token, for example, "isAdmin" or "paid".
match_type (Required) - The match condition that specifies how closely the claim value in the IdP token must match
Value.
value (Required) - A brief string that the claim must match, for example, "paid" or "yes".
Attributes Reference
In addition to the arguments, which are exported, the following attributes are exported:
roles (Required) - The map of roles associated with this pool. For a given role, the key will be either "authenticated"
or "unauthenticated" and the value will be the Role ARN.
Resource: aws_cognito_identity_provider
Provides a Cognito User Identity Provider resource.
Example Usage
provider_details = {
authorize_scopes = "email"
client_id = "your client_id"
client_secret = "your client_secret"
}
attribute_mapping = {
email = "email"
username = "sub"
}
}
Argument Reference
The following arguments are supported:
provider_type (Required) - The provider type. See AWS API for valid values (https://docs.aws.amazon.com/cognito-
user-identity-pools/latest/APIReference/API_CreateIdentityProvider.html#CognitoUserPools-CreateIdentityProvider-
request-ProviderType)
attribute_mapping (Optional) - The map of attribute mapping of user pool attributes. AttributeMapping in AWS API
documentation (https://docs.aws.amazon.com/cognito-user-identity-
pools/latest/APIReference/API_CreateIdentityProvider.html#CognitoUserPools-CreateIdentityProvider-request-
AttributeMapping)
Example Usage
user_pool_id = "${aws_cognito_user_pool.pool.id}"
}
scope {
scope_name = "sample-scope"
scope_description = "a Sample Scope Description"
}
user_pool_id = "${aws_cognito_user_pool.pool.id}"
}
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to the arguments, which are exported, the following attributes are exported:
scope_identifiers - A list of all scopes configured for this resource server in the format identifier/scope_name.
Import
aws_cognito_resource_server can be imported using its User Pool ID and Identifier, e.g.
Example Usage
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Federated": "cognito-identity.amazonaws.com"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"cognito-identity.amazonaws.com:aud": "us-east-1:12345678-dead-beef-cafe-123456790ab"
},
"ForAnyValue:StringLike": {
"cognito-identity.amazonaws.com:amr": "authenticated"
}
}
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
role_arn - (Optional) The ARN of the IAM role to be associated with the user group.
Import
Cognito User Groups can be imported using the user_pool_id / name attributes concatenated, e.g.
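For example (the user pool ID and group name below are placeholders):
$ terraform import aws_cognito_user_group.group us-east-1_Tv0493apJ/mygroup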
Example Usage
user_pool_id = "${aws_cognito_user_pool.pool.id}"
}
user_pool_id = "${aws_cognito_user_pool.pool.id}"
generate_secret = true
explicit_auth_flows = ["ADMIN_NO_SRP_AUTH"]
}
Argument Reference
The following arguments are supported:
allowed_oauth_flows_user_pool_client - (Optional) Whether the client is allowed to follow the OAuth protocol
when interacting with Cognito user pools.
allowed_oauth_scopes - (Optional) List of allowed OAuth scopes (phone, email, openid, profile, and
aws.cognito.signin.user.admin).
callback_urls - (Optional) List of allowed callback URLs for the identity providers.
default_redirect_uri - (Optional) The default redirect URI. Must be in the list of callback URLs.
logout_urls - (Optional) List of allowed logout URLs for the identity providers.
read_attributes - (Optional) List of user pool attributes the application client can read from.
refresh_token_validity - (Optional) The time limit in days refresh tokens are valid for.
supported_identity_providers - (Optional) List of provider names for the identity providers that are supported on
this client.
write_attributes - (Optional) List of user pool attributes the application client can write to.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Import
Cognito User Pool Clients can be imported using the id of the Cognito User Pool, and the id of the Cognito User Pool
Client, e.g.
Example Usage
Argument Reference
The following arguments are supported:
certificate_arn - (Optional) The ARN of an ISSUED ACM certificate in us-east-1 for a custom domain.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
s3_bucket - The S3 bucket where the static files for this domain are stored.
Import
Cognito User Pool Domains can be imported using the domain , e.g.
Example Usage
Argument Reference
The following arguments are supported:
alias_attributes - (Optional) Attributes supported as an alias for this user pool. Possible values: phone_number,
email, or preferred_username. Conflicts with username_attributes .
auto_verified_attributes - (Optional) The attributes to be auto-verified. Possible values: email, phone_number.
device_configuration (Optional) - The configuration for the user pool's device tracking.
email_verification_subject - (Optional) A string representing the email verification subject. Conflicts with
verification_message_template configuration block email_subject argument.
email_verification_message - (Optional) A string representing the email verification message. Conflicts with
verification_message_template configuration block email_message argument.
lambda_config (Optional) - A container for the AWS Lambda triggers associated with the user pool.
mfa_configuration - (Optional, Default: OFF) Set to enable multi-factor authentication. Must be one of the following
values (ON, OFF, OPTIONAL)
password_policy (Optional) - A container for information about the user pool password policy.
schema (Optional) - A container with the schema attributes of a user pool. Maximum of 50 attributes.
sms_verification_message - (Optional) A string representing the SMS verification message. Conflicts with
verification_message_template configuration block sms_message argument.
username_attributes - (Optional) Specifies whether email addresses or phone numbers can be specified as
usernames when a user signs up. Conflicts with alias_attributes .
user_pool_add_ons - (Optional) Configuration block for user pool add-ons to enable user pool advanced security
mode features.
allow_admin_create_user_only (Optional) - Set to True if only the administrator is allowed to create user profiles.
Set to False if users can sign themselves up via an app.
unused_account_validity_days (Optional) - The user account expiration limit, in days, after which the account is no
longer usable.
email_message (Optional) - The message template for email messages. Must contain {username} and {####}
placeholders, for username and temporary password, respectively.
sms_message (Optional) - The message template for SMS messages. Must contain {username} and {####}
placeholders, for username and temporary password, respectively.
email_sending_account (Optional) - Instructs Cognito to use either its built-in functionality or Amazon SES to send out
emails.
pre_token_generation (Optional) - Allows customization of identity token claims before token generation.
Password Policy
minimum_length (Optional) - The minimum length of the password policy that you have set.
require_lowercase (Optional) - Whether you have required users to use at least one lowercase letter in their
password.
require_numbers (Optional) - Whether you have required users to use at least one number in their password.
require_symbols (Optional) - Whether you have required users to use at least one symbol in their password.
require_uppercase (Optional) - Whether you have required users to use at least one uppercase letter in their
password.
Schema Attributes
NOTE: When defining an attribute_data_type of String or Number , the respective attribute constraints
configuration block (e.g. string_attribute_constraints or number_attribute_constraints ) is required to prevent
recreation of the Terraform resource. This requirement is true for both standard (e.g. name, email) and custom schema
attributes.
attribute_data_type (Required) - The attribute data type. Must be one of Boolean , Number , String , DateTime .
mutable (Optional) - Specifies whether the attribute can be changed once it has been created.
number_attribute_constraints (Optional) - Specifies the constraints for an attribute of the number type.
required (Optional) - Specifies whether a user pool attribute is required. If the attribute is required and the user
does not provide a value, registration or sign-in will fail.
string_attribute_constraints (Optional) - Specifies the constraints for an attribute of the string type.
max_value (Optional) - The maximum value of an attribute that is of the number data type.
min_value (Optional) - The minimum value of an attribute that is of the number data type.
max_length (Optional) - The maximum length of an attribute value of the string type.
min_length (Optional) - The minimum length of an attribute value of the string type.
external_id (Required) - The external ID used in IAM role trust relationships. For more information about using
external IDs, see How to Use an External ID When Granting Access to Your AWS Resources to a Third Party
(http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html).
sns_caller_arn (Required) - The ARN of the Amazon SNS caller. This is usually the IAM role that you've given Cognito
permission to assume.
advanced_security_mode (Required) - The mode for advanced security, must be one of OFF , AUDIT or ENFORCED .
email_message (Optional) - The email message template. Must contain the {####} placeholder. Conflicts with
email_verification_message argument.
email_message_by_link (Optional) - The email message template for sending a confirmation link to the user, it must
contain the {##Click Here##} placeholder.
email_subject (Optional) - The subject line for the email message template. Conflicts with
email_verification_subject argument.
email_subject_by_link (Optional) - The subject line for the email message template for sending a confirmation link
to the user.
sms_message (Optional) - The SMS message template. Must contain the {####} placeholder. Conflicts with
sms_verification_message argument.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
endpoint - The endpoint name of the user pool. Example format: cognito-idp.REGION.amazonaws.com/xxxx_yyyyy
Import
Cognito User Pools can be imported using the id , e.g.
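For example (the user pool ID below is a placeholder):
$ terraform import aws_cognito_user_pool.pool us-east-1_Tv0493apJ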
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Config aggregate authorizations can be imported using account_id:region , e.g.
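For example (the account ID, region, and resource name below are placeholders):
$ terraform import aws_config_aggregate_authorization.example 123456789012:us-east-1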
Example Usage
source {
owner = "AWS"
source_identifier = "S3_BUCKET_VERSIONING_ENABLED"
}
depends_on = ["aws_config_configuration_recorder.foo"]
}
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "config.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
POLICY
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "config:Put*",
"Effect": "Allow",
"Resource": "*"
}
]
}
POLICY
}
Custom Rules
Custom rules can be used by setting the source owner to CUSTOM_LAMBDA and the source identifier to the Amazon
Resource Name (ARN) of the Lambda Function. The AWS Config service must have permissions to invoke the Lambda
Function, e.g. via the aws_lambda_permission resource (/docs/providers/aws/r/lambda_permission.html). More
information about custom rules can be found in the AWS Config Developer Guide
(https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules.html).
source {
owner = "CUSTOM_LAMBDA"
source_identifier = "${aws_lambda_function.example.arn}"
}
Argument Reference
The following arguments are supported:
input_parameters - (Optional) A string in JSON format that is passed to the AWS Config rule Lambda function.
maximum_execution_frequency - (Optional) The maximum frequency with which AWS Config runs evaluations for a
rule.
scope - (Optional) Scope defines which resources can trigger an evaluation for the rule as documented below.
source - (Required) Source specifies the rule owner, the rule identifier, and the notifications that cause the function
to evaluate your AWS resources as documented below.
Defines which resources can trigger an evaluation for the rule. If you do not specify a scope, evaluations are triggered when
any resource in the recording group changes.
compliance_resource_id - (Optional) The ID of the only AWS resource that you want to trigger an evaluation for the
rule. If you specify a resource ID, you must specify one resource type for compliance_resource_types .
compliance_resource_types - (Optional) A list of resource types of only those AWS resources that you want to
trigger an evaluation for the rule, e.g. AWS::EC2::Instance . You can only specify one type if you also specify a
resource ID for compliance_resource_id . See the relevant part of the AWS Docs
(http://docs.aws.amazon.com/config/latest/APIReference/API_ResourceIdentifier.html#config-Type-ResourceIdentifier-
resourceType) for available types.
tag_key - (Optional, Required if tag_value is specified) The tag key that is applied to only those AWS resources that
you want to trigger an evaluation for the rule.
tag_value - (Optional) The tag value applied to only those AWS resources that you want to trigger an evaluation for
the rule.
source
Provides the rule owner (AWS or customer), the rule identifier, and the notifications that cause the function to evaluate your
AWS resources.
owner - (Required) Indicates whether AWS or the customer owns and manages the AWS Config rule. Valid values are
AWS or CUSTOM_LAMBDA . For more information about managed rules, see the AWS Config Managed Rules
documentation (https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-
rules.html). For more information about custom rules, see the AWS Config Custom Rules documentation
(https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules.html). Custom Lambda
Functions require permissions to allow the AWS Config service to invoke them, e.g. via the aws_lambda_permission
resource (/docs/providers/aws/r/lambda_permission.html).
source_identifier - (Required) For AWS Config managed rules, a predefined identifier, e.g. IAM_PASSWORD_POLICY .
For custom Lambda rules, the identifier is the ARN of the Lambda Function, such as arn:aws:lambda:us-east-
1:123456789012:function:custom_rule_name or the arn attribute of the aws_lambda_function resource
(/docs/providers/aws/r/lambda_function.html#arn).
source_detail - (Optional) Provides the source and type of the event that causes AWS Config to evaluate your AWS
resources. Only valid if owner is CUSTOM_LAMBDA .
event_source - (Optional) The source of the event, such as an AWS service, that triggers AWS Config to evaluate
your AWS resources. This defaults to aws.config and is the only valid value.
maximum_execution_frequency - (Optional) The frequency that you want AWS Config to run evaluations for a
rule that is triggered periodically. If specified, requires message_type to be ScheduledNotification .
message_type - (Optional) The type of notification that triggers AWS Config to run an evaluation for a rule. You
can specify the following notification types:
ConfigurationItemChangeNotification - Triggers an evaluation when AWS Config delivers a
configuration item as a result of a resource change.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Config Rule can be imported using the name, e.g.
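For example (the rule name below is a placeholder):
$ terraform import aws_config_config_rule.foo example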
Example Usage
account_aggregation_source {
account_ids = ["123456789012"]
regions = ["us-west-2"]
}
}
name = "example"
organization_aggregation_source {
all_regions = true
role_arn = "${aws_iam_role.organization.arn}"
}
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "config.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
account_aggregation_source - (Optional) The account(s) to aggregate config data from as documented below.
account_aggregation_source
account_ids - (Required) List of 12-digit account IDs of the account(s) being aggregated.
all_regions - (Optional) If true, aggregate existing AWS Config regions and future regions.
organization_aggregation_source
Note: If your source type is an organization, you must be signed in to the master account and all features must be
enabled in your organization. AWS Config calls EnableAwsServiceAccess API to enable integration between AWS Config
and AWS Organizations.
all_regions - (Optional) If true, aggregate existing AWS Config regions and future regions.
role_arn - (Required) ARN of the IAM role used to retrieve AWS Organization details associated with the aggregator
account.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Configuration Aggregators can be imported using the name, e.g.
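For example (the aggregator name below is a placeholder):
$ terraform import aws_config_configuration_aggregator.example example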
Example Usage
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "config.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
POLICY
}
Argument Reference
The following arguments are supported:
name - (Optional) The name of the recorder. Defaults to default . Changing it recreates the resource.
role_arn - (Required) Amazon Resource Name (ARN) of the IAM role used to make read or write requests to the
delivery channel and to describe the AWS resources associated with the account. See AWS Docs
(http://docs.aws.amazon.com/config/latest/developerguide/iamrole-permissions.html) for more details.
all_supported - (Optional) Specifies whether AWS Config records configuration changes for every supported type of
regional resource (which includes any new type that will become supported in the future). Conflicts with
resource_types . Defaults to true .
include_global_resource_types - (Optional) Specifies whether AWS Config includes all supported types of global
resources with the resources that it records. Requires all_supported = true . Conflicts with resource_types .
resource_types - (Optional) A list that specifies the types of AWS resources for which AWS Config records
configuration changes (for example, AWS::EC2::Instance or AWS::CloudTrail::Trail ). See the relevant part of the AWS
Docs (http://docs.aws.amazon.com/config/latest/APIReference/API_ResourceIdentifier.html#config-Type-
ResourceIdentifier-resourceType) for available types.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Configuration Recorder can be imported using the name, e.g.
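For example (the recorder name below is a placeholder):
$ terraform import aws_config_configuration_recorder.foo example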
Example Usage
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "config.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
POLICY
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:*"
],
"Effect": "Allow",
"Resource": [
"${aws_s3_bucket.b.arn}",
"${aws_s3_bucket.b.arn}/*"
]
}
]
}
POLICY
}
Argument Reference
The following arguments are supported:
is_enabled - (Required) Whether the configuration recorder should be enabled or disabled.
Import
Configuration Recorder Status can be imported using the name of the Configuration Recorder, e.g.
Note: Delivery Channel requires a Configuration Recorder (/docs/providers/aws/r/config_configuration_recorder.html) to be present. Use of
depends_on (as shown below) is recommended to avoid race conditions.
Example Usage
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "config.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
POLICY
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:*"
],
"Effect": "Allow",
"Resource": [
"${aws_s3_bucket.b.arn}",
"${aws_s3_bucket.b.arn}/*"
]
}
]
}
POLICY
}
Argument Reference
The following arguments are supported:
name - (Optional) The name of the delivery channel. Defaults to default . Changing it recreates the resource.
s3_bucket_name - (Required) The name of the S3 bucket used to store the configuration history.
sns_topic_arn - (Optional) The ARN of the SNS topic that AWS Config delivers notifications to.
snapshot_delivery_properties - (Optional) Options for how AWS Config delivers configuration snapshots. See below.
snapshot_delivery_properties
delivery_frequency - (Optional) - The frequency with which AWS Config delivers configuration snapshots, e.g. One_Hour or
Three_Hours . Valid values are listed here
(https://docs.aws.amazon.com/config/latest/APIReference/API_ConfigSnapshotDeliveryProperties.html#API_ConfigSnapshotDeliveryProperties_Contents).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Delivery Channel can be imported using the name, e.g.
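For example (the channel name below is a placeholder):
$ terraform import aws_config_delivery_channel.foo example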
NOTE: This resource must be created in the Organization master account and rules will include the master account
unless its ID is added to the excluded_accounts argument.
NOTE: The proper Lambda permission to allow the AWS Config service to invoke the Lambda Function must be in place
before the rule will successfully create or update. See also the aws_lambda_permission resource
(/docs/providers/aws/r/lambda_permission.html).
Example Usage
lambda_function_arn = "${aws_lambda_function.example.arn}"
name = "example"
trigger_types = ["ConfigurationItemChangeNotification"]
}
Argument Reference
The following arguments are supported:
lambda_function_arn - (Required) Amazon Resource Name (ARN) of the rule Lambda Function
trigger_types - (Required) List of notification types that trigger AWS Config to run an evaluation for the rule. Valid
values: ConfigurationItemChangeNotification , OversizedConfigurationItemChangeNotification , and
ScheduledNotification
excluded_accounts - (Optional) List of AWS account identifiers to exclude from the rule
input_parameters - (Optional) A string in JSON format that is passed to the AWS Config Rule Lambda Function
maximum_execution_frequency - (Optional) The maximum frequency with which AWS Config runs evaluations for a
rule, if the rule is triggered at a periodic frequency. Defaults to TwentyFour_Hours for periodic frequency triggered
rules. Valid values: One_Hour , Three_Hours , Six_Hours , Twelve_Hours , or TwentyFour_Hours .
tag_key_scope - (Optional, Required if tag_value_scope is configured) Tag key of AWS resources to evaluate
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_config_organization_custom_rule provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
Import
Config Organization Custom Rules can be imported using the name, e.g.
NOTE: This resource must be created in the Organization master account and rules will include the master account
unless its ID is added to the excluded_accounts argument.
NOTE: Every Organization account except those configured in the excluded_accounts argument must have a
Configuration Recorder with proper IAM permissions before the rule will successfully create or update. See also the
aws_config_configuration_recorder resource (/docs/providers/aws/r/config_configuration_recorder.html).
Example Usage
name = "example"
rule_identifier = "IAM_PASSWORD_POLICY"
}
Argument Reference
The following arguments are supported:
rule_identifier - (Required) Identifier of an available AWS Config Managed Rule to call. For available values, see the
List of AWS Config Managed Rules (https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-
aws-config.html) documentation
excluded_accounts - (Optional) List of AWS account identifiers to exclude from the rule
input_parameters - (Optional) A string in JSON format that is passed to the AWS Config Rule Lambda Function
maximum_execution_frequency - (Optional) The maximum frequency with which AWS Config runs evaluations for a
rule, if the rule is triggered at a periodic frequency. Defaults to TwentyFour_Hours for periodic frequency triggered
rules. Valid values: One_Hour , Three_Hours , Six_Hours , Twelve_Hours , or TwentyFour_Hours .
tag_key_scope - (Optional, Required if tag_value_scope is configured) Tag key of AWS resources to evaluate
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_config_organization_managed_rule provides the following Timeouts
(/docs/configuration/resources.html#timeouts) configuration options:
Import
Config Organization Managed Rules can be imported using the name, e.g.
NOTE: The AWS Cost and Usage Report service is only available in us-east-1 currently.
NOTE: If AWS Organizations is enabled, only the master account can use this resource.
Example Usage
Argument Reference
The following arguments are supported:
report_name - (Required) Unique name for the report. Must start with a number/letter and is case sensitive. Limited
to 256 characters.
time_unit - (Required) The frequency on which report data are measured and displayed. Valid values are: HOURLY,
DAILY.
compression - (Required) Compression format for report. Valid values are: GZIP, ZIP.
additional_artifacts - (Required) A list of additional artifacts. Valid values are: REDSHIFT, QUICKSIGHT.
Import
Report Definitions can be imported using the report_name , e.g.
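For example (the report name below is a placeholder):
$ terraform import aws_cur_report_definition.example_cur_report_definition example-cur-report-definition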
Example Usage
tags = {
Name = "main-customer-gateway"
}
}
Argument Reference
The following arguments are supported:
bgp_asn - (Required) The gateway's Border Gateway Protocol (BGP) Autonomous System Number (ASN).
type - (Required) The type of customer gateway. The only type AWS supports at this time is "ipsec.1".
Attribute Reference
In addition to all arguments above, the following attributes are exported:
bgp_asn - The gateway's Border Gateway Protocol (BGP) Autonomous System Number (ASN).
Import
Customer Gateways can be imported using the id , e.g.
$ terraform import aws_customer_gateway.main cgw-b4dc3961
Resource: aws_datapipeline_pipeline
Provides a Data Pipeline resource.
Example Usage
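A minimal configuration sketch (the pipeline name is a placeholder):
resource "aws_datapipeline_pipeline" "default" {
  # Placeholder name; adjust to your environment.
  name = "tf-pipeline-default"
}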
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Import
aws_datapipeline_pipeline can be imported by using the id (Pipeline ID), e.g.
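For example (the Pipeline ID below is a placeholder):
$ terraform import aws_datapipeline_pipeline.default df-1234567890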
NOTE: One of activation_key or ip_address must be provided for resource creation (agent activation). Neither is
required for resource import. If using ip_address , Terraform must be able to make an HTTP (port 80) GET request to
the specified IP address from where it is running. The agent will turn off that HTTP server after activation.
Example Usage
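A minimal configuration sketch using ip_address activation (the IP address and name are placeholders):
resource "aws_datasync_agent" "example" {
  # Placeholder values; the agent must be reachable on port 80 from where Terraform runs.
  ip_address = "1.2.3.4"
  name       = "example"
}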
Argument Reference
The following arguments are supported:
activation_key - (Optional) DataSync Agent activation key during resource creation. Conflicts with ip_address . If
an ip_address is provided instead, Terraform will retrieve the activation_key as part of the resource creation.
ip_address - (Optional) DataSync Agent IP address to retrieve activation key during resource creation. Conflicts with
activation_key . DataSync Agent must be accessible on port 80 from where Terraform is running.
tags - (Optional) Key-value pairs of resource tags to assign to the DataSync Agent.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_datasync_agent provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration
options:
create - (Default 10m ) How long to wait for agent activation and connection to DataSync.
Import
aws_datasync_agent can be imported by using the DataSync Agent Amazon Resource Name (ARN), e.g.
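For example (the agent ARN below is a placeholder):
$ terraform import aws_datasync_agent.example arn:aws:datasync:us-east-1:123456789012:agent/agent-12345678901234567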
NOTE: The EFS File System must have a mounted EFS Mount Target before creating this resource.
Example Usage
efs_file_system_arn = "${aws_efs_mount_target.example.file_system_arn}"
ec2_config {
security_group_arns = ["${aws_security_group.example.arn}"]
subnet_arn = "${aws_subnet.example.arn}"
}
}
Argument Reference
The following arguments are supported:
ec2_config - (Required) Configuration block containing EC2 configurations for connecting to the EFS File System.
tags - (Optional) Key-value pairs of resource tags to assign to the DataSync Location.
security_group_arns - (Required) List of Amazon Resource Names (ARNs) of the EC2 Security Groups that are
associated with the EFS Mount Target.
subnet_arn - (Required) Amazon Resource Name (ARN) of the EC2 Subnet that is associated with the EFS Mount
Target.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
id - Amazon Resource Name (ARN) of the DataSync Location.
Import
aws_datasync_location_efs can be imported by using the DataSync Task Amazon Resource Name (ARN), e.g.
NOTE: The DataSync Agents must be available before creating this resource.
Example Usage
on_prem_config {
agent_arns = ["${aws_datasync_agent.example.arn}"]
}
}
Argument Reference
The following arguments are supported:
on_prem_config - (Required) Configuration block containing information for connecting to the NFS File System.
server_hostname - (Required) Specifies the IP address or DNS name of the NFS server. The DataSync Agent(s) use
this to mount the NFS server.
subdirectory - (Required) Subdirectory to perform actions as source or destination. Should be exported by the NFS
server.
tags - (Optional) Key-value pairs of resource tags to assign to the DataSync Location.
agent_arns - (Required) List of Amazon Resource Names (ARNs) of the DataSync Agents used to connect to the NFS
server.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
s3_config {
bucket_access_role_arn = "${aws_iam_role.example.arn}"
}
}
Argument Reference
The following arguments are supported:
s3_config - (Required) Configuration block containing information for connecting to S3.
tags - (Optional) Key-value pairs of resource tags to assign to the DataSync Location.
bucket_access_role_arn - (Required) Amazon Resource Names (ARN) of the IAM Role used to connect to the S3
Bucket.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Import
aws_datasync_location_s3 can be imported by using the DataSync Task Amazon Resource Name (ARN), e.g.
$ terraform import aws_datasync_location_s3.example arn:aws:datasync:us-east-1:123456789012:location/loc-
12345678901234567
Resource: aws_datasync_task
Manages an AWS DataSync Task, which represents a configuration for synchronization. Starting an execution of these
DataSync Tasks (actually synchronizing files) is performed outside of this Terraform resource.
Example Usage
options {
bytes_per_second = -1
}
}
Argument Reference
The following arguments are supported:
cloudwatch_log_group_arn - (Optional) Amazon Resource Name (ARN) of the CloudWatch Log Group that is used to
monitor and log events in the sync task.
options - (Optional) Configuration block containing options that control the default behavior when you start an
execution of this DataSync Task. For each individual task execution, you can override these options by specifying an
overriding configuration in those executions.
tags - (Optional) Key-value pairs of resource tags to assign to the DataSync Task.
NOTE: If atime is set to BEST_EFFORT , mtime must be set to PRESERVE . If atime is set to NONE , mtime must be set
to NONE .
The following arguments are supported inside the options configuration block:
atime - (Optional) A file metadata that shows the last time a file was accessed (that is, when the file was read or
written to). If set to BEST_EFFORT , the DataSync Task attempts to preserve the original (that is, the version before sync
PREPARING phase) atime attribute on all source files. Valid values: BEST_EFFORT , NONE . Default: BEST_EFFORT .
bytes_per_second - (Optional) Limits the bandwidth utilized. For example, to set a maximum of 1 MB, set this value
to 1048576 . Valid values: -1 or greater. Default: -1 (unlimited).
gid - (Optional) Group identifier of the file's owners. Valid values: BOTH , INT_VALUE , NAME , NONE . Default:
INT_VALUE (preserve integer value of the ID).
mtime - (Optional) A file metadata that indicates the last time a file was modified (written to) before the sync
PREPARING phase. Valid values: NONE , PRESERVE . Default: PRESERVE .
posix_permissions - (Optional) Determines which users or groups can access a file for a specific purpose such as
reading, writing, or execution of the file. Valid values: BEST_EFFORT , NONE , PRESERVE . Default: PRESERVE .
preserve_deleted_files - (Optional) Whether files deleted in the source should be removed or preserved in the
destination file system. Valid values: PRESERVE , REMOVE . Default: PRESERVE .
preserve_devices - (Optional) Whether the DataSync Task should preserve the metadata of block and character
devices in the source file system, and recreate the files with that device name and metadata on the destination. The
DataSync Task can't sync the actual contents of such devices, because many of the devices are non-terminal and don't
return an end-of-file (EOF) marker. Valid values: NONE , PRESERVE . Default: NONE (ignore special devices).
uid - (Optional) User identifier of the file's owners. Valid values: BOTH , INT_VALUE , NAME , NONE . Default:
INT_VALUE (preserve integer value of the ID).
verify_mode - (Optional) Whether a data integrity verification should be performed at the end of a task execution
after all data and metadata have been transferred. Valid values: NONE , POINT_IN_TIME_CONSISTENT . Default:
POINT_IN_TIME_CONSISTENT .
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_datasync_task provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration
options:
Import
aws_datasync_task can be imported by using the DataSync Task Amazon Resource Name (ARN), e.g.
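For example (the task ARN below is a placeholder):
$ terraform import aws_datasync_task.example arn:aws:datasync:us-east-1:123456789012:task/task-12345678901234567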
Example Usage
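A minimal configuration sketch (the cluster name, IAM role reference, and node type are placeholders):
resource "aws_dax_cluster" "bar" {
  # Placeholder values; adjust the role data source and node type to your environment.
  cluster_name       = "cluster-example"
  iam_role_arn       = "${data.aws_iam_role.example.arn}"
  node_type          = "dax.r4.large"
  replication_factor = 1
}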
Argument Reference
The following arguments are supported:
cluster_name – (Required) Group identifier. DAX converts this name to lowercase
iam_role_arn - (Required) A valid Amazon Resource Name (ARN) that identifies an IAM role. At runtime, DAX will
assume this role and use the role's permissions to access DynamoDB on your behalf
node_type – (Required) The compute and memory capacity of the nodes. See Nodes
(http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.concepts.cluster.html#DAX.concepts.nodes)
for supported node types
replication_factor – (Required) The number of nodes in the DAX cluster. A replication factor of 1 will create a
single-node cluster, without any read replicas
availability_zones - (Optional) List of Availability Zones in which the nodes will be created
notification_topic_arn – (Optional) An Amazon Resource Name (ARN) of an SNS topic to send DAX notifications to.
Example: arn:aws:sns:us-east-1:012345678999:my_sns_topic
parameter_group_name – (Optional) Name of the parameter group to associate with this DAX cluster
maintenance_window – (Optional) Specifies the weekly time range for when maintenance on the cluster is performed.
The format is ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60 minute period.
Example: sun:05:00-sun:09:00
security_group_ids – (Optional) One or more VPC security groups associated with the cluster
subnet_group_name – (Optional) Name of the subnet group to be used for the cluster
Attributes Reference
In addition to all arguments above, the following attributes are exported:
nodes - List of node objects including id , address , port and availability_zone . Referenceable e.g. as
${aws_dax_cluster.test.nodes.0.address}
configuration_endpoint - The configuration endpoint for this DAX cluster, consisting of a DNS name and a port
number
cluster_address - The DNS name of the DAX cluster without the port appended
Timeouts
aws_dax_cluster provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
Import
DAX Clusters can be imported using the cluster_id , e.g.
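For example (the cluster ID below is a placeholder):
$ terraform import aws_dax_cluster.my_cluster my_cluster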
Example Usage
parameters {
name = "query-ttl-millis"
value = "100000"
}
parameters {
name = "record-ttl-millis"
value = "100000"
}
}
Argument Reference
The following arguments are supported:
parameters
parameters supports the following:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
DAX Parameter Group can be imported using the name , e.g.
Example Usage
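A minimal configuration sketch (the group name and subnet references are placeholders):
resource "aws_dax_subnet_group" "example" {
  # Placeholder subnet references; point these at existing aws_subnet resources.
  name       = "example"
  subnet_ids = ["${aws_subnet.example1.id}", "${aws_subnet.example2.id}"]
}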
Argument Reference
The following arguments are supported:
subnet_ids – (Required) A list of VPC subnet IDs for the subnet group.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
DAX Subnet Group can be imported using the name , e.g.
Example Usage
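A minimal configuration sketch (the cluster reference and snapshot name are placeholders; db_cluster_snapshot_identifier is assumed to be the snapshot's required name argument):
resource "aws_db_cluster_snapshot" "example" {
  # Placeholder values; point db_cluster_identifier at an existing aws_rds_cluster.
  db_cluster_identifier          = "${aws_rds_cluster.example.id}"
  db_cluster_snapshot_identifier = "resourcetestsnapshot1234"
}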
Argument Reference
The following arguments are supported:
db_cluster_identifier - (Required) The DB Cluster Identifier from which to take the snapshot.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
availability_zones - List of EC2 Availability Zones that instances in the DB cluster snapshot can be restored in.
db_cluster_snapshot_arn - The Amazon Resource Name (ARN) for the DB Cluster Snapshot.
kms_key_id - If storage_encrypted is true, the AWS KMS key identifier for the encrypted DB cluster snapshot.
port - Port that the DB cluster was listening on at the time of the snapshot.
source_db_cluster_snapshot_identifier - The DB Cluster Snapshot ARN that the DB Cluster Snapshot was copied
from. It only has a value in the case of a cross-customer or cross-region copy.
create - (Default 20m ) How long to wait for the snapshot to be available.
Import
aws_db_cluster_snapshot can be imported by using the cluster snapshot identi er, e.g.
Example Usage
source_type = "db-instance"
source_ids = ["${aws_db_instance.default.id}"]
event_categories = [
"availability",
"deletion",
"failover",
"failure",
"low storage",
"maintenance",
"notification",
"read replica",
"recovery",
"restoration",
]
}
Argument Reference
The following arguments are supported:
name - (Optional) The name of the DB event subscription. By default generated by Terraform.
name_prefix - (Optional) The name of the DB event subscription. Conflicts with name .
sns_topic - (Required) The SNS topic to send events to.
source_ids - (Optional) A list of identifiers of the event sources for which events will be returned. If not specified,
then all sources are included in the response. If specified, a source_type must also be specified.
source_type - (Optional) The type of source that will be generating the events. Valid options are db-instance , db-
security-group , db-parameter-group , db-snapshot , db-cluster or db-cluster-snapshot . If not set, all
sources will be subscribed to.
event_categories - (Optional) A list of event categories for a SourceType that you want to subscribe to. See
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html
(http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html) or run aws rds describe-event-
categories .
Attributes
The following additional attributes are provided:
arn - The Amazon Resource Name of the RDS event notification subscription
customer_aws_id - The AWS customer account associated with the RDS event notification subscription
Timeouts
aws_db_event_subscription provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
create - (Default 40m ) How long to wait for a RDS event notification subscription to be ready.
delete - (Default 40m ) How long to wait for a RDS event notification subscription to be deleted.
update - (Default 40m ) How long to wait for a RDS event notification subscription to be updated.
Import
DB Event Subscriptions can be imported using the name , e.g.
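For example (the subscription name below is a placeholder):
$ terraform import aws_db_event_subscription.default rds-event-sub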
Changes to a DB instance can occur when you manually change a parameter, such as allocated_storage , and are reflected in the next
maintenance window. Because of this, Terraform may report a difference in its planning phase because a modification has not yet taken place.
You can use the apply_immediately flag to instruct the service to apply the change immediately (see documentation below).
When upgrading the major version of an engine, allow_major_version_upgrade must be set to true .
Note: using apply_immediately can result in a brief downtime as the server reboots. See the AWS Docs on RDS Maintenance
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html) for more information.
Note: All arguments including the username and password will be stored in the raw state as plain-text. Read more about sensitive data in
state (/docs/state/sensitive-data.html).
Example Usage
Basic Usage
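A minimal MySQL configuration sketch (all values are placeholders; see the argument descriptions below):
resource "aws_db_instance" "default" {
  # Placeholder values; choose engine, instance class, and credentials for your environment.
  allocated_storage    = 20
  storage_type         = "gp2"
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t2.micro"
  name                 = "mydb"
  username             = "foo"
  password             = "foobarbaz"
  parameter_group_name = "default.mysql5.7"
}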
Storage Autoscaling
To enable Storage Autoscaling with instances that support the feature, define the max_allocated_storage argument higher than the
allocated_storage argument. Terraform will automatically hide differences with the allocated_storage argument value if autoscaling
occurs.
allocated_storage = 50
max_allocated_storage = 100
}
Argument Reference
For more detailed documentation about each argument, refer to the AWS official documentation
(http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html).
allow_major_version_upgrade - (Optional) Indicates that major version upgrades are allowed. Changing this parameter does not result in
an outage and the change is asynchronously applied as soon as possible.
apply_immediately - (Optional) Specifies whether any database modifications are applied immediately, or during the next maintenance
window. Default is false . See Amazon RDS Documentation for more information.
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html)
auto_minor_version_upgrade - (Optional) Indicates that minor engine upgrades will be applied automatically to the DB instance during
the maintenance window. Defaults to true.
backup_retention_period - (Optional) The days to retain backups for. Must be between 0 and 35 . Must be greater than 0 if the
database is used as a source for a Read Replica. See Read Replica
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Replication.html).
backup_window - (Optional) The daily time range (in UTC) during which automated backups are created if they are enabled. Example:
"09:46-10:16". Must not overlap with maintenance_window .
character_set_name - (Optional) The character set name to use for DB encoding in Oracle instances. This can't be changed. See Oracle
Character Sets Supported in Amazon RDS
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.OracleCharacterSets.html) for more information.
copy_tags_to_snapshot – (Optional, boolean) Copy all Instance tags to snapshots. Default is false .
deletion_protection - (Optional) If the DB instance should have deletion protection enabled. The database can't be deleted when this
value is set to true . The default is false .
domain - (Optional) The ID of the Directory Service Active Directory domain to create the instance in.
domain_iam_role_name - (Optional, but required if domain is provided) The name of the IAM role to be used when making API calls to the
Directory Service.
enabled_cloudwatch_logs_exports - (Optional) List of log types to enable for exporting to CloudWatch logs. If omitted, no logs will be
exported. Valid values (depending on engine ): alert , audit , error , general , listener , slowquery , trace , postgresql
(PostgreSQL), upgrade (PostgreSQL).
engine - (Required unless a snapshot_identifier or replicate_source_db is provided) The database engine to use. For supported
values, see the Engine parameter in API action CreateDBInstance
(https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html). Note that for Amazon Aurora instances the
engine must match the DB cluster (/docs/providers/aws/r/rds_cluster.html)'s engine. For information on the difference between the
available Aurora MySQL engines see Comparison between Aurora MySQL 1 and Aurora MySQL 2
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Updates.20180206.html) in the Amazon RDS User Guide.
engine_version - (Optional) The engine version to use. If auto_minor_version_upgrade is enabled, you can provide a prefix of the
version such as 5.7 (for 5.7.10 ) and this attribute will ignore differences in the patch version automatically (e.g. 5.7.17 ). For supported
values, see the EngineVersion parameter in API action CreateDBInstance
(https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html). Note that for Amazon Aurora instances the
engine version must match the DB cluster (/docs/providers/aws/r/rds_cluster.html)'s engine version.
final_snapshot_identifier - (Optional) The name of your final DB snapshot when this DB instance is deleted. Must be provided if
skip_final_snapshot is set to false .
iam_database_authentication_enabled - (Optional) Specifies whether mappings of AWS Identity and Access Management (IAM)
accounts to database accounts are enabled.
identifier - (Optional, Forces new resource) The name of the RDS instance; if omitted, Terraform will assign a random, unique identifier.
identifier_prefix - (Optional, Forces new resource) Creates a unique identifier beginning with the specified prefix. Conflicts with
identifier .
iops - (Optional) The amount of provisioned IOPS. Setting this implies a storage_type of "io1".
kms_key_id - (Optional) The ARN for the KMS encryption key. If creating an encrypted replica, set this to the destination KMS ARN.
license_model - (Optional, but required for some DB engines, i.e. Oracle SE1) License model information for this DB instance.
maintenance_window - (Optional) The window to perform maintenance in. Syntax: "ddd:hh24:mi-ddd:hh24:mi". Eg: "Mon:00:00-Mon:03:00".
See RDS Maintenance Window docs
(http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html#AdjustingTheMaintenanceWindow)
for more information.
max_allocated_storage - (Optional) When configured, the upper limit to which Amazon RDS can automatically scale the storage of the DB
instance. Configuring this will automatically ignore differences to allocated_storage . Must be greater than or equal to
allocated_storage or 0 to disable Storage Autoscaling.
monitoring_interval - (Optional) The interval, in seconds, between points when Enhanced Monitoring metrics are collected for the DB
instance. To disable collecting Enhanced Monitoring metrics, specify 0. The default is 0. Valid Values: 0, 1, 5, 10, 15, 30, 60.
monitoring_role_arn - (Optional) The ARN for the IAM role that permits RDS to send enhanced monitoring metrics to CloudWatch Logs.
You can find more information about the IAM permissions needed to allow Enhanced Monitoring for RDS Instances in the AWS Documentation
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.html).
name - (Optional) The name of the database to create when the DB instance is created. If this parameter is not specified, no database is
created in the DB instance. Note that this does not apply for Oracle or SQL Server engines. See the AWS documentation
(http://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) for more details on what applies for those engines.
password - (Required unless a snapshot_identifier or replicate_source_db is provided) Password for the master DB user. Note that
this may show up in logs, and it will be stored in the state file.
replicate_source_db - (Optional) Specifies that this resource is a Replicate database, and to use this value as the source database. This
correlates to the identifier of another Amazon RDS Database to replicate. Note that if you are creating a cross-region replica of an
encrypted database you will also need to specify a kms_key_id . See DB Instance Replication
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Replication.html) and Working with PostgreSQL and MySQL Read
Replicas (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html) for more information on using Replication.
security_group_names - (Optional/Deprecated) List of DB Security Groups to associate. Only used for DB Instances on the EC2-Classic
Platform (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html#USER_VPC.FindDefaultVPC).
skip_final_snapshot - (Optional) Determines whether a final DB snapshot is created before the DB instance is deleted. If true is specified,
no DBSnapshot is created. If false is specified, a DB snapshot is created before the DB instance is deleted, using the value from
final_snapshot_identifier . Default is false .
snapshot_identifier - (Optional) Specifies whether or not to create this database from a snapshot. This correlates to the snapshot ID
you'd find in the RDS console, e.g: rds:production-2015-06-26-06-05.
storage_encrypted - (Optional) Specifies whether the DB instance is encrypted. Note that if you are creating a cross-region read replica
this field is ignored and you should instead declare kms_key_id with a valid ARN. The default is false if not specified.
storage_type - (Optional) One of "standard" (magnetic), "gp2" (general purpose SSD), or "io1" (provisioned IOPS SSD). The default is "io1" if
iops is specified, "gp2" if not.
timezone - (Optional) Time zone of the DB instance. timezone is currently only supported by Microsoft SQL Server. The timezone can
only be set on creation. See MSSQL User Guide
(http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html#SQLServer.Concepts.General.TimeZone) for more
information.
username - (Required unless a snapshot_identifier or replicate_source_db is provided) Username for the master DB user.
s3_import - (Optional) Restore from a Percona Xtrabackup in S3. See Importing Data into an Amazon RDS MySQL DB Instance
(http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.html)
performance_insights_enabled - (Optional) Specifies whether Performance Insights are enabled. Defaults to false.
performance_insights_kms_key_id - (Optional) The ARN for the KMS key to encrypt Performance Insights data. When specifying
performance_insights_kms_key_id , performance_insights_enabled needs to be set to true. Once KMS key is set, it can never be
changed.
performance_insights_retention_period - (Optional) The amount of time in days to retain Performance Insights data. Either 7 (7 days)
or 731 (2 years). When specifying performance_insights_retention_period , performance_insights_enabled needs to be set to true.
Defaults to '7'.
NOTE: Removing the replicate_source_db attribute from an existing RDS Replicate database managed by Terraform will promote the
database to a fully standalone database.
S3 Import Options
Full details on the core parameters and impacts are in the API Docs: RestoreDBInstanceFromS3
(http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromS3.html). A sample s3_import block is shown after the argument list below.
source_engine - (Required, as of Feb 2018 only 'mysql' supported) Source engine for the backup
source_engine_version - (Required, as of Feb 2018 only '5.6' supported) Version of the source engine used to make the backup
This will not recreate the resource if the S3 object changes in some way. It's only used to initialize the database
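The following is a minimal sketch of an s3_import block. The bucket_name , bucket_prefix and ingestion_role arguments and all values shown are illustrative assumptions, not taken from the argument list above.
resource "aws_db_instance" "db" {
  # ... other aws_db_instance arguments ...

  s3_import {
    source_engine         = "mysql"
    source_engine_version = "5.6"
    bucket_name           = "mybucket"   # illustrative bucket name
    bucket_prefix         = "backups"    # illustrative prefix
    ingestion_role        = "arn:aws:iam::123456789012:role/xtrabackup-restore" # illustrative role ARN
  }
}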
Timeouts
aws_db_instance provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
create - (Default 40 minutes ) Used for Creating Instances, Replicas, and restoring from Snapshots.
delete - (Default 40 minutes ) Used for destroying databases. This includes the time required to take snapshots.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
address - The hostname of the RDS instance. See also endpoint and port .
ca_cert_identifier - Specifies the identifier of the CA certificate for the DB instance.
domain - The ID of the Directory Service Active Directory domain the instance is joined to
domain_iam_role_name - The name of the IAM role to be used when making API calls to the Directory Service.
hosted_zone_id - The canonical hosted zone ID of the DB instance (to be used in a Route 53 Alias record).
Resource: aws_db_instance_role_association
Manages an RDS DB Instance association with an IAM Role.
Example Usage
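A minimal sketch is shown below. The db_instance_identifier argument and the referenced aws_db_instance and aws_iam_role resources are assumptions for illustration; the S3_INTEGRATION feature name is likewise only an example value.
resource "aws_db_instance_role_association" "example" {
  db_instance_identifier = "${aws_db_instance.example.id}"  # assumed instance reference
  feature_name           = "S3_INTEGRATION"                 # illustrative feature name
  role_arn               = "${aws_iam_role.example.arn}"    # assumed role reference
}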
Argument Reference
The following arguments are supported:
feature_name - (Required) Name of the feature for association. This can be found in the AWS documentation
relevant to the integration or a full list is available in the SupportedFeatureNames list returned by AWS CLI rds
describe-db-engine-versions (https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-engine-versions.html).
role_arn - (Required) Amazon Resource Name (ARN) of the IAM Role to associate with the DB Instance.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
aws_db_instance_role_association can be imported using the DB Instance Identifier and IAM Role ARN separated by a
comma ( , ), e.g.
$ terraform import aws_db_instance_role_association.example my-db-instance,arn:aws:iam::123456789012:role/my-role
Resource: aws_db_option_group
Provides an RDS DB option group resource. Documentation of the available options for various RDS engines can be found at:
MariaDB Options (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.MariaDB.Options.html)
Microsoft SQL Server Options (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.html)
MySQL Options (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.MySQL.Options.html)
Oracle Options (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.html)
Example Usage
resource "aws_db_option_group" "example" {
  # example name and engine values are illustrative
  name                     = "option-group-test-terraform"
  option_group_description = "Terraform Option Group"
  engine_name              = "sqlserver-ee"
  major_engine_version     = "11.00"

  option {
option_name = "Timezone"
option_settings {
name = "TIME_ZONE"
value = "UTC"
}
}
option {
option_name = "SQLSERVER_BACKUP_RESTORE"
option_settings {
name = "IAM_ROLE_ARN"
value = "${aws_iam_role.example.arn}"
}
}
option {
option_name = "TDE"
}
}
Note: Any modifications to the db_option_group are set to happen immediately as we default to applying
immediately.
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The name of the option group. If omitted, Terraform will assign a random,
unique name. Must be lowercase, to match as it is stored in AWS.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts
with name . Must be lowercase, to match as it is stored in AWS.
option_group_description - (Optional) The description of the option group. Defaults to "Managed by Terraform".
engine_name - (Required) Specifies the name of the engine that this option group should be associated with.
major_engine_version - (Required) Specifies the major version of the engine that this option group should be
associated with.
port - (Optional) The Port number when connecting to the Option (e.g. 11211).
db_security_group_memberships - (Optional) A list of DB Security Groups for which the option is enabled.
vpc_security_group_memberships - (Optional) A list of VPC Security Groups for which the option is enabled.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_db_option_group provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration
options:
Import
DB Option groups can be imported using the name , e.g.
Example Usage
resource "aws_db_parameter_group" "default" {
  # example name and family values are illustrative
  name   = "rds-pg"
  family = "mysql5.6"

  parameter {
name = "character_set_server"
value = "utf8"
}
parameter {
name = "character_set_client"
value = "utf8"
}
}
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The name of the DB parameter group. If omitted, Terraform will assign a random, unique name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with name .
description - (Optional) The description of the DB parameter group. Defaults to "Managed by Terraform".
parameter - (Optional) A list of DB parameters to apply. Note that parameters may differ from one family to another. The full list of parameters can be
discovered via aws rds describe-db-parameters (https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-parameters.html) after initial
creation of the group.
apply_method - (Optional) "immediate" (default), or "pending-reboot". Some engines can't apply some parameters without a reboot, and you will need to
specify "pending-reboot" here.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
DB Parameter groups can be imported using the name , e.g.
Example Usage
resource "aws_db_security_group" "default" {
  # example name is illustrative
  name = "rds_sg"

  ingress {
cidr = "10.0.0.0/24"
}
}
Argument Reference
The following arguments are supported:
description - (Optional) The description of the DB security group. Defaults to "Managed by Terraform".
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
DB Security groups can be imported using the name , e.g.
Example Usage
maintenance_window = "Fri:09:00-Fri:09:30"
backup_retention_period = 0
parameter_group_name = "default.mysql5.6"
}
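The fragment above shows only the tail of the source aws_db_instance configuration. A minimal sketch of the snapshot resource itself, with assumed resource names and an illustrative snapshot identifier, might look like:
resource "aws_db_snapshot" "example" {
  db_instance_identifier = "${aws_db_instance.bar.id}"   # assumed source instance reference
  db_snapshot_identifier = "testsnapshot1234"            # illustrative snapshot name
}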
Argument Reference
The following arguments are supported:
db_instance_identifier - (Required) The DB Instance Identifier from which to take the snapshot.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
availability_zone - Specifies the name of the Availability Zone the DB instance was located in at the time of the DB
snapshot.
iops - Specifies the Provisioned IOPS (I/O operations per second) value of the DB instance at the time of the
snapshot.
source_db_snapshot_identifier - The ARN of the DB snapshot that this snapshot was copied from. Only set for
cross-customer or cross-region copies.
source_region - The region that the DB snapshot was created in or copied from.
Example Usage
resource "aws_db_subnet_group" "default" {
  # example name and subnet references are illustrative
  name       = "main"
  subnet_ids = ["${aws_subnet.frontend.id}", "${aws_subnet.backend.id}"]
  tags = {
Name = "My DB subnet group"
}
}
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The name of the DB subnet group. If omitted, Terraform will assign a random,
unique name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts
with name .
description - (Optional) The description of the DB subnet group. Defaults to "Managed by Terraform".
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
DB Subnet groups can be imported using the name , e.g.
Each VPC created in AWS comes with a Default Network ACL that can be managed, but not destroyed. This is an advanced
resource, and has special caveats to be aware of when using it. Please read this document in its entirety before using this
resource.
The aws_default_network_acl behaves differently from normal resources, in that Terraform does not create this
resource, but instead attempts to "adopt" it into management. We can do this because each VPC created has a Default
Network ACL that cannot be destroyed, and is created with a known set of default rules.
When Terraform first adopts the Default Network ACL, it immediately removes all rules in the ACL. It then proceeds to
create any rules specified in the configuration. This step is required so that only the rules specified in the configuration are
created.
This resource treats its inline rules as absolute; only the rules defined inline are created, and any additions/removals
external to this resource will result in diffs being shown. For these reasons, this resource is incompatible with the
aws_network_acl_rule resource.
For more information about Network ACLs, see the AWS Documentation on Network ACLs
(http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html).
ingress {
protocol = -1
rule_no = 100
action = "allow"
cidr_block = "0.0.0.0/0" # illustrative value
from_port = 0
to_port = 0
}
egress {
protocol = -1
rule_no = 100
action = "allow"
cidr_block = "0.0.0.0/0"
from_port = 0
to_port = 0
}
}
ingress {
protocol = -1
rule_no = 100
action = "allow"
cidr_block = "0.0.0.0/0" # illustrative value
from_port = 0
to_port = 0
}
}
Argument Reference
The following arguments are supported:
default_network_acl_id - (Required) The Network ACL ID to manage. This attribute is exported from aws_vpc , or
manually found via the AWS Console.
subnet_ids - (Optional) A list of Subnet IDs to apply the ACL to. See the notes below on managing Subnets in the
Default Network ACL
protocol - (Required) The protocol to match. If using the -1 'all' protocol, you must specify a from and to port of 0.
cidr_block - (Optional) The CIDR block to match. This must be a valid network mask.
Note: For more information on ICMP types and codes, see here: https://www.iana.org/assignments/icmp-
parameters/icmp-parameters.xhtml (https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml)
Managing Subnets in the Default Network ACL
Within a VPC, all Subnets must be associated with a Network ACL. In order to "delete" the association between a Subnet and
a non-default Network ACL, the association is destroyed by replacing it with an association between the Subnet and the
Default ACL instead.
When managing the Default Network ACL, you cannot "remove" Subnets. Instead, they must be reassigned to another
Network ACL, or the Subnet itself must be destroyed. Because of these requirements, removing the subnet_ids attribute
from the configuration of an aws_default_network_acl resource may result in a recurring plan, until the Subnets are
reassigned to another Network ACL or are destroyed.
Because Subnets are by default associated with the Default Network ACL, any non-explicit association will show up as a plan
to remove the Subnet. For example: if you have a custom aws_network_acl with two subnets attached, and you remove
the aws_network_acl resource, after successfully destroying this resource future plans will show a diff on the managed
aws_default_network_acl , as those two Subnets have been orphaned by the now destroyed Network ACL and thus
adopted by the Default Network ACL. In order to avoid a recurring plan, they will need to be reassigned, destroyed, or
added to the subnet_ids attribute of the aws_default_network_acl entry.
As an alternative to the above, you can also specify the following lifecycle configuration in your aws_default_network_acl
resource:
lifecycle {
ignore_changes = ["subnet_ids"]
}
Attributes Reference
In addition to all arguments above, the following attributes are exported:
owner_id - The ID of the AWS account that owns the Default Network ACL
Resource: aws_default_route_table
Provides a resource to manage a Default VPC Routing Table.
Each VPC created in AWS comes with a Default Route Table that can be managed, but not destroyed. This is an advanced
resource, and has special caveats to be aware of when using it. Please read this document in its entirety before using this
resource. It is recommended that you do not use both aws_default_route_table to manage the default route table and
the aws_main_route_table_association resource, due to possible conflicts in routes.
The aws_default_route_table behaves differently from normal resources, in that Terraform does not create this
resource, but instead attempts to "adopt" it into management. We can do this because each VPC created has a Default Route
Table that cannot be destroyed, and is created with a single route.
When Terraform first adopts the Default Route Table, it immediately removes all defined routes. It then proceeds to create
any routes specified in the configuration. This step is required so that only the routes specified in the configuration are present
in the Default Route Table.
For more information about Route Tables, see the AWS Documentation on Route Tables
(http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html#Route_Replacing_Main_Table).
For more information about managing normal Route Tables in Terraform, see our documentation on aws_route_table
(/docs/providers/aws/r/route_table.html).
NOTE on Route Tables and Routes: Terraform currently provides both a standalone Route resource
(/docs/providers/aws/r/route.html) and a Route Table resource with routes defined in-line. At this time you cannot use
a Route Table with in-line routes in conjunction with any Route resources. Doing so will cause a conflict of rule settings
and will overwrite routes.
resource "aws_default_route_table" "r" {
  default_route_table_id = "${aws_vpc.foo.default_route_table_id}" # example VPC reference
  route {
    # route arguments omitted in this example
  }
  tags = {
    Name = "default table"
  }
}
Argument Reference
The following arguments are supported:
Note that the default route, mapping the VPC's CIDR block to "local", is created implicitly and cannot be specified.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
owner_id - The ID of the AWS account that owns the route table
Resource: aws_default_security_group
Provides a resource to manage the default AWS Security Group.
For EC2 Classic accounts, each region comes with a Default Security Group. Additionally, each VPC created in AWS comes
with a Default Security Group that can be managed, but not destroyed. This is an advanced resource, and has special
caveats to be aware of when using it. Please read this document in its entirety before using this resource.
The aws_default_security_group behaves differently from normal resources, in that Terraform does not create this
resource, but instead "adopts" it into management. We can do this because these default security groups cannot be
destroyed, and are created with a known set of default ingress/egress rules.
When Terraform first adopts the Default Security Group, it immediately removes all ingress and egress rules in the
Security Group. It then proceeds to create any rules specified in the configuration. This step is required so that only the rules
specified in the configuration are created.
This resource treats its inline rules as absolute; only the rules defined inline are created, and any additions/removals
external to this resource will result in diffs being shown. For these reasons, this resource is incompatible with the
aws_security_group_rule resource.
For more information about Default Security Groups, see the AWS Documentation on Default Security Groups
(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#default-security-group).
resource "aws_default_security_group" "default" {
  vpc_id = "${aws_vpc.mainvpc.id}" # example VPC reference

  ingress {
protocol = -1
self = true
from_port = 0
to_port = 0
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
Example config to deny all Egress traffic, allowing Ingress
The following denies all Egress traffic by omitting any egress rules, while including the default ingress rule to allow all
traffic.
ingress {
protocol = -1
self = true
from_port = 0
to_port = 0
}
}
Argument Reference
The arguments of an aws_default_security_group differ slightly from aws_security_group resources. Namely, the
name argument is computed, and the name_prefix attribute is removed. The following arguments are still supported:
ingress - (Optional) Can be specified multiple times for each ingress rule. Each ingress block supports fields
documented below.
egress - (Optional, VPC only) Can be specified multiple times for each egress rule. Each egress block supports fields
documented below.
vpc_id - (Optional, Forces new resource) The VPC ID. Note that changing the vpc_id will not restore any default
security group rules that were modified, added, or removed. It will be left in its current state.
Usage
With the exceptions mentioned above, aws_default_security_group should behave identically to aws_security_group .
Please consult the aws_security_group documentation (/docs/providers/aws/r/security_group.html) for further usage documentation.
The aws_default_subnet behaves differently from normal resources, in that Terraform does not create this resource, but
instead "adopts" it into management.
Example Usage
Basic usage with tags:
resource "aws_default_subnet" "default_az1" {
  availability_zone = "us-west-2a"

  tags = {
Name = "Default subnet for us-west-2a"
}
}
Argument Reference
The arguments of an aws_default_subnet differ from aws_subnet resources. Namely, the availability_zone
argument is required and the availability_zone_id , vpc_id , cidr_block , ipv6_cidr_block , and
assign_ipv6_address_on_creation arguments are computed. The following arguments are still supported:
map_public_ip_on_launch - (Optional) Specify true to indicate that instances launched into the subnet should be
assigned a public IP address.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Each AWS region comes with a default set of DHCP options. This is an advanced resource, and has special caveats to be
aware of when using it. Please read this document in its entirety before using this resource.
The aws_default_vpc_dhcp_options behaves differently from normal resources, in that Terraform does not create this
resource, but instead "adopts" it into management.
Example Usage
Basic usage with tags:
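A minimal sketch, assuming only a Name tag is being set on the adopted options set (the resource label and tag value are illustrative):
resource "aws_default_vpc_dhcp_options" "default" {
  tags = {
    Name = "Default DHCP Option Set"   # illustrative tag value
  }
}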
Argument Reference
The arguments of an aws_default_vpc_dhcp_options differ slightly from aws_vpc_dhcp_options resources. Namely,
the domain_name , domain_name_servers and ntp_servers arguments are computed. The following arguments are still
supported:
netbios_node_type - (Optional) The NetBIOS node type (1, 2, 4, or 8). AWS recommends specifying 2 since broadcast
and multicast are not supported in their network. For more information about these node types, see RFC 2132
(http://www.ietf.org/rfc/rfc2132.txt).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The ID of the DHCP Options Set.
owner_id - The ID of the AWS account that owns the DHCP options set.
Resource: aws_default_vpc
Provides a resource to manage the default AWS VPC (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-
vpc.html) in the current region.
For AWS accounts created after 2013-12-04, each region comes with a Default VPC. This is an advanced resource, and has
special caveats to be aware of when using it. Please read this document in its entirety before using this resource.
The aws_default_vpc behaves differently from normal resources, in that Terraform does not create this resource, but
instead "adopts" it into management.
Example Usage
Basic usage with tags:
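A minimal sketch, assuming only a Name tag is being managed on the adopted VPC (the resource label and tag value are illustrative):
resource "aws_default_vpc" "default" {
  tags = {
    Name = "Default VPC"   # illustrative tag value
  }
}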
Argument Reference
The arguments of an aws_default_vpc differ slightly from aws_vpc resources. Namely, the cidr_block ,
instance_tenancy and assign_generated_ipv6_cidr_block arguments are computed. The following arguments are
still supported:
enable_dns_support - (Optional) A boolean flag to enable/disable DNS support in the VPC. Defaults true.
enable_dns_hostnames - (Optional) A boolean flag to enable/disable DNS hostnames in the VPC. Defaults false.
enable_classiclink - (Optional) A boolean flag to enable/disable ClassicLink for the VPC. Only valid in regions and
accounts that support EC2 Classic. See the ClassicLink documentation
(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vpc-classiclink.html) for more information. Defaults false.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
assign_generated_ipv6_cidr_block - Whether or not an Amazon-provided IPv6 CIDR block with a /56 prefix length
for the VPC was assigned
main_route_table_id - The ID of the main route table associated with this VPC. Note that you can change a VPC's
main route table by using an aws_main_route_table_association
(/docs/providers/aws/r/main_route_table_assoc.html)
ipv6_association_id - The association ID for the IPv6 CIDR block of the VPC
Import
Default VPCs can be imported using the vpc id , e.g.
For more information about Device Farm Projects, see the AWS Documentation on Device Farm Projects
(http://docs.aws.amazon.com/devicefarm/latest/APIReference/API_GetProject.html).
Argument Reference
name - (Required) The name of the project
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
dns_ips = [
"8.8.8.8",
"8.8.4.4",
]
}
Argument Reference
The following arguments are supported:
remote_domain_name - (Required) The fully qualified domain name of the remote domain for which forwarders will
be used.
Import
Conditional forwarders can be imported using the directory id and remote_domain_name, e.g.
Note: All arguments including the password and customer username will be stored in the raw state as plain-text. Read
more about sensitive data in state (/docs/state/sensitive-data.html).
Example Usage
SimpleAD
vpc_settings {
vpc_id = "${aws_vpc.main.id}"
subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"]
}
tags = {
Project = "foo"
}
}
vpc_settings {
vpc_id = "${aws_vpc.main.id}"
subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"]
}
tags = {
Project = "foo"
}
}
connect_settings {
customer_dns_ips = ["A.B.C.D"]
customer_username = "Admin"
subnet_ids = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"]
vpc_id = "${aws_vpc.main.id}"
}
}
Argument Reference
The following arguments are supported:
name - (Required) The fully qualified name for the directory, such as corp.example.com
password - (Required) The password for the directory administrator or connector user.
size - (Required for SimpleAD and ADConnector ) The size of the directory ( Small or Large are accepted values).
vpc_settings - (Required for SimpleAD and MicrosoftAD ) VPC related information about the directory. Fields
documented below.
connect_settings - (Required for ADConnector ) Connector related information about the directory. Fields
documented below.
alias - (Optional) The alias for the directory (must be unique amongst all aliases in AWS). Required for enable_sso .
enable_sso - (Optional) Whether to enable single-sign on for the directory. Requires alias . Defaults to false .
type - (Optional) The directory type ( SimpleAD , ADConnector or MicrosoftAD are accepted values). Defaults to
SimpleAD .
edition - (Optional) The MicrosoftAD edition ( Standard or Enterprise ). Defaults to Enterprise (applies to
MicrosoftAD type only).
vpc_settings supports the following:
subnet_ids - (Required) The identifiers of the subnets for the directory servers (2 subnets in 2 different AZs).
vpc_id - (Required) The identifier of the VPC that the directory is in.
connect_settings supports the following:
subnet_ids - (Required) The identifiers of the subnets for the directory servers (2 subnets in 2 different AZs).
vpc_id - (Required) The identifier of the VPC that the directory is in.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
dns_ip_addresses - A list of IP addresses of the DNS servers for the directory or connector.
Import
DirectoryService directories can be imported using the directory id , e.g.
Example Usage
principals {
identifiers = ["ds.amazonaws.com"]
type = "Service"
}
resources = ["${aws_cloudwatch_log_group.example.arn}"]
effect = "Allow"
}
}
Argument Reference
The following arguments are supported:
log_group_name - (Required) Name of the cloudwatch log group to which the logs should be published. The log
group should be already created and the directory service principal should be provided with required permission to
create stream and publish logs. Changing this value would delete the current subscription and create a new one. A
directory can only have one log subscription at a time.
Import
Directory Service Log Subscriptions can be imported using the directory id, e.g.
Example Usage
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "dlm.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:CreateSnapshot",
"ec2:DeleteSnapshot",
"ec2:DescribeVolumes",
"ec2:DescribeSnapshots"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:*::snapshot/*"
}
]
}
EOF
}
policy_details {
resource_types = ["VOLUME"]
schedule {
name = "2 weeks of daily snapshots"
create_rule {
interval = 24
interval_unit = "HOURS"
times = ["23:45"]
}
retain_rule {
count = 14
}
tags_to_add = {
SnapshotCreator = "DLM"
}
copy_tags = false
}
target_tags = {
Snapshot = "true"
}
}
}
Argument Reference
The following arguments are supported:
execution_role_arn - (Required) The ARN of an IAM role that is able to be assumed by the DLM service.
state - (Optional) Whether the lifecycle policy should be enabled or disabled. ENABLED or DISABLED are valid
values. Defaults to ENABLED .
resource_types - (Required) A list of resource types that should be targeted by the lifecycle policy. VOLUME is
currently the only allowed value.
schedule - (Required) See the schedule con guration block.
target_tags (Required) A mapping of tag keys and their values. Any resources that match the resource_types and
are tagged with any of these tags will be targeted.
Note: You cannot have overlapping lifecycle policies that share the same target_tags . Terraform is unable to detect
this at plan time but it will fail during apply.
Schedule arguments
copy_tags - (Optional) Copy all user-defined tags on a source volume to snapshots of the volume created by this
policy.
tags_to_add - (Optional) A mapping of tag keys and their values. DLM lifecycle policies will already tag the snapshot
with the tags on the volume. This configuration adds extra tags on top of these.
interval - (Required) How often this lifecycle policy should be evaluated. 2 , 3 , 4 , 6 , 8 , 12 or 24 are valid values.
interval_unit - (Optional) The unit for how often the lifecycle policy should be evaluated. HOURS is currently the
only allowed value and also the default value.
times - (Optional) A list of times in 24 hour clock format that sets when the lifecycle policy should be evaluated. Max
of 1.
count - (Required) How many snapshots to keep. Must be an integer between 1 and 1000.
Attributes Reference
All of the arguments above are exported as attributes.
Import
DLM lifecycle policies can be imported by their policy ID:
Note: All arguments including the PEM encoded certificate will be stored in the raw state as plain-text. Read more about
sensitive data in state (/docs/state/sensitive-data.html).
Example Usage
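A minimal sketch is shown below. The certificate_id argument and all values are illustrative assumptions; the PEM contents would normally be read from a file or variable.
resource "aws_dms_certificate" "test" {
  certificate_id  = "test-dms-certificate-tf"   # illustrative identifier
  certificate_pem = "..."                       # placeholder for the PEM body
}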
Argument Reference
The following arguments are supported:
certificate_pem - (Optional) The contents of the .pem X.509 certificate file for the certificate. Either
certificate_pem or certificate_wallet must be set.
certificate_wallet - (Optional) The contents of the Oracle Wallet certificate for use with SSL. Either
certificate_pem or certificate_wallet must be set.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
certificate_arn - The Amazon Resource Name (ARN) for the certificate.
Import
Certificates can be imported using the certificate_arn , e.g.
Note: All arguments including the password will be stored in the raw state as plain-text. Read more about sensitive data
in state (/docs/state/sensitive-data.html).
Example Usage
tags = {
Name = "test"
}
username = "test"
}
Argument Reference
The following arguments are supported:
certificate_arn - (Optional, Default: empty string) The Amazon Resource Name (ARN) for the certificate.
engine_name - (Required) The type of engine for the endpoint. Can be one of aurora | azuredb | db2 | docdb |
dynamodb | mariadb | mongodb | mysql | oracle | postgres | redshift | s3 | sqlserver | sybase .
extra_connection_attributes - (Optional) Additional attributes associated with the connection. For available
attributes see Using Extra Connection Attributes with AWS Database Migration Service
(http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.ConnectionAttributes.html).
kms_key_arn - (Required when engine_name is mongodb , optional otherwise) The Amazon Resource Name (ARN)
for the KMS key that will be used to encrypt the connection parameters. If you do not specify a value for
kms_key_arn , then AWS DMS will use your default encryption key. AWS KMS creates the default encryption key for
your AWS account. Your AWS account has a different default encryption key for each AWS region.
ssl_mode - (Optional, Default: none) The SSL mode to use for the connection. Can be one of none | require |
verify-ca | verify-full
username - (Optional) The user name to be used to login to the endpoint database.
service_access_role - (Optional) The Amazon Resource Name (ARN) used by the service access IAM role for
dynamodb endpoints.
mongodb_settings - (Optional) Settings for the source MongoDB endpoint. Available settings are auth_type
(default: password ), auth_mechanism (default: default ), nesting_level (default: none ), extract_doc_id
(default: false ), docs_to_investigate (default: 1000 ) and auth_source (default: admin ). For more details, see
Using MongoDB as a Source for AWS DMS
(https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html).
s3_settings - (Optional) Settings for the target S3 endpoint. Available settings are service_access_role_arn ,
external_table_definition , csv_row_delimiter (default: \\n ), csv_delimiter (default: , ), bucket_folder ,
bucket_name and compression_type (default: NONE ). For more details, see Using Amazon S3 as a Target for AWS
Database Migration Service (https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Endpoints can be imported using the endpoint_id , e.g.
$ terraform import aws_dms_endpoint.test test-dms-endpoint-tf
Resource: aws_dms_replication_instance
Provides a DMS (Data Migration Service) replication instance resource. DMS replication instances can be created, updated,
deleted, and imported.
Example Usage
principals {
identifiers = ["dms.amazonaws.com"]
type = "Service"
}
}
}
tags = {
Name = "test"
}
vpc_security_group_ids = [
"sg-12345678",
]
}
Argument Reference
The following arguments are supported:
allocated_storage - (Optional, Default: 50, Min: 5, Max: 6144) The amount of storage (in gigabytes) to be initially
allocated for the replication instance.
apply_immediately - (Optional, Default: false) Indicates whether the changes should be applied immediately or
during the next maintenance window. Only used when updating an existing resource.
auto_minor_version_upgrade - (Optional, Default: false) Indicates that minor engine upgrades will be applied
automatically to the replication instance during the maintenance window.
availability_zone - (Optional) The EC2 Availability Zone that the replication instance will be created in.
kms_key_arn - (Optional) The Amazon Resource Name (ARN) for the KMS key that will be used to encrypt the
connection parameters. If you do not specify a value for kms_key_arn , then AWS DMS will use your default encryption
key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default
encryption key for each AWS region.
multi_az - (Optional) Specifies if the replication instance is a multi-az deployment. You cannot set the
availability_zone parameter if the multi_az parameter is set to true .
preferred_maintenance_window - (Optional) The weekly time range during which system maintenance can occur, in
Universal Coordinated Time (UTC).
Default: A 30-minute window selected at random from an 8-hour block of time per region, occurring on a
random day of the week.
Format: ddd:hh24:mi-ddd:hh24:mi
publicly_accessible - (Optional, Default: false) Specifies the accessibility options for the replication instance. A
value of true represents an instance with a public IP address. A value of false represents an instance with a private IP
address.
replication_instance_class - (Required) The compute and memory capacity of the replication instance as
specified by the replication instance class. Can be one of dms.t2.micro | dms.t2.small | dms.t2.medium |
dms.t2.large | dms.c4.large | dms.c4.xlarge | dms.c4.2xlarge | dms.c4.4xlarge
replication_instance_id - (Required) The replication instance identifier. This parameter is stored as a lowercase
string.
vpc_security_group_ids - (Optional) A list of VPC security group IDs to be used with the replication instance. The
VPC security groups must work with the VPC containing the replication instance.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_dms_replication_instance provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
Import
Replication instances can be imported using the replication_instance_id , e.g.
Example Usage
subnet_ids = [
"subnet-12345678",
]
tags = {
Name = "test"
}
}
Argument Reference
The following arguments are supported:
replication_subnet_group_id - (Required) The name for the replication subnet group. This value is stored as a
lowercase string.
Must contain no more than 255 alphanumeric characters, periods, spaces, underscores, or hyphens.
subnet_ids - (Required) A list of the EC2 subnet IDs for the subnet group.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Replication subnet groups can be imported using the replication_subnet_group_id , e.g.
$ terraform import aws_dms_replication_subnet_group.test test-dms-replication-subnet-group-tf
Resource: aws_dms_replication_task
Provides a DMS (Data Migration Service) replication task resource. DMS replication tasks can be created, updated, deleted,
and imported.
Example Usage
tags = {
Name = "test"
}
target_endpoint_arn = "${aws_dms_endpoint.test-dms-target-endpoint-tf.endpoint_arn}"
}
Argument Reference
The following arguments are supported:
cdc_start_time - (Optional) The Unix timestamp integer for the start of the Change Data Capture (CDC) operation.
migration_type - (Required) The migration type. Can be one of full-load | cdc | full-load-and-cdc .
replication_instance_arn - (Required) The Amazon Resource Name (ARN) of the replication instance.
replication_task_settings - (Optional) An escaped JSON string that contains the task settings. For a complete list
of task settings, see Task Settings for AWS Database Migration Service Tasks
(http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html).
source_endpoint_arn - (Required) The Amazon Resource Name (ARN) string that uniquely identifies the source
endpoint.
table_mappings - (Required) An escaped JSON string that contains the table mappings. For information on table
mapping see Using Table Mapping with an AWS Database Migration Service Task to Select and Filter Data
(http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.html)
target_endpoint_arn - (Required) The Amazon Resource Name (ARN) string that uniquely identifies the target
endpoint.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
replication_task_arn - The Amazon Resource Name (ARN) for the replication task.
Import
Replication tasks can be imported using the replication_task_id , e.g.
Changes to a DocDB Cluster can occur when you manually change a parameter, such as port , and are reflected in the next
maintenance window. Because of this, Terraform may report a difference in its planning phase because a modification has
not yet taken place. You can use the apply_immediately flag to instruct the service to apply the change immediately (see
documentation below).
Note: Using apply_immediately can result in a brief downtime as the server reboots.
Note: All arguments including the username and password will be stored in the raw state as plain-text. Read
more about sensitive data in state (/docs/state/sensitive-data.html).
Example Usage
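A minimal sketch; the cluster identifier, credentials, and backup settings are illustrative, and the backup_retention_period argument shown here is an assumption not documented in the list below.
resource "aws_docdb_cluster" "docdb" {
  cluster_identifier      = "my-docdb-cluster"     # illustrative identifier
  engine                  = "docdb"
  master_username         = "foo"                  # illustrative credentials
  master_password         = "mustbeeightchars"
  backup_retention_period = 5                      # assumed argument
  preferred_backup_window = "07:00-09:00"
  skip_final_snapshot     = true
}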
Argument Reference
For more detailed documentation about each argument, refer to the AWS official documentation
(https://docs.aws.amazon.com/cli/latest/reference/docdb/create-db-cluster.html).
apply_immediately - (Optional) Specifies whether any cluster modifications are applied immediately, or during the
next maintenance window. Default is false .
availability_zones - (Optional) A list of EC2 Availability Zones that instances in the DB cluster can be created in.
cluster_identifier_prefix - (Optional, Forces new resource) Creates a unique cluster identifier beginning with the
specified prefix. Conflicts with cluster_identifier .
cluster_identifier - (Optional, Forces new resources) The cluster identifier. If omitted, Terraform will assign a
random, unique identifier.
engine_version - (Optional) The database engine version. Updating this argument results in an outage.
engine - (Optional) The name of the database engine to be used for this DB cluster. Defaults to docdb . Valid Values:
docdb
final_snapshot_identifier - (Optional) The name of your final DB snapshot when this DB cluster is deleted. If
omitted, no final snapshot will be made.
kms_key_id - (Optional) The ARN for the KMS encryption key. When specifying kms_key_id , storage_encrypted
needs to be set to true.
master_password - (Required unless a snapshot_identifier is provided) Password for the master DB user. Note
that this may show up in logs, and it will be stored in the state file. Please refer to the DocDB Naming Constraints.
master_username - (Required unless a snapshot_identifier is provided) Username for the master DB user.
preferred_backup_window - (Optional) The daily time range during which automated backups are created if
automated backups are enabled using the BackupRetentionPeriod parameter. Time is given in UTC. Default: A 30-minute
window selected at random from an 8-hour block of time per region, e.g. 04:00-09:00
skip_final_snapshot - (Optional) Determines whether a final DB snapshot is created before the DB cluster is
deleted. If true is specified, no DB snapshot is created. If false is specified, a DB snapshot is created before the DB
cluster is deleted, using the value from final_snapshot_identifier . Default is false .
snapshot_identifier - (Optional) Specifies whether or not to create this cluster from a snapshot. You can use either
the name or ARN when specifying a DB cluster snapshot, or the ARN when specifying a DB snapshot.
storage_encrypted - (Optional) Specifies whether the DB cluster is encrypted. The default is false .
vpc_security_group_ids - (Optional) List of VPC security groups to associate with the Cluster
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_docdb_cluster provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration
options:
delete - (Default 120 minutes ) Used for destroying the cluster. This includes any cleanup tasks during the destroy
process.
Import
DocDB Clusters can be imported using the cluster_identifier , e.g.
You do not designate a primary and subsequent replicas. Instead, you simply add DocDB Instances and DocDB manages the
replication. You can use the count (/docs/con guration/resources.html#count) meta-parameter to make multiple instances
and join them all to the same DocDB Cluster, or you may specify different Cluster Instance resources with various
instance_class sizes.
Example Usage
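A minimal sketch using count to create two instances in an assumed aws_docdb_cluster named default ; the identifier prefix and instance class are illustrative, and the cluster_identifier argument shown here is an assumption not documented in the list below.
resource "aws_docdb_cluster_instance" "cluster_instances" {
  count              = 2
  identifier         = "docdb-cluster-demo-${count.index}"   # illustrative identifier prefix
  cluster_identifier = "${aws_docdb_cluster.default.id}"     # assumed cluster reference
  instance_class     = "db.r4.large"
}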
Argument Reference
For more detailed documentation about each argument, refer to the AWS official documentation
(https://docs.aws.amazon.com/cli/latest/reference/docdb/create-db-instance.html).
apply_immediately - (Optional) Specifies whether any database modifications are applied immediately, or during
the next maintenance window. Default is false .
auto_minor_version_upgrade - (Optional) Indicates that minor engine upgrades will be applied automatically to the
DB instance during the maintenance window. Default true .
availability_zone - (Optional, Computed) The EC2 Availability Zone that the DB instance is created in. See docs
(https://docs.aws.amazon.com/documentdb/latest/developerguide/API_CreateDBInstance.html) about the details.
engine - (Optional) The name of the database engine to be used for the DocDB instance. Defaults to docdb . Valid
Values: docdb .
identifier - (Optional, Forces new resource) The identifier for the DocDB instance. If omitted, Terraform will assign
a random, unique identifier.
identifier_prefix - (Optional, Forces new resource) Creates a unique identifier beginning with the specified prefix.
Conflicts with identifier .
instance_class - (Required) The instance class to use. For details on CPU and memory, see Scaling for DocDB
Instances (https://docs.aws.amazon.com/documentdb/latest/developerguide/db-cluster-manage-
performance.html#db-cluster-manage-scaling-instance). DocDB currently supports the below instance classes. Please
see AWS Documentation (https://docs.aws.amazon.com/documentdb/latest/developerguide/db-instance-
classes.html#db-instance-class-specs) for complete details.
db.r4.large
db.r4.xlarge
db.r4.2xlarge
db.r4.4xlarge
db.r4.8xlarge
db.r4.16xlarge
promotion_tier - (Optional) Default 0. Failover Priority setting on instance level. Readers in lower tiers have
higher priority for promotion to writer.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
endpoint - The DNS address for this instance. May not be writable
kms_key_id - The ARN for the KMS encryption key if one is set to the cluster.
preferred_backup_window - The daily time range during which automated backups are created if automated
backups are enabled.
writer – Boolean indicating if this instance is writable. False indicates this instance is a read replica.
Timeouts
aws_docdb_cluster_instance provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
create - (Default 90 minutes ) Used for Creating Instances, Replicas, and restoring from Snapshots
delete - (Default 90 minutes ) Used for destroying databases. This includes the time required to take snapshots
Import
DocDB Cluster Instances can be imported using the identifier , e.g.
Example Usage
resource "aws_docdb_cluster_parameter_group" "example" {
  # example family and name values are illustrative
  family = "docdb3.6"
  name   = "example"

  parameter {
name = "tls"
value = "enabled"
}
}
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The name of the documentDB cluster parameter group. If omitted, Terraform
will assign a random, unique name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts
with name .
family - (Required, Forces new resource) The family of the documentDB cluster parameter group.
description - (Optional, Forces new resource) The description of the documentDB cluster parameter group.
Defaults to "Managed by Terraform".
apply_method - (Optional) Valid values are immediate and pending-reboot . Defaults to pending-reboot .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
DocumentDB Cluster Parameter Groups can be imported using the name , e.g.
Example Usage
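A minimal sketch; the referenced cluster and the snapshot identifier are illustrative, and the db_cluster_snapshot_identifier argument shown here is an assumption not documented in the list below.
resource "aws_docdb_cluster_snapshot" "example" {
  db_cluster_identifier          = "${aws_docdb_cluster.example.id}"   # assumed cluster reference
  db_cluster_snapshot_identifier = "resourcetestsnapshot1234"          # illustrative snapshot name
}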
Argument Reference
The following arguments are supported:
db_cluster_identifier - (Required) The DocDB Cluster Identifier from which to take the snapshot.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
availability_zones - List of EC2 Availability Zones that instances in the DocDB cluster snapshot can be restored in.
db_cluster_snapshot_arn - The Amazon Resource Name (ARN) for the DocDB Cluster Snapshot.
engine_version - Version of the database engine for this DocDB cluster snapshot.
kms_key_id - If storage_encrypted is true, the AWS KMS key identifier for the encrypted DocDB cluster snapshot.
port - Port that the DocDB cluster was listening on at the time of the snapshot.
source_db_cluster_snapshot_identifier - The ARN of the DocDB cluster snapshot that this snapshot was copied
from. Only set for cross-customer or cross-region copies.
Timeouts
aws_docdb_cluster_snapshot provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
create - (Default 20m ) How long to wait for the snapshot to be available.
Import
aws_docdb_cluster_snapshot can be imported by using the cluster snapshot identifier, e.g.
Example Usage
resource "aws_docdb_subnet_group" "default" {
  # example name and subnet references are illustrative
  name       = "main"
  subnet_ids = ["${aws_subnet.frontend.id}", "${aws_subnet.backend.id}"]
  tags = {
Name = "My docdb subnet group"
}
}
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The name of the docDB subnet group. If omitted, Terraform will assign a
random, unique name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts
with name .
description - (Optional) The description of the docDB subnet group. Defaults to "Managed by Terraform".
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
DocumentDB Subnet groups can be imported using the name , e.g.
Example Usage
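A minimal sketch of an IPv6 BGP peer on an assumed existing private virtual interface; the interface reference and ASN are illustrative.
resource "aws_dx_bgp_peer" "peer" {
  virtual_interface_id = "${aws_dx_private_virtual_interface.foo.id}"   # assumed interface reference
  address_family       = "ipv6"
  bgp_asn              = 65351                                          # illustrative ASN
}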
Argument Reference
The following arguments are supported:
address_family - (Required) The address family for the BGP peer. ipv4 or ipv6 .
bgp_asn - (Required) The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration.
virtual_interface_id - (Required) The ID of the Direct Connect virtual interface on which to create the BGP peer.
amazon_address - (Optional) The IPv4 CIDR address to use to send traffic to Amazon. Required for IPv4 BGP peers on
public virtual interfaces.
customer_address - (Optional) The IPv4 CIDR destination address to which Amazon should send traffic. Required for
IPv4 BGP peers on public virtual interfaces.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
aws_device - The Direct Connect endpoint on which the BGP peer terminates.
Timeouts
aws_dx_bgp_peer provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
Example Usage
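A minimal sketch associating an assumed aws_dx_connection with an assumed aws_dx_lag ; the connection_id argument shown here is an assumption not documented in the list below.
resource "aws_dx_connection_association" "main" {
  connection_id = "${aws_dx_connection.main.id}"   # assumed connection reference
  lag_id        = "${aws_dx_lag.main.id}"          # assumed LAG reference
}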
Argument Reference
The following arguments are supported:
lag_id - (Required) The ID of the LAG with which to associate the connection.
Resource: aws_dx_connection
Provides a Connection of Direct Connect.
Example Usage
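A minimal sketch; the connection name and location code are illustrative, and the name argument shown here is an assumption not documented in the list below.
resource "aws_dx_connection" "hoge" {
  name      = "tf-dx-connection"   # illustrative name
  bandwidth = "1Gbps"
  location  = "EqDC2"              # illustrative location code
}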
Argument Reference
The following arguments are supported:
bandwidth - (Required) The bandwidth of the connection. Available values: 1Gbps, 10Gbps. Case sensitive.
location - (Required) The AWS Direct Connect location where the connection is located. See DescribeLocations
(https://docs.aws.amazon.com/directconnect/latest/APIReference/API_DescribeLocations.html) for the list of AWS
Direct Connect locations. Use locationCode .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
jumbo_frame_capable - Boolean value representing if jumbo frames have been enabled for this connection.
has_logical_redundancy - Indicates whether the connection supports a secondary BGP peer in the same address
family (IPv4/IPv6).
aws_device - The Direct Connect endpoint on which the physical connection terminates.
Import
Direct Connect connections can be imported using the connection id , e.g.
$ terraform import aws_dx_connection.test_connection dxcon-ffre0ec3
Resource: aws_dx_gateway_association
Associates a Direct Connect Gateway with a VGW or transit gateway.
Example Usage
allowed_prefixes = [
"10.255.255.0/30",
"10.255.255.8/30",
]
}
allowed_prefixes = [
"210.52.109.0/24",
"175.45.176.0/22",
]
}
A full example of how to create a VPN Gateway in one AWS account, create a Direct Connect Gateway in a second AWS
account, and associate the VPN Gateway with the Direct Connect Gateway via the
aws_dx_gateway_association_proposal and aws_dx_gateway_association resources can be found in the
./examples/dx-gateway-cross-account-vgw-association directory within the Github Repository
(https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/dx-gateway-cross-account-vgw-
association).
Argument Reference
NOTE: dx_gateway_id plus one of associated_gateway_id or vpn_gateway_id must be specified for single
account Direct Connect gateway associations.
associated_gateway_id - (Optional) The ID of the VGW or transit gateway with which to associate the Direct Connect
gateway. Used for single account Direct Connect gateway associations.
vpn_gateway_id - (Optional) Deprecated: Use associated_gateway_id instead. The ID of the VGW with which to
associate the gateway. Used for single account Direct Connect gateway associations.
associated_gateway_owner_account_id - (Optional) The ID of the AWS account that owns the VGW or transit
gateway with which to associate the Direct Connect gateway. Used for cross-account Direct Connect gateway
associations.
proposal_id - (Optional) The ID of the Direct Connect gateway association proposal. Used for cross-account Direct
Connect gateway associations.
allowed_prefixes - (Optional) VPC prefixes (CIDRs) to advertise to the Direct Connect gateway. Defaults to the CIDR
block of the VPC associated with the Virtual Gateway. To enable drift detection, must be configured.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
dx_gateway_owner_account_id - The ID of the AWS account that owns the Direct Connect gateway.
Timeouts
aws_dx_gateway_association provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
Import
Direct Connect gateway associations can be imported using dx_gateway_id together with associated_gateway_id , e.g.
$ terraform import aws_dx_gateway_association.example dxgw-12345678/vgw-98765432
Resource: aws_dx_gateway_association_proposal
Manages a Direct Connect Gateway Association Proposal, typically for enabling cross-account associations. For single
account associations, see the aws_dx_gateway_association resource
(/docs/providers/aws/r/dx_gateway_association.html).
Example Usage
A full example of how to create a VPN Gateway in one AWS account, create a Direct Connect Gateway in a second AWS
account, and associate the VPN Gateway with the Direct Connect Gateway via the
aws_dx_gateway_association_proposal and aws_dx_gateway_association resources can be found in the
./examples/dx-gateway-cross-account-vgw-association directory within the Github Repository
(https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/dx-gateway-cross-account-vgw-
association).
Argument Reference
dx_gateway_owner_account_id - (Required) AWS Account identifier of the Direct Connect Gateway's owner.
associated_gateway_id - (Optional) The ID of the VGW or transit gateway with which to associate the Direct Connect
gateway.
allowed_prefixes - (Optional) VPC prefixes (CIDRs) to advertise to the Direct Connect gateway. Defaults to the CIDR
block of the VPC associated with the Virtual Gateway. To enable drift detection, must be configured.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Direct Connect Gateway Association Proposals can be imported using the proposal ID, e.g.
Example Usage
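A minimal sketch; the gateway name is illustrative and the name argument shown here is an assumption not documented in the list below.
resource "aws_dx_gateway" "example" {
  name            = "tf-dxg-example"   # illustrative name
  amazon_side_asn = "64512"
}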
Argument Reference
The following arguments are supported:
amazon_side_asn - (Required) The ASN to be configured on the Amazon side of the connection. The ASN must be in
the private range of 64,512 to 65,534 or 4,200,000,000 to 4,294,967,294.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_dx_gateway provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
Import
Direct Connect Gateways can be imported using the gateway id , e.g.
Example Usage
provider "aws" {
provider "aws" {
alias = "accepter"
name = "vif-foo"
vlan = 4094
address_family = "ipv4"
bgp_asn = 65352
depends_on = ["aws_vpn_gateway.vpn_gw"]
}
tags = {
Side = "Accepter"
}
}
Argument Reference
The following arguments are supported:
dx_gateway_id - (Optional) The ID of the Direct Connect gateway to which to connect the virtual interface.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_dx_hosted_private_virtual_interface_accepter provides the following Timeouts
(/docs/configuration/resources.html#timeouts) configuration options:
Import
Direct Connect hosted private virtual interfaces can be imported using the vif id , e.g.
Example Usage
name = "vif-foo"
vlan = 4094
address_family = "ipv4"
bgp_asn = 65352
}
Argument Reference
The following arguments are supported:
address_family - (Required) The address family for the BGP peer. ipv4 or ipv6 .
bgp_asn - (Required) The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration.
connection_id - (Required) The ID of the Direct Connect connection (or LAG) on which to create the virtual interface.
owner_account_id - (Required) The AWS account that will own the new virtual interface.
amazon_address - (Optional) The IPv4 CIDR address to use to send traffic to Amazon. Required for IPv4 BGP peers.
mtu - (Optional) The maximum transmission unit (MTU) is the size, in bytes, of the largest permissible packet that can
be passed over the connection. The MTU of a virtual private interface can be either 1500 or 9001 (jumbo frames).
Default is 1500 .
customer_address - (Optional) The IPv4 CIDR destination address to which Amazon should send traffic. Required for
IPv4 BGP peers.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
aws_device - The Direct Connect endpoint on which the virtual interface terminates.
Timeouts
aws_dx_hosted_private_virtual_interface provides the following Timeouts
(/docs/configuration/resources.html#timeouts) configuration options:
Import
Direct Connect hosted private virtual interfaces can be imported using the vif id , e.g.
Example Usage
provider "aws" {
provider "aws" {
alias = "accepter"
name = "vif-foo"
vlan = 4094
address_family = "ipv4"
bgp_asn = 65352
customer_address = "175.45.176.1/30"
amazon_address = "175.45.176.2/30"
route_filter_prefixes = [
"210.52.109.0/24",
"175.45.176.0/22",
]
}
tags = {
Side = "Accepter"
}
}
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_dx_hosted_public_virtual_interface_accepter provides the following Timeouts
(/docs/configuration/resources.html#timeouts) configuration options:
Import
Direct Connect hosted public virtual interfaces can be imported using the vif id , e.g.
Example Usage
name = "vif-foo"
vlan = 4094
address_family = "ipv4"
bgp_asn = 65352
customer_address = "175.45.176.1/30"
amazon_address = "175.45.176.2/30"
route_filter_prefixes = [
"210.52.109.0/24",
"175.45.176.0/22",
]
}
Argument Reference
The following arguments are supported:
address_family - (Required) The address family for the BGP peer. ipv4 or ipv6 .
bgp_asn - (Required) The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration.
connection_id - (Required) The ID of the Direct Connect connection (or LAG) on which to create the virtual interface.
owner_account_id - (Required) The AWS account that will own the new virtual interface.
route_filter_prefixes - (Required) A list of routes to be advertised to the AWS network in this region.
amazon_address - (Optional) The IPv4 CIDR address to use to send traffic to Amazon. Required for IPv4 BGP peers.
customer_address - (Optional) The IPv4 CIDR destination address to which Amazon should send traffic. Required for
IPv4 BGP peers.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
aws_device - The Direct Connect endpoint on which the virtual interface terminates.
Timeouts
aws_dx_hosted_public_virtual_interface provides the following Timeouts
(/docs/configuration/resources.html#timeouts) configuration options:
Import
Direct Connect hosted public virtual interfaces can be imported using the vif id , e.g.
NOTE: When creating a LAG, Direct Connect requires creating a Connection. Terraform will remove this unmanaged
connection during resource creation.
Example Usage
Argument Reference
The following arguments are supported:
connections_bandwidth - (Required) The bandwidth of the individual physical connections bundled by the LAG.
Available values: 1Gbps, 10Gbps. Case sensitive.
location - (Required) The AWS Direct Connect location in which the LAG should be allocated. See DescribeLocations
(https://docs.aws.amazon.com/directconnect/latest/APIReference/API_DescribeLocations.html) for the list of AWS
Direct Connect locations. Use locationCode .
force_destroy - (Optional, Default:false) A boolean that indicates all connections associated with the LAG should be
deleted so that the LAG can be destroyed without error. These objects are not recoverable.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
has_logical_redundancy - Indicates whether the LAG supports a secondary BGP peer in the same address family
(IPv4/IPv6).
Import
Direct Connect LAGs can be imported using the lag id , e.g.
Example Usage
name = "vif-foo"
vlan = 4094
address_family = "ipv4"
bgp_asn = 65352
}
Argument Reference
The following arguments are supported:
address_family - (Required) The address family for the BGP peer. ipv4 or ipv6 .
bgp_asn - (Required) The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration.
connection_id - (Required) The ID of the Direct Connect connection (or LAG) on which to create the virtual interface.
amazon_address - (Optional) The IPv4 CIDR address to use to send traffic to Amazon. Required for IPv4 BGP peers.
mtu - (Optional) The maximum transmission unit (MTU) is the size, in bytes, of the largest permissible packet that can
be passed over the connection. The MTU of a virtual private interface can be either 1500 or 9001 (jumbo frames).
Default is 1500 .
customer_address - (Optional) The IPv4 CIDR destination address to which Amazon should send traffic. Required for
IPv4 BGP peers.
dx_gateway_id - (Optional) The ID of the Direct Connect gateway to which to connect the virtual interface.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
aws_device - The Direct Connect endpoint on which the virtual interface terminates.
Timeouts
aws_dx_private_virtual_interface provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
Import
Direct Connect private virtual interfaces can be imported using the vif id , e.g.
Example Usage
name = "vif-foo"
vlan = 4094
address_family = "ipv4"
bgp_asn = 65352
customer_address = "175.45.176.1/30"
amazon_address = "175.45.176.2/30"
route_filter_prefixes = [
"210.52.109.0/24",
"175.45.176.0/22",
]
}
Argument Reference
The following arguments are supported:
address_family - (Required) The address family for the BGP peer. ipv4 or ipv6 .
bgp_asn - (Required) The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration.
connection_id - (Required) The ID of the Direct Connect connection (or LAG) on which to create the virtual interface.
amazon_address - (Optional) The IPv4 CIDR address to use to send traffic to Amazon. Required for IPv4 BGP peers.
customer_address - (Optional) The IPv4 CIDR destination address to which Amazon should send traffic. Required for
IPv4 BGP peers.
route_filter_prefixes - (Required) A list of routes to be advertised to the AWS network in this region.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
aws_device - The Direct Connect endpoint on which the virtual interface terminates.
Timeouts
aws_dx_public_virtual_interface provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
Import
Direct Connect public virtual interfaces can be imported using the vif id , e.g.
Example Usage
dx_gateway_id = "${aws_dx_gateway.example.id}"
name = "tf-transit-vif-example"
vlan = 4094
address_family = "ipv4"
bgp_asn = 65352
}
Argument Reference
The following arguments are supported:
address_family - (Required) The address family for the BGP peer. ipv4 or ipv6 .
bgp_asn - (Required) The autonomous system (AS) number for Border Gateway Protocol (BGP) configuration.
connection_id - (Required) The ID of the Direct Connect connection (or LAG) on which to create the virtual interface.
dx_gateway_id - (Required) The ID of the Direct Connect gateway to which to connect the virtual interface.
amazon_address - (Optional) The IPv4 CIDR address to use to send traffic to Amazon. Required for IPv4 BGP peers.
customer_address - (Optional) The IPv4 CIDR destination address to which Amazon should send traffic. Required for
IPv4 BGP peers.
mtu - (Optional) The maximum transmission unit (MTU) is the size, in bytes, of the largest permissible packet that can
be passed over the connection. The MTU of a virtual transit interface can be either 1500 or 8500 (jumbo frames).
Default is 1500 .
aws_device - The Direct Connect endpoint on which the virtual interface terminates.
Timeouts
aws_dx_transit_virtual_interface provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
Import
Direct Connect transit virtual interfaces can be imported using the vif id , e.g.
Note: There are many restrictions before you can properly create DynamoDB Global Tables in multiple regions. See the
AWS DynamoDB Global Table Requirements
(http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/globaltables_reqs_bestpractices.html) for more
information.
Example Usage
provider "aws" {
alias = "us-east-1"
region = "us-east-1"
}
provider "aws" {
alias = "us-west-2"
region = "us-west-2"
}
hash_key = "myAttribute"
name = "myTable"
stream_enabled = true
stream_view_type = "NEW_AND_OLD_IMAGES"
read_capacity = 1
write_capacity = 1
attribute {
name = "myAttribute"
type = "S"
}
}
hash_key = "myAttribute"
name = "myTable"
stream_enabled = true
stream_view_type = "NEW_AND_OLD_IMAGES"
read_capacity = 1
write_capacity = 1
attribute {
name = "myAttribute"
type = "S"
}
}
name = "myTable"
replica {
region_name = "us-east-1"
}
replica {
region_name = "us-west-2"
}
}
Argument Reference
The following arguments are supported:
name - (Required) The name of the global table. Must match underlying DynamoDB Table names in all regions.
replica - (Required) Underlying DynamoDB Table. At least 1 replica must be defined. See below.
Nested Fields
replica
region_name - (Required) AWS region name of replica DynamoDB Table. e.g. us-east-1
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
DynamoDB Global Tables can be imported using the global table name, e.g.
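For instance, with an illustrative resource address and the table name from the example above:
$ terraform import aws_dynamodb_global_table.myTable myTable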
Example Usage
The following dynamodb table description models the table and GSI shown in the AWS SDK example documentation
(https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html)
resource "aws_dynamodb_table" "basic-dynamodb-table" {
name = "GameScores"
billing_mode = "PROVISIONED"
read_capacity = 20
write_capacity = 20
hash_key = "UserId"
range_key = "GameTitle"
attribute {
name = "UserId"
type = "S"
}
attribute {
name = "GameTitle"
type = "S"
}
attribute {
name = "TopScore"
type = "N"
}
ttl {
attribute_name = "TimeToExist"
enabled = false
}
global_secondary_index {
name = "GameTitleIndex"
hash_key = "GameTitle"
range_key = "TopScore"
write_capacity = 10
read_capacity = 10
projection_type = "INCLUDE"
non_key_attributes = ["UserId"]
}
tags = {
Name = "dynamodb-table-1"
Environment = "production"
}
}
Argument Reference
The following arguments are supported:
name - (Required) The name of the table; this needs to be unique within a region.
billing_mode - (Optional) Controls how you are charged for read and write throughput and how you manage
capacity. The valid values are PROVISIONED and PAY_PER_REQUEST . Defaults to PROVISIONED .
hash_key - (Required, Forces new resource) The attribute to use as the hash (partition) key. Must also be defined as
an attribute , see below.
range_key - (Optional, Forces new resource) The attribute to use as the range (sort) key. Must also be defined as an
attribute , see below.
write_capacity - (Optional) The number of write units for this table. If the billing_mode is PROVISIONED , this
field is required.
read_capacity - (Optional) The number of read units for this table. If the billing_mode is PROVISIONED , this field
is required.
attribute - (Required) List of nested attribute definitions. Only required for hash_key and range_key attributes.
Each attribute has two properties:
name - (Required) The name of the attribute
type - (Required) Attribute type, which must be a scalar type: S , N , or B for (S)tring, (N)umber or (B)inary data
ttl - (Optional) Defines ttl, has two properties, and can only be specified once:
enabled - (Required) Indicates whether ttl is enabled (true) or disabled (false).
attribute_name - (Required) The name of the table attribute to store the TTL timestamp in.
local_secondary_index - (Optional, Forces new resource) Describe an LSI on the table; these can only be allocated
at creation so you cannot change this definition after you have created the resource.
global_secondary_index - (Optional) Describe a GSI for the table; subject to the normal limits on the number of
GSIs, projected attributes, etc.
stream_enabled - (Optional) Indicates whether Streams are to be enabled (true) or disabled (false).
stream_view_type - (Optional) When an item in the table is modified, StreamViewType determines what information
is written to the table's stream. Valid values are KEYS_ONLY , NEW_IMAGE , OLD_IMAGE , NEW_AND_OLD_IMAGES .
server_side_encryption - (Optional) Encryption at rest options. AWS DynamoDB tables are automatically encrypted
at rest with an AWS owned Customer Master Key if this argument isn't specified.
Timeouts
The timeouts block allows you to specify timeouts
(https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
update - (Defaults to 60 mins) Used when updating the table configuration and reset for each individual Global
Secondary Index update
Nested fields
local_secondary_index
projection_type - (Required) One of ALL , INCLUDE or KEYS_ONLY where ALL projects every attribute into the
index, KEYS_ONLY projects just the hash and range key into the index, and INCLUDE projects only the keys specified
in the non_key_attributes parameter.
non_key_attributes - (Optional) Only required with INCLUDE as a projection type; a list of attributes to project into
the index. These do not need to be defined as attributes on the table.
global_secondary_index
write_capacity - (Optional) The number of write units for this index. Must be set if billing_mode is set to
PROVISIONED.
read_capacity - (Optional) The number of read units for this index. Must be set if billing_mode is set to
PROVISIONED.
hash_key - (Required) The name of the hash key in the index; must be defined as an attribute in the resource.
projection_type - (Required) One of ALL , INCLUDE or KEYS_ONLY where ALL projects every attribute into the
index, KEYS_ONLY projects just the hash and range key into the index, and INCLUDE projects only the keys specified
in the non_key_attributes parameter.
non_key_attributes - (Optional) Only required with INCLUDE as a projection type; a list of attributes to project into
the index. These do not need to be defined as attributes on the table.
server_side_encryption
enabled - (Required) Whether or not to enable encryption at rest using an AWS managed Customer Master Key. If
enabled is false then server-side encryption is set to AWS owned CMK (shown as DEFAULT in the AWS console). If
enabled is true then server-side encryption is set to AWS managed CMK (shown as KMS in the AWS console). The
AWS KMS documentation (https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html) explains the
difference between AWS owned and AWS managed CMKs.
point_in_time_recovery
enabled - (Required) Whether to enable point-in-time recovery - note that it can take up to 10 minutes to enable for
new tables. If the point_in_time_recovery block is not provided then this defaults to false .
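A sketch of how the server_side_encryption and point_in_time_recovery blocks fit into a table definition (the table, key, and attribute names are illustrative):
resource "aws_dynamodb_table" "example" {
  name         = "example"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }

  server_side_encryption {
    enabled = true
  }

  point_in_time_recovery {
    enabled = true
  }
}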
The DynamoDB API expects attribute structure (name and type) to be passed along when creating or updating GSI/LSIs or
creating the initial table. In these cases it expects the Hash / Range keys to be provided; because these get re-used in
numerous places (i.e. the table's range key could be a part of one or more GSIs), they are stored on the table object to
prevent duplication and increase consistency. If you add attributes here that are not used in these scenarios it can cause an
infinite loop in planning.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
stream_arn - The ARN of the Table Stream. Only available when stream_enabled = true
stream_label - A timestamp, in ISO 8601 format, for this stream. Note that this timestamp is not a unique identifier
for the stream on its own. However, the combination of AWS customer ID, table name and this field is guaranteed to
be unique. It can be used for creating CloudWatch Alarms. Only available when stream_enabled = true
Import
DynamoDB tables can be imported using the name , e.g.
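For instance, using the table from the example above:
$ terraform import aws_dynamodb_table.basic-dynamodb-table GameScores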
Note: This resource is not meant to be used for managing large amounts of data in your table; it is not designed to
scale. You should perform regular backups of all data in the table; see AWS docs for more
(https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BackupRestore.html).
Example Usage
item = <<ITEM
{
"exampleHashKey": {"S": "something"},
"one": {"N": "11111"},
"two": {"N": "22222"},
"three": {"N": "33333"},
"four": {"N": "44444"}
}
ITEM
}
attribute {
name = "exampleHashKey"
type = "S"
}
}
Argument Reference
The following arguments are supported:
hash_key - (Required) Hash key to use for lookups and identification of the item
range_key - (Optional) Range key to use for lookups and identification of the item. Required if there is a range key
defined in the table.
item - (Required) JSON representation of a map of attribute name/value pairs, one for each attribute. Only the
primary key attributes are required; you can optionally provide other attribute name-value pairs for the item.
Attributes Reference
All of the arguments above are exported as attributes.
Import
DynamoDB table items cannot be imported.
Resource: aws_ebs_default_kms_key
Provides a resource to manage the default customer master key (CMK) that your AWS account uses to encrypt EBS volumes.
Your AWS account has an AWS-managed default CMK that is used for encrypting an EBS volume when no CMK is specified in
the API call that creates the volume. By using the aws_ebs_default_kms_key resource, you can specify a customer-
managed CMK to use in place of the AWS-managed default CMK.
NOTE: Creating an aws_ebs_default_kms_key resource does not enable default EBS encryption. Use the
aws_ebs_encryption_by_default resource (/docs/providers/aws/r/ebs_encryption_by_default.html) to enable default EBS
encryption.
NOTE: Destroying this resource will reset the default CMK to the account's AWS-managed default CMK for EBS.
Example Usage
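A minimal sketch, assuming a customer-managed KMS key defined in the same configuration (the resource names are illustrative):
resource "aws_kms_key" "example" {
  description = "Customer-managed key for EBS default encryption"
}

resource "aws_ebs_default_kms_key" "example" {
  key_arn = "${aws_kms_key.example.arn}"
}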
Argument Reference
The following arguments are supported:
key_arn - (Required, ForceNew) The ARN of the AWS Key Management Service (AWS KMS) customer master key
(CMK) to use to encrypt the EBS volume.
Import
The EBS default KMS CMK can be imported with the KMS key ARN, e.g.
Example Usage
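A minimal sketch enabling default EBS encryption for the account in the current region:
resource "aws_ebs_encryption_by_default" "example" {
  enabled = true
}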
Argument Reference
The following arguments are supported:
enabled - (Optional) Whether or not default EBS encryption is enabled. Valid values are true or false . Defaults to
true .
Resource: aws_ebs_snapshot_copy
Creates a Snapshot of a snapshot.
Example Usage
tags = {
Name = "HelloWorld"
}
}
tags = {
Name = "HelloWorld_snap"
}
}
tags = {
Name = "HelloWorld_copy_snap"
}
}
Argument Reference
The following arguments are supported:
Attributes Reference
The following attributes are exported:
Example Usage
tags = {
Name = "HelloWorld"
}
}
tags = {
Name = "HelloWorld_snap"
}
}
Argument Reference
The following arguments are supported:
Timeouts
aws_ebs_snapshot provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
tags = {
Name = "HelloWorld"
}
}
Argument Reference
The following arguments are supported:
type - (Optional) The type of EBS volume. Can be "standard", "gp2", "io1", "sc1" or "st1" (Default: "standard").
kms_key_id - (Optional) The ARN for the KMS encryption key. When specifying kms_key_id , encrypted needs to be
set to true.
NOTE: When changing the size , iops or type of an instance, there are considerations
(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/considerations.html) to be aware of that Amazon has written
about.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The volume ID (e.g. vol-59fcb34e).
Import
EBS Volumes can be imported using the id , e.g.
Example Usage
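A minimal sketch of reserving capacity for a single instance in one Availability Zone (the instance type, count, and zone are illustrative):
resource "aws_ec2_capacity_reservation" "default" {
  instance_type     = "t2.micro"
  instance_platform = "Linux/UNIX"
  availability_zone = "eu-west-1a"
  instance_count    = 1
}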
Argument Reference
The following arguments are supported:
availability_zone - (Required) The Availability Zone in which to create the Capacity Reservation.
ebs_optimized - (Optional) Indicates whether the Capacity Reservation supports EBS-optimized instances.
end_date - (Optional) The date and time at which the Capacity Reservation expires. When a Capacity Reservation
expires, the reserved capacity is released and you can no longer launch instances into it. Valid values: RFC3339 time
string (https://tools.ietf.org/html/rfc3339#section-5.8) ( YYYY-MM-DDTHH:MM:SSZ )
end_date_type - (Optional) Indicates the way in which the Capacity Reservation ends. Specify either unlimited or
limited .
ephemeral_storage - (Optional) Indicates whether the Capacity Reservation supports instances with temporary,
block-level storage.
instance_match_criteria - (Optional) Indicates the type of instance launches that the Capacity Reservation accepts.
Specify either open or targeted .
instance_platform - (Required) The type of operating system for which to reserve capacity. Valid options are
Linux/UNIX , Red Hat Enterprise Linux , SUSE Linux , Windows , Windows with SQL Server , Windows with
SQL Server Enterprise , Windows with SQL Server Standard or Windows with SQL Server Web .
tenancy - (Optional) Indicates the tenancy of the Capacity Reservation. Specify either default or dedicated .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Capacity Reservations can be imported using the id , e.g.
Example Usage
authentication_options {
type = "certificate-authentication"
root_certificate_chain_arn = "${aws_acm_certificate.root_cert.arn}"
}
connection_log_options {
enabled = true
cloudwatch_log_group = "${aws_cloudwatch_log_group.lg.name}"
cloudwatch_log_stream = "${aws_cloudwatch_log_stream.ls.name}"
}
}
Argument Reference
The following arguments are supported:
client_cidr_block - (Required) The IPv4 address range, in CIDR notation, from which to assign client IP addresses.
The address range cannot overlap with the local CIDR of the VPC in which the associated subnet is located, or the
routes that you add manually. The address range cannot be changed after the Client VPN endpoint has been created.
The CIDR block should be /22 or greater.
dns_servers - (Optional) Information about the DNS servers to be used for DNS resolution. A Client VPN endpoint
can have up to two DNS servers. If no DNS server is specified, the DNS address of the VPC that is to be associated with
Client VPN endpoint is used as the DNS server.
split_tunnel - (Optional) Indicates whether split-tunnel is enabled on VPN endpoint. Default value is false .
type - (Required) The type of client authentication to be used. Specify certificate-authentication to use
certificate-based authentication, or directory-service-authentication to use Active Directory authentication.
active_directory_id - (Optional) The ID of the Active Directory to be used for authentication if type is directory-
service-authentication .
root_certificate_chain_arn - (Optional) The ARN of the client certificate. The certificate must be signed by a
certificate authority (CA) and it must be provisioned in AWS Certificate Manager (ACM). Only necessary when type is set
to certificate-authentication .
cloudwatch_log_stream - (Optional) The name of the CloudWatch Logs log stream to which the connection data is
published.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
dns_name - The DNS name to be used by clients when establishing their VPN session.
Import
AWS Client VPN endpoints can be imported using the id value found via aws ec2 describe-client-vpn-endpoints , e.g.
Example Usage
Argument Reference
The following arguments are supported:
subnet_id - (Required) The ID of the subnet to associate with the Client VPN endpoint.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
security_groups - The IDs of the security groups applied to the target network association.
vpc_id - The ID of the VPC in which the target network (subnet) is located.
Resource: aws_ec2_fleet
Provides a resource to manage EC2 Fleets.
Example Usage
target_capacity_specification {
default_target_capacity_type = "spot"
total_target_capacity = 5
}
}
Argument Reference
The following arguments are supported:
launch_template_config - (Required) Nested argument containing EC2 Launch Template configurations. Defined
below.
target_capacity_specification - (Required) Nested argument containing target capacity configurations. Defined
below.
on_demand_options - (Optional) Nested argument containing On-Demand configurations. Defined below.
replace_unhealthy_instances - (Optional) Whether EC2 Fleet should replace unhealthy instances. Defaults to
false .
spot_options - (Optional) Nested argument containing Spot configurations. Defined below.
tags - (Optional) Map of Fleet tags. To tag instances at launch, specify the tags in the Launch Template.
terminate_instances - (Optional) Whether to terminate instances for an EC2 Fleet if it is deleted successfully.
Defaults to false .
type - (Optional) The type of request. Indicates whether the EC2 Fleet only requests the target capacity, or also
attempts to maintain it. Valid values: maintain , request . Defaults to maintain .
launch_template_config
launch_template_specification - (Required) Nested argument containing EC2 Launch Template to use. Defined
below.
override - (Optional) Nested argument(s) containing parameters to override the same parameters in the Launch
Template. Defined below.
launch_template_specification
override
Example:
launch_template_config {
override {
instance_type = "m4.xlarge"
weighted_capacity = 1
}
override {
instance_type = "m4.2xlarge"
weighted_capacity = 2
}
}
}
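The launch_template_specification block itself typically points at a launch template managed elsewhere in the configuration; a sketch, assuming an aws_launch_template named example:
launch_template_config {
  launch_template_specification {
    launch_template_id = "${aws_launch_template.example.id}"
    version            = "${aws_launch_template.example.latest_version}"
  }
}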
max_price - (Optional) Maximum price per unit hour that you are willing to pay for a Spot Instance.
priority - (Optional) Priority for the launch template override. If on_demand_options allocation_strategy is set
to prioritized , EC2 Fleet uses priority to determine which launch template override to use first in fulfilling On-
Demand capacity. The highest priority is launched first. The lower the number, the higher the priority. If no number is
set, the launch template override has the lowest priority. Valid values are whole numbers starting at 0.
on_demand_options
allocation_strategy - (Optional) The order of the launch template overrides to use in fulfilling On-Demand
capacity. Valid values: lowestPrice , prioritized . Default: lowestPrice .
spot_options
allocation_strategy - (Optional) How to allocate the target capacity across the Spot pools. Valid values:
diversified , lowestPrice . Default: lowestPrice .
instance_pools_to_use_count - (Optional) Number of Spot pools across which to allocate your target Spot
capacity. Valid only when Spot allocation_strategy is set to lowestPrice . Default: 1 .
target_capacity_specification
default_target_capacity_type - (Required) Default target capacity type. Valid values: on-demand , spot .
Attribute Reference
In addition to all arguments above, the following attributes are exported:
id - Fleet identifier
Timeouts
aws_ec2_fleet provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
update - (Default 10m ) How long to wait for a fleet to be modified.
delete - (Default 10m ) How long to wait for a fleet to be deleted. If terminate_instances is true , how long to wait
for instances to terminate.
Import
aws_ec2_fleet can be imported by using the Fleet identifier, e.g.
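For instance, with an illustrative Fleet identifier:
$ terraform import aws_ec2_fleet.example fleet-b9b55d27-c5fc-41ac-a6f3-48fcc91f080c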
Example Usage
Argument Reference
The following arguments are supported:
amazon_side_asn - (Optional) Private Autonomous System Number (ASN) for the Amazon side of a BGP session. The
range is 64512 to 65534 for 16-bit ASNs and 4200000000 to 4294967294 for 32-bit ASNs. Default value: 64512 .
dns_support - (Optional) Whether DNS support is enabled. Valid values: disable , enable . Default value: enable .
vpn_ecmp_support - (Optional) Whether VPN Equal Cost Multipath Protocol support is enabled. Valid values:
disable , enable . Default value: enable .
Attribute Reference
In addition to all arguments above, the following attributes are exported:
owner_id - Identifier of the AWS account that owns the EC2 Transit Gateway
Example Usage
Standard usage
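A sketch of a standard route pointing at a VPC attachment (the referenced resource names are illustrative):
resource "aws_ec2_transit_gateway_route" "example" {
  destination_cidr_block         = "0.0.0.0/0"
  transit_gateway_attachment_id  = "${aws_ec2_transit_gateway_vpc_attachment.example.id}"
  transit_gateway_route_table_id = "${aws_ec2_transit_gateway.example.association_default_route_table_id}"
}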
Blackhole route
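A sketch of a blackhole route that drops matching traffic (resource names are illustrative):
resource "aws_ec2_transit_gateway_route" "example" {
  destination_cidr_block         = "0.0.0.0/0"
  blackhole                      = true
  transit_gateway_route_table_id = "${aws_ec2_transit_gateway.example.association_default_route_table_id}"
}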
Argument Reference
The following arguments are supported:
destination_cidr_block - (Required) IPv4 CIDR range used for destination matches. Routing decisions are based on
the most specific match.
blackhole - (Optional) Indicates whether to drop traffic that matches this route (default to false ).
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to all arguments above, the following attributes are exported:
id - EC2 Transit Gateway Route Table identifier combined with EC2 Transit Gateway Attachment identifier
Import
aws_ec2_transit_gateway_route_table_association can be imported by using the EC2 Transit Gateway Route Table
identifier, an underscore, and the EC2 Transit Gateway Attachment identifier, e.g.
Example Usage
Argument Reference
The following arguments are supported:
tags - (Optional) Key-value tags for the EC2 Transit Gateway Route Table.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
default_association_route_table - Boolean whether this is the default association route table for the EC2 Transit
Gateway.
default_propagation_route_table - Boolean whether this is the default propagation route table for the EC2
Transit Gateway.
Import
aws_ec2_transit_gateway_route_table can be imported by using the EC2 Transit Gateway Route Table identifier, e.g.
Example Usage
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to all arguments above, the following attributes are exported:
id - EC2 Transit Gateway Route Table identifier combined with EC2 Transit Gateway Attachment identifier
Import
aws_ec2_transit_gateway_route_table_propagation can be imported by using the EC2 Transit Gateway Route Table
identifier, an underscore, and the EC2 Transit Gateway Attachment identifier, e.g.
When a cross-account (requester's AWS account differs from the accepter's AWS account) EC2 Transit Gateway VPC
Attachment is created, an EC2 Transit Gateway VPC Attachment resource is automatically created in the accepter's account.
The requester can use the aws_ec2_transit_gateway_vpc_attachment resource to manage its side of the connection and
the accepter can use the aws_ec2_transit_gateway_vpc_attachment_accepter resource to "adopt" its side of the
connection into management.
Example Usage
tags = {
Name = "Example cross-account attachment"
}
}
A full example of how to create a Transit Gateway in one AWS account, share it with a second AWS account, and
attach a VPC in the second account to the Transit Gateway via the aws_ec2_transit_gateway_vpc_attachment and
aws_ec2_transit_gateway_vpc_attachment_accepter resources can be found in the ./examples/transit-gateway-
cross-account-vpc-attachment directory within the Github Repository (https://github.com/terraform-
providers/terraform-provider-aws/tree/master/examples/transit-gateway-cross-account-vpc-attachment).
Argument Reference
The following arguments are supported:
tags - (Optional) Key-value tags for the EC2 Transit Gateway VPC Attachment.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
vpc_owner_id - Identifier of the AWS account that owns the EC2 VPC.
Import
aws_ec2_transit_gateway_vpc_attachment_accepter can be imported by using the EC2 Transit Gateway Attachment
identifier, e.g.
Example Usage
A full example of how to create a Transit Gateway in one AWS account, share it with a second AWS account, and attach a VPC
in the second account to the Transit Gateway via the aws_ec2_transit_gateway_vpc_attachment and
aws_ec2_transit_gateway_vpc_attachment_accepter resources can be found in the ./examples/transit-gateway-
cross-account-vpc-attachment directory within the Github Repository (https://github.com/terraform-
providers/terraform-provider-aws/tree/master/examples/transit-gateway-cross-account-vpc-attachment).
Argument Reference
The following arguments are supported:
dns_support - (Optional) Whether DNS support is enabled. Valid values: disable , enable . Default value: enable .
ipv6_support - (Optional) Whether IPv6 support is enabled. Valid values: disable , enable . Default value:
disable .
tags - (Optional) Key-value tags for the EC2 Transit Gateway VPC Attachment.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
id - EC2 Transit Gateway Attachment identifier
vpc_owner_id - Identifier of the AWS account that owns the EC2 VPC.
Import
aws_ec2_transit_gateway_vpc_attachment can be imported by using the EC2 Transit Gateway Attachment identifier, e.g.
NOTE: Only one aws_ecr_lifecycle_policy resource can be used with the same ECR repository. To apply multiple
rules, they must be combined in the policy JSON.
NOTE: The AWS ECR API seems to reorder rules based on rulePriority . If you define multiple rules that are not
sorted in ascending rulePriority order in the Terraform code, the resource will be flagged for recreation every
terraform plan .
Example Usage
policy = <<EOF
{
"rules": [
{
"rulePriority": 1,
"description": "Expire images older than 14 days",
"selection": {
"tagStatus": "untagged",
"countType": "sinceImagePushed",
"countUnit": "days",
"countNumber": 14
},
"action": {
"type": "expire"
}
}
]
}
EOF
}
policy = <<EOF
{
"rules": [
{
"rulePriority": 1,
"description": "Keep last 30 images",
"selection": {
"tagStatus": "tagged",
"tagPrefixList": ["v"],
"countType": "imageCountMoreThan",
"countNumber": 30
},
"action": {
"type": "expire"
}
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
policy - (Required) The policy document. This is a JSON formatted string. See more details about Policy Parameters
(http://docs.aws.amazon.com/AmazonECR/latest/userguide/LifecyclePolicies.html#lifecycle_policy_parameters) in the
official AWS docs. For more information about building IAM policy documents with Terraform, see the AWS IAM Policy
Document Guide (/docs/providers/aws/guides/iam-policy-documents.html).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
ECR Lifecycle Policy can be imported using the name of the repository, e.g.
$ terraform import aws_ecr_lifecycle_policy.example tf-example
Resource: aws_ecr_repository
Provides an Elastic Container Registry Repository.
Example Usage
Argument Reference
The following arguments are supported:
image_tag_mutability - (Optional) The tag mutability setting for the repository. Must be one of: MUTABLE or
IMMUTABLE . Defaults to MUTABLE .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_ecr_repository provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration
options:
Import
ECR Repositories can be imported using the name , e.g.
$ terraform import aws_ecr_repository.service test-service
Resource: aws_ecr_repository_policy
Provides an Elastic Container Registry Repository Policy.
Example Usage
policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "new policy",
"Effect": "Allow",
"Principal": "*",
"Action": [
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:PutImage",
"ecr:InitiateLayerUpload",
"ecr:UploadLayerPart",
"ecr:CompleteLayerUpload",
"ecr:DescribeRepositories",
"ecr:GetRepositoryPolicy",
"ecr:ListImages",
"ecr:DeleteRepository",
"ecr:BatchDeleteImage",
"ecr:SetRepositoryPolicy",
"ecr:DeleteRepositoryPolicy"
]
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
ECR Repository Policy can be imported using the repository name, e.g.
Example Usage
Argument Reference
The following arguments are supported:
name - (Required) The name of the cluster (up to 255 letters, numbers, hyphens, and underscores)
setting - (Optional) Configuration block(s) with cluster settings. For example, this can be used to enable CloudWatch
Container Insights for a cluster. Defined below.
setting
The setting configuration block supports the following:
name - (Required) Name of the setting to manage. Valid values: containerInsights .
value - (Required) The value to assign to the setting. Valid values are enabled and disabled .
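A sketch of enabling CloudWatch Container Insights through a setting block (the cluster name is illustrative):
resource "aws_ecs_cluster" "example" {
  name = "example"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}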
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arn - The Amazon Resource Name (ARN) that identifies the cluster
Import
ECS clusters can be imported using the name , e.g.
Note: To prevent a race condition during service deletion, make sure to set depends_on to the related
aws_iam_role_policy ; otherwise, the policy may be destroyed too soon and the ECS service will then get stuck in the
DRAINING state.
Provides an ECS service - effectively a task that is expected to run until an error occurs or a user terminates it (typically a
webserver or a database).
Example Usage
ordered_placement_strategy {
type = "binpack"
field = "cpu"
}
load_balancer {
target_group_arn = "${aws_lb_target_group.foo.arn}"
container_name = "mongo"
container_port = 8080
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-west-2a, us-west-2b]"
}
}
desired_count = 2
lifecycle {
ignore_changes = ["desired_count"]
}
}
Argument Reference
The following arguments are supported:
name - (Required) The name of the service (up to 255 letters, numbers, hyphens, and underscores)
task_definition - (Required) The family and revision ( family:revision ) or full ARN of the task definition that you
want to run in your service.
desired_count - (Optional) The number of instances of the task definition to place and keep running. Defaults to 0.
Do not specify if using the DAEMON scheduling strategy.
launch_type - (Optional) The launch type on which to run your service. The valid values are EC2 and FARGATE .
Defaults to EC2 .
platform_version - (Optional) The platform version on which to run your service. Only applicable for launch_type
set to FARGATE . Defaults to LATEST . More information about Fargate platform versions can be found in the AWS ECS
User Guide (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/platform_versions.html).
scheduling_strategy - (Optional) The scheduling strategy to use for the service. The valid values are REPLICA and
DAEMON . Defaults to REPLICA . Note that Fargate tasks do not support the DAEMON scheduling strategy
(https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduling_tasks.html).
iam_role - (Optional) ARN of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf.
This parameter is required if you are using a load balancer with your service, but only if your task definition does not
use the awsvpc network mode. If using awsvpc network mode, do not specify this role. If your account has already
created the Amazon ECS service-linked role, that role is used by default for your service unless you specify a role here.
deployment_controller - (Optional) Configuration block containing deployment controller configuration. Defined
below.
deployment_maximum_percent - (Optional) The upper limit (as a percentage of the service's desiredCount) of the
number of running tasks that can be running in a service during a deployment. Not valid when using the DAEMON
scheduling strategy.
deployment_minimum_healthy_percent - (Optional) The lower limit (as a percentage of the service's desiredCount)
of the number of running tasks that must remain running and healthy in a service during a deployment.
enable_ecs_managed_tags - (Optional) Specifies whether to enable Amazon ECS managed tags for the tasks within
the service.
propagate_tags - (Optional) Specifies whether to propagate the tags from the task definition or the service to the
tasks. The valid values are SERVICE and TASK_DEFINITION .
ordered_placement_strategy - (Optional) Service level strategy rules that are taken into consideration during task
placement. List from top to bottom in order of precedence. The maximum number of ordered_placement_strategy
blocks is 5 . Defined below.
health_check_grace_period_seconds - (Optional) Seconds to ignore failing load balancer health checks on newly
instantiated tasks to prevent premature shutdown, up to 2147483647. Only valid for services configured to use load
balancers.
placement_constraints - (Optional) rules that are taken into consideration during task placement. Maximum
number of placement_constraints is 10 . Defined below.
network_configuration - (Optional) The network configuration for the service. This parameter is required for task
definitions that use the awsvpc network mode to receive their own Elastic Network Interface, and it is not supported
for other network modes.
service_registries - (Optional) The service discovery registries for the service. The maximum number of
service_registries blocks is 1 .
deployment_controller
The deployment_controller configuration block supports the following:
type - (Optional) Type of deployment controller. Valid values: CODE_DEPLOY , ECS . Default: ECS .
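For example, a service using the CodeDeploy deployment controller might include:
deployment_controller {
  type = "CODE_DEPLOY"
}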
load_balancer
load_balancer supports the following:
elb_name - (Required for ELB Classic) The name of the ELB (Classic) to associate with the service.
target_group_arn - (Required for ALB/NLB) The ARN of the Load Balancer target group to associate with the service.
container_name - (Required) The name of the container to associate with the load balancer (as it appears in a
container definition).
container_port - (Required) The port on the container to associate with the load balancer.
Version note: Multiple load_balancer configuration block support was added in Terraform AWS Provider version
2.22.0. This allows configuration of ECS service support for multiple target groups (https://aws.amazon.com/about-
aws/whats-new/2019/07/amazon-ecs-services-now-support-multiple-load-balancer-target-groups/).
ordered_placement_strategy
ordered_placement_strategy supports the following:
type - (Required) The type of placement strategy. Must be one of: binpack , random , or spread
field - (Optional) For the spread placement strategy, valid values are instanceId (or host , which has the same
effect), or any platform or custom attribute that is applied to a container instance. For the binpack type, valid values
are memory and cpu . For the random type, this attribute is not needed. For more information, see Placement
Strategy (https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_PlacementStrategy.html).
Note: for spread , host and instanceId will be normalized, by AWS, to be instanceId . This means the state file will
show instanceId but your config will differ if you use host .
placement_constraints
placement_constraints support the following:
type - (Required) The type of constraint. The only valid values at this time are memberOf and distinctInstance .
expression - (Optional) Cluster Query Language expression to apply to the constraint. Does not need to be specified
for the distinctInstance type. For more information, see Cluster Query Language in the Amazon EC2 Container
Service Developer Guide (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-query-
language.html).
network_configuration
network_configuration support the following:
security_groups - (Optional) The security groups associated with the task or service. If you do not specify a security
group, the default security group for the VPC is used.
assign_public_ip - (Optional) Assign a public IP address to the ENI (Fargate launch type only). Valid values are
true or false . Default false .
For more information, see Task Networking (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-
networking.html)
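A sketch of a network_configuration block for an awsvpc-mode service (the subnets argument and the referenced subnet and security group resources are assumptions based on typical usage):
network_configuration {
  subnets          = ["${aws_subnet.example.id}"]
  security_groups  = ["${aws_security_group.example.id}"]
  assign_public_ip = false
}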
service_registries
service_registries support the following:
registry_arn - (Required) The ARN of the Service Registry. The currently supported service registry is Amazon Route
53 Auto Naming Service( aws_service_discovery_service ). For more information, see Service
(https://docs.aws.amazon.com/Route53/latest/APIReference/API_autonaming_Service.html)
port - (Optional) The port value used if your Service Discovery service specified an SRV record.
container_port - (Optional) The port value, already specified in the task definition, to be used for your service
discovery service.
container_name - (Optional) The container name value, already specified in the task definition, to be used for your
service discovery service.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
cluster - The Amazon Resource Name (ARN) of cluster which the service runs on
Import
ECS services can be imported using the name together with ecs cluster name , e.g.
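For instance, with illustrative cluster and service names:
$ terraform import aws_ecs_service.imported cluster-name/service-name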
Example Usage
volume {
name = "service-storage"
host_path = "/ecs/service-storage"
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-west-2a, us-west-2b]"
}
}
The referenced task-definitions/service.json file contains a valid JSON document, which is shown below, and its
content is going to be passed directly into the container_definitions attribute as a string. Please note that this example
contains only a small subset of the available parameters.
[
  {
    "name": "first",
    "image": "service-first",
    "cpu": 10,
    "memory": 512,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  },
  {
    "name": "second",
    "image": "service-second",
    "cpu": 10,
    "memory": 256,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 443,
        "hostPort": 443
      }
    ]
  }
]
proxy_configuration {
type = "APPMESH"
container_name = "applicationContainerName"
properties = {
AppPorts = "8080"
EgressIgnoredIPs = "169.254.170.2,169.254.169.254"
IgnoredUID = "1337"
ProxyEgressPort = 15001
ProxyIngressPort = 15000
}
}
}
Argument Reference
Top-Level Arguments
family - (Required) A unique name for your task definition.
NOTE: Proper escaping is required for JSON field values containing quotes ( " ) such as environment values. If directly
setting the JSON, they should be escaped as \" in the JSON, e.g. "value": "I \"love\" escaped quotes" . If using
a Terraform variable value, they should be escaped as \\\" in the variable, e.g. value = "I \\\"love\\\" escaped
quotes" in the variable and "value": "${var.myvariable}" in the JSON.
task_role_arn - (Optional) The ARN of IAM role that allows your Amazon ECS container task to make calls to other
AWS services.
execution_role_arn - (Optional) The Amazon Resource Name (ARN) of the task execution role that the Amazon ECS
container agent and the Docker daemon can assume.
network_mode - (Optional) The Docker networking mode to use for the containers in the task. The valid values are
none , bridge , awsvpc , and host .
ipc_mode - (Optional) The IPC resource namespace to be used for the containers in the task. The valid values are
host , task , and none .
pid_mode - (Optional) The process namespace to use for the containers in the task. The valid values are host and
task .
volume - (Optional) A set of volume blocks that containers in your task may use.
placement_constraints - (Optional) A set of placement constraints rules that are taken into consideration during
task placement. Maximum number of placement_constraints is 10 .
cpu - (Optional) The number of cpu units used by the task. If the requires_compatibilities is FARGATE this field
is required.
memory - (Optional) The amount (in MiB) of memory used by the task. If the requires_compatibilities is
FARGATE this field is required.
requires_compatibilities - (Optional) A set of launch types required by the task. The valid values are EC2 and
FARGATE .
proxy_configuration - (Optional) The proxy configuration details for the App Mesh proxy.
volume
name - (Required) The name of the volume. This name is referenced in the sourceVolume parameter of container
definition in the mountPoints section.
host_path - (Optional) The path on the host container instance that is presented to the container. If not set, ECS will
create a nonpersistent data volume that starts empty and is deleted after the task has finished.
For more information, see Specifying a Docker volume in your Task Definition Developer Guide
(https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-volumes.html#specify-volume-config)
docker_volume_configuration
scope - (Optional) The scope for the Docker volume, which determines its lifecycle, either task or shared . Docker
volumes that are scoped to a task are automatically provisioned when the task starts and destroyed when the task
stops. Docker volumes that are scoped as shared persist after the task stops.
autoprovision - (Optional) If this value is true , the Docker volume is created if it does not already exist. Note: This
field is only used if the scope is shared .
driver - (Optional) The Docker volume driver to use. The driver value must match the driver name provided by
Docker because it is used for task placement.
Example Usage:
volume {
name = "service-storage"
docker_volume_configuration {
scope = "shared"
autoprovision = true
}
}
}
placement_constraints
type - (Required) The type of constraint. Use memberOf to restrict selection to a group of valid candidates. Note that
distinctInstance is not supported in task definitions.
expression - (Optional) Cluster Query Language expression to apply to the constraint. For more information, see
Cluster Query Language in the Amazon EC2 Container Service Developer Guide
(http://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-query-language.html).
proxy_configuration
container_name - (Required) The name of the container that will serve as the App Mesh proxy.
properties - (Required) The set of network configuration parameters to provide the Container Network Interface
(CNI) plugin, specified as a key-value mapping.
type - (Optional) The proxy type. The default value is APPMESH . The only supported value is APPMESH .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arn - Full ARN of the Task Definition (including both family and revision ).
Import
ECS Task Definitions can be imported via their Amazon Resource Name (ARN):
Example Usage
tags = {
Name = "MyProduct"
}
}
lifecycle_policy {
transition_to_ia = "AFTER_30_DAYS"
}
}
Argument Reference
The following arguments are supported:
creation_token - (Optional) A unique name (a maximum of 64 characters are allowed) used as reference when
creating the Elastic File System to ensure idempotent file system creation. By default generated by Terraform. See
Elastic File System (http://docs.aws.amazon.com/efs/latest/ug/) user guide for more information.
kms_key_id - (Optional) The ARN for the KMS encryption key. When specifying kms_key_id, encrypted needs to be set
to true.
performance_mode - (Optional) The file system performance mode. Can be either "generalPurpose" or "maxIO"
(Default: "generalPurpose" ).
provisioned_throughput_in_mibps - (Optional) The throughput, measured in MiB/s, that you want to provision for
the file system. Only applicable with throughput_mode set to provisioned .
throughput_mode - (Optional) Throughput mode for the file system. Defaults to bursting . Valid values: bursting ,
provisioned . When using provisioned , also set provisioned_throughput_in_mibps .
transition_to_ia - (Optional) Indicates how long it takes to transition files to the IA storage class. Valid values:
AFTER_14_DAYS , AFTER_30_DAYS , AFTER_60_DAYS , or AFTER_90_DAYS .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
dns_name - The DNS name for the filesystem per documented convention
(http://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-dns-name.html).
Import
The EFS file systems can be imported using the id , e.g.
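For instance, with an illustrative file system ID:
$ terraform import aws_efs_file_system.foo fs-6fa144c6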
Example Usage
Argument Reference
The following arguments are supported:
file_system_id - (Required) The ID of the file system for which the mount target is intended.
subnet_id - (Required) The ID of the subnet to add the mount target in.
ip_address - (Optional) The address (within the address range of the specified subnet) at which the file system may
be mounted via the mount target.
security_groups - (Optional) A list of up to 5 VPC security group IDs (that must be for the same VPC as subnet
specified) in effect for the mount target.
Attributes Reference
Note: The dns_name attribute is only useful if the mount target is in a VPC that has support for DNS hostnames
enabled. See Using DNS with Your VPC (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html) and
VPC resource (https://www.terraform.io/docs/providers/aws/r/vpc.html#enable_dns_hostnames) in Terraform for more
information.
dns_name - The DNS name for the given subnet/AZ per documented convention
(http://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-dns-name.html).
network_interface_id - The ID of the network interface that Amazon EFS created when it created the mount target.
Import
The EFS mount targets can be imported using the id , e.g.
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
NOTE: Do not use this resource to associate an EIP to aws_lb or aws_nat_gateway resources. Instead use the
allocation_id available in those resources to allow AWS to manage the association, otherwise you will see
AuthFailure errors.
NOTE: aws_eip_association is useful in scenarios where EIPs are either pre-existing or distributed to customers or
users and therefore cannot be changed.
Example Usage
tags = {
Name = "HelloWorld"
}
}
Argument Reference
The following arguments are supported:
instance_id - (Optional) The ID of the instance. This is required for EC2-Classic. For EC2-VPC, you can specify either
the instance ID or the network interface ID, but not both. The operation fails if you specify an instance ID unless exactly
one network interface is attached.
network_interface_id - (Optional) The ID of the network interface. If the instance has more than one network
interface, you must specify a network interface ID.
private_ip_address - (Optional) The primary or secondary private IP address to associate with the Elastic IP
address. If no private IP address is specified, the Elastic IP address is associated with the primary private IP address.
Attributes Reference
association_id - The ID that represents the association of the Elastic IP address with an instance.
allocation_id - As above
instance_id - As above
network_interface_id - As above
private_ip_address - As above
public_ip - As above
Import
EIP Associations can be imported using their association ID.
Note: EIP may require IGW to exist prior to association. Use depends_on to set an explicit dependency on the IGW.
Note: Do not use network_interface to associate the EIP to aws_lb or aws_nat_gateway resources. Instead use
the allocation_id available in those resources to allow AWS to manage the association, otherwise you will see
AuthFailure errors.
Example Usage
Single EIP associated with an instance:
depends_on = ["aws_internet_gateway.gw"]
}
ami = "ami-5189a661"
instance_type = "t2.micro"
private_ip = "10.0.0.12"
subnet_id = "${aws_subnet.tf_test_subnet.id}"
}
instance = "${aws_instance.foo.id}"
associate_with_private_ip = "10.0.0.12"
depends_on = ["aws_internet_gateway.gw"]
}
Argument Reference
The following arguments are supported:
associate_with_private_ip - (Optional) A user specified primary or secondary private IP address to associate with
the Elastic IP address. If no private IP address is specified, the Elastic IP address is associated with the primary private
IP address.
public_ipv4_pool - (Optional) EC2 IPv4 address pool identifier or amazon . This option is only available for VPC EIPs.
NOTE: You can specify either the instance ID or the network_interface ID, but not both. Including both will not
return an error from the AWS API, but will have undefined behavior. See the relevant AssociateAddress API Call
(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AssociateAddress.html) for more information.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
private_dns - The Private DNS associated with the Elastic IP address (if in VPC).
Note: The resource computes the public_dns and private_dns attributes according to the VPC DNS Guide
(https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-hostnames) as they are not available with
the EC2 API.
Timeouts
aws_eip provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
read - (Default 15 minutes ) How long to wait querying for information about EIPs.
Import
EIPs in a VPC can be imported using their Allocation ID, e.g.
$ terraform import aws_eip.bar eipalloc-00a10e96
EIPs in EC2 Classic can be imported using their Public IP, e.g.
Example Usage
Basic Usage
vpc_config {
subnet_ids = ["${aws_subnet.example1.id}", "${aws_subnet.example2.id}"]
}
}
output "endpoint" {
value = "${aws_eks_cluster.example.endpoint}"
}
output "kubeconfig-certificate-authority-data" {
value = "${aws_eks_cluster.example.certificate_authority.0.data}"
}
condition {
test = "StringEquals"
variable = "${replace(aws_iam_openid_connect_provider.example.url, "https://
/ ", "")}:sub"
values = ["system:serviceaccount:kube-system:aws-node"]
}
principals {
identifiers = ["${aws_iam_openid_connect_provider.example.arn}"]
type = "Federated"
}
}
}
Argument Reference
The following arguments are supported:
role_arn - (Required) The Amazon Resource Name (ARN) of the IAM role that provides permissions for the
Kubernetes control plane to make calls to AWS API operations on your behalf.
vpc_config - (Required) Nested argument for the VPC associated with your cluster. Amazon EKS VPC resources have
specific requirements to work properly with Kubernetes. For more information, see Cluster VPC Considerations
(https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html) and Cluster Security Group Considerations
(https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) in the Amazon EKS User Guide.
Configuration detailed below.
enabled_cluster_log_types - (Optional) A list of the desired control plane logging to enable. For more information,
see Amazon EKS Control Plane Logging (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)
version – (Optional) Desired Kubernetes master version. If you do not specify a value, the latest available version at
resource creation is used and no upgrades will occur except those automatically triggered by EKS. The value must be
configured and increased to upgrade the version when desired. Downgrades are not supported by EKS.
vpc_config
endpoint_private_access - (Optional) Indicates whether or not the Amazon EKS private API server endpoint is
enabled. Default is false .
endpoint_public_access - (Optional) Indicates whether or not the Amazon EKS public API server endpoint is
enabled. Default is true .
security_group_ids – (Optional) List of security group IDs for the cross-account elastic network interfaces that
Amazon EKS creates to use to allow communication between your worker nodes and the Kubernetes control plane.
subnet_ids – (Required) List of subnet IDs. Must be in at least two different availability zones. Amazon EKS creates
cross-account elastic network interfaces in these subnets to allow communication between your worker nodes and the
Kubernetes control plane.
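A minimal sketch of a cluster combining these vpc_config arguments; the IAM role, security group, and subnet references are assumed to be defined elsewhere in the configuration:
resource "aws_eks_cluster" "example" {
  name     = "example"
  role_arn = "${aws_iam_role.example.arn}"

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access  = false
    security_group_ids      = ["${aws_security_group.example.id}"]
    subnet_ids              = ["${aws_subnet.example1.id}", "${aws_subnet.example2.id}"]
  }
}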
Attributes Reference
In addition to all arguments above, the following attributes are exported:
identity - Nested attribute containing identity provider information for your cluster. Only available on Kubernetes
version 1.13 and 1.14 clusters created or upgraded on or after September 3, 2019.
oidc - Nested attribute containing OpenID Connect (https://openid.net/connect/) identity provider information
for the cluster.
status - The status of the EKS cluster. One of CREATING , ACTIVE , DELETING , FAILED .
Timeouts
aws_eks_cluster provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
create - (Default 30 minutes ) How long to wait for the EKS Cluster to be created.
update - (Default 60 minutes ) How long to wait for the EKS Cluster to be updated. Note that the update timeout is
used separately for both version and vpc_config update timeouts.
delete - (Default 15 minutes ) How long to wait for the EKS Cluster to be deleted.
Import
EKS Clusters can be imported using the name , e.g.
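A hypothetical command, using a placeholder cluster name:
$ terraform import aws_eks_cluster.my_cluster my_cluster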
Resource: aws_elasticache_cluster
Provides an ElastiCache Cluster resource, which manages a Memcached cluster or Redis instance.
Note: When you change an attribute, such as node_type , by default it is applied in the next maintenance window.
Because of this, Terraform may report a difference in its planning phase because the actual modification has not yet
taken place. You can use the apply_immediately flag to instruct the service to apply the change immediately. Using
apply_immediately can result in a brief downtime as the server reboots. See the AWS Docs on Modifying an
ElastiCache Cache Cluster (https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Clusters.Modify.html)
for more information.
Example Usage
Memcached Cluster
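A minimal Memcached sketch; the identifier, node type, and parameter group shown are illustrative placeholders:
resource "aws_elasticache_cluster" "example" {
  cluster_id           = "cluster-example"
  engine               = "memcached"
  node_type            = "cache.m4.large"
  num_cache_nodes      = 2
  parameter_group_name = "default.memcached1.4"
  port                 = 11211
}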
Redis Instance
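A minimal single-node Redis sketch; the identifier, node type, engine version, and parameter group shown are illustrative placeholders:
resource "aws_elasticache_cluster" "example" {
  cluster_id           = "cluster-example"
  engine               = "redis"
  node_type            = "cache.m4.large"
  num_cache_nodes      = 1
  parameter_group_name = "default.redis3.2"
  engine_version       = "3.2.10"
  port                 = 6379
}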
Argument Reference
The following arguments are supported:
cluster_id – (Required) Group identifier. ElastiCache converts this name to lowercase
replication_group_id - (Optional) The ID of the replication group to which this cluster should belong. If this
parameter is specified, the cluster is added to the specified replication group as a read replica; otherwise, the cluster is
a standalone primary that is not part of any replication group.
engine – (Required unless replication_group_id is provided) Name of the cache engine to be used for this cache
cluster. Valid values for this parameter are memcached or redis
engine_version – (Optional) Version number of the cache engine to be used. See Describe Cache Engine Versions
(https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-cache-engine-versions.html) in the AWS
Documentation center for supported versions
maintenance_window – (Optional) Specifies the weekly time range for when maintenance on the cache cluster is
performed. The format is ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60
minute period. Example: sun:05:00-sun:09:00
node_type – (Required unless replication_group_id is provided) The compute and memory capacity of the
nodes. See Available Cache Node Types (https://aws.amazon.com/elasticache/details#Available_Cache_Node_Types)
for supported node types
num_cache_nodes – (Required unless replication_group_id is provided) The initial number of cache nodes that
the cache cluster will have. For Redis, this value must be 1. For Memcache, this value must be between 1 and 20. If this
number is reduced on subsequent runs, the highest numbered nodes will be removed.
port – (Optional) The port number on which each of the cache nodes will accept connections. For Memcache the
default is 11211, and for Redis the default port is 6379. Cannot be provided with replication_group_id .
subnet_group_name – (Optional, VPC only) Name of the subnet group to be used for the cache cluster.
security_group_names – (Optional, EC2 Classic only) List of security group names to associate with this cache cluster
security_group_ids – (Optional, VPC only) One or more VPC security groups associated with the cache cluster
apply_immediately - (Optional) Specifies whether any database modifications are applied immediately, or during
the next maintenance window. Default is false . See Amazon ElastiCache Documentation for more information.
(https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyCacheCluster.html) (Available since
v0.6.0)
snapshot_arns – (Optional) A single-element string list containing an Amazon Resource Name (ARN) of a Redis RDB
snapshot file stored in Amazon S3. Example: arn:aws:s3:::my_bucket/snapshot1.rdb
snapshot_name - (Optional) The name of a snapshot from which to restore data into the new node group. Changing
the snapshot_name forces a new resource.
snapshot_window - (Optional, Redis only) The daily time range (in UTC) during which ElastiCache will begin taking a
daily snapshot of your cache cluster. Example: 05:00-09:00
snapshot_retention_limit - (Optional, Redis only) The number of days for which ElastiCache will retain automatic
cache cluster snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, then a snapshot
that was taken today will be retained for 5 days before being deleted. If the value of SnapshotRetentionLimit is set to
zero (0), backups are turned off. Please note that setting a snapshot_retention_limit is not supported on
cache.t1.micro or cache.t2.* cache nodes
notification_topic_arn – (Optional) An Amazon Resource Name (ARN) of an SNS topic to send ElastiCache
notifications to. Example: arn:aws:sns:us-east-1:012345678999:my_sns_topic
az_mode - (Optional, Memcached only) Specifies whether the nodes in this Memcached node group are created in a
single Availability Zone or created across multiple Availability Zones in the cluster's region. Valid values for this
parameter are single-az or cross-az , default is single-az . If you want to choose cross-az , num_cache_nodes
must be greater than 1
availability_zone - (Optional) The Availability Zone for the cache cluster. If you want to create cache nodes in
multi-az, use preferred_availability_zones instead. Default: System chosen Availability Zone.
preferred_availability_zones - (Optional, Memcached only) A list of the Availability Zones in which cache nodes
are created. If you are creating your cluster in an Amazon VPC you can only locate nodes in Availability Zones that are
associated with the subnets in the selected subnet group. The number of Availability Zones listed must equal the value
of num_cache_nodes . If you want all the nodes in the same Availability Zone, use availability_zone instead, or
repeat the Availability Zone multiple times in the list. Default: System chosen Availability Zones. Detecting drift of
existing node availability zone is not currently supported. Updating this argument by itself to migrate existing node
availability zones is not currently supported and will show a perpetual difference.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
cache_nodes - List of node objects including id , address , port and availability_zone . Referenceable e.g. as
${aws_elasticache_cluster.bar.cache_nodes.0.address}
configuration_endpoint - (Memcached only) The configuration endpoint to allow host discovery.
cluster_address - (Memcached only) The DNS name of the cache cluster without the port appended.
Import
ElastiCache Clusters can be imported using the cluster_id , e.g.
$ terraform import aws_elasticache_cluster.my_cluster my_cluster
Resource: aws_elasticache_parameter_group
Provides an ElastiCache parameter group resource.
NOTE: Attempting to remove the reserved-memory parameter when family is set to redis2.6 or redis2.8 may
show a perpetual difference in Terraform due to an Elasticache API limitation. Leave that parameter configured with
any value to workaround the issue.
Example Usage
resource "aws_elasticache_parameter_group" "default" {
  # name and family values assumed for illustration
  name   = "cache-params"
  family = "redis2.8"

  parameter {
name = "activerehashing"
value = "yes"
}
parameter {
name = "min-slaves-to-write"
value = "2"
}
}
Argument Reference
The following arguments are supported:
description - (Optional) The description of the ElastiCache parameter group. Defaults to "Managed by Terraform".
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Resource: aws_elasticache_replication_group
Provides an ElastiCache Replication Group resource.
Note: When you change an attribute, such as engine_version , by default the ElastiCache API applies it in the next
maintenance window. Because of this, Terraform may report a difference in its planning phase because the actual
modification has not yet taken place. You can use the apply_immediately flag to instruct the service to apply the
change immediately. Using apply_immediately can result in a brief downtime as the servers reboot.
Example Usage
Adjusting number_cache_clusters directly. This will attempt to automatically add or remove replicas, but provides no
granular control (e.g. preferred availability zone, cache cluster ID) for the added or removed replicas. This also
currently expects cache cluster IDs in the form of replication_group_id-00# .
Otherwise, for fine-grained control of the underlying cache clusters, they can be added or removed with the
aws_elasticache_cluster resource (/docs/providers/aws/r/elasticache_cluster.html) and its
replication_group_id attribute. In this situation, you will need to utilize the lifecycle configuration block
(/docs/configuration/resources.html) with ignore_changes to prevent perpetual differences during Terraform plan
with the number_cache_clusters attribute.
resource "aws_elasticache_replication_group" "example" {
automatic_failover_enabled = true
availability_zones = ["us-west-2a", "us-west-2b"]
replication_group_id = "tf-rep-group-1"
replication_group_description = "test description"
node_type = "cache.m4.large"
number_cache_clusters = 2
parameter_group_name = "default.redis3.2"
port = 6379
lifecycle {
ignore_changes = ["number_cache_clusters"]
}
}
cluster_id = "tf-rep-group-1-${count.index}"
replication_group_id = "${aws_elasticache_replication_group.example.id}"
}
cluster_mode {
replicas_per_node_group = 1
num_node_groups = 2
}
}
Note: We currently do not support passing a primary_cluster_id in order to create the Replication Group.
Note: Automatic Failover is unavailable for Redis versions earlier than 2.8.6, and unavailable on T1 node types. For T2
node types, it is only available on Redis version 3.2.4 or later with cluster mode enabled. See the High Availability Using
Replication Groups (https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.html) guide for full
details on using Replication Groups.
Argument Reference
The following arguments are supported:
replication_group_id – (Required) The replication group identifier. This parameter is stored as a lowercase string.
number_cache_clusters - (Required for Cluster Mode Disabled) The number of cache clusters (primary and replicas)
this replication group will have. If Multi-AZ is enabled, the value of this parameter must be at least 2. Updates will occur
before other modifications.
node_type - (Required) The compute and memory capacity of the nodes in the node group.
auto_minor_version_upgrade - (Optional) Specifies whether minor engine upgrades will be applied automatically
to the underlying Cache Cluster instances during the maintenance window. Defaults to true .
availability_zones - (Optional) A list of EC2 availability zones in which the replication group's cache clusters will be
created. The order of the availability zones in the list is not important.
engine - (Optional) The name of the cache engine to be used for the clusters in this replication group. e.g. redis
auth_token - (Optional) The password used to access a password protected server. Can be specified only if
transit_encryption_enabled = true .
engine_version - (Optional) The version number of the cache engine to be used for the cache clusters in this
replication group.
parameter_group_name - (Optional) The name of the parameter group to associate with this replication group. If this
argument is omitted, the default cache parameter group for the specified engine is used.
port – (Optional) The port number on which each of the cache nodes will accept connections. For Memcache the
default is 11211, and for Redis the default port is 6379.
subnet_group_name - (Optional) The name of the cache subnet group to be used for the replication group.
security_group_names - (Optional) A list of cache security group names to associate with this replication group.
security_group_ids - (Optional) One or more Amazon VPC security groups associated with this replication group.
Use this parameter only when you are creating a replication group in an Amazon Virtual Private Cloud
snapshot_arns – (Optional) A single-element string list containing an Amazon Resource Name (ARN) of a Redis RDB
snapshot file stored in Amazon S3. Example: arn:aws:s3:::my_bucket/snapshot1.rdb
snapshot_name - (Optional) The name of a snapshot from which to restore data into the new node group. Changing
the snapshot_name forces a new resource.
maintenance_window – (Optional) Specifies the weekly time range for when maintenance on the cache cluster is
performed. The format is ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60
minute period. Example: sun:05:00-sun:09:00
notification_topic_arn – (Optional) An Amazon Resource Name (ARN) of an SNS topic to send ElastiCache
notifications to. Example: arn:aws:sns:us-east-1:012345678999:my_sns_topic
snapshot_window - (Optional, Redis only) The daily time range (in UTC) during which ElastiCache will begin taking a
daily snapshot of your cache cluster. The minimum snapshot window is a 60 minute period. Example: 05:00-09:00
snapshot_retention_limit - (Optional, Redis only) The number of days for which ElastiCache will retain automatic
cache cluster snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, then a snapshot
that was taken today will be retained for 5 days before being deleted. If the value of SnapshotRetentionLimit is set to
zero (0), backups are turned off. Please note that setting a snapshot_retention_limit is not supported on
cache.t1.micro or cache.t2.* cache nodes
apply_immediately - (Optional) Specifies whether any modifications are applied immediately, or during the next
maintenance window. Default is false .
cluster_mode - (Optional) Create a native redis cluster. automatic_failover_enabled must be set to true. Cluster
Mode documented below. Only 1 cluster_mode block is allowed.
replicas_per_node_group - (Required) Specify the number of replica nodes in each node group. Valid values are 0
to 5. Changing this number will force a new resource.
num_node_groups - (Required) Specify the number of node groups (shards) for this Redis replication group. Changing
this number will trigger an online resizing operation before other settings modifications.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
configuration_endpoint_address - The address of the replication group configuration endpoint when cluster
mode is enabled.
primary_endpoint_address - (Redis only) The address of the endpoint for the primary node in the replication group,
if the cluster mode is disabled.
member_clusters - The identifiers of all the nodes that are part of this replication group.
Timeouts
aws_elasticache_replication_group provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
create - (Default 60m ) How long to wait for a replication group to be created.
delete - (Default 40m ) How long to wait for a replication group to be deleted.
update - (Default 40m ) How long to wait for replication group settings to be updated. This is also separately used for
adding/removing replicas and online resize operation completion, if necessary.
Import
ElastiCache Replication Groups can be imported using the replication_group_id , e.g.
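A hypothetical command, using a placeholder replication group ID:
$ terraform import aws_elasticache_replication_group.my_replication_group replication-group-1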
Resource: aws_elasticache_security_group
Provides an ElastiCache Security Group to control access to one or more cache clusters.
NOTE: ElastiCache Security Groups are for use only when working with an ElastiCache cluster outside of a VPC. If you
are using a VPC, see the ElastiCache Subnet Group resource (/docs/providers/aws/r/elasticache_subnet_group.html).
Example Usage
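A minimal sketch pairing the cache security group with an EC2 security group; resource names are placeholders:
resource "aws_security_group" "bar" {
  name = "security-group"
}

resource "aws_elasticache_security_group" "bar" {
  name                 = "elasticache-security-group"
  security_group_names = ["${aws_security_group.bar.name}"]
}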
Argument Reference
The following arguments are supported:
name – (Required) Name for the cache security group. This value is stored as a lowercase string.
description – (Optional) description for the cache security group. Defaults to "Managed by Terraform".
security_group_names – (Required) List of EC2 security group names to be authorized for ingress to the cache
security group
Attributes Reference
In addition to all arguments above, the following attributes are exported:
description
name
security_group_names
Import
ElastiCache Security Groups can be imported by name, e.g.
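A hypothetical command, using a placeholder group name:
$ terraform import aws_elasticache_security_group.my_ec_security_group ec-security-group-1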
Resource: aws_elasticache_subnet_group
Provides an ElastiCache Subnet Group resource.
NOTE: ElastiCache Subnet Groups are only for use when working with an ElastiCache cluster inside of a VPC. If you are
on EC2 Classic, see the ElastiCache Security Group resource (/docs/providers/aws/r/elasticache_security_group.html).
Example Usage
tags = {
Name = "tf-test"
}
}
tags = {
Name = "tf-test"
}
}
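The fragments above show only the tags of the supporting VPC and subnet resources; a minimal subnet group referencing such a subnet might look like this (resource names are assumed):
resource "aws_elasticache_subnet_group" "bar" {
  name       = "tf-test-cache-subnet"
  subnet_ids = ["${aws_subnet.foo.id}"]
}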
Argument Reference
The following arguments are supported:
name – (Required) Name for the cache subnet group. Elasticache converts this name to lowercase.
description – (Optional) Description for the cache subnet group. Defaults to "Managed by Terraform".
subnet_ids – (Required) List of VPC Subnet IDs for the cache subnet group
Attributes Reference
In addition to all arguments above, the following attributes are exported:
description
name
subnet_ids
Import
ElastiCache Subnet Groups can be imported using the name , e.g.
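A hypothetical command, using a placeholder group name:
$ terraform import aws_elasticache_subnet_group.bar tf-test-cache-subnet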
Resource: aws_elastic_beanstalk_application
Provides an Elastic Beanstalk Application Resource. Elastic Beanstalk allows you to deploy and manage applications in
the AWS cloud without worrying about the infrastructure that runs those applications.
This resource creates an application that has one configuration template named default , and no application versions
Example Usage
resource "aws_elastic_beanstalk_application" "tftest" {
  # name and description values assumed for illustration
  name        = "tf-test-name"
  description = "tf-test-desc"

  appversion_lifecycle {
service_role = "${aws_iam_role.beanstalk_service.arn}"
max_count = 128
delete_source_from_s3 = true
}
}
Argument Reference
The following arguments are supported:
name - (Required) The name of the application, must be unique within your account
tags - (Optional) Key-value mapping of tags for the Elastic Beanstalk Application.
Application version lifecycle ( appversion_lifecycle ) supports the following settings. Only one of either max_count or
max_age_in_days can be provided:
service_role - (Required) The ARN of an IAM service role under which the application version is deleted. Elastic
Beanstalk must have permission to assume this role.
delete_source_from_s3 - (Optional) Set to true to delete a version's source bundle from S3 when the application
version is deleted.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arn - The ARN assigned by AWS for this Elastic Beanstalk Application.
Import
Elastic Beanstalk Applications can be imported using the name , e.g.
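A hypothetical command, using a placeholder application name:
$ terraform import aws_elastic_beanstalk_application.tf_test tf-test-name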
Resource: aws_elastic_beanstalk_application_version
Provides an Elastic Beanstalk Application Version Resource.
This resource creates a Beanstalk Application Version that can be deployed to a Beanstalk Environment.
NOTE on Application Version Resource: When using the Application Version resource with multiple Elastic Beanstalk
Environments (/docs/providers/aws/r/elastic_beanstalk_environment.html) it is possible that an error may be returned
when attempting to delete an Application Version while it is still in use by a different environment. To work around this
you can:
Example Usage
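A minimal sketch, assuming an S3 bucket and object that hold the application's source bundle; all names and keys are placeholders:
resource "aws_s3_bucket" "default" {
  bucket = "tftest.applicationversion.bucket"
}

resource "aws_s3_bucket_object" "default" {
  bucket = "${aws_s3_bucket.default.id}"
  key    = "beanstalk/go-v1.zip"
  source = "go-v1.zip"
}

resource "aws_elastic_beanstalk_application" "default" {
  name        = "tf-test-name"
  description = "tf-test-desc"
}

resource "aws_elastic_beanstalk_application_version" "default" {
  name        = "tf-test-version-label"
  application = "tf-test-name"
  description = "application version created by terraform"
  bucket      = "${aws_s3_bucket.default.id}"
  key         = "${aws_s3_bucket_object.default.id}"
}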
Argument Reference
The following arguments are supported:
bucket - (Required) S3 bucket that contains the Application Version source bundle.
force_delete - (Optional) On delete, force an Application Version to be deleted when it may be in use by multiple
Elastic Beanstalk Environments.
tags - Key-value mapping of tags for the Elastic Beanstalk Application Version.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arn - The ARN assigned by AWS for this Elastic Beanstalk Application Version.
Resource: aws_elastic_beanstalk_configuration_template
Provides an Elastic Beanstalk Configuration Template, which is associated with a specific application and is used to
deploy different versions of the application with the same configuration settings.
Example Usage
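A minimal sketch; the application reference and solution stack string are placeholders (see the AWS documentation linked below for current stack names):
resource "aws_elastic_beanstalk_application" "tftest" {
  name        = "tf-test-name"
  description = "tf-test-desc"
}

resource "aws_elastic_beanstalk_configuration_template" "tf_template" {
  name                = "tf-test-template-config"
  application         = "${aws_elastic_beanstalk_application.tftest.name}"
  solution_stack_name = "64bit Amazon Linux 2015.09 v2.0.8 running Go 1.4"
}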
Argument Reference
The following arguments are supported:
application – (Required) name of the application to associate with this configuration template
environment_id – (Optional) The ID of the environment used with this configuration template
setting – (Optional) Option settings to configure the new Environment. These override specific values that are set as
defaults. The format is detailed below in Option Settings
solution_stack_name – (Optional) A solution stack to base your Template off of. Example stacks can be found in the
Amazon API documentation (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html)
Option Settings
The setting field supports the following format:
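A sketch of the shape of a setting block; the namespace, name, and value shown are illustrative placeholders:
setting {
  namespace = "aws:elasticbeanstalk:application:environment"
  name      = "PARAMETER"
  value     = "value"
}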
Attributes Reference
In addition to all arguments above, the following attributes are exported:
name
application
description
environment_id
option_settings
solution_stack_name
Resource: aws_elastic_beanstalk_environment
Provides an Elastic Beanstalk Environment Resource. Elastic Beanstalk allows you to deploy and manage applications in the
AWS cloud without worrying about the infrastructure that runs those applications.
Example Usage
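A minimal sketch; the application reference and solution stack string are placeholders:
resource "aws_elastic_beanstalk_application" "tftest" {
  name        = "tf-test-name"
  description = "tf-test-desc"
}

resource "aws_elastic_beanstalk_environment" "tfenvtest" {
  name                = "tf-test-name"
  application         = "${aws_elastic_beanstalk_application.tftest.name}"
  solution_stack_name = "64bit Amazon Linux 2015.03 v2.0.3 running Go 1.4"
}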
Argument Reference
The following arguments are supported:
name - (Required) A unique name for this Environment. This name is used in the application URL
application – (Required) Name of the application that contains the version to be deployed
cname_prefix - (Optional) Prefix to use for the fully qualified DNS name of the Environment.
tier - (Optional) Elastic Beanstalk Environment tier. Valid values are Worker or WebServer . If tier is left blank
WebServer will be used.
setting – (Optional) Option settings to configure the new Environment. These override specific values that are set as
defaults. The format is detailed below in Option Settings
solution_stack_name – (Optional) A solution stack to base your environment off of. Example stacks can be found in
the Amazon API documentation (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html)
template_name – (Optional) The name of the Elastic Beanstalk Configuration template to use in deployment
version_label - (Optional) The name of the Elastic Beanstalk Application Version to use in deployment.
Option Settings
Some options can be stack-specific, check AWS Docs (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-
options-general.html) for supported options and examples.
setting {
namespace = "aws:ec2:vpc"
name = "VPCId"
value = "vpc-xxxxxxxx"
}
setting {
namespace = "aws:ec2:vpc"
name = "Subnets"
value = "subnet-xxxxxxxx"
}
}
Attributes Reference
In addition to all arguments above, the following attributes are exported:
all_settings – List of all option settings configured in this Environment. These are a combination of default settings
and their overrides from setting in the configuration.
Import
Elastic Beanstalk Environments can be imported using the id , e.g.
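A hypothetical command, using a placeholder environment ID:
$ terraform import aws_elastic_beanstalk_environment.prodenv e-rpqsewtp2j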
Resource: aws_elasticsearch_domain
Manages an AWS Elasticsearch Domain.
Example Usage
Basic Usage
resource "aws_elasticsearch_domain" "example" {
  # domain name and Elasticsearch version assumed for illustration
  domain_name           = "example"
  elasticsearch_version = "1.5"

  cluster_config {
instance_type = "r4.large.elasticsearch"
}
snapshot_options {
automated_snapshot_start_hour = 23
}
tags = {
Domain = "TestDomain"
}
}
Access Policy
access_policies = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "es:*",
"Principal": "*",
"Effect": "Allow",
"Resource": "arn:aws:es:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_
id}:domain/${var.domain}/*",
"Condition": {
"IpAddress": {"aws:SourceIp": ["66.193.100.22/32"]}
}
}
]
}
POLICY
}
policy_document = <<CONFIG
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "es.amazonaws.com"
},
"Action": [
"logs:PutLogEvents",
"logs:PutLogEventsBatch",
"logs:CreateLogStream"
],
"Resource": "arn:aws:logs:*"
}
]
}
CONFIG
}
log_publishing_options {
cloudwatch_log_group_arn = "${aws_cloudwatch_log_group.example.arn}"
log_type = "INDEX_SLOW_LOGS"
}
}
VPC based ES
variable "vpc" {}
variable "domain" {
default = "tf-test"
}
tags {
Tier = "private"
}
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = [
"${data.aws_vpc.selected.cidr_blocks}",
]
}
}
cluster_config {
instance_type = "m4.large.elasticsearch"
}
vpc_options {
subnet_ids = [
"${data.aws_subnet_ids.selected.ids[0]}",
"${data.aws_subnet_ids.selected.ids[1]}",
]
security_group_ids = ["${aws_security_group.elasticsearch.id}"]
}
advanced_options = {
"rest.action.multi.allow_explicit_index" = "true"
}
access_policies = <<CONFIG
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "es:*",
"Principal": "*",
"Effect": "Allow",
"Resource": "arn:aws:es:${data.aws_region.current.name}:${data.aws_caller_identity.current.ac
count_id}:domain/${var.domain}/*"
}
}
]
}
CONFIG
snapshot_options {
automated_snapshot_start_hour = 23
}
tags {
Domain = "TestDomain"
}
depends_on = [
"aws_iam_service_linked_role.es",
]
}
Argument Reference
The following arguments are supported:
access_policies - (Optional) IAM policy document specifying the access policies for the domain
advanced_options - (Optional) Key-value string pairs to specify advanced configuration options. Note that the values
for these configuration options must be strings (wrapped in quotes) or they may be wrong and cause a perpetual diff,
causing Terraform to want to recreate your Elasticsearch domain on every apply.
ebs_options - (Optional) EBS related options, may be required based on chosen instance size
(https://aws.amazon.com/elasticsearch-service/pricing/). See below.
encrypt_at_rest - (Optional) Encrypt at rest options. Only available for certain instance types
(http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-supported-instance-types.html). See
below.
vpc_options - (Optional) VPC related options, see below. Adding or removing this configuration forces a new
resource (documentation (https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html#es-
vpc-limitations)).
ebs_enabled - (Required) Whether EBS volumes are attached to data nodes in the domain.
volume_type - (Optional) The type of EBS volumes attached to data nodes.
volume_size - The size of EBS volumes attached to data nodes (in GB). Required if ebs_enabled is set to true .
iops - (Optional) The baseline input/output (I/O) performance of EBS volumes attached to data nodes. Applicable
only for the Provisioned IOPS EBS volume type.
enabled - (Required) Whether to enable encryption at rest. If the encrypt_at_rest block is not provided then this
defaults to false .
kms_key_id - (Optional) The KMS key id to encrypt the Elasticsearch domain with. If not specified then it defaults to
using the aws/es service KMS key.
dedicated_master_enabled - (Optional) Indicates whether dedicated master nodes are enabled for the cluster.
dedicated_master_type - (Optional) Instance type of the dedicated master nodes in the cluster.
zone_awareness_config - (Optional) Configuration block containing zone awareness settings. Documented below; a sketch follows this argument list.
zone_awareness_enabled - (Optional) Indicates whether zone awareness is enabled. To enable awareness with three
Availability Zones, the availability_zone_count within the zone_awareness_config must be set to 3 .
availability_zone_count - (Optional) Number of Availability Zones for the domain to use with
zone_awareness_enabled . Defaults to 2 . Valid values: 2 or 3 .
enabled - (Required) Whether to enable node-to-node encryption. If the node_to_node_encryption block is not
provided then this defaults to false .
AWS documentation: VPC Support for Amazon Elasticsearch Service Domains (https://docs.aws.amazon.com/elasticsearch-
service/latest/developerguide/es-vpc.html)
Note you must have created the service linked role for the Elasticsearch service to use the vpc_options . If you need to
create the service linked role at the same time as the Elasticsearch domain then you must use depends_on to make sure
that the role is created before the Elasticsearch domain. See the VPC based ES domain example above.
security_group_ids - (Optional) List of VPC Security Group IDs to be applied to the Elasticsearch domain endpoints.
If omitted, the default Security Group for the VPC will be used.
subnet_ids - (Required) List of VPC Subnet IDs for the Elasticsearch domain endpoints to be created in.
Security Groups and Subnets referenced in these attributes must all be within the same VPC; this determines what VPC the
endpoints are created in.
snapshot_options supports the following attribute:
automated_snapshot_start_hour - (Required) Hour during which the service takes an automated daily snapshot of
the indices in the domain.
cloudwatch_log_group_arn - (Required) ARN of the Cloudwatch log group to which log needs to be published.
enabled - (Optional, Default: true) Specifies whether given log publishing option is enabled or not.
enabled - (Optional, Default: false) Specifies whether Amazon Cognito authentication with Kibana is enabled or not
role_arn - (Required) ARN of the IAM role that has the AmazonESCognitoAccess policy attached
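A sketch of a cluster_config block (inside an aws_elasticsearch_domain resource) that enables zone awareness across three Availability Zones; the instance count and type are illustrative placeholders:
cluster_config {
  instance_count         = 3
  instance_type          = "r4.large.elasticsearch"
  zone_awareness_enabled = true

  zone_awareness_config {
    availability_zone_count = 3
  }
}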
Attributes Reference
In addition to all arguments above, the following attributes are exported:
endpoint - Domain-specific endpoint used to submit index, search, and data upload requests.
vpc_options.0.availability_zones - If the domain was created inside a VPC, the names of the availability zones
the configured subnet_ids were created inside.
vpc_options.0.vpc_id - If the domain was created inside a VPC, the ID of the VPC.
Import
Elasticsearch domains can be imported using the domain_name , e.g.
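A hypothetical command, using a placeholder domain name:
$ terraform import aws_elasticsearch_domain.example example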
Resource: aws_elasticsearch_domain_policy
Allows setting policy to an Elasticsearch domain while referencing domain attributes (e.g. ARN).
Example Usage
resource "aws_elasticsearch_domain_policy" "main" {
  # domain reference assumed; see the aws_elasticsearch_domain example above
  domain_name = "${aws_elasticsearch_domain.example.domain_name}"

  access_policies = <<POLICIES
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "es:*",
"Principal": "*",
"Effect": "Allow",
"Condition": {
"IpAddress": {"aws:SourceIp": "127.0.0.1/32"}
},
"Resource": "${aws_elasticsearch_domain.example.arn}/*"
}
]
}
POLICIES
}
Argument Reference
The following arguments are supported:
access_policies - (Optional) IAM policy document specifying the access policies for the domain
Resource: aws_elastictranscoder_pipeline
Provides an Elastic Transcoder pipeline resource.
Example Usage
resource "aws_elastictranscoder_pipeline" "bar" {
  # bucket, name, and role references assumed for illustration
  input_bucket = "${aws_s3_bucket.input_bucket.bucket}"
  name         = "aws_elastictranscoder_pipeline_tf_test"
  role         = "${aws_iam_role.test_role.arn}"

  content_config {
bucket = "${aws_s3_bucket.content_bucket.bucket}"
storage_class = "Standard"
}
thumbnail_config {
bucket = "${aws_s3_bucket.thumb_bucket.bucket}"
storage_class = "Standard"
}
}
Argument Reference
See "Create Pipeline" (http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/create-pipeline.html) in the
AWS docs for reference.
aws_kms_key_arn - (Optional) The AWS Key Management Service (AWS KMS) key that you want to use with this
pipeline.
content_config - (Optional) The ContentConfig object specifies information about the Amazon S3 bucket in which
you want Elastic Transcoder to save transcoded files and playlists. (documented below)
content_config_permissions - (Optional) The permissions for the content_config object. (documented below)
input_bucket - (Required) The Amazon S3 bucket in which you saved the media files that you want to transcode and
the graphics that you want to use as watermarks.
name - (Optional, Forces new resource) The name of the pipeline. Maximum 40 characters
notifications - (Optional) The Amazon Simple Notification Service (Amazon SNS) topic that you want to notify to
report job status. (documented below)
output_bucket - (Optional) The Amazon S3 bucket in which you want Elastic Transcoder to save the transcoded files.
role - (Required) The IAM Amazon Resource Name (ARN) for the role that you want Elastic Transcoder to use to
transcode jobs for this pipeline.
thumbnail_config - (Optional) The ThumbnailConfig object specifies information about the Amazon S3 bucket in
which you want Elastic Transcoder to save thumbnail files. (documented below)
The content_config object specifies information about the Amazon S3 bucket in which you want Elastic Transcoder to
save transcoded files and playlists: which bucket to use, and the storage class that you want to assign to the files. If you
specify values for content_config , you must also specify values for thumbnail_config . If you specify values for
content_config and thumbnail_config , omit the output_bucket object.
bucket - The Amazon S3 bucket in which you want Elastic Transcoder to save transcoded files and playlists.
storage_class - The Amazon S3 storage class, Standard or ReducedRedundancy, that you want Elastic Transcoder to
assign to the files and playlists that it stores in your Amazon S3 bucket.
access - The permission that you want to give to the AWS user that you specified in
content_config_permissions.grantee
grantee - The AWS user or group that you want to have access to transcoded files and playlists.
grantee_type - Specify the type of value that appears in the content_config_permissions.grantee object. Valid
values are Canonical , Email or Group .
completed - The topic ARN for the Amazon SNS topic that you want to notify when Elastic Transcoder has finished
processing a job in this pipeline.
error - The topic ARN for the Amazon SNS topic that you want to notify when Elastic Transcoder encounters an error
condition while processing a job in this pipeline.
progressing - The topic ARN for the Amazon Simple Notification Service (Amazon SNS) topic that you want to notify
when Elastic Transcoder has started to process a job in this pipeline.
warning - The topic ARN for the Amazon SNS topic that you want to notify when Elastic Transcoder encounters a
warning condition while processing a job in this pipeline.
The thumbnail_config object specifies information about the Amazon S3 bucket in which you want Elastic Transcoder to
save thumbnail files: which bucket to use, which users you want to have access to the files, the type of access you want
users to have, and the storage class that you want to assign to the files. If you specify values for content_config , you must
also specify values for thumbnail_config even if you don't want to create thumbnails. (You control whether to create
thumbnails when you create a job. For more information, see ThumbnailPattern in the topic Create Job.) If you specify values
for content_config and thumbnail_config , omit the OutputBucket object.
bucket - The Amazon S3 bucket in which you want Elastic Transcoder to save thumbnail files.
storage_class - The Amazon S3 storage class, Standard or ReducedRedundancy, that you want Elastic Transcoder to
assign to the thumbnails that it stores in your Amazon S3 bucket.
grantee - The AWS user or group that you want to have access to thumbnail files.
grantee_type - Specify the type of value that appears in the thumbnail_config_permissions.grantee object.
Import
Elastic Transcoder pipelines can be imported using the id , e.g.
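A hypothetical command, using a placeholder pipeline ID:
$ terraform import aws_elastictranscoder_pipeline.basic_pipeline 1407981661351-cttk8b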
Resource: aws_elastictranscoder_preset
Provides an Elastic Transcoder preset resource.
Example Usage
resource "aws_elastictranscoder_preset" "bar" {
  # container, description, and name values assumed for illustration
  container   = "mp4"
  description = "Sample Preset"
  name        = "sample_preset"

  audio {
audio_packing_mode = "SingleTrack"
bit_rate = 96
channels = 2
codec = "AAC"
sample_rate = 44100
}
audio_codec_options {
profile = "AAC-LC"
}
video {
bit_rate = "1600"
codec = "H.264"
display_aspect_ratio = "16:9"
fixed_gop = "false"
frame_rate = "auto"
max_frame_rate = "60"
keyframes_max_dist = 240
max_height = "auto"
max_width = "auto"
padding_policy = "Pad"
sizing_policy = "Fit"
}
video_codec_options = {
Profile = "main"
Level = "2.2"
MaxReferenceFrames = 3
InterlacedMode = "Progressive"
ColorSpaceConversionMode = "None"
}
video_watermarks {
id = "Terraform Test"
max_width = "20%"
max_height = "20%"
sizing_policy = "ShrinkToFit"
horizontal_align = "Right"
horizontal_offset = "10px"
vertical_align = "Bottom"
vertical_offset = "10px"
opacity = "55.5"
opacity = "55.5"
target = "Content"
}
thumbnails {
format = "png"
interval = 120
max_width = "auto"
max_height = "auto"
padding_policy = "Pad"
sizing_policy = "Fit"
}
}
Argument Reference
See "Create Preset" (http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/create-preset.html) in the AWS
docs for reference.
audio - (Optional, Forces new resource) Audio parameters object (documented below).
audio_codec_options - (Optional, Forces new resource) Codec options for the audio parameters (documented
below)
container - (Required, Forces new resource) The container type for the output file. Valid values are flac , flv ,
fmp4 , gif , mp3 , mp4 , mpg , mxf , oga , ogg , ts , and webm .
description - (Optional, Forces new resource) A description of the preset (maximum 255 characters)
name - (Optional, Forces new resource) The name of the preset. (maximum 40 characters)
thumbnails - (Optional, Forces new resource) Thumbnail parameters object (documented below)
video - (Optional, Forces new resource) Video parameters object (documented below)
video_watermarks - (Optional, Forces new resource) Watermark parameters for the video parameters (documented
below)
video_codec_options (Optional, Forces new resource) Codec options for the video parameters
audio_packing_mode - The method of organizing audio channels and tracks. Use Audio:Channels to specify the
number of channels in your output, and Audio:AudioPackingMode to specify the number of tracks and their relation
to the channels. If you do not specify an Audio:AudioPackingMode, Elastic Transcoder uses SingleTrack.
bit_rate - The bit rate of the audio stream in the output file, in kilobits/second. Enter an integer between 64 and
320, inclusive.
codec - The audio codec for the output file. Valid values are AAC , flac , mp2 , mp3 , pcm , and vorbis .
sample_rate - The sample rate of the audio stream in the output file, in hertz. Valid values are: auto , 22050 ,
32000 , 44100 , 48000 , 96000
bit_depth - The bit depth of a sample is how many bits of information are included in the audio samples. Valid
values are 16 and 24 . (FLAC/PCM Only)
bit_order - The order the bits of a PCM sample are stored in. The supported value is LittleEndian. (PCM Only)
profile - If you specified AAC for Audio:Codec, choose the AAC profile for the output file.
signed - Whether audio samples are represented with negative and positive numbers (signed) or only positive
numbers (unsigned). The supported value is Signed. (PCM Only)
aspect_ratio - The aspect ratio of thumbnails. The following values are valid: auto, 1:1, 4:3, 3:2, 16:9
format - The format of thumbnails, if any. Valid formats are jpg and png.
interval - The approximate number of seconds between thumbnails. The value must be an integer. The actual
interval can vary by several seconds from one thumbnail to the next.
max_height - The maximum height of thumbnails, in pixels. If you specify auto, Elastic Transcoder uses 1080 (Full HD)
as the default value. If you specify a numeric value, enter an even integer between 32 and 3072, inclusive.
max_width - The maximum width of thumbnails, in pixels. If you specify auto, Elastic Transcoder uses 1920 (Full HD)
as the default value. If you specify a numeric value, enter an even integer between 32 and 4096, inclusive.
padding_policy - When you set PaddingPolicy to Pad, Elastic Transcoder might add black bars to the top and bottom
and/or left and right sides of thumbnails to make the total size of the thumbnails match the values that you specified
for thumbnail MaxWidth and MaxHeight settings.
resolution - The width and height of thumbnail files in pixels, in the format WidthxHeight, where both values are
even integers. The values cannot exceed the width and height that you specified in the Video:Resolution object. (To
better control resolution and aspect ratio of thumbnails, we recommend that you use the thumbnail values
max_width , max_height , sizing_policy , and padding_policy instead of resolution and aspect_ratio . The
two groups of settings are mutually exclusive. Do not use them together)
sizing_policy - A value that controls scaling of thumbnails. Valid values are: Fit , Fill , Stretch , Keep ,
ShrinkToFit , and ShrinkToFill .
aspect_ratio - The display aspect ratio of the video in the output file. Valid values are: auto , 1:1 , 4:3 , 3:2 ,
16:9 . (Note; to better control resolution and aspect ratio of output videos, we recommend that you use the values
max_width , max_height , sizing_policy , padding_policy , and display_aspect_ratio instead of resolution
and aspect_ratio .)
bit_rate - The bit rate of the video stream in the output file, in kilobits/second. You can configure variable bit rate or
constant bit rate encoding.
codec - The video codec for the output file. Valid values are gif , H.264 , mpeg2 , vp8 , and vp9 .
display_aspect_ratio - The value that Elastic Transcoder adds to the metadata in the output file. If you set
DisplayAspectRatio to auto, Elastic Transcoder chooses an aspect ratio that ensures square pixels. If you specify
another option, Elastic Transcoder sets that value in the output file.
fixed_gop - Whether to use a fixed value for Video:FixedGOP. Not applicable for containers of type gif. Valid values
are true and false. Also known as, Fixed Number of Frames Between Keyframes.
frame_rate - The frames per second for the video stream in the output file. The following values are valid: auto ,
10 , 15 , 23.97 , 24 , 25 , 29.97 , 30 , 50 , 60 .
keyframes_max_dist - The maximum number of frames between key frames. Not applicable for containers of type
gif.
max_frame_rate - If you specify auto for FrameRate, Elastic Transcoder uses the frame rate of the input video for the
frame rate of the output video, up to the maximum frame rate. If you do not specify a MaxFrameRate, Elastic
Transcoder will use a default of 30.
max_height - The maximum height of the output video in pixels. If you specify auto, Elastic Transcoder uses 1080
(Full HD) as the default value. If you specify a numeric value, enter an even integer between 96 and 3072, inclusive.
max_width - The maximum width of the output video in pixels. If you specify auto, Elastic Transcoder uses 1920 (Full
HD) as the default value. If you specify a numeric value, enter an even integer between 128 and 4096, inclusive.
padding_policy - When you set PaddingPolicy to Pad, Elastic Transcoder might add black bars to the top and bottom
and/or left and right sides of the output video to make the total size of the output video match the values that you
specified for max_width and max_height .
resolution - The width and height of the video in the output le, in pixels. Valid values are auto and
widthxheight . (see note for aspect_ratio )
sizing_policy - A value that controls scaling of the output video. Valid values are: Fit , Fill , Stretch , Keep ,
ShrinkToFit , ShrinkToFill .
horizontal_align - The horizontal position of the watermark unless you specify a nonzero value for
horizontal_offset .
horizontal_offset - The amount by which you want the horizontal position of the watermark to be offset from the
position specified by horizontal_align .
id - A unique identifier for the settings for one watermark. The value of Id can be up to 40 characters long. You can
specify settings for up to four watermarks.
opacity - A percentage that indicates how much you want a watermark to obscure the video in the location where it
appears.
sizing_policy - A value that controls scaling of the watermark. Valid values are: Fit , Stretch , ShrinkToFit
target - A value that determines how Elastic Transcoder interprets values that you specified for
video_watermarks.horizontal_offset , video_watermarks.vertical_offset , video_watermarks.max_width ,
and video_watermarks.max_height . Valid values are Content and Frame .
vertical_align - The vertical position of the watermark unless you specify a nonzero value for vertical_offset .
Valid values are Top , Bottom , Center .
vertical_offset - The amount by which you want the vertical position of the watermark to be offset from the
position specified by vertical_align
Profile - The codec profile that you want to use for the output file. (H.264/VP8 Only)
Level - The H.264 level that you want to use for the output file. Elastic Transcoder supports the following levels: 1 ,
1b , 1.1 , 1.2 , 1.3 , 2 , 2.1 , 2.2 , 3 , 3.1 , 3.2 , 4 , 4.1 (H.264 only)
MaxReferenceFrames - The maximum number of previously decoded frames to use as a reference for decoding
future frames. Valid values are integers 0 through 16. (H.264 only)
MaxBitRate - The maximum number of kilobits per second in the output video. Specify a value between 16 and
62,500 inclusive, or auto . (Optional, H.264/MPEG2/VP8/VP9 only)
BufferSize - The maximum number of kilobits in any x seconds of the output video. This window is commonly 10
seconds, the standard segment duration when you're using ts for the container type of the output video. Specify an
integer greater than 0. If you specify MaxBitRate and omit BufferSize, Elastic Transcoder sets BufferSize to 10 times the
value of MaxBitRate. (Optional, H.264/MPEG2/VP8/VP9 only)
InterlacedMode - The interlace mode for the output video. (Optional, H.264/MPEG2 Only)
ColorSpaceConversion - The color space conversion Elastic Transcoder applies to the output video. Valid values are
None , Bt709toBt601 , Bt601toBt709 , and Auto . (Optional, H.264/MPEG2 Only)
ChromaSubsampling - The sampling pattern for the chroma (color) channels of the output video. Valid values are
yuv420p and yuv422p .
LoopCount - The number of times you want the output gif to loop (Gif only)
Import
Elastic Transcoder presets can be imported using the id , e.g.
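A hypothetical command, using a placeholder preset ID:
$ terraform import aws_elastictranscoder_preset.basic_preset 1407981661351-cttk8b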
Resource: aws_elb_attachment
Provides an Elastic Load Balancer Attachment resource.
NOTE on ELB Instances and ELB Attachments: Terraform currently provides both a standalone ELB Attachment
resource (describing an instance attached to an ELB), and an Elastic Load Balancer resource
(/docs/providers/aws/r/elb.html) with instances defined in-line. At this time you cannot use an ELB with in-line
instances in conjunction with an ELB Attachment resource. Doing so will cause a conflict and will overwrite attachments.
Example Usage
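A minimal sketch; the ELB and instance referenced are assumed to be defined elsewhere in the configuration:
resource "aws_elb_attachment" "baz" {
  elb      = "${aws_elb.bar.id}"
  instance = "${aws_instance.foo.id}"
}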
Argument Reference
The following arguments are supported:
Resource: aws_elb
Provides an Elastic Load Balancer resource, also known as a "Classic Load Balancer" after the release of
Application/Network Load Balancers.
NOTE on ELB Instances and ELB Attachments: Terraform currently provides both a standalone ELB Attachment
resource (/docs/providers/aws/r/elb_attachment.html) (describing an instance attached to an ELB), and an ELB resource
with instances defined in-line. At this time you cannot use an ELB with in-line instances in conjunction with an ELB
Attachment resource. Doing so will cause a conflict and will overwrite attachments.
Example Usage
resource "aws_elb" "bar" {
name = "foobar-terraform-elb"
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
access_logs {
bucket = "foo"
bucket_prefix = "bar"
interval = 60
}
listener {
instance_port = 8000
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
listener {
instance_port = 8000
instance_protocol = "http"
lb_port = 443
lb_protocol = "https"
ssl_certificate_id = "arn:aws:iam::123456789012:server-certificate/certName"
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "HTTP:8000/"
interval = 30
}
instances = ["${aws_instance.foo.id}"]
cross_zone_load_balancing = true
idle_timeout = 400
connection_draining = true
connection_draining_timeout = 400
tags = {
Name = "foobar-terraform-elb"
}
}
Argument Reference
The following arguments are supported:
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts
with name .
security_groups - (Optional) A list of security group IDs to assign to the ELB. Only valid if creating an ELB within a
VPC
subnets - (Required for a VPC ELB) A list of subnet IDs to attach to the ELB.
idle_timeout - (Optional) The time in seconds that the connection is allowed to be idle. Default: 60
connection_draining_timeout - (Optional) The time in seconds to allow for connections to drain. Default: 300
Exactly one of availability_zones or subnets must be specified: this determines if the ELB exists in a VPC or in EC2-
classic.
bucket_prefix - (Optional) The S3 bucket prefix. Logs are stored in the root if not configured.
instance_protocol - (Required) The protocol to use to the instance. Valid values are HTTP , HTTPS , TCP , or SSL
lb_protocol - (Required) The protocol to listen on. Valid values are HTTP , HTTPS , TCP , or SSL
ssl_certificate_id - (Optional) The ARN of an SSL certificate you have uploaded to AWS IAM. Note ECDSA-specific
restrictions below. Only valid when lb_protocol is either HTTPS or SSL
healthy_threshold - (Required) The number of checks before the instance is declared healthy.
unhealthy_threshold - (Required) The number of checks before the instance is declared unhealthy.
target - (Required) The target of the check. Valid pattern is "${PROTOCOL}:${PORT}${PATH}", where PROTOCOL
values are:
HTTP , HTTPS - PORT and PATH are required
TCP , SSL - PORT is required, PATH is not supported
timeout - (Required) The length of time before the check times out.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
source_security_group - The name of the security group that you can use as part of your inbound rules for your
load balancer's back-end application instances. Use this for Classic or Default VPC only.
source_security_group_id - The ID of the security group that you can use as part of your inbound rules for your
load balancer's back-end application instances. Only available on ELBs launched in a VPC.
zone_id - The canonical hosted zone ID of the ELB (to be used in a Route 53 Alias record)
Import
ELBs can be imported using the name , e.g.
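A hypothetical command, using a placeholder load balancer name:
$ terraform import aws_elb.bar elb-production-12345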
Resource: aws_emr_cluster
Provides an Elastic MapReduce Cluster, a web service that makes it easy to process large amounts of data efficiently.
Example Usage
resource "aws_emr_cluster" "cluster" {
  # name, release label, and applications assumed for illustration
  name          = "emr-test-arn"
  release_label = "emr-4.6.0"
  applications  = ["Spark"]

  additional_info = <<EOF
{
"instanceAwsClientConfiguration": {
"proxyPort": 8099,
"proxyHost": "myproxy.example.com"
}
}
EOF
termination_protection = false
keep_job_flow_alive_when_no_steps = true
ec2_attributes {
subnet_id = "${aws_subnet.main.id}"
emr_managed_master_security_group = "${aws_security_group.sg.id}"
emr_managed_slave_security_group = "${aws_security_group.sg.id}"
instance_profile = "${aws_iam_instance_profile.emr_profile.arn}"
}
master_instance_group {
instance_type = "m4.large"
}
core_instance_group {
instance_type = "c4.large"
instance_count = 1
ebs_config {
size = "40"
type = "gp2"
volumes_per_instance = 1
}
}
bid_price = "0.30"
autoscaling_policy = <<EOF
{
"Constraints": {
"MinCapacity": 1,
"MaxCapacity": 2
},
"Rules": [
{
"Name": "ScaleOutMemoryPercentage",
"Description": "Scale out if YARNMemoryAvailablePercentage is less than 15",
"Action": {
"SimpleScalingPolicyConfiguration": {
"AdjustmentType": "CHANGE_IN_CAPACITY",
"ScalingAdjustment": 1,
"CoolDown": 300
}
},
"Trigger": {
"CloudWatchAlarmDefinition": {
"ComparisonOperator": "LESS_THAN",
"EvaluationPeriods": 1,
"MetricName": "YARNMemoryAvailablePercentage",
"Namespace": "AWS/ElasticMapReduce",
"Period": 300,
"Statistic": "AVERAGE",
"Threshold": 15.0,
"Unit": "PERCENT"
}
}
}
]
}
EOF
}
ebs_root_volume_size = 100
tags = {
role = "rolename"
env = "env"
}
bootstrap_action {
path = "s3://elasticmapreduce/bootstrap-actions/run-if"
name = "runif"
args = ["instance.isMaster=true", "echo running on master node"]
}
configurations_json = <<EOF
[
{
"Classification": "hadoop-env",
"Configurations": [
{
"Classification": "export",
"Properties": {
"Properties": {
"JAVA_HOME": "/usr/lib/jvm/java-1.8.0"
}
}
],
"Properties": {}
},
{
"Classification": "spark-env",
"Configurations": [
{
"Classification": "export",
"Properties": {
"JAVA_HOME": "/usr/lib/jvm/java-1.8.0"
}
}
],
"Properties": {}
}
]
EOF
service_role = "${aws_iam_role.iam_emr_service_role.arn}"
}
The aws_emr_cluster resource typically requires two IAM roles, one for the EMR Cluster to use as a service, and another to
place on your Cluster Instances to interact with AWS from those instances. The suggested role policy template for the EMR
service is AmazonElasticMapReduceRole , and AmazonElasticMapReduceforEC2Role for the EC2 profile. See the Getting
Started (https://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/emr-gs-launch-sample-cluster.html)
guide for more information on these IAM roles. There is also a fully-bootable example Terraform configuration at the
bottom of this page.
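A minimal sketch of wiring up the two roles using the managed policies named above; the role names, assume-role policies, and managed policy ARN paths are assumptions rather than the exact configuration from the guide:
resource "aws_iam_role" "iam_emr_service_role" {
  name = "iam_emr_service_role"

  # Allow the EMR service to assume this role
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "elasticmapreduce.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "emr_service" {
  role       = "${aws_iam_role.iam_emr_service_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceRole"
}

resource "aws_iam_role" "iam_emr_profile_role" {
  name = "iam_emr_profile_role"

  # Allow EC2 instances in the cluster to assume this role
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "emr_profile" {
  role       = "${aws_iam_role.iam_emr_profile_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceforEC2Role"
}

resource "aws_iam_instance_profile" "emr_profile" {
  name = "emr_profile"
  role = "${aws_iam_role.iam_emr_profile_role.name}"
}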
step {
action_on_failure = "TERMINATE_CLUSTER"
name = "Setup Hadoop Debugging"
hadoop_jar_step {
jar = "command-runner.jar"
args = ["state-pusher-script"]
}
}
lifecycle {
ignore_changes = ["step"]
}
}
map_public_ip_on_launch = true
}
release_label = "emr-5.24.1"
termination_protection = true
ec2_attributes {
subnet_id = "${aws_subnet.example.id}"
}
master_instance_group {
instance_count = 3
}
core_instance_group {
}
}
Argument Reference
The following arguments are supported:
release_label - (Required) The release label for the Amazon EMR release
scale_down_behavior - (Optional) The way that individual Amazon EC2 instances terminate when an automatic
scale-in activity occurs or an instance group is resized.
additional_info - (Optional) A JSON string for selecting additional features such as adding proxy information. Note:
Currently there is no API to retrieve the value of this argument after EMR cluster creation from provider, therefore
Terraform cannot detect drift from the actual EMR cluster if its value is changed outside Terraform.
service_role - (Required) IAM role that will be assumed by the Amazon EMR service to access AWS resources
security_configuration - (Optional) The security configuration name to attach to the EMR cluster. Only valid for
EMR clusters with release_label 4.8.0 or greater
log_uri - (Optional) S3 bucket to write the log files of the job flow. If a value is not provided, logs are not created
applications - (Optional) A list of applications for the cluster. Valid values are: Flink , Hadoop , Hive , Mahout ,
Pig , Spark , and JupyterHub (as of EMR 5.14.0). Case insensitive
termination_protection - (Optional) Switch on/off termination protection (default is false , except when using
multiple master nodes). Before attempting to destroy the resource when termination protection is enabled, this
configuration must be applied with its value set to false .
keep_job_flow_alive_when_no_steps - (Optional) Switch on/off run cluster with no steps or when all steps are
complete (default is on)
ec2_attributes - (Optional) Attributes for the EC2 instances running the job flow. Defined below
kerberos_attributes - (Optional) Kerberos configuration for the cluster. Defined below
ebs_root_volume_size - (Optional) Size in GiB of the EBS root device volume of the Linux AMI that is used for each
EC2 instance. Available in Amazon EMR version 4.x and later.
custom_ami_id - (Optional) A custom Amazon Linux AMI for the cluster (instead of an EMR-owned AMI). Available in
Amazon EMR version 5.7.0 and later.
bootstrap_action - (Optional) List of bootstrap actions that will be run before Hadoop is started on the cluster
nodes. Defined below
configurations - (Optional) List of configurations supplied for the EMR cluster you are creating
configurations_json - (Optional) A JSON string for supplying list of configurations for the EMR cluster.
NOTE on configurations_json: If the Configurations value is empty then you should skip the Configurations field
instead of providing empty list as value "Configurations": [] .
configurations_json = <<EOF
[
{
"Classification": "hadoop-env",
"Configurations": [
{
"Classification": "export",
"Properties": {
"JAVA_HOME": "/usr/lib/jvm/java-1.8.0"
}
}
],
"Properties": {}
}
]
EOF
visible_to_all_users - (Optional) Whether the job flow is visible to all IAM users of the AWS account associated
with the job flow. Default true
autoscaling_role - (Optional) An IAM role for automatic scaling policies. The IAM role provides permissions that the
automatic scaling feature requires to launch and terminate EC2 instances in an instance group.
step - (Optional) List of steps to run when creating the cluster. Defined below. It is highly recommended to utilize the
lifecycle configuration block (/docs/configuration/resources.html) with ignore_changes if other steps are being
managed outside of Terraform. This argument is processed in attribute-as-blocks mode (/docs/configuration/attr-as-
blocks.html).
instance_type - (Required) EC2 instance type for all instances in the instance group.
ebs_config - (Optional) Configuration block(s) for EBS volumes attached to each instance in the instance group.
Detailed below.
instance_count - (Optional) Target number of instances for the instance group. Must be at least 1. Defaults to 1.
ec2_attributes
Attributes for the Amazon EC2 instances running the job flow
key_name - (Optional) Amazon EC2 key pair that can be used to ssh to the master node as the user called hadoop
subnet_id - (Optional) VPC subnet id where you want the job flow to launch. Cannot specify the cc1.4xlarge
instance type for nodes of a job flow launched in an Amazon VPC
emr_managed_master_security_group - (Optional) Identifier of the Amazon EC2 EMR-Managed security group for
the master node
emr_managed_slave_security_group - (Optional) Identifier of the Amazon EC2 EMR-Managed security group for the
slave nodes
service_access_security_group - (Optional) Identifier of the Amazon EC2 service-access security group - required
when the cluster runs on a private subnet
instance_profile - (Required) Instance Profile for EC2 instances of the cluster to assume this role
NOTE on EMR-Managed security groups: These security groups will have any missing inbound or outbound access
rules added and maintained by AWS, to ensure proper communication between instances in a cluster. The EMR service
will maintain these rules for groups provided in emr_managed_master_security_group and
emr_managed_slave_security_group ; attempts to remove the required rules may succeed, only for the EMR service
to re-add them in a matter of minutes. This may cause Terraform to fail to destroy an environment that contains an
EMR cluster, because the EMR service does not revoke rules added on deletion, leaving a cyclic dependency between
the security groups that prevents their deletion. To avoid this, use the revoke_rules_on_delete optional attribute for
any Security Group used in emr_managed_master_security_group and emr_managed_slave_security_group . See
Amazon EMR-Managed Security Groups (http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-man-sec-
groups.html) for more information about the EMR-managed security group rules.
kerberos_attributes
Attributes for Kerberos configuration
ad_domain_join_password - (Optional) The Active Directory password for ad_domain_join_user . Terraform cannot
perform drift detection of this configuration.
ad_domain_join_user - (Optional) Required only when establishing a cross-realm trust with an Active Directory
domain. A user with sufficient privileges to join resources to the domain. Terraform cannot perform drift detection of
this configuration.
kdc_admin_password - (Required) The password used within the cluster for the kadmin service on the cluster-
dedicated KDC, which maintains Kerberos principals, password policies, and keytabs for the cluster. Terraform cannot
perform drift detection of this configuration.
realm - (Required) The name of the Kerberos realm to which all nodes in a cluster belong. For example,
EC2.INTERNAL
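For illustration, a minimal kerberos_attributes block inside an aws_emr_cluster resource might look like the following sketch (the input variable name is hypothetical):
kerberos_attributes {
  # "var.kdc_admin_password" is a hypothetical input variable, not part of the resource
  kdc_admin_password = "${var.kdc_admin_password}"
  realm              = "EC2.INTERNAL"
}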
instance_group
Attributes for each task instance group in the cluster
instance_role - (Required) The role of the instance group in the cluster. Valid values are: MASTER , CORE , and
TASK .
instance_type - (Required) The EC2 instance type for all instances in the instance group
bid_price - (Optional) If set, the bid price for each EC2 instance in the instance group, expressed in USD. By setting
this attribute, the instance group is being declared as a Spot Instance, and will implicitly create a Spot request. Leave
this blank to use On-Demand Instances.
ebs_config - (Optional) A list of attributes for the EBS volumes attached to each instance in the instance group. Each
ebs_config defined will result in additional EBS volumes being attached to each instance in the instance group.
Defined below
autoscaling_policy - (Optional) The autoscaling policy document. This is a JSON formatted string. See EMR Auto
Scaling (https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-automatic-scaling.html)
instance_type - (Required) EC2 instance type for all instances in the instance group.
bid_price - (Optional) Bid price for each EC2 instance in the instance group, expressed in USD. By setting this
attribute, the instance group is being declared as a Spot Instance, and will implicitly create a Spot request. Leave this
blank to use On-Demand Instances.
ebs_config - (Optional) Configuration block(s) for EBS volumes attached to each instance in the instance group.
Detailed below.
instance_count - (Optional) Target number of instances for the instance group. Must be 1 or 3. Defaults to 1.
Launching with multiple master nodes is only supported in EMR version 5.23.0+, and requires this resource's
core_instance_group to be configured. Public (Internet accessible) instances must be created in VPC subnets that
have map public IP on launch (/docs/providers/aws/r/subnet.html#map_public_ip_on_launch) enabled. Termination
protection is automatically enabled when launched with multiple master nodes and Terraform must have the
termination_protection = false configuration applied before destroying this resource.
ebs_config
Attributes for the EBS volumes attached to each EC2 instance in the instance_group
type - (Required) The volume type. Valid options are gp2 , io1 , standard and st1 . See EBS Volume Types
(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html).
iops - (Optional) The number of I/O operations per second (IOPS) that the volume supports
volumes_per_instance - (Optional) The number of EBS volumes with this configuration to attach to each EC2
instance in the instance group (default is 1)
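As a sketch, an ebs_config block nested in an instance group definition might be written as follows (values are illustrative; the size argument is assumed here and is not described above):
ebs_config {
  size                 = 40     # volume size in GiB (assumed argument, illustrative value)
  type                 = "gp2"
  volumes_per_instance = 1
}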
bootstrap_action
name - (Required) Name of the bootstrap action
path - (Required) Location of the script to run during a bootstrap action. Can be either a location in Amazon S3 or on
a local file system
args - (Optional) List of command line arguments to pass to the bootstrap action script
step
Attributes for step configuration
action_on_failure - (Required) The action to take if the step fails. Valid values: TERMINATE_JOB_FLOW ,
TERMINATE_CLUSTER , CANCEL_AND_WAIT , and CONTINUE
hadoop_jar_step - (Required) The JAR file used for the step. Defined below.
hadoop_jar_step
Attributes for Hadoop job step configuration
args - (Optional) List of command line arguments passed to the JAR file's main function when executed.
main_class - (Optional) Name of the main class in the specified Java file. If not specified, the JAR file should specify a
Main-Class in its manifest file.
properties - (Optional) Key-Value map of Java properties that are set when the step runs. You can use these
properties to pass key value pairs to your main function.
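A hypothetical step block pairing these arguments might look like the sketch below; the name and jar attributes are assumptions for illustration and are not documented above:
step {
  action_on_failure = "TERMINATE_CLUSTER"
  name              = "Setup Hadoop Debugging" # step name assumed, illustrative only

  hadoop_jar_step {
    jar  = "command-runner.jar"                # "jar" attribute assumed; not listed above
    args = ["state-pusher-script"]
  }
}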
Attributes Reference
In addition to all arguments above, the following attributes are exported:
master_instance_group.0.id - Master node type Instance Group ID, if using Instance Group for this node type.
core_instance_group.0.id - Core node type Instance Group ID, if using Instance Group for this node type.
log_uri - The path to the Amazon S3 location where logs for this cluster are stored.
ec2_attributes - Provides information about the EC2 instances in a cluster grouped by category: key name, subnet
ID, IAM instance profile, and so on.
bootstrap_action - A list of bootstrap actions that will be run before Hadoop is started on the cluster nodes.
service_role - The IAM role that will be assumed by the Amazon EMR service to access AWS resources on your
behalf.
visible_to_all_users - Indicates whether the job flow is visible to all IAM users of the AWS account associated with
the job flow.
provider "aws" {
region = "us-west-2"
}
}
ec2_attributes {
subnet_id = "${aws_subnet.main.id}"
emr_managed_master_security_group = "${aws_security_group.allow_all.id}"
emr_managed_slave_security_group = "${aws_security_group.allow_all.id}"
instance_profile = "${aws_iam_instance_profile.emr_profile.arn}"
}
master_instance_type = "m5.xlarge"
core_instance_type = "m5.xlarge"
core_instance_count = 1
tags = {
role = "rolename"
dns_zone = "env_zone"
env = "env"
name = "name-env"
}
bootstrap_action {
path = "s3://elasticmapreduce/bootstrap-actions/run-if"
name = "runif"
args = ["instance.isMaster=true", "echo running on master node"]
}
configurations_json = <<EOF
[
{
"Classification": "hadoop-env",
"Configurations": [
{
"Classification": "export",
"Properties": {
"JAVA_HOME": "/usr/lib/jvm/java-1.8.0"
}
}
],
"Properties": {}
},
{
"Classification": "spark-env",
"Configurations": [
{
"Classification": "export",
"Properties": {
"JAVA_HOME": "/usr/lib/jvm/java-1.8.0"
}
}
],
"Properties": {}
}
]
EOF
service_role = "${aws_iam_role.iam_emr_service_role.arn}"
}
ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
depends_on = ["aws_subnet.main"]
lifecycle {
ignore_changes = ["ingress", "egress"]
}
tags = {
name = "emr_test"
}
}
tags = {
name = "emr_test"
}
}
tags = {
name = "emr_test"
}
}
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.gw.id}"
}
}
assume_role_policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "elasticmapreduce.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Resource": "*",
"Action": [
"ec2:AuthorizeSecurityGroupEgress",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CancelSpotInstanceRequests",
"ec2:CreateNetworkInterface",
"ec2:CreateSecurityGroup",
"ec2:CreateTags",
"ec2:DeleteNetworkInterface",
"ec2:DeleteSecurityGroup",
"ec2:DeleteTags",
"ec2:DeleteTags",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeAccountAttributes",
"ec2:DescribeDhcpOptions",
"ec2:DescribeInstanceStatus",
"ec2:DescribeInstances",
"ec2:DescribeKeyPairs",
"ec2:DescribeNetworkAcls",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribePrefixLists",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSpotInstanceRequests",
"ec2:DescribeSpotPriceHistory",
"ec2:DescribeSubnets",
"ec2:DescribeVpcAttribute",
"ec2:DescribeVpcEndpoints",
"ec2:DescribeVpcEndpointServices",
"ec2:DescribeVpcs",
"ec2:DetachNetworkInterface",
"ec2:ModifyImageAttribute",
"ec2:ModifyInstanceAttribute",
"ec2:RequestSpotInstances",
"ec2:RevokeSecurityGroupEgress",
"ec2:RunInstances",
"ec2:TerminateInstances",
"ec2:DeleteVolume",
"ec2:DescribeVolumeStatus",
"ec2:DescribeVolumes",
"ec2:DetachVolume",
"iam:GetRole",
"iam:GetRolePolicy",
"iam:ListInstanceProfiles",
"iam:ListRolePolicies",
"iam:PassRole",
"s3:CreateBucket",
"s3:Get*",
"s3:List*",
"sdb:BatchPutAttributes",
"sdb:Select",
"sqs:CreateQueue",
"sqs:Delete*",
"sqs:GetQueue*",
"sqs:PurgeQueue",
"sqs:ReceiveMessage"
]
}]
}
EOF
}
assume_role_policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Resource": "*",
"Action": [
"cloudwatch:*",
"dynamodb:*",
"ec2:Describe*",
"elasticmapreduce:Describe*",
"elasticmapreduce:ListBootstrapActions",
"elasticmapreduce:ListClusters",
"elasticmapreduce:ListInstanceGroups",
"elasticmapreduce:ListInstances",
"elasticmapreduce:ListSteps",
"kinesis:CreateStream",
"kinesis:DeleteStream",
"kinesis:DescribeStream",
"kinesis:GetRecords",
"kinesis:GetShardIterator",
"kinesis:MergeShards",
"kinesis:PutRecord",
"kinesis:SplitShard",
"rds:Describe*",
"s3:*",
"sdb:*",
"sns:*",
"sqs:*"
]
}]
}
EOF
}
Import
EMR clusters can be imported using the id , e.g.
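For example, with a hypothetical cluster ID:
$ terraform import aws_emr_cluster.cluster j-123456ABCDEF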
Since the API does not return the actual values for Kerberos configurations, environments with those Terraform
configurations will need to use the lifecycle configuration block ignore_changes argument
(/docs/configuration/resources.html#ignore_changes) available to all Terraform resources to prevent perpetual differences,
e.g.
lifecycle {
ignore_changes = ["kerberos_attributes"]
}
}
Resource: aws_emr_instance_group
Provides an Elastic MapReduce Cluster Instance Group configuration. See Amazon Elastic MapReduce Documentation
(https://aws.amazon.com/documentation/emr/) for more information.
NOTE: At this time, Instance Groups cannot be destroyed through the API nor web interface. Instance Groups are
destroyed when the EMR Cluster is destroyed. Terraform will resize any Instance Group to zero when destroying the
resource.
Example Usage
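A minimal sketch, assuming an existing aws_emr_cluster.cluster resource (labels and sizes are illustrative):
resource "aws_emr_instance_group" "task" {
  cluster_id     = "${aws_emr_cluster.cluster.id}"
  instance_count = 1
  instance_type  = "m5.xlarge"
  name           = "my little instance group"
}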
Argument Reference
The following arguments are supported:
name (Required) Human friendly name given to the instance group. Changing this forces a new resource to be
created.
cluster_id (Required) ID of the EMR Cluster to attach to. Changing this forces a new resource to be created.
instance_type (Required) The EC2 instance type for all instances in the instance group. Changing this forces a new
resource to be created.
instance_count (Optional) Target number of instances for the instance group. Defaults to 0.
bid_price - (Optional) If set, the bid price for each EC2 instance in the instance group, expressed in USD. By setting
this attribute, the instance group is being declared as a Spot Instance, and will implicitly create a Spot request. Leave
this blank to use On-Demand Instances.
ebs_optimized (Optional) Indicates whether an Amazon EBS volume is EBS-optimized. Changing this forces a new
resource to be created.
ebs_config (Optional) One or more ebs_config blocks as defined below. Changing this forces a new resource to be
created.
autoscaling_policy - (Optional) The autoscaling policy document. This is a JSON formatted string. See EMR Auto
Scaling (https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-automatic-scaling.html)
configurations_json - (Optional) A JSON string for supplying a list of configurations specific to the EMR instance
group. Note that this can only be changed when using EMR release 5.21 or later.
configurations_json = <<EOF
[
{
"Classification": "hadoop-env",
"Configurations": [
{
"Classification": "export",
"Properties": {
"JAVA_HOME": "/usr/lib/jvm/java-1.8.0"
}
}
],
"Properties": {}
}
]
EOF
iops - (Optional) The number of I/O operations per second (IOPS) that the volume supports.
size - (Optional) The volume size, in gibibytes (GiB). This can be a number from 1 - 1024. If the volume type is EBS-
optimized, the minimum value is 10.
type - (Optional) The volume type. Valid options are 'gp2', 'io1' and 'standard'.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
EMR task instance group can be imported using their EMR Cluster id and Instance Group id separated by a forward-slash / ,
e.g.
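For example, with hypothetical cluster and instance group IDs:
$ terraform import aws_emr_instance_group.task_group j-123456ABCDEF/ig-15EK4O09RZLNR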
Example Usage
configuration = <<EOF
{
"EncryptionConfiguration": {
"AtRestEncryptionConfiguration": {
"S3EncryptionConfiguration": {
"EncryptionMode": "SSE-S3"
},
"LocalDiskEncryptionConfiguration": {
"EncryptionKeyProviderType": "AwsKms",
"AwsKmsKey": "arn:aws:kms:us-west-2:187416307283:alias/tf_emr_test_key"
}
},
"EnableInTransitEncryption": false,
"EnableAtRestEncryption": true
}
}
EOF
}
Argument Reference
The following arguments are supported:
name - (Optional) The name of the EMR Security Configuration. By default generated by Terraform.
name_prefix - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with name .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
CloudWatch Logging
resource "aws_flow_log" "example" {
iam_role_arn = "${aws_iam_role.example.arn}"
log_destination = "${aws_cloudwatch_log_group.example.arn}"
traffic_type = "ALL"
vpc_id = "${aws_vpc.example.id}"
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "vpc-flow-logs.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
S3 Logging
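A sketch of a flow log delivered to an S3 bucket, assuming existing aws_s3_bucket.example and aws_vpc.example resources:
resource "aws_flow_log" "example" {
  log_destination      = "${aws_s3_bucket.example.arn}"
  log_destination_type = "s3"
  traffic_type         = "ALL"
  vpc_id               = "${aws_vpc.example.id}"
}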
Argument Reference
traffic_type - (Required) The type of traffic to capture. Valid values: ACCEPT , REJECT , ALL .
iam_role_arn - (Optional) The ARN for the IAM role that's used to post flow logs to a CloudWatch Logs log group
log_destination_type - (Optional) The type of the logging destination. Valid values: cloud-watch-logs , s3 .
Default: cloud-watch-logs .
log_group_name - (Optional) Deprecated: Use log_destination instead. The name of the CloudWatch log group.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Flow Logs can be imported using the id , e.g.
Example Usage
Argument Reference
The following arguments are supported:
account_id - (Optional) The AWS account ID to associate with AWS Firewall Manager as the AWS Firewall Manager
administrator account. This can be an AWS Organizations master account or a member account. Defaults to the
current account. Must be configured to perform drift detection.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Import
Firewall Manager administrator account association can be imported using the account ID, e.g.
Example Usage
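A minimal sketch, assuming existing aws_s3_bucket.example and aws_subnet.example resources:
resource "aws_fsx_lustre_file_system" "example" {
  import_path      = "s3://${aws_s3_bucket.example.bucket}"
  storage_capacity = 3600
  subnet_ids       = ["${aws_subnet.example.id}"]
}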
Argument Reference
The following arguments are supported:
storage_capacity - (Required) The storage capacity (GiB) of the file system. Minimum of 3600 . Storage capacity is
provisioned in increments of 3,600 GiB.
subnet_ids - (Required) A list of IDs for the subnets that the file system will be accessible from. File systems currently
support only one subnet. The file server is also launched in that subnet's Availability Zone.
export_path - (Optional) S3 URI (with optional prefix) where the root of your Amazon FSx file system is exported. Can
only be specified with import_path argument and the path must use the same Amazon S3 bucket as specified in
import_path . Set equal to import_path to overwrite files on export. Defaults to s3://{IMPORT
BUCKET}/FSxLustre{CREATION TIMESTAMP} .
import_path - (Optional) S3 URI (with optional prefix) that you're using as the data repository for your FSx for Lustre
file system. For example, s3://example-bucket/optional-prefix/ .
imported_file_chunk_size - (Optional) For files imported from a data repository, this value determines the stripe
count and maximum amount of data per file (in MiB) stored on a single physical disk. Can only be specified with
import_path argument. Defaults to 1024 . Minimum of 1 and maximum of 512000 .
security_group_ids - (Optional) A list of IDs for the security groups that apply to the specified network interfaces
created for file system access. These security groups will apply to all network interfaces.
weekly_maintenance_start_time - (Optional) The preferred start time (in d:HH:MM format) to perform weekly
maintenance, in the UTC time zone.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arn - Amazon Resource Name of the file system.
network_interface_ids - Set of Elastic Network Interface identifiers from which the file system is accessible.
Timeouts
aws_fsx_lustre_file_system provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
create - (Default 30m ) How long to wait for the file system to be created.
delete - (Default 30m ) How long to wait for the file system to be deleted.
Import
FSx File Systems can be imported using the id , e.g.
Certain resource arguments, like security_group_ids , do not have an FSx API method for reading the information after
creation. If the argument is set in the Terraform configuration on an imported resource, Terraform will always show a
difference. To work around this behavior, either omit the argument from the Terraform configuration or use
ignore_changes (/docs/configuration/resources.html#ignore_changes) to hide the difference, e.g.
security_group_ids = ["${aws_security_group.example.id}"]
lifecycle {
ignore_changes = ["security_group_ids"]
}
}
Resource: aws_fsx_windows_file_system
Manages an FSx Windows File System. See the FSx Windows Guide
(https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html) for more information.
NOTE: Either the active_directory_id argument or self_managed_active_directory configuration block must be
specified.
Example Usage
self_managed_active_directory {
dns_ips = ["10.0.0.111", "10.0.0.222"]
domain_name = "corp.example.com"
password = "avoid-plaintext-passwords"
username = "Admin"
}
}
Argument Reference
The following arguments are supported:
storage_capacity - (Required) Storage capacity (GiB) of the file system. Minimum of 300 and maximum of 65536.
subnet_ids - (Required) A list of IDs for the subnets that the file system will be accessible from. File systems support
only one subnet. The file server is also launched in that subnet's Availability Zone.
throughput_capacity - (Required) Throughput (megabytes per second) of the file system in power of 2 increments.
Minimum of 8 and maximum of 2048 .
active_directory_id - (Optional) The ID for an existing Microsoft Active Directory instance that the file system
should join when it's created. Cannot be specified with self_managed_active_directory .
copy_tags_to_backups - (Optional) A boolean flag indicating whether tags on the file system should be copied to
backups. Defaults to false .
daily_automatic_backup_start_time - (Optional) The preferred time (in HH:MM format) to take daily automatic
backups, in the UTC time zone.
kms_key_id - (Optional) ARN for the KMS Key to encrypt the file system at rest. Defaults to an AWS managed KMS
Key.
security_group_ids - (Optional) A list of IDs for the security groups that apply to the specified network interfaces
created for file system access. These security groups will apply to all network interfaces.
self_managed_active_directory - (Optional) Configuration block that Amazon FSx uses to join the Windows File
Server instance to your self-managed (including on-premises) Microsoft Active Directory (AD) directory. Cannot be
specified with active_directory_id . Detailed below.
skip_final_backup - (Optional) When enabled, will skip the default final backup taken when the file system is
deleted. This configuration must be applied separately before attempting to delete the resource to have the desired
behavior. Defaults to false .
weekly_maintenance_start_time - (Optional) The preferred start time (in d:HH:MM format) to perform weekly
maintenance, in the UTC time zone.
self_managed_active_directory
The following arguments are supported for self_managed_active_directory configuration block:
dns_ips - (Required) A list of up to two IP addresses of DNS servers or domain controllers in the self-managed AD
directory. The IP addresses need to be either in the same VPC CIDR range as the file system or in the private IP version
4 (IPv4) address ranges as specified in RFC 1918 (https://tools.ietf.org/html/rfc1918).
domain_name - (Required) The fully qualified domain name of the self-managed AD directory. For example,
corp.example.com .
password - (Required) The password for the service account on your self-managed AD domain that Amazon FSx will
use to join to your AD domain.
username - (Required) The user name for the service account on your self-managed AD domain that Amazon FSx will
use to join to your AD domain.
file_system_administrators_group - (Optional) The name of the domain group whose members are granted
administrative privileges for the file system. Administrative privileges include taking ownership of files and folders, and
setting audit controls (audit ACLs) on files and folders. The group that you specify must already exist in your domain.
Defaults to Domain Admins .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
dns_name - DNS name for the file system, e.g. fs-12345678.corp.example.com (domain name matching the Active
Directory domain name)
network_interface_ids - Set of Elastic Network Interface identifiers from which the file system is accessible.
Timeouts
aws_fsx_windows_file_system provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
create - (Default 30m ) How long to wait for the file system to be created.
delete - (Default 30m ) How long to wait for the file system to be deleted.
Import
FSx File Systems can be imported using the id , e.g.
Certain resource arguments, like security_group_ids and the self_managed_active_directory configuration block
password , do not have an FSx API method for reading the information after creation. If these arguments are set in the
Terraform configuration on an imported resource, Terraform will always show a difference. To work around this behavior,
either omit the argument from the Terraform configuration or use ignore_changes
(/docs/configuration/resources.html#ignore_changes) to hide the difference, e.g.
security_group_ids = ["${aws_security_group.example.id}"]
lifecycle {
ignore_changes = ["security_group_ids"]
}
}
Resource: aws_gamelift_alias
Provides a Gamelift Alias resource.
Example Usage
routing_strategy {
message = "Example Message"
type = "TERMINAL"
}
}
Argument Reference
The following arguments are supported:
routing_strategy - (Required) Specifies the fleet and/or routing type to use for the alias.
Nested Fields
routing_strategy
message - (Optional) Message text to be used with the TERMINAL routing strategy.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - Alias ID.
Example Usage
storage_location {
bucket = "${aws_s3_bucket.test.bucket}"
key = "${aws_s3_bucket_object.test.key}"
role_arn = "${aws_iam_role.test.arn}"
}
depends_on = ["aws_iam_role_policy.test"]
}
Argument Reference
The following arguments are supported:
operating_system - (Required) Operating system that the game server binaries are built to run on. e.g.
WINDOWS_2012 or AMAZON_LINUX .
storage_location - (Required) Information indicating where your game build files are stored. See below.
Nested Fields
storage_location
role_arn - (Required) ARN of the access role that allows Amazon GameLift to access your S3 bucket.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - Build ID.
Import
Gamelift Builds cannot be imported at this time.
Resource: aws_gamelift_fleet
Provides a Gamelift Fleet resource.
Example Usage
runtime_configuration {
server_process {
concurrent_executions = 1
launch_path = "C:\\game\\GomokuServer.exe"
}
}
}
Argument Reference
The following arguments are supported:
ec2_inbound_permission - (Optional) Range of IP addresses and port settings that permit inbound traffic to access
server processes running on the fleet. See below.
metric_groups - (Optional) List of names of metric groups to add this fleet to. A metric group tracks metrics across
all fleets in the group. Defaults to default .
new_game_session_protection_policy - (Optional) Game session protection policy to apply to all instances in this
fleet. e.g. FullProtection . Defaults to NoProtection .
resource_creation_limit_policy - (Optional) Policy that limits the number of game sessions an individual player
can create over a span of time for this fleet. See below.
runtime_configuration - (Optional) Instructions for launching server processes on each instance in the fleet. See
below.
Nested Fields
ec2_inbound_permission
ip_range - (Required) Range of allowed IP addresses expressed in CIDR notation. e.g. 000.000.000.000/[subnet
mask] or 0.0.0.0/[subnet mask] .
protocol - (Required) Network communication protocol used by the fleet. e.g. TCP or UDP
to_port - (Required) Ending value for a range of allowed port numbers. Port numbers are end-inclusive. This value
must be higher than from_port .
resource_creation_limit_policy
new_game_sessions_per_creator - (Optional) Maximum number of game sessions that an individual can create
during the policy period.
policy_period_in_minutes - (Optional) Time span used in evaluating the resource creation limit policy.
runtime_configuration
server_process - (Optional) Collection of server process configurations that describe which server processes to run
on each instance in a fleet. See below.
server_process
concurrent_executions - (Required) Number of server processes using this configuration to run concurrently on an
instance.
launch_path - (Required) Location of the server executable in a game build. All game builds are installed on instances
at the root: for Windows instances C:\game , and for Linux instances /local/game .
parameters - (Optional) Optional list of parameters to pass to the server executable on launch.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - Fleet ID.
Import
Gamelift Fleets cannot be imported at this time.
Resource: aws_gamelift_game_session_queue
Provides a Gamelift Game Session Queue resource.
Example Usage
destinations = [
"${aws_gamelift_fleet.us_west_2_fleet.arn}",
"${aws_gamelift_fleet.eu_central_1_fleet.arn}",
]
player_latency_policy {
maximum_individual_player_latency_milliseconds = 100
policy_duration_seconds = 5
}
player_latency_policy {
maximum_individual_player_latency_milliseconds = 200
}
timeout_in_seconds = 60
}
Argument Reference
The following arguments are supported:
timeout_in_seconds - (Required) Maximum time a game session request can remain in the queue.
destinations - (Optional) List of fleet/alias ARNs used by session queue for placing game sessions.
player_latency_policy - (Optional) One or more policies used to choose fleet based on player latency. See below.
Nested Fields
player_latency_policy
policy_duration_seconds - (Optional) Length of time that the policy is enforced while placing a new game session.
Absence of value for this attribute means that the policy is enforced until the queue times out.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Gamelift Game Session Queues can be imported by their name , e.g.
Example Usage
notification {
sns_topic = "${aws_sns_topic.aws_sns_topic.arn}"
events = ["ArchiveRetrievalCompleted", "InventoryRetrievalCompleted"]
}
access_policy = <<EOF
{
"Version":"2012-10-17",
"Statement":[
{
"Sid": "add-read-only-perm",
"Principal": "*",
"Effect": "Allow",
"Action": [
"glacier:InitiateJob",
"glacier:GetJobOutput"
],
"Resource": "arn:aws:glacier:eu-west-1:432981146916:vaults/MyArchive"
}
]
}
EOF
tags = {
Test = "MyArchive"
}
}
Argument Reference
The following arguments are supported:
name - (Required) The name of the Vault. Names can be between 1 and 255 characters long and the valid characters
are a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen), and '.' (period).
access_policy - (Optional) The policy document. This is a JSON formatted string. The heredoc syntax or file
function is helpful here. Use the Glacier Developer Guide
(https://docs.aws.amazon.com/amazonglacier/latest/dev/vault-access-policy.html) for more information on Glacier
Vault Policy
notification - (Optional) The notifications for the Vault. Fields documented below.
events - (Required) You can configure a vault to publish a notification for ArchiveRetrievalCompleted and
InventoryRetrievalCompleted events.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Glacier Vaults can be imported using the name , e.g.
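For example, assuming the vault resource is addressed as aws_glacier_vault.archive and the vault is named my_archive (both hypothetical):
$ terraform import aws_glacier_vault.archive my_archive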
NOTE: This resource allows you to test Glacier Vault Lock policies by setting the complete_lock argument to false .
When testing policies in this manner, the Glacier Vault Lock automatically expires after 24 hours and Terraform will
show this resource as needing recreation after that time. To permanently apply the policy, set the complete_lock
argument to true . When changing complete_lock to true , it is expected the resource will show as recreating.
WARNING: Once a Glacier Vault Lock is completed, it is immutable. The deletion of the Glacier Vault Lock is not
possible and attempting to remove it from Terraform will return an error. Set the ignore_deletion_error argument
to true and apply this configuration before attempting to delete this resource via Terraform or use terraform state
rm to remove this resource from Terraform management.
Example Usage
condition {
test = "NumericLessThanEquals"
variable = "glacier:ArchiveAgeinDays"
values = ["365"]
}
}
}
Argument Reference
The following arguments are supported:
complete_lock - (Required) Boolean whether to permanently apply this Glacier Lock Policy. Once completed, this
cannot be undone. If set to false , the Glacier Lock Policy remains in a testing mode for 24 hours. After that time, the
Glacier Lock Policy is automatically removed by Glacier and the Terraform resource will show as needing recreation.
Changing this from false to true will show as resource recreation, which is expected. Changing this from true to
false is not possible unless the Glacier Vault is recreated at the same time.
policy - (Required) JSON string containing the IAM policy to apply as the Glacier Vault Lock policy.
ignore_deletion_error - (Optional) Allow Terraform to ignore the error returned when attempting to delete the
Glacier Lock Policy. This can be used to delete or recreate the Glacier Vault via Terraform, for example, if the Glacier
Vault Lock policy permits that action. This should only be used in conjunction with complete_lock being set to true .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Glacier Vault Locks can be imported using the Glacier Vault name, e.g.
Example Usage
attributes {
flow_logs_enabled = true
flow_logs_s3_bucket = "example-bucket"
flow_logs_s3_prefix = "flow-logs/"
}
}
Argument Reference
The following arguments are supported:
ip_address_type - (Optional) The value for the address type must be IPV4 .
enabled - (Optional) Indicates whether the accelerator is enabled. The value is true or false. The default value is true.
flow_logs_s3_bucket - (Optional) The name of the Amazon S3 bucket for the flow logs.
flow_logs_s3_prefix - (Optional) The prefix for the location in the Amazon S3 bucket for the flow logs.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Global Accelerator accelerators can be imported using the id , e.g.
Example Usage
endpoint_configuration {
endpoint_id = "${aws_lb.example.arn}"
weight = 100
}
}
Argument Reference
The following arguments are supported:
health_check_interval_seconds - (Optional) The time—10 seconds or 30 seconds—between each health check for
an endpoint. The default value is 30.
health_check_path - (Optional) If the protocol is HTTP/S, then this specifies the path that is the destination for
health check targets. The default value is slash (/).
health_check_port - (Optional) The port that AWS Global Accelerator uses to check the health of endpoints that are
part of this endpoint group. The default port is the listener port that this endpoint group is associated with. If listener
port is a list of ports, Global Accelerator uses the first port in the list.
health_check_protocol - (Optional) The protocol that AWS Global Accelerator uses to check the health of endpoints
that are part of this endpoint group. The default value is TCP.
threshold_count - (Optional) The number of consecutive health checks required to set the state of a healthy
endpoint to unhealthy, or to set an unhealthy endpoint to healthy. The default value is 3.
traffic_dial_percentage - (Optional) The percentage of traffic to send to an AWS Region. Additional traffic is
distributed to other endpoint groups for this listener. The default value is 100.
endpoint_id - (Optional) An ID for the endpoint. If the endpoint is a Network Load Balancer or Application Load
Balancer, this is the Amazon Resource Name (ARN) of the resource. If the endpoint is an Elastic IP address, this is the
Elastic IP address allocation ID.
weight - (Optional) The weight associated with the endpoint. When you add weights to endpoints, you configure AWS
Global Accelerator to route traffic based on proportions that you specify.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Global Accelerator endpoint groups can be imported using the id , e.g.
Example Usage
attributes {
flow_logs_enabled = true
flow_logs_s3_bucket = "example-bucket"
flow_logs_s3_prefix = "flow-logs/"
}
}
port_range {
from_port = 80
to_port = 80
}
}
Argument Reference
The following arguments are supported:
client_affinity - (Optional) Direct all requests from a user to the same endpoint. Valid values are NONE ,
SOURCE_IP . Default: NONE . If NONE , Global Accelerator uses the "five-tuple" properties of source IP address, source
port, destination IP address, destination port, and protocol to select the hash value. If SOURCE_IP , Global Accelerator
uses the "two-tuple" properties of source (client) IP address and destination IP address to select the hash value.
protocol - (Optional) The protocol for the connections from clients to the accelerator. Valid values are TCP , UDP .
port_range - (Optional) The list of port ranges for the connections from clients to the accelerator. Fields documented
below.
Import
Global Accelerator listeners can be imported using the id , e.g.
Example Usage
Argument Reference
The following arguments are supported:
catalog_id - (Optional) ID of the Glue Catalog to create the database in. If omitted, this defaults to the AWS Account
ID.
location_uri - (Optional) The location of the database (for example, an HDFS path).
parameters - (Optional) A list of key-value pairs that define parameters and properties of the database.
Import
Glue Catalog Databases can be imported using the catalog_id:name . If you have not set a Catalog ID specify the AWS
Account ID that the database is in, e.g.
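For example, with a hypothetical account ID and database name:
$ terraform import aws_glue_catalog_database.database 123456789012:MyDatabase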
Example Usage
Basic Table
table_type = "EXTERNAL_TABLE"
parameters = {
EXTERNAL = "TRUE"
"parquet.compression" = "SNAPPY"
}
storage_descriptor {
location = "s3://my-bucket/event-streams/my-stream"
input_format = "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat"
output_format = "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat"
ser_de_info {
name = "my-stream"
serialization_library = "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
parameters = {
"serialization.format" = 1
}
}
columns {
name = "my_string"
type = "string"
}
columns {
name = "my_double"
type = "double"
}
columns {
name = "my_date"
type = "date"
comment = ""
}
columns {
name = "my_bigint"
type = "bigint"
comment = ""
}
columns {
name = "my_struct"
type = "struct<my_nested_string:string>"
comment = ""
}
}
}
Argument Reference
The following arguments are supported:
name - (Required) Name of the table. For Hive compatibility, this must be entirely lowercase.
database_name - (Required) Name of the metadata database where the table metadata resides. For Hive
compatibility, this must be all lowercase.
catalog_id - (Optional) ID of the Glue Catalog and database to create the table in. If omitted, this defaults to the AWS
Account ID plus the database name.
storage_descriptor - (Optional) A storage descriptor object containing information about the physical storage of
this table. You can refer to the Glue Developer Guide (https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-
catalog-tables.html#aws-glue-api-catalog-tables-StorageDescriptor) for a full explanation of this object.
partition_keys - (Optional) A list of columns by which the table is partitioned. Only primitive types are supported as
partition keys.
view_original_text - (Optional) If the table is a view, the original text of the view; otherwise null.
view_expanded_text - (Optional) If the table is a view, the expanded text of the view; otherwise null.
parameters - (Optional) Properties associated with this table, as a list of key-value pairs.
storage_descriptor
location - (Optional) The physical location of the table. By default this takes the form of the warehouse location,
followed by the database location in the warehouse, followed by the table name.
compressed - (Optional) True if the data in the table is compressed, or False if not.
number_of_buckets - (Optional) Must be specified if the table contains any dimension columns.
bucket_columns - (Optional) A list of reducer grouping columns, clustering columns, and bucketing columns in the
table.
sort_columns - (Optional) A list of Order objects specifying the sort order of each bucket in the table.
stored_as_sub_directories - (Optional) True if the table data is stored in subdirectories, or False if not.
column
ser_de_info
parameters - (Optional) A map of initialization parameters for the SerDe, in key-value form.
serialization_library - (Optional) Usually the class that implements the SerDe. An example is:
org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe.
sort_columns
sort_order - (Required) Indicates that the column is sorted in ascending order (== 1), or in descending order (== 0).
skewed_info
skewed_column_values - (Optional) A mapping of skewed values to the columns that contain them.
Import
Glue Tables can be imported with their catalog ID (usually AWS account ID), database name, and table name, e.g.
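For example, with hypothetical identifiers:
$ terraform import aws_glue_catalog_table.MyTable 123456789012:MyDatabase:MyTable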
NOTE: It is only valid to create one type of classifier (csv, grok, JSON, or XML). Changing classifier types will recreate the
classifier.
Example Usage
Csv Classifier
csv_classifier {
allow_single_column = false
contains_header = "PRESENT"
delimiter = ","
disable_value_trimming = false
header = ["example1", "example2"]
quote_symbol = "'"
}
}
Grok Classifier
grok_classifier {
classification = "example"
grok_pattern = "example"
}
}
JSON Classifier
resource "aws_glue_classifier" "example" {
name = "example"
json_classifier {
json_path = "example"
}
}
XML Classifier
xml_classifier {
classification = "example"
row_tag = "example"
}
}
Argument Reference
The following arguments are supported:
csv_classifier
allow_single_column - (Optional) Enables the processing of files that contain only one column.
contains_header - (Optional) Indicates whether the CSV file contains a header. This can be one of "ABSENT",
"PRESENT", or "UNKNOWN".
quote_symbol - (Optional) A custom symbol to denote what combines content into a single column value. It must be
different from the column delimiter.
grok_classifier
classification - (Required) An identifier of the data format that the classifier matches, such as Twitter, JSON,
Omniture logs, Amazon CloudWatch Logs, and so on.
json_classifier
json_path - (Required) A JsonPath string defining the JSON data for the classifier to classify. AWS Glue supports a
subset of JsonPath , as described in Writing JsonPath Custom Classifiers
(https://docs.aws.amazon.com/glue/latest/dg/custom-classifier.html#custom-classifier-json).
xml_classifier
classification - (Required) An identifier of the data format that the classifier matches.
row_tag - (Required) The XML tag designating the element that contains each record in an XML document being
parsed. Note that this cannot identify a self-closing element (closed by /> ). An empty row element that contains only
attributes can be parsed as long as it ends with a closing tag (for example, <row item_a="A" item_b="B"></row> is
okay, but <row item_a="A" item_b="B" /> is not).
Attributes Reference
The following additional attributes are exported:
Import
Glue Classifiers can be imported using their name, e.g.
Example Usage
Non-VPC Connection
name = "example"
}
VPC Connection
For more information, see the AWS Documentation (https://docs.aws.amazon.com/glue/latest/dg/populate-add-
connection.html#connection-JDBC-VPC).
name = "example"
physical_connection_requirements {
availability_zone = "${aws_subnet.example.availability_zone}"
security_group_id_list = ["${aws_security_group.example.id}"]
subnet_id = "${aws_subnet.example.id}"
}
}
Argument Reference
The following arguments are supported:
catalog_id – (Optional) The ID of the Data Catalog in which to create the connection. If none is supplied, the AWS
account ID is used by default.
connection_properties – (Required) A map of key-value pairs used as parameters for this connection.
match_criteria – (Optional) A list of criteria that can be used in selecting this connection.
physical_connection_requirements
availability_zone - (Optional) The availability zone of the connection. This field is redundant and implied by
subnet_id , but is currently an api requirement.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Glue Connections can be imported using the CATALOG-ID (AWS account ID if not custom) and NAME , e.g.
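For example, with a hypothetical catalog ID and connection name:
$ terraform import aws_glue_connection.MyConnection 123456789012:MyConnection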
Example Usage
DynamoDB Target
dynamodb_target {
path = "table-name"
}
}
JDBC Target
jdbc_target {
connection_name = "${aws_glue_connection.example.name}"
path = "database-name/%"
}
}
S3 Target
s3_target {
path = "s3://${aws_s3_bucket.example.bucket}"
}
}
Catalog Target
catalog_target {
database_name = "${aws_glue_catalog_database.example.name}"
tables = ["${aws_glue_catalog_table.example.name}"]
}
schema_change_policy {
delete_behavior = "LOG"
}
configuration = <<EOF
{
"Version":1.0,
"Grouping": {
"TableGroupingPolicy": "CombineCompatibleSchemas"
}
}
EOF
}
Argument Reference
role (Required) The IAM role friendly name (including path without leading slash), or ARN of an IAM role, used by the
crawler to access other resources.
classifiers (Optional) List of custom classifiers. By default, all AWS classifiers are included in a crawl, but these
custom classifiers always override the default classifiers for a given classification.
schedule (Optional) A cron expression used to specify the schedule. For more information, see Time-Based
Schedules for Jobs and Crawlers (https://docs.aws.amazon.com/glue/latest/dg/monitor-data-warehouse-
schedule.html). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *) .
schema_change_policy (Optional) Policy for the crawler's update and deletion behavior.
table_prefix (Optional) The table prefix used for catalog tables that are created.
security_configuration (Optional) The name of Security Configuration to be used by the crawler
exclusions - (Optional) A list of glob patterns used to exclude from the crawl.
Note: configuration for catalog target crawlers will have { ... "Grouping": { "TableGroupingPolicy":
"CombineCompatibleSchemas"} } by default.
update_behavior - (Optional) The update behavior when the crawler finds a changed schema. Valid values: LOG or
UPDATE_IN_DATABASE . Defaults to UPDATE_IN_DATABASE .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - Crawler name
Import
Glue Crawlers can be imported using name , e.g.
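For example, assuming a crawler named MyCrawler (hypothetical):
$ terraform import aws_glue_crawler.MyCrawler MyCrawler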
Example Usage
Python Job
command {
script_location = "s3://${aws_s3_bucket.example.bucket}/example.py"
}
}
Scala Job
command {
script_location = "s3://${aws_s3_bucket.example.bucket}/example.scala"
}
default_arguments = {
"--job-language" = "scala"
}
}
Argument Reference
The following arguments are supported:
NOTE: The allocated_capacity attribute has been deprecated and might be removed in future releases, please use
max_capacity instead.
allocated_capacity – DEPRECATED (Optional) The number of AWS Glue data processing units (DPUs) to allocate to
this Job. At least 2 DPUs need to be allocated; the default is 10. A DPU is a relative measure of processing power that
consists of 4 vCPUs of compute capacity and 16 GB of memory.
default_arguments – (Optional) The map of default arguments for this job. You can specify arguments here that
your own job-execution script consumes, as well as arguments that AWS Glue itself consumes. For information about
how to specify and consume your own Job arguments, see the Calling AWS Glue APIs in Python
(http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-calling.html) topic in the developer guide.
For information about the key-value pairs that AWS Glue consumes to set up your job, see the Special Parameters
Used by AWS Glue (http://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-glue-arguments.html)
topic in the developer guide.
max_capacity – (Optional) The maximum number of AWS Glue data processing units (DPUs) that can be allocated
when this job runs.
max_retries – (Optional) The maximum number of times to retry this job if it fails.
name – (Required) The name you assign to this job. It must be unique in your account.
role_arn – (Required) The ARN of the IAM role associated with this job.
timeout – (Optional) The job timeout in minutes. The default is 2880 minutes (48 hours).
security_configuration - (Optional) The name of the Security Configuration to be associated with the job.
python_version - (Optional) The Python version being used to execute a Python shell job. Allowed values are 2 or 3.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - Job name
Import
Glue Jobs can be imported using name , e.g.
$ terraform import aws_glue_job.MyJob MyJob
Resource: aws_glue_security_configuration
Manages a Glue Security Configuration.
Example Usage
encryption_configuration {
cloudwatch_encryption {
cloudwatch_encryption_mode = "DISABLED"
}
job_bookmarks_encryption {
job_bookmarks_encryption_mode = "DISABLED"
}
s3_encryption {
kms_key_arn = "${data.aws_kms_key.example.arn}"
s3_encryption_mode = "SSE-KMS"
}
}
}
Argument Reference
The following arguments are supported:
encryption_configuration – (Required) Configuration block containing encryption configuration. Detailed below.
s3_encryption - (Required) A s3_encryption block as described below, which contains encryption configuration for
S3 data.
cloudwatch_encryption_mode - (Optional) Encryption mode to use for CloudWatch data. Valid values: DISABLED ,
SSE-KMS . Default value: DISABLED .
kms_key_arn - (Optional) Amazon Resource Name (ARN) of the KMS key to be used to encrypt the data.
job_bookmarks_encryption_mode - (Optional) Encryption mode to use for job bookmarks data. Valid values: CSE-
KMS , DISABLED . Default value: DISABLED .
kms_key_arn - (Optional) Amazon Resource Name (ARN) of the KMS key to be used to encrypt the data.
s3_encryption_mode - (Optional) Encryption mode to use for S3 data. Valid values: DISABLED , SSE-KMS , SSE-S3 .
Default value: DISABLED .
kms_key_arn - (Optional) Amazon Resource Name (ARN) of the KMS key to be used to encrypt the data.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Glue Security Configurations can be imported using name , e.g.
Example Usage
Conditional Trigger
actions {
job_name = "${aws_glue_job.example1.name}"
}
predicate {
conditions {
job_name = "${aws_glue_job.example2.name}"
state = "SUCCEEDED"
}
}
}
On-Demand Trigger
actions {
job_name = "${aws_glue_job.example.name}"
}
}
Scheduled Trigger
resource "aws_glue_trigger" "example" {
name = "example"
schedule = "cron(15 12 * * ? *)"
type = "SCHEDULED"
actions {
job_name = "${aws_glue_job.example.name}"
}
}
Argument Reference
The following arguments are supported:
actions – (Required) List of actions initiated by this trigger when it fires. Defined below.
enabled – (Optional) Start the trigger. Defaults to true . Not valid to disable for ON_DEMAND type.
predicate – (Optional) A predicate to specify when the new trigger should fire. Required when trigger type is
CONDITIONAL . Defined below.
schedule – (Optional) A cron expression used to specify the schedule. Time-Based Schedules for Jobs and Crawlers
(https://docs.aws.amazon.com/glue/latest/dg/monitor-data-warehouse-schedule.html)
type – (Required) The type of trigger. Valid values are CONDITIONAL , ON_DEMAND , and SCHEDULED .
timeout - (Optional) The job run timeout in minutes. It overrides the timeout value of the job.
logical - (Optional) How to handle multiple conditions. Defaults to AND . Valid values are AND or ANY .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - Trigger name
Timeouts
aws_glue_trigger provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
Import
Glue Triggers can be imported using name , e.g.
NOTE: Deleting this resource is equivalent to "disabling" GuardDuty for an AWS region, which removes all existing findings. You can set the
enable attribute to false to instead "suspend" monitoring and feedback reporting while keeping existing data. See the Suspending or
Disabling Amazon GuardDuty documentation (https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_suspend-disable.html) for more
information.
Example Usage
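A minimal sketch using the arguments documented below (the resource label is arbitrary):
resource "aws_guardduty_detector" "MyDetector" {
  enable                       = true
  finding_publishing_frequency = "SIX_HOURS"
}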
Argument Reference
The following arguments are supported:
enable - (Optional) Enable monitoring and feedback reporting. Setting to false is equivalent to "suspending" GuardDuty. Defaults to true .
finding_publishing_frequency - (Optional) Specifies the frequency of notifications sent for subsequent finding occurrences. If the detector
is a GuardDuty member account, the value is determined by the GuardDuty master account and cannot be modified, otherwise defaults to
SIX_HOURS . For standalone and GuardDuty master accounts, it must be configured in Terraform to enable drift detection. Valid values for
standalone and master accounts: FIFTEEN_MINUTES , ONE_HOUR , SIX_HOURS . See AWS Documentation
(https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings_cloudwatch.html#guardduty_findings_cloudwatch_notification_frequency)
for more information.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
GuardDuty detectors can be imported using the detector ID, e.g.
Example Usage
detector_id = "${aws_guardduty_detector.member.id}"
master_account_id = "${aws_guardduty_detector.master.account_id}"
}
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_guardduty_invite_accepter provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
create - (Default 1m ) How long to wait for an invite to accept.
Import
aws_guardduty_invite_accepter can be imported using the member GuardDuty detector ID, e.g.
Note: Currently in GuardDuty, users from member accounts cannot upload and further manage IPSets. IPSets that are
uploaded by the master account are imposed on GuardDuty functionality in its member accounts. See the GuardDuty
API Documentation (https://docs.aws.amazon.com/guardduty/latest/ug/create-ip-set.html)
Example Usage
Argument Reference
The following arguments are supported:
activate - (Required) Specifies whether GuardDuty is to start using the uploaded IPSet.
format - (Required) The format of the file that contains the IPSet. Valid values: TXT | STIX | OTX_CSV |
ALIEN_VAULT | PROOF_POINT | FIRE_EYE
Import
GuardDuty IPSet can be imported using the master GuardDuty detector ID and IPSet ID, e.g.
Example Usage
enable = true
}
Argument Reference
The following arguments are supported:
detector_id - (Required) The detector ID of the GuardDuty account where you want to create member accounts.
invite - (Optional) Boolean whether to invite the account to GuardDuty as a member. Defaults to false . To detect if
an invitation needs to be (re-)sent, the Terraform state value is true based on a relationship_status of
Disabled , Enabled , Invited , or EmailVerificationInProgress .
disable_email_notification - (Optional) Boolean whether an email notification is sent to the accounts. Defaults to
false .
Timeouts
aws_guardduty_member provides the following Timeouts (/docs/con guration/resources.html#timeouts) con guration
options:
create - (Default 60s ) How long to wait for a verification to be done against inviting GuardDuty member account.
update - (Default 60s ) How long to wait for a verification to be done against inviting GuardDuty member account.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
relationship_status - The status of the relationship between the member account and its master account. More
information can be found in Amazon GuardDuty API Reference
(https://docs.aws.amazon.com/guardduty/latest/ug/get-members.html).
Import
GuardDuty members can be imported using the master GuardDuty detector ID and member AWS account ID, e.g.
Note: Currently in GuardDuty, users from member accounts cannot upload and further manage ThreatIntelSets.
ThreatIntelSets that are uploaded by the master account are imposed on GuardDuty functionality in its member
accounts. See the GuardDuty API Documentation (https://docs.aws.amazon.com/guardduty/latest/ug/create-threat-
intel-set.html)
Example Usage
Argument Reference
The following arguments are supported:
activate - (Required) Specifies whether GuardDuty is to start using the uploaded ThreatIntelSet.
format - (Required) The format of the file that contains the ThreatIntelSet. Valid values: TXT | STIX | OTX_CSV |
ALIEN_VAULT | PROOF_POINT | FIRE_EYE
id - The ID of the GuardDuty ThreatIntelSet and the detector ID. Format: <DetectorID>:<ThreatIntelSetID>
Import
GuardDuty ThreatIntelSet can be imported using the master GuardDuty detector ID and ThreatIntelSetID, e.g.
Example Usage
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:Describe*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
output "secret" {
value = "${aws_iam_access_key.lb.encrypted_secret}"
}
Argument Reference
The following arguments are supported:
user - (Required) The IAM user to associate with this access key.
pgp_key - (Optional) Either a base-64 encoded PGP public key, or a keybase username in the form
keybase:some_person_that_exists .
status - (Optional) The access key status to apply. Defaults to Active . Valid values are Active and Inactive .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
key_fingerprint - The fingerprint of the PGP key used to encrypt the secret
secret - The secret access key. Note that this will be written to the state file. Please supply a pgp_key instead, which
will prevent the secret from being stored in plain text
encrypted_secret - The encrypted secret, base64 encoded. ~> NOTE: The encrypted secret may be decrypted using
the command line, for example: terraform output encrypted_secret | base64 --decode | keybase pgp
decrypt .
ses_smtp_password - The secret access key converted into an SES SMTP password by applying AWS's documented
conversion algorithm (https://docs.aws.amazon.com/ses/latest/DeveloperGuide/smtp-credentials.html#smtp-
credentials-convert).
Resource: aws_iam_account_alias
Example Usage
Argument Reference
The following arguments are supported:
Import
The current Account Alias can be imported using the account_alias , e.g.
Note: There is only a single policy allowed per AWS account. An existing policy will be lost when using this resource as an
effect of this limitation.
Manages Password Policy for the AWS Account. See more about Account Password Policy
(http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html) in the official AWS docs.
Example Usage
Argument Reference
The following arguments are supported:
hard_expiry - (Optional) Whether users are prevented from setting a new password after their password has
expired (i.e. require administrator reset)
password_reuse_prevention - (Optional) The number of previous passwords that users are prevented from reusing.
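As an illustration, a minimal policy might combine these arguments with other password requirements supported by this resource; the additional arguments and all values below are assumptions, not part of this excerpt:

resource "aws_iam_account_password_policy" "strict" {
  minimum_password_length        = 8
  require_symbols                = true
  allow_users_to_change_password = true
  hard_expiry                    = false
  password_reuse_prevention      = 5
}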
Attributes Reference
In addition to all arguments above, the following attributes are exported:
expire_passwords - Indicates whether passwords in the account expire. Returns true if max_password_age
contains a value greater than 0 . Returns false if it is 0 or not present .
Import
IAM Account Password Policy can be imported using the word iam-account-password-policy , e.g.
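For example (the resource name strict is illustrative):
$ terraform import aws_iam_account_password_policy.strict iam-account-password-policy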
Example Usage
Argument Reference
The following arguments are supported:
name - (Required) The group's name. The name must consist of upper and lowercase alphanumeric characters with
no spaces. You can also include any of the following characters: =,.@-_. . Group names are not distinguished by case.
For example, you cannot create groups named both "ADMINS" and "admins".
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
IAM Groups can be imported using the name , e.g.
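For example, assuming a group named developers managed by a resource of the same name (both illustrative):
$ terraform import aws_iam_group.developers developers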
WARNING: Multiple aws_iam_group_membership resources with the same group name will produce inconsistent
behavior!
Provides a top level resource to manage IAM Group membership for IAM Users. For more information on managing IAM
Groups or IAM Users, see IAM Groups (/docs/providers/aws/r/iam_group.html) or IAM Users
(/docs/providers/aws/r/iam_user.html)
Note: aws_iam_group_membership will conflict with itself if used more than once with the same group. To non-
exclusively manage the users in a group, see the aws_iam_user_group_membership resource
(/docs/providers/aws/r/iam_user_group_membership.html).
Example Usage
users = [
"${aws_iam_user.user_one.name}",
"${aws_iam_user.user_two.name}",
]
group = "${aws_iam_group.group.name}"
}
Argument Reference
The following arguments are supported:
users - (Required) A list of IAM User names to associate with the Group
group – (Required) The IAM Group name to attach the list of users to
Attributes Reference
name - The name to identify the Group Membership
NOTE: The usage of this resource conflicts with the aws_iam_policy_attachment resource and will permanently show
a difference if both are defined.
Example Usage
Argument Reference
The following arguments are supported:
Import
IAM group policy attachments can be imported using the group name and policy arn separated by / .
Example Usage
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:Describe*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
policy - (Required) The policy document. This is a JSON formatted string. For more information about building IAM
policy documents with Terraform, see the AWS IAM Policy Document Guide (/docs/providers/aws/guides/iam-policy-
documents.html)
name - (Optional) The name of the policy. If omitted, Terraform will assign a random, unique name.
name_prefix - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with name .
Attributes Reference
id - The group policy ID.
group - The group to which this policy applies.
Import
IAM Group Policies can be imported using the group_name:group_policy_name , e.g.
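For example (group and policy names are placeholders):
$ terraform import aws_iam_group_policy.my_policy developers:my_developer_policy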
Example Usage
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The profile's name. If omitted, Terraform will assign a random, unique name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts
with name .
path - (Optional, default "/") Path in which to create the profile.
roles - (Deprecated) A list of role names to include in the profile. The current default is 1. If you see an error
message similar to Cannot exceed quota for InstanceSessionsPerInstanceProfile: 1 , then you must contact
AWS support and ask for a limit increase. WARNING: This is deprecated since version 0.9.3 (April 12, 2017)
(https://github.com/hashicorp/terraform/blob/master/CHANGELOG.md#093-april-12-2017), as >= 2 roles are not
possible. See issue #11575 (https://github.com/hashicorp/terraform/issues/11575).
Attribute Reference
id - The instance profile's ID.
roles - The list of roles assigned to the instance profile. (Deprecated)
Import
Instance Profiles can be imported using the name , e.g.
Example Usage
client_id_list = [
"266362248691-342342xasdasdasda-apps.googleusercontent.com",
]
thumbprint_list = []
}
Argument Reference
The following arguments are supported:
url - (Required) The URL of the identity provider. Corresponds to the iss claim.
client_id_list - (Required) A list of client IDs (also known as audiences). When a mobile or web app registers with
an OpenID Connect provider, they establish a value that identifies the application. (This is the value that's sent as the
client_id parameter on OAuth requests.)
thumbprint_list - (Required) A list of server certificate thumbprints for the OpenID Connect (OIDC) identity
provider's server certificate(s).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
IAM OpenID Connect Providers can be imported using the arn , e.g.
WARNING: The aws_iam_policy_attachment resource creates exclusive attachments of IAM policies. Across the entire
AWS account, all of the users/roles/groups to which a single policy is attached must be declared by a single
aws_iam_policy_attachment resource. This means that even any users/roles/groups that have the attached policy via
any other mechanism (including other Terraform resources) will have that attached policy revoked by this resource.
Consider aws_iam_role_policy_attachment , aws_iam_user_policy_attachment , or
aws_iam_group_policy_attachment instead. These resources do not enforce exclusive attachment of an IAM policy.
NOTE: The usage of this resource conflicts with the aws_iam_group_policy_attachment ,
aws_iam_role_policy_attachment , and aws_iam_user_policy_attachment resources and will permanently show
a difference if both are defined.
Example Usage
resource "aws_iam_user" "user" {
name = "test-user"
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:Describe*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
name - (Required) The name of the attachment. This cannot be an empty string.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:Describe*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The name of the policy. If omitted, Terraform will assign a random, unique
name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts
with name .
path - (Optional, default "/") Path in which to create the policy. See IAM Identifiers
(https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) for more information.
policy - (Required) The policy document. This is a JSON formatted string. For more information about building AWS
IAM policy documents with Terraform, see the AWS IAM Policy Document Guide (/docs/providers/aws/guides/iam-
policy-documents.html)
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
IAM Policies can be imported using the arn , e.g.
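For example (the policy ARN below is a placeholder):
$ terraform import aws_iam_policy.administrator arn:aws:iam::123456789012:policy/UsersManageOwnCredentials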
NOTE: If policies are attached to the role via the aws_iam_policy_attachment resource
(/docs/providers/aws/r/iam_policy_attachment.html) and you are modifying the role name or path , the
force_detach_policies argument must be set to true and applied before attempting the operation otherwise you
will encounter a DeleteConflict error. The aws_iam_role_policy_attachment resource (recommended)
(/docs/providers/aws/r/iam_role_policy_attachment.html) does not have this requirement.
Example Usage
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
tags = {
  tag-key = "tag-value"
}
}
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The name of the role. If omitted, Terraform will assign a random, unique
name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the speci ed pre x. Con icts
with name .
assume_role_policy - (Required) The policy that grants an entity permission to assume the role.
NOTE: This assume_role_policy is very similar to, but slightly different from, a standard IAM policy and cannot use an
aws_iam_policy resource. It can, however, use an aws_iam_policy_document data source
(https://www.terraform.io/docs/providers/aws/d/iam_policy_document.html); see the example below for how this could
work.
force_detach_policies - (Optional) Specifies to force detaching any policies the role has before destroying it.
Defaults to false .
path - (Optional) The path to the role. See IAM Identifiers
(https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) for more information.
max_session_duration - (Optional) The maximum session duration (in seconds) that you want to set for the
specified role. If you do not specify a value for this setting, the default maximum of one hour is applied. This setting
can have a value from 1 hour to 12 hours.
permissions_boundary - (Optional) The ARN of the policy that is used to set the permissions boundary for the role.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
principals {
type = "Service"
identifiers = ["ec2.amazonaws.com"]
}
}
}
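The principals block above is a fragment of an assume-role policy document; a fuller sketch of that pattern, with illustrative resource and data source names, might look like this:

data "aws_iam_policy_document" "instance_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "instance" {
  name               = "instance_role"
  path               = "/system/"
  assume_role_policy = "${data.aws_iam_policy_document.instance_assume_role_policy.json}"
}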
Import
IAM Roles can be imported using the name , e.g.
NOTE: The usage of this resource conflicts with the aws_iam_policy_attachment resource and will permanently show
a difference if both are defined.
Example Usage
resource "aws_iam_role" "role" {
name = "test-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:Describe*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
Example Usage
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:Describe*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
name - (Optional) The name of the role policy. If omitted, Terraform will assign a random, unique name.
name_prefix - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with name .
policy - (Required) The policy document. This is a JSON formatted string. For more information about building IAM
policy documents with Terraform, see the AWS IAM Policy Document Guide (/docs/providers/aws/guides/iam-policy-
documents.html)
Attributes Reference
id - The role policy ID, in the form of role_name:role_policy_name .
Import
IAM Role Policies can be imported using the role_name:role_policy_name , e.g.
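For example (role and policy names are placeholders):
$ terraform import aws_iam_role_policy.test_policy test_role:test_policy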
Example Usage
Argument Reference
The following arguments are supported:
saml_metadata_document - (Required) An XML document generated by an identity provider that supports SAML 2.0.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
valid_until - The expiration date and time for the SAML provider in RFC1123 format, e.g. Mon, 02 Jan 2006
15:04:05 MST .
Import
IAM SAML Providers can be imported using the arn , e.g.
Certificates uploaded to IAM can work with other AWS services such as CloudFront and AWS OpsWorks.
For information about server certificates in IAM, see Managing Server Certificates
(https://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingServerCerts.html) in AWS Documentation.
Note: All arguments including the private key will be stored in the raw state as plain-text. Read more about sensitive
data in state (/docs/state/sensitive-data.html).
Example Usage
Using certs on file:
certificate_body = <<EOF
-----BEGIN CERTIFICATE-----
[......] # cert contents
-----END CERTIFICATE-----
EOF
private_key = <<EOF
-----BEGIN RSA PRIVATE KEY-----
[......] # cert contents
-----END RSA PRIVATE KEY-----
EOF
}
lifecycle {
create_before_destroy = true
}
}
listener {
instance_port = 8000
instance_protocol = "http"
lb_port = 443
lb_protocol = "https"
ssl_certificate_id = "${aws_iam_server_certificate.test_cert.arn}"
}
}
Argument Reference
The following arguments are supported:
name - (Optional) The name of the Server Certificate. Do not include the path in this value. If omitted, Terraform will
assign a random, unique name.
name_prefix - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with name .
certificate_body - (Required) The contents of the public key certificate in PEM-encoded format.
certificate_chain - (Optional) The contents of the certificate chain. This is typically a concatenation of the PEM-
encoded public key certificates of the chain.
path - (Optional) The IAM path for the server certificate. If it is not included, it defaults to a slash (/). If this certificate is
for use with AWS CloudFront, the path must be in format /cloudfront/your_path_here . See IAM Identifiers
(https://docs.aws.amazon.com/IAM/latest/UserGuide/Using_Identifiers.html) for more details on IAM Paths.
NOTE: AWS performs behind-the-scenes modifications to some certificate files if they do not adhere to a specific
format. These modifications will result in terraform forever believing that it needs to update the resources since the
local and AWS file contents will not match after these modifications occur. In order to prevent this from happening you
must ensure that all your PEM-encoded files use UNIX line-breaks and that certificate_body contains only one
certificate. All other certificates should go in certificate_chain . It is common for some Certificate Authorities to
issue certificate files that have DOS line-breaks and that are actually multiple certificates concatenated together in
order to form a full certificate chain.
Attributes Reference
id - The unique Server Certificate name
arn - The Amazon Resource Name (ARN) specifying the server certificate.
Import
IAM Server Certificates can be imported using the name , e.g.
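For example (the certificate name is a placeholder):
$ terraform import aws_iam_server_certificate.example example-certificate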
Example Usage
Argument Reference
The following arguments are supported:
aws_service_name - (Required, Forces new resource) The AWS service to which this role is attached. You use a string
similar to a URL but without the http:// in front. For example: elasticbeanstalk.amazonaws.com . To find the full
list of services that support service-linked roles, check the docs
(https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html).
custom_suffix - (Optional, forces new resource) Additional string appended to the role name. Not all AWS services
support custom suffixes.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
IAM service-linked roles can be imported using role ARN, e.g.
$ terraform import aws_iam_service_linked_role.elasticbeanstalk arn:aws:iam::123456789012:role/aws-service-role/elasticbeanstalk.amazonaws.com/AWSServiceRoleForElasticBeanstalk
Resource: aws_iam_user_group_membership
Provides a resource for adding an IAM User (/docs/providers/aws/r/iam_user.html) to IAM Groups
(/docs/providers/aws/r/iam_group.html). This resource can be used multiple times with the same user for non-overlapping
groups.
Example Usage
groups = [
"${aws_iam_group.group1.name}",
"${aws_iam_group.group2.name}",
]
}
groups = [
"${aws_iam_group.group3.name}",
]
}
Argument Reference
The following arguments are supported:
user - (Required) The name of the IAM User (/docs/providers/aws/r/iam_user.html) to add to groups
Import
IAM user group membership can be imported using the user name and group names separated by / .
NOTE: If policies are attached to the user via the aws_iam_policy_attachment resource
(/docs/providers/aws/r/iam_policy_attachment.html) and you are modifying the user name or path , the
force_destroy argument must be set to true and applied before attempting the operation otherwise you will
encounter a DeleteConflict error. The aws_iam_user_policy_attachment resource (recommended)
(/docs/providers/aws/r/iam_user_policy_attachment.html) does not have this requirement.
Example Usage
tags = {
  tag-key = "tag-value"
}
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:Describe*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
name - (Required) The user's name. The name must consist of upper and lowercase alphanumeric characters with no
spaces. You can also include any of the following characters: =,.@-_. . User names are not distinguished by case. For
example, you cannot create users named both "TESTUSER" and "testuser".
permissions_boundary - (Optional) The ARN of the policy that is used to set the permissions boundary for the user.
force_destroy - (Optional, default false) When destroying this user, destroy even if it has non-Terraform-managed
IAM access keys, login profile or MFA devices. Without force_destroy a user with non-Terraform-managed access
keys and login profile will fail to be destroyed.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
IAM Users can be imported using the name , e.g.
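For example, assuming an IAM user named loadbalancer managed by a resource named lb (both illustrative):
$ terraform import aws_iam_user.lb loadbalancer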
To reset an IAM User login password via Terraform, you can use the terraform taint command
(https://www.terraform.io/docs/commands/taint.html) or change any of the arguments.
Example Usage
output "password" {
value = "${aws_iam_user_login_profile.example.encrypted_password}"
}
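For context, a minimal sketch of the login profile resource that this output refers to; the user name and the keybase identity are illustrative:

resource "aws_iam_user" "example" {
  name          = "example"
  path          = "/"
  force_destroy = true
}

resource "aws_iam_user_login_profile" "example" {
  user    = "${aws_iam_user.example.name}"
  pgp_key = "keybase:some_person_that_exists"
}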
Argument Reference
The following arguments are supported:
pgp_key - (Required) Either a base-64 encoded PGP public key, or a keybase username in the form
keybase:username . Only applies on resource creation. Drift detection is not possible with this argument.
password_length - (Optional, default 20) The length of the generated password on resource creation. Only applies
on resource creation. Drift detection is not possible with this argument.
password_reset_required - (Optional, default "true") Whether the user should be forced to reset the generated
password on resource creation. Only applies on resource creation. Drift detection is not possible with this argument.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
key_fingerprint - The fingerprint of the PGP key used to encrypt the password. Only available if password was
handled on Terraform resource creation, not import.
encrypted_password - The encrypted password, base64 encoded. Only available if password was handled on
Terraform resource creation, not import.
NOTE: The encrypted password may be decrypted using the command line, for example: terraform output
password | base64 --decode | keybase pgp decrypt .
Import
IAM User Login Profiles can be imported without password information support via the IAM User name, e.g.
Since Terraform has no method to read the PGP or password information during import, use the Terraform resource
lifecycle configuration block ignore_changes argument
(https://www.terraform.io/docs/configuration/resources.html#ignore_changes) to ignore them unless password recreation
is desired. e.g.
lifecycle {
ignore_changes = ["password_length", "password_reset_required", "pgp_key"]
}
}
Resource: aws_iam_user_policy_attachment
Attaches a Managed IAM Policy to an IAM user
NOTE: The usage of this resource conflicts with the aws_iam_policy_attachment resource and will permanently show
a difference if both are defined.
Example Usage
Argument Reference
The following arguments are supported:
Import
IAM user policy attachments can be imported using the user name and policy arn separated by / .
Example Usage
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:Describe*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
policy - (Required) The policy document. This is a JSON formatted string. For more information about building AWS
IAM policy documents with Terraform, see the AWS IAM Policy Document Guide (/docs/providers/aws/guides/iam-
policy-documents.html).
name - (Optional) The name of the policy. If omitted, Terraform will assign a random, unique name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts
with name .
Import
IAM User Policies can be imported using the user_name:user_policy_name , e.g.
Example Usage
Argument Reference
The following arguments are supported:
username - (Required) The name of the IAM user to associate the SSH public key with.
encoding - (Required) Specifies the public key encoding format to use in the response. To retrieve the public key in
ssh-rsa format, use SSH . To retrieve the public key in PEM format, use PEM .
public_key - (Required) The SSH public key. The public key must be encoded in ssh-rsa format or PEM format.
status - (Optional) The status to assign to the SSH public key. Active means the key can be used for authentication
with an AWS CodeCommit repository. Inactive means the key cannot be used. Default is active .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
SSH public keys can be imported using the username , ssh_public_key_id , and encoding e.g.
$ terraform import aws_iam_user_ssh_key.user user:APKAJNCNNJICVN7CFKCA:SSH
Resource: aws_inspector_assessment_target
Provides an Inspector assessment target
Example Usage
Argument Reference
The following arguments are supported:
resource_group_arn - (Optional) Inspector Resource Group Amazon Resource Name (ARN) stating tags for instance
matching. If not specified, all EC2 instances in the current AWS account and region are included in the assessment
target.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Inspector Assessment Targets can be imported via their Amazon Resource Name (ARN), e.g.
Example Usage
rules_package_arns = [
"arn:aws:inspector:us-west-2:758058086616:rulespackage/0-9hgA516p",
"arn:aws:inspector:us-west-2:758058086616:rulespackage/0-H5hpSawc",
"arn:aws:inspector:us-west-2:758058086616:rulespackage/0-JJOtZiqQ",
"arn:aws:inspector:us-west-2:758058086616:rulespackage/0-vg5GGHSD",
]
}
Argument Reference
The following arguments are supported:
target_arn - (Required) The assessment target ARN to attach the template to.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
provider "aws" {
region = "us-west-2"
}
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"]
}
tags = {
Name = "HelloWorld"
}
}
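For illustration, a minimal aws_instance resource tying the AMI lookup above to an instance; the data source name ubuntu, the instance type, and the resource name are assumptions, since the original example is only partially reproduced here:

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"

  tags = {
    Name = "HelloWorld"
  }
}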
Argument Reference
The following arguments are supported:
tenancy - (Optional) The tenancy of the instance (if the instance is running in a VPC). An instance with a tenancy of
dedicated runs on single-tenant hardware. The host tenancy is not supported for the import-instance command.
host_id - (Optional) The Id of a dedicated host that the instance will be assigned to. Use when an instance is to be
launched on a specific dedicated host.
cpu_core_count - (Optional) Sets the number of CPU cores for an instance. This option is only supported on creation
of instance types that support CPU Options CPU Cores and Threads Per CPU Core Per Instance Type
(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html#cpu-options-supported-
instances-values) - specifying this option for unsupported instance types will return an error from the EC2 API.
cpu_threads_per_core - (Optional - has no effect unless cpu_core_count is also set) If set to 1, hyperthreading
is disabled on the launched instance. Defaults to 2 if not set. See Optimizing CPU Options
(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html) for more information.
NOTE: Changing cpu_core_count and/or cpu_threads_per_core will cause the resource to be destroyed and re-
created.
ebs_optimized - (Optional) If true, the launched EC2 instance will be EBS-optimized. Note that if this is not set on an
instance type that is optimized by default then this will show as disabled but if the instance type is optimized by default
then there is no need to set this and there is no effect to disabling it. See the EBS Optimized section
(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html) of the AWS User Guide for more
information.
instance_initiated_shutdown_behavior - (Optional) Shutdown behavior for the instance. Amazon defaults this to
stop for EBS-backed instances and terminate for instance-store instances. Cannot be set on instance-store
instances. See Shutdown Behavior (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-
instances.html#Using_ChangingInstanceInitiatedShutdownBehavior) for more information.
instance_type - (Required) The type of instance to start. Updates to this field will trigger a stop/start of the EC2
instance.
key_name - (Optional) The key name of the Key Pair to use for the instance; which can be managed using the
aws_key_pair resource (/docs/providers/aws/r/key_pair.html).
get_password_data - (Optional) If true, wait for password data to become available and retrieve it. Useful for getting
the administrator password for instances running Microsoft Windows. The password data is exported to the
password_data attribute. See GetPasswordData
(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_GetPasswordData.html) for more information.
monitoring - (Optional) If true, the launched EC2 instance will have detailed monitoring enabled. (Available since
v0.6.0)
security_groups - (Optional, EC2-Classic and default VPC only) A list of security group names (EC2-Classic) or IDs
(default VPC) to associate with.
vpc_security_group_ids - (Optional, VPC only) A list of security group IDs to associate with.
source_dest_check - (Optional) Controls if traffic is routed to the instance when the destination address does not
match the instance. Used for NAT or VPNs. Defaults true.
user_data - (Optional) The user data to provide when launching the instance. Do not pass gzip-compressed data via
this argument; see user_data_base64 instead.
user_data_base64 - (Optional) Can be used instead of user_data to pass base64-encoded binary data directly. Use
this instead of user_data whenever the value is not a valid UTF-8 string. For example, gzip-encoded user data must
be base64-encoded and passed via this argument to avoid corruption.
iam_instance_profile - (Optional) The IAM Instance Profile to launch the instance with. Specified as the name of
the Instance Profile. Ensure your credentials have the correct permission to assign the instance profile according to
the EC2 documentation (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html#roles-
usingrole-ec2instance-permissions), notably iam:PassRole .
ipv6_address_count - (Optional) A number of IPv6 addresses to associate with the primary network interface.
Amazon EC2 chooses the IPv6 addresses from the range of your subnet.
ipv6_addresses - (Optional) Specify one or more IPv6 addresses from the range of the subnet to associate with the
primary network interface
volume_tags - (Optional) A mapping of tags to assign to the devices created by the instance at launch time.
root_block_device - (Optional) Customize details about the root block device of the instance. See Block Devices
below for details.
ebs_block_device - (Optional) Additional EBS block devices to attach to the instance. Block device configurations
only apply on resource creation. See Block Devices below for details on attributes and drift detection.
ephemeral_block_device - (Optional) Customize Ephemeral (also known as "Instance Store") volumes on the
instance. See Block Devices below for details.
network_interface - (Optional) Customize network interfaces to be attached at instance boot time. See Network
Interfaces below for more details.
credit_specification - (Optional) Customize the credit specification of the instance. See Credit Specification below
for more details.
Timeouts
The timeouts block allows you to specify timeouts
(https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
create - (Defaults to 10 mins) Used when launching the instance (until it reaches the initial running state)
update - (Defaults to 10 mins) Used when stopping and starting the instance when necessary during update - e.g.
when changing instance type
volume_type - (Optional) The type of volume. Can be "standard" , "gp2" , "io1" , "sc1" , or "st1" . (Default:
"standard" ).
delete_on_termination - (Optional) Whether the volume should be destroyed on instance termination (Default:
true ).
encrypted - (Optional) Enable volume encryption. (Default: false ). Must be configured to perform drift detection.
kms_key_id - (Optional) Amazon Resource Name (ARN) of the KMS Key to use when encrypting the volume. Must be
configured to perform drift detection.
volume_type - (Optional) The type of volume. Can be "standard" , "gp2" , or "io1" . (Default: "standard" ).
delete_on_termination - (Optional) Whether the volume should be destroyed on instance termination (Default:
true ).
kms_key_id - (Optional) Amazon Resource Name (ARN) of the KMS Key to use when encrypting the volume. Must be
configured to perform drift detection.
NOTE: Currently, changes to the ebs_block_device configuration of existing resources cannot be automatically
detected by Terraform. To manage changes and attachments of an EBS block to an instance, use the aws_ebs_volume
and aws_volume_attachment resources instead. If you use ebs_block_device on an aws_instance , Terraform will
assume management over the full set of non-root EBS block devices for the instance, treating additional block devices
as drift. For this reason, ebs_block_device cannot be mixed with external aws_ebs_volume and
aws_volume_attachment resources for a given instance.
Each ephemeral_block_device supports the following:
no_device - (Optional) Suppresses the specified device included in the AMI's block device mapping.
Each AWS Instance type has a different set of Instance Store block devices available for attachment. AWS publishes a list
(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes) of which
ephemeral devices are available on each type. The devices are always identified by the virtual_name in the format
"ephemeral{0..N}" .
Network Interfaces
Each of the network_interface blocks attaches a network interface to an EC2 Instance during boot time. However, because
the network interface is attached at boot time, replacing/modifying the network interface WILL trigger a recreation of the
EC2 Instance. If you need to detach/modify/re-attach a network interface to the instance at any point, use the
aws_network_interface or aws_network_interface_attachment resources instead.
The network_interface configuration block does, however, allow users to supply their own network interface to be used
as the default network interface on an EC2 Instance, attached at eth0 .
device_index - (Required) The integer index of the network interface attachment. Limited by instance type.
delete_on_termination - (Optional) Whether or not to delete the network interface on instance termination.
Defaults to false . Currently, the only valid value is false , as this is only supported when creating new network
interfaces when launching an instance.
NOTE: Removing this configuration on existing instances will only stop managing it. It will not change the configuration
back to the default for the instance type.
Credit specification can be applied/modified to the EC2 Instance at any time.
cpu_credits - (Optional) The credit option for CPU usage. Can be "standard" or "unlimited" . T3 instances are
launched as unlimited by default. T2 instances are launched as standard by default.
Example
resource "aws_vpc" "my_vpc" {
cidr_block = "172.16.0.0/16"
tags = {
Name = "tf-example"
}
}
tags = {
Name = "tf-example"
}
}
tags = {
Name = "primary_network_interface"
}
}
network_interface {
network_interface_id = "${aws_network_interface.foo.id}"
device_index = 0
}
credit_specification {
cpu_credits = "unlimited"
}
}
Attributes Reference
In addition to all arguments above, the following attributes are exported:
public_dns - The public DNS name assigned to the instance. For EC2-VPC, this is only available if you've enabled DNS
hostnames for your VPC
public_ip - The public IP address assigned to the instance, if applicable. NOTE: If you are using an aws_eip
(/docs/providers/aws/r/eip.html) with your instance, you should refer to the EIP's address directly and not use
public_ip , as this field will change after the EIP is attached.
private_dns - The private DNS name assigned to the instance. Can only be used inside the Amazon EC2, and only
available if you've enabled DNS hostnames for your VPC
instance_state - The state of the instance. One of: pending , running , shutting-down , terminated ,
stopping , stopped . See Instance Lifecycle (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-
lifecycle.html) for more information.
Import
Instances can be imported using the id , e.g.
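For example (the instance ID is a placeholder):
$ terraform import aws_instance.web i-12345678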
Example Usage
tags = {
Name = "main"
}
}
Argument Reference
The following arguments are supported:
Note: It's recommended to denote that the AWS Instance or Elastic IP depends on the Internet Gateway. For example:
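A minimal sketch of that dependency (resource names are illustrative):

resource "aws_instance" "web" {
  # ... other arguments ...

  depends_on = ["aws_internet_gateway.gw"]
}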
Attributes Reference
In addition to all arguments above, the following attributes are exported:
owner_id - The ID of the AWS account that owns the internet gateway.
Import
Internet Gateways can be imported using the id , e.g.
$ terraform import aws_internet_gateway.gw igw-c0a643a9
Resource: aws_iot_certificate
Creates and manages an AWS IoT certificate.
Example Usage
With CSR
Without CSR
Argument Reference
active - (Required) Boolean flag to indicate if the certificate should be active
csr - (Optional) The certificate signing request. Review CreateCertificateFromCsr
(https://docs.aws.amazon.com/iot/latest/apireference/API_CreateCertificateFromCsr.html) for more information on
generating a certificate from a certificate signing request (CSR). If none is specified both the certificate and keys will be
generated, review CreateKeysAndCertificate
(https://docs.aws.amazon.com/iot/latest/apireference/API_CreateKeysAndCertificate.html) for more information on
generating keys and a certificate.
Attributes Reference
In addition to the arguments, the following attributes are exported:
Example Usage
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"iot:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
Example Usage
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"iot:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
policy - (Required) The policy document. This is a JSON formatted string. Use the IoT Developer Guide
(http://docs.aws.amazon.com/iot/latest/developerguide/iot-policies.html) for more information on IoT Policies. For
more information about building AWS IAM policy documents with Terraform, see the AWS IAM Policy Document Guide
(/docs/providers/aws/guides/iam-policy-documents.html).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {"Service": "credentials.iot.amazonaws.com"},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
Argument Reference
The following arguments are supported:
role_arn - (Required) The identity of the role to which the alias refers.
credential_duration - (Optional) The duration of the credential, in seconds. If you do not specify a value for this
setting, the default maximum of one hour is applied. This setting can have a value from 900 seconds (15 minutes) to
3600 seconds (60 minutes).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
attributes = {
First = "examplevalue"
}
}
Argument Reference
name - (Required) The name of the thing.
Attributes Reference
In addition to the arguments above, the following attributes are exported:
Import
IOT Things can be imported using the name, e.g.
Example Usage
Argument Reference
principal - (Required) The AWS IoT Certificate ARN or Amazon Cognito Identity ID.
Example Usage
Argument Reference
name - (Required, Forces New Resource) The name of the thing type.
description - (Optional, Forces New Resource) The description of the thing type.
deprecated - (Optional, Defaults to false) Whether the thing type is deprecated. If true, no new things could be
associated with this type.
searchable_attributes - (Optional, Forces New Resource) A list of searchable thing attribute names.
Attributes Reference
In addition to the arguments above, the following attributes are exported:
Import
IOT Thing Types can be imported using the name, e.g.
Example Usage
resource "aws_iot_topic_rule" "rule" {
name = "MyRule"
description = "Example rule"
enabled = true
sql = "SELECT * FROM 'topic/test'"
sql_version = "2015-10-08"
sns {
message_format = "RAW"
role_arn = "${aws_iam_role.role.arn}"
target_arn = "${aws_sns_topic.mytopic.arn}"
}
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "iot.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sns:Publish"
],
"Resource": "${aws_sns_topic.mytopic.arn}"
}
]
}
EOF
}
Argument Reference
name - (Required) The name of the rule.
sql - (Required) The SQL statement used to query the topic. For more information, see AWS IoT SQL Reference
(http://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html#aws-iot-sql-reference
(http://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html#aws-iot-sql-reference)) in the AWS IoT Developer
Guide.
sql_version - (Required) The version of the SQL rules engine to use when evaluating the rule.
role_arn - (Required) The IAM role ARN that allows access to the CloudWatch alarm.
state_value - (Required) The value of the alarm state. Acceptable values are: OK, ALARM, INSUFFICIENT_DATA.
metric_unit - (Required) The metric unit (supported units can be found here:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Unit
(http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Unit))
role_arn - (Required) The IAM role ARN that allows access to the CloudWatch metric.
hash_key_type - (Optional) The hash key type. Valid values are "STRING" or "NUMBER".
range_key_type - (Optional) The range key type. Valid values are "STRING" or "NUMBER".
role_arn - (Required) The ARN of the IAM role that grants access to the DynamoDB table.
table_name - (Required) The name of the DynamoDB table.
id - (Required) The unique identifier for the document you are storing.
index - (Required) The Elasticsearch index where you want to store your data.
role_arn - (Required) The IAM role ARN that has access to Elasticsearch.
role_arn - (Required) The IAM role ARN that grants access to the Amazon Kinesis Firehose stream.
separator - (Optional) A character separator that is used to separate records written to the Firehose stream. Valid
values are: '\n' (newline), '\t' (tab), '\r\n' (Windows newline), ',' (comma).
role_arn - (Required) The ARN of the IAM role that grants access to the Amazon Kinesis stream.
role_arn - (Required) The ARN of the IAM role that grants access.
topic - (Required) The name of the MQTT topic the message should be republished to.
role_arn - (Required) The ARN of the IAM role that grants access.
message_format - (Required) The message format of the message to publish. Accepted values are "JSON" and "RAW".
role_arn - (Required) The ARN of the IAM role that grants access.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
IoT Topic Rules can be imported using the name , e.g.
Currently this resource requires an existing user-supplied key pair. This key pair's public key will be registered with AWS to
allow logging-in to EC2 instances.
When importing an existing key pair the public key material may be in any format supported by AWS. Supported formats
(per the AWS documentation (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#how-to-
generate-your-own-key-and-import-it-to-aws)) are:
Example Usage
Argument Reference
The following arguments are supported:
key_name_prefix - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with key_name .
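For illustration, a minimal key pair registration; the key name and the truncated public key material are placeholders:

resource "aws_key_pair" "deployer" {
  key_name   = "deployer-key"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ... email@example.com"
}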
Attributes Reference
In addition to all arguments above, the following attributes are exported:
fingerprint - The MD5 public key fingerprint as specified in section 4 of RFC 4716.
Import
Key Pairs can be imported using the key_name , e.g.
Example Usage
inputs {
name_prefix = "test_prefix"
kinesis_stream {
resource_arn = "${aws_kinesis_stream.test_stream.arn}"
role_arn = "${aws_iam_role.test.arn}"
}
parallelism {
count = 1
}
schema {
record_columns {
mapping = "$.test"
name = "test"
sql_type = "VARCHAR(8)"
}
record_encoding = "UTF-8"
record_format {
mapping_parameters {
json {
record_row_path = "$"
}
}
}
}
}
}
Argument Reference
The following arguments are supported:
code - (Optional) SQL Code to transform input data, and generate output.
cloudwatch_logging_options - (Optional) The CloudWatch log stream options to monitor application errors. See
CloudWatch Logging Options below for more details.
inputs - (Optional) Input configuration of the application. See Inputs below for more details.
outputs - (Optional) Output destination configuration of the application. See Outputs below for more details.
reference_data_sources - (Optional) An S3 Reference Data Source for the application. See Reference Data Sources
below for more details.
role_arn - (Required) The ARN of the IAM Role used to send application messages.
Inputs
Configure an Input for the Kinesis Analytics Application. You can only have 1 Input configured.
name_prefix - (Required) The Name Prefix to use when creating an in-application stream.
schema - (Required) The Schema format of the data in the streaming source. See Source Schema below for more
details.
kinesis_firehose - (Optional) The Kinesis Firehose configuration for the streaming source. Conflicts with
kinesis_stream . See Kinesis Firehose below for more details.
kinesis_stream - (Optional) The Kinesis Stream configuration for the streaming source. Conflicts with
kinesis_firehose . See Kinesis Stream below for more details.
parallelism - (Optional) The number of Parallel in-application streams to create. See Parallelism below for more
details.
processing_configuration - (Optional) The Processing Configuration to transform records as they are received
from the stream. See Processing Configuration below for more details.
Outputs
Configure Output destinations for the Kinesis Analytics Application. You can have a maximum of 3 destinations configured.
schema - (Required) The Schema format of the data written to the destination. See Destination Schema below for
more details.
kinesis_firehose - (Optional) The Kinesis Firehose configuration for the destination stream. Conflicts with
kinesis_stream . See Kinesis Firehose below for more details.
kinesis_stream - (Optional) The Kinesis Stream configuration for the destination stream. Conflicts with
kinesis_firehose . See Kinesis Stream below for more details.
lambda - (Optional) The Lambda function destination. See Lambda below for more details.
schema - (Required) The Schema format of the data in the streaming source. See Source Schema below for more
details.
s3 - (Optional) The S3 configuration for the reference data source. See S3 Reference below for more details.
Kinesis Firehose
role_arn - (Required) The ARN of the IAM Role used to access the stream.
Kinesis Stream
role_arn - (Required) The ARN of the IAM Role used to access the stream.
Destination Schema
The Schema format of the data in the destination.
record_format_type - (Required) The Format Type of the records on the output stream. Can be CSV or JSON .
Source Schema
record_columns - (Required) The Record Column mapping for the streaming source data element. See Record
Columns below for more details.
record_format - (Required) The Record Format and mapping information to schematize a record. See Record Format
below for more details.
Parallelism
The Processing Configuration to transform records as they are received from the stream.
lambda - (Required) The Lambda function con guration. See Lambda below for more details.
Lambda
role_arn - (Required) The ARN of the IAM Role used to access the Lambda function.
Record Columns
The Column mapping of each data element in the streaming source to the corresponding column in the in-application
stream.
Record Format
The Record Format and relevant mapping information that should be applied to schematize the records on the stream.
mapping_parameters - (Optional) The Mapping Information for the record format. See Mapping Parameters below
for more details.
Mapping Parameters
Provides Mapping information specific to the record format on the streaming source.
csv - (Optional) Mapping information when the record format uses delimiters. See CSV Mapping Parameters below
for more details.
json - (Optional) Mapping information when JSON is the record format on the streaming source. See JSON Mapping
Parameters below for more details.
Mapping information when JSON is the record format on the streaming source.
record_row_path - (Required) Path to the top-level parent that contains the records.
S3 Reference
Identifies the S3 bucket and object that contains the reference data.
Attributes Reference
The following attributes are exported along with all argument references:
Import
Kinesis Analytics Application can be imported by using ARN, e.g.
For more details, see the Amazon Kinesis Firehose Documentation (https://aws.amazon.com/documentation/firehose/).
Example Usage
Extended S3 Destination
extended_s3_configuration {
role_arn = "${aws_iam_role.firehose_role.arn}"
bucket_arn = "${aws_s3_bucket.bucket.arn}"
processing_configuration {
enabled = "true"
processors {
type = "Lambda"
parameters {
parameter_name = "LambdaArn"
parameter_value = "${aws_lambda_function.lambda_processor.arn}:$LATEST"
}
}
}
}
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "firehose.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
"Sid": ""
}
]
}
EOF
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
S3 Destination
resource "aws_s3_bucket" "bucket" {
bucket = "tf-test-bucket"
acl = "private"
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "firehose.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
s3_configuration {
role_arn = "${aws_iam_role.firehose_role.arn}"
bucket_arn = "${aws_s3_bucket.bucket.arn}"
}
}
Redshift Destination
resource "aws_redshift_cluster" "test_cluster" {
cluster_identifier = "tf-redshift-cluster-%d"
database_name = "test"
master_username = "testuser"
master_password = "T3stPass"
node_type = "dc1.large"
cluster_type = "single-node"
}
s3_configuration {
role_arn = "${aws_iam_role.firehose_role.arn}"
bucket_arn = "${aws_s3_bucket.bucket.arn}"
buffer_size = 10
buffer_interval = 400
compression_format = "GZIP"
}
redshift_configuration {
role_arn = "${aws_iam_role.firehose_role.arn}"
cluster_jdbcurl = "jdbc:redshift://${aws_redshift_cluster.test_cluster.endpoint}/${aws_redshift_cluster.test_cluster.database_name}"
username = "testuser"
password = "T3stPass"
data_table_name = "test-table"
copy_options = "delimiter '|'"
data_table_columns = "test-col"
s3_backup_mode = "Enabled"
s3_backup_configuration {
role_arn = "${aws_iam_role.firehose_role.arn}"
bucket_arn = "${aws_s3_bucket.bucket.arn}"
buffer_size = 15
buffer_interval = 300
compression_format = "GZIP"
}
}
}
Elasticsearch Destination
resource "aws_elasticsearch_domain" "test_cluster" {
domain_name = "firehose-es-test"
}
s3_configuration {
role_arn = "${aws_iam_role.firehose_role.arn}"
bucket_arn = "${aws_s3_bucket.bucket.arn}"
buffer_size = 10
buffer_interval = 400
compression_format = "GZIP"
}
elasticsearch_configuration {
domain_arn = "${aws_elasticsearch_domain.test_cluster.arn}"
role_arn = "${aws_iam_role.firehose_role.arn}"
index_name = "test"
type_name = "test"
processing_configuration {
enabled = "true"
processors {
type = "Lambda"
parameters {
parameter_name = "LambdaArn"
parameter_value = "${aws_lambda_function.lambda_processor.arn}:$LATEST"
}
}
}
}
}
Splunk Destination
resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
name = "terraform-kinesis-firehose-test-stream"
destination = "splunk"
s3_configuration {
role_arn = "${aws_iam_role.firehose.arn}"
bucket_arn = "${aws_s3_bucket.bucket.arn}"
buffer_size = 10
buffer_interval = 400
compression_format = "GZIP"
}
splunk_configuration {
hec_endpoint = "https://http-inputs-mydomain.splunkcloud.com:443"
hec_token = "51D4DA16-C61B-4F5F-8EC7-ED4301342A4A"
hec_acknowledgment_timeout = 600
hec_endpoint_type = "Event"
s3_backup_mode = "FailedEventsOnly"
}
}
Argument Reference
The following arguments are supported:
name - (Required) A name to identify the stream. This is unique to the AWS account and region the Stream is created
in.
kinesis_source_configuration - (Optional) Allows the ability to specify the kinesis stream that is used as the
source of the firehose delivery stream.
server_side_encryption - (Optional) Encrypt at rest options. Server-side encryption should not be enabled when a
kinesis stream is configured as the source of the firehose delivery stream.
destination – (Required) This is the destination to where the data is delivered. The only options are s3
(Deprecated, use extended_s3 instead), extended_s3 , redshift , elasticsearch , and splunk .
kinesis_stream_arn - (Required) The kinesis stream used as the source of the firehose delivery stream.
role_arn - (Required) The ARN of the role that provides access to the source Kinesis stream.
prefix - (Optional) The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered S3 files. You can
specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it
appears as a folder in the S3 bucket
buffer_size - (Optional) Buffer incoming data to the specified size, in MBs, before delivering it to the destination.
The default value is 5. We recommend setting SizeInMBs to a value greater than the amount of data you typically
ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec set SizeInMBs to be
10 MB or higher.
buffer_interval - (Optional) Buffer incoming data for the specified period of time, in seconds, before delivering it to
the destination. The default value is 300.
compression_format - (Optional) The compression format. If no value is specified, the default is UNCOMPRESSED.
Other supported values are GZIP, ZIP & Snappy. If the destination is redshift you cannot use ZIP or Snappy.
kms_key_arn - (Optional) Specifies the KMS key ARN the stream will use to encrypt data. If not set, no encryption will
be used.
cloudwatch_logging_options - (Optional) The CloudWatch Logging Options for the delivery stream. More details
are given below
The extended_s3_configuration object supports the same fields from s3_configuration as well as the following:
data_format_conversion_configuration - (Optional) Nested argument for the serializer, deserializer, and schema
for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3. More details
given below.
error_output_prefix - (Optional) Prefix added to failed records before writing them to S3. This prefix appears
immediately following the bucket name.
processing_configuration - (Optional) The data processing configuration. More details are given below.
s3_backup_mode - (Optional) The Amazon S3 backup mode. Valid values are Disabled and Enabled . Default value
is Disabled .
s3_backup_configuration - (Optional) The configuration for backup in Amazon S3. Required if s3_backup_mode is
Enabled . Supports the same fields as s3_configuration object.
username - (Required) The username that the firehose delivery stream will assume. It is strongly recommended that
the username and password provided is used exclusively for Amazon Kinesis Firehose purposes, and that the
permissions for the account are restricted for Amazon Redshift INSERT permissions.
password - (Required) The password for the username above.
retry_duration - (Optional) The length of time during which Firehose retries delivery after a failure, starting from
the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not
retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
s3_backup_mode - (Optional) The Amazon S3 backup mode. Valid values are Disabled and Enabled . Default value
is Disabled .
s3_backup_configuration - (Optional) The configuration for backup in Amazon S3. Required if s3_backup_mode is
Enabled . Supports the same fields as s3_configuration object.
data_table_name - (Required) The name of the table in the redshift cluster that the s3 bucket will copy to.
copy_options - (Optional) Copy options for copying the data from the s3 intermediate bucket into redshift, for
example to change the default delimiter. For valid values, see the AWS documentation
(http://docs.aws.amazon.com/firehose/latest/APIReference/API_CopyCommand.html)
data_table_columns - (Optional) The data table columns that will be targeted by the copy command.
cloudwatch_logging_options - (Optional) The CloudWatch Logging Options for the delivery stream. More details
are given below
processing_configuration - (Optional) The data processing configuration. More details are given below.
buffering_interval - (Optional) Buffer incoming data for the specified period of time, in seconds between 60 to
900, before delivering it to the destination. The default value is 300s.
buffering_size - (Optional) Buffer incoming data to the specified size, in MBs between 1 to 100, before delivering it
to the destination. The default value is 5MB.
domain_arn - (Required) The ARN of the Amazon ES domain. The IAM role must have permission for
DescribeElasticsearchDomain , DescribeElasticsearchDomains , and DescribeElasticsearchDomainConfig
after assuming RoleARN . The pattern needs to be arn:.* .
index_rotation_period - (Optional) The Elasticsearch index rotation period. Index rotation appends a timestamp to
the IndexName to facilitate expiration of old data. Valid values are NoRotation , OneHour , OneDay , OneWeek , and
OneMonth . The default value is OneDay .
retry_duration - (Optional) After an initial failure to deliver to Amazon Elasticsearch, the total amount of time, in
seconds between 0 to 7200, during which Firehose re-attempts delivery (including the first attempt). After this time has
elapsed, the failed documents are written to Amazon S3. The default value is 300s. There will be no retry if the value is
0.
role_arn - (Required) The ARN of the IAM role to be assumed by Firehose for calling the Amazon ES Configuration
API and for indexing documents. The pattern needs to be arn:.* .
s3_backup_mode - (Optional) Defines how documents should be delivered to Amazon S3. Valid values are
FailedDocumentsOnly and AllDocuments . Default value is FailedDocumentsOnly .
type_name - (Required) The Elasticsearch type name with maximum length of 100 characters.
cloudwatch_logging_options - (Optional) The CloudWatch Logging Options for the delivery stream. More details
are given below
processing_configuration - (Optional) The data processing configuration. More details are given below.
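A minimal sketch of an elasticsearch_configuration block, assuming an aws_elasticsearch_domain and IAM role defined elsewhere; the index_name argument and all values are illustrative assumptions, not taken from the excerpt above.
elasticsearch_configuration {
  domain_arn = "${aws_elasticsearch_domain.test_cluster.arn}"
  role_arn   = "${aws_iam_role.firehose_role.arn}"
  index_name = "test"
  type_name  = "test"

  index_rotation_period = "OneDay"
  s3_backup_mode        = "FailedDocumentsOnly"
}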
hec_acknowledgment_timeout - (Optional) The amount of time, in seconds between 180 and 600, that Kinesis
Firehose waits to receive an acknowledgment from Splunk after it sends it data.
hec_endpoint - (Required) The HTTP Event Collector (HEC) endpoint to which Kinesis Firehose sends your data.
hec_endpoint_type - (Optional) The HEC endpoint type. Valid values are Raw or Event . The default value is Raw .
hec_token - The GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
s3_backup_mode - (Optional) Defines how documents should be delivered to Amazon S3. Valid values are
FailedEventsOnly and AllEvents . Default value is FailedEventsOnly .
retry_duration - (Optional) After an initial failure to deliver to Splunk, the total amount of time, in
seconds between 0 to 7200, during which Firehose re-attempts delivery (including the first attempt). After this time has
elapsed, the failed documents are written to Amazon S3. The default value is 300s. There will be no retry if the value is
0.
cloudwatch_logging_options - (Optional) The CloudWatch Logging Options for the delivery stream. More details
are given below.
processing_configuration - (Optional) The data processing configuration. More details are given below.
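A minimal sketch of a splunk_configuration block; the HEC endpoint URL and token are placeholder assumptions.
splunk_configuration {
  hec_endpoint               = "https://http-inputs-mydomain.splunkcloud.com:443"
  hec_token                  = "51D4DA16-C61B-4F5F-8EC7-ED4301342A4A"
  hec_acknowledgment_timeout = 600
  hec_endpoint_type          = "Event"
  s3_backup_mode             = "FailedEventsOnly"
}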
log_group_name - (Optional) The CloudWatch group name for logging. This value is required if enabled is true.
log_stream_name - (Optional) The CloudWatch log stream name for logging. This value is required if enabled is true.
processors - (Optional) Array of data processors. More details are given below
parameters - (Optional) Array of processor parameters. More details are given below
parameter_value - (Required) Parameter value. Must be between 1 and 512 length (inclusive). When providing a
Lambda ARN, you should specify the resource version as well.
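A minimal sketch of a processing_configuration block with a Lambda processor; the parameter_name argument and the referenced Lambda function are assumptions not documented in the excerpt above.
processing_configuration {
  enabled = "true"

  processors {
    type = "Lambda"

    parameters {
      parameter_name  = "LambdaArn"
      parameter_value = "${aws_lambda_function.lambda_processor.arn}:$LATEST"
    }
  }
}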
data_format_conversion_configuration
NOTE: Once configured, the data format conversion configuration can only be disabled, in which the configuration
values will remain, but will not be active. It is not currently possible to completely remove the configuration without
recreating the resource.
Example:
extended_s3_configuration {
buffer_size = 128
data_format_conversion_configuration {
input_format_configuration {
deserializer {
hive_json_ser_de {}
}
}
output_format_configuration {
serializer {
orc_ser_de {}
}
}
schema_configuration {
database_name = "${aws_glue_catalog_table.example.database_name}"
role_arn = "${aws_iam_role.example.arn}"
table_name = "${aws_glue_catalog_table.example.name}"
}
}
}
}
input_format_configuration - (Required) Nested argument that specifies the deserializer that you want Kinesis
Data Firehose to use to convert the format of your data from JSON. More details below.
output_format_configuration - (Required) Nested argument that specifies the serializer that you want Kinesis Data
Firehose to use to convert the format of your data to the Parquet or ORC format. More details below.
schema_configuration - (Required) Nested argument that specifies the AWS Glue Data Catalog table that contains
the column information. More details below.
enabled - (Optional) Defaults to true . Set it to false if you want to disable format conversion while preserving the
configuration details.
input_format_configuration
deserializer - (Required) Nested argument that specifies which deserializer to use. You can choose either the
Apache Hive JSON SerDe or the OpenX JSON SerDe. More details below.
deserializer
NOTE: One of the deserializers must be configured. If no nested configuration needs to occur simply declare as
XXX_json_ser_de = [] or XXX_json_ser_de {} .
hive_json_ser_de - (Optional) Nested argument that specifies the native Hive / HCatalog JsonSerDe. More details
below.
open_x_json_ser_de - (Optional) Nested argument that specifies the OpenX SerDe. More details below.
hive_json_ser_de
timestamp_formats - (Optional) A list of how you want Kinesis Data Firehose to parse the date and time stamps that
may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's
DateTimeFormat format strings. For more information, see Class DateTimeFormat (https://www.joda.org/joda-
time/apidocs/org/joda/time/format/DateTimeFormat.html). You can also use the special value millis to parse time
stamps in epoch milliseconds. If you don't specify a format, Kinesis Data Firehose uses java.sql.Timestamp::valueOf by
default.
open_x_json_ser_de
case_insensitive - (Optional) When set to true, which is the default, Kinesis Data Firehose converts JSON keys to
lowercase before deserializing them.
column_to_json_key_mappings - (Optional) A map of column names to JSON keys that aren't identical to the column
names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword.
If you have a JSON key named timestamp, set this parameter to { ts = "timestamp" } to map this key to a column
named ts.
convert_dots_in_json_keys_to_underscores - (Optional) When set to true , specifies that the names of the keys
include dots and that you want Kinesis Data Firehose to replace them with underscores. This is useful because Apache
Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can
define the column name to be "a_b" when using this option. Defaults to false .
output_format_configuration
serializer - (Required) Nested argument that specifies which serializer to use. You can choose either the ORC SerDe
or the Parquet SerDe. More details below.
serializer
NOTE: One of the serializers must be configured. If no nested configuration needs to occur simply declare as
XXX_ser_de = [] or XXX_ser_de {} .
orc_ser_de - (Optional) Nested argument that specifies converting data to the ORC format before storing it in
Amazon S3. For more information, see Apache ORC (https://orc.apache.org/docs/). More details below.
parquet_ser_de - (Optional) Nested argument that specifies converting data to the Parquet format before storing it
in Amazon S3. For more information, see Apache Parquet (https://parquet.apache.org/documentation/latest/). More
details below.
orc_ser_de
block_size_bytes - (Optional) The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to
copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis
Data Firehose uses this value for padding calculations.
bloom_filter_columns - (Optional) A list of column names for which you want Kinesis Data Firehose to create bloom
filters.
bloom_filter_false_positive_probability - (Optional) The Bloom filter false positive probability (FPP). The lower
the FPP, the bigger the Bloom filter. The default value is 0.05 , the minimum is 0 , and the maximum is 1 .
compression - (Optional) The compression code to use over data blocks. The default is SNAPPY .
dictionary_key_threshold - (Optional) A float that represents the fraction of the total number of non-null rows. To
turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary.
To always use dictionary encoding, set this threshold to 1 .
enable_padding - (Optional) Set this to true to indicate that you want stripes to be padded to the HDFS block
boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is
false .
format_version - (Optional) The version of the file to write. The possible values are V0_11 and V0_12 . The default
is V0_12 .
padding_tolerance - (Optional) A float between 0 and 1 that defines the tolerance for block padding as a decimal
fraction of stripe size. The default value is 0.05 , which means 5 percent of stripe size. For the default values of 64 MiB
ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2
MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a
new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes
remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when enable_padding is false .
row_index_stride - (Optional) The number of rows between index entries. The default is 10000 and the minimum
is 1000 .
stripe_size_bytes - (Optional) The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
parquet_ser_de
block_size_bytes - (Optional) The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to
copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis
Data Firehose uses this value for padding calculations.
compression - (Optional) The compression code to use over data blocks. The possible values are UNCOMPRESSED ,
SNAPPY , and GZIP , with the default being SNAPPY . Use SNAPPY for higher decompression speed. Use GZIP if the
compression ratio is more important than speed.
max_padding_bytes - (Optional) The maximum amount of padding to apply. This is useful if you intend to copy the
data from Amazon S3 to HDFS before querying. The default is 0 .
page_size_bytes - (Optional) The Parquet page size. Column chunks are divided into pages. A page is conceptually
an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
writer_version - (Optional) Indicates the version of row format to output. The possible values are V1 and V2 . The
default is V1 .
schema_configuration
database_name - (Required) Specifies the name of the AWS Glue database that contains the schema for the output
data.
role_arn - (Required) The role that Kinesis Data Firehose can use to access AWS Glue. This role must be in the same
account you use for Kinesis Data Firehose. Cross-account roles aren't allowed.
table_name - (Required) Specifies the AWS Glue table that contains the column information that constitutes your
data schema.
catalog_id - (Optional) The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by
default.
region - (Optional) If you don't specify an AWS Region, the default is the current region.
version_id - (Optional) Specifies the table version for the output data schema. Defaults to LATEST .
Attributes Reference
arn - The Amazon Resource Name (ARN) specifying the Stream
Import
Kinesis Firehose Delivery streams can be imported using the stream ARN, e.g.
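A sketch of the import command, assuming a resource named aws_kinesis_firehose_delivery_stream.foo and a placeholder stream ARN:
$ terraform import aws_kinesis_firehose_delivery_stream.foo arn:aws:firehose:us-east-1:123456789012:deliverystream/example-stream-name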
Note: Import does not work for stream destination s3 . Consider using extended_s3 since s3 destination is deprecated.
Resource: aws_kinesis_stream
Provides a Kinesis Stream resource. Amazon Kinesis is a managed service that scales elastically for real-time processing of
streaming big data.
Example Usage
# resource header reconstructed; the name and shard settings are illustrative
resource "aws_kinesis_stream" "test_stream" {
name = "terraform-kinesis-test"
shard_count = 1
retention_period = 48
shard_level_metrics = [
"IncomingBytes",
"OutgoingBytes",
]
tags = {
Environment = "test"
}
}
Argument Reference
The following arguments are supported:
name - (Required) A name to identify the stream. This is unique to the AWS account and region the Stream is created
in.
shard_count – (Required) The number of shards that the stream will use. Amazon has guidelines for specifying the
Stream size that should be referenced when creating a Kinesis stream. See Amazon Kinesis Streams
(https://docs.aws.amazon.com/kinesis/latest/dev/amazon-kinesis-streams.html) for more.
retention_period - (Optional) Length of time data records are accessible after they are added to the stream. The
maximum value of a stream's retention period is 168 hours. Minimum value is 24. Default is 24.
shard_level_metrics - (Optional) A list of shard-level CloudWatch metrics which can be enabled for the stream. See
Monitoring with CloudWatch (https://docs.aws.amazon.com/streams/latest/dev/monitoring-with-cloudwatch.html) for
more. Note that the value ALL should not be used; instead you should provide an explicit list of metrics you wish to
enable.
enforce_consumer_deletion - (Optional) A boolean that indicates all registered consumers should be deregistered
from the stream so that the stream can be destroyed without error. The default value is false .
encryption_type - (Optional) The encryption type to use. The only acceptable values are NONE or KMS . The default
value is NONE .
kms_key_id - (Optional) The GUID for the customer-managed KMS key to use for encryption. You can also use a
Kinesis-owned master key by specifying the alias alias/aws/kinesis .
Attributes Reference
id - The unique Stream id
arn - The Amazon Resource Name (ARN) specifying the Stream (same as id )
Timeouts
aws_kinesis_stream provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration
options:
Import
Kinesis Streams can be imported using the name , e.g.
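A sketch of the import command, assuming a resource named aws_kinesis_stream.test_stream and a stream named terraform-kinesis-test:
$ terraform import aws_kinesis_stream.test_stream terraform-kinesis-test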
Example Usage
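The original example did not survive extraction; a minimal sketch for the aws_kms_alias resource documented by the arguments below, with illustrative names:
resource "aws_kms_key" "a" {}

resource "aws_kms_alias" "a" {
  name          = "alias/my-key-alias"
  target_key_id = "${aws_kms_key.a.key_id}"
}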
Argument Reference
The following arguments are supported:
name - (Optional) The display name of the alias. The name must start with the word "alias" followed by a forward
slash (alias/)
name_prefix - (Optional) Creates a unique alias beginning with the specified prefix. The name must start with the
word "alias" followed by a forward slash (alias/). Conflicts with name .
target_key_id - (Required) Identifier for the key for which the alias is for; it can be either an ARN or key_id.
Attributes Reference
In addition to the arguments, the following attributes are exported:
target_key_arn - The Amazon Resource Name (ARN) of the target key identifier.
Import
KMS aliases can be imported using the name , e.g.
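A sketch of the import command, assuming a resource named aws_kms_alias.a and a placeholder alias name:
$ terraform import aws_kms_alias.a alias/my-key-alias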
Note: All arguments including the plaintext will be stored in the raw state as plain-text. Read more about sensitive data in
state (/docs/state/sensitive-data.html).
Example Usage
plaintext = <<EOF
{
"client_id": "e587dbae22222f55da22",
"client_secret": "8289575d00000ace55e1815ec13673955721b8a5"
}
EOF
}
Argument Reference
The following arguments are supported:
plaintext - (Required) Data to be encrypted. Note that this may show up in logs, and it will be stored in the state file.
key_id - (Required) Globally unique key ID for the customer master key.
Attributes Reference
All of the argument attributes are also exported as result attributes.
Note: All arguments including the key material will be stored in the raw state as plain-text. Read more about sensitive
data in state (/docs/state/sensitive-data.html).
Example Usage
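The original example did not survive extraction; a minimal sketch for the aws_kms_external_key resource, with an illustrative description:
resource "aws_kms_external_key" "example" {
  description = "KMS EXTERNAL for AMI encryption"
}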
Argument Reference
The following arguments are supported:
deletion_window_in_days - (Optional) Duration in days after which the key is deleted after destruction of the
resource. Must be between 7 and 30 days. Defaults to 30 .
enabled - (Optional) Specifies whether the key is enabled. Keys pending import can only be false . Imported keys
default to true unless expired.
key_material_base64 - (Optional) Base64 encoded 256-bit symmetric encryption key material to import. The CMK is
permanently associated with this key material. The same key material can be reimported, but you cannot import
different key material.
policy - (Optional) A key policy JSON document. If you do not provide a key policy, AWS KMS attaches a default key
policy to the CMK.
valid_to - (Optional) Time at which the imported key material expires. When the key material expires, AWS KMS
deletes the key material and the CMK becomes unusable. If not specified, key material does not expire. Valid values:
RFC3339 time string (https://tools.ietf.org/html/rfc3339#section-5.8) ( YYYY-MM-DDTHH:MM:SSZ )
Attributes Reference
The following attributes are exported:
key_usage - The cryptographic operations for which you can use the CMK.
Import
KMS External Keys can be imported using the id , e.g.
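A sketch of the import command, assuming a resource named aws_kms_external_key.a and a placeholder key ID:
$ terraform import aws_kms_external_key.a 1234abcd-12ab-34cd-56ef-1234567890ab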
Example Usage
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
constraints {
encryption_context_equals = {
Department = "Finance"
}
}
}
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resources) A friendly name for identifying the grant.
key_id - (Required, Forces new resources) The unique identifier for the customer master key (CMK) that the grant
applies to. Specify the key ID or the Amazon Resource Name (ARN) of the CMK. To specify a CMK in a different AWS
account, you must use the key ARN.
grantee_principal - (Required, Forces new resources) The principal that is given permission to perform the
operations that the grant permits in ARN format. Note that due to eventual consistency issues around IAM principals,
terraform's state may not always be refreshed to reflect what is true in AWS.
operations - (Required, Forces new resources) A list of operations that the grant permits. The permitted values are:
Decrypt, Encrypt, GenerateDataKey, GenerateDataKeyWithoutPlaintext, ReEncryptFrom, ReEncryptTo,
CreateGrant, RetireGrant, DescribeKey
retiring_principal - (Optional, Forces new resources) The principal that is given permission to retire the grant by
using RetireGrant operation in ARN format. Note that due to eventual consistency issues around IAM principals,
terraform's state may not always be refreshed to reflect what is true in AWS.
constraints - (Optional, Forces new resources) A structure that you can use to allow certain operations in the grant
only when the desired encryption context is present. For more information about encryption context, see Encryption
Context (http://docs.aws.amazon.com/kms/latest/developerguide/encryption-context.html).
grant_creation_tokens - (Optional, Forces new resources) A list of grant tokens to be used when creating the grant.
See Grant Tokens (http://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#grant_token) for more
information about grant tokens.
retire_on_delete - (Defaults to false, Forces new resources) If set to false (the default) the grants will be revoked
upon deletion, and if set to true the grants will try to be retired upon deletion. Note that retiring grants requires
special permissions, hence why we default to revoking grants. See RetireGrant
(https://docs.aws.amazon.com/kms/latest/APIReference/API_RetireGrant.html) for more information.
encryption_context_equals - (Optional) A list of key-value pairs that must match the encryption context in
subsequent cryptographic operation requests. The grant allows the operation only when the encryption context in the
request is the same as the encryption context specified in this constraint. Conflicts with
encryption_context_subset .
encryption_context_subset - (Optional) A list of key-value pairs that must be included in the encryption context of
subsequent cryptographic operation requests. The grant allows the cryptographic operation only when the encryption
context in the request includes the key-value pairs specified in this constraint, although it can include additional key-
value pairs. Conflicts with encryption_context_equals .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
grant_token - The grant token for the created grant. For more information, see Grant Tokens
(http://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#grant_token).
Resource: aws_kms_key
Provides a KMS customer master key.
Example Usage
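The original example did not survive extraction; a minimal sketch, with illustrative values:
resource "aws_kms_key" "a" {
  description             = "KMS key 1"
  deletion_window_in_days = 10
}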
Argument Reference
The following arguments are supported:
key_usage - (Optional) Specifies the intended use of the key. Defaults to ENCRYPT_DECRYPT, and only symmetric
encryption and decryption are supported.
policy - (Optional) A valid policy JSON document. For more information about building AWS IAM policy documents
with Terraform, see the AWS IAM Policy Document Guide (/docs/providers/aws/guides/iam-policy-documents.html).
deletion_window_in_days - (Optional) Duration in days after which the key is deleted after destruction of the
resource, must be between 7 and 30 days. Defaults to 30 days.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
KMS Keys can be imported using the id , e.g.
$ terraform import aws_kms_key.a 1234abcd-12ab-34cd-56ef-1234567890ab
Resource: aws_lambda_alias
Creates a Lambda function alias. Creates an alias that points to the specified Lambda function version.
For information about Lambda and how to use it, see What is AWS Lambda?
(http://docs.aws.amazon.com/lambda/latest/dg/welcome.html) For information about function aliases, see CreateAlias
(http://docs.aws.amazon.com/lambda/latest/dg/API_CreateAlias.html) and AliasRoutingConfiguration
(https://docs.aws.amazon.com/lambda/latest/dg/API_AliasRoutingConfiguration.html) in the API docs.
Example Usage
# resource header reconstructed; names are illustrative
resource "aws_lambda_alias" "test_lambda_alias" {
name = "my_alias"
function_name = "${aws_lambda_function.lambda_function_test.arn}"
function_version = "1"
routing_config = {
additional_version_weights = {
"2" = 0.5
}
}
}
Argument Reference
name - (Required) Name for the alias you are creating. Pattern: (?!^[0-9]+$)([a-zA-Z0-9-_]+)
function_name - (Required) The function ARN of the Lambda function for which you want to create an alias.
function_version - (Required) Lambda function version for which you are creating the alias. Pattern: (\$LATEST|
[0-9]+) .
routing_config - (Optional) The Lambda alias' route configuration settings. Fields documented below
additional_version_weights - (Optional) A map that defines the proportion of events that should be sent to
different versions of a lambda function.
Attributes Reference
arn - The Amazon Resource Name (ARN) identifying your Lambda function alias.
invoke_arn - The ARN to be used for invoking Lambda Function from API Gateway - to be used in
aws_api_gateway_integration (/docs/providers/aws/r/api_gateway_integration.html)'s uri
Import
Lambda Function Aliases can be imported using the function_name/alias , e.g.
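A sketch of the import command, assuming a function named my_test_lambda_function with an alias named my_alias:
$ terraform import aws_lambda_alias.test_lambda_alias my_test_lambda_function/my_alias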
For information about Lambda and how to use it, see What is AWS Lambda?
(http://docs.aws.amazon.com/lambda/latest/dg/welcome.html). For information about event source mappings, see
CreateEventSourceMapping (http://docs.aws.amazon.com/lambda/latest/dg/API_CreateEventSourceMapping.html) in the
API docs.
Example Usage
DynamoDB
Kinesis
SQS
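The DynamoDB, Kinesis, and SQS examples did not survive extraction; a minimal SQS sketch follows, with the queue and function references as illustrative assumptions:
resource "aws_lambda_event_source_mapping" "example" {
  event_source_arn = "${aws_sqs_queue.sqs_queue_test.arn}"
  function_name    = "${aws_lambda_function.example.arn}"
}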
Argument Reference
batch_size - (Optional) The largest number of records that Lambda will retrieve from your event source at the time
of invocation. Defaults to 100 for DynamoDB and Kinesis, 10 for SQS.
event_source_arn - (Required) The event source ARN - can either be a Kinesis or DynamoDB stream.
enabled - (Optional) Determines if the mapping will be enabled on creation. Defaults to true .
function_name - (Required) The name or the ARN of the Lambda function that will be subscribing to events.
starting_position - (Optional) The position in the stream where AWS Lambda should start reading. Must be one of
AT_TIMESTAMP (Kinesis only), LATEST or TRIM_HORIZON if getting events from Kinesis or DynamoDB. Must not be
provided if getting events from SQS. More information about these positions can be found in the AWS DynamoDB
Streams API Reference
(https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_GetShardIterator.html) and AWS
Kinesis API Reference (https://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetShardIterator.html#Kinesis-
GetShardIterator-request-ShardIteratorType).
Attributes Reference
function_arn - The ARN of the Lambda function the event source mapping is sending events to. (Note: this is a
computed value that differs from function_name above.)
last_processing_result - The result of the last AWS Lambda invocation of your Lambda function.
state_transition_reason - The reason the event source mapping is in its current state.
Import
Lambda event source mappings can be imported using the UUID (event source mapping identifier), e.g.
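A sketch of the import command, with a placeholder UUID:
$ terraform import aws_lambda_event_source_mapping.event_source_mapping 12345678-1234-1234-1234-123456789012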
Note: Terraform will recreate the imported resource as AWS does not expose startingPosition information for
existing Lambda event source mappings. For information about retrieving event source mappings, see
GetEventSourceMapping (https://docs.aws.amazon.com/lambda/latest/dg/API_GetEventSourceMapping.html) in the API
docs.
Resource: aws_lambda_function
Provides a Lambda Function resource. Lambda allows you to trigger execution of code in response to events in AWS,
enabling serverless backend solutions. The Lambda Function itself includes source code and runtime configuration.
For information about Lambda and how to use it, see What is AWS Lambda?
(https://docs.aws.amazon.com/lambda/latest/dg/welcome.html)
For a detailed example of setting up Lambda and API Gateway, see Serverless Applications with AWS Lambda and API
Gateway. (https://learn.hashicorp.com/terraform/aws/lambda-api-gateway)
NOTE: Due to AWS Lambda improved VPC networking changes that began deploying in September 2019
(https://aws.amazon.com/blogs/compute/announcing-improved-vpc-networking-for-aws-lambda-functions/), EC2
subnets and security groups associated with Lambda Functions can take up to 45 minutes to successfully delete.
Terraform AWS Provider version 2.31.0 and later automatically handles this increased timeout, however prior versions
require setting the customizable deletion timeouts of those Terraform resources to 45 minutes ( delete = "45m" ).
AWS and HashiCorp are working together to reduce the amount of time required for resource deletion and updates can
be tracked in this GitHub issue (https://github.com/terraform-providers/terraform-provider-aws/issues/10329).
Example Usage
Basic Example
resource "aws_iam_role" "iam_for_lambda" {
name = "iam_for_lambda"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
# resource header reconstructed; names and file paths are illustrative
resource "aws_lambda_function" "test_lambda" {
filename = "lambda_function_payload.zip"
function_name = "lambda_function_name"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "exports.test"
source_code_hash = "${filebase64sha256("lambda_function_payload.zip")}"
runtime = "nodejs8.10"
environment {
variables = {
foo = "bar"
}
}
}
Lambda Layers
NOTE: The aws_lambda_layer_version attribute values for arn and layer_arn were swapped in version 2.0.0 of
the Terraform AWS Provider. For version 1.x, use layer_arn references. For version 2.x, use arn references.
resource "aws_lambda_layer_version" "example" {
layers = ["${aws_lambda_layer_version.example.arn}"]
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:*",
"Effect": "Allow"
}
]
}
EOF
}
Once you have created your deployment package you can specify it either directly as a local file (using the filename
argument) or indirectly via Amazon S3 (using the s3_bucket , s3_key and s3_object_version arguments). When
providing the deployment package via S3 it may be useful to use the aws_s3_bucket_object resource
(/docs/providers/aws/r/s3_bucket_object.html) to upload it.
For larger deployment packages it is recommended by Amazon to upload via S3, since the S3 API has better support for
uploading large files efficiently.
Argument Reference
filename - (Optional) The path to the function's deployment package within the local filesystem. If defined, the s3_ -
prefixed options cannot be used.
s3_bucket - (Optional) The S3 bucket location containing the function's deployment package. Conflicts with
filename . This bucket must reside in the same AWS region where you are creating the Lambda function.
s3_key - (Optional) The S3 key of an object containing the function's deployment package. Conflicts with filename .
s3_object_version - (Optional) The object version containing the function's deployment package. Conflicts with
filename .
dead_letter_config - (Optional) Nested block to configure the function's dead letter queue. See details below.
role - (Required) IAM role attached to the Lambda Function. This governs both who / what can invoke your Lambda
Function, as well as what resources our Lambda Function has access to. See Lambda Permission Model
(https://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html) for more details.
layers - (Optional) List of Lambda Layer Version ARNs (maximum of 5) to attach to your Lambda Function. See
Lambda Layers (https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html)
memory_size - (Optional) Amount of memory in MB your Lambda Function can use at runtime. Defaults to 128 . See
Limits (https://docs.aws.amazon.com/lambda/latest/dg/limits.html)
timeout - (Optional) The amount of time your Lambda Function has to run in seconds. Defaults to 3 . See Limits
(https://docs.aws.amazon.com/lambda/latest/dg/limits.html)
reserved_concurrent_executions - (Optional) The amount of reserved concurrent executions for this lambda
function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations. Defaults to
Unreserved Concurrency Limits -1 . See Managing Concurrency
(https://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html)
publish - (Optional) Whether to publish creation/change as new Lambda Function Version. Defaults to false .
vpc_config - (Optional) Provide this to allow your function to access your VPC. Fields documented below. See
Lambda in VPC (http://docs.aws.amazon.com/lambda/latest/dg/vpc.html)
environment - (Optional) The Lambda environment's configuration settings. Fields documented below.
kms_key_arn - (Optional) The ARN for the KMS encryption key.
source_code_hash - (Optional) Used to trigger updates. Must be set to a base64-encoded SHA256 hash of the
package file specified with either filename or s3_key . The usual way to set this is
filebase64sha256("file.zip") (Terraform 0.11.12 and later) or base64sha256(file("file.zip")) (Terraform
0.11.11 and earlier), where "file.zip" is the local filename of the lambda function source archive.
target_arn - (Required) The ARN of an SNS topic or SQS queue to notify when an invocation fails. If this option is
used, the function's IAM role must be granted suitable access to write to the target object, which means allowing either
the sns:Publish or sqs:SendMessage action on this ARN, depending on which service is targeted.
mode - (Required) Can be either PassThrough or Active . If PassThrough, Lambda will only trace the request from
an upstream service if it contains a tracing header with "sampled=1". If Active, Lambda will respect any tracing header
it receives from an upstream service. If no tracing header is received, Lambda will call X-Ray for a tracing decision.
subnet_ids - (Required) A list of subnet IDs associated with the Lambda function.
security_group_ids - (Required) A list of security group IDs associated with the Lambda function.
NOTE: if both subnet_ids and security_group_ids are empty then vpc_config is considered to be empty or unset.
variables - (Optional) A map that defines environment variables for the Lambda function.
Attributes Reference
arn - The Amazon Resource Name (ARN) identifying your Lambda Function.
qualified_arn - The Amazon Resource Name (ARN) identifying your Lambda Function Version (if versioning is
enabled via publish = true ).
invoke_arn - The ARN to be used for invoking Lambda Function from API Gateway - to be used in
aws_api_gateway_integration (/docs/providers/aws/r/api_gateway_integration.html)'s uri
source_code_hash - Base64-encoded representation of raw SHA-256 sum of the zip file, provided either via
filename or s3_* parameters.
Timeouts
aws_lambda_function provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
create - (Default 10m ) How long to wait for slow uploads or EC2 throttling errors.
Import
Lambda Functions can be imported using the function_name , e.g.
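A sketch of the import command, assuming a function named my_test_lambda_function:
$ terraform import aws_lambda_function.test_lambda my_test_lambda_function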
For information about Lambda Layers and how to use them, see AWS Lambda Layers
(https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html)
Example Usage
compatible_runtimes = ["nodejs8.10"]
}
Once you have created your deployment package you can specify it either directly as a local file (using the filename
argument) or indirectly via Amazon S3 (using the s3_bucket , s3_key and s3_object_version arguments). When
providing the deployment package via S3 it may be useful to use the aws_s3_bucket_object resource
(/docs/providers/aws/r/s3_bucket_object.html) to upload it.
For larger deployment packages it is recommended by Amazon to upload via S3, since the S3 API has better support for
uploading large files efficiently.
Argument Reference
layer_name (Required) A unique name for your Lambda Layer
filename (Optional) The path to the function's deployment package within the local filesystem. If defined, the s3_ -
prefixed options cannot be used.
s3_bucket - (Optional) The S3 bucket location containing the function's deployment package. Conflicts with
filename . This bucket must reside in the same AWS region where you are creating the Lambda function.
s3_key - (Optional) The S3 key of an object containing the function's deployment package. Conflicts with filename .
s3_object_version - (Optional) The object version containing the function's deployment package. Conflicts with
filename .
compatible_runtimes - (Optional) A list of Runtimes
(https://docs.aws.amazon.com/lambda/latest/dg/API_PublishLayerVersion.html#SSS-PublishLayerVersion-request-
CompatibleRuntimes) this layer is compatible with. Up to 5 runtimes can be specified.
license_info - (Optional) License info for your Lambda Layer. See License Info
(https://docs.aws.amazon.com/lambda/latest/dg/API_PublishLayerVersion.html#SSS-PublishLayerVersion-request-
LicenseInfo).
source_code_hash - (Optional) Used to trigger updates. Must be set to a base64-encoded SHA256 hash of the
package file specified with either filename or s3_key . The usual way to set this is
${filebase64sha256("file.zip")} (Terraform 0.11.12 or later) or ${base64sha256(file("file.zip"))}
(Terraform 0.11.11 and earlier), where "file.zip" is the local filename of the lambda layer source archive.
Attributes Reference
arn - The Amazon Resource Name (ARN) of the Lambda Layer with version.
layer_arn - The Amazon Resource Name (ARN) of the Lambda Layer without version.
Import
Lambda Layers can be imported using arn .
$ terraform import \
aws_lambda_layer_version.test_layer \
arn:aws:lambda:_REGION_:_ACCOUNT_ID_:layer:_LAYER_NAME_:_LAYER_VERSION_
Resource: aws_lambda_permission
Creates a Lambda permission to allow external sources invoking the Lambda function (e.g. CloudWatch Event Rule, SNS or
S3).
Example Usage
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
Usage with SNS
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
source_arn = "${aws_api_gateway_rest_api.MyDemoAPI.execution_arn}/*/*/*"
}
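Because the examples above were truncated during extraction, here is a minimal sketch of the aws_lambda_permission resource itself; the statement ID, function reference, and source ARN are illustrative assumptions:
resource "aws_lambda_permission" "allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.test_lambda.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "arn:aws:events:eu-west-1:111122223333:rule/RunDaily"
}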
Argument Reference
action - (Required) The AWS Lambda action you want to allow in this statement. (e.g. lambda:InvokeFunction )
event_source_token - (Optional) The Event Source Token to validate. Used with Alexa Skills
(https://developer.amazon.com/docs/custom-skills/host-a-custom-skill-as-an-aws-lambda-function.html#use-aws-cli).
function_name - (Required) Name of the Lambda function whose resource policy you are updating
principal - (Required) The principal who is getting this permission. e.g. s3.amazonaws.com , an AWS account ID, or
any valid AWS service principal such as events.amazonaws.com or sns.amazonaws.com .
qualifier - (Optional) Query parameter to specify function version or alias name. The permission will then apply to
the specific qualified ARN. e.g. arn:aws:lambda:aws-region:acct-id:function:function-name:2
source_account - (Optional) This parameter is used for S3 and SES. The AWS account ID (without a hyphen) of the
source owner.
source_arn - (Optional) When granting Amazon S3 or CloudWatch Events permission to invoke your function, you
should specify this field with the Amazon Resource Name (ARN) for the S3 Bucket or CloudWatch Events Rule as its
value. This ensures that only events generated from the specified bucket or rule can invoke the function. API Gateway
ARNs have a unique structure described here (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-
gateway-control-access-using-iam-policies-to-invoke-api.html).
statement_id_prefix - (Optional) A statement identifier prefix. Terraform will generate a unique suffix. Conflicts
with statement_id .
Import
Lambda permission statements can be imported using function_name/statement_id, with an optional qualifier, e.g.
$ terraform import aws_lambda_permission.test_lambda_permission my_test_lambda_function/AllowExecutionFro
mCloudWatch
Example Usage
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"]
}
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"]
}
lifecycle {
create_before_destroy = true
}
}
lifecycle {
create_before_destroy = true
}
}
With this setup Terraform generates a unique name for your Launch Configuration and can then update the AutoScaling
Group without conflict before destroying the previous Launch Configuration.
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"]
}
lifecycle {
create_before_destroy = true
}
}
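The snippets above come from an example that pairs an aws_ami data source with a launch configuration; a minimal sketch of the launch configuration portion follows, assuming the data source is named ubuntu and using illustrative values:
resource "aws_launch_configuration" "as_conf" {
  name_prefix   = "terraform-lc-example-"
  image_id      = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}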
Argument Reference
The following arguments are supported:
name - (Optional) The name of the launch configuration. If you leave this blank, Terraform will auto-generate a unique
name.
name_prefix - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with name .
iam_instance_profile - (Optional) The name attribute of the IAM instance profile to associate with launched
instances.
key_name - (Optional) The key name that should be used for the instance.
vpc_classic_link_id - (Optional) The ID of a ClassicLink-enabled VPC. Only applies to EC2-Classic instances. (eg.
vpc-2730681a )
vpc_classic_link_security_groups - (Optional) The IDs of one or more security groups for the specified
ClassicLink-enabled VPC (eg. sg-46ae3d11 ).
user_data - (Optional) The user data to provide when launching the instance. Do not pass gzip-compressed data via
this argument; see user_data_base64 instead.
user_data_base64 - (Optional) Can be used instead of user_data to pass base64-encoded binary data directly. Use
this instead of user_data whenever the value is not a valid UTF-8 string. For example, gzip-encoded user data must
be base64-encoded and passed via this argument to avoid corruption.
root_block_device - (Optional) Customize details about the root block device of the instance. See Block Devices
below for details.
ebs_block_device - (Optional) Additional EBS block devices to attach to the instance. See Block Devices below for
details.
ephemeral_block_device - (Optional) Customize Ephemeral (also known as "Instance Store") volumes on the
instance. See Block Devices below for details.
spot_price - (Optional; Default: On-demand price) The maximum price to use for reserving spot instances.
placement_tenancy - (Optional) The tenancy of the instance. Valid values are "default" or "dedicated" , see
AWS's Create Launch Configuration
(http://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_CreateLaunchConfiguration.html) for more details
Block devices
Each of the *_block_device attributes controls a portion of the AWS Launch Configuration's "Block Device Mapping". It's a
good idea to familiarize yourself with AWS's Block Device Mapping docs
(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html) to understand the
implications of using these attributes.
volume_type - (Optional) The type of volume. Can be "standard" , "gp2" , or "io1" . (Default: "standard" ).
delete_on_termination - (Optional) Whether the volume should be destroyed on instance termination (Default:
true ).
encrypted - (Optional) Whether the volume should be encrypted or not. (Default: false ).
volume_type - (Optional) The type of volume. Can be "standard" , "gp2" , or "io1" . (Default: "standard" ).
delete_on_termination - (Optional) Whether the volume should be destroyed on instance termination (Default:
true ).
encrypted - (Optional) Whether the volume should be encrypted or not. Do not use this option if you are using
snapshot_id as the encrypted flag will be determined by the snapshot. (Default: false ).
Each AWS Instance type has a di erent set of Instance Store block devices available for attachment. AWS publishes a list
(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes) of which
ephemeral devices are available on each type. The devices are always identified by the virtual_name in the format
"ephemeral{0..N}" .
NOTE: Changes to *_block_device configuration of existing resources cannot currently be detected by Terraform.
After updating to block device configuration, resource recreation can be manually triggered by using the taint
command (/docs/commands/taint.html).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Launch configurations can be imported using the name , e.g.
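A sketch of the import command, assuming a launch configuration named terraform-lc-example:
$ terraform import aws_launch_configuration.as_conf terraform-lc-example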
Example Usage
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
capacity_reservation_specification {
capacity_reservation_preference = "open"
}
credit_specification {
cpu_credits = "standard"
}
disable_api_termination = true
ebs_optimized = true
elastic_gpu_specifications {
type = "test"
}
elastic_inference_accelerator {
type = "eia1.medium"
}
iam_instance_profile {
name = "test"
}
image_id = "ami-test"
instance_initiated_shutdown_behavior = "terminate"
instance_market_options {
market_type = "spot"
}
instance_type = "t2.micro"
kernel_id = "test"
key_name = "test"
license_specification {
license_configuration_arn = "arn:aws:license-manager:eu-west-1:123456789012:license-configuration:lic
-0123456789abcdef0123456789abcdef"
}
monitoring {
enabled = true
}
network_interfaces {
associate_public_ip_address = true
}
placement {
availability_zone = "us-west-2a"
}
ram_disk_id = "test"
vpc_security_group_ids = ["sg-12345678"]
tag_specifications {
resource_type = "instance"
tags = {
Name = "test"
}
}
user_data = "${base64encode(...)}"
}
Argument Reference
The following arguments are supported:
name - The name of the launch template. If you leave this blank, Terraform will auto-generate a unique name.
name_prefix - Creates a unique name beginning with the specified prefix. Conflicts with name .
block_device_mappings - Specify volumes to attach to the instance besides the volumes specified by the AMI. See
Block Devices below for details.
credit_specification - Customize the credit specification of the instance. See Credit Specification below for more
details.
elastic_gpu_specifications - The elastic GPU to attach to the instance. See Elastic GPU below for more details.
iam_instance_profile - The IAM Instance Profile to launch the instance with. See Instance Profile below for more
details.
instance_market_options - The market (purchasing) option for the instance. See Market Options below for details.
license_specification - A list of license specifications to associate with. See License Specification below for more
details.
monitoring - The monitoring option for the instance. See Monitoring below for more details.
network_interfaces - Customize network interfaces to be attached at instance boot time. See Network Interfaces
below for more details.
placement - The placement of the instance. See Placement below for more details.
security_group_names - A list of security group names to associate with. If you are creating Instances in a VPC, use
vpc_security_group_ids instead.
tag_specifications - The tags to apply to the resources during launch. See Tag Specifications below for more
details.
user_data - The Base64-encoded user data to provide when launching the instance.
Block devices
Configure additional volumes of the instance besides specified by the AMI. It's a good idea to familiarize yourself with AWS's
Block Device Mapping docs (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html)
to understand the implications of using these attributes.
To find out more information for an existing AMI to override the configuration, such as device_name , you can use the AWS
CLI ec2 describe-images command (https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-images.html).
no_device - Suppresses the specified device included in the AMI's block device mapping.
delete_on_termination - Whether the volume should be destroyed on instance termination (Default: true ).
kms_key_id - AWS Key Management Service (AWS KMS) customer master key (CMK) to use when creating the
encrypted volume. encrypted must be set to true when this is set.
volume_type - The type of volume. Can be "standard" , "gp2" , or "io1" . (Default: "standard" ).
cpu_credits - The credit option for CPU usage. Can be "standard" or "unlimited" . T3 instances are launched as
unlimited by default. T2 instances are launched as standard by default.
Elastic GPU
Attach an elastic GPU the instance.
The elastic_gpu_specifications block supports the following:
Instance Profile
The IAM Instance Profile (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-
profiles.html) to attach.
arn - The Amazon Resource Name (ARN) of the instance pro le.
Market Options
The market (purchasing) option for the instances.
block_duration_minutes - The required duration in minutes. This value must be a multiple of 60.
instance_interruption_behavior - The behavior when a Spot Instance is interrupted. Can be hibernate , stop ,
or terminate . (Default: terminate ).
max_price - The maximum hourly price you're willing to pay for the Spot Instances.
Monitoring
The monitoring block supports the following:
enabled - If true , the launched EC2 instance will have detailed monitoring enabled.
Network Interfaces
Attaches one or more Network Interfaces (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) to the
instance.
Check limitations for autoscaling group in Creating an Auto Scaling Group Using a Launch Template Guide
(https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-launch-template.html#limitations)
associate_public_ip_address - Associate a public ip address with the network interface. Boolean value.
ipv6_addresses - One or more specific IPv6 addresses from the IPv6 CIDR block range of your subnet. Conflicts with
ipv6_address_count
ipv6_address_count - The number of IPv6 addresses to assign to a network interface. Conflicts with
ipv6_addresses
ipv4_address_count - The number of secondary private IPv4 addresses to assign to a network interface. Conflicts
with ipv4_addresses
ipv4_addresses - One or more private IPv4 addresses to associate. Conflicts with ipv4_address_count
Placement
The Placement Group (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) of the instance.
tenancy - The tenancy of the instance (if the instance is running in a VPC). Can be default , dedicated , or host .
resource_type - The type of resource to tag. Valid values are instance and volume .
Attributes Reference
The following attributes are exported along with all argument references:
Import
Launch Templates can be imported using the id , e.g.
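A sketch of the import command, with a placeholder launch template ID:
$ terraform import aws_launch_template.web lt-12345678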
Example Usage
listener {
instance_port = 8000
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
}
Argument Reference
The following arguments are supported:
load_balancer - (Required) The load balancer to which the policy should be attached.
lb_port - (Required) The load balancer port to which the policy should be applied. This must be an active listener on
the load balancer.
cookie_expiration_period - (Optional) The time period after which the session cookie should be considered stale,
expressed in seconds.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
cookie_expiration_period - The time period after which the session cookie is considered stale, expressed in
seconds.
Resource: aws_lb
Provides a Load Balancer resource.
Example Usage
enable_deletion_protection = true
access_logs {
bucket = "${aws_s3_bucket.lb_logs.bucket}"
prefix = "test-lb"
enabled = true
}
tags = {
Environment = "production"
}
}
enable_deletion_protection = true
tags = {
Environment = "production"
}
}
Specifying Elastic IPs
# resource header reconstructed; the name is illustrative
resource "aws_lb" "example" {
name = "example"
load_balancer_type = "network"
subnet_mapping {
subnet_id = "${aws_subnet.example1.id}"
allocation_id = "${aws_eip.example1.id}"
}
subnet_mapping {
subnet_id = "${aws_subnet.example2.id}"
allocation_id = "${aws_eip.example2.id}"
}
}
Argument Reference
NOTE: Please note that internal LBs can only use ipv4 as the ip_address_type. You can only change to dualstack
ip_address_type if the selected subnets are IPv6 enabled.
name - (Optional) The name of the LB. This name must be unique within your AWS account, can have a maximum of
32 characters, must contain only alphanumeric characters or hyphens, and must not begin or end with a hyphen. If not
specified, Terraform will autogenerate a name beginning with tf-lb .
name_prefix - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with name .
load_balancer_type - (Optional) The type of load balancer to create. Possible values are application or network .
The default value is application .
security_groups - (Optional) A list of security group IDs to assign to the LB. Only valid for Load Balancers of type
application .
subnets - (Optional) A list of subnet IDs to attach to the LB. Subnets cannot be updated for Load Balancers of type
network . Changing this value for load balancers of type network will force a recreation of the resource.
idle_timeout - (Optional) The time in seconds that the connection is allowed to be idle. Only valid for Load Balancers
of type application . Default: 60.
enable_deletion_protection - (Optional) If true, deletion of the load balancer will be disabled via the AWS API. This
will prevent Terraform from deleting the load balancer. Defaults to false .
enable_cross_zone_load_balancing - (Optional) If true, cross-zone load balancing of the load balancer will be
enabled. This is a network load balancer feature. Defaults to false .
enable_http2 - (Optional) Indicates whether HTTP/2 is enabled in application load balancers. Defaults to true .
ip_address_type - (Optional) The type of IP addresses used by the subnets for your load balancer. The possible
values are ipv4 and dualstack
prefix - (Optional) The S3 bucket prefix. Logs are stored in the root if not configured.
enabled - (Optional) Boolean to enable / disable access_logs . Defaults to false , even when bucket is specified.
subnet_id - (Required) The id of the subnet to attach to the load balancer. You can specify only one subnet
per Availability Zone.
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
zone_id - The canonical hosted zone ID of the load balancer (to be used in a Route 53 Alias record).
Timeouts
aws_lb provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
Import
LBs can be imported using their ARN, e.g.
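A sketch of the import command, with a placeholder load balancer ARN:
$ terraform import aws_lb.bar arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/my-load-balancer/50dc6c495c0c9188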
This resource is for additional certificates and does not replace the default certificate on the listener.
Example Usage
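The original example did not survive extraction; a minimal sketch, assuming an existing aws_lb_listener and aws_acm_certificate defined elsewhere:
resource "aws_lb_listener_certificate" "example" {
  listener_arn    = "${aws_lb_listener.front_end.arn}"
  certificate_arn = "${aws_acm_certificate.example.arn}"
}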
Argument Reference
The following arguments are supported:
listener_arn - (Required, Forces New Resource) The ARN of the listener to which to attach the certificate.
certificate_arn - (Required, Forces New Resource) The ARN of the certificate to attach to the listener.
Resource: aws_lb_listener
Provides a Load Balancer Listener resource.
Example Usage
Forward Action
default_action {
type = "forward"
target_group_arn = "${aws_lb_target_group.front_end.arn}"
}
}
Redirect Action
resource "aws_lb" "front_end" {
default_action {
type = "redirect"
redirect {
port = "443"
protocol = "HTTPS"
status_code = "HTTP_301"
}
}
}
Fixed-response Action
default_action {
type = "fixed-response"
fixed_response {
content_type = "text/plain"
message_body = "Fixed response content"
status_code = "200"
}
}
}
Authenticate-cognito Action
resource "aws_lb" "front_end" {
default_action {
type = "authenticate-cognito"
authenticate_cognito {
user_pool_arn = "${aws_cognito_user_pool.pool.arn}"
user_pool_client_id = "${aws_cognito_user_pool_client.client.id}"
user_pool_domain = "${aws_cognito_user_pool_domain.domain.domain}"
}
}
default_action {
type = "forward"
target_group_arn = "${aws_lb_target_group.front_end.arn}"
}
}
Authenticate-oidc Action
resource "aws_lb" "front_end" {
default_action {
type = "authenticate-oidc"
authenticate_oidc {
authorization_endpoint = "https://example.com/authorization_endpoint"
client_id = "client_id"
client_secret = "client_secret"
issuer = "https://example.com"
token_endpoint = "https://example.com/token_endpoint"
user_info_endpoint = "https://example.com/user_info_endpoint"
}
}
default_action {
type = "forward"
target_group_arn = "${aws_lb_target_group.front_end.arn}"
}
}
Argument Reference
The following arguments are supported:
load_balancer_arn - (Required, Forces New Resource) The ARN of the load balancer.
protocol - (Optional) The protocol for connections from clients to the load balancer. Valid values are TCP , TLS ,
UDP , TCP_UDP , HTTP and HTTPS . Defaults to HTTP .
ssl_policy - (Optional) The name of the SSL Policy for the listener. Required if protocol is HTTPS or TLS .
certificate_arn - (Optional) The ARN of the default SSL server certificate. Exactly one certificate is required if the
protocol is HTTPS. For adding additional SSL certificates, see the aws_lb_listener_certificate resource
(/docs/providers/aws/r/lb_listener_certificate.html).
NOTE: Please note that listeners that are attached to Application Load Balancers must use either HTTP or HTTPS
protocols while listeners that are attached to Network Load Balancers must use the TCP protocol.
Action Blocks (for default_action ) support the following:
type - (Required) The type of routing action. Valid values are forward , redirect , fixed-response ,
authenticate-cognito and authenticate-oidc .
target_group_arn - (Optional) The ARN of the Target Group to which to route traffic. Required if type is forward .
redirect - (Optional) Information for creating a redirect action. Required if type is redirect .
fixed_response - (Optional) Information for creating an action that returns a custom HTTP response. Required if
type is fixed-response .
NOTE: You can reuse URI components using the following reserved keywords: #{protocol} , #{host} , #{port} , #
{path} (the leading "/" is removed) and #{query} .
host - (Optional) The hostname. This component is not percent-encoded. The hostname can contain #{host} .
Defaults to #{host} .
path - (Optional) The absolute path, starting with the leading "/". This component is not percent-encoded. The path
can contain #{host}, #{path}, and #{port}. Defaults to /#{path} .
port - (Optional) The port. Specify a value from 1 to 65535 or #{port} . Defaults to #{port} .
protocol - (Optional) The protocol. Valid values are HTTP , HTTPS , or #{protocol} . Defaults to #{protocol} .
query - (Optional) The query parameters, URL-encoded when necessary, but not percent-encoded. Do not include the
leading "?". Defaults to #{query} .
status_code - (Required) The HTTP redirect code. The redirect is either permanent ( HTTP_301 ) or temporary
( HTTP_302 ).
content_type - (Required) The content type. Valid values are text/plain , text/css , text/html ,
application/javascript and application/json .
status_code - (Optional) The HTTP response code. Valid values are 2XX , 4XX , or 5XX .
authentication_request_extra_params - (Optional) The query parameters to include in the redirect request to the
authorization endpoint. Max: 10.
on_unauthenticated_request - (Optional) The behavior if the user is not authenticated. Valid values: deny , allow
and authenticate
scope - (Optional) The set of user claims to be requested from the IdP.
session_cookie_name - (Optional) The name of the cookie used to maintain session information.
user_pool_domain - (Required) The domain prefix or fully-qualified domain name of the Cognito user pool.
Authenticate OIDC Action Blocks (for authenticate_oidc ) support the following:
authentication_request_extra_params - (Optional) The query parameters to include in the redirect request to the authorization endpoint. Max: 10.
on_unauthenticated_request - (Optional) The behavior if the user is not authenticated. Valid values: deny , allow
and authenticate
scope - (Optional) The set of user claims to be requested from the IdP.
session_cookie_name - (Optional) The name of the cookie used to maintain session information.
Authentication Request Extra Params Blocks (for authentication_request_extra_params ) support the following:
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
Import
Listeners can be imported using their ARN, e.g.
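An import command would look something like the following; the listener ARN shown here is only a placeholder:
$ terraform import aws_lb_listener.front_end arn:aws:elasticloadbalancing:us-west-2:123456789012:listener/app/front-end-alb/8e4497da625e2d8a/9ab28ade35828f96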
Example Usage
action {
type = "forward"
target_group_arn = "${aws_lb_target_group.static.arn}"
}
condition {
field = "path-pattern"
values = ["/static/*"]
}
}
action {
type = "forward"
target_group_arn = "${aws_lb_target_group.static.arn}"
}
condition {
field = "host-header"
values = ["my-service.*.terraform.io"]
}
}
action {
type = "redirect"
redirect {
port = "443"
protocol = "HTTPS"
status_code = "HTTP_301"
}
}
condition {
field = "host-header"
values = ["my-service.*.terraform.io"]
}
}
action {
type = "fixed-response"
fixed_response {
content_type = "text/plain"
message_body = "HEALTHY"
status_code = "200"
}
}
condition {
field = "path-pattern"
values = ["/health"]
}
}
action {
type = "authenticate-cognito"
authenticate_cognito {
user_pool_arn = "${aws_cognito_user_pool.pool.arn}"
user_pool_client_id = "${aws_cognito_user_pool_client.client.id}"
user_pool_client_id = "${aws_cognito_user_pool_client.client.id}"
user_pool_domain = "${aws_cognito_user_pool_domain.domain.domain}"
}
}
action {
type = "forward"
target_group_arn = "${aws_lb_target_group.static.arn}"
}
}
action {
type = "authenticate-oidc"
authenticate_oidc {
authorization_endpoint = "https://example.com/authorization_endpoint"
client_id = "client_id"
client_secret = "client_secret"
issuer = "https://example.com"
token_endpoint = "https://example.com/token_endpoint"
user_info_endpoint = "https://example.com/user_info_endpoint"
}
}
action {
type = "forward"
target_group_arn = "${aws_lb_target_group.static.arn}"
}
}
Argument Reference
The following arguments are supported:
listener_arn - (Required, Forces New Resource) The ARN of the listener to which to attach the rule.
priority - (Optional) The priority for the rule between 1 and 50000 . Leaving it unset will automatically set the rule
with next available priority after currently existing highest rule. A listener can't have multiple rules with the same
priority.
type - (Required) The type of routing action. Valid values are forward , redirect , fixed-response ,
authenticate-cognito and authenticate-oidc .
target_group_arn - (Optional) The ARN of the Target Group to which to route traffic. Required if type is forward .
redirect - (Optional) Information for creating a redirect action. Required if type is redirect .
fixed_response - (Optional) Information for creating an action that returns a custom HTTP response. Required if
type is fixed-response .
authenticate_cognito - (Optional) Information for creating an authenticate action using Cognito. Required if type
is authenticate-cognito .
authenticate_oidc - (Optional) Information for creating an authenticate action using OIDC. Required if type is
authenticate-oidc .
NOTE: You can reuse URI components using the following reserved keywords: #{protocol} , #{host} , #{port} , #{path} (the leading "/" is removed) and #{query} .
host - (Optional) The hostname. This component is not percent-encoded. The hostname can contain #{host} .
Defaults to #{host} .
path - (Optional) The absolute path, starting with the leading "/". This component is not percent-encoded. The path
can contain #{host}, #{path}, and #{port}. Defaults to /#{path} .
port - (Optional) The port. Specify a value from 1 to 65535 or #{port} . Defaults to #{port} .
protocol - (Optional) The protocol. Valid values are HTTP , HTTPS , or #{protocol} . Defaults to #{protocol} .
query - (Optional) The query parameters, URL-encoded when necessary, but not percent-encoded. Do not include the
leading "?". Defaults to #{query} .
status_code - (Required) The HTTP redirect code. The redirect is either permanent ( HTTP_301 ) or temporary
( HTTP_302 ).
content_type - (Required) The content type. Valid values are text/plain , text/css , text/html ,
application/javascript and application/json .
status_code - (Optional) The HTTP response code. Valid values are 2XX , 4XX , or 5XX .
Authenticate Cognito Action Blocks (for authenticate_cognito ) support the following:
authentication_request_extra_params - (Optional) The query parameters to include in the redirect request to the authorization endpoint. Max: 10.
on_unauthenticated_request - (Optional) The behavior if the user is not authenticated. Valid values: deny , allow
and authenticate
scope - (Optional) The set of user claims to be requested from the IdP.
session_cookie_name - (Optional) The name of the cookie used to maintain session information.
Authenticate OIDC Action Blocks (for authenticate_oidc ) support the following:
authentication_request_extra_params - (Optional) The query parameters to include in the redirect request to the authorization endpoint. Max: 10.
on_unauthenticated_request - (Optional) The behavior if the user is not authenticated. Valid values: deny , allow
and authenticate
scope - (Optional) The set of user claims to be requested from the IdP.
session_cookie_name - (Optional) The name of the cookie used to maintain session information.
Authentication Request Extra Params Blocks (for authentication_request_extra_params ) support the following:
Condition Blocks (for condition ) support the following:
field - (Required) The name of the field. Must be one of path-pattern for path based routing or host-header for host based routing.
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
Import
Rules can be imported using their ARN, e.g.
$ terraform import aws_lb_listener_rule.front_end arn:aws:elasticloadbalancing:us-west-2:187416307283:listener-rule/app/test/8e4497da625e2d8a/9ab28ade35828f96/67b3d2d36dd7c26b
Resource: aws_lb_ssl_negotiation_policy
Provides a load balancer SSL negotiation policy, which allows an ELB to control the ciphers and protocols that are supported
during SSL negotiations between a client and a load balancer.
Example Usage
resource "aws_elb" "lb" {
name = "test-lb"
availability_zones = ["us-east-1a"]
listener {
instance_port = 8000
instance_protocol = "https"
lb_port = 443
lb_protocol = "https"
ssl_certificate_id = "arn:aws:iam::123456789012:server-certificate/certName"
}
}
resource "aws_lb_ssl_negotiation_policy" "foo" {
name = "foo-policy"
load_balancer = "${aws_elb.lb.id}"
lb_port = 443
attribute {
name = "Protocol-TLSv1"
value = "false"
}
attribute {
name = "Protocol-TLSv1.1"
value = "false"
}
attribute {
name = "Protocol-TLSv1.2"
value = "true"
}
attribute {
name = "Server-Defined-Cipher-Order"
value = "true"
}
attribute {
name = "ECDHE-RSA-AES128-GCM-SHA256"
value = "true"
}
attribute {
name = "AES128-GCM-SHA256"
value = "true"
}
attribute {
name = "EDH-RSA-DES-CBC3-SHA"
value = "false"
}
}
Argument Reference
The following arguments are supported:
load_balancer - (Required) The load balancer to which the policy should be attached.
lb_port - (Required) The load balancer port to which the policy should be applied. This must be an active listener on
the load balancer.
attribute - (Optional) An SSL Negotiation policy attribute. Each has two properties:
name - The name of the attribute
To set your attributes, please see the AWS Elastic Load Balancing Developer Guide
(http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-security-policy-table.html) for a listing of the
supported SSL protocols, SSL options, and SSL ciphers.
NOTE: The AWS documentation references Server Order Preference, which the AWS Elastic Load Balancing API refers to
as Server-Defined-Cipher-Order . If you wish to set Server Order Preference, use this value instead.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
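A minimal sketch of the aws_lb_target_group_attachment resource, registering an EC2 instance with a target group; the aws_lb_target_group.test and aws_instance.test resources are assumed to be managed elsewhere in the configuration:
resource "aws_lb_target_group_attachment" "test" {
  target_group_arn = "${aws_lb_target_group.test.arn}"
  target_id        = "${aws_instance.test.id}"
}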
Argument Reference
The following arguments are supported:
target_group_arn - (Required) The ARN of the target group with which to register targets
target_id - (Required) The ID of the target. This is the Instance ID for an instance, or the container ID for an ECS container. If the target type is ip , specify an IP address. If the target type is lambda , specify the ARN of the Lambda function.
availability_zone - (Optional) The Availability Zone where the IP address of the target is to be registered.
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
Import
Target Group Attachments cannot be imported.
Resource: aws_lb_target_group
Provides a Target Group resource for use with Load Balancer resources.
Example Usage
IP Target Group
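A minimal sketch of an IP-based target group; the aws_vpc.main resource and the name shown are assumptions:
resource "aws_lb_target_group" "ip_example" {
  name        = "tf-example-lb-tg-ip"
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = "${aws_vpc.main.id}"
}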
name - (Optional, Forces new resource) The name of the target group. If omitted, Terraform will assign a random,
unique name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with name . Cannot be longer than 6 characters.
port - (Optional, Forces new resource) The port on which targets receive traffic, unless overridden when registering a specific target. Required when target_type is instance or ip . Does not apply when target_type is lambda .
protocol - (Optional, Forces new resource) The protocol to use for routing traffic to the targets. Should be one of "TCP", "TLS", "UDP", "TCP_UDP", "HTTP" or "HTTPS". Required when target_type is instance or ip . Does not apply when target_type is lambda .
vpc_id - (Optional, Forces new resource) The identifier of the VPC in which to create the target group. Required when target_type is instance or ip . Does not apply when target_type is lambda .
deregistration_delay - (Optional) The amount of time for Elastic Load Balancing to wait before changing the state of a deregistering target from draining to unused. The range is 0-3600 seconds. The default value is 300 seconds.
slow_start - (Optional) The amount of time for targets to warm up before the load balancer sends them a full share of requests. The range is 30-900 seconds, or 0 to disable. The default value is 0 seconds.
lambda_multi_value_headers_enabled - (Optional) Boolean whether the request and response headers exchanged
between the load balancer and the Lambda function include arrays of values or strings. Only applies when
target_type is lambda .
proxy_protocol_v2 - (Optional) Boolean to enable / disable support for proxy protocol v2 on Network Load
Balancers. See doc (https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-
groups.html#proxy-protocol) for more information.
stickiness - (Optional) A Stickiness block. Stickiness blocks are documented below. stickiness is only valid if used
with Load Balancers of type Application
health_check - (Optional) A Health Check block. Health Check blocks are documented below.
target_type - (Optional, Forces new resource) The type of target that you must specify when registering targets with this target group. The possible values are instance (targets are specified by instance ID), ip (targets are specified by IP address) or lambda (targets are specified by Lambda function ARN). The default is instance . Note that you can't specify targets for a target group using both instance IDs and IP addresses. If the target type is ip , specify IP addresses from the subnets of the virtual private cloud (VPC) for the target group, the RFC 1918 range (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16), and the RFC 6598 range (100.64.0.0/10). You can't specify publicly routable IP addresses.
type - (Required) The type of sticky sessions. The only current possible value is lb_cookie .
cookie_duration - (Optional) The time period, in seconds, during which requests from a client should be routed to
the same target. After this time period expires, the load balancer-generated cookie is considered stale. The range is 1
second to 1 week (604800 seconds). The default value is 1 day (86400 seconds).
enabled - (Optional) Boolean to enable / disable stickiness . Default is true
NOTE: To help facilitate the authoring of modules that support target groups of any protocol, you can define stickiness regardless of the protocol chosen. However, for TCP target groups, enabled must be false .
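For illustration, a stickiness block inside an Application Load Balancer target group might look like the following sketch; the surrounding resource names and the aws_vpc.main reference are assumptions:
resource "aws_lb_target_group" "sticky_example" {
  name     = "tf-example-lb-tg-sticky"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "${aws_vpc.main.id}"

  stickiness {
    type            = "lb_cookie"
    cookie_duration = 86400
    enabled         = true
  }
}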
Note: The Health Check parameters you can set vary by the protocol of the Target Group. Many parameters cannot be set to custom values for Network Load Balancers at this time. See http://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateTargetGroup.html (http://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateTargetGroup.html) for a complete reference. Keep in mind that health checks produce actual requests to the backend: when target_type is set to lambda , the underlying function is invoked.
enabled - (Optional) Indicates whether health checks are enabled. Defaults to true.
interval - (Optional) The approximate amount of time, in seconds, between health checks of an individual target. Minimum value 5 seconds, maximum value 300 seconds. For lambda target groups, it needs to be greater than the timeout of the underlying Lambda function. Default 30 seconds.
path - (Required for HTTP/HTTPS ALB) The destination for the health check request. Applies to Application Load
Balancers only (HTTP/HTTPS), not Network Load Balancers (TCP).
port - (Optional) The port to use to connect with the target. Valid values are either ports 1-65535, or traffic-port . Defaults to traffic-port .
protocol - (Optional) The protocol to use to connect with the target. Defaults to HTTP . Not applicable when
target_type is lambda .
timeout - (Optional) The amount of time, in seconds, during which no response means a failed health check. For
Application Load Balancers, the range is 2 to 120 seconds, and the default is 5 seconds for the instance target type
and 30 seconds for the lambda target type. For Network Load Balancers, you cannot set a custom value, and the
default is 10 seconds for TCP and HTTPS health checks and 6 seconds for HTTP health checks.
healthy_threshold - (Optional) The number of consecutive health checks successes required before considering an
unhealthy target healthy. Defaults to 3.
unhealthy_threshold - (Optional) The number of consecutive health check failures required before considering the
target unhealthy . For Network Load Balancers, this value must be the same as the healthy_threshold . Defaults to
3.
matcher (Required for HTTP/HTTPS ALB) The HTTP codes to use when checking for a successful response from a
target. You can specify multiple values (for example, "200,202") or a range of values (for example, "200-299"). Applies to
Application Load Balancers only (HTTP/HTTPS), not Network Load Balancers (TCP).
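A sketch of a health_check block using the arguments above; the path, matcher and threshold values, along with the surrounding target group names, are illustrative assumptions:
resource "aws_lb_target_group" "health_example" {
  name     = "tf-example-lb-tg-health"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "${aws_vpc.main.id}"

  health_check {
    enabled             = true
    interval            = 30
    path                = "/health"
    port                = "traffic-port"
    protocol            = "HTTP"
    timeout             = 5
    healthy_threshold   = 3
    unhealthy_threshold = 3
    matcher             = "200-299"
  }
}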
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
Import
Target Groups can be imported using their ARN, e.g.
Note: License configurations can also be associated with launch templates by specifying the license_specifications block for an aws_launch_template .
Example Usage
filter {
name = "name"
values = ["amzn-ami-vpc-nat*"]
}
}
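A minimal association sketch; the aws_instance.example and aws_licensemanager_license_configuration.example resources referenced below, and the license_configuration_arn argument itself, are assumptions not shown in the truncated argument list that follows:
resource "aws_licensemanager_association" "example" {
  license_configuration_arn = "${aws_licensemanager_license_configuration.example.arn}"
  resource_arn              = "${aws_instance.example.arn}"
}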
Argument Reference
The following arguments are supported:
resource_arn - (Required) ARN of the resource associated with the license configuration.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Note: Removing the license_count attribute is not supported by the License Manager API - use terraform taint
aws_licensemanager_license_configuration.<id> to recreate the resource instead.
Example Usage
license_rules = [
"#minimumSockets=2",
]
tags = {
foo = "barr"
}
}
Argument Reference
The following arguments are supported:
license_counting_type - (Required) Dimension to use to track license inventory. Specify either vCPU , Instance ,
Core or Socket .
Rules
License rules should be in the format of #RuleType=RuleValue (see the example after this list). Supported rule types:
minimumVcpus - Resource must have minimum vCPU count in order to use the license. Default: 1
maximumVcpus - Resource must have maximum vCPU count in order to use the license. Default: unbounded, limit:
10000
minimumCores - Resource must have minimum core count in order to use the license. Default: 1
maximumCores - Resource must have maximum core count in order to use the license. Default: unbounded, limit:
10000
minimumSockets - Resource must have minimum socket count in order to use the license. Default: 1
maximumSockets - Resource must have maximum socket count in order to use the license. Default: unbounded, limit:
10000
allowedTenancy - Defines where the license can be used. If set, restricts license usage to selected tenancies. Specify a comma-delimited list of EC2-Default , EC2-DedicatedHost , EC2-DedicatedInstance
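For example, a configuration that tracks sockets and constrains usage with the rules above might look like the following sketch; the resource name and values are illustrative:
resource "aws_licensemanager_license_configuration" "example" {
  name                  = "example-socket-config"
  license_counting_type = "Socket"

  license_rules = [
    "#minimumSockets=2",
    "#maximumSockets=4",
    "#allowedTenancy=EC2-Default",
  ]
}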
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
License configurations can be imported using the id , e.g.
Note: Lightsail is currently only supported in a limited number of AWS Regions, please see "Regions and Availability
Zones in Amazon Lightsail" (https://lightsail.aws.amazon.com/ls/docs/overview/article/understanding-regions-and-
availability-zones-in-amazon-lightsail) for more details
Argument Reference
The following arguments are supported:
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
Note: Lightsail is currently only supported in a limited number of AWS Regions, please see "Regions and Availability
Zones in Amazon Lightsail" (https://lightsail.aws.amazon.com/ls/docs/overview/article/understanding-regions-and-
availability-zones-in-amazon-lightsail) for more details
Example Usage
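A minimal sketch using an Availability Zone, blueprint and bundle from the lists further below; the instance name and key pair name are placeholders:
resource "aws_lightsail_instance" "example" {
  name              = "example-instance"
  availability_zone = "us-east-1a"
  blueprint_id      = "amazon_linux_2018_03_0_2"
  bundle_id         = "nano_2_0"
  key_pair_name     = "some_key_pair_name"
}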
Argument Reference
The following arguments are supported:
availability_zone - (Required) The Availability Zone in which to create your instance (see list below)
blueprint_id - (Required) The ID for a virtual private server image (see list below)
bundle_id - (Required) The bundle of speci cation information (see list below)
key_pair_name - (Optional) The name of your key pair. Created in the Lightsail console (cannot use aws_key_pair
at this time)
user_data - (Optional) Launch script to configure the server with additional user data
Availability Zones
Lightsail currently supports the following Availability Zones (e.g. us-east-1a ):
ap-northeast-1{a,c,d}
ap-northeast-2{a,c}
ap-south-1{a,b}
ap-southeast-1{a,b,c}
ap-southeast-2{a,b,c}
ca-central-1{a,b}
eu-central-1{a,b,c}
eu-west-1{a,b,c}
eu-west-2{a,b,c}
eu-west-3{a,b,c}
us-east-1{a,b,c,d,e,f}
us-east-2{a,b,c}
us-west-2{a,b,c}
Blueprints
Lightsail currently supports the following Blueprint IDs:
OS Only
amazon_linux_2018_03_0_2
centos_7_1901_01
debian_8_7
debian_9_5
freebsd_11_1
opensuse_42_2
ubuntu_16_04_2
ubuntu_18_04
Apps and OS
drupal_8_5_6
gitlab_11_1_4_1
joomla_3_8_11
lamp_5_6_37_2
lamp_7_1_20_1
magento_2_2_5
mean_4_0_1
nginx_1_14_0_1
nodejs_10_8_0
plesk_ubuntu_17_8_11_1
redmine_3_4_6
wordpress_4_9_8
wordpress_multisite_4_9_8
Bundles
Lightsail currently supports the following Bundle IDs (e.g. an instance in ap-northeast-1 would use small_2_0 ):
Prefix
A Bundle ID starts with one of the below size prefixes:
nano_
micro_
small_
medium_
large_
xlarge_
2xlarge_
Suffix
A Bundle ID ends with one of the following suffixes depending on Availability Zone:
ap-northeast-1: 2_0
ap-northeast-2: 2_0
ap-south-1: 2_1
ap-southeast-1: 2_0
ap-southeast-2: 2_2
ca-central-1: 2_0
eu-central-1: 2_0
eu-west-1: 2_0
eu-west-2: 2_0
eu-west-3: 2_0
us-east-1: 2_0
us-east-2: 2_0
us-west-2: 2_0
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
availability_zone
blueprint_id
bundle_id
key_pair_name
user_data
Import
Lightsail Instances can be imported using their name, e.g.
Note: Lightsail is currently only supported in a limited number of AWS Regions, please see "Regions and Availability
Zones in Amazon Lightsail" (https://lightsail.aws.amazon.com/ls/docs/overview/article/understanding-regions-and-
availability-zones-in-amazon-lightsail) for more details
Create new Key Pair, encrypting the private key with a PGP Key
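A sketch of a key pair whose private key material is encrypted with a PGP key; the key pair name and keybase username are placeholders:
resource "aws_lightsail_key_pair" "example" {
  name    = "example-key-pair"
  pgp_key = "keybase:example_username"
}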
Argument Reference
The following arguments are supported:
name - (Optional) The name of the Lightsail Key Pair. If omitted, a unique name will be generated by Terraform
pgp_key – (Optional) An optional PGP key to encrypt the resulting private key material. Only used when creating a
new key pair
public_key - (Required) The public key material. This public key will be imported into Lightsail
NOTE: A PGP key is not required; however, it is strongly encouraged. Without a PGP key, the private key material will be stored in state unencrypted. pgp_key is ignored if public_key is supplied.
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
fingerprint - The MD5 public key fingerprint as specified in section 4 of RFC 4716.
private_key - the private key, base64 encoded. This is only populated when creating a new key, and when no
pgp_key is provided
encrypted_private_key – the private key material, base 64 encoded and encrypted with the given pgp_key . This is
only populated when creating a new key and pgp_key is supplied
encrypted_fingerprint - The MD5 public key fingerprint for the encrypted private key
Import
Lightsail Key Pairs cannot be imported, because the private and public key are only available on initial creation.
Resource: aws_lightsail_static_ip_attachment
Provides a static IP address attachment - relationship between a Lightsail static IP & Lightsail instance.
Note: Lightsail is currently only supported in a limited number of AWS Regions, please see "Regions and Availability
Zones in Amazon Lightsail" (https://lightsail.aws.amazon.com/ls/docs/overview/article/understanding-regions-and-
availability-zones-in-amazon-lightsail) for more details
Example Usage
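A minimal sketch; the static_ip_name and instance_name argument names, and the Lightsail resources they reference, are assumptions for illustration:
resource "aws_lightsail_static_ip_attachment" "example" {
  static_ip_name = "${aws_lightsail_static_ip.example.name}"
  instance_name  = "${aws_lightsail_instance.example.name}"
}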
Argument Reference
The following arguments are supported:
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
Note: Lightsail is currently only supported in a limited number of AWS Regions, please see "Regions and Availability
Zones in Amazon Lightsail" (https://lightsail.aws.amazon.com/ls/docs/overview/article/understanding-regions-and-
availability-zones-in-amazon-lightsail) for more details
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
Example Usage
listener {
instance_port = 443
instance_protocol = "http"
lb_port = 443
lb_protocol = "https"
ssl_certificate_id = "arn:aws:iam::000000000000:server-certificate/wu-tang.net"
}
tags = {
Name = "wu-tang"
}
}
policy_attribute {
name = "PublicKey"
value = "${file("wu-
- tang-
- pubkey")}"
}
}
policy_attribute {
name = "PublicKeyPolicyName"
value = "${aws_load_balancer_policy.wu-tang-root-ca-pubkey-policy.policy_name}"
}
}
policy_names = [
"${aws_load_balancer_policy.wu-tang-root-ca-backend-auth-policy.policy_name}",
]
}
Where the file wu-tang-pubkey in the current directory contains only the public key of the certificate.
cat wu-tang-ca.pem | openssl x509 -pubkey -noout | grep -v '\-\-\-\-' | tr -d '\n' > wu-tang-pubkey
This example shows how to enable backend authentication for an ELB as well as customize the TLS settings.
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
listener {
instance_port = 443
instance_protocol = "http"
lb_port = 443
lb_protocol = "https"
ssl_certificate_id = "arn:aws:iam::000000000000:server-certificate/wu-tang.net"
}
tags = {
Name = "wu-tang"
}
}
policy_attribute {
name = "ECDHE-ECDSA-AES128-GCM-SHA256"
value = "true"
}
policy_attribute {
name = "Protocol-TLSv1.2"
value = "true"
}
}
policy_names = [
"${aws_load_balancer_policy.wu-tang-ssl.policy_name}",
]
}
This example shows how to customize the TLS settings of an HTTPS listener.
listener {
instance_port = 443
instance_protocol = "http"
lb_port = 443
lb_protocol = "https"
ssl_certificate_id = "arn:aws:iam::000000000000:server-certificate/wu-tang.net"
}
tags = {
Name = "wu-tang"
}
}
policy_attribute {
name = "Reference-Security-Policy"
value = "ELBSecurityPolicy-TLS-1-1-2017-01"
}
}
policy_names = [
"${aws_load_balancer_policy.wu-tang-ssl-tls-1-1.policy_name}",
]
}
This example shows how to add a Predefined Security Policy for ELBs (https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html).
Argument Reference
The following arguments are supported:
load_balancer_port - (Required) The load balancer listener port to apply the policy to.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The ID of the policy.
load_balancer_port - The load balancer listener port the policies are applied to
Resource: aws_load_balancer_policy
Provides a load balancer policy, which can be attached to an ELB listener or backend server.
Example Usage
listener {
instance_port = 443
instance_protocol = "http"
lb_port = 443
lb_protocol = "https"
ssl_certificate_id = "arn:aws:iam::000000000000:server-certificate/wu-tang.net"
}
tags = {
Name = "wu-tang"
}
}
policy_attribute {
name = "PublicKey"
value = "${file("wu-
- tang-
- pubkey")}"
}
}
policy_attribute {
name = "PublicKeyPolicyName"
value = "${aws_load_balancer_policy.wu-tang-root-ca-pubkey-policy.policy_name}"
}
}
policy_attribute {
name = "ECDHE-ECDSA-AES128-GCM-SHA256"
value = "true"
}
policy_attribute {
name = "Protocol-TLSv1.2"
value = "true"
}
}
policy_attribute {
name = "Reference-Security-Policy"
value = "ELBSecurityPolicy-TLS-1-1-2017-01"
}
}
policy_names = [
"${aws_load_balancer_policy.wu-tang-root-ca-backend-auth-policy.policy_name}",
]
}
policy_names = [
"${aws_load_balancer_policy.wu-tang-ssl.policy_name}",
]
}
Where the file wu-tang-pubkey in the current directory contains only the public key of the certificate.
cat wu-tang-ca.pem | openssl x509 -pubkey -noout | grep -v '\-\-\-\-' | tr -d '\n' > wu-tang-pubkey
This example shows how to enable backend authentication for an ELB as well as customize the TLS settings.
Argument Reference
The following arguments are supported:
NOTE: Before using Amazon Macie for the first time it must be enabled manually. Instructions are here (https://docs.aws.amazon.com/macie/latest/userguide/macie-setting-up.html#macie-setting-up-enable).
Example Usage
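A minimal sketch of associating a member account with Amazon Macie; the account ID is a placeholder:
resource "aws_macie_member_account_association" "example" {
  member_account_id = "123456789012"
}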
Argument Reference
The following arguments are supported:
member_account_id - (Required) The ID of the AWS account that you want to associate with Amazon Macie as a
member account.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
NOTE: Before using Amazon Macie for the first time it must be enabled manually. Instructions are here (https://docs.aws.amazon.com/macie/latest/userguide/macie-setting-up.html#macie-setting-up-enable).
Example Usage
resource "aws_macie_s3_bucket_association" "example" {
bucket_name = "tf-macie-example" # the bucket name is a placeholder
classification_type {
one_time = "FULL"
}
}
Argument Reference
The following arguments are supported:
bucket_name - (Required) The name of the S3 bucket that you want to associate with Amazon Macie.
classification_type - (Optional) The configuration of how Amazon Macie classifies the S3 objects.
member_account_id - (Optional) The ID of the Amazon Macie member account whose S3 resources you want to associate with Macie. If member_account_id isn't specified, the action associates the specified S3 resources with Macie for the current master account.
prefix - (Optional) Object key prefix identifying one or more S3 objects to which the association applies.
continuous - (Optional) A string value indicating that Macie performs a one-time classification of all of the existing objects in the bucket. The only valid value is the default value, FULL .
one_time - (Optional) A string value indicating whether or not Macie performs a one-time classification of all of the existing objects in the bucket. Valid values are NONE and FULL . Defaults to NONE , indicating that Macie only classifies objects that are added after the association was created.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
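A minimal sketch using the two arguments described below; the aws_vpc.foo and aws_route_table.bar resources are assumed to be defined elsewhere:
resource "aws_main_route_table_association" "a" {
  vpc_id         = "${aws_vpc.foo.id}"
  route_table_id = "${aws_route_table.bar.id}"
}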
Argument Reference
The following arguments are supported:
vpc_id - (Required) The ID of the VPC whose main route table should be set
route_table_id - (Required) The ID of the Route Table to set as the new main route table for the target VPC
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Notes
On VPC creation, the AWS API always creates an initial Main Route Table. This resource records the ID of that Route Table
under original_route_table_id . The "Delete" action for a main_route_table_association consists of resetting this
original table as the Main Route Table for the VPC. You'll see this additional Route Table in the AWS console; it must remain
intact in order for the main_route_table_association delete to work properly.
Resource: aws_media_package_channel
Provides an AWS Elemental MediaPackage Channel.
Example Usage
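A minimal sketch; the channel_id and description arguments shown are assumptions, since the argument list below does not enumerate them:
resource "aws_media_package_channel" "example" {
  channel_id  = "example-channel"
  description = "An example MediaPackage channel"
}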
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Media Package Channels can be imported via the channel ID, e.g.
Example Usage
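A minimal sketch using the name argument described below:
resource "aws_media_store_container" "example" {
  name = "example"
}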
Argument Reference
The following arguments are supported:
name - (Required) The name of the container. Must contain alphanumeric characters or underscores.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
MediaStore Container can be imported using the MediaStore Container Name, e.g.
Example Usage
policy = <<EOF
<<
{
"Version": "2012-10-17",
"Statement": [{
"Sid": "MediaStoreFullAccess",
"Action": [ "mediastore:*" ],
"Principal": {"AWS" : "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"},
"Effect": "Allow",
"Resource": "arn:aws:mediastore:${data.aws_caller_identity.current.account_id}:${data.aws_region.
current.name}:container/${aws_media_store_container.example.name}/*",
"Condition": {
"Bool": { "aws:SecureTransport": "true" }
}
}]
}
EOF
}
Argument Reference
The following arguments are supported:
policy - (Required) The contents of the policy. For more information about building AWS IAM policy documents with
Terraform, see the AWS IAM Policy Document Guide (/docs/providers/aws/guides/iam-policy-documents.html).
Import
MediaStore Container Policy can be imported using the MediaStore Container Name, e.g.
$ terraform import aws_media_store_container_policy.example example
Resource: aws_mq_broker
Provides an MQ Broker Resource. This resource also manages users for the broker.
Changes to an MQ Broker can occur when you change a parameter, such as configuration or user , and are reflected in the next maintenance window. Because of this, Terraform may report a difference in its planning phase because a modification has not yet taken place. You can use the apply_immediately flag to instruct the service to apply the change immediately (see documentation below).
Note: using apply_immediately can result in a brief downtime as the broker reboots.
Note: All arguments including the username and password will be stored in the raw state as plain-text. Read more
about sensitive data in state (/docs/state/sensitive-data.html).
Example Usage
resource "aws_mq_broker" "example" {
broker_name = "example"
configuration {
id = "${aws_mq_configuration.test.id}"
revision = "${aws_mq_configuration.test.latest_revision}"
}
engine_type = "ActiveMQ"
engine_version = "5.15.0"
host_instance_type = "mq.t2.micro"
security_groups = ["${aws_security_group.test.id}"]
user {
username = "ExampleUser"
password = "MindTheGap"
}
}
Argument Reference
The following arguments are supported:
apply_immediately - (Optional) Specifies whether any broker modifications are applied immediately, or during the next maintenance window. Default is false .
auto_minor_version_upgrade - (Optional) Enables automatic upgrades to new minor versions for brokers, as
Apache releases the versions.
broker_name - (Required) The name of the broker.
deployment_mode - (Optional) The deployment mode of the broker. Supported: SINGLE_INSTANCE and
ACTIVE_STANDBY_MULTI_AZ . Defaults to SINGLE_INSTANCE .
encryption_options - (Optional) Configuration block containing encryption options. See below.
engine_type - (Required) The type of broker engine. Currently, Amazon MQ supports only ActiveMQ .
engine_version - (Required) The version of the broker engine. See the AmazonMQ Broker Engine docs (https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/broker-engine.html) for supported versions.
publicly_accessible - (Optional) Whether to enable connections from applications outside of the VPC that hosts
the broker's subnets.
security_groups - (Required) The list of security group IDs assigned to the broker.
subnet_ids - (Optional) The list of subnet IDs in which to launch the broker. A SINGLE_INSTANCE deployment
requires one subnet. An ACTIVE_STANDBY_MULTI_AZ deployment requires two subnets.
user - (Optional) The list of all ActiveMQ usernames for the specified broker. See below.
Nested Fields
configuration
encryption_options
kms_key_id - (Optional) Amazon Resource Name (ARN) of Key Management Service (KMS) Customer Master Key (CMK) to use for encryption at rest. Requires setting use_aws_owned_key to false . To perform drift detection when AWS managed CMKs or customer managed CMKs are in use, this value must be configured.
use_aws_owned_key - (Optional) Boolean to enable an AWS owned Key Management Service (KMS) Customer Master Key (CMK) that is not in your account. Defaults to true . Setting to false without configuring kms_key_id will create an AWS managed Customer Master Key (CMK) aliased to aws/mq in your account.
maintenance_window_start_time
day_of_week - (Required) The day of the week. e.g. MONDAY , TUESDAY , or WEDNESDAY
time_zone - (Required) The time zone, UTC by default, in either the Country/City format or the UTC offset format, e.g. CET
NOTE: AWS currently does not support updating the maintenance window beyond resource creation.
logs
audit - (Optional) Enables audit logging. User management action made using JMX or the ActiveMQ Web Console is
logged. Defaults to false .
user
groups - (Optional) The list of groups (20 maximum) to which the ActiveMQ user belongs.
password - (Required) The password of the user. It must be 12 to 250 characters long, contain at least 4 unique characters, and must not contain commas.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
instances - A list of information about allocated brokers (both active & standby).
instances.0.console_url - The URL of the broker's ActiveMQ Web Console (http://activemq.apache.org/web-console.html).
instances.0.endpoints - The broker's wire-level protocol endpoints, in the following order and format, referenceable e.g. as instances.0.endpoints.0 (SSL):
ssl://broker-id.mq.us-west-2.amazonaws.com:61617
amqp+ssl://broker-id.mq.us-west-2.amazonaws.com:5671
stomp+ssl://broker-id.mq.us-west-2.amazonaws.com:61614
mqtt+ssl://broker-id.mq.us-west-2.amazonaws.com:8883
wss://broker-id.mq.us-west-2.amazonaws.com:61619
Import
MQ Broker is currently not importable.
Resource: aws_mq_configuration
Provides an MQ Configuration Resource.
Example Usage
data = <<DATA
<<
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<broker xmlns="http://activemq.apache.org/schema/core">
<plugins>
<forcePersistencyModeBrokerPlugin persistenceFlag="true"/>
<statisticsBrokerPlugin/>
<timeStampingBrokerPlugin ttlCeiling="86400000" zeroExpirationOverride="86400000"/>
</plugins>
</broker>
DATA
}
Argument Reference
The following arguments are supported:
data - (Required) The broker configuration in XML format. See the official docs (https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-broker-configuration-parameters.html) for supported parameters and the format of the XML.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The unique ID that Amazon MQ generates for the configuration.
Import
MQ Configurations can be imported using the configuration ID, e.g.
Example Usage
broker_node_group_info {
instance_type = "kafka.m5.large"
ebs_volume_size = "1000"
client_subnets = [
"${aws_subnet.subnet_az1.id}",
"${aws_subnet.subnet_az2.id}",
"${aws_subnet.subnet_az3.id}",
]
security_groups = [ "${aws_security_group.sg.id}" ]
security_groups = [ "${aws_security_group.sg.id}" ]
}
encryption_info {
encryption_at_rest_kms_key_arn = "${aws_kms_key.kms.arn}"
}
tags = {
foo = "bar"
}
}
output "zookeeper_connect_string" {
value = "${aws_msk_cluster.example.zookeeper_connect_string}"
}
output "bootstrap_brokers" {
description = "Plaintext connection host:port pairs"
value = "${aws_msk_cluster.example.bootstrap_brokers}"
}
output "bootstrap_brokers_tls" {
description = "TLS connection host:port pairs"
value = "${aws_msk_cluster.example.bootstrap_brokers_tls}"
}
Argument Reference
The following arguments are supported:
broker_node_group_info - (Required) Configuration block for the broker nodes of the Kafka cluster.
number_of_broker_nodes - (Required) The desired total number of broker nodes in the Kafka cluster. It must be a multiple of the number of specified client subnets.
client_authentication - (Optional) Configuration block for specifying client authentication. See below.
configuration_info - (Optional) Configuration block for specifying an MSK Configuration to attach to Kafka brokers. See below.
encryption_info - (Optional) Configuration block for specifying encryption. See below.
enhanced_monitoring - (Optional) Specify the desired enhanced MSK CloudWatch monitoring level. See Monitoring
Amazon MSK with Amazon CloudWatch (https://docs.aws.amazon.com/msk/latest/developerguide/monitoring.html)
ebs_volume_size - (Required) The size in GiB of the EBS volume for the data drive on each broker node.
instance_type - (Required) Specify the instance type to use for the kafka brokers. e.g. kafka.m5.large. (Pricing info
(https://aws.amazon.com/msk/pricing/))
security_groups - (Required) A list of the security groups to associate with the elastic network interfaces to control
who can communicate with the cluster.
az_distribution - (Optional) The distribution of broker nodes across availability zones (documentation
(https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#clusters-model-brokerazdistribution)). Currently
the only valid value is DEFAULT .
certificate_authority_arns - (Optional) List of ACM Certificate Authority Amazon Resource Names (ARNs).
revision - (Required) Revision of the MSK Configuration to use in the cluster.
encryption_at_rest_kms_key_arn - (Optional) You may specify a KMS key short ID or ARN (it will always output an ARN) to use for encrypting your data at rest. If no key is specified, an AWS managed KMS ('aws/msk' managed service) key will be used for encrypting the data at rest.
client_broker - (Optional) Encryption setting for data in transit between clients and brokers. Valid values: TLS ,
TLS_PLAINTEXT , and PLAINTEXT . Default value: TLS_PLAINTEXT .
in_cluster - (Optional) Whether data communication among broker nodes is encrypted. Default value: true .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
bootstrap_brokers_tls - A comma separated list of one or more DNS names (or IPs) and TLS port pairs for Kafka brokers suitable to bootstrap connectivity to the Kafka cluster. Only contains a value if client_broker encryption in transit is set to TLS_PLAINTEXT or TLS .
current_version - Current version of the MSK Cluster used for updates, e.g. K13V1IB3VIYZZH
encryption_info.0.encryption_at_rest_kms_key_arn - The ARN of the KMS key used for encryption at rest of the
broker data volumes.
zookeeper_connect_string - A comma separated list of one or more IP:port pairs to use to connect to the Apache
Zookeeper cluster.
Import
MSK clusters can be imported using the cluster arn , e.g.
NOTE: The API does not support deleting MSK configurations. Removing this Terraform resource will only remove the Terraform state for it.
Example Usage
server_properties = <<PROPERTIES
<<
auto.create.topics.enable = true
delete.topic.enable = true
PROPERTIES
}
Argument Reference
The following arguments are supported:
server_properties - (Required) Contents of the server.properties file. Supported properties are documented in the MSK Developer Guide (https://docs.aws.amazon.com/msk/latest/developerguide/msk-configuration-properties.html).
kafka_versions - (Required) List of Apache Kafka versions which can use this con guration.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
MSK configurations can be imported using the configuration ARN, e.g.
$ terraform import aws_msk_configuration.example arn:aws:kafka:us-west-2:123456789012:configuration/example/279c0212-d057-4dba-9aa9-1c4e5a25bfc7-3
Resource: aws_nat_gateway
Provides a resource to create a VPC NAT Gateway.
Example Usage
resource "aws_nat_gateway" "gw" {
allocation_id = "${aws_eip.nat.id}" # assumes an aws_eip.nat defined elsewhere
subnet_id = "${aws_subnet.public.id}" # assumes an aws_subnet.public defined elsewhere
tags = {
Name = "gw NAT"
}
}
Argument Reference
The following arguments are supported:
allocation_id - (Required) The Allocation ID of the Elastic IP address for the gateway.
subnet_id - (Required) The Subnet ID of the subnet in which to place the gateway.
Note: It's recommended to denote that the NAT Gateway depends on the Internet Gateway for the VPC in which the
NAT Gateway's subnet is located. For example:
depends_on = ["aws_internet_gateway.gw"]
}
Attributes Reference
In addition to all arguments above, the following attributes are exported:
subnet_id - The Subnet ID of the subnet in which the NAT gateway is placed.
network_interface_id - The ENI ID of the network interface created by the NAT gateway.
Import
NAT Gateways can be imported using the id , e.g.
Changes to a Neptune Cluster can occur when you manually change a parameter, such as backup_retention_period , and are reflected in the next maintenance window. Because of this, Terraform may report a difference in its planning phase because a modification has not yet taken place. You can use the apply_immediately flag to instruct the service to apply the change immediately (see documentation below).
Example Usage
Note: AWS Neptune does not support user name/password–based access control. See the AWS Docs
(https://docs.aws.amazon.com/neptune/latest/userguide/limits.html) for more information.
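A minimal sketch using a subset of the arguments described below; the identifier and backup window values are illustrative:
resource "aws_neptune_cluster" "default" {
  cluster_identifier      = "neptune-cluster-demo"
  engine                  = "neptune"
  preferred_backup_window = "07:00-09:00"
  skip_final_snapshot     = true
  apply_immediately       = true
}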
Argument Reference
The following arguments are supported:
apply_immediately - (Optional) Specifies whether any cluster modifications are applied immediately, or during the next maintenance window. Default is false .
availability_zones - (Optional) A list of EC2 Availability Zones that instances in the Neptune cluster can be created
in.
cluster_identifier - (Optional, Forces new resource) The cluster identifier. If omitted, Terraform will assign a random, unique identifier.
cluster_identifier_prefix - (Optional, Forces new resource) Creates a unique cluster identifier beginning with the specified prefix. Conflicts with cluster_identifier .
engine - (Optional) The name of the database engine to be used for this Neptune cluster. Defaults to neptune .
final_snapshot_identifier - (Optional) The name of your final Neptune snapshot when this Neptune cluster is deleted. If omitted, no final snapshot will be made.
iam_roles - (Optional) A List of ARNs for the IAM roles to associate to the Neptune Cluster.
kms_key_arn - (Optional) The ARN for the KMS encryption key. When specifying kms_key_arn , storage_encrypted
needs to be set to true.
neptune_subnet_group_name - (Optional) A Neptune subnet group to associate with this Neptune instance.
preferred_backup_window - (Optional) The daily time range during which automated backups are created if
automated backups are enabled using the BackupRetentionPeriod parameter. Time in UTC. Default: A 30-minute
window selected at random from an 8-hour block of time per region. e.g. 04:00-09:00
preferred_maintenance_window - (Optional) The weekly time range during which system maintenance can occur, in
(UTC) e.g. wed:04:00-wed:04:30
port - (Optional) The port on which the Neptune accepts connections. Default is 8182 .
replication_source_identifier - (Optional) ARN of a source Neptune cluster or Neptune instance if this Neptune
cluster is to be created as a Read Replica.
skip_final_snapshot - (Optional) Determines whether a final Neptune snapshot is created before the Neptune cluster is deleted. If true is specified, no Neptune snapshot is created. If false is specified, a Neptune snapshot is created before the Neptune cluster is deleted, using the value from final_snapshot_identifier . Default is false .
snapshot_identifier - (Optional) Specifies whether or not to create this cluster from a snapshot. You can use either the name or ARN when specifying a Neptune cluster snapshot, or the ARN when specifying a Neptune snapshot.
storage_encrypted - (Optional) Specifies whether the Neptune cluster is encrypted. The default is false if not specified.
vpc_security_group_ids - (Optional) List of VPC security groups to associate with the Cluster
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_neptune_cluster provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
delete - (Default 120 minutes ) Used for destroying cluster. This includes any cleanup task during the destroying
process.
Import
aws_neptune_cluster can be imported by using the cluster identifier, e.g.
You can simply add Neptune instances and Neptune manages the replication. You can use the count (/docs/configuration/resources.html#count) meta-parameter to make multiple instances and join them all to the same Neptune Cluster, or you may specify different Cluster Instance resources with various instance_class sizes.
Example Usage
The following example will create a Neptune cluster with two Neptune instances (one writer and one reader).
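A sketch of that layout; the identifiers and the instance_class value are illustrative assumptions:
resource "aws_neptune_cluster" "default" {
  cluster_identifier  = "neptune-cluster-demo"
  engine              = "neptune"
  skip_final_snapshot = true
  apply_immediately   = true
}

resource "aws_neptune_cluster_instance" "example" {
  count              = 2
  cluster_identifier = "${aws_neptune_cluster.default.id}"
  engine             = "neptune"
  instance_class     = "db.r4.large"
  apply_immediately  = true
}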
Argument Reference
The following arguments are supported:
apply_immediately - (Optional) Specifies whether any instance modifications are applied immediately, or during the next maintenance window. Default is false .
auto_minor_version_upgrade - (Optional) Indicates that minor engine upgrades will be applied automatically to the
instance during the maintenance window. Default is true .
availability_zone - (Optional) The EC2 Availability Zone that the neptune instance is created in.
engine - (Optional) The name of the database engine to be used for the neptune instance. Defaults to neptune .
Valid Values: neptune .
identifier - (Optional, Forces new resource) The identifier for the neptune instance; if omitted, Terraform will assign a random, unique identifier.
identifier_prefix - (Optional, Forces new resource) Creates a unique identifier beginning with the specified prefix. Conflicts with identifier .
neptune_parameter_group_name - (Optional) The name of the neptune parameter group to associate with this
instance.
port - (Optional) The port on which the DB accepts connections. Defaults to 8182 .
preferred_backup_window - (Optional) The daily time range during which automated backups are created if
automated backups are enabled. Eg: "04:00-09:00"
promotion_tier - (Optional) Default 0. Failover Priority setting on instance level. A reader instance with a lower tier has a higher priority to be promoted to writer.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
address - The hostname of the instance. See also endpoint and port .
kms_key_arn - The ARN for the KMS encryption key if one is set to the neptune cluster.
writer – Boolean indicating if this instance is writable. False indicates this instance is a read replica.
Timeouts
aws_neptune_cluster_instance provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
create - (Default 90 minutes ) How long to wait for creating instances to become available.
update - (Default 90 minutes ) How long to wait for updating instances to complete updates.
delete - (Default 90 minutes ) How long to wait for deleting instances to become fully deleted.
Import
aws_neptune_cluster_instance can be imported by using the instance identifier, e.g.
Example Usage
parameter {
name = "neptune_enable_audit_log"
value = 1
}
}
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The name of the neptune cluster parameter group. If omitted, Terraform will
assign a random, unique name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with name .
description - (Optional) The description of the neptune cluster parameter group. Defaults to "Managed by
Terraform".
apply_method - (Optional) Valid values are immediate and pending-reboot . Defaults to pending-reboot .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Neptune Cluster Parameter Groups can be imported using the name , e.g.
Example Usage
Argument Reference
The following arguments are supported:
db_cluster_identifier - (Required) The DB Cluster Identifier from which to take the snapshot.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
availability_zones - List of EC2 Availability Zones that instances in the DB cluster snapshot can be restored in.
db_cluster_snapshot_arn - The Amazon Resource Name (ARN) for the DB Cluster Snapshot.
kms_key_id - If storage_encrypted is true, the AWS KMS key identifier for the encrypted DB cluster snapshot.
port - Port that the DB cluster was listening on at the time of the snapshot.
source_db_cluster_snapshot_identifier - The DB Cluster Snapshot ARN that the DB Cluster Snapshot was copied from. It is only set for cross-account or cross-region copies.
Timeouts
create - (Default 20m ) How long to wait for the snapshot to be available.
Import
aws_neptune_cluster_snapshot can be imported by using the cluster snapshot identifier, e.g.
Example Usage
source_type = "db-instance"
source_ids = ["${aws_neptune_cluster_instance.example.id}"]
event_categories = [
"maintenance",
"availability",
"creation",
"backup",
"restoration",
"recovery",
"deletion",
"failover",
"failure",
"notification",
"configuration change",
"read replica",
]
tags = {
"env" = "test"
}
}
Argument Reference
The following arguments are supported:
event_categories - (Optional) A list of event categories for a source_type that you want to subscribe to. Run aws neptune describe-event-categories to find all the event categories.
name - (Optional) The name of the Neptune event subscription. By default generated by Terraform.
name_prefix - (Optional) The name of the Neptune event subscription. Conflicts with name .
sns_topic_arn - (Required) The ARN of the SNS topic to send events to.
source_ids - (Optional) A list of identifiers of the event sources for which events will be returned. If not specified, then all sources are included in the response. If specified, a source_type must also be specified.
source_type - (Optional) The type of source that will be generating the events. Valid options are db-instance , db-security-group , db-parameter-group , db-snapshot , db-cluster or db-cluster-snapshot . If not set, all sources will be subscribed to.
Attributes
The following additional attributes are provided:
arn - The Amazon Resource Name of the Neptune event notification subscription.
customer_aws_id - The AWS customer account associated with the Neptune event notification subscription.
Timeouts
aws_neptune_event_subscription provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
create - (Default 40m ) How long to wait for creating event subscription to become available.
delete - (Default 40m ) How long to wait for deleting event subscription to become fully deleted.
update - (Default 40m ) How long to wait for updating event subscription to complete updates.
Import
aws_neptune_event_subscription can be imported by using the event subscription name, e.g.
Example Usage
parameter {
name = "neptune_query_timeout"
value = "25"
}
}
Argument Reference
The following arguments are supported:
name - (Required, Forces new resource) The name of the Neptune parameter group.
description - (Optional) The description of the Neptune parameter group. Defaults to "Managed by Terraform".
apply_method - (Optional) The apply method of the Neptune parameter. Valid values are immediate and pending-reboot . Defaults to pending-reboot .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
tags = {
Name = "My neptune subnet group"
}
}
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The name of the neptune subnet group. If omitted, Terraform will assign a
random, unique name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with name .
description - (Optional) The description of the neptune subnet group. Defaults to "Managed by Terraform".
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Neptune Subnet groups can be imported using the name , e.g.
NOTE on Network ACLs and Network ACL Rules: Terraform currently provides both a standalone Network ACL Rule (/docs/providers/aws/r/network_acl_rule.html) resource and a Network ACL resource with rules defined in-line. At this time you cannot use a Network ACL with in-line rules in conjunction with any Network ACL Rule resources. Doing so will cause a conflict of rule settings and will overwrite rules.
Example Usage
egress {
protocol = "tcp"
rule_no = 200
action = "allow"
cidr_block = "10.3.0.0/18"
from_port = 443
to_port = 443
}
ingress {
protocol = "tcp"
rule_no = 100
action = "allow"
cidr_block = "10.3.0.0/18"
from_port = 80
to_port = 80
}
tags = {
Name = "main"
}
}
Argument Reference
The following arguments are supported:
ingress - (Optional) Specifies an ingress rule. Parameters defined below. This argument is processed in attribute-as-blocks mode (/docs/configuration/attr-as-blocks.html).
egress - (Optional) Specifies an egress rule. Parameters defined below. This argument is processed in attribute-as-blocks mode (/docs/configuration/attr-as-blocks.html).
protocol - (Required) The protocol to match. If using the -1 'all' protocol, you must specify a from and to port of 0.
cidr_block - (Optional) The CIDR block to match. This must be a valid network mask.
Note: For more information on ICMP types and codes, see here: https://www.iana.org/assignments/icmp-
parameters/icmp-parameters.xhtml (https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml)
Attributes Reference
In addition to all arguments above, the following attributes are exported:
owner_id - The ID of the AWS account that owns the network ACL.
Import
Network ACLs can be imported using the id , e.g.
NOTE on Network ACLs and Network ACL Rules: Terraform currently provides both a standalone Network ACL Rule resource and a Network ACL (/docs/providers/aws/r/network_acl.html) resource with rules defined in-line. At this time you cannot use a Network ACL with in-line rules in conjunction with any Network ACL Rule resources. Doing so will cause a conflict of rule settings and will overwrite rules.
Example Usage
cidr_block = "10.3.0.0/18" # example CIDR block
from_port = 22
to_port = 22
}
Argument Reference
The following arguments are supported:
rule_number - (Required) The rule number for the entry (for example, 100). ACL entries are processed in ascending
order by rule number.
egress - (Optional, bool) Indicates whether this is an egress rule (rule is applied to traffic leaving the subnet). Default false .
rule_action - (Required) Indicates whether to allow or deny the traffic that matches the rule. Accepted values: allow | deny
cidr_block - (Optional) The network range to allow or deny, in CIDR notation (for example 172.16.0.0/24 ).
ipv6_cidr_block - (Optional) The IPv6 CIDR block to allow or deny.
icmp_type - (Optional) ICMP protocol: The ICMP type. Required if specifying ICMP for the protocol. e.g. -1
icmp_code - (Optional) ICMP protocol: The ICMP code. Required if specifying ICMP for the protocol. e.g. -1
NOTE: If the value of protocol is -1 or all , the from_port and to_port values will be ignored and the rule will
apply to all ports.
NOTE: If the value of icmp_type is -1 (which results in a wildcard ICMP type), the icmp_code must also be set to -1
(wildcard ICMP code).
Note: For more information on ICMP types and codes, see here: https://www.iana.org/assignments/icmp-
parameters/icmp-parameters.xhtml (https://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml)
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
resource "aws_network_interface" "test" {
subnet_id = "${aws_subnet.public_a.id}" # the subnet reference is a placeholder
attachment {
instance = "${aws_instance.test.id}"
device_index = 1
}
}
Argument Reference
The following arguments are supported:
private_ips_count - (Optional) Number of secondary private IPs to assign to the ENI. The total number of private
IPs will be 1 + private_ips_count, as a primary private IP will be assigned to an ENI by default.
source_dest_check - (Optional) Whether to enable source destination checking for the ENI. Default true.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Network Interfaces can be imported using the id , e.g.
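For example (the resource name and ENI ID below are illustrative placeholders):
$ terraform import aws_network_interface.test eni-e5aa89a3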
NOTE on instances, interfaces, and security groups: Terraform currently provides the capability to assign security
groups via the aws_instance (/docs/providers/aws/d/instance.html) and the aws_network_interface
(/docs/providers/aws/r/network_interface.html) resources. Using this resource in conjunction with security groups
provided in-line in those resources will cause conflicts, and will lead to spurious diffs and undefined behavior - please
use one or the other.
Example Usage
The following provides a very basic example of setting up an instance (provided by instance ) in the default security group,
creating a security group (provided by sg ) and then attaching the security group to the instance's primary network
interface via the aws_network_interface_sg_attachment resource, named sg_attachment :
filter {
name = "name"
values = ["amzn-ami-hvm-*"]
}
owners = ["amazon"]
}
tags = {
"type" = "terraform-test-instance"
}
}
In this example, instance is provided by the aws_instance data source, fetching an external instance, possibly not
managed by Terraform. sg_attachment then attaches to the output instance's network_interface_id :
data "aws_instance" "instance" {
instance_id = "i-1234567890abcdef0"
}
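A minimal sketch of the attachment described above; the security group reference ( sg ) is assumed to be defined elsewhere in the configuration:
resource "aws_network_interface_sg_attachment" "sg_attachment" {
  security_group_id    = "${aws_security_group.sg.id}"
  network_interface_id = "${data.aws_instance.instance.network_interface_id}"
}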
Argument Reference
security_group_id - (Required) The ID of the security group.
Output Reference
There are no outputs for this resource.
Resource: aws_opsworks_application
Provides an OpsWorks application resource.
Example Usage
domains = [
"example.com",
"sub.example.com",
]
environment {
key = "key"
value = "value"
secure = false
}
app_source {
type = "git"
revision = "master"
url = "https://github.com/example.git"
}
enable_ssl = true
ssl_configuration {
private_key = "${file("./
/ foobar.key
key")}"
certificate = "${file("./
/ foobar.crt
crt")}"
}
document_root = "public"
auto_bundle_on_deploy = true
rails_env = "staging"
}
Argument Reference
The following arguments are supported:
short_name - (Required) A short, machine-readable name for the application. This can only be defined on resource
creation and is ignored on resource update.
stack_id - (Required) The id of the stack the application will belong to.
type - (Required) Opsworks application type. One of aws-flow-ruby , java , rails , php , nodejs , static or
other .
enable_ssl - (Optional) Whether to enable SSL for the app. This must be set in order to let
ssl_configuration.private_key , ssl_configuration.certificate and ssl_configuration.chain take
effect.
ssl_configuration - (Optional) The SSL configuration of the app. Object is described below.
document_root - (Optional) Subfolder for the document root for application of type rails .
auto_bundle_on_deploy - (Optional) Run bundle install when deploying for application of type rails .
rails_env - (Required if type = rails ) The name of the Rails environment for application of type rails .
aws_flow_ruby_settings - (Optional) Specify activity and workflow workers for your app using the aws-flow gem.
An app_source block supports the following arguments (can only be defined once per resource):
url - (Required) The URL where the app resource can be found.
revision - (Optional) For sources that are version-aware, the revision to use.
An ssl_configuration block supports the following arguments (can only be defined once per resource):
private_key - (Required) The private key; the contents of the certificate's domain.key file.
certificate - (Required) The contents of the certificate's domain.crt file.
chain - (Optional) Can be used to specify an intermediate certificate authority key or client authentication.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
short_name - (Required) A short, machine-readable name for the layer, which will be used to identify it in the Chef
node JSON.
stack_id - (Required) The id of the stack the layer will belong to.
auto_assign_elastic_ips - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
auto_assign_public_ips - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP
address to each of the layer's instances.
custom_instance_profile_arn - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
custom_security_group_ids - (Optional) Ids for a set of security groups to apply to the layer's instances.
install_updates_on_boot - (Optional) Whether to install OS and package updates on each instance when it boots.
instance_shutdown_timeout - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after
triggering the Shutdown event.
system_packages - (Optional) Names of a set of system packages to install on the layer's instances.
ebs_volume - (Optional) ebs_volume blocks, as described below, will each create an EBS volume and connect it to
the layer's instances.
The following extra optional arguments, all lists of Chef recipe names, allow custom Chef recipes to be applied to layer
instances at the five different lifecycle events, if custom cookbooks are enabled on the layer's stack:
custom_configure_recipes
custom_deploy_recipes
custom_setup_recipes
custom_shutdown_recipes
custom_undeploy_recipes
mount_point - (Required) The path to mount the EBS volume on the layer's instances.
number_of_disks - (Required) The number of disks to use for the EBS volume.
type - (Optional) The type of volume to create. This may be standard (the default), io1 or gp2 .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
OpsWorks Custom Layers can be imported using the id , e.g.
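For example (the layer ID below is an illustrative placeholder):
$ terraform import aws_opsworks_custom_layer.bar 00000000-0000-0000-0000-000000000000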
Example Usage
Argument Reference
The following arguments are supported:
stack_id - (Required) The id of the stack the layer will belong to.
auto_assign_elastic_ips - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
auto_assign_public_ips - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP
address to each of the layer's instances.
custom_instance_profile_arn - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
custom_security_group_ids - (Optional) Ids for a set of security groups to apply to the layer's instances.
install_updates_on_boot - (Optional) Whether to install OS and package updates on each instance when it boots.
instance_shutdown_timeout - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after
triggering the Shutdown event.
system_packages - (Optional) Names of a set of system packages to install on the layer's instances.
url - (Optional) The URL path to use for Ganglia. Defaults to "/ganglia".
ebs_volume - (Optional) ebs_volume blocks, as described below, will each create an EBS volume and connect it to
the layer's instances.
custom_json - (Optional) Custom JSON attributes to apply to the layer.
The following extra optional arguments, all lists of Chef recipe names, allow custom Chef recipes to be applied to layer
instances at the five different lifecycle events, if custom cookbooks are enabled on the layer's stack:
custom_configure_recipes
custom_deploy_recipes
custom_setup_recipes
custom_shutdown_recipes
custom_undeploy_recipes
mount_point - (Required) The path to mount the EBS volume on the layer's instances.
number_of_disks - (Required) The number of disks to use for the EBS volume.
type - (Optional) The type of volume to create. This may be standard (the default), io1 or gp2 .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
stack_id - (Required) The id of the stack the layer will belong to.
auto_assign_elastic_ips - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
auto_assign_public_ips - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP
address to each of the layer's instances.
custom_instance_profile_arn - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
custom_security_group_ids - (Optional) Ids for a set of security groups to apply to the layer's instances.
healthcheck_method - (Optional) HTTP method to use for instance healthchecks. Defaults to "OPTIONS".
healthcheck_url - (Optional) URL path to use for instance healthchecks. Defaults to "/".
install_updates_on_boot - (Optional) Whether to install OS and package updates on each instance when it boots.
instance_shutdown_timeout - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after
triggering the Shutdown event.
system_packages - (Optional) Names of a set of system packages to install on the layer's instances.
use_ebs_optimized_instances - (Optional) Whether to use EBS-optimized instances.
ebs_volume - (Optional) ebs_volume blocks, as described below, will each create an EBS volume and connect it to
the layer's instances.
The following extra optional arguments, all lists of Chef recipe names, allow custom Chef recipes to be applied to layer
instances at the five different lifecycle events, if custom cookbooks are enabled on the layer's stack:
custom_configure_recipes
custom_deploy_recipes
custom_setup_recipes
custom_shutdown_recipes
custom_undeploy_recipes
mount_point - (Required) The path to mount the EBS volume on the layer's instances.
number_of_disks - (Required) The number of disks to use for the EBS volume.
type - (Optional) The type of volume to create. This may be standard (the default), io1 or gp2 .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
layer_ids = [
"${aws_opsworks_custom_layer.my-layer.id}",
]
instance_type = "t2.micro"
os = "Amazon Linux 2015.09"
state = "stopped"
}
Argument Reference
The following arguments are supported:
stack_id - (Required) The id of the stack the instance will belong to.
layer_ids - (Required) The ids of the layers the instance will belong to.
state - (Optional) The desired state of the instance. Can be either "running" or "stopped" .
install_updates_on_boot - (Optional) Controls whether to install OS and package updates when the instance boots.
Defaults to true .
auto_scaling_type - (Optional) Creates load-based or time-based instances. If set, can be either: "load" or
"timer" .
availability_zone - (Optional) Name of the availability zone where instances will be created by default.
architecture - (Optional) Machine architecture for created instances. Can be either "x86_64" (the default) or
"i386"
ami_id - (Optional) The AMI to use for the instance. If an AMI is specified, os must be "Custom" .
root_device_type - (Optional) Name of the type of root device instances will have by default. Can be either "ebs"
or "instance-store"
ssh_key_name - (Optional) Name of the SSH keypair that instances will have by default.
tenancy - (Optional) Instance tenancy to use. Can be one of "default" , "dedicated" or "host"
virtualization_type - (Optional) Keyword to choose what virtualization mode created instances will use. Can be
either "paravirtual" or "hvm" .
root_block_device - (Optional) Customize details about the root block device of the instance. See Block Devices
below for details.
ebs_block_device - (Optional) Additional EBS block devices to attach to the instance. See Block Devices below for
details.
ephemeral_block_device - (Optional) Customize Ephemeral (also known as "Instance Store") volumes on the
instance. See Block Devices below for details.
Block devices
Each of the *_block_device attributes controls a portion of the AWS Instance's "Block Device Mapping". It's a good idea to
familiarize yourself with AWS's Block Device Mapping docs (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-
device-mapping-concepts.html) to understand the implications of using these attributes.
volume_type - (Optional) The type of volume. Can be "standard" , "gp2" , or "io1" . (Default: "standard" ).
delete_on_termination - (Optional) Whether the volume should be destroyed on instance termination (Default:
true ).
volume_type - (Optional) The type of volume. Can be "standard" , "gp2" , or "io1" . (Default: "standard" ).
delete_on_termination - (Optional) Whether the volume should be destroyed on instance termination (Default:
true ).
Modifying any ebs_block_device currently requires resource replacement.
Each AWS Instance type has a different set of Instance Store block devices available for attachment. AWS publishes a list
(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes) of which
ephemeral devices are available on each type. The devices are always identified by the virtual_name in the format
"ephemeral{0..N}" .
NOTE: Currently, changes to *_block_device configuration of existing resources cannot be automatically detected by
Terraform. After making updates to block device configuration, resource recreation can be manually triggered by using
the taint command (/docs/commands/taint.html).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
public_dns - The public DNS name assigned to the instance. For EC2-VPC, this is only available if you've enabled DNS
hostnames for your VPC
private_dns - The private DNS name assigned to the instance. Can only be used inside the Amazon EC2, and only
available if you've enabled DNS hostnames for your VPC
Timeouts
aws_opsworks_instance provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration
options:
create - (Default 10 minutes ) Used when the instance is created. It should cover the time needed for the instance
to start successfully.
delete - (Default 10 minutes ) Used when the instance is deleted. It should cover the time needed for the instance
to stop successfully.
update - (Default 10 minutes ) Used when the instance is changed. It should cover the time needed to either start or
stop the instance.
Import
Opsworks Instances can be imported using the instance id , e.g.
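For example (the instance ID below is an illustrative placeholder):
$ terraform import aws_opsworks_instance.my_instance 4d6d1710-ded9-42a1-b08e-b043ad7af1e2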
Example Usage
Argument Reference
The following arguments are supported:
stack_id - (Required) The id of the stack the layer will belong to.
app_server - (Optional) Keyword for the application container to use. Defaults to "tomcat".
app_server_version - (Optional) Version of the selected application container to use. Defaults to "7".
auto_assign_elastic_ips - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
auto_assign_public_ips - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP
address to each of the layer's instances.
custom_instance_profile_arn - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
custom_security_group_ids - (Optional) Ids for a set of security groups to apply to the layer's instances.
install_updates_on_boot - (Optional) Whether to install OS and package updates on each instance when it boots.
instance_shutdown_timeout - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after
triggering the Shutdown event.
jvm_type - (Optional) Keyword for the type of JVM to use. Defaults to openjdk .
system_packages - (Optional) Names of a set of system packages to install on the layer's instances.
The following extra optional arguments, all lists of Chef recipe names, allow custom Chef recipes to be applied to layer
instances at the five different lifecycle events, if custom cookbooks are enabled on the layer's stack:
custom_configure_recipes
custom_deploy_recipes
custom_setup_recipes
custom_shutdown_recipes
custom_undeploy_recipes
mount_point - (Required) The path to mount the EBS volume on the layer's instances.
number_of_disks - (Required) The number of disks to use for the EBS volume.
type - (Optional) The type of volume to create. This may be standard (the default), io1 or gp2 .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
stack_id - (Required) The id of the stack the layer will belong to.
allocated_memory - (Optional) Amount of memory to allocate for the cache on each instance, in megabytes. Defaults
to 512MB.
auto_assign_elastic_ips - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
auto_assign_public_ips - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP
address to each of the layer's instances.
custom_instance_profile_arn - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
custom_security_group_ids - (Optional) Ids for a set of security groups to apply to the layer's instances.
install_updates_on_boot - (Optional) Whether to install OS and package updates on each instance when it boots.
instance_shutdown_timeout - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after
triggering the Shutdown event.
system_packages - (Optional) Names of a set of system packages to install on the layer's instances.
ebs_volume - (Optional) ebs_volume blocks, as described below, will each create an EBS volume and connect it to
the layer's instances.
The following extra optional arguments, all lists of Chef recipe names, allow custom Chef recipes to be applied to layer
instances at the five different lifecycle events, if custom cookbooks are enabled on the layer's stack:
custom_configure_recipes
custom_deploy_recipes
custom_setup_recipes
custom_shutdown_recipes
custom_undeploy_recipes
mount_point - (Required) The path to mount the EBS volume on the layer's instances.
number_of_disks - (Required) The number of disks to use for the EBS volume.
type - (Optional) The type of volume to create. This may be standard (the default), io1 or gp2 .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Note: All arguments including the root password will be stored in the raw state as plain-text. Read more about sensitive
data in state (/docs/state/sensitive-data.html).
Example Usage
Argument Reference
The following arguments are supported:
stack_id - (Required) The id of the stack the layer will belong to.
auto_assign_elastic_ips - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
auto_assign_public_ips - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP
address to each of the layer's instances.
custom_instance_profile_arn - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
custom_security_group_ids - (Optional) Ids for a set of security groups to apply to the layer's instances.
install_updates_on_boot - (Optional) Whether to install OS and package updates on each instance when it boots.
instance_shutdown_timeout - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after
triggering the Shutdown event.
root_password_on_all_instances - (Optional) Whether to set the root user password to all instances in the stack
so they can access the instances in this layer.
system_packages - (Optional) Names of a set of system packages to install on the layer's instances.
The following extra optional arguments, all lists of Chef recipe names, allow custom Chef recipes to be applied to layer
instances at the five different lifecycle events, if custom cookbooks are enabled on the layer's stack:
custom_configure_recipes
custom_deploy_recipes
custom_setup_recipes
custom_shutdown_recipes
custom_undeploy_recipes
mount_point - (Required) The path to mount the EBS volume on the layer's instances.
number_of_disks - (Required) The number of disks to use for the EBS volume.
type - (Optional) The type of volume to create. This may be standard (the default), io1 or gp2 .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
stack_id - (Required) The id of the stack the layer will belong to.
auto_assign_elastic_ips - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
auto_assign_public_ips - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP
address to each of the layer's instances.
custom_instance_profile_arn - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
custom_security_group_ids - (Optional) Ids for a set of security groups to apply to the layer's instances.
install_updates_on_boot - (Optional) Whether to install OS and package updates on each instance when it boots.
instance_shutdown_timeout - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after
triggering the Shutdown event.
system_packages - (Optional) Names of a set of system packages to install on the layer's instances.
ebs_volume - (Optional) ebs_volume blocks, as described below, will each create an EBS volume and connect it to
the layer's instances.
The following extra optional arguments, all lists of Chef recipe names, allow custom Chef recipes to be applied to layer
instances at the five different lifecycle events, if custom cookbooks are enabled on the layer's stack:
custom_configure_recipes
custom_deploy_recipes
custom_setup_recipes
custom_shutdown_recipes
custom_undeploy_recipes
mount_point - (Required) The path to mount the EBS volume on the layer's instances.
number_of_disks - (Required) The number of disks to use for the EBS volume.
type - (Optional) The type of volume to create. This may be standard (the default), io1 or gp2 .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
allow_ssh - (Optional) Whether the user is allowed to use SSH to communicate with the instance
allow_sudo - (Optional) Whether the user is allowed to use sudo to elevate privileges
level - (Optional) The user's permission level. Must be one of deny , show , deploy , manage , iam_only
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The computed id of the permission. Please note that this is only used internally to identify the permission. This
value is not used in AWS.
Resource: aws_opsworks_php_app_layer
Provides an OpsWorks PHP application layer resource.
Example Usage
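A minimal configuration sketch using only the required stack_id ; the stack reference is a placeholder:
resource "aws_opsworks_php_app_layer" "app" {
  stack_id = "${aws_opsworks_stack.main.id}"
}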
Argument Reference
The following arguments are supported:
stack_id - (Required) The id of the stack the layer will belong to.
auto_assign_elastic_ips - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
auto_assign_public_ips - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP
address to each of the layer's instances.
custom_instance_profile_arn - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
custom_security_group_ids - (Optional) Ids for a set of security groups to apply to the layer's instances.
install_updates_on_boot - (Optional) Whether to install OS and package updates on each instance when it boots.
instance_shutdown_timeout - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after
triggering the Shutdown event.
system_packages - (Optional) Names of a set of system packages to install on the layer's instances.
ebs_volume - (Optional) ebs_volume blocks, as described below, will each create an EBS volume and connect it to
the layer's instances.
The following extra optional arguments, all lists of Chef recipe names, allow custom Chef recipes to be applied to layer
instances at the five different lifecycle events, if custom cookbooks are enabled on the layer's stack:
custom_configure_recipes
custom_deploy_recipes
custom_setup_recipes
custom_shutdown_recipes
custom_undeploy_recipes
mount_point - (Required) The path to mount the EBS volume on the layer's instances.
number_of_disks - (Required) The number of disks to use for the EBS volume.
type - (Optional) The type of volume to create. This may be standard (the default), io1 or gp2 .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
stack_id - (Required) The id of the stack the layer will belong to.
app_server - (Optional) Keyword for the app server to use. Defaults to "apache_passenger".
auto_assign_elastic_ips - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
auto_assign_public_ips - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP
address to each of the layer's instances.
bundler_version - (Optional) When OpsWorks is managing Bundler, which version to use. Defaults to "1.5.3".
custom_instance_profile_arn - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
custom_security_group_ids - (Optional) Ids for a set of security groups to apply to the layer's instances.
install_updates_on_boot - (Optional) Whether to install OS and package updates on each instance when it boots.
instance_shutdown_timeout - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after
triggering the Shutdown event.
system_packages - (Optional) Names of a set of system packages to install on the layer's instances.
use_ebs_optimized_instances - (Optional) Whether to use EBS-optimized instances.
ebs_volume - (Optional) ebs_volume blocks, as described below, will each create an EBS volume and connect it to
the layer's instances.
The following extra optional arguments, all lists of Chef recipe names, allow custom Chef recipes to be applied to layer
instances at the five different lifecycle events, if custom cookbooks are enabled on the layer's stack:
custom_configure_recipes
custom_deploy_recipes
custom_setup_recipes
custom_shutdown_recipes
custom_undeploy_recipes
mount_point - (Required) The path to mount the EBS volume on the layer's instances.
number_of_disks - (Required) The number of disks to use for the EBS volume.
type - (Optional) The type of volume to create. This may be standard (the default), io1 or gp2 .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Note: All arguments including the username and password will be stored in the raw state as plain-text. Read more
about sensitive data in state (/docs/state/sensitive-data.html).
Example Usage
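A minimal sketch; the stack and DB instance references are placeholders, and db_user / db_password are the credential arguments referred to by the note above:
resource "aws_opsworks_rds_db_instance" "my_instance" {
  stack_id            = "${aws_opsworks_stack.my_stack.id}"
  rds_db_instance_arn = "${aws_db_instance.my_instance.arn}"
  db_user             = "someUser"
  db_password         = "somePass"
}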
Argument Reference
The following arguments are supported:
stack_id - (Required) The stack to register a db instance for. Changing this will force a new resource.
rds_db_instance_arn - (Required) The db instance to register for this stack. Changing this will force a new resource.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The computed id. Please note that this is only used internally to identify the stack <-> instance relation. This value
is not used in AWS.
Resource: aws_opsworks_stack
Provides an OpsWorks stack resource.
Example Usage
tags = {
Name = "foobar-terraform-stack"
}
custom_json = <<EOT
{
"foobar": {
"version": "1.0.0"
}
}
EOT
}
Argument Reference
The following arguments are supported:
region - (Required) The name of the region where the stack will exist.
service_role_arn - (Required) The ARN of an IAM role that the OpsWorks service will act as.
default_instance_profile_arn - (Required) The ARN of an IAM Instance Profile that created instances will have by
default.
agent_version - (Optional) If set to "LATEST" , OpsWorks will automatically install the latest version.
color - (Optional) Color to paint next to the stack's resources in the OpsWorks console.
default_availability_zone - (Optional) Name of the availability zone where instances will be created by default.
This is required unless you set vpc_id .
configuration_manager_name - (Optional) Name of the configuration manager to use. Defaults to "Chef".
configuration_manager_version - (Optional) Version of the configuration manager to use. Defaults to "11.4".
custom_json - (Optional) User defined JSON passed to "Chef". Use a "here doc" for multiline JSON.
default_root_device_type - (Optional) Name of the type of root device instances will have by default.
default_ssh_key_name - (Optional) Name of the SSH keypair that instances will have by default.
default_subnet_id - (Optional) Id of the subnet in which instances will be created by default. Mandatory if vpc_id
is set, and forbidden if it isn't.
hostname_theme - (Optional) Keyword representing the naming scheme that will be used for instance hostnames
within this stack.
manage_berkshelf - (Optional) Boolean value controlling whether Opsworks will run Berkshelf for this stack.
use_custom_cookbooks - (Optional) Boolean value controlling whether the custom cookbook settings are enabled.
use_opsworks_security_groups - (Optional) Boolean value controlling whether the standard OpsWorks security
groups apply to created instances.
vpc_id - (Optional) The id of the VPC that this stack belongs to.
A custom_cookbooks_source block supports the following arguments:
url - (Required) The URL where the cookbooks resource can be found.
revision - (Optional) For sources that are version-aware, the revision to use.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
OpsWorks stacks can be imported using the id , e.g.
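For example (the stack ID below is an illustrative placeholder):
$ terraform import aws_opsworks_stack.bar 00000000-0000-0000-0000-000000000000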
Example Usage
Argument Reference
The following arguments are supported:
stack_id - (Required) The id of the stack the layer will belong to.
auto_assign_elastic_ips - (Optional) Whether to automatically assign an elastic IP address to the layer's instances.
auto_assign_public_ips - (Optional) For stacks belonging to a VPC, whether to automatically assign a public IP
address to each of the layer's instances.
custom_instance_profile_arn - (Optional) The ARN of an IAM profile that will be used for the layer's instances.
custom_security_group_ids - (Optional) Ids for a set of security groups to apply to the layer's instances.
install_updates_on_boot - (Optional) Whether to install OS and package updates on each instance when it boots.
instance_shutdown_timeout - (Optional) The time, in seconds, that OpsWorks will wait for Chef to complete after
triggering the Shutdown event.
system_packages - (Optional) Names of a set of system packages to install on the layer's instances.
ebs_volume - (Optional) ebs_volume blocks, as described below, will each create an EBS volume and connect it to
the layer's instances.
The following extra optional arguments, all lists of Chef recipe names, allow custom Chef recipes to be applied to layer
instances at the five different lifecycle events, if custom cookbooks are enabled on the layer's stack:
custom_configure_recipes
custom_deploy_recipes
custom_setup_recipes
custom_shutdown_recipes
custom_undeploy_recipes
mount_point - (Required) The path to mount the EBS volume on the layer's instances.
number_of_disks - (Required) The number of disks to use for the EBS volume.
type - (Optional) The type of volume to create. This may be standard (the default), io1 or gp2 .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
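A minimal sketch; the IAM user reference and SSH username are placeholders:
resource "aws_opsworks_user_profile" "my_profile" {
  user_arn     = "${aws_iam_user.user.arn}"
  ssh_username = "my_user"
}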
Argument Reference
The following arguments are supported:
allow_self_management - (Optional) Whether users can specify their own SSH public key through the My Settings
page
ssh_username - (Required) The ssh username, with which this user wants to log in
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Note: Account management must be done from the organization's master account.
WARNING: Deleting this Terraform resource will only remove an AWS account from an organization. Terraform will not
close the account. The member account must be prepared to be a standalone account beforehand. See the AWS
Organizations documentation
(https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html) for more
information.
Example Usage:
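A minimal sketch; the account name and email address are placeholders:
resource "aws_organizations_account" "account" {
  name  = "my_new_account"
  email = "john@doe.org"
}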
Argument Reference
The following arguments are supported:
email - (Required) The email address of the owner to assign to the new member account. This email address must
not already be associated with another AWS account.
iam_user_access_to_billing - (Optional) If set to ALLOW , the new account enables IAM users to access account
billing information if they have the required permissions. If set to DENY , then only the root user of the new account
can access account billing information.
parent_id - (Optional) Parent Organizational Unit ID or Root ID for the account. Defaults to the Organization default
Root ID. A configuration must be present for this argument to perform drift detection.
role_name - (Optional) The name of an IAM role that Organizations automatically preconfigures in the new member
account. This role trusts the master account, allowing users in the master account to assume the role, as permitted by
the master account administrator. The role has administrator permissions in the new member account. The
Organizations API provides no method for reading this information after account creation, so Terraform cannot
perform drift detection on its value and will always show a difference for a configured value after import unless
ignore_changes (/docs/configuration/resources.html#ignore_changes) is used.
Import
The AWS member account can be imported by using the account_id , e.g.
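For example (the account ID below is an illustrative placeholder):
$ terraform import aws_organizations_account.my_org 111111111111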
Certain resource arguments, like role_name , do not have an Organizations API method for reading the information after
account creation. If the argument is set in the Terraform configuration on an imported resource, Terraform will always show
a difference. To work around this behavior, either omit the argument from the Terraform configuration or use
ignore_changes (/docs/configuration/resources.html#ignore_changes) to hide the difference, e.g.
lifecycle {
ignore_changes = ["role_name"]
}
}
Resource: aws_organizations_organizational_unit
Provides a resource to create an organizational unit.
Example Usage
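A minimal sketch; the unit name is a placeholder and parent_id points at the organization root:
resource "aws_organizations_organizational_unit" "example" {
  name      = "example"
  parent_id = "${aws_organizations_organization.example.roots.0.id}"
}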
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
accounts - List of child accounts for this Organizational Unit. Does not return account information for child
Organizational Units. All elements have these attributes:
arn - ARN of the account
Import
AWS Organizations Organizational Units can be imported by using the id , e.g.
Example Usage:
feature_set = "ALL"
}
Argument Reference
The following arguments are supported:
aws_service_access_principals - (Optional) List of AWS service principal names for which you want to enable
integration with your organization. This is typically in the form of a URL, such as service-abbreviation.amazonaws.com.
Organization must have feature_set set to ALL . For additional information, see the AWS Organizations User Guide
(https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services.html).
enabled_policy_types - (Optional) List of Organizations policy types to enable in the Organization Root.
Organization must have feature_set set to ALL . For additional information about valid policy types (e.g.
SERVICE_CONTROL_POLICY ), see the AWS Organizations API Reference
(https://docs.aws.amazon.com/organizations/latest/APIReference/API_EnablePolicyType.html).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
accounts - List of organization accounts including the master account. For a list excluding the master account, see
the non_master_accounts attribute. All elements have these attributes:
arn - ARN of the account
non_master_accounts - List of organization accounts excluding the master account. For a list including the master
account, see the accounts attribute. All elements have these attributes:
arn - ARN of the account
policy_types - List of policy types enabled for this root. All elements have these attributes:
status - The status of the policy type as it relates to the associated root
Import
The AWS organization can be imported by using the id , e.g.
Example Usage
Organization Account
Organization Root
Organization Unit
Argument Reference
The following arguments are supported:
policy_id - (Required) The unique identifier (ID) of the policy that you want to attach to the target.
target_id - (Required) The unique identifier (ID) of the root, organizational unit, or account number that you want to
attach the policy to.
Import
aws_organizations_policy_attachment can be imported by using the target ID and policy ID, e.g. with an account target
$ terraform import aws_organizations_policy_attachment.account 123456789012:p-12345678
Resource: aws_organizations_policy
Provides a resource to manage an AWS Organizations policy
(https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies.html).
Example Usage
content = <<CONTENT
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
}
CONTENT
}
Argument Reference
The following arguments are supported:
content - (Required) The policy content to add to the new policy. For example, if you create a service control policy
(SCP) (https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html), this string must
be JSON text that specifies the permissions that admins in attached accounts can delegate to their users, groups, and
roles. For more information about the SCP syntax, see the Service Control Policy Syntax documentation
(https://docs.aws.amazon.com/organizations/latest/userguide/orgs_reference_scp-syntax.html).
type - (Optional) The type of policy to create. Currently, the only valid value is SERVICE_CONTROL_POLICY (SCP).
Attribute Reference
id - The unique identifier (ID) of the policy.
Import
aws_organizations_policy can be imported by using the policy ID, e.g.
Note: All arguments including the Client ID and Client Secret will be stored in the raw state as plain-text. Read more
about sensitive data in state (/docs/state/sensitive-data.html).
Example Usage
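A minimal sketch; the app reference and OAuth credentials are placeholders:
resource "aws_pinpoint_adm_channel" "channel" {
  application_id = "${aws_pinpoint_app.app.application_id}"
  client_id      = "client_id"
  client_secret  = "client_secret"
}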
Argument Reference
The following arguments are supported:
client_id - (Required) Client ID (part of OAuth Credentials) obtained via Amazon Developer Account.
client_secret - (Required) Client Secret (part of OAuth Credentials) obtained via Amazon Developer Account.
Import
Pinpoint ADM Channel can be imported using the application-id , e.g.
Note: All arguments, including certificates and tokens, will be stored in the raw state as plain-text. Read more about
sensitive data in state (/docs/state/sensitive-data.html).
Example Usage
certificate = "${file("./
/ certificate.pem
pem")}"
private_key = "${file("./
/ private_key.key
key")}"
}
Argument Reference
The following arguments are supported:
default_authentication_method - (Optional) The default authentication method used for APNs. NOTE: Amazon
Pinpoint uses this default for every APNs push notification that you send using the console. You can override the
default when you send a message programmatically using the Amazon Pinpoint API, the AWS CLI, or an AWS SDK. If
your default authentication type fails, Amazon Pinpoint doesn't attempt to use the other authentication type.
If you choose to use Certificate credentials you will have to provide:
certificate - (Required) The pem encoded TLS Certificate from Apple.
private_key - (Required) The Certificate Private Key file (ie. .key file).
If you choose to use Key credentials you will have to provide:
bundle_id - (Required) The ID assigned to your iOS app. To find this value, choose Certificates, IDs & Profiles, choose App IDs in the Identifiers section, and choose your app.
team_id - (Required) The ID assigned to your Apple developer account team. This value is provided on the Membership page.
token_key - (Required) The .p8 file that you download from your Apple developer account when you create an authentication key.
token_key_id - (Required) The ID assigned to your signing key. To find this value, choose Certificates, IDs & Profiles, and choose your key in the Keys section.
Import
Pinpoint APNs Channel can be imported using the application-id , e.g.
$ terraform import aws_pinpoint_apns_channel.apns application-id
Resource: aws_pinpoint_apns_sandbox_channel
Provides a Pinpoint APNs Sandbox Channel resource.
Note: All arguments, including certificates and tokens, will be stored in the raw state as plain-text. Read more about
sensitive data in state (/docs/state/sensitive-data.html).
Example Usage
certificate = "${file("./
/ certificate.pem
pem")}"
private_key = "${file("./
/ private_key.key
key")}"
}
Argument Reference
The following arguments are supported:
default_authentication_method - (Optional) The default authentication method used for APNs Sandbox. NOTE:
Amazon Pinpoint uses this default for every APNs push notification that you send using the console. You can override
the default when you send a message programmatically using the Amazon Pinpoint API, the AWS CLI, or an AWS SDK. If
your default authentication type fails, Amazon Pinpoint doesn't attempt to use the other authentication type.
If you choose to use Certificate credentials you will have to provide:
certificate - (Required) The pem encoded TLS Certificate from Apple.
private_key - (Required) The Certificate Private Key file (ie. .key file).
If you choose to use Key credentials you will have to provide:
bundle_id - (Required) The ID assigned to your iOS app. To find this value, choose Certificates, IDs & Profiles, choose App IDs in the Identifiers section, and choose your app.
team_id - (Required) The ID assigned to your Apple developer account team. This value is provided on the Membership page.
token_key - (Required) The .p8 file that you download from your Apple developer account when you create an authentication key.
token_key_id - (Required) The ID assigned to your signing key. To find this value, choose Certificates, IDs & Profiles, and choose your key in the Keys section.
Import
Pinpoint APNs Sandbox Channel can be imported using the application-id , e.g.
$ terraform import aws_pinpoint_apns_sandbox_channel.apns_sandbox application-id
Resource: aws_pinpoint_apns_voip_channel
Provides a Pinpoint APNs VoIP Channel resource.
Note: All arguments, including certificates and tokens, will be stored in the raw state as plain-text. Read more about
sensitive data in state (/docs/state/sensitive-data.html).
Example Usage
certificate = "${file("./
/ certificate.pem
pem")}"
private_key = "${file("./
/ private_key.key
key")}"
}
Argument Reference
The following arguments are supported:
default_authentication_method - (Optional) The default authentication method used for APNs. NOTE: Amazon
Pinpoint uses this default for every APNs push notification that you send using the console. You can override the
default when you send a message programmatically using the Amazon Pinpoint API, the AWS CLI, or an AWS SDK. If
your default authentication type fails, Amazon Pinpoint doesn't attempt to use the other authentication type.
If you choose to use Certificate credentials you will have to provide:
certificate - (Required) The pem encoded TLS Certificate from Apple.
private_key - (Required) The Certificate Private Key file (ie. .key file).
If you choose to use Key credentials you will have to provide:
bundle_id - (Required) The ID assigned to your iOS app. To find this value, choose Certificates, IDs & Profiles, choose App IDs in the Identifiers section, and choose your app.
team_id - (Required) The ID assigned to your Apple developer account team. This value is provided on the Membership page.
token_key - (Required) The .p8 file that you download from your Apple developer account when you create an authentication key.
token_key_id - (Required) The ID assigned to your signing key. To find this value, choose Certificates, IDs & Profiles, and choose your key in the Keys section.
Import
Pinpoint APNs VoIP Channel can be imported using the application-id , e.g.
$ terraform import aws_pinpoint_apns_voip_channel.apns_voip application-id
Resource: aws_pinpoint_apns_voip_sandbox_channel
Provides a Pinpoint APNs VoIP Sandbox Channel resource.
Note: All arguments, including certificates and tokens, will be stored in the raw state as plain-text. Read more about
sensitive data in state (/docs/state/sensitive-data.html).
Example Usage
certificate = "${file("./
/ certificate.pem
pem")}"
private_key = "${file("./
/ private_key.key
key")}"
}
Argument Reference
The following arguments are supported:
default_authentication_method - (Optional) The default authentication method used for APNs. NOTE: Amazon
Pinpoint uses this default for every APNs push notification that you send using the console. You can override the
default when you send a message programmatically using the Amazon Pinpoint API, the AWS CLI, or an AWS SDK. If
your default authentication type fails, Amazon Pinpoint doesn't attempt to use the other authentication type.
If you choose to use Certificate credentials you will have to provide:
certificate - (Required) The pem encoded TLS Certificate from Apple.
private_key - (Required) The Certificate Private Key file (ie. .key file).
If you choose to use Key credentials you will have to provide:
bundle_id - (Required) The ID assigned to your iOS app. To find this value, choose Certificates, IDs & Profiles, choose App IDs in the Identifiers section, and choose your app.
team_id - (Required) The ID assigned to your Apple developer account team. This value is provided on the Membership page.
token_key - (Required) The .p8 file that you download from your Apple developer account when you create an authentication key.
token_key_id - (Required) The ID assigned to your signing key. To find this value, choose Certificates, IDs & Profiles, and choose your key in the Keys section.
Import
Pinpoint APNs VoIP Sandbox Channel can be imported using the application-id , e.g.
$ terraform import aws_pinpoint_apns_voip_sandbox_channel.apns_voip_sandbox application-id
Resource: aws_pinpoint_app
Provides a Pinpoint App resource.
Example Usage
limits {
maximum_duration = 600
}
quiet_time {
start = "00:00"
end = "06:00"
}
}
Argument Reference
The following arguments are supported:
name_prefix - (Optional) The name of the Pinpoint application. Conflicts with name
campaign_hook - (Optional) Specifies settings for invoking an AWS Lambda function that customizes a segment for a
campaign
limits - (Optional) The default campaign limits for the app. These limits apply to each campaign for the app, unless
the campaign overrides the default with limits of its own
quiet_time - (Optional) The default quiet time for the app. Each campaign for this app sends no messages during this
time unless the campaign overrides the default with a quiet time of its own
lambda_function_name - (Optional) Lambda function name or ARN to be called for delivery. Conflicts with web_url
mode - (Required if lambda_function_name or web_url are provided) What mode Lambda should be invoked in.
Valid values for this parameter are DELIVERY , FILTER .
web_url - (Optional) Web URL to call for hook. If the URL has authentication specified it will be added as
authentication to the request. Conflicts with lambda_function_name
daily - (Optional) The maximum number of messages that the campaign can send daily.
maximum_duration - (Optional) The length of time (in seconds) that the campaign can run before it ends and message
deliveries stop. This duration begins at the scheduled start time for the campaign. The minimum value is 60.
messages_per_second - (Optional) The number of messages that the campaign can send per second. The minimum
value is 50, and the maximum is 20000.
total - (Optional) The maximum total number of messages that the campaign can send.
end - (Optional) The default end time for quiet time in ISO 8601 format. Required if start is set
start - (Optional) The default start time for quiet time in ISO 8601 format. Required if end is set
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Pinpoint App can be imported using the application-id , e.g.
Note: All arguments including the Api Key and Secret Key will be stored in the raw state as plain-text. Read more about
sensitive data in state (/docs/state/sensitive-data.html).
Example Usage
Argument Reference
The following arguments are supported:
Import
Pinpoint Baidu Channel can be imported using the application-id , e.g.
Example Usage
resource "aws_pinpoint_email_channel" "email" {
application_id = "${aws_pinpoint_app.app.application_id}"
from_address = "user@example.com"
identity = "${aws_ses_domain_identity.identity.arn}"
role_arn = "${aws_iam_role.role.arn}"
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": {
"Action": [
"mobileanalytics:PutEvents",
"mobileanalytics:PutItems"
],
"Effect": "Allow",
"Resource": [
"*"
]
}
}
EOF
}
Argument Reference
The following arguments are supported:
application_id - (Required) The application ID.
role_arn - (Required) The ARN of an IAM Role used to submit events to Mobile Analytics' event ingestion service.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Pinpoint Email Channel can be imported using the application-id , e.g.
Example Usage
resource "aws_pinpoint_event_stream" "stream" {
application_id = "${aws_pinpoint_app.app.application_id}"
destination_stream_arn = "${aws_kinesis_stream.test_stream.arn}"
role_arn = "${aws_iam_role.test_role.arn}"
}
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": {
"Action": [
"kinesis:PutRecords",
"kinesis:DescribeStream"
],
"Effect": "Allow",
"Resource": [
"arn:aws:kinesis:us-east-1:*:*/*"
]
}
}
EOF
}
Argument Reference
The following arguments are supported:
application_id - (Required) The application ID.
destination_stream_arn - (Required) The Amazon Resource Name (ARN) of the Amazon Kinesis stream or Firehose
delivery stream to which you want to publish events.
role_arn - (Required) The IAM role that authorizes Amazon Pinpoint to publish events to the stream in your account.
Import
Pinpoint Event Stream can be imported using the application-id , e.g.
Note: Api Key argument will be stored in the raw state as plain-text. Read more about sensitive data in state
(/docs/state/sensitive-data.html).
Example Usage
Argument Reference
The following arguments are supported:
Import
Pinpoint GCM Channel can be imported using the application-id , e.g.
Example Usage
Argument Reference
The following arguments are supported:
short_code - (Optional) The Short Code registered with the phone provider.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Pinpoint SMS Channel can be imported using the application-id , e.g.
Example Usage
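A minimal sketch; the group name is a placeholder and strategy may be, for example, cluster or spread :
resource "aws_placement_group" "web" {
  name     = "hunky-dory-pg"
  strategy = "cluster"
}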
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Placement groups can be imported using the name , e.g.
Example Usage
listener {
instance_port = 25
instance_protocol = "tcp"
lb_port = 25
lb_protocol = "tcp"
}
listener {
instance_port = 587
instance_protocol = "tcp"
lb_port = 587
lb_protocol = "tcp"
}
}
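Assuming this section documents the aws_proxy_protocol_policy resource (whose load_balancer and instance_ports arguments appear in the Argument Reference below), a sketch attaching the policy to the listeners above; the ELB reference lb is a placeholder:
resource "aws_proxy_protocol_policy" "smtp" {
  load_balancer  = "${aws_elb.lb.name}"
  instance_ports = ["25", "587"]
}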
Argument Reference
The following arguments are supported:
load_balancer - (Required) The load balancer to which the policy should be attached.
instance_ports - (Required) List of instance ports to which the policy should be applied. This can be specified if the
protocol is SSL or TCP.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
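A minimal sketch; the group name is a placeholder:
resource "aws_quicksight_group" "example" {
  group_name = "tf-example"
}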
Argument Reference
The following arguments are supported:
aws_account_id - (Optional) The ID for the AWS account that the group is in. Currently, you use the ID for the AWS
account that contains your Amazon QuickSight account.
namespace - (Optional) The namespace. Currently, you should set this to default .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Quick Sight Group can be imported using the aws account id, namespace and group name separated by / .
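For example (the account ID, namespace and group name below are illustrative placeholders):
$ terraform import aws_quicksight_group.example 123456789123/default/tf-example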
Example Usage
Argument Reference
The following arguments are supported:
email - (Required) The email address of the user that you want to register.
identity_type - (Required) Amazon QuickSight supports several ways of managing the identity of users. This
parameter accepts two values: IAM and QUICKSIGHT .
user_role - (Required) The Amazon QuickSight role of the user. The user role can be one of the following: READER ,
AUTHOR , or ADMIN
user_name - (Optional) The Amazon QuickSight user name that you want to create for the user you are registering.
aws_account_id - (Optional) The ID for the AWS account that the group is in. Currently, you use the ID for the AWS
account that contains your Amazon QuickSight account.
iam_arn - (Optional) The ARN of the IAM user or role that you are registering with Amazon QuickSight.
namespace - (Optional) The namespace. Currently, you should set this to default .
session_name - (Optional) The name of the IAM session to use when assuming roles that can embed QuickSight
dashboards.
Attributes Reference
All above attributes except for session_name and identity_type are exported as well as:
Import
Importing is currently not supported on this resource.
Resource: aws_ram_principal_association
Provides a Resource Access Manager (RAM) principal association. Depending on whether RAM Sharing with AWS Organizations is
enabled (https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-orgs) ,
the RAM behavior with different principal types changes.
For AWS Account ID, Organization, and Organizational Unit principals within the same AWS Organization, no resource
share invitation is sent and resources become available automatically after creating the association.
For AWS Account ID principals outside the AWS Organization, a resource share invitation is sent and must be accepted
before resources become available. See the aws_ram_resource_share_accepter resource
(/docs/providers/aws/r/ram_resource_share_accepter.html) to accept these invitations.
For AWS Account ID principals, a resource share invitation is sent and must be accepted before resources become
available. See the aws_ram_resource_share_accepter resource
(/docs/providers/aws/r/ram_resource_share_accepter.html) to accept these invitations.
Example Usage
AWS Account ID
allow_external_principals = true
}
AWS Organization
Argument Reference
The following arguments are supported:
principal - (Required) The principal to associate with the resource share. Possible values are an AWS account ID, an
AWS Organizations Organization ARN, or an AWS Organizations Organization Unit ARN.
resource_share_arn - (Required) The Amazon Resource Name (ARN) of the resource share.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The Amazon Resource Name (ARN) of the Resource Share and the principal, separated by a comma.
Import
RAM Principal Associations can be imported using their Resource Share ARN and the principal separated by a comma,
e.g.
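For example (the resource share ARN and account ID below are illustrative placeholders):
$ terraform import aws_ram_principal_association.example arn:aws:ram:eu-west-1:123456789012:resource-share/73da1ab9-b94a-4ba3-8eb4-45917f7f4b12,111111111111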
NOTE: Certain AWS resources (e.g. EC2 Subnets) can only be shared in an AWS account that is a member of an AWS
Organizations organization with organization-wide Resource Access Manager functionality enabled. See the Resource
Access Manager User Guide (https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) and AWS service specific
documentation for additional information.
Example Usage
Argument Reference
The following arguments are supported:
resource_arn - (Required) Amazon Resource Name (ARN) of the resource to associate with the RAM Resource Share.
resource_share_arn - (Required) Amazon Resource Name (ARN) of the RAM Resource Share.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
RAM Resource Associations can be imported using their Resource Share ARN and Resource ARN separated by a comma, e.g.
Note: If both AWS accounts are in the same Organization and RAM Sharing with AWS Organizations is enabled
(https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-orgs), this
resource is not necessary as RAM Resource Share invitations are not used.
Example Usage
This configuration provides an example of using multiple Terraform AWS providers to configure two different AWS accounts.
In the sender account, the configuration creates an aws_ram_resource_share and uses a data source in the receiver
account to create an aws_ram_principal_association resource with the receiver's account ID. In the receiver account, the
configuration accepts the invitation to share resources with the aws_ram_resource_share_accepter .
provider "aws" {
profile = "profile2"
}
provider "aws" {
alias = "alternate"
profile = "profile1"
}
name = "tf-test-resource-share"
allow_external_principals = true
tags = {
Name = "tf-test-resource-share"
}
}
principal = "${data.aws_caller_identity.receiver.account_id}"
resource_share_arn = "${aws_ram_resource_share.test.arn}"
}
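The receiver-side pieces of this example are missing above. A hedged sketch of the remaining configuration follows; which provider alias maps to the sender versus the receiver account, and the resource names, are assumptions here:
data "aws_caller_identity" "receiver" {
  provider = "aws.alternate"
}

resource "aws_ram_resource_share_accepter" "receiver_accept" {
  provider = "aws.alternate"

  # could also reference the principal association's resource_share_arn
  # to enforce ordering between the invitation and its acceptance
  share_arn = "${aws_ram_resource_share.test.arn}"
}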
Attributes Reference
In addition to all arguments above, the following attributes are exported:
receiver_account_id - The account ID of the receiver account which accepts the invitation.
sender_account_id - The account ID of the sender account which extends the invitation.
resources - A list of the resource ARNs shared via the resource share.
Import
Resource share accepters can be imported using the resource share ARN, e.g.
Example Usage
tags = {
Environment = "Production"
}
}
Argument Reference
The following arguments are supported:
allow_external_principals - (Optional) Indicates whether principals outside your organization can be associated
with a resource share.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Resource shares can be imported using the id , e.g.
Example Usage
resource "aws_rds_cluster" "default" {
cluster_identifier = "aurora-cluster-demo"
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
database_name = "mydb"
master_username = "foo"
master_password = "bar"
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
}
excluded_members = [
"${aws_rds_cluster_instance.test1.id}",
"${aws_rds_cluster_instance.test2.id}",
]
}
static_members = [
"${aws_rds_cluster_instance.test1.id}",
"${aws_rds_cluster_instance.test3.id}",
]
}
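The excluded_members and static_members fragments above belong to aws_rds_cluster_endpoint resources whose opening lines were lost. A minimal reconstruction (the endpoint identifiers and resource names are assumed for illustration) might look like:
resource "aws_rds_cluster_endpoint" "eligible" {
  cluster_identifier          = "${aws_rds_cluster.default.id}"
  cluster_endpoint_identifier = "reader"
  custom_endpoint_type        = "READER"

  excluded_members = [
    "${aws_rds_cluster_instance.test1.id}",
    "${aws_rds_cluster_instance.test2.id}",
  ]
}

resource "aws_rds_cluster_endpoint" "static" {
  cluster_identifier          = "${aws_rds_cluster.default.id}"
  cluster_endpoint_identifier = "static"
  custom_endpoint_type        = "READER"

  static_members = [
    "${aws_rds_cluster_instance.test1.id}",
    "${aws_rds_cluster_instance.test3.id}",
  ]
}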
Argument Reference
For more detailed documentation about each argument, refer to the AWS official documentation
(https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster-endpoint.html).
cluster_endpoint_identifier - (Required, Forces new resources) The identifier to use for the new endpoint. This parameter is stored as a lowercase string.
custom_endpoint_type - (Required) The type of the endpoint. One of: READER , ANY .
static_members - (Optional) List of DB instance identifiers that are part of the custom endpoint group. Conflicts with excluded_members .
excluded_members - (Optional) List of DB instance identifiers that aren't part of the custom endpoint group. All other eligible instances are reachable through the custom endpoint. Only relevant if the list of static members is empty. Conflicts with static_members .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
RDS Clusters Endpoint can be imported using the cluster_endpoint_identifier , e.g.
For information on the difference between the available Aurora MySQL engines see Comparison between Aurora MySQL 1 and Aurora
MySQL 2 (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Updates.20180206.html) in the Amazon RDS User
Guide.
Changes to a RDS Cluster can occur when you manually change a parameter, such as port , and are reflected in the next maintenance
window. Because of this, Terraform may report a difference in its planning phase because a modification has not yet taken place. You can
use the apply_immediately flag to instruct the service to apply the change immediately (see documentation below).
Note: using apply_immediately can result in a brief downtime as the server reboots. See the AWS Docs on RDS Maintenance
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html) for more information.
Note: All arguments including the username and password will be stored in the raw state as plain-text. Read more about sensitive data
in state (/docs/state/sensitive-data.html).
Example Usage
More information about Aurora Multi-Master Clusters can be found in the RDS User Guide
(https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-multi-master.html).
Argument Reference
For more detailed documentation about each argument, refer to the AWS official documentation
(https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html).
cluster_identifier - (Optional, Forces new resources) The cluster identifier. If omitted, Terraform will assign a random, unique identifier.
cluster_identifier_prefix - (Optional, Forces new resource) Creates a unique cluster identifier beginning with the specified prefix. Conflicts with cluster_identifier .
copy_tags_to_snapshot – (Optional, boolean) Copy all Cluster tags to snapshots. Default is false .
database_name - (Optional) Name for an automatically created database on cluster creation. There are different naming restrictions
per database engine: RDS Naming Constraints
(http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html#RDS_Limits.Constraints)
deletion_protection - (Optional) If the DB instance should have deletion protection enabled. The database can't be deleted when
this value is set to true . The default is false .
final_snapshot_identifier - (Optional) The name of your final DB snapshot when this DB cluster is deleted. If omitted, no final
snapshot will be made.
skip_final_snapshot - (Optional) Determines whether a final DB snapshot is created before the DB cluster is deleted. If true is
specified, no DB snapshot is created. If false is specified, a DB snapshot is created before the DB cluster is deleted, using the value
from final_snapshot_identifier . Default is false .
availability_zones - (Optional) A list of EC2 Availability Zones for the DB cluster storage where DB cluster instances can be created.
RDS automatically assigns 3 AZs if less than 3 AZs are configured, which will show as a difference requiring resource recreation on the
next Terraform apply. It is recommended to specify 3 AZs or use the lifecycle configuration block ignore_changes argument
(/docs/configuration/resources.html#ignore_changes) if necessary.
backtrack_window - (Optional) The target backtrack window, in seconds. Only available for aurora engine currently. To disable
backtracking, set this value to 0 . Defaults to 0 . Must be between 0 and 259200 (72 hours)
preferred_backup_window - (Optional) The daily time range during which automated backups are created if automated backups are
enabled using the BackupRetentionPeriod parameter. Time in UTC. Default: A 30-minute window selected at random from an 8-hour
block of time per region, e.g. 04:00-09:00 .
preferred_maintenance_window - (Optional) The weekly time range during which system maintenance can occur, in (UTC) e.g.
wed:04:00-wed:04:30
vpc_security_group_ids - (Optional) List of VPC security groups to associate with the Cluster
snapshot_identifier - (Optional) Specifies whether or not to create this cluster from a snapshot. You can use either the name or
ARN when specifying a DB cluster snapshot, or the ARN when specifying a DB snapshot.
storage_encrypted - (Optional) Specifies whether the DB cluster is encrypted. The default is false for provisioned
engine_mode and true for serverless engine_mode .
replication_source_identifier - (Optional) ARN of a source DB cluster or DB instance if this DB cluster is to be created as a Read
Replica.
apply_immediately - (Optional) Specifies whether any cluster modifications are applied immediately, or during the next
maintenance window. Default is false . See Amazon RDS Documentation for more information.
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html)
db_subnet_group_name - (Optional) A DB subnet group to associate with this DB instance. NOTE: This must match the
db_subnet_group_name specified on every aws_rds_cluster_instance (/docs/providers/aws/r/rds_cluster_instance.html) in the
cluster.
kms_key_id - (Optional) The ARN for the KMS encryption key. When specifying kms_key_id , storage_encrypted needs to be set to
true.
iam_roles - (Optional) A List of ARNs for the IAM roles to associate to the RDS Cluster.
iam_database_authentication_enabled - (Optional) Specifies whether or not mappings of AWS Identity and Access Management (IAM)
accounts to database accounts are enabled. Please see AWS Documentation
(https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.html) for availability and limitations.
engine - (Optional) The name of the database engine to be used for this DB cluster. Defaults to aurora . Valid Values: aurora ,
aurora-mysql , aurora-postgresql
engine_mode - (Optional) The database engine mode. Valid values: global , multimaster , parallelquery , provisioned ,
serverless . Defaults to: provisioned . See the RDS User Guide
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/aurora-serverless.html) for limitations when using serverless .
engine_version - (Optional) The database engine version. Updating this argument results in an outage. See the Aurora MySQL
(https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.html) and Aurora Postgres
(https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Updates.html) documentation for your
configured engine to determine this value. For example with Aurora MySQL 2, a potential value for this argument is
5.7.mysql_aurora.2.03.2 .
enabled_cloudwatch_logs_exports - (Optional) List of log types to export to cloudwatch. If omitted, no logs will be exported. The
following log types are supported: audit , error , general , slowquery , postgresql (PostgreSQL).
scaling_configuration - (Optional) Nested attribute with scaling properties. Only valid when engine_mode is set to serverless .
More details below.
S3 Import Options
Full details on the core parameters and impacts are in the API Docs: RestoreDBClusterFromS3
(https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromS3.html). Requires that the S3 bucket be in the
same region as the RDS cluster you're trying to create. Sample:
NOTE: RDS Aurora Serverless does not support loading data from S3, so it's not possible to directly use engine_mode set to
serverless with s3_import .
resource "aws_rds_cluster" "db" {
  # resource name assumed; other cluster arguments omitted for brevity
  s3_import {
source_engine = "mysql"
source_engine_version = "5.6"
bucket_name = "mybucket"
bucket_prefix = "backups"
ingestion_role = "arn:aws:iam::1234567890:role/role-xtrabackup-rds-restore"
}
}
source_engine_version - (Required) Version of the source engine used to make the backup
This will not recreate the resource if the S3 object changes in some way. It's only used to initialize the database. This only works currently
with the aurora engine. See AWS for currently supported engines and options. See Aurora S3 Migration Docs
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Migrating.ExtMySQL.html#AuroraMySQL.Migrating.ExtMySQL.S3).
Example:
resource "aws_rds_cluster" "example" {
  # resource name assumed; other cluster arguments omitted for brevity
  engine_mode = "serverless"
scaling_configuration {
auto_pause = true
max_capacity = 256
min_capacity = 2
seconds_until_auto_pause = 300
timeout_action = "ForceApplyCapacityChange"
}
}
auto_pause - (Optional) Whether to enable automatic pause. A DB cluster can be paused only when it's idle (it has no connections). If
a DB cluster is paused for more than seven days, the DB cluster might be backed up with a snapshot. In this case, the DB cluster is
restored when there is a request to connect to it. Defaults to true .
max_capacity - (Optional) The maximum capacity. The maximum capacity must be greater than or equal to the minimum capacity.
Valid capacity values are 1 , 2 , 4 , 8 , 16 , 32 , 64 , 128 , and 256 . Defaults to 16 .
min_capacity - (Optional) The minimum capacity. The minimum capacity must be less than or equal to the maximum capacity.
Valid capacity values are 1 , 2 , 4 , 8 , 16 , 32 , 64 , 128 , and 256 . Defaults to 2 .
seconds_until_auto_pause - (Optional) The time, in seconds, before an Aurora DB cluster in serverless mode is paused. Valid values
are 300 through 86400 . Defaults to 300 .
timeout_action - (Optional) The action to take when the timeout is reached. Valid values: ForceApplyCapacityChange ,
RollbackCapacityChange . Defaults to RollbackCapacityChange . See documentation
(https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.how-it-works.html#aurora-serverless.how-it-
works.timeout-action).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
preferred_backup_window - The daily time range during which the backups happen
replication_source_identifier - ARN of the source DB cluster or DB instance if this DB cluster is created as a Read Replica.
Timeouts
aws_rds_cluster provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
delete - (Default 120 minutes ) Used for destroying cluster. This includes any cleanup task during the destroying process.
Import
RDS Clusters can be imported using the cluster_identifier , e.g.
Unlike other RDS resources that support replication, with Amazon Aurora you do not designate a primary and subsequent
replicas. Instead, you simply add RDS Instances and Aurora manages the replication. You can use the count
(/docs/configuration/resources.html#count) meta-parameter to make multiple instances and join them all to the same RDS
Cluster, or you may specify different Cluster Instance resources with various instance_class sizes.
NOTE: Deletion Protection from the RDS service can only be enabled at the cluster level, not for individual cluster
instances. You can still add the prevent_destroy lifecycle behavior
(https://www.terraform.io/docs/configuration/resources.html#prevent_destroy) to your Terraform resource
configuration if you desire protection from accidental deletion.
Example Usage
Argument Reference
For more detailed documentation about each argument, refer to the AWS official documentation
(https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html).
identifier - (Optional, Forces new resource) The identifier for the RDS instance. If omitted, Terraform will assign a random, unique identifier.
identifier_prefix - (Optional, Forces new resource) Creates a unique identifier beginning with the specified prefix. Conflicts with identifier .
cluster_identifier - (Required) The identifier of the aws_rds_cluster (/docs/providers/aws/r/rds_cluster.html) in
which to launch this instance.
engine - (Optional) The name of the database engine to be used for the RDS instance. Defaults to aurora . Valid
Values: aurora , aurora-mysql , aurora-postgresql . For information on the difference between the available
Aurora MySQL engines see Comparison between Aurora MySQL 1 and Aurora MySQL 2
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Updates.20180206.html) in the Amazon
RDS User Guide.
instance_class - (Required) The instance class to use. For details on CPU and memory, see Scaling Aurora DB
Instances (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html). Aurora uses db.*
instance classes/types. Please see AWS Documentation
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html) for currently available
instance classes and complete details.
publicly_accessible - (Optional) Bool to control if instance is publicly accessible. Default false . See the
documentation on Creating DB Instances
(https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) for more details on
controlling this property.
db_parameter_group_name - (Optional) The name of the DB parameter group to associate with this instance.
apply_immediately - (Optional) Specifies whether any database modifications are applied immediately, or during
the next maintenance window. Default is false .
monitoring_role_arn - (Optional) The ARN for the IAM role that permits RDS to send enhanced monitoring metrics
to CloudWatch Logs. You can find more information in the AWS Documentation
(http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.html) about what IAM permissions are
needed to allow Enhanced Monitoring for RDS Instances.
monitoring_interval - (Optional) The interval, in seconds, between points when Enhanced Monitoring metrics are
collected for the DB instance. To disable collecting Enhanced Monitoring metrics, specify 0. The default is 0. Valid
Values: 0, 1, 5, 10, 15, 30, 60.
promotion_tier - (Optional) Default 0. Failover Priority setting on instance level. A reader with a lower promotion tier has a
higher priority to be promoted to writer.
availability_zone - (Optional, Computed) The EC2 Availability Zone that the DB instance is created in. See docs
(https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) about the details.
preferred_backup_window - (Optional) The daily time range during which automated backups are created if
automated backups are enabled. Eg: "04:00-09:00"
auto_minor_version_upgrade - (Optional) Indicates that minor engine upgrades will be applied automatically to the
DB instance during the maintenance window. Default true .
performance_insights_enabled - (Optional) Specifies whether Performance Insights is enabled or not.
performance_insights_kms_key_id - (Optional) The ARN for the KMS key to encrypt Performance Insights data.
When specifying performance_insights_kms_key_id , performance_insights_enabled needs to be set to true.
copy_tags_to_snapshot – (Optional, boolean) Indicates whether to copy all of the user-defined tags from the DB
instance to snapshots of the DB instance. Default false .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
writer – Boolean indicating if this instance is writable. False indicates this instance is a read replica.
endpoint - The DNS address for this instance. May not be writable
kms_key_id - The ARN for the KMS encryption key if one is set to the cluster.
performance_insights_kms_key_id - The ARN for the KMS encryption key used by Performance Insights.
Timeouts
aws_rds_cluster_instance provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration
options:
create - (Default 90 minutes ) Used for Creating Instances, Replicas, and restoring from Snapshots
update - (Default 90 minutes ) Used for Database modifications
delete - (Default 90 minutes ) Used for destroying databases. This includes the time required to take snapshots
Import
RDS Cluster Instances can be imported using the identifier , e.g.
Example Usage
parameter {
name = "character_set_server"
value = "utf8"
}
parameter {
name = "character_set_client"
value = "utf8"
}
}
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The name of the DB cluster parameter group. If omitted, Terraform will assign
a random, unique name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts
with name .
description - (Optional) The description of the DB cluster parameter group. Defaults to "Managed by Terraform".
parameter - (Optional) A list of DB parameters to apply. Note that parameters may differ from one family to another.
Full list of all parameters can be discovered via aws rds describe-db-cluster-parameters
(https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-cluster-parameters.html) after initial creation of
the group.
apply_method - (Optional) "immediate" (default), or "pending-reboot". Some engines can't apply some parameters
without a reboot, and you will need to specify "pending-reboot" here.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
RDS Cluster Parameter Groups can be imported using the name , e.g.
More information about Aurora global databases can be found in the Aurora User Guide
(https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html#aurora-global-database-
creating).
NOTE: RDS only supports the aurora engine (MySQL 5.6 compatible) for Global Clusters at this time.
Example Usage
provider "aws" {
alias = "primary"
region = "us-east-2"
}
provider "aws" {
alias = "secondary"
region = "us-west-2"
}
global_cluster_identifier = "example"
}
engine_mode = "global"
global_cluster_identifier = "${aws_rds_global_cluster.example.id}"
}
cluster_identifier = "${aws_rds_cluster.primary.id}"
}
engine_mode = "global"
global_cluster_identifier = "${aws_rds_global_cluster.example.id}"
}
cluster_identifier = "${aws_rds_cluster.secondary.id}"
}
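The global database example above lost its resource headers. A hedged reconstruction follows; the resource names come from the references in the fragments, while the instance class, engine settings and provider wiring are assumptions:
resource "aws_rds_global_cluster" "example" {
  provider                  = "aws.primary"
  global_cluster_identifier = "example"
}

resource "aws_rds_cluster" "primary" {
  provider                  = "aws.primary"
  engine_mode               = "global"
  global_cluster_identifier = "${aws_rds_global_cluster.example.id}"
  # master_username, master_password and other required arguments omitted
}

resource "aws_rds_cluster_instance" "primary" {
  provider           = "aws.primary"
  cluster_identifier = "${aws_rds_cluster.primary.id}"
  instance_class     = "db.r4.large"
}

resource "aws_rds_cluster" "secondary" {
  provider                  = "aws.secondary"
  engine_mode               = "global"
  global_cluster_identifier = "${aws_rds_global_cluster.example.id}"
}

resource "aws_rds_cluster_instance" "secondary" {
  provider           = "aws.secondary"
  cluster_identifier = "${aws_rds_cluster.secondary.id}"
  instance_class     = "db.r4.large"
}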
Argument Reference
The following arguments are supported:
global_cluster_identifier - (Required, Forces new resources) The global cluster identifier.
database_name - (Optional, Forces new resources) Name for an automatically created database on cluster creation.
deletion_protection - (Optional) If the Global Cluster should have deletion protection enabled. The database can't
be deleted when this value is set to true . The default is false .
engine - (Optional, Forces new resources) Name of the database engine to be used for this DB cluster. Valid values:
aurora . Defaults to aurora .
engine_version - (Optional, Forces new resources) Engine version of the Aurora global database.
storage_encrypted - (Optional, Forces new resources) Specifies whether the DB cluster is encrypted. The default is
false .
Attribute Reference
In addition to all arguments above, the following attributes are exported:
global_cluster_resource_id - AWS Region-unique, immutable identifier for the global database cluster. This
identifier is found in AWS CloudTrail log entries whenever the AWS KMS key for the DB cluster is accessed.
Import
aws_rds_global_cluster can be imported by using the RDS Global Cluster identifier, e.g.
Note: All arguments including the username and password will be stored in the raw state as plain-text. Read more
about sensitive data in state (/docs/state/sensitive-data.html).
Example Usage
Argument Reference
For more detailed documentation about each argument, refer to the AWS official documentation
(http://docs.aws.amazon.com/cli/latest/reference/redshift/index.html#cli-aws-redshift).
cluster_identifier - (Required) The Cluster Identifier. Must be a lower case string.
database_name - (Optional) The name of the first database to be created when the cluster is created. If you do not
provide a name, Amazon Redshift will create a default database called dev .
master_password - (Required unless a snapshot_identifier is provided) Password for the master DB user. Note
that this may show up in logs, and it will be stored in the state file. Password must contain at least 8 chars and contain
at least one uppercase letter, one lowercase letter, and one number.
master_username - (Required unless a snapshot_identifier is provided) Username for the master DB user.
vpc_security_group_ids - (Optional) A list of Virtual Private Cloud (VPC) security groups to be associated with the
cluster.
cluster_subnet_group_name - (Optional) The name of a cluster subnet group to be associated with this cluster. If
this parameter is not provided the resulting cluster will be deployed outside virtual private cloud (VPC).
availability_zone - (Optional) The EC2 Availability Zone (AZ) in which you want Amazon Redshift to provision the
cluster. For example, if you have several EC2 instances running in a specific Availability Zone, then you might want the
cluster to be provisioned in the same zone in order to decrease network latency.
preferred_maintenance_window - (Optional) The weekly time range (in UTC) during which automated cluster
maintenance can occur. Format: ddd:hh24:mi-ddd:hh24:mi
cluster_parameter_group_name - (Optional) The name of the parameter group to be associated with this cluster.
automated_snapshot_retention_period - (Optional) The number of days that automated snapshots are retained. If
the value is 0, automated snapshots are disabled. Even if automated snapshots are disabled, you can still create
manual snapshots when you want with create-cluster-snapshot. Default is 1.
port - (Optional) The port number on which the cluster accepts incoming connections. The cluster is accessible only
via the JDBC and ODBC connection strings. Part of the connection string requires the port on which the cluster will
listen for incoming connections. Default port is 5439.
cluster_version - (Optional) The version of the Amazon Redshift engine software that you want to deploy on the
cluster. The version selected runs on all the nodes in the cluster.
allow_version_upgrade - (Optional) If true , major version upgrades can be applied during the maintenance window
to the Amazon Redshift engine that is running on the cluster. Default is true
number_of_nodes - (Optional) The number of compute nodes in the cluster. This parameter is required when the
ClusterType parameter is specified as multi-node. Default is 1.
publicly_accessible - (Optional) If true, the cluster can be accessed from a public network. Default is true .
kms_key_id - (Optional) The ARN for the KMS encryption key. When specifying kms_key_id , encrypted needs to be
set to true.
skip_final_snapshot - (Optional) Determines whether a final snapshot of the cluster is created before Amazon
Redshift deletes the cluster. If true , a final cluster snapshot is not created. If false , a final cluster snapshot is created
before the cluster is deleted. Default is false.
final_snapshot_identifier - (Optional) The identifier of the final snapshot that is to be created immediately
before deleting the cluster. If this parameter is provided, skip_final_snapshot must be false.
snapshot_identifier - (Optional) The name of the snapshot from which to create the new cluster.
snapshot_cluster_identifier - (Optional) The name of the cluster the source snapshot was created from.
owner_account - (Optional) The AWS customer account used to create or copy the snapshot. Required if you are
restoring a snapshot you do not own, optional if you own the snapshot.
iam_roles - (Optional) A list of IAM Role ARNs to associate with the cluster. A Maximum of 10 can be associated to
the cluster at any time.
Timeouts
aws_redshift_cluster provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration
options:
Nested Blocks
logging
enable - (Required) Enables logging information such as queries and connection attempts, for the specified Amazon
Redshift cluster.
bucket_name - (Optional, required when enable is true ) The name of an existing S3 bucket where the log files are
to be stored. Must be in the same region as the cluster and the cluster must have read bucket and put object
permissions. For more information on the permissions required for the bucket, please read the AWS documentation
(http://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-enable-logging)
snapshot_copy
destination_region - (Required) The destination region that you want to copy snapshots to.
retention_period - (Optional) The number of days to retain automated snapshots in the destination region after
they are copied from the source region. Defaults to 7 .
grant_name - (Optional) The name of the snapshot copy grant to use when snapshots of an AWS KMS-encrypted
cluster are copied to the destination region.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
vpc_security_group_ids - The VPC security group Ids associated with the cluster
cluster_parameter_group_name - The name of the parameter group to be associated with this cluster
cluster_subnet_group_name - The name of a cluster subnet group to be associated with this cluster
Import
Redshift Clusters can be imported using the cluster_identifier , e.g.
Example Usage
source_type = "cluster"
source_ids = ["${aws_redshift_cluster.default.id}"]
severity = "INFO"
event_categories = [
"configuration",
"management",
"monitoring",
"security",
]
tags = {
Name = "default"
}
}
Argument Reference
The following arguments are supported:
sns_topic_arn - (Required) The ARN of the SNS topic to send events to.
source_ids - (Optional) A list of identifiers of the event sources for which events will be returned. If not specified,
then all sources are included in the response. If specified, a source_type must also be specified.
source_type - (Optional) The type of source that will be generating the events. Valid options are cluster ,
cluster-parameter-group , cluster-security-group , or cluster-snapshot . If not set, all sources will be subscribed to.
severity - (Optional) The event severity to be published by the notification subscription. Valid options are INFO or
ERROR .
event_categories - (Optional) A list of event categories for a SourceType that you want to subscribe to. See
https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-event-notifications.html
(https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-event-notifications.html) or run aws redshift
describe-event-categories .
Attributes
The following additional attributes are provided:
arn - Amazon Resource Name (ARN) of the Redshift event notification subscription
customer_aws_id - The AWS customer account associated with the Redshift event notification subscription
Import
Redshift Event Subscriptions can be imported using the name , e.g.
Example Usage
parameter {
name = "require_ssl"
value = "true"
}
parameter {
name = "query_group"
value = "example"
}
parameter {
name = "enable_user_activity_logging"
value = "true"
}
}
Argument Reference
The following arguments are supported:
description - (Optional) The description of the Redshift parameter group. Defaults to "Managed by Terraform".
You can read more about the parameters that Redshift supports in the documentation
(http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-parameter-groups.html)
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Redshift Parameter Groups can be imported using the name , e.g.
Example Usage
ingress {
cidr = "10.0.0.0/24"
}
}
Argument Reference
The following arguments are supported:
description - (Optional) The description of the Redshift security group. Defaults to "Managed by Terraform".
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Redshift security groups can be imported using the name , e.g.
Note that the grant must exist in the destination region, and not in the region of the cluster.
Example Usage
snapshot_copy {
destination_region = "us-east-2"
grant_name = "${aws_redshift_snapshot_copy_grant.test.snapshot_copy_grant_name}"
}
}
Argument Reference
The following arguments are supported:
snapshot_copy_grant_name - (Required, Forces new resource) A friendly name for identifying the grant.
kms_key_id - (Optional, Forces new resource) The unique identifier for the customer master key (CMK) that the grant
applies to. Specify the key ID or the Amazon Resource Name (ARN) of the CMK. To specify a CMK in a different AWS
account, you must use the key ARN. If not specified, the default key is used.
Attributes Reference
arn - Amazon Resource Name (ARN) of snapshot copy grant
Resource: aws_redshift_snapshot_schedule_association
Example Usage
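The example for this resource is missing here; a minimal sketch (the cluster and schedule resource names are assumed) could be:
resource "aws_redshift_snapshot_schedule_association" "default" {
  # cluster_identifier is inferred from the import format described below
  cluster_identifier  = "${aws_redshift_cluster.default.id}"
  schedule_identifier = "${aws_redshift_snapshot_schedule.default.id}"
}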
Argument Reference
The following arguments are supported:
schedule_identifier - (Required, Forces new resource) The snapshot schedule identifier.
Import
Redshift Snapshot Schedule Association can be imported using the <cluster-identifier>/<schedule-identifier> , e.g.
Example Usage
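The example for this resource is missing in this copy; a minimal sketch based on the arguments below (the identifier value is illustrative) could be:
resource "aws_redshift_snapshot_schedule" "default" {
  identifier = "tf-redshift-snapshot-schedule"

  definitions = [
    "rate(12 hours)",
  ]
}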
Argument Reference
The following arguments are supported:
identifier - (Optional, Forces new resource) The snapshot schedule identifier. If omitted, Terraform will assign a
random, unique identifier.
identifier_prefix - (Optional, Forces new resource) Creates a unique identifier beginning with the specified prefix.
Conflicts with identifier .
definitions - (Optional) The definition of the snapshot schedule. The definition is made up of schedule expressions,
for example cron(30 12 *) or rate(12 hours) .
force_destroy - (Optional) Whether to destroy all associated clusters with this snapshot schedule on deletion. Must
be enabled and applied before attempting deletion.
Import
Redshift Snapshot Schedule can be imported using the identifier , e.g.
Example Usage
tags = {
Name = "tf-dbsubnet-test-1"
}
}
tags = {
Name = "tf-dbsubnet-test-2"
}
}
tags = {
environment = "Production"
}
}
Argument Reference
The following arguments are supported:
description - (Optional) The description of the Redshift Subnet group. Defaults to "Managed by Terraform".
arn - Amazon Resource Name (ARN) of the Redshift Subnet group name
Import
Redshift subnet groups can be imported using the name , e.g.
Example Usage
resource_query {
query = <<JSON
{
"ResourceTypeFilters": [
"AWS::EC2::Instance"
],
"TagFilters": [
{
"Key": "Stage",
"Values": ["Test"]
}
]
}
JSON
}
}
Argument Reference
The following arguments are supported:
name - (Required) The resource group's name. A resource group name can have a maximum of 127 characters,
including letters, numbers, hyphens, dots, and underscores. The name cannot start with AWS or aws .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
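The example for this resource was lost; a minimal sketch based on the argument below (the reference name is an arbitrary illustration) could be:
resource "aws_route53_delegation_set" "main" {
  reference_name = "DynDNS"
}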
Argument Reference
The following arguments are supported:
reference_name - (Optional) This is a reference name used in Caller Reference (helpful for identifying single
delegation set amongst others)
Attributes Reference
In addition to all arguments above, the following attributes are exported:
name_servers - A list of authoritative name servers for the hosted zone (effectively a list of NS records).
Import
Route53 Delegation Sets can be imported using the delegation set id , e.g.
Example Usage
tags = {
Name = "tf-test-health-check"
}
}
Aggregate Check
tags = {
Name = "tf-test-calculated-health-check"
}
}
CloudWatch Alarm Check
Argument Reference
The following arguments are supported:
reference_name - (Optional) This is a reference name used in Caller Reference (helpful for identifying single
health_check set amongst others)
fqdn - (Optional) The fully qualified domain name of the endpoint to be checked.
type - (Required) The protocol to use when performing health checks. Valid values are HTTP , HTTPS ,
HTTP_STR_MATCH , HTTPS_STR_MATCH , TCP , CALCULATED and CLOUDWATCH_METRIC .
failure_threshold - (Required) The number of consecutive health checks that an endpoint must pass or fail.
request_interval - (Required) The number of seconds between the time that Amazon Route 53 gets a response
from your endpoint and the time that it sends the next health-check request.
resource_path - (Optional) The path that you want Amazon Route 53 to request when performing health checks.
search_string - (Optional) String searched in the first 5120 bytes of the response body for check to be considered
healthy. Only valid with HTTP_STR_MATCH and HTTPS_STR_MATCH .
measure_latency - (Optional) A Boolean value that indicates whether you want Route 53 to measure the latency
between health checkers in multiple AWS regions and your endpoint and to display CloudWatch latency graphs in the
Route 53 console.
invert_healthcheck - (Optional) A boolean value that indicates whether the status of health check should be
inverted. For example, if a health check is healthy but Inverted is True , then Route 53 considers the health check to be
unhealthy.
enable_sni - (Optional) A boolean value that indicates whether Route53 should send the fqdn to the endpoint
when performing the health check. This defaults to AWS' defaults: when the type is "HTTPS" enable_sni defaults to
true , when type is anything else enable_sni defaults to false .
child_healthchecks - (Optional) For a specified parent health check, a list of HealthCheckId values for the
associated child health checks.
child_health_threshold - (Optional) The minimum number of child health checks that must be healthy for Route
53 to consider the parent health check to be healthy. Valid values are integers between 0 and 256, inclusive
cloudwatch_alarm_region - (Optional) The CloudWatchRegion that the CloudWatch alarm was created in.
insufficient_data_health_status - (Optional) The status of the health check when CloudWatch has insufficient
data about the state of associated alarm. Valid values are Healthy , Unhealthy and LastKnownStatus .
regions - (Optional) A list of AWS regions that you want Amazon Route 53 health checkers to check the specified
endpoint from.
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
Import
Route53 Health Checks can be imported using the health check id , e.g.
NOTE: There are restrictions on the configuration of query logging. Notably, the CloudWatch log group must be in the
us-east-1 region, a permissive CloudWatch log resource policy must be in place, and the Route53 hosted zone must
be public. See Configuring Logging for DNS Queries
(https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/query-logs.html?console_help=true#query-logs-configuring) for additional details.
Example Usage
provider "aws" {
alias = "us-east-1"
region = "us-east-1"
}
name = "/aws/route53/${aws_route53_zone.example_com.name}"
retention_in_days = 30
}
resources = ["arn:aws:logs:*:*:log-group:/aws/route53/*"]
principals {
identifiers = ["route53.amazonaws.com"]
type = "Service"
}
}
}
policy_document = "${data.aws_iam_policy_document.route53-query-logging-policy.json}"
policy_name = "route53-query-logging-policy"
}
cloudwatch_log_group_arn = "${aws_cloudwatch_log_group.aws_route53_example_com.arn}"
zone_id = "${aws_route53_zone.example_com.zone_id}"
}
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Route53 query logging configurations can be imported using their ID, e.g.
Example Usage
weighted_routing_policy {
weight = 10
}
set_identifier = "dev"
records = ["dev.example.com"]
}
weighted_routing_policy {
weight = 90
}
set_identifier = "live"
records = ["live.example.com"]
}
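The weighted-routing fragments above come from two aws_route53_record resources whose opening lines were lost. A minimal reconstruction (the zone reference, record name, type and TTL are assumptions) might look like:
resource "aws_route53_record" "www-dev" {
  zone_id = "${aws_route53_zone.primary.zone_id}"
  name    = "www"
  type    = "CNAME"
  ttl     = "5"

  weighted_routing_policy {
    weight = 10
  }

  set_identifier = "dev"
  records        = ["dev.example.com"]
}

resource "aws_route53_record" "www-live" {
  zone_id = "${aws_route53_zone.primary.zone_id}"
  name    = "www"
  type    = "CNAME"
  ttl     = "5"

  weighted_routing_policy {
    weight = 90
  }

  set_identifier = "live"
  records        = ["live.example.com"]
}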
Alias record
See related part of AWS Route53 Developer Guide (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-
record-sets-choosing-alias-non-alias.html) to understand differences between alias and non-alias records.
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
}
alias {
name = "${aws_elb.main.dns_name}"
zone_id = "${aws_elb.main.zone_id}"
evaluate_target_health = true
}
}
records = [
"${aws_route53_zone.example.name_servers.0}",
"${aws_route53_zone.example.name_servers.1}",
"${aws_route53_zone.example.name_servers.2}",
"${aws_route53_zone.example.name_servers.3}",
]
}
Argument Reference
The following arguments are supported:
type - (Required) The record type. Valid values are A , AAAA , CAA , CNAME , MX , NAPTR , NS , PTR , SOA , SPF , SRV
and TXT .
records - (Required for non-alias records) A string list of records. To specify a single record value longer than 255
characters such as a TXT record for DKIM, add \"\" inside the Terraform configuration string (e.g.
"first255characters\"\"morecharacters" ).
set_identifier - (Optional) Unique identifier to differentiate records with routing policies from one another.
Required if using failover , geolocation , latency , or weighted routing policies documented below.
health_check_id - (Optional) The health check the record should be associated with.
alias - (Optional) An alias block. Conflicts with ttl & records . Alias record documented below.
failover_routing_policy - (Optional) A block indicating the routing behavior when associated health check fails. Conflicts with any other routing policy. Documented below.
geolocation_routing_policy - (Optional) A block indicating a routing policy based on the geolocation of the requestor. Conflicts with any other routing policy. Documented below.
latency_routing_policy - (Optional) A block indicating a routing policy based on the latency between the requestor and an AWS region. Conflicts with any other routing policy. Documented below.
weighted_routing_policy - (Optional) A block indicating a weighted routing policy. Conflicts with any other routing policy. Documented below.
multivalue_answer_routing_policy - (Optional) Set to true to indicate a multivalue answer routing policy. Conflicts with any other routing policy.
allow_overwrite - (Optional) Allow creation of this record in Terraform to overwrite an existing record, if any. This does not affect the ability to update the record in Terraform and does not prevent other resources within Terraform or manual Route 53 changes outside Terraform from overwriting this record. false by default. This configuration is not recommended for most environments.
Exactly one of records or alias must be specified: this determines whether it's an alias record.
name - (Required) DNS domain name for a CloudFront distribution, S3 bucket, ELB, or another resource record set in
this hosted zone.
zone_id - (Required) Hosted zone ID for a CloudFront distribution, S3 bucket, ELB, or Route 53 hosted zone. See
resource_elb.zone_id (/docs/providers/aws/r/elb.html#zone_id) for example.
evaluate_target_health - (Required) Set to true if you want Route 53 to determine whether to respond to DNS
queries using this resource record set by checking the health of the resource record set. Some resources have special
requirements, see related part of documentation
(https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values.html#rrsets-values-alias-
evaluate-target-health).
type - (Required) PRIMARY or SECONDARY . A PRIMARY record will be served if its healthcheck is passing, otherwise
the SECONDARY will be served. See http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring-options.html#dns-failover-failover-rrsets
(http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring-options.html#dns-failover-failover-rrsets)
weight - (Required) A numeric value indicating the relative weight of the record. See
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted
(http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-weighted).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
fqdn - FQDN (https://en.wikipedia.org/wiki/Fully_qualified_domain_name) built using the zone domain and name .
Import
Route53 Records can be imported using ID of the record. The ID is made up as ZONEID_RECORDNAME_TYPE_SET-IDENTIFIER
e.g.
Z4KAPRWWNC7JR_dev.example.com_NS_dev
In this example, Z4KAPRWWNC7JR is the ZoneID, dev.example.com is the Record Name, NS is the Type and dev is the Set
Identifier. Only the Set Identifier is actually optional in the ID.
Example Usage
security_group_ids = [
"${aws_security_group.sg1.id}",
"${aws_security_group.sg2.id}",
]
ip_address {
subnet_id = "${aws_subnet.sn1.id}"
}
ip_address {
subnet_id = "${aws_subnet.sn2.id}"
ip = "10.0.64.4"
}
tags {
Environment = "Prod"
}
}
Argument Reference
The following arguments are supported:
direction - (Required) The direction of DNS queries to or from the Route 53 Resolver endpoint. Valid values are
INBOUND (resolver forwards DNS queries to the DNS service for a VPC from your network or another VPC) or
OUTBOUND (resolver forwards DNS queries from the DNS service for a VPC to your network or another VPC).
ip_address - (Required) The subnets and IP addresses in your VPC that you want DNS queries to pass through on the
way from your VPCs to your network (for outbound endpoints) or on the way from your network to your VPCs (for
inbound endpoints). Described below.
security_group_ids - (Required) The ID of one or more security groups that you want to use to control access to this
VPC.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
host_vpc_id - The ID of the VPC that you want to create the resolver endpoint in.
Timeouts
aws_route53_resolver_endpoint provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
Import
Route 53 Resolver endpoints can be imported using the Route 53 Resolver endpoint ID, e.g.
Example Usage
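The example for this resource is missing here; a minimal sketch based on the arguments below (the rule and VPC references are assumptions) could be:
resource "aws_route53_resolver_rule_association" "example" {
  resolver_rule_id = "${aws_route53_resolver_rule.forward.id}"
  vpc_id           = "${aws_vpc.foo.id}"
}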
Argument Reference
The following arguments are supported:
resolver_rule_id - (Required) The ID of the resolver rule that you want to associate with the VPC.
vpc_id - (Required) The ID of the VPC that you want to associate the resolver rule with.
name - (Optional) A name for the association that you're creating between a resolver rule and a VPC.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Route53 Resolver rule associations can be imported using the id , e.g.
Example Usage
System rule
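The system-rule example is missing in this copy; a minimal sketch (the domain name and resource name are illustrative) could be:
resource "aws_route53_resolver_rule" "sys" {
  domain_name = "subdomain.example.com"
  rule_type   = "SYSTEM"
}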
Forward rule
target_ip {
ip = "123.45.67.89"
}
tags {
Environment = "Prod"
}
}
Argument Reference
The following arguments are supported:
domain_name - (Required) DNS queries for this domain name are forwarded to the IP addresses that are specified
using target_ip .
rule_type - (Required) The rule type. Valid values are FORWARD , SYSTEM and RECURSIVE .
name - (Optional) A friendly name that lets you easily find a rule in the Resolver dashboard in the Route 53 console.
resolver_endpoint_id - (Optional) The ID of the outbound resolver endpoint that you want to use to route DNS
queries to the IP addresses that you specify using target_ip . This argument should only be specified for FORWARD
type rules.
target_ip - (Optional) Configuration block(s) indicating the IPs that you want Resolver to forward DNS queries to
(documented below). This argument should only be specified for FORWARD type rules.
ip - (Required) One IP address that you want to forward DNS queries to. You can specify only IPv4 addresses.
port - (Optional) The port at ip that you want to forward DNS queries to. Default value is 53
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arn - The ARN (Amazon Resource Name) for the resolver rule.
owner_id - When a rule is shared with another AWS account, the account ID of the account that the rule is shared
with.
share_status - Whether the rule is shared and, if so, whether the current account is sharing the rule with another
account, or another account is sharing the rule with the current account. Values are NOT_SHARED , SHARED_BY_ME or
SHARED_WITH_ME .
Import
Route53 Resolver rules can be imported using the id , e.g.
NOTE: Unless explicit association ordering is required (e.g. a separate cross-account association authorization), usage of
this resource is not recommended. Use the vpc configuration blocks available within the aws_route53_zone resource
(/docs/providers/aws/r/route53_zone.html) instead.
NOTE: Terraform provides both this standalone Zone VPC Association resource and exclusive VPC associations defined
in-line in the aws_route53_zone resource (/docs/providers/aws/r/route53_zone.html) via vpc configuration blocks. At
this time, you cannot use those in-line VPC associations in conjunction with this resource and the same zone ID
otherwise it will cause a perpetual difference in plan output. You can optionally use the generic Terraform resource
lifecycle configuration block (/docs/configuration/resources.html#lifecycle) with ignore_changes in the
aws_route53_zone resource to manage additional associations via this resource.
Example Usage
resource "aws_vpc" "primary" {
cidr_block = "10.6.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
}
vpc {
vpc_id = "${aws_vpc.primary.id}"
}
lifecycle {
ignore_changes = ["vpc"]
}
}
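The aws_route53_zone_association resource itself is missing from the example above. A minimal sketch (the secondary VPC, its CIDR block, and the zone reference are assumptions) could be:
resource "aws_vpc" "secondary" {
  cidr_block           = "10.7.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
}

resource "aws_route53_zone_association" "secondary" {
  zone_id = "${aws_route53_zone.example.zone_id}"
  vpc_id  = "${aws_vpc.secondary.id}"
}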
Argument Reference
The following arguments are supported:
vpc_id - (Required) The VPC to associate with the private hosted zone.
vpc_region - (Optional) The VPC's region. Defaults to the region of the AWS provider.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Route 53 Hosted Zone Associations can be imported via the Hosted Zone ID and VPC ID, separated by a colon ( : ), e.g.
Example Usage
Public Zone
tags = {
Environment = "dev"
}
}
records = [
"${aws_route53_zone.dev.name_servers.0}",
"${aws_route53_zone.dev.name_servers.1}",
"${aws_route53_zone.dev.name_servers.2}",
"${aws_route53_zone.dev.name_servers.3}",
]
}
Private Zone
NOTE: Terraform provides both exclusive VPC associations defined in-line in this resource via vpc configuration blocks
and a separate Zone VPC Association (/docs/providers/aws/r/route53_zone_association.html) resource. At this time, you
cannot use in-line VPC associations in conjunction with any aws_route53_zone_association resources with the same
zone ID otherwise it will cause a perpetual difference in plan output. You can optionally use the generic Terraform
resource lifecycle configuration block (/docs/configuration/resources.html#lifecycle) with ignore_changes to manage
additional associations via the aws_route53_zone_association resource.
NOTE: Private zones require at least one VPC association at all times.
vpc {
vpc_id = "${aws_vpc.example.id}"
}
}
Argument Reference
The following arguments are supported:
comment - (Optional) A comment for the hosted zone. Defaults to 'Managed by Terraform'.
delegation_set_id - (Optional) The ID of the reusable delegation set whose NS records you want to assign to the
hosted zone. Conflicts with vpc as delegation sets can only be used for public zones.
force_destroy - (Optional) Whether to destroy all records (possibly managed outside of Terraform) in the zone when
destroying the zone.
vpc - (Optional) Configuration block(s) specifying VPC(s) to associate with a private hosted zone. Conflicts with the
delegation_set_id argument in this resource and any aws_route53_zone_association resource
(/docs/providers/aws/r/route53_zone_association.html) specifying the same zone ID. Detailed below.
vpc_region - (Optional) Region of the VPC to associate. Defaults to AWS provider region.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
zone_id - The Hosted Zone ID. This can be referenced by zone records.
name_servers - A list of name servers in associated (or default) delegation set. Find more about delegation sets in
AWS docs (https://docs.aws.amazon.com/Route53/latest/APIReference/actions-on-reusable-delegation-sets.html).
Import
Route53 Zones can be imported using the zone id , e.g.
NOTE on Route Tables and Routes: Terraform currently provides both a standalone Route resource and a Route Table
(/docs/providers/aws/r/route_table.html) resource with routes defined in-line. At this time you cannot use a Route Table
with in-line routes in conjunction with any Route resources. Doing so will cause a conflict of rule settings and will
overwrite rules.
Example usage:
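The example block is missing in this copy; a minimal sketch of a route pointing the default destination at an Internet gateway (resource names assumed) could be:
resource "aws_route" "internet_access" {
  route_table_id         = "${aws_route_table.main.id}"
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = "${aws_internet_gateway.main.id}"
}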
Argument Reference
The following arguments are supported:
Note that the default route, mapping the VPC's CIDR block to "local", is created implicitly and cannot be specified.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
NOTE: Only the arguments that are configured (one of the above) will be exported as an attribute once the resource is
created.
Timeouts
aws_route provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
Import
Individual routes can be imported using ROUTETABLEID_DESTINATION .
For example, import a route in route table rtb-656C65616E6F72 with an IPv4 destination CIDR of 10.42.0.0/16 like this:
Import a route in route table rtb-656C65616E6F72 with an IPv6 destination CIDR of 2620:0:2d0:200::8/125 similarly:
Example Usage
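The example for this resource is missing here; a minimal sketch associating a subnet with a route table (resource names assumed) could be:
resource "aws_route_table_association" "a" {
  subnet_id      = "${aws_subnet.foo.id}"
  route_table_id = "${aws_route_table.bar.id}"
}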
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
NOTE: Attempting to associate a route table with a subnet, where either is already associated, will result in an error
(e.g., Resource.AlreadyAssociated: the specified association for route table rtb-4176657279 conflicts
with an existing association ) unless you first import the original association.
Route table associations can be imported using the subnet and route table IDs. For example, use this command:
NOTE on Route Tables and Routes: Terraform currently provides both a standalone Route resource
(/docs/providers/aws/r/route.html) and a Route Table resource with routes defined in-line. At this time you cannot use
a Route Table with in-line routes in conjunction with any Route resources. Doing so will cause a conflict of rule settings
and will overwrite rules.
NOTE on gateway_id and nat_gateway_id : The AWS API is very forgiving with these two attributes and the
aws_route_table resource can be created with a NAT ID specified as a Gateway ID attribute. This will lead to a
permanent diff between your configuration and state file, as the API returns the correct parameters in the returned
route table. If you're experiencing constant diffs in your aws_route_table resources, the first thing to check is
whether or not you're specifying a NAT ID instead of a Gateway ID, or vice-versa.
route {
cidr_block = "10.0.1.0/24"
gateway_id = "${aws_internet_gateway.main.id}"
}
route {
ipv6_cidr_block = "::/0"
egress_only_gateway_id = "${aws_egress_only_internet_gateway.foo.id}"
}
tags = {
Name = "main"
}
}
Argument Reference
The following arguments are supported:
Note that the default route, mapping the VPC's CIDR block to "local", is created implicitly and cannot be specified.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
NOTE: Only the target that is entered is exported as a readable attribute once the route resource is created.
owner_id - The ID of the AWS account that owns the route table.
Import
Route Tables can be imported using the route table id . For example, to import route table rtb-4e616f6d69 , use this
command:
NOTE: Each AWS account may only have one S3 Public Access Block configuration. Multiple configurations of the
resource against the same AWS account will cause a perpetual difference.
Advanced usage: To use a custom API endpoint for this Terraform resource, use the s3control endpoint provider
configuration (/docs/providers/aws/index.html#s3control), not the s3 endpoint provider configuration.
Example Usage
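The example block is missing here; a minimal sketch based on the arguments below could be:
resource "aws_s3_account_public_access_block" "example" {
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}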
Argument Reference
The following arguments are supported:
account_id - (Optional) AWS account ID to con gure. Defaults to automatically determined account ID of the
Terraform AWS provider.
block_public_acls - (Optional) Whether Amazon S3 should block public ACLs for buckets in this account. Defaults to
false . Enabling this setting does not affect existing policies or ACLs. When set to true causes the following
behavior:
PUT Bucket acl and PUT Object acl calls will fail if the specified ACL allows public access.
PUT Object calls will fail if the request includes an object ACL.
block_public_policy - (Optional) Whether Amazon S3 should block public bucket policies for buckets in this
account. Defaults to false . Enabling this setting does not affect existing bucket policies. When set to true causes
Amazon S3 to:
Reject calls to PUT Bucket policy if the specified bucket policy allows public access.
ignore_public_acls - (Optional) Whether Amazon S3 should ignore public ACLs for buckets in this account. Defaults
to false . Enabling this setting does not affect the persistence of any existing ACLs and doesn't prevent new public
ACLs from being set. When set to true causes Amazon S3 to:
Ignore all public ACLs on buckets in this account and any objects that they contain.
restrict_public_buckets - (Optional) Whether Amazon S3 should restrict public bucket policies for buckets in this
account. Defaults to false . Enabling this setting does not affect previously stored bucket policies, except that public
and cross-account access within any public bucket policy, including non-public delegation to specific accounts, is
blocked. When set to true :
Only the bucket owner and AWS Services can access buckets with public policies.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
id - AWS account ID
Import
aws_s3_account_public_access_block can be imported by using the AWS account ID, e.g.
Example Usage
tags = {
Name = "My bucket"
Environment = "Dev"
}
}
website {
index_document = "index.html"
error_document = "error.html"
routing_rules = <<EOF
[{
"Condition": {
"KeyPrefixEquals": "docs/"
},
"Redirect": {
"ReplaceKeyPrefixWith": "documents/"
}
}]
EOF
}
}
Using CORS
resource "aws_s3_bucket" "b" {
bucket = "s3-website-test.hashicorp.com"
acl = "public-read"
cors_rule {
allowed_headers = ["*"]
allowed_methods = ["PUT", "POST"]
allowed_origins = ["https://s3-website-test.hashicorp.com"]
expose_headers = ["ETag"]
max_age_seconds = 3000
}
}
Using versioning
versioning {
enabled = true
}
}
Enable Logging
logging {
target_bucket = "${aws_s3_bucket.log_bucket.id}"
target_prefix = "log/"
}
}
lifecycle_rule {
id = "log"
id = "log"
enabled = true
prefix = "log/"
tags = {
"rule" = "log"
"autoclean" = "true"
}
transition {
days = 30
storage_class = "STANDARD_IA"
}
transition {
days = 60
storage_class = "GLACIER"
}
expiration {
days = 90
}
}
lifecycle_rule {
id = "tmp"
prefix = "tmp/"
enabled = true
expiration {
date = "2016-01-12"
}
}
}
versioning {
enabled = true
}
lifecycle_rule {
prefix = "config/"
enabled = true
noncurrent_version_transition {
days = 30
storage_class = "STANDARD_IA"
}
noncurrent_version_transition {
days = 60
storage_class = "GLACIER"
}
noncurrent_version_expiration {
days = 90
}
}
}
provider "aws" {
region = "eu-west-1"
}
provider "aws" {
alias = "central"
region = "eu-central-1"
}
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
POLICY
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetReplicationConfiguration",
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"${aws_s3_bucket.bucket.arn}"
]
},
{
"Action": [
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl"
"s3:GetObjectVersionAcl"
],
"Effect": "Allow",
"Resource": [
"${aws_s3_bucket.bucket.arn}/*"
]
},
{
"Action": [
"s3:ReplicateObject",
"s3:ReplicateDelete"
],
"Effect": "Allow",
"Resource": "${aws_s3_bucket.destination.arn}/*"
}
]
}
POLICY
}
versioning {
enabled = true
}
}
versioning {
enabled = true
}
replication_configuration {
role = "${aws_iam_role.replication.arn}"
rules {
id = "foobar"
prefix = "foo"
status = "Enabled"
destination {
bucket = "${aws_s3_bucket.destination.arn}"
storage_class = "STANDARD"
}
}
}
}
Enable Default Server Side Encryption
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = "${aws_kms_key.mykey.arn}"
sse_algorithm = "aws:kms"
}
}
}
}
Argument Reference
The following arguments are supported:
bucket - (Optional, Forces new resource) The name of the bucket. If omitted, Terraform will assign a random, unique
name.
bucket_prefix - (Optional, Forces new resource) Creates a unique bucket name beginning with the specified prefix.
Conflicts with bucket .
force_destroy - (Optional, Default: false ) A boolean that indicates all objects (including any locked objects
(https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html)) should be deleted from the bucket so
that the bucket can be destroyed without error. These objects are not recoverable.
acceleration_status - (Optional) Sets the accelerate configuration of an existing bucket. Can be Enabled or
Suspended .
region - (Optional) If specified, the AWS region this bucket should reside in. Otherwise, the region used by the callee.
request_payer - (Optional) Specifies who should bear the cost of Amazon S3 data transfer. Can be either
BucketOwner or Requester . By default, the owner of the S3 bucket would incur the costs of any data transfer. See
Requester Pays Buckets (http://docs.aws.amazon.com/AmazonS3/latest/dev/RequesterPaysBuckets.html) developer
guide for more information.
index_document - (Required, unless using redirect_all_requests_to ) Amazon S3 returns this index document
when requests are made to the root domain or any of the subfolders.
error_document - (Optional) An absolute path to the document to return in case of a 4XX error.
redirect_all_requests_to - (Optional) A hostname to redirect all website requests for this bucket to. Hostname
can optionally be prefixed with a protocol ( http:// or https:// ) to use when redirecting requests. The default is
the protocol that is used in the original request.
allowed_methods (Required) Specifies which methods are allowed. Can be GET , PUT , POST , DELETE or HEAD .
max_age_seconds (Optional) Specifies time in seconds that browser can cache the response for a preflight request.
The versioning object supports the following:
enabled - (Optional) Enable versioning. Once you version-enable a bucket, it can never return to an unversioned
state. You can, however, suspend versioning on that bucket.
mfa_delete - (Optional) Enable MFA delete for either Change the versioning state of your bucket or
Permanently delete an object version . Default is false .
target_bucket - (Required) The name of the bucket that will receive the log objects.
prefix - (Optional) Object key prefix identifying one or more objects to which the rule applies.
The expiration object supports the following:
date (Optional) Specifies the date after which you want the corresponding action to take effect.
days (Optional) Specifies the number of days after object creation when the specific rule action takes effect.
The transition object supports the following:
date (Optional) Specifies the date after which you want the corresponding action to take effect.
days (Optional) Specifies the number of days after object creation when the specific rule action takes effect.
storage_class (Required) Specifies the Amazon S3 storage class to which you want the object to transition. Can be
ONEZONE_IA , STANDARD_IA , INTELLIGENT_TIERING , GLACIER , or DEEP_ARCHIVE .
The noncurrent_version_expiration object supports the following:
days (Required) Specifies the number of days noncurrent object versions expire.
The noncurrent_version_transition object supports the following:
days (Required) Specifies the number of days noncurrent object versions transition.
storage_class (Required) Specifies the Amazon S3 storage class to which you want the noncurrent versions object
to transition. Can be ONEZONE_IA , STANDARD_IA , INTELLIGENT_TIERING , GLACIER , or DEEP_ARCHIVE .
role - (Required) The ARN of the IAM role for Amazon S3 to assume when replicating the objects.
rules - (Required) Specifies the rules managing the replication (documented below).
destination - (Required) Specifies the destination for the rule (documented below).
prefix - (Optional) Object keyname prefix identifying one or more objects to which the rule applies.
status - (Required) The status of the rule. Either Enabled or Disabled . The rule is ignored if status is not Enabled.
filter - (Optional) Filter that identifies subset of objects to which the replication rule applies (documented below).
NOTE on prefix and filter : Amazon S3's latest version of the replication configuration is V2, which includes the
filter attribute for replication rules. With the filter attribute, you can specify object filters based on the object key
prefix, tags, or both to scope the objects that the rule applies to. Replication configuration V1 supports filtering based
only on the prefix attribute. For backwards compatibility, Amazon S3 continues to support the V1 configuration.
For a specific rule, prefix conflicts with filter .
If any rule has filter specified, then all rules must.
priority is optional (with a default value of 0 ) but must be unique between multiple rules.
bucket - (Required) The ARN of the S3 bucket where you want Amazon S3 to store replicas of the object identified by
the rule.
storage_class - (Optional) The class of storage used to store the object. Can be STANDARD , REDUCED_REDUNDANCY ,
STANDARD_IA , ONEZONE_IA , INTELLIGENT_TIERING , GLACIER , or DEEP_ARCHIVE .
replica_kms_key_id - (Optional) Destination KMS encryption key ARN for SSE-KMS replication. Must be used in
conjunction with sse_kms_encrypted_objects source selection criteria.
access_control_translation - (Optional) Specifies the overrides to use for object owners on replication. Must be
used in conjunction with account_id owner override configuration.
account_id - (Optional) The Account ID to use for overriding the object owner on replication. Must be used in
conjunction with access_control_translation override configuration.
The source_selection_criteria object supports the following:
sse_kms_encrypted_objects - (Optional) Match SSE-KMS encrypted objects (documented below). If specified,
replica_kms_key_id in destination must be specified as well.
prefix - (Optional) Object keyname prefix that identifies subset of objects to which the rule applies.
tags - (Optional) A mapping of tags that identifies subset of objects to which the rule applies. The rule applies only to
objects having all the tags in its tagset.
rule - (required) A single object for server-side encryption by default configuration. (documented below)
sse_algorithm - (required) The server-side encryption algorithm to use. Valid values are AES256 and aws:kms
kms_master_key_id - (optional) The AWS KMS master key ID used for the SSE-KMS encryption. This can only be used
when you set the value of sse_algorithm as aws:kms . The default aws/s3 AWS KMS master key is used if this
element is absent while the sse_algorithm is aws:kms .
owner - (Required) The override value for the owner on replicated objects. Currently only Destination is supported.
object_lock_enabled - (Required) Indicates whether this bucket has an Object Lock configuration enabled. Valid
value is Enabled .
rule - (Optional) The Object Lock rule in place for this bucket.
default_retention - (Required) The default retention period that you want to apply to new objects placed in this
bucket.
mode - (Required) The default Object Lock retention mode you want to apply to new objects placed in this bucket.
Valid values are GOVERNANCE and COMPLIANCE .
days - (Optional) The number of days that you want to specify for the default retention period.
years - (Optional) The number of years that you want to specify for the default retention period.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
bucket_regional_domain_name - The bucket region-specific domain name. This is the bucket domain name including the
region name; refer here (https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) for the format.
Note: CloudFront allows specifying an S3 region-specific endpoint when creating an S3 origin; this prevents redirect
issues (https://forums.aws.amazon.com/thread.jspa?threadID=216814) from CloudFront to the S3 origin URL.
website_endpoint - The website endpoint, if the bucket is configured with a website. If not, this will be an empty
string.
website_domain - The domain of the website endpoint, if the bucket is configured with a website. If not, this will be
an empty string. This is used to create Route 53 alias records.
Import
S3 bucket can be imported using the bucket , e.g.
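For example, assuming a resource named bucket:
$ terraform import aws_s3_bucket.bucket bucket-name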
Example Usage
included_object_versions = "All"
schedule {
frequency = "Daily"
}
destination {
bucket {
format = "ORC"
bucket_arn = "${aws_s3_bucket.inventory.arn}"
}
}
}
included_object_versions = "All"
schedule {
frequency = "Daily"
}
filter {
prefix = "documents/"
}
destination {
bucket {
format = "ORC"
bucket_arn = "${aws_s3_bucket.inventory.arn}"
prefix = "inventory"
}
}
}
Argument Reference
The following arguments are supported:
bucket - (Required) The name of the bucket to put inventory configuration.
name - (Required) Unique identifier of the inventory configuration for the bucket.
included_object_versions - (Required) Object filtering that accepts a prefix (documented below). Can be All or
Current .
schedule - (Required) Contains the frequency for generating inventory results (documented below).
destination - (Required) Destination bucket where inventory list files are written (documented below).
enabled - (Optional, Default: true) Specifies whether the inventory is enabled or disabled.
optional_fields - (Optional) Contains the optional fields that are included in the inventory results.
frequency - (Required) Specifies how frequently inventory results are produced. Can be Daily or Weekly .
bucket - (Required) The S3 bucket configuration where inventory results are published (documented below).
format - (Required) Specifies the output format of the inventory results. Can be CSV , ORC (https://orc.apache.org/)
or Parquet (https://parquet.apache.org/).
account_id - (Optional) The ID of the account that owns the destination bucket. Recommended to be set to prevent
problems if the destination bucket ownership changes.
encryption - (Optional) Contains the type of server-side encryption to use to encrypt the inventory (documented
below).
sse_kms - (Optional) Specifies to use server-side encryption with AWS KMS-managed keys to encrypt the inventory file
(documented below).
sse_s3 - (Optional) Specifies to use server-side encryption with Amazon S3-managed keys (SSE-S3) to encrypt the
inventory file.
key_id - (Required) The ARN of the KMS customer master key (CMK) used to encrypt the inventory file.
Import
S3 bucket inventory configurations can be imported using bucket:inventory , e.g.
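For example, assuming a bucket my-bucket and an inventory configuration named EntireBucket (both illustrative):
$ terraform import aws_s3_bucket_inventory.example my-bucket:EntireBucket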
Example Usage
filter {
prefix = "documents/"
tags = {
priority = "high"
class = "blue"
}
}
}
Argument Reference
The following arguments are supported:
bucket - (Required) The name of the bucket to put metric configuration.
name - (Required) Unique identifier of the metrics configuration for the bucket.
filter - (Optional) Object filtering (http://docs.aws.amazon.com/AmazonS3/latest/dev/metrics-configurations.html#metrics-configurations-filter)
that accepts a prefix, tags, or a logical AND of prefix and tags (documented below).
Import
S3 bucket metric configurations can be imported using bucket:metric , e.g.
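For example, assuming a bucket my-bucket and a metrics configuration named EntireBucket (both illustrative):
$ terraform import aws_s3_bucket_metric.example my-bucket:EntireBucket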
NOTE: S3 Buckets only support a single notification configuration. Declaring multiple aws_s3_bucket_notification
resources to the same S3 Bucket will cause a perpetual difference in configuration. See the example "Trigger multiple
Lambda functions" for an option.
Example Usage
policy = <<POLICY
{
"Version":"2012-10-17",
"Statement":[{
"Effect": "Allow",
"Principal": {"AWS":"*"},
"Action": "SNS:Publish",
"Resource": "arn:aws:sns:*:*:s3-event-notification-topic",
"Condition":{
"ArnLike":{"aws:SourceArn":"${aws_s3_bucket.bucket.arn}"}
}
}]
}
POLICY
}
topic {
topic_arn = "${aws_sns_topic.topic.arn}"
events = ["s3:ObjectCreated:*"]
filter_suffix = ".log"
}
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "sqs:SendMessage",
"Resource": "arn:aws:sqs:*:*:s3-event-notification-queue",
"Condition": {
"ArnEquals": { "aws:SourceArn": "${aws_s3_bucket.bucket.arn}" }
}
}
]
}
POLICY
}
queue {
queue_arn = "${aws_sqs_queue.queue.arn}"
events = ["s3:ObjectCreated:*"]
filter_suffix = ".log"
}
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow"
}
]
}
EOF
}
lambda_function {
lambda_function_arn = "${aws_lambda_function.func.arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "AWSLogs/"
filter_suffix = ".log"
}
}
lambda_function {
lambda_function_arn = "${aws_lambda_function.func1.arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "AWSLogs/"
filter_suffix = ".log"
}
}
lambda_function {
lambda_function_arn = "${aws_lambda_function.func2.arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "OtherLogs/"
filter_suffix = ".log"
}
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "sqs:SendMessage",
"Resource": "arn:aws:sqs:*:*:s3-event-notification-queue",
"Condition": {
"ArnEquals": { "aws:SourceArn": "${aws_s3_bucket.bucket.arn}" }
}
}
]
}
POLICY
}
queue {
id = "image-upload-event"
queue_arn = "${aws_sqs_queue.queue.arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "images/"
}
queue {
id = "video-upload-event"
queue_arn = "${aws_sqs_queue.queue.arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "videos/"
}
}
For Terraform's JSON syntax (https://www.terraform.io/docs/configuration/syntax.html), use an array instead of defining the
queue key twice.
{
  "bucket": "${aws_s3_bucket.bucket.id}",
  "queue": [
    {
      "id": "image-upload-event",
      "queue_arn": "${aws_sqs_queue.queue.arn}",
      "events": ["s3:ObjectCreated:*"],
      "filter_prefix": "images/"
    },
    {
      "id": "video-upload-event",
      "queue_arn": "${aws_sqs_queue.queue.arn}",
      "events": ["s3:ObjectCreated:*"],
      "filter_prefix": "videos/"
    }
  ]
}
Argument Reference
The following arguments are supported:
bucket - (Required) The name of the bucket to put notification configuration.
topic - (Optional) The notification configuration to SNS Topic (documented below).
queue - (Optional) The notification configuration to SQS Queue (documented below).
lambda_function - (Optional, Multiple) Used to configure notifications to a Lambda Function (documented below).
The topic , queue , and lambda_function configuration blocks each support the following:
id - (Optional) Specifies unique identifier for each of the notification configurations.
Import
S3 bucket notification can be imported using the bucket , e.g.
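For example, assuming a resource named bucket_notification:
$ terraform import aws_s3_bucket_notification.bucket_notification bucket-name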
Example Usage
Uploading a file to a bucket
etag = "${filemd5("path/
/ to/
/ file")}"
}
S3 Object Lock
resource "aws_s3_bucket" "examplebucket" {
bucket = "examplebuckettftest"
acl = "private"
versioning {
enabled = true
}
object_lock_configuration {
object_lock_enabled = "Enabled"
}
}
object_lock_legal_hold_status = "ON"
object_lock_mode = "GOVERNANCE"
object_lock_retain_until_date = "2021-12-31T23:59:60Z"
force_destroy = true
}
Argument Reference
Note: If you specify content_encoding you are responsible for encoding the body appropriately. source , content ,
and content_base64 all expect already encoded/compressed bytes.
source - (Optional, conflicts with content and content_base64 ) The path to a file that will be read and uploaded as
raw bytes for the object content.
content - (Optional, conflicts with source and content_base64 ) Literal string value to use as the object content,
which will be uploaded as UTF-8-encoded text.
content_base64 - (Optional, conflicts with source and content ) Base64-encoded data that will be decoded and
uploaded as raw bytes for the object content. This allows safely uploading non-UTF8 binary data, but is recommended
only for small content such as the result of the gzipbase64 function with small text strings. For larger objects, use
source to stream the content from a disk file.
cache_control - (Optional) Specifies caching behavior along the request/reply chain Read w3c cache_control
(http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9) for further details.
content_disposition - (Optional) Specifies presentational information for the object. Read w3c content_disposition
(http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1) for further information.
content_encoding - (Optional) Specifies what content encodings have been applied to the object and thus what
decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. Read
w3c content encoding (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11) for further information.
content_type - (Optional) A standard MIME type describing the format of the object data, e.g. application/octet-
stream. All Valid MIME Types are valid for this input.
etag - (Optional) Used to trigger updates. The only meaningful value is ${filemd5("path/to/file")} (Terraform
0.11.12 or later) or ${md5(file("path/to/file"))} (Terraform 0.11.11 or earlier). This attribute is not compatible
with KMS encryption, kms_key_id or server_side_encryption = "aws:kms" .
server_side_encryption - (Optional) Specifies server-side encryption of the object in S3. Valid values are " AES256 "
and " aws:kms ".
kms_key_id - (Optional) Specifies the AWS KMS Key ARN to use for object encryption. This value is a fully qualified
ARN of the KMS Key. If using aws_kms_key , use the exported arn attribute: kms_key_id =
"${aws_kms_key.foo.arn}"
metadata - (Optional) A mapping of keys/values to provision metadata (will be automatically prefixed by x-amz-
meta- , note that only lowercase labels are currently supported by the AWS Go API).
force_destroy - (Optional) Allow the object to be deleted by removing any legal hold on any object version. Default
is false . This value should be set to true only if the bucket has S3 object lock enabled.
If no content is provided through source , content or content_base64 , then the object will be empty.
Note: Terraform ignores all leading / s in the object's key and treats multiple / s in the rest of the object's key as a
single / , so values of /index.html and index.html correspond to the same S3 object as do
first//second///third// and first/second/third/ .
Attributes Reference
The following attributes are exported
etag - the ETag generated for the object (an MD5 sum of the object content). For plaintext objects or objects
encrypted with an AWS-managed key, the hash is an MD5 digest of the object data. For objects encrypted with a KMS
key or objects created by either the Multipart Upload or Part Copy operation, the hash is not an MD5 digest,
regardless of the method of encryption. More information on possible values can be found on Common Response
Headers (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html).
version_id - A unique version ID value for the object, if bucket versioning is enabled.
Resource: aws_s3_bucket_policy
Attaches a policy to an S3 bucket resource.
Example Usage
Basic Usage
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Argument Reference
The following arguments are supported:
bucket - (Required) The name of the bucket to which to apply the policy.
policy - (Required) The text of the policy. For more information about building AWS IAM policy documents with
Terraform, see the AWS IAM Policy Document Guide (/docs/providers/aws/guides/iam-policy-documents.html).
Import
S3 bucket policies can be imported using the bucket name, e.g.
$ terraform import aws_s3_bucket_policy.example my-bucket-name
Resource: aws_s3_bucket_public_access_block
Manages S3 bucket-level Public Access Block configuration. For more information about these settings, see the AWS S3 Block
Public Access documentation (https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html).
Example Usage
block_public_acls = true
block_public_policy = true
}
Argument Reference
The following arguments are supported:
bucket - (Required) S3 Bucket to which this Public Access Block configuration should be applied.
block_public_acls - (Optional) Whether Amazon S3 should block public ACLs for this bucket. Defaults to false .
Enabling this setting does not affect existing policies or ACLs. When set to true causes the following behavior:
PUT Bucket acl and PUT Object acl calls will fail if the specified ACL allows public access.
PUT Object calls will fail if the request includes an object ACL.
block_public_policy - (Optional) Whether Amazon S3 should block public bucket policies for this bucket. Defaults
to false . Enabling this setting does not affect the existing bucket policy. When set to true causes Amazon S3 to:
Reject calls to PUT Bucket policy if the specified bucket policy allows public access.
ignore_public_acls - (Optional) Whether Amazon S3 should ignore public ACLs for this bucket. Defaults to false .
Enabling this setting does not affect the persistence of any existing ACLs and doesn't prevent new public ACLs from
being set. When set to true causes Amazon S3 to:
Ignore public ACLs on this bucket and any objects that it contains.
restrict_public_buckets - (Optional) Whether Amazon S3 should restrict public bucket policies for this bucket.
Defaults to false . Enabling this setting does not affect the previously stored bucket policy, except that public and
cross-account access within the public bucket policy, including non-public delegation to specific accounts, is blocked.
When set to true :
Only the bucket owner and AWS Services can access this bucket if it has a public policy.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Import
aws_s3_bucket_public_access_block can be imported by using the bucket name, e.g.
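For example, assuming a resource named example and a bucket my-bucket:
$ terraform import aws_s3_bucket_public_access_block.example my-bucket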
Example Usage
Basic usage:
production_variants {
variant_name = "variant-1"
model_name = "${aws_sagemaker_model.m.name}"
initial_instance_count = 1
instance_type = "ml.t2.medium"
}
tags {
Name = "foo"
}
}
Argument Reference
The following arguments are supported:
kms_key_arn - (Optional) Amazon Resource Name (ARN) of an AWS Key Management Service key that Amazon
SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.
name - (Optional) The name of the endpoint configuration. If omitted, Terraform will assign a random, unique name.
accelerator_type (Optional) - The size of the Elastic Inference (EI) instance to use for the production variant.
initial_variant_weight (Optional) - Determines initial traffic distribution among all of the models that you specify
in the endpoint configuration. If unspecified, it defaults to 1.0.
variant_name - (Optional) The name of the variant. If omitted, Terraform will assign a random, unique name.
Attributes Reference
The following attributes are exported:
arn - The Amazon Resource Name (ARN) assigned by AWS to this endpoint configuration.
Import
Endpoint configurations can be imported using the name , e.g.
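For example, assuming an endpoint configuration named my-endpoint-config (illustrative):
$ terraform import aws_sagemaker_endpoint_configuration.example my-endpoint-config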
Example Usage
Basic usage:
tags = {
Name = "foo"
}
}
Argument Reference
The following arguments are supported:
name - (Optional) The name of the endpoint. If omitted, Terraform will assign a random, unique name.
Attributes Reference
The following attributes are exported:
arn - The Amazon Resource Name (ARN) assigned by AWS to this endpoint.
Import
Endpoints can be imported using the name , e.g.
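For example, assuming an endpoint named my-endpoint (illustrative):
$ terraform import aws_sagemaker_endpoint.example my-endpoint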
Example Usage
Basic usage:
primary_container {
image = "174872318107.dkr.ecr.us-west-2.amazonaws.com/kmeans:1"
}
}
principals {
type = "Service"
identifiers = ["sagemaker.amazonaws.com"]
}
}
}
Argument Reference
The following arguments are supported:
name - (Optional) The name of the model (must be unique). If omitted, Terraform will assign a random, unique name.
primary_container - (Optional) The primary docker image containing inference code that is used when the model is
deployed for predictions. If not specified, the container argument is required. Fields are documented below.
execution_role_arn - (Required) A role that SageMaker can assume to access model artifacts and docker images for
deployment.
container (Optional) - Specifies containers in the inference pipeline. If not specified, the primary_container
argument is required. Fields are documented below.
enable_network_isolation (Optional) - Isolates the model container. No inbound or outbound network calls can be
made to or from the model container.
vpc_config (Optional) - Specifies the VPC that you want your model to connect to. VpcConfig is used in hosting
services and in batch transform.
image - (Required) The registry path where the inference code image is stored in Amazon ECR.
model_data_url - (Optional) The URL for the S3 location where model artifacts are stored.
environment - (Optional) Environment variables for the Docker container. A list of key value pairs.
Attributes Reference
The following attributes are exported:
arn - The Amazon Resource Name (ARN) assigned by AWS to this model.
Import
Models can be imported using the name , e.g.
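For example, assuming a model named my-model (illustrative):
$ terraform import aws_sagemaker_model.example my-model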
Example Usage
Basic usage:
tags = {
Name = "foo"
}
}
Argument Reference
The following arguments are supported:
role_arn - (Required) The ARN of the IAM role to be used by the notebook instance which allows SageMaker to call
other services on your behalf.
kms_key_id - (Optional) The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt
the model artifacts at rest using Amazon S3 server-side encryption.
lifecycle_config_name - (Optional) The name of a lifecycle configuration to associate with the notebook instance.
Attributes Reference
The following attributes are exported:
arn - The Amazon Resource Name (ARN) assigned by AWS to this notebook instance.
Import
Sagemaker Notebook Instances can be imported using the name , e.g.
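For example, assuming a notebook instance named my-notebook-instance (illustrative):
$ terraform import aws_sagemaker_notebook_instance.example my-notebook-instance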
Example Usage
Usage:
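A minimal sketch, assuming illustrative names and base64-encoded shell snippets for the on_create and on_start hooks:
resource "aws_sagemaker_notebook_instance_lifecycle_configuration" "lc" {
  name      = "foo"
  on_create = "${base64encode("echo foo")}"
  on_start  = "${base64encode("echo bar")}"
}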
Argument Reference
The following arguments are supported:
name - (Optional) The name of the lifecycle configuration (must be unique). If omitted, Terraform will assign a random,
unique name.
on_create - (Optional) A shell script (base64-encoded) that runs only once when the SageMaker Notebook Instance is
created.
on_start - (Optional) A shell script (base64-encoded) that runs every time the SageMaker Notebook Instance is
started including the time it's created.
Attributes Reference
The following attributes are exported:
arn - The Amazon Resource Name (ARN) assigned by AWS to this lifecycle configuration.
Import
Notebook instance lifecycle configurations can be imported using the name , e.g.
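For example, assuming a lifecycle configuration named foo (illustrative):
$ terraform import aws_sagemaker_notebook_instance_lifecycle_configuration.lc foo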
Example Usage
Basic
NOTE: Configuring rotation causes the secret to rotate once as soon as you store the secret. Before you do this, you
must ensure that all of your applications that use the credentials stored in the secret are updated to retrieve the secret
from AWS Secrets Manager. The old credentials might no longer be usable after the initial rotation and any applications
that you fail to update will break as soon as the old credentials are no longer valid.
NOTE: If you cancel a rotation that is in progress (by removing the rotation configuration), it can leave the
VersionStage labels in an unexpected state. Depending on what step of the rotation was in progress, you might need to
remove the staging label AWSPENDING from the partially created version, specified by the SecretVersionId response
value. You should also evaluate the partially rotated new version to see if it should be deleted, which you can do by
removing all staging labels from the new version's VersionStage field.
rotation_rules {
automatically_after_days = 7
}
}
Argument Reference
The following arguments are supported:
name - (Optional) Specifies the friendly name of the new secret. The secret name can consist of uppercase letters,
lowercase letters, digits, and any of the following characters: /_+=.@- Conflicts with name_prefix .
name_prefix - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with name .
kms_key_id - (Optional) Specifies the ARN or alias of the AWS KMS customer master key (CMK) to be used to encrypt
the secret values in the versions stored in this secret. If you don't specify this value, then Secrets Manager defaults to
using the AWS account's default CMK (the one named aws/secretsmanager ). If the default KMS CMK with that name
doesn't yet exist, then AWS Secrets Manager creates it for you automatically the first time.
recovery_window_in_days - (Optional) Specifies the number of days that AWS Secrets Manager waits before it can
delete the secret. This value can be 0 to force deletion without recovery or range from 7 to 30 days. The default
value is 30 .
rotation_lambda_arn - (Optional) Specifies the ARN of the Lambda function that can rotate the secret.
rotation_rules - (Optional) A structure that defines the rotation configuration for this secret. Defined below.
tags - (Optional) Specifies a key-value map of user-defined tags that are attached to the secret.
rotation_rules
automatically_after_days - (Required) Specifies the number of days between automatic scheduled rotations of the
secret.
Attribute Reference
id - Amazon Resource Name (ARN) of the secret.
Import
aws_secretsmanager_secret can be imported by using the secret Amazon Resource Name (ARN), e.g.
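For example, using an illustrative secret ARN:
$ terraform import aws_secretsmanager_secret.example arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456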
NOTE: If the AWSCURRENT staging label is present on this version during resource deletion, that label cannot be
removed and will be skipped to prevent errors when fully deleting the secret. That label will leave this secret version
active even after the resource is deleted from Terraform unless the secret itself is deleted. Move the AWSCURRENT
staging label before or after deleting this resource from Terraform to fully trigger version deprecation if necessary.
Example Usage
Key-Value Pairs
Secrets Manager also accepts key-value pairs in JSON.
variable "example" {
default = {
key1 = "value1"
key2 = "value2"
}
type = "map"
}
Reading key-value pairs from JSON back into a native Terraform map can be accomplished in Terraform 0.12 and later with
the jsondecode() function (https://www.terraform.io/docs/configuration/functions/jsondecode.html):
output "example" {
value = jsondecode(aws_secretsmanager_secret_version.example
example.secret_string
secret_string)["key1"]
}
Argument Reference
The following arguments are supported:
secret_id - (Required) Specifies the secret to which you want to add a new version. You can specify either the
Amazon Resource Name (ARN) or the friendly name of the secret. The secret must already exist.
secret_string - (Optional) Specifies text data that you want to encrypt and store in this version of the secret. This is
required if secret_binary is not set.
secret_binary - (Optional) Specifies binary data that you want to encrypt and store in this version of the secret. This
is required if secret_string is not set. Needs to be encoded to base64.
version_stages - (Optional) Specifies a list of staging labels that are attached to this version of the secret. A staging
label must be unique to a single version of the secret. If you specify a staging label that's already associated with a
different version of the same secret then that staging label is automatically removed from the other version and
attached to this version. If you do not specify a value, then AWS Secrets Manager automatically moves the staging label
AWSCURRENT to this new version on creation.
NOTE: If version_stages is configured, you must include the AWSCURRENT staging label if this secret version is the
only version or if the label is currently present on this secret version, otherwise Terraform will show a perpetual
difference.
Attribute Reference
arn - The ARN of the secret.
Import
aws_secretsmanager_secret_version can be imported by using the secret ID and version ID, e.g.
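For example, using an illustrative secret ARN and version ID joined by a pipe character:
$ terraform import aws_secretsmanager_secret_version.example 'arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456|xxxxx-xxxxxxx-xxxxxxx-xxxxx'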
NOTE on Security Groups and Security Group Rules: Terraform currently provides both a standalone Security Group
Rule resource (/docs/providers/aws/r/security_group_rule.html) (a single ingress or egress rule), and a Security
Group resource with ingress and egress rules defined in-line. At this time you cannot use a Security Group with in-
line rules in conjunction with any Security Group Rule resources. Doing so will cause a conflict of rule settings and will
overwrite rules.
NOTE: Referencing Security Groups across VPC peering has certain restrictions. More information is available in the VPC
Peering User Guide (https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html).
NOTE: Due to AWS Lambda improved VPC networking changes that began deploying in September 2019
(https://aws.amazon.com/blogs/compute/announcing-improved-vpc-networking-for-aws-lambda-functions/), security
groups associated with Lambda Functions can take up to 45 minutes to successfully delete. Terraform AWS Provider
version 2.31.0 and later automatically handles this increased timeout, however prior versions require setting the
customizable deletion timeout to 45 minutes ( delete = "45m" ). AWS and HashiCorp are working together to reduce
the amount of time required for resource deletion and updates can be tracked in this GitHub issue
(https://github.com/terraform-providers/terraform-provider-aws/issues/10329).
Example Usage
Basic usage
resource "aws_security_group" "allow_tls" {
name = "allow_tls"
description = "Allow TLS inbound traffic"
vpc_id = "${aws_vpc.main.id}"
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks =
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
prefix_list_ids = ["pl-12c4e678"]
}
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks =
}
tags = {
Name = "allow_all"
}
}
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The name of the security group. If omitted, Terraform will assign a random,
unique name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts
with name .
description - (Optional, Forces new resource) The security group description. Defaults to "Managed by Terraform".
Cannot be "". NOTE: This field maps to the AWS GroupDescription attribute, for which there is no Update API. If
you'd like to classify your security groups in a way that can be updated, use tags .
ingress - (Optional) Can be specified multiple times for each ingress rule. Each ingress block supports fields
documented below. This argument is processed in attribute-as-blocks mode (/docs/configuration/attr-as-blocks.html).
egress - (Optional, VPC only) Can be specified multiple times for each egress rule. Each egress block supports fields
documented below. This argument is processed in attribute-as-blocks mode (/docs/configuration/attr-as-blocks.html).
revoke_rules_on_delete - (Optional) Instruct Terraform to revoke all of the Security Group's attached ingress and
egress rules before deleting the rule itself. This is normally not needed, however certain AWS services such as Elastic
Map Reduce may automatically add required rules to security groups used with the service, and those rules may
contain a cyclic dependency that prevents the security groups from being destroyed without removing the dependency
first. Default false
from_port - (Required) The start port (or ICMP type number if protocol is "icmp")
protocol - (Required) The protocol. If you select a protocol of "-1" (semantically equivalent to "all" , which is not a
valid value here), you must specify a "from_port" and "to_port" equal to 0. If not icmp, tcp, udp, or "-1" use the protocol
number (https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml)
security_groups - (Optional) List of security group Group Names if using EC2-Classic, or Group IDs if using a VPC.
self - (Optional) If true, the security group itself will be added as a source to this ingress rule.
to_port - (Required) The end range port (or ICMP code if protocol is "icmp").
prefix_list_ids - (Optional) List of prefix list IDs (for allowing access to VPC endpoints)
from_port - (Required) The start port (or ICMP type number if protocol is "icmp")
protocol - (Required) The protocol. If you select a protocol of "-1" (semantically equivalent to "all" , which is not a
valid value here), you must specify a "from_port" and "to_port" equal to 0. If not icmp, tcp, udp, or "-1" use the protocol
number (https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml)
security_groups - (Optional) List of security group Group Names if using EC2-Classic, or Group IDs if using a VPC.
self - (Optional) If true, the security group itself will be added as a source to this egress rule.
to_port - (Required) The end range port (or ICMP code if protocol is "icmp").
NOTE on Egress rules: By default, AWS creates an ALLOW ALL egress rule when creating a new Security Group inside of
a VPC. When creating a new Security Group inside a VPC, Terraform will remove this default rule, and require you
specifically re-create it if you desire that rule. We feel this leads to fewer surprises in terms of controlling your egress
rules. If you desire this rule to be in place, you can use this egress block:
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
prefix_list_ids = ["${aws_vpc_endpoint.my_endpoint.prefix_list_id}"]
}
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_security_group provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration
options:
create - (Default 10m ) How long to wait for a security group to be created.
delete - (Default 10m ) How long to retry on DependencyViolation errors during security group deletion from
lingering ENIs left by certain AWS services such as Elastic Load Balancing. NOTE: Lambda ENIs can take up to 45
minutes to delete, which is not affected by changing this customizable timeout (in version 2.31.0 and later of the
Terraform AWS Provider) unless it is increased above 45 minutes.
Import
Security Groups can be imported using the security group id , e.g.
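For example, assuming a resource named allow_tls and an illustrative group id:
$ terraform import aws_security_group.allow_tls sg-903004f8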
NOTE on Security Groups and Security Group Rules: Terraform currently provides both a standalone Security Group
Rule resource (a single ingress or egress rule), and a Security Group resource
(/docs/providers/aws/r/security_group.html) with ingress and egress rules defined in-line. At this time you cannot
use a Security Group with in-line rules in conjunction with any Security Group Rule resources. Doing so will cause a
conflict of rule settings and will overwrite rules.
NOTE: Setting protocol = "all" or protocol = -1 with from_port and to_port will result in the EC2 API
creating a security group rule with all ports open. This API behavior cannot be controlled by Terraform and may
generate warnings in the future.
NOTE: Referencing Security Groups across VPC peering has certain restrictions. More information is available in the VPC
Peering User Guide (https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html).
Example Usage
Basic usage
cidr_blocks =
prefix_list_ids = ["pl-12c4e678"]
security_group_id = "sg-123456"
}
Argument Reference
The following arguments are supported:
type - (Required) The type of rule being created. Valid options are ingress (inbound) or egress (outbound).
prefix_list_ids - (Optional) List of prefix list IDs (for allowing access to VPC endpoints).
from_port - (Required) The start port (or ICMP type number if protocol is "icmp").
protocol - (Required) The protocol. If not icmp, tcp, udp, or all use the protocol number
(https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml)
source_security_group_id - (Optional) The security group id to allow access to/from, depending on the type .
Cannot be specified with cidr_blocks and self .
self - (Optional) If true, the security group itself will be added as a source to this ingress rule. Cannot be specified
with source_security_group_id .
to_port - (Required) The end port (or ICMP code if protocol is "icmp").
Attributes Reference
In addition to all arguments above, the following attributes are exported:
from_port - The start port (or ICMP type number if protocol is "icmp")
Not all rule permissions (e.g., not all of a rule's CIDR blocks) need to be imported for Terraform to manage rule permissions.
However, importing some of a rule's permissions but not others, and then making changes to the rule will result in the
creation of an additional rule to capture the updated permissions. Rule permissions that were not imported are left intact in
the original rule.
Examples
Import an ingress rule in security group sg-6e616f6d69 for TCP port 8000 with an IPv4 destination CIDR of 10.0.3.0/24 :
Import a rule with various IPv4 and IPv6 source CIDR blocks:
Import a rule, applicable to all ports, with a protocol other than TCP/UDP/ICMP/ALL, e.g., Multicast Transport Protocol (MTP),
using the IANA protocol number, e.g., 92.
Import a rule applicable to all protocols and ports with a security group source:
Import a rule that has itself and an IPv6 CIDR block as sources:
NOTE: Destroying this resource will disable Security Hub for this AWS account.
NOTE: This AWS service is in Preview and may change before General Availability release. Backwards compatibility is not
guaranteed between Terraform AWS Provider releases.
Example Usage
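Since the resource takes no arguments, enabling Security Hub for the current account is a single empty block:
resource "aws_securityhub_account" "example" {}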
Argument Reference
The resource does not support any arguments.
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
Import
An existing Security Hub enabled account can be imported using the AWS account ID, e.g.
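For example, using an illustrative account ID:
$ terraform import aws_securityhub_account.example 123456789012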
NOTE: This AWS service is in Preview and may change before General Availability release. Backwards compatibility is not
guaranteed between Terraform AWS Provider releases.
Example Usage
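A sketch that subscribes to one of the product ARNs listed below, assuming Security Hub is enabled via aws_securityhub_account (the var.region reference is illustrative):
resource "aws_securityhub_account" "example" {}

resource "aws_securityhub_product_subscription" "example" {
  depends_on  = ["aws_securityhub_account.example"]
  product_arn = "arn:aws:securityhub:${var.region}:733251395267:product/alertlogic/althreatmanagement"
}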
Argument Reference
The following arguments are supported:
product_arn - (Required) The ARN of the product that generates findings that you want to import into Security Hub -
see below.
arn:aws:securityhub:${var.region}::product/aws/guardduty
arn:aws:securityhub:${var.region}::product/aws/inspector
arn:aws:securityhub:${var.region}::product/aws/macie
arn:aws:securityhub:${var.region}:733251395267:product/alertlogic/althreatmanagement
arn:aws:securityhub:${var.region}:679703615338:product/armordefense/armoranywhere
arn:aws:securityhub:${var.region}:151784055945:product/barracuda/cloudsecurityguardian
arn:aws:securityhub:${var.region}:758245563457:product/checkpoint/cloudguard-iaas
arn:aws:securityhub:${var.region}:634729597623:product/checkpoint/dome9-arc
arn:aws:securityhub:${var.region}:517716713836:product/crowdstrike/crowdstrike-falcon
arn:aws:securityhub:${var.region}:749430749651:product/cyberark/cyberark-pta
arn:aws:securityhub:${var.region}:250871914685:product/f5networks/f5-advanced-waf
arn:aws:securityhub:${var.region}:123073262904:product/fortinet/fortigate
arn:aws:securityhub:${var.region}:324264561773:product/guardicore/aws-infection-monkey
arn:aws:securityhub:${var.region}:324264561773:product/guardicore/guardicore
arn:aws:securityhub:${var.region}:949680696695:product/ibm/qradar-siem
arn:aws:securityhub:${var.region}:955745153808:product/imperva/imperva-attack-analytics
arn:aws:securityhub:${var.region}:297986523463:product/mcafee-skyhigh/mcafee-mvision-cloud-aws
arn:aws:securityhub:${var.region}:188619942792:product/paloaltonetworks/redlock
arn:aws:securityhub:${var.region}:122442690527:product/paloaltonetworks/vm-series
arn:aws:securityhub:${var.region}:805950163170:product/qualys/qualys-pc
arn:aws:securityhub:${var.region}:805950163170:product/qualys/qualys-vm
arn:aws:securityhub:${var.region}:336818582268:product/rapid7/insightvm
arn:aws:securityhub:${var.region}:062897671886:product/sophos/sophos-server-protection
arn:aws:securityhub:${var.region}:112543817624:product/splunk/splunk-enterprise
arn:aws:securityhub:${var.region}:112543817624:product/splunk/splunk-phantom
arn:aws:securityhub:${var.region}:956882708938:product/sumologicinc/sumologic-mda
arn:aws:securityhub:${var.region}:754237914691:product/symantec-corp/symantec-cwp
arn:aws:securityhub:${var.region}:422820575223:product/tenable/tenable-io
arn:aws:securityhub:${var.region}:679593333241:product/trend-micro/deep-security
arn:aws:securityhub:${var.region}:453761072151:product/turbot/turbot
arn:aws:securityhub:${var.region}:496947949261:product/twistlock/twistlock-enterprise
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
arn - The ARN of a resource that represents your subscription to the product that generates the findings that you
want to import into Security Hub.
Import
Security Hub product subscriptions can be imported in the form product_arn,arn , e.g.
NOTE: This AWS service is in Preview and may change before General Availability release. Backwards compatibility is not
guaranteed between Terraform AWS Provider releases.
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
The following attributes are exported in addition to the arguments listed above:
Import
Security Hub standards subscriptions can be imported using the standards subscription ARN, e.g.
Example Usage
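A minimal sketch (all values are illustrative; the name and description arguments are assumed alongside the documented provider_name):
resource "aws_servicecatalog_portfolio" "example" {
  name          = "My App Portfolio"
  description   = "List of my organizations apps"
  provider_name = "Brett"
}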
Argument Reference
The following arguments are supported:
provider_name - (Required) Name of the person or organization who owns the portfolio.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Service Catalog Portfolios can be imported using the service catalog portfolio id , e.g.
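For example, using an illustrative portfolio id:
$ terraform import aws_servicecatalog_portfolio.example port-12344321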
Example Usage
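A minimal sketch (values are illustrative; a required name argument is assumed in addition to the documented description):
resource "aws_service_discovery_http_namespace" "example" {
  name        = "development"
  description = "example"
}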
Argument Reference
The following arguments are supported:
description - (Optional) The description that you specify for the namespace when you create it.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The ID of a namespace.
arn - The ARN that Amazon Route 53 assigns to the namespace when you create it.
Import
Service Discovery HTTP Namespace can be imported using the namespace ID, e.g.
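For example, using an illustrative namespace ID:
$ terraform import aws_service_discovery_http_namespace.example ns-1234567890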
Example Usage
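A minimal sketch, assuming an aws_vpc resource named example and illustrative values (a name argument is assumed in addition to the documented vpc and description):
resource "aws_service_discovery_private_dns_namespace" "example" {
  name        = "example.local"
  description = "example"
  vpc         = "${aws_vpc.example.id}"
}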
Argument Reference
The following arguments are supported:
vpc - (Required) The ID of VPC that you want to associate the namespace with.
description - (Optional) The description that you specify for the namespace when you create it.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The ID of a namespace.
arn - The ARN that Amazon Route 53 assigns to the namespace when you create it.
hosted_zone - The ID for the hosted zone that Amazon Route 53 creates when you create a namespace.
Resource: aws_service_discovery_public_dns_namespace
Provides a Service Discovery Public DNS Namespace resource.
Example Usage
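A minimal sketch (values are illustrative):
resource "aws_service_discovery_public_dns_namespace" "example" {
  name        = "example.terraform.com"
  description = "example"
}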
Argument Reference
The following arguments are supported:
description - (Optional) The description that you specify for the namespace when you create it.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The ID of a namespace.
arn - The ARN that Amazon Route 53 assigns to the namespace when you create it.
hosted_zone - The ID for the hosted zone that Amazon Route 53 creates when you create a namespace.
Import
Service Discovery Public DNS Namespace can be imported using the namespace ID, e.g.
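For example, using an illustrative namespace ID:
$ terraform import aws_service_discovery_public_dns_namespace.example ns-1234567890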
Example Usage
dns_config {
namespace_id = "${aws_service_discovery_private_dns_namespace.example.id}"
dns_records {
ttl = 10
type = "A"
}
routing_policy = "MULTIVALUE"
}
health_check_custom_config {
failure_threshold = 1
}
}
resource "aws_service_discovery_public_dns_namespace" "example" {
name = "example.terraform.com"
description = "example"
}
dns_config {
namespace_id = "${aws_service_discovery_public_dns_namespace.example.id}"
dns_records {
ttl = 10
type = "A"
}
}
health_check_config {
failure_threshold = 10
resource_path = "path"
type = "HTTP"
}
}
Argument Reference
The following arguments are supported:
dns_config - (Optional) A complex type that contains information about the resource record sets that you want
Amazon Route 53 to create when you register an instance.
health_check_config - (Optional) A complex type that contains settings for an optional health check. Only for Public
DNS namespaces.
health_check_custom_config - (Optional, ForceNew) A complex type that contains settings for ECS managed health
checks.
namespace_id - (Optional) The ID of the namespace that you want to use to create the service.
dns_config
The following arguments are supported:
namespace_id - (Required, ForceNew) The ID of the namespace to use for DNS configuration.
dns_records - (Required) An array that contains one DnsRecord object for each resource record set.
routing_policy - (Optional) The routing policy that you want to apply to all records that Route 53 creates when you
register an instance and specify the service. Valid Values: MULTIVALUE, WEIGHTED
dns_records
ttl - (Required) The amount of time, in seconds, that you want DNS resolvers to cache the settings for this resource
record set.
type - (Required, ForceNew) The type of the resource, which indicates the value that Amazon Route 53 returns in
response to DNS queries. Valid Values: A, AAAA, SRV, CNAME
health_check_config
The following arguments are supported:
failure_threshold - (Optional) The number of consecutive health checks. Maximum value of 10.
resource_path - (Optional) The path that you want Route 53 to request when performing health checks. Route 53
automatically adds the DNS name for the service. If you don't specify a value, the default value is /.
type - (Optional, ForceNew) The type of health check that you want to create, which indicates how Route 53
determines whether an endpoint is healthy. Valid Values: HTTP, HTTPS, TCP
health_check_custom_config
The following arguments are supported:
failure_threshold - (Optional, ForceNew) The number of 30-second intervals that you want service discovery to
wait before it changes the health status of a service instance. Maximum value of 10.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Service Discovery Service can be imported using the service ID, e.g.
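For example, using an illustrative service ID:
$ terraform import aws_service_discovery_service.example srv-1234567890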
Example Usage
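A minimal sketch using the quota and service codes mentioned below (the value is illustrative):
resource "aws_servicequotas_service_quota" "example" {
  quota_code   = "L-F678F1CE"
  service_code = "vpc"
  value        = 75
}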
Argument Reference
The following arguments are supported:
quota_code - (Required) Code of the service quota to track. For example: L-F678F1CE . Available values can be found
with the AWS CLI service-quotas list-service-quotas command
(https://docs.aws.amazon.com/cli/latest/reference/service-quotas/list-service-quotas.html).
service_code - (Required) Code of the service to track. For example: vpc . Available values can be found with the
AWS CLI service-quotas list-services command (https://docs.aws.amazon.com/cli/latest/reference/service-quotas/list-
services.html).
value - (Required) Float specifying the desired value for the service quota. If the desired value is higher than the
current value, a quota increase request is submitted. When a known request is submitted and pending, the value
reflects the desired value of the pending request.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Import
NOTE This resource does not require explicit import and will assume management of an existing service quota on
Terraform resource creation.
aws_servicequotas_service_quota can be imported by using the service code and quota code, separated by a front
slash ( / ), e.g.
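For example, for the VPC service and the L-F678F1CE quota referenced above:
$ terraform import aws_servicequotas_service_quota.example vpc/L-F678F1CE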
Example Usage
Argument Reference
The following arguments are supported:
Example Usage
Argument Reference
The following arguments are supported:
Import
SES Configuration Sets can be imported using their name , e.g.
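For example, assuming a configuration set named some-configuration-set (illustrative):
$ terraform import aws_ses_configuration_set.example some-configuration-set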
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
dkim_tokens - DKIM tokens generated by SES. These tokens should be used to create CNAME records used to verify
SES Easy DKIM. See below for an example of how this might be achieved when the domain is hosted in Route 53 and
managed by Terraform. Find out more about verifying domains in Amazon SES in the AWS SES docs
(http://docs.aws.amazon.com/ses/latest/DeveloperGuide/easy-dkim-dns-records.html).
Example Usage
Import
DKIM tokens can be imported using the domain attribute, e.g.
$ terraform import aws_ses_domain_dkim.example example.com
Resource: aws_ses_domain_identity
Provides an SES domain identity resource
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
verification_token - A code which when added to the domain as a TXT record will signal to SES that the owner of
the domain has authorised SES to act on their behalf. The domain identity will be in state "verification pending" until
this is done. See below for an example of how this might be achieved when the domain is hosted in Route 53 and
managed by Terraform. Find out more about verifying domains in Amazon SES in the AWS SES docs
(http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-domains.html).
Example Usage
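A minimal sketch, assuming the identity's argument is named domain and using an illustrative domain:
resource "aws_ses_domain_identity" "example" {
  domain = "example.com"
}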
Import
SES domain identities can be imported using the domain name.
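For example:
$ terraform import aws_ses_domain_identity.example example.com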
WARNING: This resource implements a part of the verification workflow. It does not represent a real-world entity in
AWS, therefore changing or deleting this resource on its own has no immediate effect.
Example Usage
depends_on = ["aws_route53_record.example_amazonses_verification_record"]
}
Argument Reference
The following arguments are supported:
domain - (Required) The domain name of the SES domain identity to verify.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
create - (Default 45m ) How long to wait for a domain identity to be verified.
Resource: aws_ses_domain_mail_from
Provides an SES domain MAIL FROM resource.
NOTE: For the MAIL FROM domain to be fully usable, this resource should be paired with the aws_ses_domain_identity
resource (/docs/providers/aws/r/ses_domain_identity.html). To validate the MAIL FROM domain, a DNS MX record is
required. To pass SPF checks, a DNS TXT record may also be required. See the Amazon SES MAIL FROM documentation
(https://docs.aws.amazon.com/ses/latest/DeveloperGuide/mail-from-set.html) for more information.
Example Usage
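A minimal sketch paired with the aws_ses_domain_identity resource mentioned above (the domain argument and the bounce. subdomain are illustrative assumptions):
resource "aws_ses_domain_mail_from" "example" {
  domain           = "${aws_ses_domain_identity.example.domain}"
  mail_from_domain = "bounce.${aws_ses_domain_identity.example.domain}"
}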
Argument Reference
The following arguments are required:
mail_from_domain - (Required) Subdomain (of above domain) which is to be used as MAIL FROM address (Required
for DMARC validation)
The following arguments are optional:
behavior_on_mx_failure - (Optional) The action that you want Amazon SES to take if it cannot successfully read the
required MX record when you send an email. Defaults to UseDefaultValue . See the SES API documentation
(https://docs.aws.amazon.com/ses/latest/APIReference/API_SetIdentityMailFromDomain.html) for more information.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
MAIL FROM domain can be imported using the domain attribute, e.g.
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Import
SES email identities can be imported using the email address.
Example Usage
CloudWatch Destination
resource "aws_ses_event_destination" "cloudwatch" { # wrapper follows the pattern of the SNS destination example below
  name                   = "event-destination-cloudwatch"
  configuration_set_name = "${aws_ses_configuration_set.example.name}"
  enabled                = true
  matching_types         = ["bounce", "send"]
  cloudwatch_destination {
    default_value  = "default"
    dimension_name = "dimension"
    value_source   = "emailHeader"
  }
}
Kinesis Destination
resource "aws_ses_event_destination" "kinesis" { # wrapper follows the pattern of the SNS destination example below
  name                   = "event-destination-kinesis"
  configuration_set_name = "${aws_ses_configuration_set.example.name}"
  matching_types         = ["bounce", "send"]
  kinesis_destination {
    stream_arn = "${aws_kinesis_firehose_delivery_stream.example.arn}"
    role_arn   = "${aws_iam_role.example.arn}"
  }
}
SNS Destination
resource "aws_ses_event_destination" "sns" {
name = "event-destination-sns"
configuration_set_name = "${aws_ses_configuration_set.example.name}"
enabled = true
matching_types = ["bounce", "send"]
sns_destination {
topic_arn = "${aws_sns_topic.example.arn}"
}
}
Argument Reference
The following arguments are supported:
matching_types - (Required) A list of matching types. May be any of "send" , "reject" , "bounce" , "complaint" ,
"delivery" , "open" , "click" , or "renderingFailure" .
value_source - (Required) The source for the value. It can be either "messageTag" or "emailHeader"
role_arn - (Required) The ARN of the role that has permissions to access the Kinesis Stream
Example Usage
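A minimal sketch wiring a topic to a verified identity; the referenced topic and identity resources are illustrative:
resource "aws_ses_identity_notification_topic" "test" {
  topic_arn         = "${aws_sns_topic.example.arn}"
  notification_type = "Bounce"
  identity          = "${aws_ses_domain_identity.example.domain}"
}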
Argument Reference
The following arguments are supported:
topic_arn - (Optional) The Amazon Resource Name (ARN) of the Amazon SNS topic. Can be set to "" (an empty
string) to disable publishing.
notification_type - (Required) The type of notifications that will be published to the specified Amazon SNS topic. Valid Values: Bounce, Complaint or Delivery.
identity - (Required) The identity for which the Amazon SNS topic will be set. You can specify an identity by using its name or by using its Amazon Resource Name (ARN).
include_original_headers - (Optional) Whether SES should include original email headers in SNS notifications of this type. false by default.
Import
Identity Notification Topics can be imported using the ID of the record. The ID is made up as IDENTITY|TYPE where IDENTITY is the SES Identity and TYPE is the Notification Type.
e.g.
example.com|Bounce
In this example, example.com is the SES Identity and Bounce is the Notification Type.
Example Usage
principals {
identifiers = ["*"]
type = "AWS"
}
}
}
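The principals fragment above is part of a larger policy document; a fuller sketch of the usual pattern (resource names are illustrative, and the name argument is assumed to exist on this resource) might be:
resource "aws_ses_domain_identity" "example" {
  domain = "example.com"
}

data "aws_iam_policy_document" "example" {
  statement {
    actions   = ["SES:SendEmail", "SES:SendRawEmail"]
    resources = ["${aws_ses_domain_identity.example.arn}"]

    principals {
      identifiers = ["*"]
      type        = "AWS"
    }
  }
}

resource "aws_ses_identity_policy" "example" {
  identity = "${aws_ses_domain_identity.example.arn}"
  name     = "example"
  policy   = "${data.aws_iam_policy_document.example.json}"
}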
Argument Reference
The following arguments are supported:
identity - (Required) Name or Amazon Resource Name (ARN) of the SES Identity.
policy - (Required) JSON string of the policy. For more information about building AWS IAM policy documents with
Terraform, see the AWS IAM Policy Document Guide (/docs/providers/aws/guides/iam-policy-documents.html).
Import
SES Identity Policies can be imported using the identity and policy name, separated by a pipe character ( | ), e.g.
Example Usage
Argument Reference
The following arguments are supported:
Example Usage
resource "aws_ses_receipt_rule" "store" {
  # Resource header values are illustrative; the action blocks below are from the original example.
  name          = "store"
  rule_set_name = "default-rule-set"
  recipients    = ["karen@example.com"]
  enabled       = true
  scan_enabled  = true

  add_header_action {
    header_name  = "Custom-Header"
    header_value = "Added by SES"
    position     = 1
  }

  s3_action {
    bucket_name = "emails"
    position    = 2
  }
}
Argument Reference
The following arguments are supported:
after - (Optional) The name of the rule to place this rule after
scan_enabled - (Optional) If true, incoming emails will be scanned for spam and viruses
Import
SES receipt rules can be imported using the ruleset name and rule name separated by : .
Example Usage
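A minimal sketch; the rule set name is illustrative:
resource "aws_ses_receipt_rule_set" "main" {
  rule_set_name = "primary-rules"
}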
Argument Reference
The following arguments are supported:
Import
SES receipt rule sets can be imported using the rule set name.
Example Usage
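A minimal sketch, assuming the resource also accepts an optional subject argument (only name, html and text are listed below):
resource "aws_ses_template" "MyTemplate" {
  name    = "MyTemplate"
  subject = "Greetings, {{name}}!"
  html    = "<h1>Hello {{name}}</h1>"
  text    = "Hello {{name}}"
}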
Argument Reference
The following arguments are supported:
name - (Required) The name of the template. Cannot exceed 64 characters. You will refer to this name when you send
email.
html - (Optional) The HTML body of the email. Must be less than 500KB in size, including both the text and HTML
parts.
text - (Optional) The email body that will be visible to recipients whose email clients do not display HTML. Must be
less than 500KB in size, including both the text and HTML parts.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
SES templates can be imported using the template name, e.g.
Example Usage
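A minimal sketch; the activity name is illustrative:
resource "aws_sfn_activity" "sfn_activity" {
  name = "my-activity"
}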
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The Amazon Resource Name (ARN) that identi es the created activity.
Import
Activities can be imported using the arn , e.g.
Example Usage
resource "aws_sfn_state_machine" "sfn_state_machine" {
  # Resource wrapper assumed; the definition below is from the original example.
  name     = "my-state-machine"
  role_arn = "${aws_iam_role.iam_for_sfn.arn}"

  definition = <<EOF
{
  "Comment": "A Hello World example of the Amazon States Language using an AWS Lambda Function",
  "StartAt": "HelloWorld",
  "States": {
    "HelloWorld": {
      "Type": "Task",
      "Resource": "${aws_lambda_function.lambda.arn}",
      "End": true
    }
  }
}
EOF
}
Argument Reference
The following arguments are supported:
definition - (Required) The Amazon States Language definition of the state machine.
role_arn - (Required) The Amazon Resource Name (ARN) of the IAM role to use for this state machine.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
status - The current status of the state machine. Either "ACTIVE" or "DELETING".
Import
State Machines can be imported using the arn , e.g.
Example Usage
Create protection
Argument Reference
The following arguments are supported:
name - (Required) A friendly name for the Protection you are creating.
resource_arn - (Required) The ARN (Amazon Resource Name) of the resource to be protected.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The unique identifier (ID) for the Protection object that is created.
Import
Shield protection resources can be imported by specifying their ID e.g.
Example Usage
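A minimal sketch:
resource "aws_simpledb_domain" "users" {
  name = "users"
}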
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
SimpleDB Domains can be imported using the name , e.g.
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
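A minimal sketch, assuming a GCM (Firebase Cloud Messaging) application whose platform_credential argument holds the API key; that argument is not shown in the list below:
resource "aws_sns_platform_application" "gcm_application" {
  name                = "gcm_application"
  platform            = "GCM"
  platform_credential = "<GCM API KEY>"
}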
Argument Reference
The following arguments are supported:
name - (Required) The friendly name for the SNS platform application
platform - (Required) The platform that the app is registered with. See Platform
(http://docs.aws.amazon.com/sns/latest/dg/mobile-push-send-register.html) for supported platforms.
event_delivery_failure_topic_arn - (Optional) SNS Topic triggered when a delivery to any of the platform
endpoints associated with your platform application encounters a permanent failure.
event_endpoint_created_topic_arn - (Optional) SNS Topic triggered when a new platform endpoint is added to
your platform application.
event_endpoint_deleted_topic_arn - (Optional) SNS Topic triggered when an existing platform endpoint is deleted
from your platform application.
event_endpoint_updated_topic_arn - (Optional) SNS Topic triggered when an existing platform endpoint is
changed from your platform application.
failure_feedback_role_arn - (Optional) The IAM role permitted to receive failure feedback for this application.
success_feedback_role_arn - (Optional) The IAM role permitted to receive success feedback for this application.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
SNS platform applications can be imported using the ARN, e.g.
Example Usage
Argument Reference
The following arguments are supported:
monthly_spend_limit - (Optional) The maximum amount in USD that you are willing to spend each month to send
SMS messages.
delivery_status_iam_role_arn - (Optional) The ARN of the IAM role that allows Amazon SNS to write logs about
SMS deliveries in CloudWatch Logs.
default_sender_id - (Optional) A string, such as your business brand, that is displayed as the sender on the
receiving device.
default_sms_type - (Optional) The type of SMS message that you will send by default. Possible values are:
Promotional, Transactional
usage_report_s3_bucket - (Optional) The name of the Amazon S3 bucket to receive daily SMS usage reports from
Amazon SNS.
Resource: aws_sns_topic
Provides an SNS topic resource
Example Usage
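A minimal sketch:
resource "aws_sns_topic" "user_updates" {
  name = "user-updates-topic"
}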
Argument Reference
The following arguments are supported:
name - (Optional) The friendly name for the SNS topic. By default generated by Terraform.
name_prefix - (Optional) The friendly name for the SNS topic. Conflicts with name .
policy - (Optional) The fully-formed AWS policy as JSON. For more information about building AWS IAM policy
documents with Terraform, see the AWS IAM Policy Document Guide (/docs/providers/aws/guides/iam-policy-
documents.html).
application_success_feedback_role_arn - (Optional) The IAM role permitted to receive success feedback for this
topic
http_success_feedback_role_arn - (Optional) The IAM role permitted to receive success feedback for this topic
kms_master_key_id - (Optional) The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom
CMK. For more information, see Key Terms (https://docs.aws.amazon.com/sns/latest/dg/sns-server-side-
encryption.html#sse-key-terms)
lambda_success_feedback_role_arn - (Optional) The IAM role permitted to receive success feedback for this topic
sqs_success_feedback_role_arn - (Optional) The IAM role permitted to receive success feedback for this topic
Attributes Reference
In addition to all arguments above, the following attributes are exported:
arn - The ARN of the SNS topic, as a more obvious property (clone of id)
Import
SNS Topics can be imported using the topic arn , e.g.
NOTE: If a Principal is specified as just an AWS account ID rather than an ARN, AWS silently converts it to the ARN for the root user, causing future terraform plans to differ. To avoid this problem, just specify the full ARN, e.g. arn:aws:iam::123456789012:root
Example Usage
resource "aws_sns_topic" "test" {
name = "my-topic-with-policy"
}
resource "aws_sns_topic_policy" "default" { # wrapper assumed; attaches the generated policy to the topic above
  arn    = "${aws_sns_topic.test.arn}"
  policy = "${data.aws_iam_policy_document.sns-topic-policy.json}"
}
data "aws_iam_policy_document" "sns-topic-policy" {
  statement {
actions = [
"SNS:Subscribe",
"SNS:SetTopicAttributes",
"SNS:RemovePermission",
"SNS:Receive",
"SNS:Publish",
"SNS:ListSubscriptionsByTopic",
"SNS:GetTopicAttributes",
"SNS:DeleteTopic",
"SNS:AddPermission",
]
condition {
test = "StringEquals"
variable = "AWS:SourceOwner"
values = [
"${var.account-id}",
]
}
effect = "Allow"
principals {
type = "AWS"
identifiers = ["*"]
}
resources = [
"${aws_sns_topic.test.arn}",
]
sid = "__default_statement_ID"
}
}
Argument Reference
The following arguments are supported:
Import
SNS Topic Policy can be imported using the topic ARN, e.g.
NOTE: If the SNS topic and SQS queue are in different AWS regions, it is important for the "aws_sns_topic_subscription" to use an AWS provider that is in the same region as the SNS topic. If the "aws_sns_topic_subscription" is using a provider with a different region than the SNS topic, terraform will fail to create the subscription.
NOTE: Setup of cross-account subscriptions from SNS topics to SQS queues requires Terraform to have access to BOTH accounts.
NOTE: If the SNS topic and SQS queue are in different AWS accounts but the same region, it is important for the "aws_sns_topic_subscription" to use the AWS provider of the account with the SQS queue. If "aws_sns_topic_subscription" is using a provider with a different account than the SQS queue, terraform creates the subscription but does not keep state and tries to re-create the subscription at every apply.
NOTE: If the SNS topic and SQS queue are in different AWS accounts and different AWS regions, it is important to recognize that the subscription needs to be initiated from the account with the SQS queue but in the region of the SNS topic.
Example Usage
You can directly supply a topic and ARN by hand in the topic_arn property along with the queue ARN:
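A sketch of that direct form; the ARNs are illustrative:
resource "aws_sns_topic_subscription" "user_updates_sqs_target" {
  topic_arn = "arn:aws:sns:us-west-2:432981146916:user-updates-topic"
  protocol  = "sqs"
  endpoint  = "arn:aws:sqs:us-west-2:432981146916:terraform-queue-too"
}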
Alternatively you can use the ARN properties of a managed SNS topic and SQS queue:
resource "aws_sns_topic" "user_updates" {
name = "user-updates-topic"
}
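Continuing the example above, a sketch of the queue and the subscription that reference each other's attributes; the queue name is illustrative:
resource "aws_sqs_queue" "user_updates_queue" {
  name = "user-updates-queue"
}

resource "aws_sns_topic_subscription" "user_updates_sqs_target" {
  topic_arn = "${aws_sns_topic.user_updates.arn}"
  protocol  = "sqs"
  endpoint  = "${aws_sqs_queue.user_updates_queue.arn}"
}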
You can subscribe SNS topics to SQS queues in different Amazon accounts and regions:
/*
#
# Variables
#
*/
variable "sns" {
default = {
account-
- id = "111111111111"
role-
- name = "service/service-hashicorp-terraform"
name = "example-sns-topic"
display_name = "example"
region = "us-west-1"
}
}
variable "sqs" {
default = {
account-
- id = "222222222222"
role-
- name = "service/service-hashicorp-terraform"
name = "example-sqs-queue"
region = "us-east-1"
}
}
data "aws_iam_policy_document" "sns-topic-policy" { # data source name assumed
  statement {
actions = [
"SNS:Subscribe",
"SNS:SetTopicAttributes",
"SNS:RemovePermission",
"SNS:Receive",
"SNS:Publish",
"SNS:ListSubscriptionsByTopic",
"SNS:GetTopicAttributes",
"SNS:DeleteTopic",
"SNS:AddPermission",
]
condition {
test = "StringEquals"
variable = "AWS:SourceOwner"
values = [
"${var.sns["account-
- id"]}",
]
}
effect = "Allow"
principals {
type = "AWS"
identifiers = ["*"]
}
resources = [
"arn:aws:sns:${var.sns["region"]}:${var.sns["account-
- id"]}:${var.sns["name"]}",
]
sid = "__default_statement_ID"
}
statement {
actions = [
"SNS:Subscribe",
"SNS:Receive",
]
condition {
test = "StringLike"
variable = "SNS:Endpoint"
values = [
"arn:aws:sqs:${var.sqs["region"]}:${var.sqs["account-
- id"]}:${var.sqs["name"]}",
]
}
effect = "Allow"
principals {
type = "AWS"
identifiers = ["*"]
}
resources = [
"arn:aws:sns:${var.sns["region"]}:${var.sns["account-
- id"]}:${var.sns["name"]}",
]
sid = "__console_sub_0"
}
}
data "aws_iam_policy_document" "sqs-queue-policy" { # data source name assumed
  statement {
sid = "example-sns-topic"
effect = "Allow"
principals {
type = "AWS"
identifiers = ["*"]
}
actions = [
"SQS:SendMessage",
]
resources = [
"arn:aws:sqs:${var.sqs["region"]}:${var.sqs["account-
- id"]}:${var.sqs["name"]}",
]
condition {
test = "ArnEquals"
variable = "aws:SourceArn"
values = [
"arn:aws:sns:${var.sns["region"]}:${var.sns["account-
- id"]}:${var.sns["name"]}",
]
}
}
}
provider "aws" {
alias = "sns"
region = "${var.sns["region"]}"
assume_role {
role_arn = "arn:aws:iam::${var.sns["account-
- id"]}:role/${var.sns["role-
- name"]}"
session_name = "sns-${var.sns["region"]}"
}
}
provider "aws" {
alias = "sqs"
region = "${var.sqs["region"]}"
assume_role {
role_arn = "arn:aws:iam::${var.sqs["account-
- id"]}:role/${var.sqs["role-
- name"]}"
session_name = "sqs-${var.sqs["region"]}"
}
}
provider "aws" {
alias = "sns2sqs"
region = "${var.sns["region"]}"
assume_role {
role_arn = "arn:aws:iam::${var.sqs["account-
- id"]}:role/${var.sqs["role-
- name"]}"
session_name = "sns2sqs-${var.sns["region"]}"
}
}
Argument Reference
The following arguments are supported:
protocol - (Required) The protocol to use. The possible values for this are: sqs , sms , lambda , application . ( http or https are partially supported, see below) ( email is an option but unsupported, see below).
endpoint - (Required) The endpoint to send data to; the contents will vary with the protocol. (see below for more information)
endpoint_auto_confirms - (Optional) Boolean indicating whether the endpoint is capable of auto confirming subscription (http://docs.aws.amazon.com/sns/latest/dg/SendMessageToHttp.html#SendMessageToHttp.prepare) e.g., PagerDuty (default is false)
confirmation_timeout_in_minutes - (Optional) Integer indicating number of minutes to wait in retrying mode for fetching subscription arn before marking it as failure. Only applicable for http and https protocols (default is 1 minute).
raw_message_delivery - (Optional) Boolean indicating whether or not to enable raw message delivery (the original message is directly passed, not wrapped in JSON with the original message in the message property) (default is false).
filter_policy - (Optional) JSON String with the filter policy that will be used in the subscription to filter messages seen by the target resource. Refer to the SNS docs (https://docs.aws.amazon.com/sns/latest/dg/message-filtering.html) for more details.
delivery_policy - (Optional) JSON String with the delivery policy (retries, backoff, etc.) that will be used in the subscription - this only applies to HTTP/S subscriptions. Refer to the SNS docs (https://docs.aws.amazon.com/sns/latest/dg/DeliveryPolicies.html) for more details.
Protocols supported
Supported SNS protocols include:
lambda -- delivery of JSON-encoded message to a lambda function
application -- delivery of JSON-encoded message to an EndpointArn for a mobile app and device
http -- delivery of JSON-encoded messages via HTTP. Supported only for endpoints that auto confirm the subscription.
https -- delivery of JSON-encoded messages via HTTPS. Supported only for endpoints that auto confirm the subscription.
The email and email-json protocols are unsupported because the endpoint needs to be authorized and does not generate an ARN until the target email address has been validated. This breaks the Terraform model and, as a result, they are not currently supported.
Specifying endpoints
Endpoints have different format requirements according to the protocol that is chosen.
SQS endpoints come in the form of the SQS queue's ARN (not the URL of the queue) e.g: arn:aws:sqs:us-west-
2:432981146916:terraform-queue-too
Application endpoints are also the endpoint ARN for the mobile app and device.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
endpoint - The full endpoint to send data to (SQS ARN, HTTP(S) URL, Application ARN, SMS number, etc.)
Import
SNS Topic Subscriptions can be imported using the subscription arn , e.g.
$ terraform import aws_sns_topic_subscription.user_updates_sqs_target arn:aws:sns:us-west-2:0123456789012:my-topic:8a21d249-4329-4871-acc6-7be709c6ea7f
Resource: aws_spot_datafeed_subscription
To help you understand the charges for your Spot instances, Amazon EC2 provides a data feed that describes your Spot
instance usage and pricing. This data feed is sent to an Amazon S3 bucket that you specify when you subscribe to the data
feed.
Example Usage
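A minimal sketch, assuming an S3 bucket resource named default to receive the feed:
resource "aws_s3_bucket" "default" {
  bucket = "tf-spot-datafeed"
}

resource "aws_spot_datafeed_subscription" "default" {
  bucket = "${aws_s3_bucket.default.bucket}"
  prefix = "my_subdirectory"
}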
Argument Reference
bucket - (Required) The Amazon S3 bucket in which to store the Spot instance data feed.
prefix - (Optional) Path of folder inside bucket to place spot pricing data.
Import
A Spot Datafeed Subscription can be imported using the word spot-datafeed-subscription , e.g.
Example Usage
resource "aws_spot_fleet_request" "cheap_compute" { # resource wrapper assumed; header values are illustrative
  iam_fleet_role  = "arn:aws:iam::12345678:role/spot-fleet"
  target_capacity = 6

  launch_specification {
instance_type = "m4.10xlarge"
ami = "ami-1234"
spot_price = "2.793"
placement_tenancy = "dedicated"
iam_instance_profile_arn = "${aws_iam_instance_profile.example.arn}"
}
launch_specification {
instance_type = "m4.4xlarge"
ami = "ami-5678"
key_name = "my-key"
spot_price = "1.117"
iam_instance_profile_arn = "${aws_iam_instance_profile.example.arn}"
availability_zone = "us-west-1a"
subnet_id = "subnet-1234"
weighted_capacity = 35
root_block_device {
volume_size = "300"
volume_type = "gp2"
}
tags = {
Name = "spot-fleet-example"
}
}
}
NOTE: Terraform does not support the functionality where multiple subnet_id or availability_zone parameters can be specified in the same launch configuration block. If you want to specify multiple values, then separate launch configuration blocks should be used:
resource "aws_spot_fleet_request" "foo" {
iam_fleet_role = "arn:aws:iam::12345678:role/spot-fleet"
spot_price = "0.005"
target_capacity = 2
valid_until = "2019-11-04T20:44:20Z"
launch_specification {
instance_type = "m1.small"
ami = "ami-d06a90b0"
key_name = "my-key"
availability_zone = "us-west-2a"
}
launch_specification {
instance_type = "m5.large"
ami = "ami-d06a90b0"
key_name = "my-key"
availability_zone = "us-west-2a"
}
}
Argument Reference
Most of these arguments directly correspond to the official API (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_SpotFleetRequestConfigData.html).
iam_fleet_role - (Required) Grants the Spot fleet permission to terminate Spot instances on your behalf when you cancel its Spot fleet request using CancelSpotFleetRequests or when the Spot fleet request expires, if you set terminateInstancesWithExpiration.
replace_unhealthy_instances - (Optional) Indicates whether Spot fleet should replace unhealthy instances. Default false .
launch_specification - Used to define the launch configuration of the spot-fleet request. Can be specified multiple times to define different bids across different markets and instance types.
Note: This takes in similar but not identical inputs as aws_instance (/docs/providers/aws/r/instance.html). There are limitations on what you can specify. See the list of officially supported inputs in the reference documentation (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_SpotFleetLaunchSpecification.html). Any normal aws_instance (/docs/providers/aws/r/instance.html) parameter that corresponds to those inputs may be used, along with one additional parameter: iam_instance_profile_arn , which takes the arn attribute of an aws_iam_instance_profile as input.
spot_price - (Optional; Default: On-demand price) The maximum bid price per unit hour.
wait_for_fulfillment - (Optional; Default: false) If set, Terraform will wait for the Spot Request to be fulfilled, and will throw an error if the timeout of 10m is reached.
target_capacity - The number of units to request. You can choose to set the target capacity in terms of instances or a performance characteristic that is important to your application workload, such as vCPUs, memory, or I/O.
allocation_strategy - Indicates how to allocate the target capacity across the Spot pools specified by the Spot fleet request. The default is lowestPrice .
instance_pools_to_use_count - (Optional; Default: 1) The number of Spot pools across which to allocate your target Spot capacity. Valid only when allocation_strategy is set to lowestPrice . Spot Fleet selects the cheapest Spot pools and evenly allocates your target Spot capacity across the number of Spot pools that you specify.
terminate_instances_with_expiration - Indicates whether running Spot instances should be terminated when the Spot fleet request expires.
fleet_type - (Optional) The type of fleet request. Indicates whether the Spot Fleet only requests the target capacity or also attempts to maintain it. Default is maintain .
valid_until - (Optional) The end date and time of the request, in UTC RFC3339 (https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, YYYY-MM-DDTHH:MM:SSZ). At this point, no new Spot instance requests are placed or enabled to fulfill the request. Defaults to 24 hours.
valid_from - (Optional) The start date and time of the request, in UTC RFC3339 (https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, YYYY-MM-DDTHH:MM:SSZ). The default is to start fulfilling the request immediately.
load_balancers (Optional) A list of elastic load balancer names to add to the Spot fleet.
target_group_arns (Optional) A list of aws_alb_target_group ARNs, for use with Application Load Balancing.
Timeouts
The timeouts block allows you to specify timeouts (https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
create - (Defaults to 10 mins) Used when requesting the spot instance (only valid if wait_for_fulfillment =
true )
Attributes Reference
In addition to all arguments above, the following attributes are exported:
By default Terraform creates Spot Instance Requests with a persistent type, which means that for the duration of their lifetime, AWS will launch an instance with the configured details if and when the spot market will accept the requested price.
On destruction, Terraform will make an attempt to terminate the associated Spot Instance if there is one present.
Spot Instance requests with a one-time type will close the spot request when the instance is terminated either by the request being below the current spot price availability or by a user.
NOTE: Because their behavior depends on the live status of the spot market, Spot Instance Requests have a unique lifecycle that makes them behave differently than other Terraform resources. Most importantly: there is no guarantee that a Spot Instance exists to fulfill the request at any given point in time. See the AWS Spot Instance documentation (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html) for more information.
Example Usage
resource "aws_spot_instance_request" "cheap_worker" { # wrapper assumed; AMI and values illustrative
  ami           = "ami-1234"
  spot_price    = "0.03"
  instance_type = "c4.xlarge"
  tags = {
    Name = "CheapWorker"
  }
}
Argument Reference
Spot Instance Requests support all the same arguments as aws_instance (/docs/providers/aws/r/instance.html), with the
addition of:
spot_price - (Optional; Default: On-demand price) The maximum price to request on the spot market.
wait_for_fulfillment - (Optional; Default: false) If set, Terraform will wait for the Spot Request to be fulfilled, and will throw an error if the timeout of 10m is reached.
spot_type - (Optional; Default: persistent ) If set to one-time , after the instance is terminated, the spot request
will be closed.
launch_group - (Optional) A launch group is a group of spot instances that launch together and terminate together.
If left empty instances are launched and terminated individually.
block_duration_minutes - (Optional) The required duration for the Spot instances, in minutes. This value must be a
multiple of 60 (60, 120, 180, 240, 300, or 360). The duration period starts as soon as your Spot instance receives its
instance ID. At the end of the duration period, Amazon EC2 marks the Spot instance for termination and provides a
Spot instance termination notice, which gives the instance a two-minute warning before it terminates. Note that you
can't specify an Availability Zone group or a launch group if you specify a duration.
valid_until - (Optional) The end date and time of the request, in UTC RFC3339 (https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, YYYY-MM-DDTHH:MM:SSZ). At this point, no new Spot instance requests are placed or enabled to fulfill the request. The default end date is 7 days from the current date.
valid_from - (Optional) The start date and time of the request, in UTC RFC3339 (https://tools.ietf.org/html/rfc3339#section-5.8) format (for example, YYYY-MM-DDTHH:MM:SSZ). The default is to start fulfilling the request immediately.
Timeouts
The timeouts block allows you to specify timeouts (https://www.terraform.io/docs/configuration/resources.html#timeouts) for certain actions:
create - (Defaults to 10 mins) Used when requesting the spot instance (only valid if wait_for_fulfillment =
true )
delete - (Defaults to 20 mins) Used when terminating all instances launched via the given spot instance request
Attributes Reference
In addition to all arguments above, the following attributes are exported:
These attributes are exported, but they are expected to change over time and so should only be used for informational
purposes, not for resource dependencies:
spot_instance_id - The Instance ID (if any) that is currently fulfilling the Spot Instance request.
public_dns - The public DNS name assigned to the instance. For EC2-VPC, this is only available if you've enabled DNS
hostnames for your VPC
private_dns - The private DNS name assigned to the instance. Can only be used inside the Amazon EC2, and only
available if you've enabled DNS hostnames for your VPC
private_ip - The private IP address assigned to the instance
Resource: aws_sqs_queue
Example Usage
resource "aws_sqs_queue" "terraform_queue" { # wrapper assumed; queue name illustrative
  name = "terraform-example-queue"
  tags = {
    Environment = "production"
  }
}
FIFO queue
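A minimal FIFO sketch; the queue name must end in .fifo:
resource "aws_sqs_queue" "terraform_fifo_queue" {
  name                        = "terraform-example-queue.fifo"
  fifo_queue                  = true
  content_based_deduplication = true
}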
Argument Reference
The following arguments are supported:
name - (Optional) This is the human-readable name of the queue. If omitted, Terraform will assign a random name.
name_prefix - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with name .
visibility_timeout_seconds - (Optional) The visibility timeout for the queue. An integer from 0 to 43200 (12
hours). The default for this attribute is 30. For more information about visibility timeout, see AWS docs
(https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html).
message_retention_seconds - (Optional) The number of seconds Amazon SQS retains a message. Integer
representing seconds, from 60 (1 minute) to 1209600 (14 days). The default for this attribute is 345600 (4 days).
max_message_size - (Optional) The limit of how many bytes a message can contain before Amazon SQS rejects it. An
integer from 1024 bytes (1 KiB) up to 262144 bytes (256 KiB). The default for this attribute is 262144 (256 KiB).
delay_seconds - (Optional) The time in seconds that the delivery of all messages in the queue will be delayed. An
integer from 0 to 900 (15 minutes). The default for this attribute is 0 seconds.
receive_wait_time_seconds - (Optional) The time for which a ReceiveMessage call will wait for a message to arrive
(long polling) before returning. An integer from 0 to 20 (seconds). The default for this attribute is 0, meaning that the
call will return immediately.
policy - (Optional) The JSON policy for the SQS queue. For more information about building AWS IAM policy
documents with Terraform, see the AWS IAM Policy Document Guide (/docs/providers/aws/guides/iam-policy-
documents.html).
redrive_policy - (Optional) The JSON policy to set up the Dead Letter Queue, see AWS docs
(https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQSDeadLetterQueue.html).
Note: when specifying maxReceiveCount , you must specify it as an integer ( 5 ), and not a string ( "5" ).
fifo_queue - (Optional) Boolean designating a FIFO queue. If not set, it defaults to false making it standard.
content_based_deduplication - (Optional) Enables content-based deduplication for FIFO queues. For more
information, see the related documentation
(http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html#FIFO-queues-
exactly-once-processing)
kms_master_key_id - (Optional) The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom
CMK. For more information, see Key Terms
(http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-
encryption.html#sqs-sse-key-terms).
kms_data_key_reuse_period_seconds - (Optional) The length of time, in seconds, for which Amazon SQS can reuse
a data key to encrypt or decrypt messages before calling AWS KMS again. An integer representing seconds, between 60
seconds (1 minute) and 86,400 seconds (24 hours). The default is 300 (5 minutes).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
SQS Queues can be imported using the queue url , e.g.
Example Usage
resource "aws_sqs_queue_policy" "test" { # wrapper assumed; queue "q" is defined elsewhere
  queue_url = "${aws_sqs_queue.q.id}"
  policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "sqspolicy",
"Statement": [
{
"Sid": "First",
"Effect": "Allow",
"Principal": "*",
"Action": "sqs:SendMessage",
"Resource": "${aws_sqs_queue.q.arn}",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "${aws_sns_topic.example.arn}"
}
}
}
]
}
POLICY
}
Argument Reference
The following arguments are supported:
queue_url - (Required) The URL of the SQS Queue to which to attach the policy
policy - (Required) The JSON policy for the SQS queue. For more information about building AWS IAM policy
documents with Terraform, see the AWS IAM Policy Document Guide (/docs/providers/aws/guides/iam-policy-
documents.html).
Import
SQS Queue Policies can be imported using the queue URL, e.g.
$ terraform import aws_sqs_queue_policy.test https://queue.amazonaws.com/0123456789012/myqueue
Resource: aws_ssm_activation
Registers an on-premises server or virtual machine with Amazon EC2 so that it can be managed using Run Command.
Example Usage
resource "aws_iam_role" "test_role" {
  # Resource wrapper assumed; the trust policy below is from the original example.
  name = "test_role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "ssm.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}
EOF
}
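Following the role above, a sketch of the activation itself; the iam_role argument, the attached managed policy, and the resource names are assumptions not shown in the argument list below:
resource "aws_iam_role_policy_attachment" "test_attach" {
  role       = "${aws_iam_role.test_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM"
}

resource "aws_ssm_activation" "foo" {
  name               = "test_ssm_activation"
  description        = "Test"
  iam_role           = "${aws_iam_role.test_role.id}"
  registration_limit = "5"
  depends_on         = ["aws_iam_role_policy_attachment.test_attach"]
}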
Argument Reference
The following arguments are supported:
description - (Optional) The description of the resource that you want to register.
registration_limit - (Optional) The maximum number of managed instances you want to register. The default
value is 1 instance.
activation_code - The code the system generates when it processes the activation.
expiration_date - The date by which this activation request should expire. The default value is 24 hours.
registration_limit - The maximum number of managed instances you want to be registered. The default value is 1
instance.
registration_count - The number of managed instances that are currently registered using this activation.
Resource: aws_ssm_association
Associates an SSM Document to an instance or EC2 tag.
Example Usage
resource "aws_ssm_association" "example" {
  # Wrapper assumed; the referenced SSM document is defined elsewhere.
  name = "${aws_ssm_document.example.name}"
  targets {
    key    = "InstanceIds"
    values = ["${aws_instance.example.id}"]
  }
}
Argument Reference
The following arguments are supported:
document_version - (Optional) The document version you want to associate with the target(s). Can be a specific version or the default version.
instance_id - (Optional) The instance ID to apply an SSM document to. Use targets with key InstanceIds for
document schema versions 2.0 and above.
parameters - (Optional) A block of arbitrary string parameters to pass to the SSM document.
schedule_expression - (Optional) A cron expression when the association will be applied to the target(s).
targets - (Optional) A block containing the targets of the SSM association. Targets are documented below. AWS
currently supports a maximum of 5 targets.
compliance_severity - (Optional) The compliance severity for the association. Can be one of the following:
UNSPECIFIED , LOW , MEDIUM , HIGH or CRITICAL
max_concurrency - (Optional) The maximum number of targets allowed to run the association at the same time. You
can specify a number, for example 10, or a percentage of the target set, for example 10%.
max_errors - (Optional) The number of errors that are allowed before the system stops sending requests to run the
association on additional targets. You can specify a number, for example 10, or a percentage of the target set, for
example 10%.
Output Location ( output_location ) is an S3 bucket where you want to store the results of this association:
Targets specify what instance IDs or tags to apply the document to and have these keys:
values - (Required) A list of instance IDs or tag values. AWS currently limits this list size to one value.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
instance_id - The instance id that the SSM document was applied to.
Import
SSM associations can be imported using the association_id , e.g.
NOTE on updating SSM documents: Only documents with a schema version of 2.0 or greater can update their content
once created, see SSM Schema Features (http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-
ssm-docs.html#document-schemas-features). To update a document with an older schema version you must recreate
the resource.
Example Usage
resource "aws_ssm_document" "foo_document" {
  # Resource wrapper assumed; the name is illustrative and document_type matches the content below.
  name          = "test_document"
  document_type = "Command"

  content = <<DOC
{
  "schemaVersion": "1.2",
  "description": "Check ip configuration of a Linux instance.",
  "parameters": {
  },
  "runtimeConfig": {
    "aws:runShellScript": {
      "properties": [
        {
          "id": "0.aws:runShellScript",
          "runCommand": ["ifconfig"]
        }
      ]
    }
  }
}
DOC
}
Argument Reference
The following arguments are supported:
document_format - (Optional, defaults to JSON) The format of the document. Valid document types include: JSON
and YAML
document_type - (Required) The type of the document. Valid document types include: Command , Policy ,
Automation and Session
permissions - (Optional) Additional Permissions to attach to the document. See Permissions below for details.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
hash_type - "Sha1" "Sha256". The hashing algorithm used when hashing the content.
owner - The AWS user account of the person who created the document.
platform_types - A list of OS platforms compatible with this SSM document, either "Windows" or "Linux".
Permissions
The permissions attribute specifies how you want to share the document. If you share a document privately, you must
specify the AWS user account IDs for those people who can use the document. If you share a document publicly, you must
specify All as the account ID.
type - The permission type for the document. The permission type can be Share .
account_ids - The AWS user accounts that should have access to the document. The account IDs can either be a
group of account IDs or All .
Import
SSM Documents can be imported using the name, e.g.
Example Usage
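A minimal sketch; the duration argument is assumed (it is not listed below) and the schedule is illustrative:
resource "aws_ssm_maintenance_window" "production" {
  name     = "maintenance-window-application"
  schedule = "cron(0 16 ? * TUE *)"
  duration = 3
  cutoff   = 1
}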
Argument Reference
The following arguments are supported:
schedule - (Required) The schedule of the Maintenance Window in the form of a cron
(https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-maintenance-cron.html) or rate
expression.
cutoff - (Required) The number of hours before the end of the Maintenance Window that Systems Manager stops
scheduling new tasks for execution.
allow_unassociated_targets - (Optional) Whether targets must be registered with the Maintenance Window before tasks can be defined for those targets.
schedule_timezone - (Optional) Timezone for schedule in Internet Assigned Numbers Authority (IANA) Time Zone
Database format (https://www.iana.org/time-zones). For example: America/Los_Angeles , etc/UTC , or
Asia/Seoul .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The ID of the maintenance window.
Import
SSM Maintenance Windows can be imported using the maintenance window id , e.g.
$ terraform import aws_ssm_maintenance_window.imported-window mw-0123456789
Resource: aws_ssm_maintenance_window_target
Provides an SSM Maintenance Window Target resource
Example Usage
resource "aws_ssm_maintenance_window_target" "target1" { # wrapper assumed
  window_id     = "${aws_ssm_maintenance_window.window.id}"
  resource_type = "INSTANCE"

  targets {
    key    = "tag:Name"
    values = ["acceptance_test"]
  }
}
Argument Reference
The following arguments are supported:
window_id - (Required) The Id of the maintenance window to register the target with.
resource_type - (Required) The type of target being registered with the Maintenance Window. Possible values
INSTANCE .
targets - (Required) The targets (either instances or tags). Instances are specified using Key=InstanceIds,Values=InstanceId1,InstanceId2. Tags are specified using Key=tag name,Values=tag value.
owner_information - (Optional) User-provided value that will be included in any CloudWatch events raised while
running tasks for these targets in this Maintenance Window.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Automation Tasks
targets {
key = "InstanceIds"
values = ["${aws_instance.example.id}"]
}
task_invocation_parameters {
automation_parameters {
document_version = "$LATEST"
parameter {
name = "InstanceId"
values = ["${aws_instance.example.id}"]
}
}
}
}
Lambda Tasks
resource "aws_ssm_maintenance_window_task" "example" {
max_concurrency = 2
max_errors = 1
priority = 1
service_role_arn = "${aws_iam_role.example.arn}"
task_arn = "${aws_lambda_function.example.arn}"
task_type = "LAMBDA"
window_id = "${aws_ssm_maintenance_window.example.id}"
targets {
key = "InstanceIds"
values = ["${aws_instance.example.id}"]
}
task_invocation_parameters {
lambda_parameters {
client_context = "${base64encode("{\"key1\":\"value1\"}")}"
payload = "{\"key1\":\"value1\"}"
}
}
}
targets {
key = "InstanceIds"
values = ["${aws_instance.example.id}"]
}
task_invocation_parameters {
run_command_parameters {
output_s3_bucket = "${aws_s3_bucket.example.bucket}"
output_s3_key_prefix = "output"
service_role_arn = "${aws_iam_role.example.arn}"
timeout_seconds = 600
notification_config {
notification_arn = "${aws_sns_topic.example.arn}"
notification_events = ["All"]
notification_type = ["Command"]
}
parameter {
name = "commands"
values = ["date"]
}
}
}
}
targets {
key = "InstanceIds"
values = ["${aws_instance.example.id}"]
}
task_invocation_parameters {
step_functions_parameters {
input = "{\"key1\":\"value1\"}"
name = "example"
}
}
}
Argument Reference
The following arguments are supported:
window_id - (Required) The Id of the maintenance window to register the task with.
max_concurrency - (Required) The maximum number of targets this task can be run for in parallel.
max_errors - (Required) The maximum number of errors allowed before this task stops being scheduled.
task_type - (Required) The type of task being registered. Valid values: AUTOMATION , LAMBDA , RUN_COMMAND , or STEP_FUNCTIONS .
service_role_arn - (Required) The role that should be assumed when executing the task.
targets - (Required) The targets (either instances or window target ids). Instances are specified using Key=InstanceIds,Values=instanceid1,instanceid2. Window target ids are specified using Key=WindowTargetIds,Values=window target id1, window target id2.
priority - (Optional) The priority of the task in the Maintenance Window, the lower the number the higher the priority. Tasks in a Maintenance Window are scheduled in priority order with tasks that have the same priority scheduled in parallel.
logging_info - (Optional, Deprecated) A structure containing information about an Amazon S3 bucket to write instance-level logs to. Use the output_s3_* arguments of the run_command_parameters configuration block under task_invocation_parameters instead. Conflicts with task_invocation_parameters . Documented below.
task_parameters - (Optional, Deprecated) A structure containing information about parameters required by the particular task_arn . Use parameter configuration blocks under the task_invocation_parameters configuration block instead. Conflicts with task_invocation_parameters . Documented below.
task_invocation_parameters - (Optional) The parameters for task execution. This argument conflicts with task_parameters and logging_info .
s3_bucket_name - (Required)
s3_region - (Required)
s3_bucket_prefix - (Optional)
name - (Required)
values - (Required)
automation_parameters - (Optional) The parameters for an AUTOMATION task type. Documented below.
lambda_parameters - (Optional) The parameters for a LAMBDA task type. Documented below.
run_command_parameters - (Optional) The parameters for a RUN_COMMAND task type. Documented below.
step_functions_parameters - (Optional) The parameters for a STEP_FUNCTIONS task type. Documented below.
document_version - (Optional) The version of an Automation document to use during task execution.
parameter - (Optional) The parameters for the RUN_COMMAND task execution. Documented below.
client_context - (Optional) Pass client-specific information to the Lambda function that you are invoking.
document_hash - (Optional) The SHA-256 or SHA-1 hash created by the system when the document was created. SHA-
1 hashes have been deprecated.
document_hash_type - (Optional) SHA-256 or SHA-1. SHA-1 hashes have been deprecated. Valid values: Sha256 and
Sha1
notification_config - (Optional) Configurations for sending notifications about command status changes on a per-instance basis. Documented below.
parameter - (Optional) The parameters for the RUN_COMMAND task execution. Documented below.
service_role_arn - (Optional) The IAM service role to assume during task execution.
timeout_seconds - (Optional) If this time is reached and the command has not already started executing, it doesn't run.
notification_arn - (Optional) An Amazon Resource Name (ARN) for a Simple Notification Service (SNS) topic. Run Command pushes notifications about command status changes to this topic.
notification_events - (Optional) The different events for which you can receive notifications. Valid values: All , InProgress , Success , TimedOut , Cancelled , and Failed
notification_type - (Optional) When specified with Command , receive notification when the status of a command changes. When specified with Invocation , for commands sent to multiple instances, receive notification on a per-instance basis when the status of a command changes. Valid values: Command and Invocation
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
AWS Maintenance Window Task can be imported using the window_id and window_task_id separated by / .
Example Usage
To store a basic string parameter:
resource "aws_ssm_parameter" "foo" { # wrapper assumed; name and value illustrative
  name  = "foo"
  type  = "String"
  value = "bar"
  tags = {
    environment = "${var.environment}"
  }
}
Note: The unencrypted value of a SecureString will be stored in the raw state as plain-text. Read more about sensitive
data in state (/docs/state/sensitive-data.html).
Argument Reference
The following arguments are supported:
name - (Required) The name of the parameter. If the name contains a path (e.g. any forward slashes ( / )), it must be fully qualified with a leading forward slash ( / ). For additional requirements and constraints, see the AWS SSM User Guide (https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-parameter-name-constraints.html).
type - (Required) The type of the parameter. Valid types are String , StringList and SecureString .
tier - (Optional) The tier of the parameter. If not specified, will default to Standard . Valid tiers are Standard and Advanced . For more information on parameter tiers, see the AWS SSM Parameter tier comparison and guide (https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-advanced-parameters.html).
overwrite - (Optional) Overwrite an existing parameter. If not specified, will default to false if the resource has not been created by terraform to avoid overwrite of existing resource and will default to true otherwise (terraform lifecycle rules should then be used to manage the update behavior).
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
SSM Parameters can be imported using the parameter store name , e.g.
NOTE on Patch Baselines: The approved_patches and approval_rule are both marked as optional fields, but the Patch Baseline requires that at least one of them is specified.
Example Usage
Basic usage using approved_patches only
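A minimal sketch of that basic form; the name and patch ID are illustrative. The global_filter and approval_rule blocks that follow belong to a fuller, more advanced baseline definition:
resource "aws_ssm_patch_baseline" "production" {
  name             = "patch-baseline"
  approved_patches = ["KB123456"]
}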
global_filter {
key = "PRODUCT"
values = ["WindowsServer2008"]
}
global_filter {
key = "CLASSIFICATION"
values = ["ServicePacks"]
}
global_filter {
key = "MSRC_SEVERITY"
values = ["Low"]
}
approval_rule {
approve_after_days = 7
compliance_level = "HIGH"
patch_filter {
key = "PRODUCT"
values = ["WindowsServer2016"]
}
patch_filter {
key = "CLASSIFICATION"
values = ["CriticalUpdates", "SecurityUpdates", "Updates"]
}
patch_filter {
key = "MSRC_SEVERITY"
values = ["Critical", "Important", "Moderate"]
}
}
approval_rule {
approve_after_days = 7
patch_filter {
key = "PRODUCT"
values = ["WindowsServer2012"]
}
}
}
Argument Reference
The following arguments are supported:
name - (Required) The name of the patch baseline.
operating_system - (Optional) Defines the operating system the patch baseline applies to. Supported operating systems include WINDOWS , AMAZON_LINUX , AMAZON_LINUX_2 , SUSE , UBUNTU , CENTOS , and REDHAT_ENTERPRISE_LINUX . The default value is WINDOWS .
approved_patches_compliance_level - (Optional) Defines the compliance level for approved patches. This means that if an approved patch is reported as missing, this is the severity of the compliance violation. Valid compliance levels include the following: CRITICAL , HIGH , MEDIUM , LOW , INFORMATIONAL , UNSPECIFIED . The default value is UNSPECIFIED .
global_filter - (Optional) A set of global filters used to exclude patches from the baseline. Up to 4 global filters can be specified using Key/Value pairs. Valid Keys are PRODUCT | CLASSIFICATION | MSRC_SEVERITY | PATCH_ID.
approval_rule - (Optional) A set of rules used to include patches in the baseline. Up to 10 approval rules can be specified. Each approval_rule block requires the fields documented below.
approve_after_days - (Required) The number of days after the release date of each patch matched by the rule the patch is marked as approved in the patch baseline. Valid Range: 0 to 100.
patch_filter - (Required) The patch filter group that defines the criteria for the rule. Up to 4 patch filters can be specified per approval rule using Key/Value pairs. Valid Keys are PRODUCT | CLASSIFICATION | MSRC_SEVERITY | PATCH_ID .
compliance_level - (Optional) Defines the compliance level for patches approved by this rule. Valid compliance levels include the following: CRITICAL , HIGH , MEDIUM , LOW , INFORMATIONAL , UNSPECIFIED . The default value is UNSPECIFIED .
enable_non_security - (Optional) Boolean enabling the application of non-security updates. The default value is false . Valid for Linux instances only.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
SSM Patch Baselines can be imported by their baseline ID, e.g.
$ terraform import aws_ssm_patch_baseline.example pb-12345678
Resource: aws_ssm_patch_group
Provides an SSM Patch Group resource
Example Usage
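A minimal sketch, registering a group with the baseline defined earlier; the group name is illustrative:
resource "aws_ssm_patch_group" "patchgroup" {
  baseline_id = "${aws_ssm_patch_baseline.production.id}"
  patch_group = "patch-group-name"
}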
Argument Reference
The following arguments are supported:
baseline_id - (Required) The ID of the patch baseline to register the patch group with.
patch_group - (Required) The name of the patch group that should be registered with the patch baseline.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
resource "aws_s3_bucket" "hoge" {
bucket = "tf-test-bucket-1234"
region = "us-east-1"
}
resource "aws_s3_bucket_policy" "hoge" { # wrapper assumed; attaches the policy below to the bucket above
  bucket = "${aws_s3_bucket.hoge.id}"
  policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "SSMBucketPermissionsCheck",
"Effect": "Allow",
"Principal": {
"Service": "ssm.amazonaws.com"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::tf-test-bucket-1234"
},
{
"Sid": " SSMBucketDelivery",
"Effect": "Allow",
"Principal": {
"Service": "ssm.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": ["arn:aws:s3:::tf-test-bucket-1234/*"],
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
}
]
}
EOF
}
resource "aws_ssm_resource_data_sync" "foo" { # wrapper assumed; name illustrative
  name = "foo"

  s3_destination = {
bucket_name = "${aws_s3_bucket.hoge.bucket}"
region = "${aws_s3_bucket.hoge.region}"
}
}
Argument Reference
The following arguments are supported:
s3_destination
s3_destination supports the following:
region - (Required) Region with the bucket targeted by the Resource Data Sync.
sync_format - (Optional) A supported sync format. Only JsonSerDe is currently supported. Defaults to JsonSerDe.
Import
SSM resource data sync can be imported using the name , e.g.
NOTE: The gateway must have cache added (e.g. via the aws_storagegateway_cache
(/docs/providers/aws/r/storagegateway_cache.html) resource) before creating volumes otherwise the Storage Gateway
API will return an error.
NOTE: The gateway must have an upload buffer added (e.g. via the aws_storagegateway_upload_buffer (/docs/providers/aws/r/storagegateway_upload_buffer.html) resource) before the volume is operational to clients, however the Storage Gateway API will allow volume creation without error in that case and return volume status as UPLOAD BUFFER NOT CONFIGURED .
Example Usage
Argument Reference
The following arguments are supported:
network_interface_id - (Required) The network interface of the gateway on which to expose the iSCSI target. Only
IPv4 addresses are accepted.
target_name - (Required) The name of the iSCSI target used by initiators to connect to the target and as a suffix for the target ARN. The target name must be unique across all volumes of a gateway.
snapshot_id - (Optional) The snapshot ID of the snapshot to restore as the new cached volume. e.g. snap-1122aabb .
source_volume_arn - (Optional) The ARN for an existing volume. Specifying this ARN makes the new volume into an exact copy of the specified existing volume's latest recovery point. The volume_size_in_bytes value for this new volume must be equal to or larger than the size of the existing volume, in bytes.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
NOTE: The Storage Gateway API provides no method to remove a cache disk. Destroying this Terraform resource does
not perform any Storage Gateway actions.
Example Usage
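A minimal sketch, assuming a gateway resource and a local disk data source both named example:
resource "aws_storagegateway_cache" "example" {
  disk_id     = "${data.aws_storagegateway_local_disk.example.id}"
  gateway_arn = "${aws_storagegateway_gateway.example.arn}"
}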
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to all arguments above, the following attributes are exported:
id - Combined gateway Amazon Resource Name (ARN) and local disk identifier.
Import
aws_storagegateway_cache can be imported by using the gateway Amazon Resource Name (ARN) and local disk identifier separated with a colon ( : ), e.g.
NOTE: The Storage Gateway API requires the gateway to be connected to properly return information after activation. If
you are receiving The specified gateway is not connected errors during resource creation (gateway activation),
ensure your gateway instance meets the Storage Gateway requirements
(https://docs.aws.amazon.com/storagegateway/latest/userguide/Requirements.html).
Example Usage
File Gateway
Tape Gateway
Argument Reference
NOTE: One of activation_key or gateway_ip_address must be provided for resource creation (gateway activation).
Neither is required for resource import. If using gateway_ip_address , Terraform must be able to make an HTTP (port 80) GET request to the specified IP address from where it is running.
gateway_timezone - (Required) Time zone for the gateway. The time zone is of the format "GMT", "GMT-hr:mm", or
"GMT+hr:mm". For example, GMT-4:00 indicates the time is 4 hours behind GMT. The time zone is used, for example,
for scheduling snapshots and your gateway's maintenance schedule.
activation_key - (Optional) Gateway activation key during resource creation. Conflicts with gateway_ip_address . Additional information is available in the Storage Gateway User Guide (https://docs.aws.amazon.com/storagegateway/latest/userguide/get-activation-key.html).
gateway_ip_address - (Optional) Gateway IP address to retrieve activation key during resource creation. Conflicts
with activation_key . Gateway must be accessible on port 80 from where Terraform is running. Additional
information is available in the Storage Gateway User Guide
(https://docs.aws.amazon.com/storagegateway/latest/userguide/get-activation-key.html).
gateway_type - (Optional) Type of the gateway. The default value is STORED . Valid values: CACHED , FILE_S3 ,
STORED , VTL .
media_changer_type - (Optional) Type of medium changer to use for tape gateway. Terraform cannot detect drift of
this argument. Valid values: STK-L700 , AWS-Gateway-VTL .
smb_active_directory_settings - (Optional) Nested argument with Active Directory domain join information for Server Message Block (SMB) file shares. Only valid for FILE_S3 gateway type. Must be set before creating ActiveDirectory authentication SMB file shares. More details below.
smb_guest_password - (Optional) Guest password for Server Message Block (SMB) file shares. Only valid for FILE_S3 gateway type. Must be set before creating GuestAccess authentication SMB file shares. Terraform can only detect drift of the existence of a guest password, not its actual value from the gateway. Terraform can, however, update the password by changing the argument.
tape_drive_type - (Optional) Type of tape drive to use for tape gateway. Terraform cannot detect drift of this
argument. Valid values: IBM-ULT3580-TD5 .
smb_active_directory_settings
Information to join the gateway to an Active Directory domain for Server Message Block (SMB) file shares.
NOTE It is not possible to unconfigure this setting without recreating the gateway. Also, Terraform can only detect drift of the domain_name argument from the gateway.
domain_name - (Required) The name of the domain that you want the gateway to join.
password - (Required) The password of the user who has permission to add the gateway to the Active Directory
domain.
username - (Required) The user name of user who has permission to add the gateway to the Active Directory domain.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_storagegateway_gateway provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
create - (Default 10m ) How long to wait for gateway activation and connection to Storage Gateway.
Import
aws_storagegateway_gateway can be imported by using the gateway Amazon Resource Name (ARN), e.g.
Example Usage
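A minimal sketch; the gateway_arn argument is assumed (it is not shown in the list below) and the referenced resources are illustrative:
resource "aws_storagegateway_nfs_file_share" "example" {
  client_list  = ["0.0.0.0/0"]
  gateway_arn  = "${aws_storagegateway_gateway.example.arn}"
  location_arn = "${aws_s3_bucket.example.arn}"
  role_arn     = "${aws_iam_role.example.arn}"
}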
Argument Reference
The following arguments are supported:
client_list - (Required) The list of clients that are allowed to access the le gateway. The list must contain either
valid IP addresses or valid CIDR blocks. Set to ["0.0.0.0/0"] to not limit access. Minimum 1 item. Maximum 100
items.
location_arn - (Required) The ARN of the backed storage used for storing le data.
role_arn - (Required) The ARN of the AWS Identity and Access Management (IAM) role that a le gateway assumes
when it accesses the underlying storage.
default_storage_class - (Optional) The default storage class for objects put into an Amazon S3 bucket by the le
gateway. Defaults to S3_STANDARD . Valid values: S3_STANDARD , S3_STANDARD_IA , S3_ONEZONE_IA .
guess_mime_type_enabled - (Optional) Boolean value that enables guessing of the MIME type for uploaded objects
based on le extensions. Defaults to true .
kms_encrypted - (Optional) Boolean value if true to use Amazon S3 server side encryption with your own AWS KMS
key, or false to use a key managed by Amazon S3. Defaults to false .
kms_key_arn - (Optional) Amazon Resource Name (ARN) for KMS key used for Amazon S3 server side encryption. This
value can only be set when kms_encrypted is true.
nfs_file_share_defaults - (Optional) Nested argument with file share default values. More information below.
object_acl - (Optional) Access Control List permission for S3 bucket objects. Defaults to private .
read_only - (Optional) Boolean to indicate write status of file share. File share does not accept writes if true .
Defaults to false .
requester_pays - (Optional) Boolean who pays the cost of the request and the data download from the Amazon S3
bucket. Set this value to true if you want the requester to pay instead of the bucket owner. Defaults to false .
squash - (Optional) Maps a user to anonymous user. Defaults to RootSquash . Valid values: RootSquash (only root is
mapped to anonymous user), NoSquash (no one is mapped to anonymous user), AllSquash (everyone is mapped to
anonymous user)
nfs_file_share_defaults
Files and folders stored as Amazon S3 objects in S3 buckets don't, by default, have Unix file permissions assigned to them.
Upon discovery in an S3 bucket by Storage Gateway, the S3 objects that represent files and folders are assigned these
default Unix permissions.
directory_mode - (Optional) The Unix directory mode in the string form "nnnn". Defaults to "0777" .
file_mode - (Optional) The Unix file mode in the string form "nnnn". Defaults to "0666" .
group_id - (Optional) The default group ID for the file share (unless the files have another group ID specified).
Defaults to 65534 ( nfsnobody ). Valid values: 0 through 4294967294 .
owner_id - (Optional) The default owner ID for the file share (unless the files have another owner ID specified).
Defaults to 65534 ( nfsnobody ). Valid values: 0 through 4294967294 .
Attribute Reference
In addition to all arguments above, the following attributes are exported:
path - File share path used by the NFS client to identify the mount point.
Timeouts
aws_storagegateway_nfs_file_share provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
Import
aws_storagegateway_nfs_file_share can be imported by using the NFS File Share Amazon Resource Name (ARN), e.g.
$ terraform import aws_storagegateway_nfs_file_share.example arn:aws:storagegateway:us-east-1:12345678901
2:share/share-12345678
Resource: aws_storagegateway_smb_file_share
Manages an AWS Storage Gateway SMB File Share.
Example Usage
Active Directory Authentication
NOTE: The gateway must have already joined the Active Directory domain prior to SMB file share creation. e.g. via "SMB
Settings" in the AWS Storage Gateway console or smb_active_directory_settings in the
aws_storagegateway_gateway resource (/docs/providers/aws/r/storagegateway_gateway.html).
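A sketch of an Active Directory-authenticated share; the referenced gateway, bucket, and role are placeholders.

resource "aws_storagegateway_smb_file_share" "example" {
  authentication = "ActiveDirectory"
  gateway_arn    = "${aws_storagegateway_gateway.example.arn}"
  location_arn   = "${aws_s3_bucket.example.arn}"
  role_arn       = "${aws_iam_role.example.arn}"
}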
Guest Authentication
NOTE: The gateway must have already had the SMB guest password set prior to SMB file share creation. e.g. via "SMB
Settings" in the AWS Storage Gateway console or smb_guest_password in the aws_storagegateway_gateway
resource (/docs/providers/aws/r/storagegateway_gateway.html).
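A sketch of a guest-authenticated share; the referenced gateway, bucket, and role are placeholders.

resource "aws_storagegateway_smb_file_share" "example" {
  authentication = "GuestAccess"
  gateway_arn    = "${aws_storagegateway_gateway.example.arn}"
  location_arn   = "${aws_s3_bucket.example.arn}"
  role_arn       = "${aws_iam_role.example.arn}"
}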
Argument Reference
The following arguments are supported:
location_arn - (Required) The ARN of the backed storage used for storing file data.
role_arn - (Required) The ARN of the AWS Identity and Access Management (IAM) role that a file gateway assumes
when it accesses the underlying storage.
authentication - (Optional) The authentication method that users use to access the file share. Defaults to
ActiveDirectory . Valid values: ActiveDirectory , GuestAccess .
default_storage_class - (Optional) The default storage class for objects put into an Amazon S3 bucket by the file
gateway. Defaults to S3_STANDARD . Valid values: S3_STANDARD , S3_STANDARD_IA , S3_ONEZONE_IA .
guess_mime_type_enabled - (Optional) Boolean value that enables guessing of the MIME type for uploaded objects
based on file extensions. Defaults to true .
invalid_user_list - (Optional) A list of users in the Active Directory that are not allowed to access the file share.
Only valid if authentication is set to ActiveDirectory .
kms_encrypted - (Optional) Boolean value if true to use Amazon S3 server side encryption with your own AWS KMS
key, or false to use a key managed by Amazon S3. Defaults to false .
kms_key_arn - (Optional) Amazon Resource Name (ARN) for KMS key used for Amazon S3 server side encryption. This
value can only be set when kms_encrypted is true.
smb_file_share_defaults - (Optional) Nested argument with file share default values. More information below.
object_acl - (Optional) Access Control List permission for S3 bucket objects. Defaults to private .
read_only - (Optional) Boolean to indicate write status of file share. File share does not accept writes if true .
Defaults to false .
requester_pays - (Optional) Boolean who pays the cost of the request and the data download from the Amazon S3
bucket. Set this value to true if you want the requester to pay instead of the bucket owner. Defaults to false .
valid_user_list - (Optional) A list of users in the Active Directory that are allowed to access the file share. Only valid
if authentication is set to ActiveDirectory .
smb_file_share_defaults
Files and folders stored as Amazon S3 objects in S3 buckets don't, by default, have Unix file permissions assigned to them.
Upon discovery in an S3 bucket by Storage Gateway, the S3 objects that represent files and folders are assigned these
default Unix permissions.
directory_mode - (Optional) The Unix directory mode in the string form "nnnn". Defaults to "0777" .
file_mode - (Optional) The Unix file mode in the string form "nnnn". Defaults to "0666" .
group_id - (Optional) The default group ID for the file share (unless the files have another group ID specified).
Defaults to 0 . Valid values: 0 through 4294967294 .
owner_id - (Optional) The default owner ID for the file share (unless the files have another owner ID specified).
Defaults to 0 . Valid values: 0 through 4294967294 .
Attribute Reference
In addition to all arguments above, the following attributes are exported:
path - File share path used by the SMB client to identify the mount point.
Timeouts
aws_storagegateway_smb_file_share provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
Import
aws_storagegateway_smb_file_share can be imported by using the SMB File Share Amazon Resource Name (ARN), e.g.
NOTE: The Storage Gateway API provides no method to remove an upload buffer disk. Destroying this Terraform
resource does not perform any Storage Gateway actions.
Example Usage
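A sketch assuming the resource takes a gateway_arn and a local disk_id (for example from the aws_storagegateway_local_disk data source); both references are placeholders.

resource "aws_storagegateway_upload_buffer" "example" {
  disk_id     = "${data.aws_storagegateway_local_disk.example.id}"
  gateway_arn = "${aws_storagegateway_gateway.example.arn}"
}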
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to all arguments above, the following attributes are exported:
id - Combined gateway Amazon Resource Name (ARN) and local disk identifier.
Import
aws_storagegateway_upload_buffer can be imported by using the gateway Amazon Resource Name (ARN) and local disk
identifier separated with a colon ( : ), e.g.
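For example, with a hypothetical gateway ARN and local disk identifier:

$ terraform import aws_storagegateway_upload_buffer.example arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678:pci-0000:03:00.0-scsi-0:0:0:0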
NOTE: The Storage Gateway API provides no method to remove a working storage disk. Destroying this Terraform
resource does not perform any Storage Gateway actions.
Example Usage
Argument Reference
The following arguments are supported:
Attribute Reference
In addition to all arguments above, the following attributes are exported:
id - Combined gateway Amazon Resource Name (ARN) and local disk identifier.
Import
aws_storagegateway_working_storage can be imported by using the gateway Amazon Resource Name (ARN) and local
disk identifier separated with a colon ( : ), e.g.
NOTE: Due to AWS Lambda improved VPC networking changes that began deploying in September 2019
(https://aws.amazon.com/blogs/compute/announcing-improved-vpc-networking-for-aws-lambda-functions/), subnets
associated with Lambda Functions can take up to 45 minutes to successfully delete. Terraform AWS Provider version
2.31.0 and later automatically handles this increased timeout, however prior versions require setting the customizable
deletion timeout to 45 minutes ( delete = "45m" ). AWS and HashiCorp are working together to reduce the amount of
time required for resource deletion and updates can be tracked in this GitHub issue (https://github.com/terraform-
providers/terraform-provider-aws/issues/10329).
Example Usage
Basic Usage
resource "aws_subnet" "main" {
  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "10.0.1.0/24"
  tags = {
    Name = "Main"
  }
}
Argument Reference
The following arguments are supported:
ipv6_cidr_block - (Optional) The IPv6 network range for the subnet, in CIDR notation. The subnet size must use a
/64 prefix length.
map_public_ip_on_launch - (Optional) Specify true to indicate that instances launched into the subnet should be
assigned a public IP address. Default is false .
assign_ipv6_address_on_creation - (Optional) Specify true to indicate that network interfaces created in the
specified subnet should be assigned an IPv6 address. Default is false .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Timeouts
aws_subnet provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
delete - (Default 20m ) How long to retry on DependencyViolation errors during subnet deletion from lingering
ENIs left by certain AWS services such as Elastic Load Balancing. NOTE: Lambda ENIs can take up to 45 minutes to
delete, which is not affected by changing this customizable timeout (in version 2.31.0 and later of the Terraform AWS
Provider) unless it is increased above 45 minutes.
Import
Subnets can be imported using the subnet id , e.g.
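For example, with a hypothetical subnet ID:

$ terraform import aws_subnet.public_subnet subnet-9d4a7b6c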
Example Usage
To register a basic SWF domain:
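A minimal sketch; the domain name and retention period are placeholder values.

resource "aws_swf_domain" "foo" {
  name                                        = "foo"
  description                                 = "SWF Domain managed by Terraform"
  workflow_execution_retention_period_in_days = 30
}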
Argument Reference
The following arguments are supported:
name - (Optional, Forces new resource) The name of the domain. If omitted, Terraform will assign a random, unique
name.
name_prefix - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts
with name .
workflow_execution_retention_period_in_days - (Required, Forces new resource) Length of time that SWF will
continue to retain information about the workflow execution after the workflow execution is complete, must be
between 0 and 90 days.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
SWF Domains can be imported using the name , e.g.
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "transfer.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowFullAccesstoCloudWatchLogs",
"Effect": "Allow",
"Action": [
"logs:*"
],
"Resource": "*"
}
]
}
POLICY
}
tags = {
NAME = "tf-acc-test-transfer-server"
ENV = "test"
}
}
Argument Reference
The following arguments are supported:
endpoint_details - (Optional) The virtual private cloud (VPC) endpoint settings that you want to configure for your
SFTP server. Fields documented below.
endpoint_type - (Optional) The type of endpoint that you want your SFTP server to connect to. If you connect to a
VPC_ENDPOINT , your SFTP server isn't accessible over the public internet. If you want to connect your SFTP server over
the public internet, set PUBLIC . Defaults to PUBLIC .
invocation_role - (Optional) Amazon Resource Name (ARN) of the IAM role used to authenticate the user account
with an identity_provider_type of API_GATEWAY .
url - (Optional) - URL of the service endpoint used to authenticate users with an identity_provider_type of
API_GATEWAY .
identity_provider_type - (Optional) The mode of authentication enabled for this service. The default value is
SERVICE_MANAGED , which allows you to store and access SFTP user credentials within the service. API_GATEWAY
indicates that user authentication requires a call to an API Gateway endpoint URL provided by you to integrate an
identity provider of your choice.
logging_role - (Optional) Amazon Resource Name (ARN) of an IAM role that allows the service to write your SFTP
users’ activity to your Amazon CloudWatch logs for monitoring and auditing purposes.
force_destroy - (Optional) A boolean that indicates all users associated with the server should be deleted so that the
Server can be destroyed without error. The default value is false .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Transfer Servers can be imported using the server id , e.g.
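For example, with a hypothetical server ID:

$ terraform import aws_transfer_server.example s-12345678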
tags = {
NAME = "tf-acc-test-transfer-server"
}
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "transfer.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowFullAccesstoS3",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": "*"
}
]
}
POLICY
}
tags = {
NAME = "tftestuser"
}
}
Argument Reference
The following arguments are supported:
user_name - (Required) The name of the user account that is assigned to one or more servers.
Import
Transfer SSH Public Key can be imported using the server_id and user_name and ssh_public_key_id separated by / .
tags = {
NAME = "tf-acc-test-transfer-server"
}
}
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "transfer.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowFullAccesstoS3",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": "*"
}
]
}
POLICY
}
Argument Reference
The following arguments are supported:
user_name - (Required) The name used for logging in to your SFTP server.
home_directory - (Optional) The landing directory (folder) for a user when they log in to the server using their SFTP
client. It should begin with a / . The first item in the path is the name of the home bucket (accessible as
${Transfer:HomeBucket} in the policy) and the rest is the home directory (accessible as
${Transfer:HomeDirectory} in the policy). For example, /example-bucket-1234/username would set the home
bucket to example-bucket-1234 and the home directory to username .
policy - (Optional) An IAM JSON policy document that scopes down user access to portions of their Amazon S3
bucket. IAM variables you can use inside this policy include ${Transfer:UserName} , ${Transfer:HomeDirectory} ,
and ${Transfer:HomeBucket} . Since the IAM variable syntax matches Terraform's interpolation syntax, they must be
escaped inside Terraform configuration strings ( $${Transfer:UserName} ). These are evaluated on the fly when
navigating the bucket.
role - (Required) Amazon Resource Name (ARN) of an IAM role that allows the service to control your user's
access to your Amazon S3 bucket.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
Transfer Users can be imported using the server_id and user_name separated by / .
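For example, with a hypothetical server ID and user name:

$ terraform import aws_transfer_user.example s-12345678/test-username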
NOTE on EBS block devices: If you use ebs_block_device on an aws_instance , Terraform will assume management
over the full set of non-root EBS block devices for the instance, and treats additional block devices as drift. For this
reason, ebs_block_device cannot be mixed with external aws_ebs_volume + aws_ebs_volume_attachment
resources for a given instance.
Example Usage
tags = {
Name = "HelloWorld"
}
}
Argument Reference
The following arguments are supported:
device_name - (Required) The device name to expose to the instance (for example, /dev/sdh or xvdh ). See Device
Naming on Linux Instances (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html#available-
ec2-device-names) and Device Naming on Windows Instances
(https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/device_naming.html#available-ec2-device-names) for
more information.
force_detach - (Optional, Boolean) Set to true if you want to force the volume to detach. Useful if previous
attempts failed, but use this option only as a last resort, as this can result in data loss. See Detaching an Amazon EBS
Volume from an Instance (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html) for
more information.
skip_destroy - (Optional, Boolean) Set this to true if you do not wish to detach the volume from the instance to
which it is attached at destroy time, and instead just remove the attachment from Terraform state. This is useful when
destroying an instance which has volumes created by some other means attached.
Attributes Reference
device_name - The device name exposed to the instance
Example Usage
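A minimal sketch, assuming an existing VPC and DHCP Options Set managed elsewhere in the configuration:

resource "aws_vpc_dhcp_options_association" "dns_resolver" {
  vpc_id          = "${aws_vpc.foo.id}"
  dhcp_options_id = "${aws_vpc_dhcp_options.foo.id}"
}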
Argument Reference
The following arguments are supported:
vpc_id - (Required) The ID of the VPC to which we would like to associate a DHCP Options Set.
dhcp_options_id - (Required) The ID of the DHCP Options Set to associate to the VPC.
Remarks
You can only associate one DHCP Options Set to a given VPC ID.
Removing the DHCP Options Association automatically sets AWS's default DHCP Options Set to the VPC.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Basic usage:
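A minimal sketch that only overrides the DNS servers (the addresses are placeholder values):

resource "aws_vpc_dhcp_options" "dns_resolver" {
  domain_name_servers = ["8.8.8.8", "8.8.4.4"]
}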
Full usage:
tags = {
Name = "foo-name"
}
}
Argument Reference
The following arguments are supported:
domain_name - (Optional) The suffix domain name to use by default when resolving non Fully Qualified Domain
Names. In other words, this is what ends up being the search value in the /etc/resolv.conf file.
domain_name_servers - (Optional) List of name servers to configure in /etc/resolv.conf . If you want to use the
default AWS nameservers you should set this to AmazonProvidedDNS .
netbios_node_type - (Optional) The NetBIOS node type (1, 2, 4, or 8). AWS recommends specifying 2 since broadcast
and multicast are not supported in their network. For more information about these node types, see RFC 2132
(http://www.ietf.org/rfc/rfc2132.txt).
Remarks
All arguments are optional, but you must specify at least one.
domain_name_servers , netbios_name_servers , and ntp_servers are limited by AWS to a maximum of four servers.
To actually use the DHCP Options Set you need to associate it to a VPC using aws_vpc_dhcp_options_association
(/docs/providers/aws/r/vpc_dhcp_options_association.html).
If you delete a DHCP Options Set, all VPCs using it will be associated to AWS's default DHCP Option Set.
In most cases unless you're configuring your own DNS you'll want to set domain_name_servers to
AmazonProvidedDNS .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
owner_id - The ID of the AWS account that owns the DHCP options set.
You can find more technical documentation about DHCP Options Set in the official AWS User Guide
(https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html).
Import
VPC DHCP Options can be imported using the dhcp options id , e.g.
Example Usage
policy = <<POLICY
{
"Version":"2012-10-17",
"Statement":[{
"Effect": "Allow",
"Principal": {
"Service": "vpce.amazonaws.com"
},
"Action": "SNS:Publish",
"Resource": "arn:aws:sns:*:*:vpce-notification-topic"
}]
}
POLICY
}
Argument Reference
The following arguments are supported:
vpc_endpoint_service_id - (Optional) The ID of the VPC Endpoint Service to receive notifications for.
vpc_endpoint_id - (Optional) The ID of the VPC Endpoint to receive notifications for.
connection_notification_arn - (Required) The ARN of the SNS topic for the notifications.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
VPC Endpoint connection notifications can be imported using the VPC endpoint connection notification id , e.g.
NOTE on VPC Endpoints and VPC Endpoint Associations: Terraform provides both standalone VPC Endpoint
Associations for Route Tables (/docs/providers/aws/r/vpc_endpoint_route_table_association.html) - (an association
between a VPC endpoint and a single route_table_id ) and Subnets
(/docs/providers/aws/r/vpc_endpoint_subnet_association.html) - (an association between a VPC endpoint and a single
subnet_id ) and a VPC Endpoint resource with route_table_ids and subnet_ids attributes. Do not use the same
resource ID in both a VPC Endpoint resource and a VPC Endpoint Association resource. Doing so will cause a conflict of
associations and will overwrite the association.
Example Usage
Basic
Basic w/ Tags
tags = {
Environment = "test"
}
}
security_group_ids = [
"${aws_security_group.sg1.id}",
]
private_dns_enabled = true
}
Custom Service
security_group_ids = [
"${aws_security_group.ptfe_service.id}",
]
subnet_ids = ["${local.subnet_ids}"]
private_dns_enabled = false
}
NOTE: The dns_entry output is a list of maps. Until Terraform fully supports interpolation of lists of maps, accessing its
elements requires the lookup function and the [] index syntax.
Argument Reference
The following arguments are supported:
service_name - (Required) The service name, in the form com.amazonaws.region.service for AWS services.
vpc_id - (Required) The ID of the VPC in which the endpoint will be used.
auto_accept - (Optional) Accept the VPC endpoint (the VPC endpoint and service need to be in the same AWS
account).
policy - (Optional) A policy to attach to the endpoint that controls access to the service. Defaults to full access. All
Gateway and some Interface endpoints support policies - see the relevant AWS documentation
(https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html) for more details. For more
information about building AWS IAM policy documents with Terraform, see the AWS IAM Policy Document Guide
(/docs/providers/aws/guides/iam-policy-documents.html).
private_dns_enabled - (Optional; AWS services and AWS Marketplace partner services only) Whether or not to
associate a private hosted zone with the specified VPC. Applicable for endpoints of type Interface . Defaults to
false .
route_table_ids - (Optional) One or more route table IDs. Applicable for endpoints of type Gateway .
subnet_ids - (Optional) The ID of one or more subnets in which to create a network interface for the endpoint.
Applicable for endpoints of type Interface .
security_group_ids - (Optional) The ID of one or more security groups to associate with the network interface.
Required for endpoints of type Interface .
vpc_endpoint_type - (Optional) The VPC endpoint type, Gateway or Interface . Defaults to Gateway .
Timeouts
aws_vpc_endpoint provides the following Timeouts (/docs/configuration/resources.html#timeouts) configuration options:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
cidr_blocks - The list of CIDR blocks for the exposed AWS service. Applicable for endpoints of type Gateway .
dns_entry - The DNS entries for the VPC Endpoint. Applicable for endpoints of type Interface . DNS blocks are
documented below.
network_interface_ids - One or more network interfaces for the VPC Endpoint. Applicable for endpoints of type
Interface .
owner_id - The ID of the AWS account that owns the VPC endpoint.
prefix_list_id - The prefix list ID of the exposed AWS service. Applicable for endpoints of type Gateway .
requester_managed - Whether or not the VPC Endpoint is being managed by its service - true or false .
Import
VPC Endpoints can be imported using the vpc endpoint id , e.g.
Example Usage
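A minimal sketch, assuming a Gateway-type endpoint and a route table managed elsewhere in the configuration:

resource "aws_vpc_endpoint_route_table_association" "example" {
  route_table_id  = "${aws_route_table.example.id}"
  vpc_endpoint_id = "${aws_vpc_endpoint.example.id}"
}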
Argument Reference
The following arguments are supported:
route_table_id - (Required) Identifier of the EC2 Route Table to be associated with the VPC Endpoint.
vpc_endpoint_id - (Required) Identifier of the VPC Endpoint with which the EC2 Route Table will be associated.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - A hash of the EC2 Route Table and VPC Endpoint identifiers.
Import
VPC Endpoint Route Table Associations can be imported using vpc_endpoint_id together with route_table_id , e.g.
NOTE on VPC Endpoint Services and VPC Endpoint Service Allowed Principals: Terraform provides both a standalone
VPC Endpoint Service Allowed Principal (/docs/providers/aws/r/vpc_endpoint_service_allowed_principal.html) resource
and a VPC Endpoint Service resource with an allowed_principals attribute. Do not use the same principal ARN in
both a VPC Endpoint Service resource and a VPC Endpoint Service Allowed Principal resource. Doing so will cause a
conflict and will overwrite the association.
Example Usage
Basic usage:
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
NOTE on VPC Endpoint Services and VPC Endpoint Service Allowed Principals: Terraform provides both a standalone
VPC Endpoint Service Allowed Principal (/docs/providers/aws/r/vpc_endpoint_service_allowed_principal.html) resource
and a VPC Endpoint Service resource with an allowed_principals attribute. Do not use the same principal ARN in
both a VPC Endpoint Service resource and a VPC Endpoint Service Allowed Principal resource. Doing so will cause a
conflict and will overwrite the association.
Example Usage
Basic
Basic w/ Tags
tags = {
Environment = "test"
}
}
Argument Reference
The following arguments are supported:
acceptance_required - (Required) Whether or not VPC endpoint connection requests to the service must be
accepted by the service owner - true or false .
network_load_balancer_arns - (Required) The ARNs of one or more Network Load Balancers for the endpoint
service.
allowed_principals - (Optional) The ARNs of one or more principals allowed to discover the endpoint service.
tags - (Optional) A mapping of tags to assign to the resource.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
manages_vpc_endpoints - Whether or not the service manages its VPC endpoints - true or false .
Import
VPC Endpoint Services can be imported using the VPC endpoint service id , e.g.
NOTE on VPC Endpoints and VPC Endpoint Subnet Associations: Terraform provides both a standalone VPC Endpoint
Subnet Association (an association between a VPC endpoint and a single subnet_id ) and a VPC Endpoint
(/docs/providers/aws/r/vpc_endpoint.html) resource with a subnet_ids attribute. Do not use the same subnet ID in
both a VPC Endpoint resource and a VPC Endpoint Subnet Association resource. Doing so will cause a conflict of
associations and will overwrite the association.
Example Usage
Basic usage:
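A minimal sketch, assuming an Interface-type endpoint and a subnet managed elsewhere in the configuration:

resource "aws_vpc_endpoint_subnet_association" "sn_ec2" {
  vpc_endpoint_id = "${aws_vpc_endpoint.ec2.id}"
  subnet_id       = "${aws_subnet.sn.id}"
}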
Argument Reference
The following arguments are supported:
vpc_endpoint_id - (Required) The ID of the VPC endpoint with which the subnet will be associated.
subnet_id - (Required) The ID of the subnet to be associated with the VPC endpoint.
Timeouts
aws_vpc_endpoint_subnet_association provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Basic usage:
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "main"
  }
}
Argument Reference
The following arguments are supported:
instance_tenancy - (Optional) A tenancy option for instances launched into the VPC
enable_dns_support - (Optional) A boolean flag to enable/disable DNS support in the VPC. Defaults to true .
enable_dns_hostnames - (Optional) A boolean flag to enable/disable DNS hostnames in the VPC. Defaults to false .
enable_classiclink - (Optional) A boolean flag to enable/disable ClassicLink for the VPC. Only valid in regions and
accounts that support EC2 Classic. See the ClassicLink documentation
(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vpc-classiclink.html) for more information. Defaults to false .
assign_generated_ipv6_cidr_block - (Optional) Requests an Amazon-provided IPv6 CIDR block with a /56 prefix
length for the VPC. You cannot specify the range of IP addresses, or the size of the CIDR block. Default is false .
main_route_table_id - The ID of the main route table associated with this VPC. Note that you can change a VPC's
main route table by using an aws_main_route_table_association
(/docs/providers/aws/r/main_route_table_assoc.html).
Import
VPCs can be imported using the vpc id , e.g.
When a VPC is created, a primary IPv4 CIDR block for the VPC must be specified. The
aws_vpc_ipv4_cidr_block_association resource allows further IPv4 CIDR blocks to be added to the VPC.
Example Usage
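A minimal sketch, assuming the association also takes the vpc_id of the VPC; the CIDR ranges are placeholder values.

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_vpc_ipv4_cidr_block_association" "secondary_cidr" {
  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "172.2.0.0/16"
}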
Argument Reference
The following arguments are supported:
cidr_block - (Required) The additional IPv4 CIDR block to associate with the VPC.
Timeouts
aws_vpc_ipv4_cidr_block_association provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
aws_vpc_ipv4_cidr_block_association can be imported by using the VPC CIDR Association ID, e.g.
$ terraform import aws_vpc_ipv4_cidr_block_association.example vpc-cidr-assoc-xxxxxxxx
Resource: aws_vpc_peering_connection_accepter
Provides a resource to manage the accepter's side of a VPC Peering Connection.
When a cross-account (requester's AWS account differs from the accepter's AWS account) or an inter-region VPC Peering
Connection is created, a VPC Peering Connection resource is automatically created in the accepter's account. The requester
can use the aws_vpc_peering_connection resource to manage its side of the connection and the accepter can use the
aws_vpc_peering_connection_accepter resource to "adopt" its side of the connection into management.
Example Usage
provider "aws" {
region = "us-east-1"
provider "aws" {
alias = "peer"
region = "us-west-2"
tags = {
Side = "Requester"
}
}
tags = {
Side = "Accepter"
}
}
Argument Reference
The following arguments are supported:
Attributes Reference
All of the argument attributes except auto_accept are also exported as result attributes.
allow_remote_vpc_dns_resolution - Indicates whether a local VPC can resolve public DNS hostnames to private IP
addresses when queried from instances in a peer VPC.
allow_classic_link_to_remote_vpc - Indicates whether a local ClassicLink connection can communicate with the
peer VPC over the VPC Peering Connection.
allow_vpc_to_remote_classic_link - Indicates whether a local VPC can communicate with a ClassicLink connection
in the peer VPC over the VPC Peering Connection.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
NOTE on VPC Peering Connections and VPC Peering Connection Options: Terraform provides both a standalone VPC
Peering Connection Options (/docs/providers/aws/r/vpc_peering_options.html) and a VPC Peering Connection resource
with accepter and requester attributes. Do not manage options for the same VPC peering connection in both a VPC
Peering Connection resource and a VPC Peering Connection Options resource. Doing so will cause a conflict of options
and will overwrite the options. Using a VPC Peering Connection Options resource decouples management of the
connection options from management of the VPC Peering Connection and allows options to be set correctly in cross-
account scenarios.
Note: For cross-account (requester's AWS account differs from the accepter's AWS account) or inter-region VPC Peering
Connections use the aws_vpc_peering_connection resource to manage the requester's side of the connection and
use the aws_vpc_peering_connection_accepter resource to manage the accepter's side of the connection.
Example Usage
accepter {
allow_remote_vpc_dns_resolution = true
}
requester {
allow_remote_vpc_dns_resolution = true
}
}
tags = {
Name = "VPC Peering between foo and bar"
}
}
Argument Reference
Note: Modifying the VPC Peering Connection options requires peering to be active. An automatic activation can be done
using the auto_accept (/docs/providers/aws/r/vpc_peering.html#auto_accept) attribute. Alternatively, the VPC Peering
Connection has to be made active manually using other means. See notes
(/docs/providers/aws/r/vpc_peering.html#notes) below for more information.
peer_owner_id - (Optional) The AWS account ID of the owner of the peer VPC. Defaults to the account ID the AWS
provider (/docs/providers/aws/index.html) is currently connected to.
peer_vpc_id - (Required) The ID of the VPC with which you are creating the VPC Peering Connection.
vpc_id - (Required) The ID of the requester VPC.
auto_accept - (Optional) Accept the peering (both VPCs need to be in the same AWS account).
peer_region - (Optional) The region of the accepter VPC of the VPC Peering Connection. auto_accept must be
false , and use the aws_vpc_peering_connection_accepter to manage the accepter side.
accepter (Optional) - An optional configuration block that allows for VPC Peering Connection
(http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide) options to be set for the VPC that accepts the peering
connection (a maximum of one).
requester (Optional) - An optional configuration block that allows for VPC Peering Connection
(http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide) options to be set for the VPC that requests the
peering connection (a maximum of one).
Note: When enabled, the DNS resolution feature requires that VPCs participating in the peering must have support for
the DNS hostnames enabled. This can be done using the enable_dns_hostnames
(/docs/providers/aws/r/vpc.html#enable_dns_hostnames) attribute in the aws_vpc (/docs/providers/aws/r/vpc.html)
resource. See Using DNS with Your VPC (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html) user
guide for more information.
allow_remote_vpc_dns_resolution - (Optional) Allow a local VPC to resolve public DNS hostnames to private IP
addresses when queried from instances in the peer VPC. This is not supported
(https://docs.aws.amazon.com/vpc/latest/peering/modify-peering-connections.html) for inter-region VPC peering.
Timeouts
aws_vpc_peering_connection provides the following Timeouts (/docs/configuration/resources.html#timeouts)
configuration options:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Notes
If both VPCs are not in the same AWS account do not enable the auto_accept attribute. The accepter can manage its side
of the connection using the aws_vpc_peering_connection_accepter resource or accept the connection manually using
the AWS Management Console, AWS CLI, through SDKs, etc.
Import
VPC Peering resources can be imported using the vpc peering id , e.g.
NOTE on VPC Peering Connections and VPC Peering Connection Options: Terraform provides both a standalone VPC
Peering Connection Options and a VPC Peering Connection (/docs/providers/aws/r/vpc_peering.html) resource with
accepter and requester attributes. Do not manage options for the same VPC peering connection in both a VPC
Peering Connection resource and a VPC Peering Connection Options resource. Doing so will cause a conflict of options
and will overwrite the options. Using a VPC Peering Connection Options resource decouples management of the
connection options from management of the VPC Peering Connection and allows options to be set correctly in cross-
region and cross-account scenarios.
Basic usage:
accepter {
allow_remote_vpc_dns_resolution = true
}
requester {
allow_vpc_to_remote_classic_link = true
allow_classic_link_to_remote_vpc = true
}
}
provider "aws" {
alias = "requester"
provider "aws" {
alias = "accepter"
}
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
}
cidr_block = "10.1.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
}
vpc_id = "${aws_vpc.main.id}"
peer_vpc_id = "${aws_vpc.peer.id}"
peer_owner_id = "${data.aws_caller_identity.peer.account_id}"
auto_accept = false
tags = {
Side = "Requester"
}
}
vpc_peering_connection_id = "${aws_vpc_peering_connection.peer.id}"
auto_accept = true
tags = {
Side = "Accepter"
}
}
vpc_peering_connection_id = "${aws_vpc_peering_connection_accepter.peer.id}"
requester {
allow_remote_vpc_dns_resolution = true
}
}
}
vpc_peering_connection_id = "${aws_vpc_peering_connection_accepter.peer.id}"
accepter {
allow_remote_vpc_dns_resolution = true
}
}
Argument Reference
The following arguments are supported:
accepter (Optional) - An optional configuration block that allows for VPC Peering Connection
(http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide) options to be set for the VPC that accepts the peering
connection (a maximum of one).
requester (Optional) - An optional configuration block that allows for VPC Peering Connection
(http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide) options to be set for the VPC that requests the
peering connection (a maximum of one).
Note: When enabled, the DNS resolution feature requires that VPCs participating in the peering must have support for
the DNS hostnames enabled. This can be done using the enable_dns_hostnames
(/docs/providers/aws/r/vpc.html#enable_dns_hostnames) attribute in the aws_vpc (/docs/providers/aws/r/vpc.html)
resource. See Using DNS with Your VPC (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html) user
guide for more information.
allow_remote_vpc_dns_resolution - (Optional) Allow a local VPC to resolve public DNS hostnames to private IP
addresses when queried from instances in the peer VPC.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The ID of the VPC Peering Connection Options.
Import
VPC Peering Connection Options can be imported using the vpc peering id , e.g.
Note: All arguments including tunnel1_preshared_key and tunnel2_preshared_key will be stored in the raw state
as plain-text. Read more about sensitive data in state (/docs/state/sensitive-data.html).
Note: The CIDR blocks in the arguments tunnel1_inside_cidr and tunnel2_inside_cidr must have a prefix of /30
and be a part of a specific range. Read more about this in the AWS documentation
(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_VpnTunnelOptionsSpecification.html).
Example Usage
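A sketch of a connection to a virtual private gateway, assuming the connection references a vpn_gateway_id and a customer_gateway_id; the VPC reference, BGP ASN, and customer gateway IP are placeholder values.

resource "aws_vpn_gateway" "vpn_gateway" {
  vpc_id = "${aws_vpc.vpc.id}"
}

resource "aws_customer_gateway" "customer_gateway" {
  bgp_asn    = 65000
  ip_address = "172.0.0.1"
  type       = "ipsec.1"
}

resource "aws_vpn_connection" "main" {
  vpn_gateway_id      = "${aws_vpn_gateway.vpn_gateway.id}"
  customer_gateway_id = "${aws_customer_gateway.customer_gateway.id}"
  type                = "ipsec.1"
  static_routes_only  = true
}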
Argument Reference
The following arguments are required:
type - (Required) The type of VPN connection. The only type AWS supports at this time is "ipsec.1".
Other arguments:
static_routes_only - (Optional, Default false ) Whether the VPN connection uses static routes exclusively. Static
routes must be used for devices that don't support BGP.
tunnel1_inside_cidr - (Optional) The CIDR block of the inside IP addresses for the first VPN tunnel.
tunnel2_inside_cidr - (Optional) The CIDR block of the inside IP addresses for the second VPN tunnel.
Note: The preshared key must be between 8 and 64 characters in length and cannot start with zero(0). Allowed
characters are alphanumeric characters, periods(.) and underscores(_).
Attribute Reference
In addition to all arguments above, the following attributes are exported:
customer_gateway_configuration - The configuration information for the VPN connection's customer gateway (in
the native XML format).
tunnel1_cgw_inside_address - The RFC 6890 link-local address of the first VPN tunnel (Customer Gateway Side).
tunnel1_vgw_inside_address - The RFC 6890 link-local address of the first VPN tunnel (VPN Gateway Side).
tunnel2_cgw_inside_address - The RFC 6890 link-local address of the second VPN tunnel (Customer Gateway Side).
tunnel2_vgw_inside_address - The RFC 6890 link-local address of the second VPN tunnel (VPN Gateway Side).
vpn_gateway_id - The ID of the virtual private gateway to which the connection is attached.
Import
VPN Connections can be imported using the vpn connection id , e.g.
Example Usage
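A minimal sketch, assuming the route also references the vpn_connection_id of an existing VPN connection; the CIDR is a placeholder value.

resource "aws_vpn_connection_route" "office" {
  destination_cidr_block = "192.168.10.0/24"
  vpn_connection_id      = "${aws_vpn_connection.main.id}"
}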
Argument Reference
The following arguments are supported:
destination_cidr_block - (Required) The CIDR block associated with the local subnet of the customer network.
Attribute Reference
In addition to all arguments above, the following attributes are exported:
destination_cidr_block - The CIDR block associated with the local subnet of the customer network.
Note: The aws_vpn_gateway (/docs/providers/aws/r/vpn_gateway.html) resource can also automatically attach the
Virtual Private Gateway it creates to an existing VPC by setting the vpc_id
(/docs/providers/aws/r/vpn_gateway.html#vpc_id) attribute accordingly.
Example Usage
Argument Reference
The following arguments are supported:
Attributes Reference
In addition to all arguments above, the following attributes are exported:
vpc_id - The ID of the VPC that the Virtual Private Gateway is attached to.
Example Usage
tags = {
Name = "main"
}
}
Argument Reference
The following arguments are supported:
availability_zone - (Optional) The Availability Zone for the virtual private gateway.
amazon_side_asn - (Optional) The Autonomous System Number (ASN) for the Amazon side of the gateway. If you
don't specify an ASN, the virtual private gateway is created with the default ASN.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
VPN Gateways can be imported using the vpn gateway id , e.g.
Note: This resource should not be used with a route table that has the propagating_vgws argument set. If that
argument is set, any route propagation not explicitly listed in its value will be removed.
Example Usage
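A minimal sketch, assuming the resource takes a vpn_gateway_id and a route_table_id; both references are placeholders.

resource "aws_vpn_gateway_route_propagation" "example" {
  vpn_gateway_id = "${aws_vpn_gateway.example.id}"
  route_table_id = "${aws_route_table.example.id}"
}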
Argument Reference
The following arguments are required:
Attributes Reference
This resource does not export any additional attributes.
Resource: aws_waf_byte_match_set
Provides a WAF Byte Match Set Resource
Example Usage
byte_match_tuples {
text_transformation = "NONE"
target_string = "badrefer1"
positional_constraint = "CONTAINS"
field_to_match {
type = "HEADER"
data = "referer"
}
}
}
Argument Reference
The following arguments are supported:
byte_match_tuples - Specifies the bytes (typically a string that corresponds with ASCII characters) that you want to
search for in web requests, the location in requests that you want to search, and other settings.
Nested blocks
byte_match_tuples
Arguments
field_to_match - (Required) The part of a web request that you want to search, such as a specified header or a
query string.
positional_constraint - (Required) Within the portion of a web request that you want to search (for example, in
the query string, if any), specify where you want to search. e.g. CONTAINS , CONTAINS_WORD or EXACTLY . See docs
(http://docs.aws.amazon.com/waf/latest/APIReference/API_ByteMatchTuple.html#WAF-Type-ByteMatchTuple-
PositionalConstraint) for all supported values.
target_string - (Optional) The value that you want to search for. e.g. HEADER , METHOD or BODY . See docs
(http://docs.aws.amazon.com/waf/latest/APIReference/API_ByteMatchTuple.html#WAF-Type-ByteMatchTuple-
TargetString) for all supported values.
text_transformation - (Required) Text transformations used to eliminate unusual formatting that attackers use in
web requests in an effort to bypass AWS WAF. If you specify a transformation, AWS WAF performs the transformation
on target_string before inspecting a request for a match. e.g. CMD_LINE , HTML_ENTITY_DECODE or NONE . See
docs (http://docs.aws.amazon.com/waf/latest/APIReference/API_ByteMatchTuple.html#WAF-Type-ByteMatchTuple-
TextTransformation) for all supported values.
field_to_match
Arguments
data - (Optional) When type is HEADER , enter the name of the header that you want to search, e.g. User-Agent or
Referer . If type is any other value, omit this field.
type - (Required) The part of the web request that you want AWS WAF to search for a specified string. e.g. HEADER ,
METHOD or BODY . See docs (http://docs.aws.amazon.com/waf/latest/APIReference/API_FieldToMatch.html) for all
supported values.
Remarks
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Byte Match Set can be imported using the id, e.g.
Example Usage
geo_match_constraint {
type = "Country"
value = "US"
}
geo_match_constraint {
type = "Country"
value = "CA"
}
}
Argument Reference
The following arguments are supported:
geo_match_constraint - (Optional) The GeoMatchConstraint objects which contain the country that you want AWS
WAF to search for.
Nested Blocks
geo_match_constraint
Arguments
type - (Required) The type of geographical area you want AWS WAF to search for. Currently Country is the only valid
value.
value - (Required) The country that you want AWS WAF to search for. This is the two-letter country code, e.g. US , CA ,
RU , CN , etc. See docs (https://docs.aws.amazon.com/waf/latest/APIReference/API_GeoMatchConstraint.html) for all
supported values.
Remarks
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
ip_set_descriptors {
type = "IPV4"
value = "192.0.7.0/24"
}
ip_set_descriptors {
type = "IPV4"
value = "10.16.16.0/16"
}
}
Argument Reference
The following arguments are supported:
ip_set_descriptors - (Optional) One or more pairs specifying the IP address type (IPV4 or IPV6) and the IP address
range (in CIDR format) from which web requests originate.
Nested Blocks
ip_set_descriptors
Arguments
value - (Required) An IPv4 or IPv6 address specified via CIDR notation. e.g. 192.0.2.44/32 or
1111:0000:0000:0000:0000:0000:0000:0000/64
Attributes Reference
In addition to all arguments above, the following attributes are exported:
id - The ID of the WAF IPSet.
Import
WAF IPSets can be imported using their ID, e.g.
Example Usage
ip_set_descriptors {
type = "IPV4"
value = "192.0.7.0/24"
}
}
rate_key = "IP"
rate_limit = 100
predicates {
data_id = "${aws_waf_ipset.ipset.id}"
negated = false
type = "IPMatch"
}
}
Argument Reference
The following arguments are supported:
metric_name - (Required) The name or description for the Amazon CloudWatch metric of this rule.
rate_limit - (Required) The maximum number of requests, which have an identical value in the field specified by
the RateKey, allowed in a five-minute period. Minimum value is 100.
Nested Blocks
predicates
Arguments
negated - (Required) Set this to false if you want to allow, block, or count requests based on the settings in the
specified ByteMatchSet , IPSet , SqlInjectionMatchSet , XssMatchSet , or SizeConstraintSet . For example, if
an IPSet includes the IP address 192.0.2.44 , AWS WAF will allow or block requests based on that IP address. If set to
true , AWS WAF will allow, block, or count requests based on all IP addresses except 192.0.2.44 .
data_id - (Required) A unique identifier for a predicate in the rule, such as Byte Match Set ID or IPSet ID.
type - (Required) The type of predicate in a rule. Valid values: ByteMatch , GeoMatch , IPMatch , RegexMatch ,
SizeConstraint , SqlInjectionMatch , or XssMatch .
Remarks
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Rate Based Rule can be imported using the id, e.g.
Example Usage
regex_match_tuple {
field_to_match {
data = "User-Agent"
type = "HEADER"
}
regex_pattern_set_id = "${aws_waf_regex_pattern_set.example.id}"
text_transformation = "NONE"
}
}
Argument Reference
The following arguments are supported:
regex_match_tuple - (Required) The regular expression pattern that you want AWS WAF to search for in web
requests, the location in requests that you want AWS WAF to search, and other settings. See below.
Nested Arguments
regex_match_tuple
field_to_match - (Required) The part of a web request that you want to search, such as a specified header or a
query string.
text_transformation - (Required) Text transformations used to eliminate unusual formatting that attackers use in
web requests in an effort to bypass AWS WAF. e.g. CMD_LINE , HTML_ENTITY_DECODE or NONE . See docs
(http://docs.aws.amazon.com/waf/latest/APIReference/API_ByteMatchTuple.html#WAF-Type-ByteMatchTuple-
TextTransformation) for all supported values.
field_to_match
data - (Optional) When type is HEADER , enter the name of the header that you want to search, e.g. User-Agent or
Referer . If type is any other value, omit this field.
type - (Required) The part of the web request that you want AWS WAF to search for a specified string. e.g. HEADER ,
METHOD or BODY . See docs (http://docs.aws.amazon.com/waf/latest/APIReference/API_FieldToMatch.html) for all
supported values.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Argument Reference
The following arguments are supported:
regex_pattern_strings - (Optional) A list of regular expression (regex) patterns that you want AWS WAF to search
for, such as B[a@]dB[o0]t .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
byte_match_tuples {
text_transformation = "NONE"
target_string = "badrefer1"
positional_constraint = "CONTAINS"
field_to_match {
type = "HEADER"
data = "referer"
}
}
}
Argument Reference
The following arguments are supported:
byte_match_tuples - (Optional) Settings for the ByteMatchSet, such as the bytes (typically a string that corresponds
with ASCII characters) that you want AWS WAF to search for in web requests. ByteMatchTuple documented below.
positional_constraint - (Required) Within the portion of a web request that you want to search.
target_string - (Required) The value that you want AWS WAF to search for. The maximum length of the value is 50
bytes.
data - (Optional) When the value of Type is HEADER, enter the name of the header that you want AWS WAF to search,
for example, User-Agent or Referer. If the value of Type is any other value, omit Data.
type - (Required) The part of the web request that you want AWS WAF to search for a specified string.
Remarks
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Regional Byte Match Set can be imported using the id, e.g.
Example Usage
geo_match_constraint {
type = "Country"
value = "US"
}
geo_match_constraint {
type = "Country"
value = "CA"
}
}
Argument Reference
The following arguments are supported:
geo_match_constraint - (Optional) The Geo Match Constraint objects which contain the country that you want AWS
WAF to search for.
Nested Blocks
geo_match_constraint
Arguments
type - (Required) The type of geographical area you want AWS WAF to search for. Currently Country is the only valid
value.
value - (Required) The country that you want AWS WAF to search for. This is the two-letter country code, e.g. US , CA ,
RU , CN , etc. See docs (https://docs.aws.amazon.com/waf/latest/APIReference/API_GeoMatchConstraint.html) for all
supported values.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Regional Geo Match Set can be imported using the id, e.g.
Example Usage
ip_set_descriptor {
type = "IPV4"
value = "192.0.7.0/24"
}
ip_set_descriptor {
type = "IPV4"
value = "10.16.16.0/16"
}
}
Argument Reference
The following arguments are supported:
ip_set_descriptor - (Optional) One or more pairs specifying the IP address type (IPV4 or IPV6) and the IP address
range (in CIDR notation) from which web requests originate.
Nested Blocks
ip_set_descriptor
Arguments
Remarks
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Regional IPSets can be imported using their ID, e.g.
Example Usage
ip_set_descriptors {
type = "IPV4"
value = "192.0.7.0/24"
}
}
rate_key = "IP"
rate_limit = 100
predicate {
data_id = "${aws_wafregional_ipset.ipset.id}"
negated = false
type = "IPMatch"
}
}
Argument Reference
The following arguments are supported:
metric_name - (Required) The name or description for the Amazon CloudWatch metric of this rule.
rate_limit - (Required) The maximum number of requests, which have an identical value in the field specified by
the RateKey, allowed in a five-minute period. Minimum value is 100.
Nested Blocks
predicate
Arguments
negated - (Required) Set this to false if you want to allow, block, or count requests based on the settings in the
specified ByteMatchSet , IPSet , SqlInjectionMatchSet , XssMatchSet , or SizeConstraintSet . For example, if
an IPSet includes the IP address 192.0.2.44 , AWS WAF will allow or block requests based on that IP address. If set to
true , AWS WAF will allow, block, or count requests based on all IP addresses except 192.0.2.44 .
data_id - (Required) A unique identifier for a predicate in the rule, such as Byte Match Set ID or IPSet ID.
type - (Required) The type of predicate in a rule. Valid values: ByteMatch , GeoMatch , IPMatch , RegexMatch ,
SizeConstraint , SqlInjectionMatch , or XssMatch .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Regional Rate Based Rule can be imported using the id, e.g.
Example Usage
regex_match_tuple {
field_to_match {
data = "User-Agent"
type = "HEADER"
}
regex_pattern_set_id = "${aws_wafregional_regex_pattern_set.example.id}"
text_transformation = "NONE"
}
}
Argument Reference
The following arguments are supported:
regex_match_tuple - (Required) The regular expression pattern that you want AWS WAF to search for in web
requests, the location in requests that you want AWS WAF to search, and other settings. See below.
Nested Arguments
regex_match_tuple
field_to_match - (Required) The part of a web request that you want to search, such as a specified header or a
query string.
text_transformation - (Required) Text transformations used to eliminate unusual formatting that attackers use in
web requests in an effort to bypass AWS WAF. e.g. CMD_LINE , HTML_ENTITY_DECODE or NONE . See docs
(http://docs.aws.amazon.com/waf/latest/APIReference/API_ByteMatchTuple.html#WAF-Type-ByteMatchTuple-
TextTransformation) for all supported values.
field_to_match
data - (Optional) When type is HEADER , enter the name of the header that you want to search, e.g. User-Agent or
Referer . If type is any other value, omit this field.
type - (Required) The part of the web request that you want AWS WAF to search for a specified string. e.g. HEADER ,
METHOD or BODY . See docs (http://docs.aws.amazon.com/waf/latest/APIReference/API_FieldToMatch.html) for all
supported values.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Regional Regex Match Set can be imported using the id, e.g.
Example Usage
Argument Reference
The following arguments are supported:
regex_pattern_strings - (Optional) A list of regular expression (regex) patterns that you want AWS WAF to search
for, such as B[a@]dB[o0]t .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Regional Regex Pattern Set can be imported using the id, e.g.
Example Usage
activated_rule {
action {
type = "COUNT"
}
priority = 50
rule_id = "${aws_wafregional_rule.example.id}"
}
}
Argument Reference
The following arguments are supported:
metric_name - (Required) A friendly name for the metrics from the rule group
Nested Blocks
activated_rule
Arguments
action - (Required) Specifies the action that CloudFront or AWS WAF takes when a web request matches the
conditions in the rule.
type - (Required) e.g. BLOCK , ALLOW , or COUNT
priority - (Required) Specifies the order in which the rules are evaluated. Rules with a lower value are evaluated
before rules with a higher value.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Regional Rule Group can be imported using the id, e.g.
Example Usage
ip_set_descriptor {
type = "IPV4"
value = "192.0.7.0/24"
}
}
predicate {
type = "IPMatch"
data_id = "${aws_wafregional_ipset.ipset.id}"
negated = false
}
}
Argument Reference
The following arguments are supported:
metric_name - (Required) The name or description for the Amazon CloudWatch metric of this rule.
Nested Fields
predicate
See the WAF Documentation (https://docs.aws.amazon.com/waf/latest/APIReference/API_Predicate.html) for more
information.
Arguments
type - (Required) The type of predicate in a rule. Valid values: ByteMatch , GeoMatch , IPMatch , RegexMatch ,
SizeConstraint , SqlInjectionMatch , or XssMatch
data_id - (Required) The unique identifier of a predicate, such as the ID of a ByteMatchSet or IPSet .
negated - (Required) Whether to use the settings or the negated settings that you specified in the objects.
Remarks
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Regional Rule can be imported using the id, e.g.
Example Usage
size_constraints {
text_transformation = "NONE"
comparison_operator = "EQ"
size = "4096"
field_to_match {
type = "BODY"
}
}
}
Argument Reference
The following arguments are supported:
size_constraints - (Optional) Specifies the parts of web requests that you want to inspect the size of.
Nested Blocks
size_constraints
Arguments
field_to_match - (Required) Specifies where in a web request to look for the size constraint.
comparison_operator - (Required) The type of comparison you want to perform. e.g. EQ , NE , LT , GT . See docs
(http://docs.aws.amazon.com/waf/latest/APIReference/API_SizeConstraint.html#WAF-Type-SizeConstraint-
ComparisonOperator) for all supported values.
size - (Required) The size in bytes that you want to compare against the size of the specified field_to_match . Valid
values are between 0 - 21474836480 bytes (0 - 20 GB).
text_transformation - (Required) Text transformations used to eliminate unusual formatting that attackers use in
web requests in an e ort to bypass AWS WAF. If you specify a transformation, AWS WAF performs the transformation
on field_to_match before inspecting a request for a match. e.g. CMD_LINE , HTML_ENTITY_DECODE or NONE . See
docs (http://docs.aws.amazon.com/waf/latest/APIReference/API_SizeConstraint.html#WAF-Type-SizeConstraint-
TextTransformation) for all supported values. Note: if you choose BODY as type , you must choose NONE because
CloudFront forwards only the rst 8192 bytes for inspection.
field_to_match
Arguments
data - (Optional) When type is HEADER , enter the name of the header that you want to search, e.g. User-Agent or
Referer . If type is any other value, omit this field.
type - (Required) The part of the web request that you want AWS WAF to search for a specified string. e.g. HEADER ,
METHOD or BODY . See docs (http://docs.aws.amazon.com/waf/latest/APIReference/API_FieldToMatch.html) for all
supported values.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Size Constraint Set can be imported using the id, e.g.
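Assuming the regional resource, and using a placeholder ID:
$ terraform import aws_wafregional_size_constraint_set.size_constraint_set a1b2c3d4-d5f6-7890-b1c2-d3e4f5a6b7c8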
Example Usage
resource "aws_wafregional_sql_injection_match_set" "sql_injection_match_set" {
  name = "tf-sql_injection_match_set"

  sql_injection_match_tuple {
    text_transformation = "URL_DECODE"

    field_to_match {
      type = "QUERY_STRING"
    }
  }
}
Argument Reference
The following arguments are supported:
sql_injection_match_tuple - (Optional) The parts of web requests that you want AWS WAF to inspect for malicious
SQL code and, if you want AWS WAF to inspect a header, the name of the header.
Nested Fields
sql_injection_match_tuple
field_to_match - (Required) Specifies where in a web request to look for snippets of malicious SQL code.
text_transformation - (Required) Text transformations used to eliminate unusual formatting that attackers use in
web requests in an effort to bypass AWS WAF. If you specify a transformation, AWS WAF performs the transformation
on field_to_match before inspecting a request for a match. e.g. CMD_LINE , HTML_ENTITY_DECODE or NONE . See
docs (https://docs.aws.amazon.com/waf/latest/APIReference/API_regional_SqlInjectionMatchTuple.html#WAF-Type-
regional_SqlInjectionMatchTuple-TextTransformation) for all supported values.
field_to_match
data - (Optional) When type is HEADER , enter the name of the header that you want to search, e.g. User-Agent or
Referer . If type is any other value, omit this field.
type - (Required) The part of the web request that you want AWS WAF to search for a specified string. e.g. HEADER ,
METHOD or BODY . See docs (https://docs.aws.amazon.com/waf/latest/APIReference/API_regional_FieldToMatch.html)
for all supported values.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Regional Sql Injection Match Set can be imported using the id, e.g.
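Using a placeholder resource address and ID:
$ terraform import aws_wafregional_sql_injection_match_set.sql_injection_match_set a1b2c3d4-d5f6-7890-b1c2-d3e4f5a6b7c8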
Note: An Application Load Balancer can only be associated with one WAF Regional WebACL.
Example Usage
resource "aws_wafregional_ipset" "ipset" {
  name = "tfIPSet"

  ip_set_descriptor {
    type  = "IPV4"
    value = "192.0.7.0/24"
  }
}

resource "aws_wafregional_rule" "foo" {
  name        = "tfWAFRule"
  metric_name = "tfWAFRule"

  predicate {
    data_id = "${aws_wafregional_ipset.ipset.id}"
    negated = false
    type    = "IPMatch"
  }
}

resource "aws_wafregional_web_acl" "foo" {
  name        = "foo"
  metric_name = "foo"

  default_action {
    type = "ALLOW"
  }

  rule {
    action {
      type = "BLOCK"
    }

    priority = 1
    rule_id  = "${aws_wafregional_rule.foo.id}"
  }
}
Argument Reference
The following arguments are supported:
resource_arn - (Required) ARN of the resource to associate with. For example, an Application Load Balancer or API
Gateway Stage.
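As a sketch, assuming a web_acl_id argument that references the web ACL above and an Application Load Balancer ( aws_alb.foo ) defined elsewhere, an association could look like this:
resource "aws_wafregional_web_acl_association" "foo" {
  resource_arn = "${aws_alb.foo.arn}"
  web_acl_id   = "${aws_wafregional_web_acl.foo.id}"
}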
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Regular Rule
resource "aws_wafregional_ipset" "ipset" {
  name = "tfIPSet"

  ip_set_descriptor {
    type  = "IPV4"
    value = "192.0.7.0/24"
  }
}

resource "aws_wafregional_rule" "wafrule" {
  name        = "tfWAFRule"
  metric_name = "tfWAFRule"

  predicate {
    data_id = "${aws_wafregional_ipset.ipset.id}"
    negated = false
    type    = "IPMatch"
  }
}

resource "aws_wafregional_web_acl" "wafacl" {
  name        = "tfWebACL"
  metric_name = "tfWebACL"

  default_action {
    type = "ALLOW"
  }

  rule {
    action {
      type = "BLOCK"
    }

    priority = 1
    rule_id  = "${aws_wafregional_rule.wafrule.id}"
    type     = "REGULAR"
  }
}
Group Rule
resource "aws_wafregional_web_acl" "example" {
name = "example"
metric_name = "example"
default_action {
type = "ALLOW"
}
rule {
priority = 1
rule_id = "${aws_wafregional_rule_group.example.id}"
type = "GROUP"
override_action {
type = "NONE"
}
}
}
Logging
NOTE: The Kinesis Firehose Delivery Stream name must begin with aws-waf-logs- . See the AWS WAF Developer
Guide (https://docs.aws.amazon.com/waf/latest/developerguide/logging.html) for more information about enabling
WAF logging.
resource "aws_wafregional_web_acl" "example" {
  name        = "example"
  metric_name = "example"

  default_action {
    type = "ALLOW"
  }

  logging_configuration {
    log_destination = "${aws_kinesis_firehose_delivery_stream.example.arn}"

    redacted_fields {
      field_to_match {
        type = "URI"
      }

      field_to_match {
        data = "referer"
        type = "HEADER"
      }
    }
  }
}
Argument Reference
The following arguments are supported:
default_action - (Required) The action that you want AWS WAF Regional to take when a request doesn't match the
criteria in any of the rules that are associated with the web ACL.
metric_name - (Required) The name or description for the Amazon CloudWatch metric of this web ACL.
logging_configuration - (Optional) Configuration block to enable WAF logging. Detailed below.
rule - (Optional) Set of configuration blocks containing rules for the web ACL. Detailed below.
type - (Required) Specifies how you want AWS WAF Regional to respond to requests that match the settings in a rule.
e.g. ALLOW , BLOCK or COUNT
redacted_fields - (Optional) Configuration block containing parts of the request that you want redacted from the
logs. Detailed below.
field_to_match - (Required) Set of configuration blocks for fields to redact. Detailed below.
Additional information about this configuration can be found in the AWS WAF Regional API Reference
(https://docs.aws.amazon.com/waf/latest/APIReference/API_regional_FieldToMatch.html).
data - (Optional) When the value of type is HEADER , enter the name of the header that you want the WAF to search,
for example, User-Agent or Referer . If the value of type is any other value, omit data .
type - (Required) The part of the web request that you want AWS WAF to search for a specified string. e.g. HEADER or
METHOD
Additional information about this configuration can be found in the AWS WAF Regional API Reference
(https://docs.aws.amazon.com/waf/latest/APIReference/API_regional_ActivatedRule.html).
priority - (Required) Specifies the order in which the rules in a WebACL are evaluated. Rules with a lower value are
evaluated before rules with a higher value.
override_action - (Optional) Configuration block overriding the action that a group requests CloudFront or AWS
WAF to take when a web request matches the conditions in the rule. Only used if type is GROUP . Detailed below.
type - (Required) Specifies how you want AWS WAF Regional to respond to requests that match the settings in a rule.
e.g. ALLOW , BLOCK or COUNT
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Regional Web ACL can be imported using the id, e.g.
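Using a placeholder resource address and ID:
$ terraform import aws_wafregional_web_acl.wafacl a1b2c3d4-d5f6-7890-b1c2-d3e4f5a6b7c8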
Example Usage
resource "aws_wafregional_xss_match_set" "xss_match_set" {
  name = "xss_match_set"

  xss_match_tuple {
    text_transformation = "NONE"

    field_to_match {
      type = "URI"
    }
  }

  xss_match_tuple {
    text_transformation = "NONE"

    field_to_match {
      type = "QUERY_STRING"
    }
  }
}
}
Argument Reference
The following arguments are supported:
xss_match_tuple - (Optional) The parts of web requests that you want to inspect for cross-site scripting attacks.
Nested Fields
xss_match_tuple
field_to_match - (Required) Specifies where in a web request to look for cross-site scripting attacks.
text_transformation - (Required) Which text transformation, if any, to perform on the web request before
inspecting the request for cross-site scripting attacks.
field_to_match
data - (Optional) When the value of type is HEADER , enter the name of the header that you want the WAF to search,
for example, User-Agent or Referer . If the value of type is any other value, omit data .
type - (Required) The part of the web request that you want AWS WAF to search for a specified string. e.g. HEADER or
METHOD
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
resource "aws_waf_rule_group" "example" {
  name        = "example"
  metric_name = "example"

  activated_rule {
    action {
      type = "COUNT"
    }

    priority = 50
    rule_id  = "${aws_waf_rule.example.id}"
  }
}
Argument Reference
The following arguments are supported:
metric_name - (Required) A friendly name for the metrics from the rule group
Nested Blocks
activated_rule
Arguments
action - (Required) Specifies the action that CloudFront or AWS WAF takes when a web request matches the
conditions in the rule.
type - (Required) e.g. BLOCK , ALLOW , or COUNT
priority - (Required) Specifies the order in which the rules are evaluated. Rules with a lower value are evaluated
before rules with a higher value.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Rule Group can be imported using the id, e.g.
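Using a placeholder resource address and ID:
$ terraform import aws_waf_rule_group.example a1b2c3d4-d5f6-7890-b1c2-d3e4f5a6b7c8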
Example Usage
resource "aws_waf_ipset" "ipset" {
  name = "tfIPSet"

  ip_set_descriptors {
    type  = "IPV4"
    value = "192.0.7.0/24"
  }
}

resource "aws_waf_rule" "wafrule" {
  name        = "tfWAFRule"
  metric_name = "tfWAFRule"

  predicates {
    data_id = "${aws_waf_ipset.ipset.id}"
    negated = false
    type    = "IPMatch"
  }
}
Argument Reference
The following arguments are supported:
metric_name - (Required) The name or description for the Amazon CloudWatch metric of this rule. The name can
contain only alphanumeric characters (A-Z, a-z, 0-9); the name can't contain whitespace.
Nested Blocks
predicates
negated - (Required) Set this to false if you want to allow, block, or count requests based on the settings in the
specified waf_byte_match_set (/docs/providers/aws/r/waf_byte_match_set.html), waf_ipset
(/docs/providers/aws/r/waf_ipset.html), aws_waf_size_constraint_set
(/docs/providers/aws/r/waf_size_constraint_set.html), aws_waf_sql_injection_match_set
(/docs/providers/aws/r/waf_sql_injection_match_set.html) or aws_waf_xss_match_set
(/docs/providers/aws/r/waf_xss_match_set.html). For example, if an IPSet includes the IP address 192.0.2.44 , AWS
WAF will allow or block requests based on that IP address. If set to true , AWS WAF will allow, block, or count requests
based on all IP addresses except 192.0.2.44 .
data_id - (Required) A unique identifier for a predicate in the rule, such as Byte Match Set ID or IPSet ID.
type - (Required) The type of predicate in a rule. Valid values: ByteMatch , GeoMatch , IPMatch , RegexMatch ,
SizeConstraint , SqlInjectionMatch , or XssMatch .
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF rules can be imported using the id, e.g.
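Using a placeholder resource address and ID:
$ terraform import aws_waf_rule.wafrule a1b2c3d4-d5f6-7890-b1c2-d3e4f5a6b7c8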
Example Usage
resource "aws_waf_size_constraint_set" "size_constraint_set" {
  name = "tfsize_constraints"

  size_constraints {
    text_transformation = "NONE"
    comparison_operator = "EQ"
    size                = "4096"

    field_to_match {
      type = "BODY"
    }
  }
}
Argument Reference
The following arguments are supported:
size_constraints - (Optional) Specifies the parts of web requests that you want to inspect the size of.
Nested Blocks
size_constraints
Arguments
field_to_match - (Required) Specifies where in a web request to look for the size constraint.
comparison_operator - (Required) The type of comparison you want to perform. e.g. EQ , NE , LT , GT . See docs
(http://docs.aws.amazon.com/waf/latest/APIReference/API_SizeConstraint.html#WAF-Type-SizeConstraint-
ComparisonOperator) for all supported values.
size - (Required) The size in bytes that you want to compare against the size of the specified field_to_match . Valid
values are between 0 - 21474836480 bytes (0 - 20 GB).
text_transformation - (Required) Text transformations used to eliminate unusual formatting that attackers use in
web requests in an effort to bypass AWS WAF. If you specify a transformation, AWS WAF performs the transformation
on field_to_match before inspecting a request for a match. e.g. CMD_LINE , HTML_ENTITY_DECODE or NONE . See
docs (http://docs.aws.amazon.com/waf/latest/APIReference/API_SizeConstraint.html#WAF-Type-SizeConstraint-
TextTransformation) for all supported values. Note: if you choose BODY as type , you must choose NONE because
CloudFront forwards only the first 8192 bytes for inspection.
field_to_match
Arguments
data - (Optional) When type is HEADER , enter the name of the header that you want to search, e.g. User-Agent or
Referer . If type is any other value, omit this field.
type - (Required) The part of the web request that you want AWS WAF to search for a specified string. e.g. HEADER ,
METHOD or BODY . See docs (http://docs.aws.amazon.com/waf/latest/APIReference/API_FieldToMatch.html) for all
supported values.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
resource "aws_waf_sql_injection_match_set" "sql_injection_match_set" {
  name = "tf-sql_injection_match_set"

  sql_injection_match_tuples {
    text_transformation = "URL_DECODE"

    field_to_match {
      type = "QUERY_STRING"
    }
  }
}
Argument Reference
The following arguments are supported:
name - (Required) The name or description of the SQL Injection Match Set.
sql_injection_match_tuples - (Optional) The parts of web requests that you want AWS WAF to inspect for
malicious SQL code and, if you want AWS WAF to inspect a header, the name of the header.
Nested Blocks
sql_injection_match_tuples
field_to_match - (Required) Specifies where in a web request to look for snippets of malicious SQL code.
text_transformation - (Required) Text transformations used to eliminate unusual formatting that attackers use in
web requests in an effort to bypass AWS WAF. If you specify a transformation, AWS WAF performs the transformation
on field_to_match before inspecting a request for a match. e.g. CMD_LINE , HTML_ENTITY_DECODE or NONE . See
docs (http://docs.aws.amazon.com/waf/latest/APIReference/API_SqlInjectionMatchTuple.html#WAF-Type-
SqlInjectionMatchTuple-TextTransformation) for all supported values.
field_to_match
Arguments
data - (Optional) When type is HEADER , enter the name of the header that you want to search, e.g. User-Agent or
Referer . If type is any other value, omit this field.
type - (Required) The part of the web request that you want AWS WAF to search for a specified string. e.g. HEADER ,
METHOD or BODY . See docs (http://docs.aws.amazon.com/waf/latest/APIReference/API_FieldToMatch.html) for all
supported values.
Remarks
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
resource "aws_waf_ipset" "ipset" {
  name = "tfIPSet"

  ip_set_descriptors {
    type  = "IPV4"
    value = "192.0.7.0/24"
  }
}

resource "aws_waf_rule" "wafrule" {
  name        = "tfWAFRule"
  metric_name = "tfWAFRule"

  predicates {
    data_id = "${aws_waf_ipset.ipset.id}"
    negated = false
    type    = "IPMatch"
  }
}

resource "aws_waf_web_acl" "waf_acl" {
  name        = "tfWebACL"
  metric_name = "tfWebACL"

  default_action {
    type = "ALLOW"
  }

  rules {
    action {
      type = "BLOCK"
    }

    priority = 1
    rule_id  = "${aws_waf_rule.wafrule.id}"
    type     = "REGULAR"
  }
}
Logging
NOTE: The Kinesis Firehose Delivery Stream name must begin with aws-waf-logs- and be located in us-east-1
region. See the AWS WAF Developer Guide (https://docs.aws.amazon.com/waf/latest/developerguide/logging.html) for
more information about enabling WAF logging.
resource "aws_waf_web_acl" "example" {
  name        = "example"
  metric_name = "example"

  default_action {
    type = "ALLOW"
  }

  logging_configuration {
    log_destination = "${aws_kinesis_firehose_delivery_stream.example.arn}"

    redacted_fields {
      field_to_match {
        type = "URI"
      }

      field_to_match {
        data = "referer"
        type = "HEADER"
      }
    }
  }
}
Argument Reference
The following arguments are supported:
default_action - (Required) Configuration block with action that you want AWS WAF to take when a request doesn't
match the criteria in any of the rules that are associated with the web ACL. Detailed below.
metric_name - (Required) The name or description for the Amazon CloudWatch metric of this web ACL.
rules - (Optional) Configuration blocks containing rules to associate with the web ACL and the settings for each rule.
Detailed below.
logging_configuration - (Optional) Configuration block to enable WAF logging. Detailed below.
log_destination - (Required) Amazon Resource Name (ARN) of Kinesis Firehose Delivery Stream
redacted_fields - (Optional) Configuration block containing parts of the request that you want redacted from the
logs. Detailed below.
redacted_fields Configuration Block
field_to_match - (Required) Set of configuration blocks for fields to redact. Detailed below.
Additional information about this configuration can be found in the AWS WAF Regional API Reference
(https://docs.aws.amazon.com/waf/latest/APIReference/API_regional_FieldToMatch.html).
data - (Optional) When the value of type is HEADER , enter the name of the header that you want the WAF to search,
for example, User-Agent or Referer . If the value of type is any other value, omit data .
type - (Required) The part of the web request that you want AWS WAF to search for a specified string. e.g. HEADER or
METHOD
action - (Optional) The action that CloudFront or AWS WAF takes when a web request matches the conditions in the
rule. Not used if type is GROUP .
type - (Required) valid values are: BLOCK , ALLOW , or COUNT
override_action - (Optional) Override the action that a group requests CloudFront or AWS WAF takes when a web
request matches the conditions in the rule. Only used if type is GROUP .
type - (Required) valid values are: NONE or COUNT
priority - (Required) Specifies the order in which the rules in a WebACL are evaluated. Rules with a lower value are
evaluated before rules with a higher value.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WAF Web ACL can be imported using the id , e.g.
$ terraform import aws_waf_web_acl.main 0c8e583e-18f3-4c13-9e2a-67c4805d2f94
Resource: aws_waf_xss_match_set
Provides a WAF XSS Match Set Resource
Example Usage
resource "aws_waf_xss_match_set" "xss_match_set" {
  name = "xss_match_set"

  xss_match_tuples {
    text_transformation = "NONE"

    field_to_match {
      type = "URI"
    }
  }

  xss_match_tuples {
    text_transformation = "NONE"

    field_to_match {
      type = "QUERY_STRING"
    }
  }
}
Argument Reference
The following arguments are supported:
xss_match_tuples - (Optional) The parts of web requests that you want to inspect for cross-site scripting attacks.
Nested Blocks
xss_match_tuples
field_to_match - (Required) Specifies where in a web request to look for cross-site scripting attacks.
text_transformation - (Required) Text transformations used to eliminate unusual formatting that attackers use in
web requests in an effort to bypass AWS WAF. If you specify a transformation, AWS WAF performs the transformation
on target_string before inspecting a request for a match. e.g. CMD_LINE , HTML_ENTITY_DECODE or NONE . See
docs (http://docs.aws.amazon.com/waf/latest/APIReference/API_XssMatchTuple.html#WAF-Type-XssMatchTuple-
TextTransformation) for all supported values.
field_to_match
Arguments
data - (Optional) When type is HEADER , enter the name of the header that you want to search, e.g. User-Agent or
Referer . If type is any other value, omit this field.
type - (Required) The part of the web request that you want AWS WAF to search for a specified string. e.g. HEADER ,
METHOD or BODY . See docs (http://docs.aws.amazon.com/waf/latest/APIReference/API_FieldToMatch.html) for all
supported values.
Remarks
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Example Usage
Basic usage:
resource "aws_worklink_fleet" "example" {
  name = "terraform-example"
}
Network configuration:
resource "aws_worklink_fleet" "example" {
  name = "terraform-example"

  network {
    vpc_id             = "${aws_vpc.test.id}"
    subnet_ids         = ["${aws_subnet.test.*.id}"]
    security_group_ids = ["${aws_security_group.test.id}"]
  }
}
Identity provider configuration:
resource "aws_worklink_fleet" "test" {
  name = "tf-worklink-fleet"

  identity_provider {
    type          = "SAML"
    saml_metadata = "${file("saml-metadata.xml")}"
  }
}
Argument Reference
The following arguments are supported:
audit_stream_arn - (Optional) The ARN of the Amazon Kinesis data stream that receives the audit events.
device_ca_certificate - (Optional) The certificate chain, including intermediate certificates and the root certificate
authority certificate used to issue device certificates.
identity_provider - (Optional) Provide this to manage the identity provider configuration for the fleet. Fields
documented below.
optimize_for_end_user_location - (Optional) The option to optimize for better performance by routing traffic
through the closest AWS Region to users, which may be outside of your home Region. Defaults to true .
subnet_ids - (Required) A list of subnet IDs used for X-ENI connections from Amazon WorkLink rendering containers.
security_group_ids - (Required) A list of security group IDs associated with access to the provided subnets.
saml_metadata - (Required) The SAML metadata document provided by the customer’s identity provider.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
company_code - The identifier used by users to sign in to the Amazon WorkLink app.
Import
WorkLink can be imported using the ARN, e.g.
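For example, with a placeholder fleet ARN:
$ terraform import aws_worklink_fleet.example arn:aws:worklink::123456789012:fleet/example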
Example Usage
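A minimal sketch, assuming a fleet_arn argument that references an existing WorkLink fleet; the certificate file name is a placeholder:
resource "aws_worklink_fleet" "example" {
  name = "terraform-example"
}

resource "aws_worklink_website_certificate_authority_association" "example" {
  fleet_arn   = "${aws_worklink_fleet.example.arn}"
  certificate = "${file("certificate.pem")}"
}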
Argument Reference
The following arguments are supported:
certificate - (Required, ForceNew) The root certificate of the Certificate Authority.
Attributes Reference
In addition to all arguments above, the following attributes are exported:
Import
WorkLink Website Certificate Authority can be imported using FLEET-ARN,WEBSITE-CA-ID , e.g.
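With placeholder values for the fleet ARN and website certificate authority ID:
$ terraform import aws_worklink_website_certificate_authority_association.example arn:aws:worklink::123456789012:fleet/example,abcdefg12345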
Example Usage
attributes = {
Hello = "Tris"
}
}
Argument Reference
rule_name - (Required) The name of the sampling rule.
resource_arn - (Required) Matches the ARN of the AWS resource on which the service runs.
fixed_rate - (Required) The percentage of matching requests to instrument, after the reservoir is exhausted.
reservoir_size - (Required) A fixed number of matching requests to instrument per second, prior to applying the
fixed rate. The reservoir is not used directly by services, but applies to all services using the rule collectively.
service_name - (Required) Matches the name that the service uses to identify itself in segments.
service_type - (Required) Matches the origin that the service uses to identify its type in segments.
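A sketch of a complete rule, assuming the resource's remaining matching arguments ( priority , version , host , http_method , url_path ) accept wildcard values; all values shown are illustrative:
resource "aws_xray_sampling_rule" "example" {
  rule_name      = "example"
  priority       = 9999
  version        = 1
  reservoir_size = 1
  fixed_rate     = 0.05
  url_path       = "*"
  host           = "*"
  http_method    = "*"
  service_type   = "*"
  service_name   = "*"
  resource_arn   = "*"

  attributes = {
    Hello = "Tris"
  }
}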
Import
XRay Sampling Rules can be imported using the name, e.g.
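Using a placeholder rule name:
$ terraform import aws_xray_sampling_rule.example example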