Amazon EKS User Guide
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.
Table of Contents
What is Amazon EKS? ......................................................................................................................... 1
Amazon EKS control plane architecture ......................................................................................... 1
How does Amazon EKS work? ...................................................................................................... 2
Getting started with Amazon EKS ........................................................................................................ 3
Getting started with eksctl ....................................................................................................... 3
Prerequisites ...................................................................................................................... 3
Install and configure kubectl ............................................................................................... 6
Create your Amazon EKS cluster and compute ....................................................................... 9
Next steps ....................................................................................................................... 12
Getting started with the console ................................................................................................ 13
Prerequisites .................................................................................................................... 13
Step 1: Create your Amazon EKS cluster .............................................................................. 21
Step 2: Create a kubeconfig file ...................................................................................... 23
Step 3: Create compute .................................................................................................... 24
Clusters ........................................................................................................................................... 28
Creating a cluster ..................................................................................................................... 28
Updating Kubernetes version ..................................................................................................... 34
Update an existing cluster ................................................................................................. 35
Kubernetes 1.16 upgrade prerequisites ............................................................................... 41
Deleting a cluster ..................................................................................................................... 43
Cluster endpoint access ............................................................................................................. 46
Modifying cluster endpoint access ...................................................................................... 46
Accessing a private only API server ..................................................................................... 50
Cluster Autoscaler .................................................................................................................... 51
Create an Amazon EKS cluster ........................................................................................... 51
Cluster Autoscaler node group considerations ...................................................................... 52
Deploy the Cluster Autoscaler ............................................................................................ 53
View your Cluster Autoscaler logs ...................................................................................... 54
Control plane logging ............................................................................................................... 55
Enabling and disabling control plane logs ........................................................................... 56
Viewing cluster control plane logs ...................................................................................... 57
Kubernetes versions .................................................................................................................. 58
Available Amazon EKS Kubernetes versions ......................................................................... 58
Kubernetes 1.17 ............................................................................................................... 58
Kubernetes 1.16 ............................................................................................................... 59
Kubernetes 1.15 ............................................................................................................... 60
Kubernetes 1.14 ............................................................................................................... 60
Amazon EKS version deprecation ....................................................................................... 61
Platform versions ..................................................................................................................... 61
Kubernetes version 1.17 .................................................................................................... 62
Kubernetes version 1.16 .................................................................................................... 62
Kubernetes version 1.15 .................................................................................................... 63
Kubernetes version 1.14 .................................................................................................... 64
Windows support ..................................................................................................................... 66
Considerations ................................................................................................................. 66
Enabling Windows support ................................................................................................ 67
Deploy a Windows sample application ................................................................................ 71
ARM support ........................................................................................................................... 72
Considerations ................................................................................................................. 72
Prerequisites .................................................................................................................... 72
Create a cluster ................................................................................................................ 73
Enable ARM support ......................................................................................................... 73
Launch nodes .................................................................................................................. 73
Join nodes to a cluster ...................................................................................................... 75
Amazon EKS control plane architecture
Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high
availability. Amazon EKS automatically detects and replaces unhealthy control plane instances, and it
provides automated version upgrades and patching for them.
Amazon EKS is integrated with many AWS services to provide scalability and security for your
applications, including Amazon ECR for container images, Elastic Load Balancing for load
distribution, IAM for authentication, and Amazon VPC for isolation.
Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use all of the
existing plugins and tooling from the Kubernetes community. Applications running on Amazon EKS are
fully compatible with applications running on any standard Kubernetes environment, whether running
in on-premises data centers or public clouds. This means that you can easily migrate any standard
Kubernetes application to Amazon EKS without any code modification required.
This control plane consists of at least two API server nodes and three etcd nodes that run across three
Availability Zones within a Region. Amazon EKS automatically detects and replaces unhealthy control
plane instances, restarting them across the Availability Zones within the Region as needed. Amazon EKS
leverages the architecture of AWS Regions in order to maintain high availability. Because of this, Amazon
EKS is able to offer an SLA for API server endpoint availability.
Amazon EKS uses Amazon VPC network policies to restrict traffic between control plane components to
within a single cluster. Control plane components for a cluster cannot view or receive communication
from other clusters or other AWS accounts, except as authorized with Kubernetes RBAC policies.
This secure and highly-available configuration makes Amazon EKS reliable and recommended for
production workloads.
How does Amazon EKS work?
1. Create an Amazon EKS cluster in the AWS Management Console or with the AWS CLI or one of the
AWS SDKs.
2. Launch managed or self-managed nodes that register with the Amazon EKS cluster. We provide you
with an AWS CloudFormation template that automatically configures your nodes. You can also deploy
applications to AWS Fargate if you don't need to manage nodes.
3. When your cluster is ready, you can configure your favorite Kubernetes tools (such as kubectl) to
communicate with your cluster.
4. Deploy and manage applications on your Amazon EKS cluster the same way that you would with any
other Kubernetes environment.
To create your first cluster and its associated resources, see Getting started with Amazon EKS (p. 3).
Getting started with eksctl
• Getting started with eksctl (p. 3) – This getting started guide helps you to install all of the
required resources to get started with Amazon EKS using eksctl, a simple command line utility for
creating and managing Kubernetes clusters on Amazon EKS. At the end of the tutorial, you will have a
running Amazon EKS cluster that you can deploy applications to. This is the fastest and simplest way
to get started with Amazon EKS.
• Getting started with the AWS Management Console (p. 13) – This getting started guide helps you
to create all of the required resources to get started with Amazon EKS using the AWS Management
Console. At the end of the tutorial, you will have a running Amazon EKS cluster that you can
deploy applications to. In this guide, you manually create each resource in the Amazon EKS or AWS
CloudFormation consoles. The procedures give you complete visibility into how each resource is
created and how they interact with each other.
The procedures in this guide automatically create several resources for you that you have to create
manually when you create your cluster using the AWS Management Console. If you'd rather manually
create most of the resources to better understand how they interact with each other, then use the AWS
Management Console to create your cluster and compute. For more information, see Getting started
with the AWS Management Console (p. 13).
Prerequisites
This section helps you to install and configure the tools and resources that you need to create and
manage an Amazon EKS cluster.
macOS
If you currently have the AWS CLI installed, determine which version you have installed.
aws --version
If you don't have version 1.18.97 or later, or version 2.0.30 or later installed, then install the AWS
CLI version 2. For other installation options, or to upgrade your currently installed version 2, see
Upgrading the AWS CLI version 2 on macOS.
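For example, you can download and run the standard macOS package installer from the command line. The URL below is the published AWS CLI version 2 package location.
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /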
If you're unable to use the AWS CLI version 2, then ensure that you have the latest version of the
AWS CLI version 1 installed using the following command.
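For example, if you have pip3 available, the following command installs or upgrades the AWS CLI version 1 for your user.
pip3 install awscli --upgrade --user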
Linux
If you currently have the AWS CLI installed, determine which version you have installed.
aws --version
If you don't have version 1.18.97 or later, or version 2.0.30 or later installed, then install the AWS
CLI version 2. For other installation options, or to upgrade your currently installed version 2, see
Upgrading the AWS CLI version 2 on Linux.
If you're unable to use the AWS CLI version 2, then ensure that you have the latest version of the
AWS CLI version 1 installed using the following command.
Windows
If you currently have the AWS CLI installed, determine which version you have installed.
aws --version
If you don't have version 1.18.97 or later, or version 2.0.30 or later, installed, then install
the AWS CLI version 2 using the following steps. For other installation options, or to upgrade your
currently installed version 2, see Upgrading the AWS CLI version 2 on Windows.
1. Download the AWS CLI MSI installer for Windows (64-bit) at https://awscli.amazonaws.com/AWSCLIV2.msi.
2. Run the downloaded MSI installer and follow the onscreen instructions. By default, the AWS CLI
installs to C:\Program Files\Amazon\AWSCLIV2.
If you're unable to use the AWS CLI version 2, then ensure that you have the latest version of the
AWS CLI version 1 installed using the following command.
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: region-code
Default output format [None]: json
When you type this command, the AWS CLI prompts you for four pieces of information: access key,
secret access key, AWS Region, and output format. This information is stored in a profile (a
collection of settings) named default. This profile is used when you run commands, unless you specify
another one.
For more information, see Configuring the AWS CLI in the AWS Command Line Interface User Guide.
Install eksctl
To install 0.25.0-rc.0 version or later of the eksctl command line utility, choose the tab with the name of
the operating system that you'd like to install eksctl on. For more information, see https://eksctl.io/.
macOS
The easiest way to get started with Amazon EKS and macOS is by installing eksctl with Homebrew.
The eksctl Homebrew recipe installs eksctl and any other dependencies that are required for
Amazon EKS, such as kubectl. The recipe also installs the aws-iam-authenticator (p. 210),
which is required if you don't have the AWS CLI version 1.16.156 or higher installed.
1. If you do not already have Homebrew installed on macOS, install it with the following
command.
4. Test that your installation was successful with the following command.
eksctl version
Note
The GitTag version should be at least 0.25.0-rc.0. If not, check your terminal
output for any installation or upgrade errors, or manually download an archive of the release from
https://github.com/weaveworks/eksctl/releases/download/0.25.0-rc.0/eksctl_Darwin_amd64.tar.gz,
extract eksctl, and then execute it.
Linux
1. Download and extract the latest release of eksctl with the following command.
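For example, the following commands download the release referenced in the note that follows, extract the eksctl binary to /tmp, and then move it to /usr/local/bin (any folder in your PATH works).
curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/0.25.0-rc.0/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin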
3. Test that your installation was successful with the following command.
eksctl version
Note
The GitTag version should be at least 0.25.0-rc.0. If not, check your terminal
output for any installation or upgrade errors, or replace the address in step 1 with
https://github.com/weaveworks/eksctl/releases/download/0.25.0-rc.0/eksctl_Linux_amd64.tar.gz
and complete steps 1-3 again.
Windows
1. If you do not already have Chocolatey installed on your Windows system, see Installing
Chocolatey.
2. Install or upgrade eksctl.
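For example, assuming the eksctl package is available from your configured Chocolatey source:
chocolatey install -y eksctl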
3. Test that your installation was successful with the following command.
eksctl version
Note
The GitTag version should be at least 0.25.0-rc.0. If not, check your terminal
output for any installation or upgrade errors, or manually download an archive of the release from
https://github.com/weaveworks/eksctl/releases/download/0.25.0-rc.0/eksctl_Windows_amd64.zip,
extract eksctl, and then execute it.
Install and configure kubectl
Note
If you used the preceding Homebrew instructions to install eksctl on macOS, then kubectl
has already been installed on your system. You can skip to Create your Amazon EKS cluster and
compute (p. 9).
To install version 1.17 of the kubectl command line utility, choose the tab with the name of the
operating system that you'd like to install kubectl on. If you need to install a different version to use
with a different cluster version, then see ??? (p. 221).
macOS
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match.
3. Apply execute permissions to the binary.
chmod +x ./kubectl
• If you don't already have a version of kubectl installed, then move the binary to a folder that's
already in your PATH.
• If you already have a version of kubectl installed, then we recommend creating a $HOME/
bin/kubectl folder, moving the binary to that folder, and ensuring that $HOME/bin comes
first in your $PATH.
(Optional) Add the $HOME/bin path to your shell initialization file so that it is configured
when you open a shell.
5. After you install kubectl, you can verify its version with the following command:
Linux
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match.
3. Apply execute permissions to the binary.
chmod +x ./kubectl
• If you don't already have a version of kubectl installed, then move the binary to a folder in
your PATH.
• If you already have a version of kubectl installed, then we recommend creating a $HOME/
bin/kubectl folder, moving the binary to that folder, and ensuring that $HOME/bin comes
first in your $PATH.
(Optional) Add the $HOME/bin path to your shell initialization file so that it is configured
when you open a shell.
Note
This step assumes you are using the Bash shell; if you are using another shell, change
the command to use your specific shell initialization file.
5. After you install kubectl, you can verify its version with the following command:
Windows
Get-FileHash kubectl.exe
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match, although the PowerShell output will be uppercase.
4. Copy the binary to a folder in your PATH. If you have an existing directory in your PATH that
you use for command line utilities, copy the binary to that directory. Otherwise, complete the
following steps.
a. Create a new directory for your command line binaries, such as C:\bin.
b. Copy the kubectl.exe binary to your new directory.
c. Edit your user or system PATH environment variable to add the new directory to your PATH.
d. Close your PowerShell terminal and open a new one to pick up the new PATH variable.
5. After you install kubectl, you can verify its version with the following command:
Create your Amazon EKS cluster and compute
1. Choose a tab below that best matches your compute requirements. Though the following procedure
will create a cluster with one compute option, you can add any of the other options after your
cluster is created. To learn more about each option, see Compute (p. 85). If you want to create a
cluster that only runs Linux applications on AWS Fargate, then choose AWS Fargate – Linux. If you
intend to run Linux applications on Amazon EC2 instances, then choose Managed nodes – Linux. If
you want to run Windows applications on Amazon EC2 instances, then choose Self-managed nodes
– Windows.
Create your Amazon EKS cluster with Fargate support with the following command. You can
replace prod with your own value and you can replace us-west-2 with any Amazon EKS
Fargate supported Region (p. 111).
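For example, a typical form of the command is the following; the cluster name and Region are placeholders.
eksctl create cluster \
--name prod \
--region us-west-2 \
--fargate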
We recommend that you deploy version 1.17. If you must deploy an earlier version, then you
can only replace it with version 1.16 or 1.15. If you change the version from 1.17, then read the
important Amazon EKS release notes (p. 58) for the version and install the corresponding version of
kubectl (p. 221).
Your new Amazon EKS cluster is created without a node group. eksctl creates a pod execution
role and a Fargate profile (p. 117) for the default and kube-system namespaces, and it
patches the coredns deployment so that it can run on Fargate. For more information, see AWS
Fargate (p. 111).
Managed nodes – Linux
Create your Amazon EKS cluster and Linux nodes with the following command. Replace the
example values with your own values. You can replace us-west-2 with any Amazon EKS
supported Region.
Important
Amazon EKS nodes are standard Amazon EC2 instances, and you are billed for them
based on normal Amazon EC2 instance prices. For more information, see Amazon EC2
pricing.
We recommend that you deploy version 1.17. If you must deploy an earlier version, then you
can only replace it with version 1.16 or 1.15. If you change the version from 1.17, then read the important
Amazon EKS release notes (p. 58) for the version and install the corresponding version of
kubectl (p. 221).
Though --ssh-public-key is optional, we highly recommend that you specify it when you
create your node group with a cluster. This option enables SSH access to the nodes in your
managed node group. Enabling SSH access allows you to connect to your instances and gather
diagnostic information if there are issues. You cannot enable remote access after the node
group is created. If you don't have a public key, you can create a key pair for Amazon EC2 and
then retrieve the public key for the key pair to specify for --ssh-public-key. Ensure that you
create the key in the same Region that you create the cluster in.
eksctl create cluster \
--name my-cluster \
--version 1.17 \
--region us-west-2 \
--nodegroup-name linux-nodes \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--ssh-access \
--ssh-public-key my-public-key.pub \
--managed
Output:
You'll see several lines of output as the cluster and nodes are created. The last line of output is
similar to the following example line.
Note
If nodes fail to join the cluster, see ??? (p. 305) in the Troubleshooting guide.
Self-managed nodes – Windows
Familiarize yourself with the Windows support considerations (p. 66), which include
supported values for instanceType in the example text below. Replace the example values
with your own values.
We recommend that you deploy version 1.17. If you must deploy an earlier version, then you
can only replace it with version 1.16 or 1.15. If you change the version from 1.17, then read the important
Amazon EKS release notes (p. 58) for the version and install the corresponding version of
kubectl (p. 221).
Important
Amazon EKS nodes are standard Amazon EC2 instances, and you are billed for them
based on normal Amazon EC2 instance prices. For more information, see Amazon EC2
pricing.
Save the text below to a file named cluster-spec.yaml. The configuration file is used to
create a cluster with a self-managed Windows node group and a managed Linux node group.
Even if you only want to run Windows applications in your cluster, all Amazon EKS clusters must
contain at least one Linux node, though we recommend that you create at least two Linux nodes
for availability purposes.
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: windows-prod
  region: us-west-2
  version: '1.17'
managedNodeGroups:
  - name: linux-ng
    instanceType: t2.large
    minSize: 2
nodeGroups:
  - name: windows-ng
    instanceType: m5.large
    minSize: 2
    volumeSize: 100
    amiFamily: WindowsServer2019FullContainer
Create your Amazon EKS cluster and Windows and Linux nodes with the following command.
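For example, the following is a typical form of the command. The --install-vpc-controllers flag is included here on the assumption that you want eksctl to enable Windows support for the cluster; if you enable Windows support separately, you can omit it.
eksctl create cluster -f cluster-spec.yaml --install-vpc-controllers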
Note
For more information about the available options for eksctl create cluster, see the
project README on GitHub or view the help page with the following command.
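For example, the following command prints the full list of flags available for cluster creation.
eksctl create cluster --help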
Output:
You'll see several lines of output as the cluster and nodes are created. The last line of output is
similar to the following example line.
Note
If nodes fail to join the cluster, see ??? (p. 305) in the Troubleshooting guide.
2. Cluster provisioning usually takes between 10 and 15 minutes. When your cluster is ready, test that
your kubectl configuration is correct.
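For example, listing the default service confirms that kubectl can authenticate to and reach your cluster's API server.
kubectl get svc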
Note
If you receive any authorization or resource type errors, see Unauthorized or access denied
(kubectl) (p. 306) in the troubleshooting section.
Output:
3. (Linux accelerated AMI nodes only) If you chose an accelerated AMI instance type and the Amazon
EKS-optimized accelerated AMI, then you must apply the NVIDIA device plugin for Kubernetes as a
DaemonSet on your cluster with the following command.
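For example, you can apply the plugin manifest published in the NVIDIA/k8s-device-plugin repository. The release tag in the URL below is an example only; check the project's releases for a version that supports your cluster.
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.6.0/nvidia-device-plugin.yml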
Next steps
Now that you have a working Amazon EKS cluster with nodes, you are ready to start installing
Kubernetes add-ons and deploying applications to your cluster. The following documentation topics help
you to extend the functionality of your cluster.
• the section called “Cluster Autoscaler” (p. 51) – Configure the Kubernetes Cluster Autoscaler to
automatically adjust the number of nodes in your node groups.
• the section called “Sample deployment” (p. 193) – Deploy a sample application to test your cluster
and Linux nodes.
• Deploy a Windows sample application (p. 71) – Deploy a sample application to test your cluster and
Windows nodes.
• Cluster management (p. 221) – Learn how to use important tools for managing your cluster.
Getting started with the console
The procedures in this guide give you complete visibility into how each resource is created and how
the resources interact with each other. If you'd rather have most of the resources created for you
automatically, use the eksctl CLI to create your cluster and nodes. For more information, see Getting
started with eksctl (p. 3).
Prerequisites
This section helps you to install and configure the tools and resources that you need to create and
manage an Amazon EKS cluster.
macOS
If you currently have the AWS CLI installed, determine which version you have installed.
aws --version
If you don't have version 1.18.97 or later, or version 2.0.30 or later installed, then install the AWS
CLI version 2. For other installation options, or to upgrade your currently installed version 2, see
Upgrading the AWS CLI version 2 on macOS.
If you're unable to use the AWS CLI version 2, then ensure that you have the latest version of the
AWS CLI version 1 installed using the following command.
Linux
If you currently have the AWS CLI installed, determine which version you have installed.
aws --version
If you don't have version 1.18.97 or later, or version 2.0.30 or later installed, then install the AWS
CLI version 2. For other installation options, or to upgrade your currently installed version 2, see
Upgrading the AWS CLI version 2 on Linux.
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
If you're unable to use the AWS CLI version 2, then ensure that you have the latest version of the
AWS CLI version 1 installed using the following command.
Windows
If you currently have the AWS CLI installed, determine which version you have installed.
aws --version
If you don't have version 1.18.97 or later, or version 2.0.30 or later, installed, then install
the AWS CLI version 2 using the following steps. For other installation options, or to upgrade your
currently installed version 2, see Upgrading the AWS CLI version 2 on Windows.
1. Download the AWS CLI MSI installer for Windows (64-bit) at https://awscli.amazonaws.com/AWSCLIV2.msi.
2. Run the downloaded MSI installer and follow the onscreen instructions. By default, the AWS CLI
installs to C:\Program Files\Amazon\AWSCLIV2.
If you're unable to use the AWS CLI version 2, then ensure that you have the latest version of the
AWS CLI version 1 installed using the following command.
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: region-code
Default output format [None]: json
When you type this command, the AWS CLI prompts you for four pieces of information: access key,
secret access key, AWS Region, and output format. This information is stored in a profile (a
collection of settings) named default. This profile is used when you run commands, unless you specify
another one.
For more information, see Configuring the AWS CLI in the AWS Command Line Interface User Guide.
To install version 1.17 of the kubectl command line utility, choose the tab with the name of the
operating system that you'd like to install kubectl on. If you need to install a different version to use
with a different cluster version, then see ??? (p. 221).
macOS
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match.
3. Apply execute permissions to the binary.
chmod +x ./kubectl
• If you don't already have a version of kubectl installed, then move the binary to a folder that's
already in your PATH.
• If you already have a version of kubectl installed, then we recommend creating a $HOME/
bin/kubectl folder, moving the binary to that folder, and ensuring that $HOME/bin comes
first in your $PATH.
(Optional) Add the $HOME/bin path to your shell initialization file so that it is configured
when you open a shell.
5. After you install kubectl, you can verify its version with the following command:
Linux
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match.
3. Apply execute permissions to the binary.
chmod +x ./kubectl
• If you don't already have a version of kubectl installed, then move the binary to a folder in
your PATH.
• If you already have a version of kubectl installed, then we recommend creating a $HOME/
bin/kubectl folder, moving the binary to that folder, and ensuring that $HOME/bin comes
first in your $PATH.
(Optional) Add the $HOME/bin path to your shell initialization file so that it is configured
when you open a shell.
Note
This step assumes you are using the Bash shell; if you are using another shell, change
the command to use your specific shell initialization file.
5. After you install kubectl, you can verify its version with the following command:
Windows
Get-FileHash kubectl.exe
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match, although the PowerShell output will be uppercase.
4. Copy the binary to a folder in your PATH. If you have an existing directory in your PATH that
you use for command line utilities, copy the binary to that directory. Otherwise, complete the
following steps.
a. Create a new directory for your command line binaries, such as C:\bin.
b. Copy the kubectl.exe binary to your new directory.
c. Edit your user or system PATH environment variable to add the new directory to your PATH.
d. Close your PowerShell terminal and open a new one to pick up the new PATH variable.
5. After you install kubectl, you can verify its version with the following command:
AWS CloudFormation
1. Save the following AWS CloudFormation template to a text file on your local system.
---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Amazon EKS Cluster Role'

Resources:

  eksClusterRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - eks.amazonaws.com
            Action:
              - sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

Outputs:

  RoleArn:
    Description: The role that Amazon EKS will use to create AWS resources for Kubernetes clusters
    Value: !GetAtt eksClusterRole.Arn
    Export:
      Name: !Sub "${AWS::StackName}-RoleArn"
Note
Prior to April 16, 2020, ManagedPolicyArns had an entry for
arn:aws:iam::aws:policy/AmazonEKSServicePolicy. With the
AWSServiceRoleForAmazonEKS service-linked role, that policy is no longer required.
2. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation.
3. Choose Create stack.
4. For Specify template, select Upload a template file, and then choose Choose file.
5. Choose the file you created earlier, and then choose Next.
6. For Stack name, enter a name for your role, such as eksClusterRole, and then choose Next.
7. On the Configure stack options page, choose Next.
8. On the Review page, review your information, acknowledge that the stack might create IAM
resources, and then choose Create stack.
When you create an Amazon EKS cluster, you specify the VPC subnets for your cluster to use. Amazon
EKS requires subnets in at least two Availability Zones. We recommend a VPC with public and private
subnets so that Kubernetes can create public load balancers in the public subnets that load balance
traffic to pods running on nodes that are in private subnets.
For more information about both VPC types, see ??? (p. 159).
Choose the tab below that represents your desired VPC configuration.
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-vpc-private-subnets.yaml
6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it eks-vpc.
• VpcBlock: Choose a CIDR range for your VPC. Each worker node, pod, and load balancer that
you deploy is assigned an IP address from this block. The default value provides enough
IP addresses for most implementations, but if it doesn't, then you can change it. For more
information, see VPC and subnet sizing in the Amazon VPC User Guide. You can also add
additional CIDR blocks to the VPC once it's created.
• PublicSubnet01Block: Specify a CIDR block for public subnet 1. The default value provides
enough IP addresses for most implementations, but if it doesn't, then you can change it.
• PublicSubnet02Block: Specify a CIDR block for public subnet 2. The default value provides
enough IP addresses for most implementations, but if it doesn't, then you can change it.
• PrivateSubnet01Block: Specify a CIDR block for private subnet 1. The default value provides
enough IP addresses for most implementations, but if it doesn't, then you can change it.
• PrivateSubnet02Block: Specify a CIDR block for private subnet 2. The default value provides
enough IP addresses for most implementations, but if it doesn't, then you can change it.
7. (Optional) On the Options page, tag your stack resources. Choose Next.
8. On the Review page, choose Create.
9. When your stack is created, select it in the console and choose Outputs.
10. Record the SecurityGroups value for the security group that was created. When you add nodes
to your cluster, you must specify the ID of the security group. The security group is applied to
the cross-account elastic network interfaces that are created in your subnets that allow the
Amazon EKS control plane to communicate with your nodes.
11. Record the VpcId for the VPC that was created. You need this when you launch your node group
template.
12. Record the SubnetIds for the subnets that were created and whether you created them as
public or private subnets. When you add nodes to your cluster, you must specify the IDs of the
subnets that you want to launch the nodes into.
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-vpc-sample.yaml
6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it eks-vpc.
• VpcBlock: Choose a CIDR block for your VPC. Each worker node, pod, and load balancer that
you deploy is assigned an IP address from this block. The default value provides enough
IP addresses for most implementations, but if it doesn't, then you can change it. For more
information, see VPC and subnet sizing in the Amazon VPC User Guide. You can also add
additional CIDR blocks to the VPC once it's created.
• Subnet01Block: Specify a CIDR block for subnet 1. The default value provides enough IP
addresses for most implementations, but if it doesn't, then you can change it.
• Subnet02Block: Specify a CIDR block for subnet 2. The default value provides enough IP
addresses for most implementations, but if it doesn't, then you can change it.
• Subnet03Block: Specify a CIDR block for subnet 3. The default value provides enough IP
addresses for most implementations, but if it doesn't, then you can change it.
7. (Optional) On the Options page, tag your stack resources. Choose Next.
8. On the Review page, choose Create.
9. When your stack is created, select it in the console and choose Outputs.
10. Record the SecurityGroups value for the security group that was created. When you add nodes
to your cluster, you must specify the ID of the security group. The security group is applied to
the cross-account elastic network interfaces that are created in your subnets that allow the
Amazon EKS control plane to communicate with your nodes.
11. Record the VpcId for the VPC that was created. You need this when you launch your node group
template.
12. Record the SubnetIds for the subnets that were created. When you add nodes to your cluster,
you must specify the IDs of the subnets that you want to launch the nodes into.
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-fully-private-vpc.yaml
6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it eks-vpc.
• VpcBlock: Choose a CIDR block for your VPC. Each worker node, pod, and load balancer that
you deploy is assigned an IP address from this block. The default value provides enough
IP addresses for most implementations, but if it doesn't, then you can change it. For more
information, see VPC and subnet sizing in the Amazon VPC User Guide. You can also add
additional CIDR blocks to the VPC once it's created.
• PrivateSubnet01Block: Specify a CIDR block for subnet 1. The default value provides enough
IP addresses for most implementations, but if it doesn't, then you can change it.
• PrivateSubnet02Block: Specify a CIDR block for subnet 2. The default value provides enough
IP addresses for most implementations, but if it doesn't, then you can change it.
• PrivateSubnet03Block: Specify a CIDR block for subnet 3. The default value provides enough
IP addresses for most implementations, but if it doesn't, then you can change it.
7. (Optional) On the Options page, tag your stack resources. Choose Next.
8. On the Review page, choose Create.
9. When your stack is created, select it in the console and choose Outputs.
10. Record the SecurityGroups value for the security group that was created. When you add nodes
to your cluster, you must specify the ID of the security group. The security group is applied to
the cross-account elastic network interfaces that are created in your subnets that allow the
Amazon EKS control plane to communicate with your nodes.
11. Record the VpcId for the VPC that was created. You need this when you launch your node group
template.
12. Record the SubnetIds for the subnets that were created. When you add nodes to your cluster,
you must specify the IDs of the subnets that you want to launch the nodes into.
Step 1: Create your Amazon EKS cluster
Kubernetes secrets encryption with an AWS KMS CMK requires Kubernetes version 1.13 or later. If
no keys are listed, you must create one first. For more information, see Creating keys.
• Tags – (Optional) Add any tags to your cluster. For more information, see Tagging your Amazon
EKS resources (p. 235).
4. Select Next.
5. On the Specify networking page, select values for the following fields:
• VPC – The VPC that you created previously in the section called “Create your Amazon EKS cluster
VPC” (p. 18). You can find the name of your VPC in the drop-down list.
• Subnets – By default, the available subnets in the VPC specified in the previous field are
preselected. Unselect any subnet that you don't want to host cluster resources, such as worker nodes
or load balancers.
• Security groups – The SecurityGroups value from the AWS CloudFormation output that
you generated with Create your Amazon EKS cluster VPC (p. 18). This security group has
ControlPlaneSecurityGroup in the drop-down name.
Important
The node AWS CloudFormation template modifies the security group that you specify
here, so Amazon EKS strongly recommends that you use a dedicated security group for
each cluster control plane (one per cluster). If this security group is shared with other
resources, you might block or disrupt connections to those resources.
• For Cluster endpoint access – Choose one of the following options:
• Public – Enables only public access to your cluster's Kubernetes API server endpoint. Kubernetes
API requests that originate from outside of your cluster's VPC use the public endpoint. By
default, access is allowed from any source IP address. You can optionally restrict access to one
or more CIDR ranges such as 192.168.0.0/16, for example, by selecting Advanced settings and
then selecting Add source.
• Private – Enables only private access to your cluster's Kubernetes API server endpoint.
Kubernetes API requests that originate from within your cluster's VPC use the private VPC
endpoint.
Important
If you created a VPC without outbound internet access, then you must enable private
access.
• Public and private – Enables public and private access.
For more information about the previous options, see ??? (p. 46).
6. Select Next.
7. On the Configure logging page, you can optionally choose which log types that you want to
enable. By default, each log type is Disabled. For more information, see Amazon EKS control plane
logging (p. 55).
8. Select Next.
9. On the Review and create page, review the information that you entered or selected on the
previous pages. Select Edit if you need to make changes to any of your selections. Once you're
satisfied with your settings, select Create. The Status field shows CREATING until the cluster
provisioning process completes.
Note
You might receive an error that one of the Availability Zones in your request doesn't have
sufficient capacity to create an Amazon EKS cluster. If this happens, the error output
contains the Availability Zones that can support a new cluster. Retry creating your cluster
with at least two subnets that are located in the supported Availability Zones for your
account. For more information, see Insufficient capacity (p. 305).
When your cluster provisioning is complete (usually between 10 and 15 minutes), note the API
server endpoint and Certificate authority values. These are used in your kubectl configuration.
Step 2: Create a kubeconfig file
1. Use the AWS CLI update-kubeconfig command to create or update a kubeconfig for your cluster.
• By default, the resulting configuration file is created at the default kubeconfig path
(.kube/config) in your home directory or merged with an existing kubeconfig at that location. You
can specify another path with the --kubeconfig option.
• You can specify an IAM role ARN with the --role-arn option to use for authentication when you
issue kubectl commands. Otherwise, the IAM entity in your default AWS CLI or SDK credential
chain is used. You can view your default AWS CLI or SDK identity by running the
aws sts get-caller-identity command.
• For more information, see the help page with the aws eks update-kubeconfig help command or
see update-kubeconfig in the AWS CLI Command Reference.
Note
To run the following command, you must have permission to use the
eks:DescribeCluster API action with the cluster that you specify. For more information,
see Amazon EKS identity-based policy examples (p. 249).
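For example, the following command writes a kubeconfig entry for your cluster. Replace region-code and my-cluster with your Region and cluster name.
aws eks --region region-code update-kubeconfig --name my-cluster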
Note
If you receive any authorization or resource type errors, see Unauthorized or access denied
(kubectl) (p. 306) in the troubleshooting section.
Output:
Step 3: Create compute
Before creating an AWS Fargate profile, you must create a Fargate pod execution role to use with
your profile.
To create an AWS Fargate pod execution role with the AWS Management Console
You can now create the Fargate profile, specifying the IAM role that you created.
To create a Fargate profile for a cluster with the AWS Management Console
a. For Namespace, enter a namespace to match for pods, such as kube-system or default.
b. (Optional) Add Kubernetes labels to the selector that pods in the specified namespace
must have to match the selector. For example, you could add the label infrastructure:
fargate to the selector so that only pods in the specified namespace that also have the
infrastructure: fargate Kubernetes label match the selector.
6. On the Review and create page, review the information for your Fargate profile and choose
Create.
The Amazon EKS node kubelet daemon makes calls to AWS APIs on your behalf. Nodes receive
permissions for these API calls through an IAM instance profile and associated policies. You must
create an IAM role before you can launch the nodes. For more information, see Amazon EKS
node IAM role (p. 257). You can create the role using the AWS Management Console or AWS
CloudFormation. Select the tab with the name of the tool that you'd like to use to create the role.
Note
We recommend that you create a new node IAM role for each cluster. Otherwise, a node
from one cluster could authenticate with another cluster that it does not belong to.
To launch your managed node group using the AWS Management Console
1. Wait for your cluster status to show as ACTIVE. You cannot create a managed node group for a
cluster that is not yet ACTIVE.
2. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.
3. Choose the name of the cluster that you want to create your managed node group in.
4. On the cluster page, select the Compute tab, and then choose Add Node Group.
5. On the Configure node group page, fill out the parameters accordingly, and then choose Next.
• AMI type – Choose Amazon Linux 2 (AL2_x86_64) for non-GPU instances, or Amazon Linux 2
GPU Enabled (AL2_x86_64_GPU) for GPU instances.
• Instance type – Choose the instance type to use in your managed node group. Each Amazon
EC2 instance type supports a maximum number of elastic network interfaces (ENIs) and each
ENI supports a maximum number of IP addresses. Since each worker node and pod is assigned
its own IP address, it's important to choose an instance type that will support the maximum
number of pods that you want to run on each worker node. For a list of the number of ENIs
and IP addresses supported by instance types, see IP addresses per network interface per
instance type. For example, the t3.medium instance type supports a maximum of 18 IP
addresses for the worker node and pods. Some instance types might not be available in all
Regions.
• Disk size – Enter the disk size (in GiB) to use for your node's root volume.
7. On the Setup scaling policies page, fill out the parameters accordingly, and then choose Next.
Note
Amazon EKS does not automatically scale your node group in or out. However, you can
configure the Kubernetes Cluster Autoscaler (p. 51) to do this for you.
• Minimum size – Specify the minimum number of nodes that the managed node group can
scale in to.
• Maximum size – Specify the maximum number of nodes that the managed node group can
scale out to.
• Desired size – Specify the current number of nodes that the managed node group should
maintain at launch.
8. On the Review and create page, review your managed node group configuration and choose
Create.
Note
If nodes fail to join the cluster, see ??? (p. 305) in the Troubleshooting guide.
9. Watch the status of your nodes and wait for them to reach the Ready status.
10. (GPU nodes only) If you chose a GPU instance type and the Amazon EKS-optimized accelerated
AMI, then you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your
cluster with the following command.
11. (Optional) Deploy a sample Linux application (p. 193) – Deploy a sample application to test
your cluster and Linux nodes.
Add Windows support to your cluster and launch Windows nodes. For more information, see Windows
support (p. 66). All Amazon EKS clusters must contain at least one Linux node, even if you only want
to run Windows workloads in your cluster.
Next steps
Now that you have a working Amazon EKS cluster with nodes, you are ready to start installing
Kubernetes add-ons and deploying applications to your cluster. The following documentation topics help
you to extend the functionality of your cluster.
• the section called “Cluster Autoscaler” (p. 51) – Configure the Kubernetes Cluster Autoscaler to
automatically adjust the number of nodes in your node groups.
• the section called “Sample deployment” (p. 193) – Deploy a sample application to test your cluster
and Linux nodes.
• Deploy a Windows sample application (p. 71) – Deploy a sample application to test your cluster and
Windows nodes.
• Cluster management (p. 221) – Learn how to use important tools for managing your cluster.
Creating a cluster
The Amazon EKS control plane consists of control plane nodes that run the Kubernetes software, such
as etcd and the Kubernetes API server. The control plane runs in an account managed by AWS, and the
Kubernetes API is exposed via the Amazon EKS endpoint associated with your cluster. Each Amazon EKS
cluster control plane is single-tenant and unique, and runs on its own set of Amazon EC2 instances.
All of the data stored by the etcd nodes and associated Amazon EBS volumes is encrypted using AWS
KMS. The cluster control plane is provisioned across multiple Availability Zones and fronted by an Elastic
Load Balancing Network Load Balancer. Amazon EKS also provisions elastic network interfaces in your
VPC subnets to provide connectivity from the control plane instances to the nodes (for example, to
support kubectl exec, logs, and proxy data flows).
Amazon EKS nodes run in your AWS account and connect to your cluster's control plane via the API
server endpoint and a certificate file that is created for your cluster.
Prerequisites
You must have the AWS CLI version 1.16.156 or later or the aws-iam-authenticator installed. For
more information, see ??? (p. 13) or ??? (p. 210).
Choose the tab below that corresponds to your desired cluster creation method.
eksctl
eksctl version
1. Create a cluster with the latest Amazon EKS Kubernetes version in your default Region. Replace
my-cluster with your own value.
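For example, the following is a typical form of the command. The --without-nodegroup flag is shown here as an assumption because the later steps add nodes separately; omit it if you want eksctl to create a default node group, and adjust --version as needed.
eksctl create cluster \
--name my-cluster \
--version 1.17 \
--without-nodegroup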
Note
To see most options that can be specified when creating a cluster with eksctl, use
the eksctl create cluster --help command. To see all options, you can use a
config file. For more information, see Using config files and the config file schema in
the eksctl documentation. You can find config file examples on GitHub.
Warning
If you create a cluster using a config file with the secretsEncryption option, which
requires an existing AWS Key Management Service key, and the key that you use is
ever deleted, then there is no path to recovery for the cluster. If you enable envelope
encryption, the Kubernetes secrets are encrypted using the customer master key (CMK)
that you select. The CMK must be symmetric, created in the same region as the cluster,
and if the CMK was created in a different account, the user must have access to the
CMK. For more information, see Allowing users in other accounts to use a CMK in the
AWS Key Management Service Developer Guide. Kubernetes secrets encryption with an
AWS KMS CMK requires Kubernetes version 1.13 or later.
By default, the create-key command creates a symmetric key with a key policy that
gives the account's root user admin access on AWS KMS actions and resources. For more
information, see Creating keys. If you want to scope down the permissions, make sure
that the kms:DescribeKey and kms:CreateGrant actions are permitted on the
key policy for the principal that will be calling the create-cluster API. Amazon EKS
does not support the key policy condition kms:GrantIsForAWSResource. Creating a
cluster will not work if this action is in the key policy statement.
Cluster provisioning takes several minutes. During cluster creation, you'll see several lines of
output. The last line of output is similar to the following example line.
2. When your cluster is ready, test that your kubectl configuration is correct.
Note
If you receive any authorization or resource type errors, see Unauthorized or access
denied (kubectl) (p. 306) in the troubleshooting section.
Output:
3. (Optional) If you want to run pods on AWS Fargate in your cluster, then you must ??? (p. 114)
and ??? (p. 115).
4. Follow the procedures in Launching self-managed Amazon Linux 2 nodes (p. 96) to add Linux
nodes to your cluster to support your workloads.
5. (Optional) After you add Linux nodes to your cluster, follow the procedures in Windows
support (p. 66) to add Windows support to your cluster and to add Windows nodes. All
Amazon EKS clusters must contain at least one Linux node, even if you only want to run
Windows workloads in your cluster.
• You have created a VPC and a dedicated security group that meet the requirements for an
Amazon EKS cluster. For more information, see Cluster VPC considerations (p. 163) and Amazon
EKS security group considerations (p. 166). The Getting started with the AWS Management
Console (p. 13) guide creates a VPC that meets the requirements, or you can also follow Creating a
VPC for your Amazon EKS cluster (p. 159) to create one.
• You have created an Amazon EKS cluster IAM role to apply to your cluster. The Getting started
with Amazon EKS (p. 3) guide creates a service role for you, or you can also follow Amazon EKS
IAM roles (p. 248) to create one manually.
Kubernetes secrets encryption with an AWS KMS CMK requires Kubernetes version 1.13 or
later. If no keys are listed, you must create one first. For more information, see Creating keys.
Note
By default, the create-key command creates a symmetric key with a key policy that
gives the account's root user admin access on AWS KMS actions and resources. If you
want to scope down the permissions, make sure that the kms:DescribeKey and
kms:CreateGrant actions are permitted on the key policy for the principal that will
be calling the create-cluster API.
• VPC – Select an existing VPC to use for your cluster. If none are listed, then you need to create
one first. For more information, see ??? (p. 159).
• Subnets – By default, the available subnets in the VPC specified in the previous field are
preselected. Unselect any subnet that you don't want to host cluster resources, such as worker
nodes or load balancers. The subnets must meet the requirements for an Amazon EKS cluster.
For more information, see Cluster VPC considerations (p. 163).
Important
If you select subnets that were created before 03/26/2020 using one of the Amazon
EKS AWS CloudFormation VPC templates, be aware of a default setting change that
was introduced on 03/26/2020. For more information, see ??? (p. 159).
• Security groups – The SecurityGroups value from the AWS CloudFormation output that
you generated with Create your Amazon EKS cluster VPC (p. 18). This security group has
ControlPlaneSecurityGroup in the drop-down name.
Important
The node AWS CloudFormation template modifies the security group that you
specify here, so Amazon EKS strongly recommends that you use a dedicated
security group for each cluster control plane (one per cluster). If this security
group is shared with other resources, you might block or disrupt connections to those
resources.
• For Cluster endpoint access – Choose one of the following options:
• Public – Enables only public access to your cluster's Kubernetes API server endpoint.
Kubernetes API requests that originate from outside of your cluster's VPC use the public
endpoint. By default, access is allowed from any source IP address. You can optionally
restrict access to one or more CIDR ranges such as 192.168.0.0/16, for example, by
selecting Advanced settings and then selecting Add source.
• Private – Enables only private access to your cluster's Kubernetes API server endpoint.
Kubernetes API requests that originate from within your cluster's VPC use the private VPC
endpoint.
Important
If you created a VPC without outbound internet access, then you must enable
private access.
• Public and private – Enables public and private access.
For more information about the previous options, see Cluster endpoint access (p. 46).
6. Select Next.
7. On the Configure logging page, you can optionally choose which log types you want to
enable. By default, each log type is Disabled. For more information, see Amazon EKS control
plane logging (p. 55).
8. Select Next.
9. On the Review and create page, review the information that you entered or selected on the
previous pages. Select Edit if you need to make changes to any of your selections. Once you're
satisfied with your settings, select Create. The Status field shows CREATING until the cluster
provisioning process completes.
Note
You might receive an error that one of the Availability Zones in your request doesn't
have sufficient capacity to create an Amazon EKS cluster. If this happens, the error
output contains the Availability Zones that can support a new cluster. Retry creating
your cluster with at least two subnets that are located in the supported Availability
Zones for your account. For more information, see Insufficient capacity (p. 305).
AWS CLI
• You have created a VPC and a dedicated security group that meets the requirements for an
Amazon EKS cluster. For more information, see Cluster VPC considerations (p. 163) and Amazon
EKS security group considerations (p. 166). The Getting started with the AWS Management
Console (p. 13) guide creates a VPC that meets the requirements, or you can also follow Creating a
VPC for your Amazon EKS cluster (p. 159) to create one.
• You have created an Amazon EKS cluster IAM role to apply to your cluster. The Getting started
with Amazon EKS (p. 3) guide creates a service role for you, or you can also follow Amazon EKS
IAM roles (p. 248) to create one manually.
1. Create your cluster with the following command. Substitute your cluster name, the Amazon
Resource Name (ARN) of your Amazon EKS cluster IAM role that you created in Create your
Amazon EKS cluster IAM role (p. 17), and the subnet and security group IDs for the VPC that you
created in Create your Amazon EKS cluster VPC (p. 18).
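A sketch of the command, using the example values that appear in the output below (substitute your own cluster name, role ARN, subnets, and security group):
aws eks create-cluster \
   --region region-code \
   --name devel \
   --kubernetes-version 1.17 \
   --role-arn arn:aws:iam::111122223333:role/eks-service-role-AWSServiceRoleForAmazonEKS-AFNL4H8HB71F \
   --resources-vpc-config subnetIds=subnet-a9189fe2,subnet-50432629,securityGroupIds=sg-f5c54184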
Note
If your IAM user doesn't have administrative privileges, you must explicitly add
permissions for that user to call the Amazon EKS API operations. For more information,
see Amazon EKS identity-based policy examples (p. 249).
Output:
{
"cluster": {
"name": "devel",
"arn": "arn:aws:eks:region-code:111122223333:cluster/devel",
"createdAt": 1527785885.159,
"version": "1.17",
"roleArn": "arn:aws:iam::111122223333:role/eks-service-role-
AWSServiceRoleForAmazonEKS-AFNL4H8HB71F",
"resourcesVpcConfig": {
"subnetIds": [
"subnet-a9189fe2",
"subnet-50432629"
],
"securityGroupIds": [
"sg-f5c54184"
],
"vpcId": "vpc-a54041dc",
"endpointPublicAccess": true,
"endpointPrivateAccess": false
},
"status": "CREATING",
"certificateAuthority": {}
}
}
Note
You might receive an error that one of the Availability Zones in your request doesn't
have sufficient capacity to create an Amazon EKS cluster. If this happens, the error
output contains the Availability Zones that can support a new cluster. Retry creating
your cluster with at least two subnets that are located in the supported Availability
Zones for your account. For more information, see Insufficient capacity (p. 305).
To encrypt the Kubernetes secrets with a customer master key (CMK) from AWS Key
Management Service (AWS KMS), first create a CMK using the create-key operation.
Note
By default, the create-key command creates a symmetric key with a key policy that
gives the account's root user admin access on AWS KMS actions and resources. If you
want to scope down the permissions, make sure that the kms:DescribeKey and
kms:CreateGrant actions are permitted on the key policy for the principal that will
be calling the create-cluster API.
Amazon EKS does not support the key policy condition
kms:GrantIsForAWSResource. Creating a cluster will not work if this action is in the
key policy statement.
To enable envelope encryption of Kubernetes secrets, add the following option to your create-cluster command, replacing $MY_KEY_ARN with the ARN or alias of your CMK.
--encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"$MY_KEY_ARN"}}]'
The keyArn member can contain either the alias or ARN of your CMK. The CMK must be
symmetric, created in the same Region as the cluster, and if the CMK was created in a different
account, the user must have access to the CMK. For more information, see Allowing users in
other accounts to use a CMK in the AWS Key Management Service Developer Guide. Kubernetes
secrets encryption with an AWS KMS CMK requires Kubernetes version 1.13 or later.
Warning
Deletion of the CMK will permanently put the cluster in a degraded state. If any CMKs
used for cluster creation are scheduled for deletion, verify that this is the intended
action before deletion. Once the key is deleted, there is no path to recovery for the
cluster.
2. Cluster provisioning usually takes between 10 and 15 minutes. You can query the status of your
cluster with the following command. When your cluster status is ACTIVE, you can proceed.
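For example (assuming the cluster name devel used earlier):
aws eks describe-cluster --region region-code --name devel --query "cluster.status"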
4. Now that you have created your cluster, follow the procedures in Create a kubeconfig for
Amazon EKS (p. 213) to enable communication with your new cluster.
5. (Optional) If you want to run pods on AWS Fargate in your cluster, see Getting started with AWS
Fargate using Amazon EKS (p. 113).
6. After you enable communication, follow the procedures in Launching self-managed Amazon
Linux 2 nodes (p. 96) to add nodes to your cluster to support your workloads.
7. (Optional) After you add Linux nodes to your cluster, follow the procedures in Windows
support (p. 66) to add Windows support to your cluster and to add Windows nodes. All
Amazon EKS clusters must contain at least one Linux node, even if you only want to run
Windows workloads in your cluster.
Updating Kubernetes version
New Kubernetes versions introduce significant changes, so we recommend that you test the behavior of
your applications against a new Kubernetes version before performing the update on your production
clusters. You can achieve this by building a continuous integration workflow to test your application
behavior end-to-end before moving to a new Kubernetes version.
The update process consists of Amazon EKS launching new API server nodes with the updated
Kubernetes version to replace the existing ones. Amazon EKS performs standard infrastructure and
readiness health checks for network traffic on these new nodes to verify that they are working as
expected. If any of these checks fail, Amazon EKS reverts the infrastructure deployment, and your cluster
remains on the prior Kubernetes version. Running applications are not affected, and your cluster is never
left in a non-deterministic or unrecoverable state. Amazon EKS regularly backs up all managed clusters,
and mechanisms exist to recover clusters if necessary. We are constantly evaluating and improving our
Kubernetes infrastructure management processes.
To upgrade the cluster, Amazon EKS requires two to three free IP addresses from the subnets that were
provided when you created the cluster. If these subnets don't have available IP addresses, the
upgrade can fail. Additionally, if any of the subnets or security groups that were provided during cluster
creation have been deleted, the cluster upgrade process can fail.
Note
Although Amazon EKS runs a highly available control plane, you might experience minor service
interruptions during an update. For example, if you attempt to connect to an API server just
before or just after it's terminated and replaced by a new API server running the new version
of Kubernetes, you might experience API call errors or connectivity issues. If this happens, retry
your API operations until they succeed.
Amazon EKS does not modify any of your Kubernetes add-ons when you update a cluster. After updating
your cluster, we recommend that you update your add-ons to the versions listed in the following table
for the new Kubernetes version that you're updating to. Steps to accomplish this are included in the
update procedures.
If you're using additional add-ons for your cluster that aren't listed in the previous table, update them to
the latest compatible versions after updating your cluster.
1. Compare the Kubernetes version of your cluster control plane to the Kubernetes version of your
nodes.
• Get the Kubernetes version of your cluster control plane with the following command.
• Get the Kubernetes version of your nodes with the following command.
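The exact commands aren't preserved here, but the following pair covers both checks: kubectl version reports the server (control plane) version, and kubectl get nodes lists each node's kubelet version.
# Control plane version
kubectl version --short
# Node versions
kubectl get nodes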
If your nodes are more than one Kubernetes minor version older than your control plane, then you
must upgrade your nodes to a newer Kubernetes minor version before you update your cluster's
Kubernetes version. For more information, see Kubernetes version and version skew support policy
in the Kubernetes documentation.
We recommend that you update your nodes to your cluster's current pre-update Kubernetes minor
version prior to your cluster update. Your nodes must not run a newer Kubernetes version than
your control plane. For example, if your control plane is running version 1.16 and your nodes
are running version 1.14, update your nodes to version 1.15 or 1.16 (recommended) before you
update your cluster’s Kubernetes version to 1.17. For more information, see Self-managed node
updates (p. 104).
2. The pod security policy admission controller is enabled on Amazon EKS clusters running Kubernetes
version 1.13 or later. If you are upgrading your cluster to Kubernetes version 1.13 or later, ensure
that the proper pod security policies are in place before you update to avoid any issues. You can
check for the default policy with the following command:
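The default policy is named eks.privileged, so a check of the following form works:
kubectl get psp eks.privileged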
If you receive the following error, see To install or restore the default pod security policy (p. 279)
before proceeding.
3. Update your cluster. For instructions, select the tab with the name of the tool that you want to use
to update your cluster.
eksctl
This procedure requires eksctl version 0.25.0-rc.0 or later. You can check your version with
the following command:
eksctl version
Update your Amazon EKS cluster Kubernetes version one minor version later than its current
version with the following command, replacing dev with your cluster name. Because Amazon
EKS runs a highly available control plane, you can update only one minor version at a time. See
Kubernetes Version and Version Skew Support Policy for the rationale behind this requirement.
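For example (replace dev with your cluster name; without --approve, eksctl only shows the planned change):
eksctl upgrade cluster --name dev --approve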
Important
You may need to update some of your deployed resources before you can update
to 1.16. For more information, see the section called “Kubernetes 1.16 upgrade
prerequisites” (p. 41). Upgrading a cluster from 1.16 to 1.17 will fail if any of your
AWS Fargate pods have a kubelet minor version earlier than 1.16. Before upgrading
your cluster from 1.16 to 1.17, you need to recycle your Fargate pods so that their
kubelet is 1.16 before attempting to upgrade the cluster to 1.17.
AWS CLI
1. Update your cluster with the following AWS CLI command. Substitute your cluster name and
desired Kubernetes minor version.
Important
You may need to update some of your deployed resources before you can update
to 1.16. For more information, see the section called “Kubernetes 1.16 upgrade
prerequisites” (p. 41). Upgrading a cluster from 1.16 to 1.17 will fail if any of your
AWS Fargate pods have a kubelet minor version earlier than 1.16. Before upgrading
your cluster from 1.16 to 1.17, you need to recycle your Fargate pods so that their
kubelet is 1.16 before attempting to upgrade the cluster to 1.17.
Important
Because Amazon EKS runs a highly available control plane, you can update only one
minor version at a time. See Kubernetes Version and Version Skew Support Policy for
the rationale behind this requirement. Therefore, if your current version is 1.15 and
you want to upgrade to 1.17, then you must first upgrade your cluster to 1.16 and
then upgrade it from 1.16 to 1.17. If you try to update directly from 1.15 to 1.17,
then the update version command throws an error.
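A sketch of the command (replace my-cluster and the version with your own values):
aws eks update-cluster-version --region region-code --name my-cluster --kubernetes-version 1.17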
Output:
{
"update": {
"id": "b5f0ba18-9a87-4450-b5a0-825e6e84496f",
"status": "InProgress",
"type": "VersionUpdate",
"params": [
{
"type": "Version",
"value": "1.17"
},
{
"type": "PlatformVersion",
"value": "eks.1"
}
],
...
"errors": []
}
}
2. Monitor the status of your cluster update with the following command, using the cluster
name and update ID that the previous command returned. Your update is complete when the
status appears as Successful.
Note
The cluster update should finish in a few minutes.
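For example, using the update ID that the previous command returned:
aws eks describe-update --region region-code --name my-cluster --update-id b5f0ba18-9a87-4450-b5a0-825e6e84496f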
Output:
{
"update": {
"id": "b5f0ba18-9a87-4450-b5a0-825e6e84496f",
"status": "Successful",
"type": "VersionUpdate",
"params": [
{
"type": "Version",
"value": "1.17"
},
{
"type": "PlatformVersion",
"value": "eks.1"
}
],
...
"errors": []
}
}
4. Patch the kube-proxy daemonset to use the image that corresponds to your cluster's Region and
current Kubernetes version (in this example, 1.17.7).
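The original sub-step isn't preserved in this copy; one way to print the image that kube-proxy currently uses is:
kubectl get daemonset kube-proxy --namespace kube-system -o=jsonpath='{$.spec.template.spec.containers[0].image}'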
Update kube-proxy to the recommended version by taking the output from the previous step and
replacing the version tag with your cluster's recommended kube-proxy version:
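A sketch of the command. The registry account (602401143452) and Region (us-west-2) below are example values, and the tag should be your cluster's recommended kube-proxy version:
kubectl set image daemonset.apps/kube-proxy -n kube-system \
    kube-proxy=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.17.7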
Your account ID and region may differ from the example above.
5. Check your cluster's DNS provider. Clusters that were created with Kubernetes version 1.10 shipped
with kube-dns as the default DNS and service discovery provider. If you have updated a 1.10 cluster
to a newer version and you want to use CoreDNS for DNS and service discovery, then you must
install CoreDNS and remove kube-dns.
To check if your cluster is already running CoreDNS, use the following command.
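For example (both kube-dns and CoreDNS pods carry the k8s-app=kube-dns label):
kubectl get pods -n kube-system -l k8s-app=kube-dns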
If the output shows coredns in the pod names, you're already running CoreDNS in your cluster. If
not, see Installing or upgrading CoreDNS (p. 184) to install CoreDNS on your cluster, update it to
the recommended version, return here, and skip steps 6-8.
6. Check the current version of your cluster's coredns deployment.
kubectl describe deployment coredns --namespace kube-system | grep Image | cut -d "/" -f 3
Output:
coredns:v1.1.3
The recommended coredns versions for the corresponding Kubernetes versions are as follows:
7. If your current coredns version is 1.5.0 or later, but earlier than the recommended version, then
skip this step. If your current version is earlier than 1.5.0, then you need to modify the config map
for coredns to use the forward plug-in, rather than the proxy plug-in.
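a. Open the coredns config map for editing. The original command isn't preserved in this copy; one way is:
kubectl -n kube-system edit configmap coredns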
b. Replace proxy in the following line with forward. Save the file and exit the editor.
proxy . /etc/resolv.conf
9. Update coredns to the recommended version by taking the output from the previous step and
replacing the version tag with your cluster's recommended coredns version:
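A sketch of the command. The registry, Region, and version tag below are example values; use your cluster's recommended coredns version:
kubectl set image deployment.apps/coredns -n kube-system \
    coredns=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/coredns:v1.6.6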
10. Check the version of your cluster's Amazon VPC CNI Plugin for Kubernetes. Use the following
command to print your cluster's CNI version.
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
Output:
amazon-k8s-cni:1.6.2
If your CNI version is earlier than 1.6.3, then use the appropriate command below to update your
CNI version to the latest recommended version:
• Replace region-code in the following command with the Region that your cluster is in and
then run the modified command to replace the Region code in the file (currently us-west-2).
11. (Optional) If you deployed the Kubernetes Cluster Autoscaler to your cluster prior to upgrading the
cluster, update the Cluster Autoscaler to the latest version that matches the Kubernetes major and
minor version that you upgraded to.
a. Open the Cluster Autoscaler releases page in a web browser and find the latest Cluster
Autoscaler version that matches your cluster's Kubernetes major and minor version. For
example, if your cluster's Kubernetes version is 1.17 find the latest Cluster Autoscaler release
that begins with 1.17. Record the semantic version number (1.17.n) for that release to use in the
next step.
b. Set the Cluster Autoscaler image tag to the version that you recorded in the previous step with
the following command. Replace 1.17.n with your own value. You can replace us with asia or
eu.
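A sketch of the command. Replace v1.17.n with the release version that you recorded; the us. prefix is the registry you can swap for asia. or eu.:
kubectl -n kube-system set image deployment.apps/cluster-autoscaler \
    cluster-autoscaler=us.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:v1.17.n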
Note
Depending on the version that you need, you may need to change the previous address
to gcr.io/google-containers/cluster-autoscaler:v1.n.n . The image
address is listed on the releases page.
12. (Clusters with GPU nodes only) If your cluster has node groups with GPU support (for example,
p3.2xlarge), you must update the NVIDIA device plugin for Kubernetes DaemonSet on your cluster
with the following command.
13. After your cluster update is complete, update your nodes to the same Kubernetes version of your
updated cluster. For more information, see Self-managed node updates (p. 104) or Updating a
managed node group (p. 92). Any new pods launched on Fargate will have a kubelet version
that matches your cluster version. Existing Fargate pods will not be changed.
• NetworkPolicy resources will no longer be served from extensions/v1beta1 in v1.16. Migrate
to the networking.k8s.io/v1 API, available since v1.8. Existing persisted data can be retrieved
through the networking.k8s.io/v1 API.
• PodSecurityPolicy resources will no longer be served from extensions/v1beta1 in v1.16. Migrate to
the policy/v1beta1 API, available since v1.10. Existing persisted data can be retrieved through the
policy/v1beta1 API.
• DaemonSet, Deployment, StatefulSet, and ReplicaSet resources will no longer be served from
extensions/v1beta1, apps/v1beta1, or apps/v1beta2 in v1.16. Migrate to the apps/v1 API,
available since v1.9. Existing persisted data can be retrieved through the apps/v1 API. For example, to
convert a Deployment that currently uses apps/v1beta1, enter the following command.
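For example, with the kubectl convert subcommand that shipped with older kubectl clients (it has been removed from newer releases; my-deployment.yaml is a placeholder file name):
kubectl convert -f ./my-deployment.yaml --output-version apps/v1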
Note
The previous command may use different default values from what is set in your current
manifest file. To learn more about a specific resource, see the Kubernetes API reference.
If you originally created an Amazon EKS cluster with Kubernetes version 1.11 or earlier and have not
removed the --resource-container flag from the kube-proxy DaemonSet, then updating to
Kubernetes 1.16 will cause kube-proxy failures. This flag is deprecated in Kubernetes 1.16. For more
information, see kube-proxy in Kubernetes 1.16 Deprecations and removals. You must remove this flag
before updating to Kubernetes 1.16.
To easily check for deprecated API usage in your cluster, make sure that the audit control plane
log (p. 55) is enabled, and specify v1beta as a filter for the events. All of the replacement APIs
are in Kubernetes versions later than 1.10. Applications on any supported version of Amazon EKS can
begin using the updated APIs now.
• Remove the --resource-container="" flag from your kube-proxy DaemonSet, if your cluster
was originally deployed with Kubernetes 1.11 or earlier or use a kube-proxy configuration file
(recommended). To determine whether your current version of kube-proxy has the flag, enter the
following command.
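For example:
kubectl get daemonset kube-proxy --namespace kube-system -o yaml | grep resource-container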
If you receive no output, then you don't need to remove anything. If you receive output similar to --
resource-container="", then you need to remove the flag. Enter the following command to edit
your current kube-proxy config.
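For example:
kubectl edit daemonset kube-proxy --namespace kube-system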
With the editor open, remove the --resource-container="" line, and save the file. We recommend
that you instead start using a kube-proxy configuration file. To do so, download the following
manifest.
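The cluster endpoint shown in the following output can be retrieved with a command of this form (my-cluster is a placeholder):
aws eks describe-cluster --name my-cluster --query "cluster.endpoint" --output text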
Output
https://A89DBB2140C8AC0C2F920A36CCC6E18C.sk1.region-code.eks.amazonaws.com
Edit the kube-proxy-daemonset.yaml file that you downloaded. In your editor, replace
MASTER_ENDPOINT with the output from the previous command. Replace REGION with your cluster's
region. On the same line, replace the version with the version of your cluster, if necessary. Apply the
file with the following command.
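For example:
kubectl apply -f kube-proxy-daemonset.yaml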
Deleting a cluster
When you're done using an Amazon EKS cluster, you should delete the resources associated with it so
that you don't incur any unnecessary costs.
Important
If you have active services in your cluster that are associated with a load balancer, you must
delete those services before deleting the cluster so that the load balancers are deleted properly.
Otherwise, you can have orphaned resources in your VPC that prevent you from being able to
delete the VPC.
Choose the tab below that corresponds to your preferred cluster deletion method.
eksctl
This procedure requires eksctl version 0.25.0-rc.0 or later. You can check your version with the
following command:
eksctl version
2. Delete any services that have an associated EXTERNAL-IP value. These services are fronted by
an Elastic Load Balancing load balancer, and you must delete them in Kubernetes to allow the
load balancer and associated resources to be properly released.
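A sketch of the commands (service-name is a placeholder):
# List all services and their external IP addresses
kubectl get svc --all-namespaces
# Delete each service that has an EXTERNAL-IP value
kubectl delete svc service-name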
3. Delete the cluster and its associated nodes with the following command, replacing prod with
your cluster name.
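For example:
eksctl delete cluster --name prod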
Output:
2. Delete any services that have an associated EXTERNAL-IP value. These services are fronted by
an Elastic Load Balancing load balancer, and you must delete them in Kubernetes to allow the
load balancer and associated resources to be properly released.
a. Select the VPC stack to delete and choose Actions and then Delete Stack.
b. On the Delete Stack confirmation screen, choose Yes, Delete.
AWS CLI
2. Delete any services that have an associated EXTERNAL-IP value. These services are fronted by
an Elastic Load Balancing load balancer, and you must delete them in Kubernetes to allow the
load balancer and associated resources to be properly released.
a. List the node groups in your cluster with the following command.
Note
The node groups listed are managed node groups (p. 87) only.
b. Delete each node group with the following command. Delete all node groups in the cluster.
c. List the Fargate profiles in your cluster with the following command.
d. Delete each Fargate profile with the following command. Delete all Fargate profiles in the
cluster.
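The sub-steps above map to commands of the following form (my-cluster, my-nodegroup, and my-profile are placeholders):
# a. List managed node groups
aws eks list-nodegroups --cluster-name my-cluster
# b. Delete each node group
aws eks delete-nodegroup --cluster-name my-cluster --nodegroup-name my-nodegroup
# c. List Fargate profiles
aws eks list-fargate-profiles --cluster-name my-cluster
# d. Delete each Fargate profile
aws eks delete-fargate-profile --cluster-name my-cluster --fargate-profile-name my-profile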
a. List your available AWS CloudFormation stacks with the following command. Find the node
template name in the resulting output.
b. Delete each node stack with the following command, replacing node-stack with your
node stack name. Delete all self-managed node stacks in the cluster.
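For example (node-stack is a placeholder stack name):
aws cloudformation list-stacks --query "StackSummaries[].StackName"
aws cloudformation delete-stack --stack-name node-stack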
5. Delete the cluster with the following command, replacing my-cluster with your cluster name.
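For example:
aws eks delete-cluster --name my-cluster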
a. List your available AWS CloudFormation stacks with the following command. Find the VPC
template name in the resulting output.
b. Delete the VPC stack with the following command, replacing my-vpc-stack with your VPC
stack name.
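For example:
aws cloudformation delete-stack --stack-name my-vpc-stack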
When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server
that you use to communicate with your cluster (using Kubernetes management tools such as kubectl).
By default, this API server endpoint is public to the internet, and access to the API server is secured using
a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access
Control (RBAC).
You can enable private access to the Kubernetes API server so that all communication between your
nodes and the API server stays within your VPC. You can limit the IP addresses that can access your API
server from the internet, or completely disable internet access to the API server.
Note
Because this endpoint is for the Kubernetes API server and not a traditional AWS PrivateLink
endpoint for communicating with an AWS API, it doesn't appear as an endpoint in the Amazon
VPC console.
When you enable endpoint private access for your cluster, Amazon EKS creates a Route 53 private hosted
zone on your behalf and associates it with your cluster's VPC. This private hosted zone is managed
by Amazon EKS, and it doesn't appear in your account's Route 53 resources. In order for the private
hosted zone to properly route traffic to your API server, your VPC must have enableDnsHostnames
and enableDnsSupport set to true, and the DHCP options set for your VPC must include
AmazonProvidedDNS in its domain name servers list. For more information, see Updating DNS support
for your VPC in the Amazon VPC User Guide.
Note
In addition to standard Amazon EKS permissions, your IAM user or role must have
route53:AssociateVPCWithHostedZone permissions to enable the cluster's endpoint
private access.
You can define your API server endpoint access requirements when you create a new cluster, and you can
update the API server endpoint access for a cluster at any time.
Modifying cluster endpoint access
You can modify your cluster API server endpoint access using the AWS Management Console or AWS CLI.
For instructions, select the tab for the tool that you want to use.
as 203.0.113.5/32. The block cannot include reserved addresses. You can enter additional
blocks by selecting Add Source. There is a maximum number of CIDR blocks that you can
specify. For more information, see Amazon EKS service quotas (p. 239). If you specify no
blocks, then the public API server endpoint receives requests from all (0.0.0.0/0) IP addresses.
If you restrict access to your public endpoint using CIDR blocks, it is recommended that you
also enable private endpoint access so that nodes and Fargate pods (if you use them) can
communicate with the cluster. Without the private endpoint enabled, your public access
endpoint CIDR sources must include the egress sources from your VPC. For example, if you have
a node in a private subnet that communicates to the internet through a NAT Gateway, you will
need to add the outbound IP address of the NAT gateway as part of an allowed CIDR block on
your public endpoint.
7. Choose Update to finish.
AWS CLI
Complete the following steps using the AWS CLI version 1.18.97 or later. You can check your current
version with aws --version. To install or upgrade the AWS CLI, see Installing the AWS CLI.
1. Update your cluster API server endpoint access with the following AWS CLI command.
Substitute your cluster name and desired endpoint access values. If you set
endpointPublicAccess=true, then you can (optionally) enter a single CIDR block, or a
comma-separated list of CIDR blocks for publicAccessCidrs. The blocks cannot include
reserved addresses. If you specify CIDR blocks, then the public API server endpoint will only
receive requests from the listed blocks. There is a maximum number of CIDR blocks that you can
specify. For more information, see Amazon EKS service quotas (p. 239). If you restrict access
to your public endpoint using CIDR blocks, it is recommended that you also enable private
endpoint access so that nodes and Fargate pods (if you use them) can communicate with the
cluster. Without the private endpoint enabled, your public access endpoint CIDR sources must
include the egress sources from your VPC. For example, if you have a node in a private subnet
that communicates to the internet through a NAT Gateway, you will need to add the outbound
IP address of the NAT gateway as part of an allowed CIDR block on your public endpoint. If you
specify no CIDR blocks, then the public API server endpoint receives requests from all (0.0.0.0/0)
IP addresses.
Note
The following command enables private access and public access from a single IP
address for the API server endpoint. Replace 203.0.113.5/32 with a single CIDR
block, or a comma-separated list of CIDR blocks that you want to restrict network
access to.
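A sketch of the command described in the note above:
aws eks update-cluster-config \
    --region region-code \
    --name my-cluster \
    --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.5/32",endpointPrivateAccess=true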
Output:
{
"update": {
"id": "e6f0905f-a5d4-4a2a-8c49-EXAMPLE00000",
"status": "InProgress",
"type": "EndpointAccessUpdate",
"params": [
{
"type": "EndpointPublicAccess",
"value": "true"
},
{
"type": "EndpointPrivateAccess",
"value": "true"
},
{
"type": "publicAccessCidrs",
"value": "[\203.0.113.5/32\"]"
}
],
"createdAt": 1576874258.137,
"errors": []
}
}
2. Monitor the status of your endpoint access update with the following command, using the
cluster name and update ID that was returned by the previous command. Your update is
complete when the status is shown as Successful.
Output:
{
"update": {
"id": "e6f0905f-a5d4-4a2a-8c49-EXAMPLE00000",
"status": "Successful",
"type": "EndpointAccessUpdate",
"params": [
{
"type": "EndpointPublicAccess",
"value": "true"
},
{
"type": "EndpointPrivateAccess",
"value": "true"
},
{
"type": "publicAccessCidrs",
"value": "[\203.0.113.5/32\"]"
}
],
"createdAt": 1576874258.137,
"errors": []
}
}
• Connected network – Connect your network to the VPC with an AWS transit gateway or other
connectivity option and then use a computer in the connected network. You must ensure that your
Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your
connected network.
• Amazon EC2 bastion host – You can launch an Amazon EC2 instance into a public subnet in your
cluster's VPC and then log in via SSH into that instance to run kubectl commands. For more
information, see Linux bastion hosts on AWS. You must ensure that your Amazon EKS control plane
security group contains rules to allow ingress traffic on port 443 from your bastion host. For more
information, see Amazon EKS security group considerations (p. 166).
When you configure kubectl for your bastion host, be sure to use AWS credentials that are already
mapped to your cluster's RBAC configuration, or add the IAM user or role that your bastion will
use to the RBAC configuration before you remove endpoint public access. For more information,
see Managing users or IAM roles for your cluster (p. 217) and Unauthorized or access denied
(kubectl) (p. 306).
• AWS Cloud9 IDE – AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets
you write, run, and debug your code with just a browser. You can create an AWS Cloud9 IDE in your
cluster's VPC and use the IDE to communicate with your cluster. For more information, see Creating
an environment in AWS Cloud9. You must ensure that your Amazon EKS control plane security group
contains rules to allow ingress traffic on port 443 from your IDE security group. For more information,
see Amazon EKS security group considerations (p. 166).
When you configure kubectl for your AWS Cloud9 IDE, be sure to use AWS credentials that are
already mapped to your cluster's RBAC configuration, or add the IAM user or role that your IDE will
use to the RBAC configuration before you remove endpoint public access. For more information,
see Managing users or IAM roles for your cluster (p. 217) and Unauthorized or access denied
(kubectl) (p. 306).
Cluster Autoscaler
The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods
fail to launch due to lack of resources or when nodes in the cluster are underutilized and their pods can
be rescheduled onto other nodes in the cluster.
This topic shows you how to deploy the Cluster Autoscaler to your Amazon EKS cluster and how to
configure it to modify your Amazon EC2 Auto Scaling groups. The Cluster Autoscaler modifies your node
groups so that they scale out when you need more resources and scale in when you have underutilized
resources.
If you are running a stateful application across multiple Availability Zones that is backed by Amazon
EBS volumes and using the Kubernetes Cluster Autoscaler (p. 51), you should configure multiple
node groups, each scoped to a single Availability Zone. In addition, you should enable the --balance-
similar-node-groups feature. Otherwise, you can create a single node group that spans multiple
Availability Zones.
Choose one of the cluster creation procedures below that meets your requirements.
To create a cluster with a single managed group that spans multiple Availability Zones
• Create an Amazon EKS cluster with a single managed node group with the following eksctl
command. For more information, see Creating an Amazon EKS cluster (p. 28). Substitute the
variable text with your own values.
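A sketch of the command. The node type, node counts, and names are example values, not requirements; --managed creates a managed node group and --asg-access attaches the IAM policy that the Cluster Autoscaler needs:
eksctl create cluster \
    --name my-cluster \
    --version 1.17 \
    --region region-code \
    --nodegroup-name standard-workers \
    --node-type t3.medium \
    --nodes 3 --nodes-min 1 --nodes-max 4 \
    --managed --asg-access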
...
[✓] using region region-code
[✓] setting availability zones to [region-codea region-codeb region-codec]
[✓] subnets for region-codea - public:192.168.0.0/19 private:192.168.96.0/19
[✓] subnets for region-codeb - public:192.168.32.0/19 private:192.168.128.0/19
[✓] subnets for region-codec - public:192.168.64.0/19 private:192.168.160.0/19
...
[✓] nodegroup "ng-6bcca56a" has 2 node(s)
[✓] node "ip-192-168-28-68.region-code.compute.internal" is ready
[✓] node "ip-192-168-61-153.region-code.compute.internal" is ready
[✓] waiting for at least 2 node(s) to become ready in "ng-6bcca56a"
[✓] nodegroup "ng-6bcca56a" has 2 node(s)
[✓] node "ip-192-168-28-68.region-code.compute.internal" is ready
[✓] node "ip-192-168-61-153.region-code.compute.internal" is ready
...
[✓] EKS cluster "my-cluster" in "region-code" region is ready
To create a cluster with a dedicated managed node group for each Availability Zone
1. Create an Amazon EKS cluster with no node groups with the following eksctl command. For
more information, see Creating an Amazon EKS cluster (p. 28). Note the Availability Zones that
the cluster is created in. You will use these Availability Zones when you create your node groups.
Substitute the red variable text with your own values.
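A sketch of the command (--without-nodegroup creates the cluster only):
eksctl create cluster \
    --name my-cluster \
    --version 1.17 \
    --region region-code \
    --without-nodegroup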
...
[✓] using region region-code
[✓] setting availability zones to [region-codea region-codec region-codeb]
[✓] subnets for region-codea - public:192.168.0.0/19 private:192.168.96.0/19
[✓] subnets for region-codec - public:192.168.32.0/19 private:192.168.128.0/19
[✓] subnets for region-codeb - public:192.168.64.0/19 private:192.168.160.0/19
...
[✓] EKS cluster "my-cluster" in "region-code" region is ready
This cluster was created in the following Availability Zones: region-codea, region-codec, and
region-codeb.
2. For each Availability Zone in your cluster, use the following eksctl command to create a node
group. Substitute the variable text with your own values. This command creates an Auto Scaling
group with a minimum count of one and a maximum count of ten.
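A sketch of the command for one Availability Zone; repeat it for each zone, changing --node-zones and the node group name. The instance type and desired count are example values:
eksctl create nodegroup \
    --cluster my-cluster \
    --name my-cluster-region-codea \
    --node-zones region-codea \
    --node-type t3.medium \
    --nodes 3 --nodes-min 1 --nodes-max 10 \
    --managed --asg-access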
If you used the previous eksctl commands to create your node groups, these permissions are
automatically provided and attached to your node IAM roles. If you did not use eksctl, you must create
an IAM policy with the following document and attach it to your node IAM roles. For more information,
see Modifying a role in the IAM User Guide.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "ec2:DescribeLaunchTemplateVersions"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
If you used the previous eksctl commands to create your node groups, these tags are automatically
applied. If not, you must manually tag your Auto Scaling groups with the following tags. For more
information, see Tagging your Amazon EC2 resources in the Amazon EC2 User Guide for Linux Instances.
Key                                          Value
k8s.io/cluster-autoscaler/<cluster-name>     owned
k8s.io/cluster-autoscaler/enabled            true
1. Deploy the Cluster Autoscaler to your cluster with the following command.
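One way to do this (a sketch; the manifest path is the AWS autodiscovery example from the kubernetes/autoscaler repository):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml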
Edit the cluster-autoscaler container command to replace <YOUR CLUSTER NAME> with your
cluster's name, and add the following options.
• --balance-similar-node-groups
• --skip-nodes-with-system-pods=false
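For example, the deployment can be opened for editing with the following command; the snippet below shows how the container command section looks after the edit:
kubectl -n kube-system edit deployment.apps/cluster-autoscaler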
spec:
  containers:
  - command:
    - ./cluster-autoscaler
    - --v=4
    - --stderrthreshold=info
    - --cloud-provider=aws
    - --skip-nodes-with-local-storage=false
    - --expander=least-waste
    - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<YOUR CLUSTER NAME>
    - --balance-similar-node-groups
    - --skip-nodes-with-system-pods=false
Note
Depending on the version that you need, you may need to change the previous address to
gcr.io/google-containers/cluster-autoscaler:v1.n.n . The image address is
listed on the releases page.
Output:
Amazon EKS control plane logging
You can start using Amazon EKS control plane logging by choosing which log types you want to enable
for each new or existing Amazon EKS cluster. You can enable or disable each log type on a per-cluster
basis using the AWS Management Console, AWS CLI (version 1.16.139 or higher), or through the Amazon
EKS API. When enabled, logs are automatically sent from the Amazon EKS cluster to CloudWatch Logs in
the same account.
When you use Amazon EKS control plane logging, you're charged standard Amazon EKS pricing for each
cluster that you run. You are charged the standard CloudWatch Logs data ingestion and storage costs for
any logs sent to CloudWatch Logs from your clusters. You are also charged for any AWS resources, such
as Amazon EC2 instances or Amazon EBS volumes, that you provision as part of your cluster.
The following cluster control plane log types are available. Each log type corresponds to a component
of the Kubernetes control plane. To learn more about these components, see Kubernetes Components in
the Kubernetes documentation.
• Kubernetes API server component logs (api) – Your cluster's API server is the control plane
component that exposes the Kubernetes API. For more information, see kube-apiserver in the
Kubernetes documentation.
• Audit (audit) – Kubernetes audit logs provide a record of the individual users, administrators,
or system components that have affected your cluster. For more information, see Auditing in the
Kubernetes documentation.
• Authenticator (authenticator) – Authenticator logs are unique to Amazon EKS. These logs
represent the control plane component that Amazon EKS uses for Kubernetes Role Based
Access Control (RBAC) authentication using IAM credentials. For more information, see Cluster
authentication (p. 210).
• Controller manager (controllerManager) – The controller manager manages the core control
loops that are shipped with Kubernetes. For more information, see kube-controller-manager in the
Kubernetes documentation.
• Scheduler (scheduler) – The scheduler component manages when and where to run pods in your
cluster. For more information, see kube-scheduler in the Kubernetes documentation.
Enabling and disabling control plane logs
When you enable a log type, the logs are sent with a log verbosity level of 2.
aws --version
If your AWS CLI version is below 1.16.139, you must first update to the latest version. To install or
upgrade the AWS CLI, see Installing the AWS command line interface in the AWS Command Line
Interface User Guide.
2. Update your cluster's control plane log export configuration with the following AWS CLI command.
Substitute your cluster name and desired endpoint access values.
Note
The following command sends all available log types to CloudWatch Logs.
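A sketch of the command (my-cluster is a placeholder):
aws eks update-cluster-config \
    --region region-code \
    --name my-cluster \
    --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'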
Output:
{
"update": {
"id": "883405c8-65c6-4758-8cee-2a7c1340a6d9",
"status": "InProgress",
"type": "LoggingUpdate",
"params": [
{
"type": "ClusterLogging",
"value": "{\"clusterLogging\":[{\"types\":[\"api\",\"audit\",
\"authenticator\",\"controllerManager\",\"scheduler\"],\"enabled\":true}]}"
}
],
"createdAt": 1553271814.684,
"errors": []
}
}
3. Monitor the status of your log configuration update with the following command, using the cluster
name and the update ID that were returned by the previous command. Your update is complete
when the status appears as Successful.
Output:
{
"update": {
"id": "883405c8-65c6-4758-8cee-2a7c1340a6d9",
"status": "Successful",
"type": "LoggingUpdate",
"params": [
{
"type": "ClusterLogging",
"value": "{\"clusterLogging\":[{\"types\":[\"api\",\"audit\",
\"authenticator\",\"controllerManager\",\"scheduler\"],\"enabled\":true}]}"
}
],
"createdAt": 1553271814.684,
"errors": []
}
}
To learn more about viewing, analyzing, and managing logs in CloudWatch, see the Amazon CloudWatch
Logs User Guide.
Kubernetes versions
The following Kubernetes versions are currently available for new Amazon EKS clusters:
• 1.17.6
• 1.16.8
• 1.15.11
• 1.14.9 (Can't be deployed using the AWS Management Console, but can be deployed using the API or
tools such as the AWS CLI, eksctl, and AWS CloudFormation.)
Unless your application requires a specific version of Kubernetes, we recommend that you choose the
latest available Kubernetes version supported by Amazon EKS for your clusters. As new Kubernetes
versions become available in Amazon EKS, we recommend that you proactively update your clusters to
use the latest available version. For more information, see Updating an Amazon EKS cluster Kubernetes
version (p. 34).
Kubernetes 1.17
Kubernetes 1.17 is now available in Amazon EKS. For more information about Kubernetes 1.17, see the
official release announcement.
Important
• EKS has not enabled the CSIMigrationAWS feature flag. This will be enabled in a future
release, along with detailed migration instructions. For more info on CSI migration, see the
Kubernetes blog.
• Upgrading a cluster from 1.16 to 1.17 will fail if any of your AWS Fargate pods have a
kubelet minor version earlier than 1.16. Before upgrading your cluster from 1.16 to 1.17,
you need to recycle your Fargate pods so that their kubelet is 1.16 before attempting to
upgrade the cluster to 1.17. To recycle a Kubernetes deployment on a 1.15 or later cluster, use
the following command.
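A minimal form of that command (deployment-name is a placeholder):
kubectl rollout restart deployment deployment-name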
The following Kubernetes features are now supported in Kubernetes 1.17 Amazon EKS clusters:
• Cloud Provider Labels have reached general availability. If you are using the beta labels in your pod
specs for features such as node affinity, or in any custom controllers, then we recommend that you
start migrating them to the new GA labels. For information about the new labels, see the following
Kubernetes documentation:
• node.kubernetes.io/instance-type
• topology.kubernetes.io/region
• topology.kubernetes.io/zone
• The ResourceQuotaScopeSelectors feature has graduated to generally available. This feature allows
you to limit the number of resources a quota supports to only those that pertain to the scope.
• The TaintNodesByCondition feature has graduated to generally available. This feature allows you to
taint nodes that have conditions such as high disk or memory pressure.
• The CSI Topology feature has graduated to generally available, and is fully supported by the EBS CSI
driver. You can use topology to restrict the Availability Zone where a volume is provisioned.
• Finalizer protection for services of type LoadBalancer has graduated to generally available. This
feature ensures that a service resource is not fully deleted until the correlating load balancer is also
deleted.
• Custom resources now support default values. You specify values in an OpenAPI v3 validation schema.
• The Windows containers RunAsUsername feature is now in beta, allowing you to run Windows
applications in a container as a different username than the default.
Kubernetes 1.16
Kubernetes 1.16 is now available in Amazon EKS. For more information about Kubernetes 1.16, see the
official release announcement.
Important
The following Kubernetes features are now supported in Kubernetes 1.16 Amazon EKS clusters:
• Volume expansion in the CSI specification has moved to beta, which allows for any CSI spec
volume plugin to be resizeable. For more information, see Volume Expansion in the Kubernetes CSI
documentation. The latest version of the EBS CSI driver supports volume expansion when running on
an Amazon EKS 1.16 cluster.
• Windows GMSA support has graduated from alpha to beta, and is now supported by Amazon EKS.
For more information, see Configure GMSA for Windows Pods and containers in the Kubernetes
documentation.
• A new annotation: service.beta.kubernetes.io/aws-load-balancer-eip-allocations is
available on service type LoadBalancer to assign an elastic IP address to Network Load Balancers.
For more information, see the Support EIP Allocations with AWS NLB GitHub issue.
• Finalizer protection for service load balancers is now in beta and enabled by default. Service load
balancer finalizer protection ensures that any load balancer resources allocated for a Kubernetes
Service object, such as the AWS Network Load Balancer, will be destroyed or released when the
service is deleted. For more information, see Garbage Collecting Load Balancers in the Kubernetes
documentation.
• The Kubernetes custom resource definitions and admission webhooks extensibility mechanisms have
both reached general availability. For more information, see Custom Resources and Dynamic Admission
Control in the Kubernetes documentation.
• The server-side apply feature has reached beta status and is enabled by default. For more information,
see Server Side Apply in the Kubernetes documentation.
• The CustomResourceDefaulting feature is promoted to beta and enabled by default. Defaults
may be specified in structural schemas through the apiextensions.k8s.io/v1 API. For more
information, see Specifying a structural schema in the Kubernetes documentation.
Kubernetes 1.15
Kubernetes 1.15 is now available in Amazon EKS. For more information about Kubernetes 1.15, see the
official release announcement.
Important
Starting with 1.15, Amazon EKS no longer tags the VPC containing your cluster.
• For more information about VPC tagging, see ??? (p. 165).
Important
Amazon EKS has set the re-invocation policy for the Pod Identity Webhook to IfNeeded.
This allows the webhook to be re-invoked if objects are changed by other mutating admission
webhooks like the App Mesh sidecar injector. For more information about the App Mesh sidecar
injector, see Install the sidecar injector.
The following features are now supported in Kubernetes 1.15 Amazon EKS clusters:
• EKS now supports configuring transport layer security (TLS) termination, access logs, and source
ranges for network load balancers. For more information, see Network Load Balancer support on AWS
on GitHub.
• Improved flexibility of Custom Resource Definitions (CRD), including the ability to convert
between versions on the fly. For more information, see Extend the Kubernetes API with
CustomResourceDefinitions on GitHub.
• NodeLocal DNSCache is in beta for Kubernetes version 1.15 clusters. This feature can help improve
cluster DNS performance by running a DNS caching agent on cluster nodes as a DaemonSet. For more
information, see Using NodeLocal DNSCache in Kubernetes clusters on GitHub.
Note
When running CoreDNS on Amazon EC2, we recommend not using force_tcp in the
configuration and ensuring that options use-vc is not set in /etc/resolv.conf.
Kubernetes 1.14
Kubernetes 1.14 is now available in Amazon EKS. For more information about Kubernetes 1.14, see the
official release announcement.
Important
The --allow-privileged flag has been removed from kubelet on Amazon EKS 1.14 nodes.
If you have modified or restricted the Amazon EKS default pod security policy (p. 277) on your
cluster, you should verify that your applications have the permissions they need on 1.14 nodes.
The following features are now supported in Kubernetes 1.14 Amazon EKS clusters:
• Container Storage Interface Topology is in beta for Kubernetes version 1.14 clusters. For more
information, see CSI Topology Feature in the Kubernetes CSI Developer Documentation. The following
CSI drivers provide a CSI interface for container orchestrators like Kubernetes to manage the life cycle
of Amazon EBS volumes, Amazon EFS file systems, and Amazon FSx for Lustre file systems:
• Amazon Elastic Block Store (EBS) CSI driver
• Amazon EFS CSI driver
• Amazon FSx for Lustre CSI driver
• Process ID (PID) limiting is in beta for Kubernetes version 1.14 clusters. This feature allows you to set
quotas for how many processes a pod can create, which can prevent resource starvation for other
applications on a cluster. For more information, see Process ID limiting for stability improvements in
Kubernetes 1.14.
• Persistent Local Volumes are now GA and make locally attached storage available as a persistent
volume source. For more information, see Kubernetes 1.14: Local persistent volumes GA.
• Pod Priority and Preemption is now GA and allows pods to be assigned a scheduling priority level. For
more information, see Pod Priority and Preemption in the Kubernetes documentation.
• Windows node support is GA with Kubernetes 1.14.
We will announce the deprecation of a given Kubernetes minor version at least 60 days before the end of
support date. Because of the Amazon EKS qualification and release process for new Kubernetes versions,
the deprecation of a Kubernetes version on Amazon EKS will be on or after the date the Kubernetes
project stops supporting the version upstream.
Kubernetes supports compatibility between the control plane and nodes for up to two minor versions,
so 1.15 nodes will continue to operate when orchestrated by a 1.17 control plane. For more information,
see Kubernetes version and version skew support policy in the Kubernetes documentation.
Platform versions
Amazon EKS platform versions represent the capabilities of the cluster control plane, such as which
Kubernetes API server flags are enabled, as well as the current Kubernetes patch version. Each
Kubernetes minor version has one or more associated Amazon EKS platform versions. The platform
versions for different Kubernetes minor versions are independent.
When a new Kubernetes minor version is available in Amazon EKS, such as 1.17, the initial Amazon EKS
platform version for that Kubernetes minor version starts at eks.1. However, Amazon EKS releases new
platform versions periodically to enable new Kubernetes control plane settings and to provide security
fixes.
When new Amazon EKS platform versions become available for a minor version:
New Amazon EKS platform versions don't introduce breaking changes or cause service interruptions.
Note
Automatic upgrades of existing Amazon EKS platform versions are rolled out incrementally.
The roll-out process might take some time. If you need the latest Amazon EKS platform version
features immediately, you should create a new Amazon EKS cluster.
Clusters are always created with the latest available Amazon EKS platform version (eks.n) for the
specified Kubernetes version. If you update your cluster to a new Kubernetes minor version, your cluster
receives the current Amazon EKS platform version for the Kubernetes minor version that you updated to.
The current and recent Amazon EKS platform versions are described in the following tables.
Amazon EKS User Guide
Windows support
Windows support
This topic describes how to add Windows support to Amazon EKS clusters.
Considerations
Before deploying Windows nodes, be aware of the following considerations.
• Windows workloads are supported with Amazon EKS clusters running Kubernetes version 1.14 or later.
• Amazon EC2 instance types C3, C4, D2, I2, M4 (excluding m4.16xlarge), and R3 are not
supported for Windows workloads.
• Host networking mode is not supported for Windows workloads.
• Amazon EKS clusters must contain one or more Linux nodes to run core system pods that only run on
Linux, such as coredns and the VPC resource controller.
• The kubelet and kube-proxy event logs are redirected to the EKS Windows Event Log and are set to
a 200 MB limit.
• Windows nodes support one elastic network interface per node. The number of pods that you can run
per Windows node is equal to the number of IP addresses available per elastic network interface for
the node's instance type, minus one. For more information, see IP addresses per network interface per
instance type in the Amazon EC2 User Guide for Linux Instances.
• Group Managed Service Accounts (GMSA) for Windows pods and containers is not supported
by Amazon EKS versions earlier than 1.16. You can follow the instructions in the Kubernetes
documentation to enable and test this alpha feature on clusters that are earlier than 1.16.
Enabling Windows support
eksctl
This procedure only works for clusters that were created with eksctl and assumes that your
eksctl version is 0.25.0-rc.0 or later. You can check your version with the following command.
eksctl version
For more information about installing or upgrading eksctl, see Installing or upgrading
eksctl (p. 226).
1. Enable Windows support for your Amazon EKS cluster with the following eksctl command.
This command deploys the VPC resource controller and VPC admission controller webhook that
are required on Amazon EKS clusters to run Windows workloads.
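A sketch of the command (replace my-cluster with your cluster name; older eksctl releases use --name instead of --cluster):
eksctl utils install-vpc-controllers --cluster my-cluster --approve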
2. After you have enabled Windows support, you can launch a Windows node group into your
cluster. For more information, see Launching self-managed Windows nodes (p. 100).
After you add Windows support to your cluster, you must specify node selectors on your applications
so that the pods land on a node with the appropriate operating system. For Linux pods, use the
following node selector text in your manifests.
nodeSelector:
  kubernetes.io/os: linux
  kubernetes.io/arch: amd64
For Windows pods, use the following node selector text in your manifests.
nodeSelector:
  kubernetes.io/os: windows
  kubernetes.io/arch: amd64
Windows
In the following steps, replace the region-code with the Region that your cluster resides in.
If output similar to the following example output is returned, then the cluster has the necessary
role binding.
NAME AGE
eks:kube-proxy-windows 10d
If the output includes Error from server (NotFound), then the cluster does not have the
necessary cluster role binding. Add the binding by creating a file named eks-kube-proxy-
windows-crb.yaml with the following content.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: eks:kube-proxy-windows
  labels:
    k8s-app: kube-proxy
    eks.amazonaws.com/component: kube-proxy
subjects:
  - kind: Group
    name: "eks:kube-proxy-windows"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-proxier
  apiGroup: rbac.authorization.k8s.io
4. After you have enabled Windows support, you can launch a Windows node group into your
cluster. For more information, see Launching self-managed Windows nodes (p. 100).
After you add Windows support to your cluster, you must specify node selectors on your applications
so that the pods land on a node with the appropriate operating system. For Linux pods, use the
following node selector text in your manifests.
nodeSelector:
  kubernetes.io/os: linux
  kubernetes.io/arch: amd64
For Windows pods, use the following node selector text in your manifests.
nodeSelector:
  kubernetes.io/os: windows
  kubernetes.io/arch: amd64
To enable Windows support for your cluster with a macOS or Linux client
This procedure requires that the openssl library and jq JSON processor are installed on your client
system.
In the following steps, replace region-code with the Region that your cluster resides in.
2. Create the VPC admission controller webhook manifest for your cluster.
./webhook-create-signed-cert.sh
If output similar to the following example output is returned, then the cluster has the necessary
role binding.
NAME AGE
eks:kube-proxy-windows 10d
If the output includes Error from server (NotFound), then the cluster does not have the
necessary cluster role binding. Add the binding by creating a file named eks-kube-proxy-
windows-crb.yaml with the following content.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: eks:kube-proxy-windows
  labels:
    k8s-app: kube-proxy
    eks.amazonaws.com/component: kube-proxy
subjects:
  - kind: Group
    name: "eks:kube-proxy-windows"
roleRef:
  kind: ClusterRole
  name: system:node-proxier
  apiGroup: rbac.authorization.k8s.io
5. After you have enabled Windows support, you can launch a Windows node group into your
cluster. For more information, see Launching self-managed Windows nodes (p. 100).
After you add Windows support to your cluster, you must specify node selectors on your applications
so that the pods land on a node with the appropriate operating system. For Linux pods, use the
following node selector text in your manifests.
nodeSelector:
  kubernetes.io/os: linux
  kubernetes.io/arch: amd64
For Windows pods, use the following node selector text in your manifests.
nodeSelector:
  kubernetes.io/os: windows
  kubernetes.io/arch: amd64
Deploy a Windows sample application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: windows-server-iis
spec:
  selector:
    matchLabels:
      app: windows-server-iis
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: windows-server-iis
        tier: backend
        track: stable
    spec:
      containers:
      - name: windows-server-iis
        image: mcr.microsoft.com/windows/servercore:1809
        ports:
        - name: http
          containerPort: 80
        imagePullPolicy: IfNotPresent
        command:
        - powershell.exe
        - -command
        - "Add-WindowsFeature Web-Server; Invoke-WebRequest -UseBasicParsing -Uri 'https://dotnetbinaries.blob.core.windows.net/servicemonitor/2.0.1.6/ServiceMonitor.exe' -OutFile 'C:\\ServiceMonitor.exe'; echo '<html><body><br/><br/><marquee><H1>Hello EKS!!!<H1><marquee></body><html>' > C:\\inetpub\\wwwroot\\default.html; C:\\ServiceMonitor.exe 'w3svc'; "
      nodeSelector:
        kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: windows-server-iis-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: windows-server-iis
    tier: backend
    track: stable
  sessionAffinity: None
  type: LoadBalancer
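After saving the manifest (for example, as windows-server-iis.yaml, a placeholder file name), you can apply it and watch for the service's external IP address with commands like the following.

kubectl apply -f windows-server-iis.yaml
kubectl get service windows-server-iis-service -o wide -w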
5. After your external IP address is available, point a web browser to that address to view the IIS home
page.
Note
It might take several minutes for DNS to propagate and for your sample application to load
in your web browser.
ARM support
You can create an Amazon EKS cluster and add nodes running AWS Graviton-based instances to the
cluster. These instances deliver significant cost savings for scale-out and ARM-based applications such as
web servers, containerized microservices, caching fleets, and distributed data stores.
Note
These instructions and the assets that they reference are offered as a beta feature that is
administered by AWS. Use of these instructions and assets is governed as a beta under the AWS
service terms. While in beta, Amazon EKS does not support using AWS Graviton-based instances
for production Kubernetes workloads. Submit comments or questions in a GitHub issue.
Considerations
• Nodes can use any AWS Graviton-based instance type, such as a1.xlarge or m6g.2xlarge. However, all
nodes in a node group must use the same instance type.
• Nodes must be deployed with Kubernetes version 1.15 or 1.14.
• To use AWS Graviton-based instance nodes, you must set up a new Amazon EKS cluster. You cannot
add these nodes to a cluster that has existing x86 nodes.
Prerequisites
• Have eksctl installed on your computer. If you don't have it installed, see Install eksctl (p. 5) for
installation instructions.
• Have kubectl installed on your computer. For more information, see Installing kubectl (p. 221).
• Have the AWS CLI version 1.16.156 or later, or the aws-iam-authenticator installed. For more
information, see Installing aws-iam-authenticator (p. 210) or Installing the AWS CLI.
Create a cluster
1. Run the following command to create an Amazon EKS cluster with no nodes. If you want to create a
cluster running Kubernetes version 1.14, then replace 1.15 with the version that you want. You can
replace region-code with any Region that Amazon EKS is available in.
Launching an Amazon EKS cluster using eksctl creates an AWS CloudFormation stack. The launch
process for this stack typically takes 10 to 15 minutes. You can monitor the progress in the Amazon
EKS console.
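A sketch of such a command, assuming the cluster name a1-preview that appears in the CloudFormation stack name referenced in the next step, looks like the following; adjust the version and Region for your environment.

eksctl create cluster --name a1-preview --version 1.15 --region region-code --without-nodegroup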
2. When the cluster creation completes, open the AWS CloudFormation console. You will see a stack
named eksctl-a1-preview-cluster. Select this stack. Select the Resources tab. Record the
values of the IDs for the ControlPlaneSecurityGroup and VPC resources.
3. Confirm that the cluster is running with the kubectl get svc command. The command returns
output similar to the following example output.
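For example, a running cluster typically returns output similar to the following.

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   10m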
1. Update the CoreDNS image ID using the command that corresponds to the version of the cluster
that you installed in a previous step. You can replace 1.15 with 1.14.
2. Update the kube-proxy image ID using the command that corresponds to the version of the cluster
that you installed in a previous step. You can replace 1.15 with 1.14.
Launch nodes
Important
Amazon EKS nodes are standard Amazon EC2 instances, and you are billed for them based on
normal Amazon EC2 instance prices. For more information, see Amazon EC2 pricing.
1. Open the AWS CloudFormation console. Ensure that you are in the AWS Region that you created
your Amazon EKS cluster in.
2. Choose Create stack, and then choose With new resources (standard).
3. For Specify template, select Amazon S3 URL, enter the following URL into the Amazon S3 URL box,
and then choose Next twice.
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-arm-nodegroup.yaml
4. On the Specify stack details page, fill out the following parameters accordingly:
• Stack name – Choose a stack name for your AWS CloudFormation stack. For example, you can
name it a1-preview-nodes.
• KubernetesVersion – Select the version of Kubernetes that you chose when launching your
Amazon EKS cluster.
• ClusterName – Enter the name that you used when you created your Amazon EKS cluster.
Important
This name must exactly match the name you used in Step 1: Create your Amazon EKS
cluster (p. 21); otherwise, your nodes cannot join the cluster.
• ClusterControlPlaneSecurityGroup – Choose the ControlPlaneSecurityGroup ID value
from the AWS CloudFormation output that you generated with the section called “Create a
cluster” (p. 73).
• NodeGroupName – Enter a name for your node group. This name can be used later to identify the
Auto Scaling group that is created for your nodes.
• NodeAutoScalingGroupMinSize – Enter the minimum number of nodes that your node Auto Scaling group can scale in to.
• NodeAutoScalingGroupDesiredCapacity – Enter the desired number of nodes to scale to when
your stack is created.
• NodeAutoScalingGroupMaxSize – Enter the maximum number of nodes that your node Auto
Scaling group can scale out to.
• NodeInstanceType – Choose one of the A1 or M6g instance types for your nodes, such as
a1.large.
• NodeVolumeSize – Specify a root volume size for your nodes, in GiB.
• KeyName – Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your nodes with SSH after they launch. If you don't already have an Amazon EC2 key pair, you can
create one in the AWS Management Console. For more information, see Amazon EC2 key pairs in
the Amazon EC2 User Guide for Linux Instances.
Note
If you do not provide a key pair here, the AWS CloudFormation stack creation fails.
• BootstrapArguments – Arguments to pass to the bootstrap script. For details, see https://
github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh.
• VpcId – Enter the ID for the VPC that you created in the section called “Create a cluster” (p. 73).
• Subnets – Choose the subnets that you created in the section called “Create a cluster” (p. 73).
Important
If any of the subnets are public subnets, then they must have the automatic public IP
address assignment setting enabled. If the setting is not enabled for the public subnet,
then any nodes that you deploy to that public subnet will not be assigned a public IP
address and will not be able to communicate with the cluster or other AWS services.
If the subnet was deployed before 03/26/2020 using either of the Amazon EKS AWS
CloudFormation VPC templates (p. 159), or by using eksctl, then automatic public IP
address assignment is disabled for public subnets. For information about how to enable
public IP address assignment for a subnet, see Modifying the Public IPv4 Addressing
Attribute for Your Subnet. If the node is deployed to a private subnet, then it is able to
communicate with the cluster and other AWS services through a NAT gateway.
• NodeImageAMI11x – The Amazon EC2 Systems Manager parameter for the AMI image ID. You
should not make any changes to these parameters.
5. Choose Next and then choose Next again.
6. Acknowledge that the stack might create IAM resources, and then choose Create stack.
Note
If nodes fail to join the cluster, see ??? (p. 305) in the Troubleshooting guide.
7. When your stack has finished creating, select it in the console and choose Outputs.
8. Record the NodeInstanceRole for the node group that was created. You need this when you
configure your Amazon EKS nodes.
Join nodes to a cluster
1. Download, edit, and apply the AWS IAM Authenticator configuration map.
a. Download the configuration map with the following command.
wget https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2019-11-15/aws-auth-cm.yaml
b. Open the file with your favorite text editor. Replace the <ARN of instance role (not
instance profile)> snippet with the NodeInstanceRole value that you recorded in the
previous procedure, and save the file.
Important
Do not modify any other lines in this file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
c. Apply the configuration. This command may take a few minutes to finish.
Note
If you receive any authorization or resource type errors, see Unauthorized or access
denied (kubectl) (p. 306) in the troubleshooting section.
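Assuming you saved the file with its default name, the apply command looks like the following.

kubectl apply -f aws-auth-cm.yaml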
2. Watch the status of your nodes and wait for them to reach the Ready status.
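For example, you can watch node status with the following command and wait until the STATUS column shows Ready.

kubectl get nodes --watch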
(Optional) Deploy an application
clusterrole.rbac.authorization.k8s.io/cni-metrics-helper created
serviceaccount/cni-metrics-helper created
clusterrolebinding.rbac.authorization.k8s.io/cni-metrics-helper created
deployment.extensions/cni-metrics-helper created
2. Confirm that the CNI metrics helper is running with the following command.
The pod is running if you see the cni-metrics-helper pod returned in the output.
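For example, a command similar to the following lists the pod; the kube-system namespace is an assumption based on where the helper is typically deployed.

kubectl get pods -n kube-system | grep cni-metrics-helper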
Inferentia support
This topic describes how to create an Amazon EKS cluster with nodes running Amazon EC2 Inf1 instances
and (optionally) deploy a sample application. Amazon EC2 Inf1 instances are powered by AWS Inferentia
chips, which are custom built by AWS to provide high-performance, low-cost inference in the
cloud. Machine learning models are deployed to containers using AWS Neuron, a specialized software
development kit (SDK) consisting of a compiler, run-time, and profiling tools that optimize the machine
learning inference performance of Inferentia chips. AWS Neuron supports popular machine learning
frameworks such as TensorFlow, PyTorch, and MXNet.
Considerations
• Inf1 instances are supported on Amazon EKS clusters running Kubernetes version 1.14 and later.
• Neuron device logical IDs must be contiguous. If a pod requesting multiple Neuron devices is
scheduled on an inf1.6xlarge or inf1.24xlarge instance type (which have more than one Neuron
device), that pod will fail to start if the Kubernetes scheduler selects non-contiguous device IDs. For
more information, see Device logical IDs must be contiguous on GitHub.
• Amazon EC2 Inf1 instances are not currently supported with managed node groups.
Prerequisites
• Have eksctl installed on your computer. If you don't have it installed, see Install eksctl (p. 5) for
installation instructions.
• Have kubectl installed on your computer. For more information, see Installing kubectl (p. 221).
• (Optional) Have python3 installed on your computer. If you don't have it installed, then see Python
downloads for installation instructions.
Create a cluster
To create a cluster with Inf1 Amazon EC2 instance nodes
1. Create a cluster with Inf1 Amazon EC2 instance nodes. You can replace inf1.2xlarge with any Inf1
instance type. eksctl detects that you are launching a node group with an Inf1 instance type and
will start your nodes using the EKS-optimized accelerated AMI (p. 134).
Note
You can't use IAM roles for service accounts (p. 260) with TensorFlow Serving.
Note
Note the value of the following line of the output. It's used in a later (optional) step.
When launching a node group with Inf1 instances, eksctl automatically installs the AWS Neuron
Kubernetes device plugin. This plugin advertises Neuron devices as a system resource to the
Kubernetes scheduler, which can be requested by a container. In addition to the default Amazon EKS
node IAM policies, the Amazon S3 read only access policy is added so that the sample application,
covered in a later step, can load a trained model from Amazon S3.
2. Make sure that all pods have started correctly.
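For example, listing the pods in the kube-system namespace shows whether the system pods, including the Neuron device plugin, are running.

kubectl get pods -n kube-system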
(Optional) Create a Neuron TensorFlow Serving application image
Note the repositoryUri returned in the output for use in a later step.
2. Create a Dockerfile named Dockerfile.tf-serving with the following contents. The Dockerfile
contains the commands to build a Neuron optimized TensorFlow Serving application image. Neuron
TensorFlow Serving uses the same API as normal TensorFlow Serving. The only differences are that
the saved model must be compiled for Inferentia and the entry point is a different binary.
FROM amazonlinux:2
4. Build the Docker image and upload it to the Amazon ECR repository created in a previous step.
Note
If you receive permission related issues from Docker, then you may need to configure
Docker for non-root user use. For more information, see Manage Docker as a non-root user
in the Docker documentation.
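A sketch of those commands follows; the image name and tag here match the image referenced later in the deployment manifest, and the registry URI is a placeholder that should match the repositoryUri you recorded earlier.

docker build . -f Dockerfile.tf-serving -t tensorflow-model-server-neuron
aws ecr get-login-password --region region-code | docker login --username AWS --password-stdin 111122223333.dkr.ecr.region-code.amazonaws.com
docker tag tensorflow-model-server-neuron:latest 111122223333.dkr.ecr.region-code.amazonaws.com/tensorflow-model-server-neuron:1.15.0
docker push 111122223333.dkr.ecr.region-code.amazonaws.com/tensorflow-model-server-neuron:1.15.0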
(Optional) Deploy a TensorFlow Serving application image
The sample deployment manifest manages two containers: The Neuron runtime container image and the
TensorFlow Serving application. For more information about the Neuron container image, see Tutorial:
Neuron container tools on GitHub. The Neuron runtime runs as a sidecar container image and is used to
interact with the Inferentia chips on your nodes. The two containers communicate over a Unix domain
socket placed in a shared mounted volume. At start-up, the application image will fetch your model from
Amazon S3, launch Neuron TensorFlow Serving with the saved model, and wait for prediction requests.
The number of Inferentia devices can be adjusted using the aws.amazon.com/neuron resource in the Neuron runtime container specification. The runtime expects 128 2-MiB pages per Inferentia device, so hugepages-2Mi must be set to 256Mi multiplied by the number of Inferentia devices. To access Inferentia devices, the Neuron runtime requires the SYS_ADMIN and IPC_LOCK capabilities; however, it drops these capabilities at initialization, before opening a gRPC socket.
1. Add the AmazonS3ReadOnlyAccess IAM policy to the node instance role that was created in step 1
of the section called “Create a cluster” (p. 77). This is necessary so that the sample application can
load a trained model from Amazon S3.
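For example, with the AWS CLI you can attach the policy as follows; replace the role name placeholder with your node instance role.

aws iam attach-role-policy --role-name <node-instance-role-name> --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess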
2. Create a file named bert_deployment.yaml with the contents below. Update 111122223333,
region-code, and bert/saved_model with your account ID, Region code, and saved model name
and location. The model name is for identification purposes when a client makes a request to the
TensorFlow server. This example uses a model name to match a sample BERT client script that will
be used in a later step for sending prediction requests. You can also replace 1.0.7865.0 with a
later version. For the latest version, see Neuron Runtime Release Notes on GitHub or enter the
following command.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: eks-neuron-test
  labels:
    app: eks-neuron-test
    role: master
spec:
  replicas: 2
  selector:
    matchLabels:
      app: eks-neuron-test
      role: master
  template:
    metadata:
      labels:
        app: eks-neuron-test
        role: master
    spec:
      volumes:
        - name: sock
          emptyDir: {}
      containers:
        - name: eks-neuron-test
          image: 111122223333.dkr.ecr.region-code.amazonaws.com/tensorflow-model-server-neuron:1.15.0
          command:
            - /usr/local/bin/tensorflow_model_server_neuron
          args:
            - --port=9000
            - --rest_api_port=8500
            - --model_name=bert_mrpc_hc_gelus_b4_l24_0926_02
            - --model_base_path=s3://bert/saved_model
          ports:
            - containerPort: 8500
            - containerPort: 9000
          imagePullPolicy: IfNotPresent
          env:
            - name: AWS_REGION
              value: "region-code"
            - name: S3_USE_HTTPS
              value: "1"
            - name: S3_VERIFY_SSL
              value: "0"
            - name: AWS_LOG_LEVEL
              value: "3"
            - name: NEURON_RTD_ADDRESS
              value: unix:/sock/neuron.sock
          resources:
            limits:
              cpu: 4
              memory: 4Gi
            requests:
              cpu: "1"
              memory: 1Gi
          volumeMounts:
            - name: sock
              mountPath: /sock
        - name: neuron-rtd
          image: 790709498068.dkr.ecr.region-code.amazonaws.com/neuron-rtd:1.0.7865.0
          securityContext:
            capabilities:
              add:
                - SYS_ADMIN
                - IPC_LOCK
          volumeMounts:
            - name: sock
              mountPath: /sock
          resources:
            limits:
              hugepages-2Mi: 256Mi
              aws.amazon.com/neuron: 1
            requests:
              memory: 1024Mi
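3. Apply the deployment manifest to your cluster; for example:

kubectl apply -f bert_deployment.yaml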
4. Create a file named bert_service.yaml with the following contents. The HTTP and gRPC ports
are opened for accepting prediction requests.
kind: Service
apiVersion: v1
metadata:
  name: eks-neuron-test
  labels:
    app: eks-neuron-test
spec:
  type: ClusterIP
  ports:
    - name: http-tf-serving
      port: 8500
      targetPort: 8500
    - name: grpc-tf-serving
      port: 9000
      targetPort: 9000
  selector:
    app: eks-neuron-test
    role: master
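5. Apply the service manifest to your cluster; for example:

kubectl apply -f bert_service.yaml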
(Optional) Make predictions against your TensorFlow Serving service
2. Download the sample BERT client from the Neuron GitHub repository.
curl https://raw.githubusercontent.com/aws/aws-neuron-sdk/master/src/examples/tensorflow/k8s_bert_demo/bert_client.py > bert_client.py
python3 bert_client.py
Output
...
Inference successful: 0
Inference successful: 1
Inference successful: 2
Inference successful: 3
Inference successful: 4
Inference successful: 5
Inference successful: 6
Inference successful: 7
Inference successful: 8
Inference successful: 9
...
Inference successful: 91
Inference successful: 92
Inference successful: 93
Inference successful: 94
Inference successful: 95
Inference successful: 96
Inference successful: 97
Inference successful: 98
Inference successful: 99
Ran 100 inferences successfully.
Viewing API server flags
When a cluster is first created, the initial API server logs include the flags that were used to start the API
server. If you enable API server logs when you launch the cluster, or shortly thereafter, these logs are
sent to CloudWatch Logs and you can view them there.
1. If you have not already done so, enable API server logs for your Amazon EKS cluster.
Private clusters
This topic describes how to deploy a private cluster without outbound internet access. If you're not
familiar with Amazon EKS networking, see De-mystifying cluster networking for Amazon EKS worker
nodes.
Requirements
The following requirements must be met to run Amazon EKS in a private cluster without outbound
internet access.
• A container image must be in or copied to Amazon Elastic Container Registry (Amazon ECR) or to a
registry inside the VPC to be pulled. For more information, see the section called “Creating local copies
of container images” (p. 84).
• Endpoint private access is required for nodes to register with the cluster endpoint. Endpoint public
access is optional. For more information, see ??? (p. 46).
• You may need to include the VPC endpoints found at the section called “VPC endpoints for private
clusters” (p. 84).
• You must add the following text to the bootstrap arguments when launching self-managed nodes; a sketch is shown after this list. This text bypasses the Amazon EKS introspection and does not require access to the Amazon EKS API from within the VPC. Replace cluster-endpoint and cluster-certificate-authority with the values from your Amazon EKS cluster.
• The aws-auth ConfigMap must be created from within the VPC. For more information about creating the aws-auth ConfigMap, see ??? (p. 217).
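A minimal sketch of those bootstrap arguments, using the cluster-endpoint and cluster-certificate-authority placeholders from the list above, looks like the following; these are options of the Amazon EKS AMI bootstrap script, so verify them against the bootstrap.sh usage for your AMI version.

--apiserver-endpoint cluster-endpoint --b64-cluster-ca cluster-certificate-authority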
Considerations
Here are some things to consider when running Amazon EKS in a private cluster without outbound
internet access.
• The App Mesh sidecar injector for Kubernetes is supported. For more information, see App Mesh
sidecar injector on GitHub.
• The App Mesh controller for Kubernetes is not supported. For more information, see App Mesh
controller on GitHub.
Creating local copies of container images
1. Create an Amazon ECR repository. For more information, see Creating a repository.
2. Pull the container image from the external registry using docker pull.
3. Tag your image with the Amazon ECR registry, repository, and optional image tag name combination
using docker tag.
4. Authenticate to the registry. For more information, see Registry authentication.
5. Push the image to Amazon ECR using docker push.
Note
Be sure to update your resource configuration to use the new image location.
The following example pulls the amazon/aws-node-termination-handler image, using tag v1.3.1-
linux-amd64, from Docker Hub and creates a local copy in Amazon ECR.
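A sketch of those steps with the AWS CLI and Docker follows; the repository name eks/aws-node-termination-handler and the account ID 111122223333 are placeholders.

aws ecr create-repository --repository-name eks/aws-node-termination-handler --region region-code
docker pull amazon/aws-node-termination-handler:v1.3.1-linux-amd64
docker tag amazon/aws-node-termination-handler:v1.3.1-linux-amd64 111122223333.dkr.ecr.region-code.amazonaws.com/eks/aws-node-termination-handler:v1.3.1-linux-amd64
aws ecr get-login-password --region region-code | docker login --username AWS --password-stdin 111122223333.dkr.ecr.region-code.amazonaws.com
docker push 111122223333.dkr.ecr.region-code.amazonaws.com/eks/aws-node-termination-handler:v1.3.1-linux-amd64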
VPC endpoints for private clusters
• com.amazonaws.region.ec2
• com.amazonaws.region.ecr.api
• com.amazonaws.region.ecr.dkr
• com.amazonaws.region.s3 – For pulling container images
• com.amazonaws.region.logs – For CloudWatch Logs
• com.amazonaws.region.sts – If using AWS Fargate or IAM roles for service accounts
• com.amazonaws.region.elasticloadbalancing – If using Application Load Balancers
• com.amazonaws.region.autoscaling – If using Cluster Autoscaler
• com.amazonaws.region.appmesh-envoy-management – If using App Mesh
The following table compares Amazon EKS managed node groups, self-managed nodes, and AWS Fargate.

Criteria | Amazon EKS managed node groups | Self-managed nodes | AWS Fargate
Can run workloads that require the Inferentia chip | No | Yes (p. 76) – Linux nodes only | No
Can run workloads that require a GPU | Yes (p. 131) – Linux nodes only | Yes (p. 131) – Linux nodes only | No
Pods share a kernel runtime environment with other pods | Yes – All of your pods on each of your nodes | Yes – All of your pods on each of your nodes | No – Each pod has a dedicated kernel
Pods share CPU, memory, storage, and network resources with other pods | Yes – Can result in unused resources on each node | Yes – Can result in unused resources on each node | No – Each pod has dedicated resources and can be sized independently to maximize resource utilization
Pods can use more hardware and memory than requested in pod specs | Yes – If the pod requires more resources than requested, and resources are available on the node, the pod can use the additional resources | Yes – If the pod requires more resources than requested, and resources are available on the node, the pod can use the additional resources | No, but the pod can be re-deployed using a larger vCPU and memory configuration
Must deploy and manage Amazon EC2 instances | Yes (p. 89) – automated through Amazon EKS | Yes – Manual configuration, or using Amazon EKS-provided AWS CloudFormation templates to deploy Linux (x86) (p. 96), Linux (ARM) (p. 72), or Windows (p. 66) nodes | No
Can deploy your own custom CNI to nodes | Yes | Yes | No – You don't manage nodes
Must update node AMI yourself | Yes – You're notified in the Amazon EKS console when updates are available and can perform the update with one click in the console | Yes – Using tools other than the Amazon EKS console, because self-managed nodes can't be managed with the Amazon EKS console | No – You don't manage nodes
Must update node Kubernetes version yourself | Yes – You're notified in the Amazon EKS console when updates are available and can perform the update with one mouse click | Yes – Using tools other than the Amazon EKS console, because self-managed nodes can't be managed with the Amazon EKS console | No – You don't manage nodes
Can use Amazon EBS storage with pods | Yes (p. 144) | Yes (p. 144) | No
Can use Amazon EFS storage with pods | Yes (p. 148) | Yes (p. 148) | No
Can use Amazon FSx for Lustre storage with pods | Yes (p. 152) | Yes (p. 152) | No

Managed node groups
With Amazon EKS managed node groups, you don’t need to separately provision or register the Amazon
EC2 instances that provide compute capacity to run your Kubernetes applications. You can create,
update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon
EKS-optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to
ensure that your applications stay available.
All managed nodes are provisioned as part of an Amazon EC2 Auto Scaling group that is managed for
you by Amazon EKS. All resources including the instances and Auto Scaling groups run within your AWS
account. Each node group uses the Amazon EKS-optimized Amazon Linux 2 AMI and can run across
multiple Availability Zones that you define.
You can add a managed node group to new or existing clusters using the Amazon EKS console, eksctl,
AWS CLI, AWS API, or infrastructure as code tools including AWS CloudFormation. Nodes launched as
part of a managed node group are automatically tagged for auto-discovery by the Kubernetes cluster
autoscaler and you can use the node group to apply Kubernetes labels to nodes and update them at any
time.
There are no additional costs to use Amazon EKS managed node groups; you only pay for the AWS resources that you provision. These include Amazon EC2 instances, Amazon EBS volumes, Amazon EKS cluster
hours, and any other AWS infrastructure. There are no minimum fees and no upfront commitments.
To get started with a new Amazon EKS cluster and managed node group, see Getting started with the
AWS Management Console (p. 13).
To add a managed node group to an existing cluster, see Creating a managed node group (p. 89).
• You can create multiple managed node groups within a single cluster. For example, you could create
one node group with the standard Amazon EKS-optimized Amazon Linux 2 AMI for some workloads
and another with the GPU variant for workloads that require GPU support.
• If your managed node group encounters a health issue, Amazon EKS returns an error message to help
you to diagnose the issue. For more information, see Managed node group errors (p. 307).
• Amazon EKS adds Kubernetes labels to managed node group instances. These Amazon EKS-provided labels are prefixed with eks.amazonaws.com.
• Amazon EKS automatically drains nodes using the Kubernetes API during terminations or updates.
Updates respect the pod disruption budgets that you set for your pods.
• There are no additional costs to use Amazon EKS managed node groups; you only pay for the AWS resources that you provision.
Managed node groups (p. 87) are supported on Amazon EKS clusters beginning with Kubernetes
version 1.14 and platform version (p. 61) eks.3. Existing clusters can update to version 1.14 or later to
take advantage of this feature. For more information, see Updating an Amazon EKS cluster Kubernetes
version (p. 34).
If this is your first time launching an Amazon EKS managed node group, we recommend that you follow
one of our Getting started with Amazon EKS (p. 3) guides instead. The guides provide complete end-to-
end walkthroughs for creating an Amazon EKS cluster with nodes.
Important
Amazon EKS nodes are standard Amazon EC2 instances, and you are billed for them based on
normal Amazon EC2 prices. For more information, see Amazon EC2 Pricing.
Select the tab with the name of the tool that you'd like to create your managed node group with.
eksctl
You can check your installed eksctl version with the following command.
eksctl version
• Create your managed node group with the following eksctl command, replacing the variable text with your own values. The command must include the --managed flag; a sketch is shown below.
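A minimal sketch of the command, using placeholder cluster and node group names, instance type, and node counts, follows; adjust the values for your environment and verify the flags against your eksctl version.

eksctl create nodegroup \
  --cluster my-cluster \
  --name my-managed-nodes \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4 \
  --managed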
To launch your managed node group using the AWS Management Console
1. Wait for your cluster status to show as ACTIVE. You cannot create a managed node group for a
cluster that is not yet ACTIVE.
2. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.
3. Choose the name of the cluster that you want to create your managed node group in.
4. On the cluster page, select the Compute tab, and then choose Add Node Group.
5. On the Configure node group page, fill out the parameters accordingly, and then choose Next.
6. On the Set compute configuration page, fill out the parameters accordingly, and then choose
Next.
• AMI type – Choose Amazon Linux 2 (AL2_x86_64) for non-GPU instances, or Amazon Linux 2
GPU Enabled (AL2_x86_64_GPU) for GPU instances.
• Instance type – Choose the instance type to use in your managed node group. Each Amazon
EC2 instance type supports a maximum number of elastic network interfaces (ENIs) and each
ENI supports a maximum number of IP addresses. Since each worker node and pod is assigned
its own IP address, it's important to choose an instance type that will support the maximum
number of pods that you want to run on each worker node. For a list of the number of ENIs
and IP addresses supported by instance types, see IP addresses per network interface per
instance type. For example, the t3.medium instance type supports a maximum of 18 IP
addresses for the worker node and pods. Some instance types might not be available in all
Regions.
• Disk size – Enter the disk size (in GiB) to use for your node's root volume.
7. On the Setup scaling policies page, fill out the parameters accordingly, and then choose Next.
Note
Amazon EKS does not automatically scale your node group in or out. However, you can
configure the Kubernetes Cluster Autoscaler (p. 51) to do this for you.
• Minimum size – Specify the minimum number of nodes that the managed node group can
scale in to.
• Maximum size – Specify the maximum number of nodes that the managed node group can
scale out to.
• Desired size – Specify the current number of nodes that the managed node group should
maintain at launch.
8. On the Review and create page, review your managed node group configuration and choose
Create.
Note
If nodes fail to join the cluster, see ??? (p. 305) in the Troubleshooting guide.
9. Watch the status of your nodes and wait for them to reach the Ready status.
10. (GPU nodes only) If you chose a GPU instance type and the Amazon EKS-optimized accelerated
AMI, then you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your
cluster with the following command.
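For example, the plugin can be applied from the NVIDIA repository; the version in the URL is an assumption, so use the release that matches your cluster.

kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.6.0/nvidia-device-plugin.yml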
11. (Optional) Deploy a sample Linux application (p. 193) – Deploy a sample application to test
your cluster and Linux nodes.
Now that you have a working Amazon EKS cluster with nodes, you are ready to start installing
Kubernetes add-ons and deploying applications to your cluster. The following documentation topics help
you to extend the functionality of your cluster.
• the section called “Cluster Autoscaler” (p. 51) – Configure the Kubernetes Cluster Autoscaler to
automatically adjust the number of nodes in your node groups.
• the section called “Sample deployment” (p. 193) – Deploy a sample application to test your cluster
and Linux nodes.
• Deploy a Windows sample application (p. 71) – Deploy a sample application to test your cluster and
Windows nodes.
• Cluster management (p. 221) – Learn how to use important tools for managing your cluster.
Updating a managed node group
• You have updated the Kubernetes version for your Amazon EKS cluster, and you want to update your
nodes to use the same Kubernetes version.
• A new AMI release version is available for your managed node group. For more information, see
Amazon EKS-optimized Amazon Linux 2 AMI versions (p. 132).
• You want to adjust the minimum, maximum, or desired count of the instances in your managed node
group.
• You want to add or remove Kubernetes labels from the instances in your managed node group.
• You want to add or remove AWS tags from your managed node group.
If there is a newer AMI release version for your managed node group's Kubernetes version than the one
your node group is running, you can update it to use that new AMI version. If your cluster is running a
newer Kubernetes version than your node group, you can update the node group to use the latest AMI
release version that matches your cluster's Kubernetes version.
Note
You cannot roll back a node group to an earlier Kubernetes version or AMI version.
When a node in a managed node group is terminated due to a scaling action or update, the pods in that
node are drained first. For more information, see Managed node update behavior (p. 94).
1. (Optional) If you are using the Kubernetes Cluster Autoscaler, scale the deployment down to zero
replicas to avoid conflicting scaling actions.
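For example, if the Cluster Autoscaler runs as a deployment named cluster-autoscaler in the kube-system namespace (a common setup, but confirm yours), you can scale it down and later back up with commands like these.

kubectl scale deployments/cluster-autoscaler --replicas=0 -n kube-system
kubectl scale deployments/cluster-autoscaler --replicas=1 -n kube-system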
2. Select the tab with the name of the tool that you'd like to upgrade the version with.
If you select a node group from the table and an update is available for it, you'll receive a
notification on the Node Group configuration page. If so, you can select the Update now
button on the Node Group configuration page.
Note
Update now only appears if there is an update available. If you do not see this text,
then your node group is running the latest available version.
4. On the Update AMI release version page, select the Available AMI release version that
you want to update to, select one of the following options for Update strategy, and choose
Update.
• Rolling update — This option respects pod disruption budgets for your cluster and the
update fails if Amazon EKS is unable to gracefully drain the pods that are running on this
node group due to a pod disruption budget issue.
• Force update — This option does not respect pod disruption budgets and it forces node
restarts.
eksctl
Upgrade a managed nodegroup to the latest AMI release of the same Kubernetes version that is
currently deployed on the nodes with the following command.
You can upgrade a nodegroup to a version that is one minor release later than the nodegroup's
current Kubernetes version, up to the cluster's Kubernetes version. For example, if you have a
cluster running Kubernetes 1.17, you can upgrade nodes currently running Kubernetes 1.16 to
version 1.17 with the following command.
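Sketches of these two upgrades, using placeholder cluster and node group names, look like the following; verify the flags against your eksctl version.

eksctl upgrade nodegroup --name=my-nodegroup --cluster=my-cluster
eksctl upgrade nodegroup --name=my-nodegroup --cluster=my-cluster --kubernetes-version=1.17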
3. (Optional) If you are using the Kubernetes Cluster Autoscaler, scale the deployment back to your
desired amount of replicas.
• Tags — Add tags to or remove tags from your node group resource. These tags are only applied
to the Amazon EKS node group, and they do not propagate to other resources, such as subnets or
Amazon EC2 instances in the node group.
• Kubernetes labels — Add or remove Kubernetes labels to the nodes in your node group. The
labels shown here are only the labels that you have applied with Amazon EKS. Other labels may
exist on your nodes that are not shown here.
5. On the Edit node group page, edit the Group size if necessary.
• Minimum size — Specify the minimum number of nodes that the managed node group can scale in to.
• Maximum size — Specify the maximum number of nodes that the managed node group can scale
out to. Managed node groups can support up to 100 nodes by default.
• Desired size — Specify the current number of nodes that the managed node group should
maintain.
6. When you are finished editing, choose Save changes.
Managed node update behavior
1. Amazon EKS creates a new Amazon EC2 launch template version for the Auto Scaling group
associated with your node group. The new template uses the target AMI for the update.
2. The Auto Scaling group is updated to use the latest launch template with the new AMI.
3. The Auto Scaling group maximum size and desired size are incremented by twice the number of
distinct Availability Zones of the Auto Scaling group to ensure at least one new instance comes up in
every Availability Zone of your node group.
4. The Auto Scaling group launches a new instance with the new AMI to satisfy the increased desired size
of the node group.
5. Amazon EKS checks the nodes in the node group for the eks.amazonaws.com/nodegroup-image
label, and it cordons all of the nodes in the node group that are not labeled with the latest AMI
ID. This prevents nodes that have already been updated from a previous failed update from being
cordoned.
6. Amazon EKS randomly selects a node in your node group and sends a termination signal to the Auto Scaling group. Then Amazon EKS sends a signal to drain the pods from the node.* After the node is
drained, it is terminated. This step is repeated until all of the nodes are using the new AMI version.
7. The Auto Scaling group maximum size and desired size are decremented by 1 to return to your pre-
update values.
* If pods do not drain from a node (for example, if a pod disruption budget is too restrictive) for 15
minutes, then one of two things happens:
• If the update is not forced, then the update fails and reports an error.
• If the update is forced, then the pods that could not be drained are deleted.
Deleting a managed node group
When you delete a managed node group, Amazon EKS randomly selects a node in your node group
and sends a termination signal to the Auto Scaling group. Then Amazon EKS sends a signal to drain
the pods from the node. If pods do not drain from a node (for example, if a pod disruption budget is
too restrictive) for 15 minutes, then the pods are deleted. After the node is drained, it is terminated.
This step is repeated until all of the nodes in the Auto Scaling group are terminated, and then the Auto
Scaling group is deleted.
Important
If you delete a managed node group that uses a node IAM role that is not used by any
other managed node group in the cluster, the role is removed from the aws-auth
ConfigMap (p. 217). If any self-managed node groups in the cluster are using the same node
IAM role, the self-managed nodes will move to the NotReady status and cluster operation will
be disrupted. You can add the mapping back to the ConfigMap to minimize disruption.
Self-managed nodes
Worker machines in Kubernetes are called nodes. Amazon EKS nodes run in your AWS account and
connect to your cluster's control plane via the cluster API server endpoint. You deploy one or more nodes
into a node group. A node group is one or more Amazon EC2 instances that are deployed in an Amazon
EC2 Auto Scaling group. All instances in a node group must:
• Be the same instance type
• Be running the same Amazon Machine Image (AMI)
• Use the same Amazon EKS node IAM role
A cluster can contain several node groups, and each node group can contain several nodes.
Amazon EKS nodes are standard Amazon EC2 instances, and you are billed for them based on normal
EC2 prices. For more information, see Amazon EC2 pricing.
Amazon EKS provides a specialized Amazon Machine Image (AMI) called the Amazon EKS-optimized AMI.
This AMI is built on top of Amazon Linux 2, and is configured to serve as the base image for Amazon EKS
nodes. The AMI is configured to work with Amazon EKS out of the box, and it includes Docker, kubelet,
and the AWS IAM Authenticator. The AMI also contains a specialized bootstrap script that allows it to
discover and connect to your cluster's control plane automatically.
Note
You can track security or privacy events for Amazon Linux 2 at the Amazon Linux security center
or subscribe to the associated RSS feed. Security and privacy events include an overview of the
issue, what packages are affected, and how to update your instances to correct the issue.
If you restrict access to your cluster's public endpoint using CIDR blocks, it is recommended that you also
enable private endpoint access so that nodes can communicate with the cluster. Without the private
endpoint enabled, the CIDR blocks that you specify for public access must include the egress sources
from your VPC. For more information, see Amazon EKS cluster endpoint access control (p. 46).
To add self-managed nodes to your Amazon EKS cluster, see Launching self-managed Amazon Linux 2
nodes (p. 96). If you follow the steps in the guide, the required tag is added to the node for you. If
you launch self-managed nodes manually, then you must add the following tag to each node. For more
information, see Adding and deleting tags on an individual resource.
Key Value
kubernetes.io/cluster/<cluster-name> owned
For more information about nodes from a general Kubernetes perspective, see Nodes in the Kubernetes
documentation.
Topics
• Launching self-managed Amazon Linux 2 nodes (p. 96)
• Launching self-managed Windows nodes (p. 100)
Launching self-managed Amazon Linux 2 nodes
If this is your first time launching Amazon EKS Linux nodes, we recommend that you follow one of
our Getting started with Amazon EKS (p. 3) guides instead. The guides provide complete end-to-end
walkthroughs for creating an Amazon EKS cluster with nodes.
Important
Amazon EKS nodes are standard Amazon EC2 instances, and you are billed for them based on
normal Amazon EC2 instance prices. For more information, see Amazon EC2 pricing.
Choose the tab below that corresponds to your desired node creation method.
eksctl
This procedure requires eksctl version 0.25.0-rc.0 or later. You can check your version with the
following command:
eksctl version
1. Create your node group with the following command. Replace the example values with your
own values.
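A sketch of a self-managed node group command, with placeholder names, instance type, and node counts, looks like the following; adjust the values and verify the flags against your eksctl version.

eksctl create nodegroup \
  --cluster my-cluster \
  --name al-nodes \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4 \
  --ssh-access \
  --ssh-public-key my-key \
  --managed=false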
Note
If nodes fail to join the cluster, see ??? (p. 305) in the Troubleshooting guide.
Note
For more information on the available options for eksctl create nodegroup, see the
project README on GitHub or view the help page with the following command.
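For example:

eksctl create nodegroup --help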
Output:
You'll see several lines of output as the nodes are created. The last line of output is similar to
the following example line.
2. (Optional) Deploy a sample Linux application (p. 193) — Deploy a sample application to test
your cluster and Linux nodes.
• You have created a VPC and security group that meet the requirements for an Amazon EKS
cluster. For more information, see Cluster VPC considerations (p. 163) and Amazon EKS security
group considerations (p. 166). The Getting started with Amazon EKS (p. 3) guide creates a
VPC that meets the requirements, or you can also follow Creating a VPC for your Amazon EKS
cluster (p. 159) to create one manually.
• You have created an Amazon EKS cluster and specified that it use the VPC and security group that
meet the requirements of an Amazon EKS cluster. For more information, see Creating an Amazon
EKS cluster (p. 28).
1. Wait for your cluster status to show as ACTIVE. If you launch your nodes before the cluster is
active, the nodes will fail to register with the cluster and you will have to relaunch them.
2. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation
3. Choose Create stack.
4. For Specify template, select Amazon S3 URL, then copy the following URL, paste it into
Amazon S3 URL, and select Next twice.
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-nodegroup.yaml
5. On the Quick create stack page, fill out the following parameters accordingly:
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it <cluster-name>-nodes.
• ClusterName: Enter the name that you used when you created your Amazon EKS cluster.
Important
This name must exactly match the name you used in Step 1: Create your Amazon EKS
cluster (p. 21); otherwise, your nodes cannot join the cluster.
• ClusterControlPlaneSecurityGroup: Choose the SecurityGroups value from the AWS
CloudFormation output that you generated with Create your Amazon EKS cluster VPC (p. 18).
• NodeGroupName: Enter a name for your node group. This name can be used later to identify
the Auto Scaling node group that is created for your nodes.
• NodeAutoScalingGroupMinSize: Enter the minimum number of nodes that your node Auto
Scaling group can scale in to.
• NodeAutoScalingGroupDesiredCapacity: Enter the desired number of nodes to scale to when
your stack is created.
• NodeAutoScalingGroupMaxSize: Enter the maximum number of nodes that your node Auto
Scaling group can scale out to.
• NodeInstanceType: Choose an instance type for your nodes.
Note
The supported instance types for the latest version of the Amazon VPC CNI plugin
for Kubernetes are shown here. You may need to update your CNI version to take
advantage of the latest supported instance types. For more information, see Amazon
VPC CNI plugin for Kubernetes upgrades (p. 183).
Important
Some instance types might not be available in all Regions.
• NodeImageIdSSMParam: Pre-populated with the Amazon EC2 Systems Manager parameter
of the current recommended Amazon EKS-optimized Linux AMI ID for a Kubernetes version.
If you want to use the Amazon EKS-optimized accelerated AMI, then replace amazon-
linux-2 with amazon-linux-2-gpu. If you want to use a different Kubernetes minor
version supported with Amazon EKS, then you can replace 1.x with a different supported
version (p. 58). We recommend specifying the same Kubernetes version as your cluster.
Note
The Amazon EKS node AMI is based on Amazon Linux 2. You can track security or
privacy events for Amazon Linux 2 at the Amazon Linux Security Center or subscribe
to the associated RSS feed. Security and privacy events include an overview of the
issue, what packages are affected, and how to update your instances to correct the
issue.
• NodeImageId: (Optional) If you are using your own custom AMI (instead of the Amazon EKS-
optimized AMI), enter a node AMI ID for your Region. If you specify a value here, it overrides
any values in the NodeImageIdSSMParam field.
• NodeVolumeSize: Specify a root volume size for your nodes, in GiB.
• KeyName: Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your nodes with SSH after they launch. If you don't already have an Amazon EC2 key pair,
you can create one in the AWS Management Console. For more information, see Amazon EC2
key pairs in the Amazon EC2 User Guide for Linux Instances.
Note
If you do not provide a keypair here, the AWS CloudFormation stack creation fails.
• BootstrapArguments: Specify any optional arguments to pass to the node bootstrap script,
such as extra kubelet arguments. For more information, view the bootstrap script usage
information on GitHub.
Note
• If you are launching nodes into a private VPC without outbound internet access,
then you need to include the following arguments.
• If you want to assign IP addresses to pods that are from a different CIDR block than
the block that includes the IP address for the node, then you may need to add a
CIDR block to your VPC and specify an argument to support the capability. For
more information, see the section called “CNI custom networking” (p. 177).
• DisableIMDSv1: Each node supports the Instance Metadata Service Version 1 (IMDSv1) and
IMDSv2 by default, but you can disable IMDSv1. Select true if you don't want any nodes in the
node group, or any pods scheduled on the nodes in the node group to use IMDSv1. For more
information about IMDS, see Configuring the instance metadata service.
• VpcId: Enter the ID for the VPC that you created in Create your Amazon EKS cluster
VPC (p. 18).
• Subnets: Choose the subnets that you created in Create your Amazon EKS cluster VPC (p. 18).
If you created your VPC using the steps described at Creating a VPC for your Amazon EKS
cluster (p. 159), then specify only the private subnets within the VPC for your nodes to
launch into.
Important
If any of the subnets are public subnets, then they must have the automatic public
IP address assignment setting enabled. If the setting is not enabled for the public
subnet, then any nodes that you deploy to that public subnet will not be assigned
a public IP address and will not be able to communicate with the cluster or other
AWS services. If the subnet was deployed before 03/26/2020 using either of the
Amazon EKS AWS CloudFormation VPC templates (p. 159), or by using eksctl,
then automatic public IP address assignment is disabled for public subnets. For
information about how to enable public IP address assignment for a subnet, see
Modifying the Public IPv4 Addressing Attribute for Your Subnet. If the node is
deployed to a private subnet, then it is able to communicate with the cluster and
other AWS services through a NAT gateway.
Important
If the subnets do not have internet access, then make sure that you're aware of the
considerations and extra steps in ??? (p. 82).
6. Acknowledge that the stack might create IAM resources, and then choose Create stack.
7. When your stack has finished creating, select it in the console and choose Outputs.
8. Record the NodeInstanceRole for the node group that was created. You need this when you
configure your Amazon EKS nodes.
Note
If you launched nodes inside a private VPC without outbound internet access, then you must
enable nodes to join your cluster from within the VPC.
1. Download, edit, and apply the AWS IAM Authenticator configuration map.
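a. Download the configuration map with the following command (the same configuration map URL used earlier in this guide).

wget https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2019-11-15/aws-auth-cm.yaml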
b. Open the file with your favorite text editor. Replace the <ARN of instance role (not
instance profile)> snippet with the NodeInstanceRole value that you recorded in the
previous procedure, and save the file.
Important
Do not modify any other lines in this file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
c. Apply the configuration. This command may take a few minutes to finish.
Note
If you receive any authorization or resource type errors, see Unauthorized or access
denied (kubectl) (p. 306) in the troubleshooting section.
Note
If nodes fail to join the cluster, see ??? (p. 305) in the Troubleshooting guide.
2. Watch the status of your nodes and wait for them to reach the Ready status.
3. (GPU nodes only) If you chose a GPU instance type and the Amazon EKS-optimized accelerated
AMI, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your cluster
with the following command.
4. (Optional) Deploy a sample Linux application (p. 193) — Deploy a sample application to test
your cluster and Linux nodes.
Launching self-managed Windows nodes
You must enable Windows support for your cluster and we recommend that you review important
considerations before you launch a Windows node group. For more information, see Enabling Windows
support (p. 67).
Choose the tab below that corresponds to your desired node creation method:
eksctl
If you don't already have an Amazon EKS cluster and a Linux node group to add a Windows node
group to, then we recommend that you follow the Getting started with eksctl (p. 3) guide instead.
The guide provides a complete end-to-end walkthrough for creating an Amazon EKS cluster with
Linux and Windows nodes. If you have an existing Amazon EKS cluster and a Linux node group to
add a Windows node group to, then complete the following steps to add the Windows node group.
You can check your installed eksctl version with the following command.
eksctl version
1. Create your node group with the following command. Replace the example values with your
own values.
Note
If nodes fail to join the cluster, see ??? (p. 305) in the Troubleshooting guide.
Note
For more information on the available options for eksctl create nodegroup, see the
project README on GitHub or view the help page with the following command.
Output:
You'll see several lines of output as the nodes are created. The last line of output is similar to
the following example line.
2. (Optional) Deploy a Windows sample application (p. 71) — Deploy a sample application to test
your cluster and Windows nodes.
• You have an existing Amazon EKS cluster and a Linux node group. If you don't have these
resources, we recommend that you follow one of our Getting started with Amazon EKS (p. 3)
guides to create them. The guides provide a complete end-to-end walkthrough for creating an
Amazon EKS cluster with Linux nodes.
• You have created a VPC and security group that meet the requirements for an Amazon EKS
cluster. For more information, see Cluster VPC considerations (p. 163) and Amazon EKS security
group considerations (p. 166). The Getting started with Amazon EKS (p. 3) guide creates a
VPC that meets the requirements, or you can also follow Creating a VPC for your Amazon EKS
cluster (p. 159) to create one manually.
1. Wait for your cluster status to show as ACTIVE. If you launch your nodes before the cluster is
active, the nodes will fail to register with the cluster and you will have to relaunch them.
2. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation
3. Choose Create stack.
4. For Specify template, select Amazon S3 URL, then copy the following URL, paste it into
Amazon S3 URL, and select Next twice.
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-windows-nodegroup.yaml
5. On the Quick create stack page, fill out the following parameters accordingly:
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it cluster-name-nodes.
• ClusterName: Enter the name that you used when you created your Amazon EKS cluster.
Important
This name must exactly match the name you used in Step 1: Create your Amazon EKS
cluster (p. 21); otherwise, your nodes cannot join the cluster.
• ClusterControlPlaneSecurityGroup: Choose the SecurityGroups value from the AWS
CloudFormation output that you generated with Create your Amazon EKS cluster VPC (p. 18).
• NodeGroupName: Enter a name for your node group. This name can be used later to identify
the Auto Scaling node group that is created for your nodes.
• NodeAutoScalingGroupMinSize: Enter the minimum number of nodes that your node Auto
Scaling group can scale in to.
• NodeAutoScalingGroupDesiredCapacity: Enter the desired number of nodes to scale to when
your stack is created.
• NodeAutoScalingGroupMaxSize: Enter the maximum number of nodes that your node Auto
Scaling group can scale out to.
• NodeInstanceType: Choose an instance type for your nodes.
Note
The supported instance types for the latest version of the Amazon VPC CNI plugin
for Kubernetes are shown here. You may need to update your CNI version to take
advantage of the latest supported instance types. For more information, see Amazon
VPC CNI plugin for Kubernetes upgrades (p. 183).
• NodeImageIdSSMParam: Pre-populated with the Amazon EC2 Systems Manager parameter
of the current recommended Amazon EKS-Optimized Windows Core AMI ID. If you want to
use the full version of Windows, then replace Core with Full.
• NodeImageId: (Optional) If you are using your own custom AMI (instead of the Amazon EKS-
optimized AMI), enter a node AMI ID for your Region. If you specify a value here, it overrides
any values in the NodeImageIdSSMParam field.
• NodeVolumeSize: Specify a root volume size for your nodes, in GiB.
• KeyName: Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your nodes with SSH after they launch. If you don't already have an Amazon EC2 key pair,
you can create one in the AWS Management Console. For more information, see Amazon EC2
key pairs in the Amazon EC2 User Guide for Windows Instances.
Note
If you do not provide a keypair here, the AWS CloudFormation stack creation fails.
• BootstrapArguments: Specify any optional arguments to pass to the node bootstrap script,
such as extra kubelet arguments using -KubeletExtraArgs.
• VpcId: Select the ID for the VPC that you created in Create your Amazon EKS cluster
VPC (p. 18).
• NodeSecurityGroups: Select the security group that was created for your Linux node group in
Create your Amazon EKS cluster VPC (p. 18). If your Linux nodes have more than one security
group attached to them (for example, if the Linux node group was created with eksctl),
specify all of them here.
• Subnets: Choose the subnets that you created in Create your Amazon EKS cluster VPC (p. 18).
If you created your VPC using the steps described at Creating a VPC for your Amazon EKS
cluster (p. 159), then specify only the private subnets within the VPC for your nodes to
launch into.
Important
If any of the subnets are public subnets, then they must have the automatic public
IP address assignment setting enabled. If the setting is not enabled for the public
subnet, then any nodes that you deploy to that public subnet will not be assigned
a public IP address and will not be able to communicate with the cluster or other
AWS services. If the subnet was deployed before 03/26/2020 using either of the
Amazon EKS AWS CloudFormation VPC templates (p. 159), or by using eksctl,
then automatic public IP address assignment is disabled for public subnets. For
information about how to enable public IP address assignment for a subnet, see
Modifying the Public IPv4 Addressing Attribute for Your Subnet. If the node is
deployed to a private subnet, then it is able to communicate with the cluster and
other AWS services through a NAT gateway.
6. Acknowledge that the stack might create IAM resources, and then choose Create stack.
7. When your stack has finished creating, select it in the console and choose Outputs.
8. Record the NodeInstanceRole for the node group that was created. You need this when you
configure your Amazon EKS Windows nodes.
1. Download, edit, and apply the AWS IAM Authenticator configuration map.
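a. Download the configuration map. A download command along the following lines could be used; the URL is an assumption based on the date-stamped template path used earlier in this topic, so confirm it against the current documentation:
curl -o aws-auth-cm-windows.yaml https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/aws-auth-cm-windows.yaml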
b. Open the file with your favorite text editor. Replace the <ARN of instance role (not
instance profile) of **Linux** node> and <ARN of instance role (not
instance profile) of **Windows** node> snippets with the NodeInstanceRole
values that you recorded for your Linux and Windows nodes, and save the file.
Important
Do not modify any other lines in this file.
apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
- rolearn: <ARN of instance role (not instance profile) of **Linux** node>
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
- rolearn: <ARN of instance role (not instance profile) of **Windows**
node>
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
- eks:kube-proxy-windows
c. Apply the configuration. This command may take a few minutes to finish.
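Assuming the edited configuration map was saved as aws-auth-cm-windows.yaml (adjust the file name if yours differs), the apply step might look like the following:
kubectl apply -f aws-auth-cm-windows.yaml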
Note
If you receive any authorization or resource type errors, see Unauthorized or access
denied (kubectl) (p. 306) in the troubleshooting section.
Note
If nodes fail to join the cluster, see Nodes fail to join cluster (p. 305) in the troubleshooting section.
2. Watch the status of your nodes and wait for them to reach the Ready status.
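For example, you can watch node status with a command such as:
kubectl get nodes --watch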
3. (Optional) Deploy a Windows sample application (p. 71) — Deploy a sample application to test
your cluster and Windows nodes.
Self-managed node updates
There are two basic ways to update self-managed node groups in your clusters to use a new AMI:
• the section called “Migrating to a new node group” (p. 104) – Create a new node group and migrate
your pods to that group. Migrating to a new node group is more graceful than simply updating the AMI
ID in an existing AWS CloudFormation stack, because the migration process taints the old node group
as NoSchedule and drains the nodes after a new stack is ready to accept the existing pod workload.
• the section called “Updating an existing self-managed node group” (p. 109) – Update the AWS
CloudFormation stack for an existing node group to use the new AMI. This method is not supported for
node groups that were created with eksctl.
Migrating to a new node group
eksctl
This procedure requires eksctl version 0.25.0-rc.0 or later. You can check your version with the
following command:
eksctl version
1. Retrieve the name of your existing node groups, substituting default with your cluster name.
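For example, a command along these lines lists the node groups in a cluster named default:
eksctl get nodegroups --cluster=default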
Output:
2. Launch a new node group with eksctl with the following command, substituting the example
values with your own values.
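As a sketch, an eksctl command with placeholder values (my-cluster, new-nodegroup, and the instance type and counts are examples only) might look like the following:
eksctl create nodegroup \
  --cluster=my-cluster \
  --name=new-nodegroup \
  --node-type=t3.medium \
  --nodes=3 \
  --nodes-min=1 \
  --nodes-max=4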
Note
For more available flags and their descriptions, see https://eksctl.io/.
3. When the previous command completes, verify that all of your nodes have reached the Ready
state with the following command:
4. Delete the original node group with the following command, substituting the example values
with your cluster and nodegroup names:
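A deletion command with placeholder names might look like:
eksctl delete nodegroup --cluster=my-cluster --name=old-nodegroup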
To migrate your applications to a new node group with the AWS Management Console
1. Launch a new node group by following the steps outlined in Launching self-managed Amazon
Linux 2 nodes (p. 96).
2. When your stack has finished creating, select it in the console and choose Outputs.
3. Record the NodeInstanceRole for the node group that was created. You need this to add the
new Amazon EKS nodes to your cluster.
Note
If you have attached any additional IAM policies to your old node group IAM role, such
as adding permissions for the Kubernetes Cluster Autoscaler, you should attach those
same policies to your new node group IAM role to maintain that functionality on the
new group.
4. Update the security groups for both node groups so that they can communicate with each other.
For more information, see Amazon EKS security group considerations (p. 166).
a. Record the security group IDs for both node groups. This is shown as the
NodeSecurityGroup value in the AWS CloudFormation stack outputs.
You can use the following AWS CLI commands to get the security group IDs from the stack
names. In these commands, oldNodes is the AWS CloudFormation stack name for your
older node stack, and newNodes is the name of the stack that you are migrating to.
oldNodes="<old_node_CFN_stack_name>"
newNodes="<new_node_CFN_stack_name>"
b. Add ingress rules to each node security group so that they accept traffic from each other.
The following AWS CLI commands add ingress rules to each security group that allow all
traffic on all protocols from the other security group. This configuration allows pods in each
node group to communicate with each other while you are migrating your workload to the
new group.
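For example, using the security group IDs captured in the previous step, the two ingress rules could be added as follows (--protocol -1 allows all protocols and ports):
aws ec2 authorize-security-group-ingress --group-id $oldSecGroup --protocol -1 --source-group $newSecGroup
aws ec2 authorize-security-group-ingress --group-id $newSecGroup --protocol -1 --source-group $oldSecGroup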
5. Edit the aws-auth configmap to map the new node instance role in RBAC.
apiVersion: v1
data:
mapRoles: |
- rolearn: <ARN of instance role (not instance profile)>
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
- rolearn: arn:aws:iam::111122223333:role/nodes-1-16-NodeInstanceRole-
U11V27W93CX5
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
Replace the <ARN of instance role (not instance profile)> snippet with the
NodeInstanceRole value that you recorded in Step 3 (p. 105), then save and close the file to
apply the updated configmap.
6. Watch the status of your nodes and wait for your new nodes to join your cluster and reach the
Ready status.
7. (Optional) If you are using the Kubernetes Cluster Autoscaler, scale the deployment down to 0
replicas to avoid conflicting scaling actions.
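Assuming the Cluster Autoscaler runs as a Deployment named cluster-autoscaler in the kube-system namespace, a command such as the following scales it down:
kubectl scale deployments/cluster-autoscaler --replicas=0 -n kube-system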
8. Use the following command to taint each of the nodes that you want to remove with
NoSchedule so that new pods are not scheduled or rescheduled on the nodes you are
replacing:
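For a single node, the taint command looks like the following, where node_name is a placeholder:
kubectl taint nodes node_name key=value:NoSchedule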
If you are upgrading your nodes to a new Kubernetes version, you can identify and taint all of
the nodes of a particular Kubernetes version (in this case, 1.15) with the following code snippet.
K8S_VERSION=1.15
nodes=$(kubectl get nodes -o jsonpath="{.items[?(@.status.nodeInfo.kubeletVersion==
\"v$K8S_VERSION\")].metadata.name}")
for node in ${nodes[@]}
do
echo "Tainting $node"
kubectl taint nodes $node key=value:NoSchedule
done
Output (this cluster is using kube-dns for DNS resolution, but your cluster may return coredns
instead):
10. If your current deployment is running fewer than two replicas, scale out the deployment to two
replicas. Substitute coredns for kube-dns if your previous command output returned that
instead.
11. Drain each of the nodes that you want to remove from your cluster with the following
command:
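For a single node, the drain command looks like the following, where node_name is a placeholder:
kubectl drain node_name --ignore-daemonsets --delete-local-data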
If you are upgrading your nodes to a new Kubernetes version, you can identify and drain all of
the nodes of a particular Kubernetes version (in this case, 1.15) with the following code snippet.
K8S_VERSION=1.15
nodes=$(kubectl get nodes -o jsonpath="{.items[?(@.status.nodeInfo.kubeletVersion==
\"v$K8S_VERSION\")].metadata.name}")
for node in ${nodes[@]}
do
echo "Draining $node"
kubectl drain $node --ignore-daemonsets --delete-local-data
done
12. After your old nodes have finished draining, revoke the security group ingress rules you
authorized earlier, and then delete the AWS CloudFormation stack to terminate the instances.
Note
If you have attached any additional IAM policies to your old node group IAM role, such
as adding permissions for the Kubernetes Cluster Autoscaler, you must detach those
additional policies from the role before you can delete your AWS CloudFormation stack.
a. Revoke the ingress rules that you created for your node security groups earlier. In these
commands, oldNodes is the AWS CloudFormation stack name for your older node stack,
and newNodes is the name of the stack that you are migrating to.
oldNodes="<old_node_CFN_stack_name>"
newNodes="<new_node_CFN_stack_name>"
apiVersion: v1
data:
mapRoles: |
- rolearn: arn:aws:iam::111122223333:role/nodes-1-16-NodeInstanceRole-
W70725MZQFF8
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
- rolearn: arn:aws:iam::111122223333:role/nodes-1-15-NodeInstanceRole-
U11V27W93CX5
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
Note
You must also tag your new Auto Scaling group appropriately (for example,
k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<YOUR CLUSTER
NAME>) and update your Cluster Autoscaler deployment's command to point to the
newly tagged Auto Scaling group. For more information, see Cluster Autoscaler on
AWS.
15. (Optional) Verify that you are using the latest version of the Amazon VPC CNI plugin for
Kubernetes. You may need to update your CNI version to take advantage of the latest
supported instance types. For more information, see Amazon VPC CNI plugin for Kubernetes
upgrades (p. 183).
16. If your cluster is using kube-dns for DNS resolution (see Step 9 (p. 107)), scale in the
kube-dns deployment to one replica.
Updating an existing self-managed node group
The latest default Amazon EKS node AWS CloudFormation template is configured to launch an instance
with the new AMI into your cluster before removing an old one, one at a time. This configuration ensures
that you always have your Auto Scaling group's desired count of active instances in your cluster during
the rolling update.
Note
This method is not supported for node groups that were created with eksctl. If you created
your cluster or node group with eksctl, see Migrating to a new node group (p. 104).
Output (this cluster is using kube-dns for DNS resolution, but your cluster may return coredns
instead):
2. If your current deployment is running fewer than two replicas, scale out the deployment to two
replicas. Substitute coredns for kube-dns if your previous command output returned that instead.
3. (Optional) If you are using the Kubernetes Cluster Autoscaler, scale the deployment down to zero
replicas to avoid conflicting scaling actions.
4. Determine the instance type and desired instance count of your current node group. You will enter
these values later when you update the AWS CloudFormation template for the group.
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-nodegroup.yaml
9. On the Specify stack details page, fill out the following parameters, and choose Next:
/aws/service/eks/optimized-ami/1.17/amazon-linux-2/recommended/image_id
You can change the 1.17 value to any supported Kubernetes version (p. 61). If you want to
use the Amazon EKS-optimized accelerated AMI, then change amazon-linux-2 to amazon-
linux-2-gpu.
Note
Using the Amazon EC2 Systems Manager parameter enables you to update your nodes in
the future without having to lookup and specify an AMI ID. If your AWS CloudFormation
stack is using this value, any stack update will always launch the latest recommended
Amazon EKS-optimized AMI for your specified Kubernetes version, even if you don't
change any values in the template.
• NodeImageId – To use your own custom AMI, enter the ID for the AMI to use.
Important
This value overrides any value specified for NodeImageIdSSMParam. If you want to use
the NodeImageIdSSMParam value, ensure that the value for NodeImageId is blank.
10. (Optional) On the Options page, tag your stack resources. Choose Next.
11. On the Review page, review your information, acknowledge that the stack might create IAM
resources, and then choose Update stack.
Note
The update of each node in the cluster takes several minutes. Wait for the update of all
nodes to complete before performing the next steps.
12. If your cluster's DNS provider is kube-dns, scale in the kube-dns deployment to one replica.
13. (Optional) If you are using the Kubernetes Cluster Autoscaler, scale the deployment back to your
desired amount of replicas.
14. (Optional) Verify that you are using the latest version of the Amazon VPC CNI plugin for Kubernetes.
You may need to update your CNI version to take advantage of the latest supported instance types.
For more information, see Amazon VPC CNI plugin for Kubernetes upgrades (p. 183).
AWS Fargate
This topic discusses using Amazon EKS to run Kubernetes pods on AWS Fargate.
AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers.
With AWS Fargate, you no longer have to provision, configure, or scale groups of virtual machines to run
containers. This removes the need to choose server types, decide when to scale your node groups, or
optimize cluster packing.
You can control which pods start on Fargate and how they run with Fargate profiles (p. 117), which are
defined as part of your Amazon EKS cluster.
Amazon EKS integrates Kubernetes with AWS Fargate by using controllers that are built by AWS using
the upstream, extensible model provided by Kubernetes. These controllers run as part of the Amazon
EKS managed Kubernetes control plane and are responsible for scheduling native Kubernetes pods onto
Fargate. The Fargate controllers include a new scheduler that runs alongside the default Kubernetes
scheduler in addition to several mutating and validating admission controllers. When you start a pod
that meets the criteria for running on Fargate, the Fargate controllers running in the cluster recognize,
update, and schedule the pod onto Fargate.
Each pod running on Fargate has its own isolation boundary and does not share the underlying kernel,
CPU resources, memory resources, or elastic network interface with another pod.
This topic describes the different components of pods running on Fargate, and calls out special
considerations for using Fargate with Amazon EKS.
Fargate considerations
AWS Fargate with Amazon EKS is currently available in only a limited set of AWS Regions. The following considerations apply to pods that run on Fargate:
• Classic Load Balancers and Network Load Balancers are not supported on pods running on Fargate. For
ingress, we recommend that you use the ALB Ingress Controller on Amazon EKS (p. 205) (minimum
version v1.1.4).
• Pods must match a Fargate profile at the time that they are scheduled in order to run on Fargate. Pods
which do not match a Fargate profile may be stuck as Pending. If a matching Fargate profile exists,
you can delete pending pods that you have created to reschedule them onto Fargate.
• Daemonsets are not supported on Fargate. If your application requires a daemon, you should
reconfigure that daemon to run as a sidecar container in your pods.
• Privileged containers are not supported on Fargate.
• Pods running on Fargate cannot specify HostPort or HostNetwork in the pod manifest.
• GPUs are currently not available on Fargate.
• Pods running on Fargate are only supported on private subnets (with NAT gateway access to AWS
services, but not a direct route to an Internet Gateway), so your cluster's VPC must have private
subnets available. For clusters without outbound internet access, see Private clusters (p. 82).
• You can use the Vertical Pod Autoscaler (p. 196) to initially right size the CPU and memory for your
Fargate pods, and then use the section called “Horizontal Pod Autoscaler” (p. 201) to scale
those pods. If you want the Vertical Pod Autoscaler to automatically re-deploy pods to Fargate with
larger CPU and memory combinations, then set the Vertical Pod Autoscaler's mode to either Auto
or Recreate to ensure correct functionality. For more information, see the Vertical Pod Autoscaler
documentation on GitHub.
• Stateful applications are not recommended for pods running on Fargate. Instead, we recommend that
you use AWS solutions such as Amazon S3 or DynamoDB for pod data storage.
• DNS resolution and DNS hostnames must be enabled for your VPC. For more information, see Viewing
and updating DNS support for your VPC.
• Fargate runs each pod in a VM-isolated environment without sharing resources with other pods.
However, because Kubernetes is a single-tenant orchestrator, Fargate cannot guarantee pod-level
security isolation. You should run sensitive workloads or untrusted workloads that need complete
security isolation using separate Amazon EKS clusters.
• Fargate profiles support specifying subnets from VPC secondary CIDR blocks. You may want to specify
a secondary CIDR block because there are a limited number of IP addresses available in a subnet. As a
result, there are a limited number of pods that can be created in the cluster. Using different subnets
for pods allows you to increase the number of available IP addresses. For more information, see
Adding IPv4 CIDR blocks to a VPC.
If you restrict access to your cluster's public endpoint using CIDR blocks, it is recommended that you
also enable private endpoint access so that Fargate pods can communicate with the cluster. Without
the private endpoint enabled, the CIDR blocks that you specify for public access must include the egress
sources from your VPC. For more information, see Amazon EKS cluster endpoint access control (p. 46).
If you do not already have an Amazon EKS cluster that supports Fargate, you can create one with the
following eksctl command.
Note
This procedure requires eksctl version 0.25.0-rc.0 or later. You can check your version with
the following command:
eksctl version
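A cluster creation command with placeholder values (my-cluster and us-west-2 are examples only) might look like the following:
eksctl create cluster --name my-cluster --region us-west-2 --fargate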
Adding the --fargate option in the command above creates a cluster without a node group. However,
eksctl creates a pod execution role, a Fargate profile for the default and kube-system namespaces,
and it patches the coredns deployment so that it can run on Fargate.
If you are working with an existing cluster that already has nodes associated with it, you need to make
sure that pods on these nodes can communicate freely with pods running on Fargate. Pods running
on Fargate are automatically configured to use the cluster security group for the cluster that they are
associated with. You must ensure that any existing nodes in your cluster can send and receive traffic to
and from the cluster security group. Managed node groups (p. 87) are automatically configured to use
the cluster security group as well, so you do not need to modify or check them for this compatibility.
For existing node groups that were created with eksctl or the Amazon EKS-managed AWS
CloudFormation templates, you can add the cluster security group to the nodes manually, or you can
modify the node group's Auto Scaling group launch template to attach the cluster security group to the
instances. For more information, see Changing an instance's security groups in the Amazon VPC User
Guide.
You can check for a cluster security group for your cluster in the AWS Management Console under the
cluster's Networking section, or with the following AWS CLI command:
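For example, assuming a cluster named my-cluster:
aws eks describe-cluster --name my-cluster --query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text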
When you create a Fargate profile, you must specify a pod execution role to use with your pods. This role
is added to the cluster's Kubernetes Role based access control (RBAC) for authorization. This allows the
kubelet that is running on the Fargate infrastructure to register with your Amazon EKS cluster so that it
can appear in your cluster as a node. For more information, see Pod execution role (p. 259).
To create an AWS Fargate pod execution role with the AWS Management Console
7. For Role name, enter a unique name for your role, such as
AmazonEKSFargatePodExecutionRole, then choose Create role.
Choose the tab below that corresponds to your preferred Fargate profile creation method.
eksctl
This procedure requires eksctl version 0.25.0-rc.0 or later. You can check your version with the
following command:
eksctl version
• Create your Fargate profile with the following eksctl command, replacing the variable
text with your own values. You must specify a namespace, but the labels option is not required.
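A sketch of the command, with placeholder values for the cluster, profile name, namespace, and label:
eksctl create fargateprofile \
  --cluster my-cluster \
  --name my-fargate-profile \
  --namespace my-namespace \
  --labels key=value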
To create a Fargate profile for a cluster with the AWS Management Console
5. On the Configure pods selection page, enter the following information and choose Next.
a. For Namespace, enter a namespace to match for pods, such as kube-system or default.
b. (Optional) Add Kubernetes labels to the selector that pods in the specified namespace
must have to match the selector. For example, you could add the label infrastructure:
fargate to the selector so that only pods in the specified namespace that also have the
infrastructure: fargate Kubernetes label match the selector.
6. On the Review and create page, review the information for your Fargate profile and choose
Create.
{
"fargateProfileName": "coredns",
"clusterName": "dev",
"podExecutionRoleArn": "arn:aws:iam::111122223333:role/
AmazonEKSFargatePodExecutionRole",
"subnets": [
"subnet-0b64dd020cdff3864",
"subnet-00b03756df55e2b87",
"subnet-0418fcb68ed294abf"
],
"selectors": [
{
"namespace": "kube-system",
"labels": {
"k8s-app": "kube-dns"
}
}
]
}
You could apply this Fargate profile to your cluster with the following AWS CLI command. First, create
a file called coredns.json and paste the JSON file from the previous step into it, replacing the
variable text with your own cluster values.
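The apply step might then look like the following:
aws eks create-fargate-profile --cli-input-json file://coredns.json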
Next steps
• You can start migrating your existing applications to run on Fargate with the following workflow.
1. Create a Fargate profile (p. 118) that matches your application's Kubernetes namespace and
Kubernetes labels.
2. Delete and re-create any existing pods so that they are scheduled on Fargate. For example, the
following command triggers a rollout of the coredns Deployment. You can modify the namespace
and deployment type to update your specific pods.
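As an example, a rollout of the coredns Deployment in the kube-system namespace can be triggered with:
kubectl rollout restart -n kube-system deployment coredns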
• Deploy the ALB Ingress Controller on Amazon EKS (p. 205) (version v1.1.4 or later) to allow Ingress
objects for your pods running on Fargate.
• You can use the Vertical Pod Autoscaler (p. 196) to initially right size the CPU and memory for your
Fargate pods, and then use the section called “Horizontal Pod Autoscaler” (p. 201) to scale
those pods. If you want the Vertical Pod Autoscaler to automatically re-deploy pods to Fargate with
larger CPU and memory combinations, then set the Vertical Pod Autoscaler's mode to either Auto
or Recreate to ensure correct functionality. For more information, see the Vertical Pod Autoscaler
documentation on GitHub.
Fargate profile
The Fargate profile allows an administrator to declare which pods run on Fargate. This declaration is
done through the profile’s selectors. Each profile can have up to five selectors that contain a namespace
and optional labels. You must define a namespace for every selector. The label field consists of multiple
optional key-value pairs. Pods that match a selector (by matching a namespace for the selector and all of
the labels specified in the selector) are scheduled on Fargate. If a namespace selector is defined without
any labels, Amazon EKS will attempt to schedule all pods that run in that namespace onto Fargate using
the profile. If a to-be-scheduled pod matches any of the selectors in the Fargate profile, then that pod is
scheduled on Fargate.
If a pod matches multiple Fargate profiles, Amazon EKS picks one of the matches at random. In this
case, you can specify which profile a pod should use by adding the following Kubernetes label to the pod
specification: eks.amazonaws.com/fargate-profile: profile_name. However, the pod must still
match a selector in that profile in order to be scheduled onto Fargate.
When you create a Fargate profile, you must specify a pod execution role for the pods that run on
Fargate using the profile. This role is added to the cluster's Kubernetes Role Based Access Control (RBAC)
for authorization so that the kubelet that is running on the Fargate infrastructure can register with
your Amazon EKS cluster and appear in your cluster as a node. The pod execution role also provides IAM
permissions to the Fargate infrastructure to allow read access to Amazon ECR image repositories. For
more information, see Pod execution role (p. 259).
Fargate profiles are immutable. However, you can create a new updated profile to replace an existing
profile and then delete the original after the updated profile has finished creating.
Note
Any pods that are running using a Fargate profile will be stopped and put into pending when
the profile is deleted.
If any Fargate profiles in a cluster are in the DELETING status, you must wait for that Fargate profile to
finish deleting before you can create any other profiles in that cluster.
{
"fargateProfileName": "",
"clusterName": "",
"podExecutionRoleArn": "",
"subnets": [
""
],
"selectors": [
{
"namespace": "",
"labels": {
"KeyName": ""
}
}
],
"clientRequestToken": "",
"tags": {
"KeyName": ""
}
}
When your cluster creates pods on AWS Fargate, the pod needs to make calls to AWS APIs on your
behalf, for example, to pull container images from Amazon ECR. The Amazon EKS pod execution role
provides the IAM permissions to do this.
When you create a Fargate profile, you must specify a pod execution role to use with your pods. This
role is added to the cluster's Kubernetes Role Based Access Control (RBAC) for authorization, so that
the kubelet that is running on the Fargate infrastructure can register with your Amazon EKS cluster
and appear in your cluster as a node. For more information, see Pod execution role (p. 259).
Subnets
The IDs of subnets to launch pods into that use this profile. At this time, pods running on Fargate
are not assigned public IP addresses, so only private subnets (with no direct route to an Internet
Gateway) are accepted for this parameter.
Selectors
The selectors to match for pods to use this Fargate profile. Each selector must have an associated
namespace. Optionally, you can also specify labels for a namespace. You may specify up to five
selectors in a Fargate profile. A pod only needs to match one selector to run using the Fargate
profile.
Namespace
You must specify a namespace for a selector. The selector only matches pods that are created in
this namespace, but you can create multiple selectors to target multiple namespaces.
Labels
You can optionally specify Kubernetes labels to match for the selector. The selector only
matches pods that have all of the labels that are specified in the selector.
Pods running on Fargate are only supported on private subnets (with NAT gateway access to AWS
services, but not a direct route to an Internet Gateway), so your cluster's VPC must have private subnets
available. Select the tab of the tool that you'd like to use to create the profile.
eksctl
This procedure requires eksctl version 0.25.0-rc.0 or later. You can check your version with the
following command:
eksctl version
• Create your Fargate profile with the following eksctl command, replacing the variable
text with your own values. You must specify a namespace, but the labels option is not required.
To create a Fargate profile for a cluster with the AWS Management Console
a. For Namespace, enter a namespace to match for pods, such as kube-system or default.
b. (Optional) Add Kubernetes labels to the selector that pods in the specified namespace
must have to match the selector. For example, you could add the label infrastructure:
fargate to the selector so that only pods in the specified namespace that also have the
infrastructure: fargate Kubernetes label match the selector.
6. On the Review and create page, review the information for your Fargate profile and choose
Create.
When you delete a Fargate profile, any pods that were scheduled onto Fargate with the profile are
deleted. If those pods match another Fargate profile, then they are scheduled on Fargate with that
profile. If they no longer match any Fargate profiles, then they are not scheduled onto Fargate and may
remain as pending.
Only one Fargate profile in a cluster can be in the DELETING status at a time. You must wait for a
Fargate profile to finish deleting before you can delete any other profiles in that cluster.
When pods are scheduled on Fargate, the vCPU and memory reservations within the pod specification
determine how much CPU and memory to provision for the pod.
• The maximum request out of any Init containers is used to determine the Init request vCPU and
memory requirements.
• Requests for all long-running containers are added up to determine the long-running request vCPU
and memory requirements.
• The larger of the above two values is chosen for the vCPU and memory request to use for your pod.
• Fargate adds 256 MB to each pod's memory reservation for the required Kubernetes components
(kubelet, kube-proxy, and containerd).
Fargate rounds up to the compute configuration shown below that most closely matches the sum of
vCPU and memory requests in order to ensure pods always have the resources that they need to run.
If you do not specify a vCPU and memory combination, then the smallest available combination is used
(.25 vCPU and 0.5 GB memory).
The table below shows the vCPU and memory combinations that are available for pods running on
Fargate.
For pricing information on these compute configurations, see AWS Fargate pricing.
Fargate storage
When provisioned, each pod running on Fargate receives 20 GB of container image layer storage. Pod
storage is ephemeral. After a pod stops, the storage is deleted. New pods launched onto Fargate on or
after 5/28/2020, have encryption of the ephemeral storage volume enabled by default. The ephemeral
pod storage is encrypted with an AES-256 encryption algorithm using AWS Fargate-managed keys.
Note
The usable storage for Amazon EKS pods running on Fargate is less than 20GB because some
space is used by the kubelet and other Kubernetes modules that are loaded inside the pod.
Amazon EKS-optimized AMIs
Topics
• Amazon EKS-optimized Amazon Linux 2 AMIs (p. 121)
• Amazon EKS-optimized Ubuntu AMIs (p. 136)
• Amazon EKS-optimized Windows AMIs (p. 136)
Amazon EKS-optimized Amazon Linux 2 AMIs
The AMI IDs for the latest Amazon EKS-optimized AMI (with and without the section called “Amazon
EKS-optimized accelerated Amazon Linux 2 AMI” (p. 134)) are shown in the following table. You
can also retrieve the IDs with an AWS Systems Manager parameter using different tools. For more
information, see Retrieving Amazon EKS-optimized Amazon Linux 2 AMI IDs (p. 135).
Note
• The Amazon EKS-optimized accelerated AMI only supports GPU and Inferentia based instance
types. Be sure to specify these instance types in your node AWS CloudFormation template.
By using the Amazon EKS-optimized accelerated AMI, you agree to NVIDIA's end user license
agreement (EULA).
• The Amazon EKS-optimized accelerated AMI was previously referred to as the Amazon EKS-
optimized AMI with GPU support.
• Previous versions of the Amazon EKS-optimized accelerated AMI installed the nvidia-docker
repository. The repository is no longer included in Amazon EKS AMI version v20200529 and
later.
Important
These AMIs require the latest AWS CloudFormation node template. You can't use these AMIs
with a previous version of the node template; they will fail to join your cluster. Be sure to
upgrade any existing AWS CloudFormation node stacks with the latest template (URL shown
below) before you attempt to use these AMIs.
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-nodegroup.yaml
The AWS CloudFormation node template launches your nodes with Amazon EC2 user data that triggers
a specialized bootstrap script. This script allows your nodes to discover and connect to your cluster's
control plane automatically. For more information, see Launching self-managed Amazon Linux 2
nodes (p. 96).
The Amazon EKS-optimized AMI is built on top of Amazon Linux 2, specifically for use as a node in
Amazon EKS clusters. You can use this repository to view the specifics of how the Amazon EKS team
configures kubelet, Docker, the AWS IAM Authenticator for Kubernetes, and more.
The build scripts repository includes a HashiCorp packer template and build scripts to generate an AMI.
These scripts are the source of truth for Amazon EKS-optimized AMI builds, so you can follow the GitHub
repository to monitor changes to our AMIs. For example, perhaps you want your own AMI to use the
same version of Docker that the EKS team uses for the official AMI.
The GitHub repository also contains the specialized bootstrap script that runs at boot time to configure
your instance's certificate data, control plane endpoint, cluster name, and more.
Additionally, the GitHub repository contains our Amazon EKS node AWS CloudFormation templates.
These templates make it easier to spin up an instance running the Amazon EKS-optimized AMI and
register it with a cluster.
In addition to the standard Amazon EKS-optimized AMI configuration, the accelerated AMI includes the
following:
• NVIDIA drivers
• The nvidia-container-runtime (as the default runtime)
• AWS Neuron container runtime
The AMI IDs for the latest Amazon EKS-optimized accelerated AMI are shown in the following table.
You can also retrieve the IDs with an AWS Systems Manager parameter using different tools. For more
information, see Retrieving Amazon EKS-optimized Amazon Linux 2 AMI IDs (p. 135).
Note
• The Amazon EKS-optimized accelerated AMI only supports GPU and Inferentia based instance
types. Be sure to specify these instance types in your node AWS CloudFormation template.
By using the Amazon EKS-optimized accelerated AMI, you agree to NVIDIA's end user license
agreement (EULA).
• The Amazon EKS-optimized accelerated AMI was previously referred to as the Amazon EKS-
optimized AMI with GPU support.
• Previous versions of the Amazon EKS-optimized accelerated AMI installed the nvidia-docker
repository. The repository is no longer included in Amazon EKS AMI version v20200529 and
later.
Important
These AMIs require the latest AWS CloudFormation node template. You can't use these AMIs
with a previous version of the node template; they will fail to join your cluster. Be sure to
upgrade any existing AWS CloudFormation node stacks with the latest template (URL shown
below) before you attempt to use these AMIs.
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-nodegroup.yaml
The AWS CloudFormation node template launches your nodes with Amazon EC2 user data that triggers
a specialized bootstrap script. This script allows your nodes to discover and connect to your cluster's
control plane automatically. For more information, see Launching self-managed Amazon Linux 2
nodes (p. 96).
After your GPU nodes join your cluster, you must apply the NVIDIA device plugin for Kubernetes as a
DaemonSet on your cluster with the following command.
You can verify that your nodes have allocatable GPUs with the following command:
This example pod manifest launches a CUDA container that runs nvidia-smi on a node. Create a file
called nvidia-smi.yaml, copy and paste the following manifest into it, and save the file.
apiVersion: v1
kind: Pod
metadata:
name: nvidia-smi
spec:
restartPolicy: OnFailure
containers:
- name: nvidia-smi
image: nvidia/cuda:9.2-devel
args:
- "nvidia-smi"
resources:
limits:
nvidia.com/gpu: 1
After the pod has finished running, view its logs with the following command:
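Assuming the manifest was applied with kubectl apply -f nvidia-smi.yaml, the logs can be viewed with:
kubectl logs nvidia-smi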
Output:
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
The Amazon EKS-optimized AMI metadata, including the AMI ID, for each variant can be retrieved
programmatically. For more information, see Retrieving Amazon EKS-optimized Amazon Linux 2 AMI
IDs (p. 135).
AMIs are versioned by Kubernetes version and the release date of the AMI in the following format:
k8s_major_version.k8s_minor_version.k8s_patch_version-release_date
AMI version         kubelet version   Docker version   Kernel version   Packer version   NVIDIA driver version
1.16.13-20200723    1.16.13           19.03.6-ce       4.14.181         v20200723        418.87.00
1.16.12-20200710    1.16.12           19.03.6-ce       4.14.181         v20200710        418.87.00
1.16.12-20200709    1.16.12           19.03.6-ce       4.14.181         v20200709        418.87.00
1.15.11-20200723    1.15.11           19.03.6-ce       4.14.181         v20200723        418.87.00
1.15.11-20200710    1.15.11           19.03.6-ce       4.14.181         v20200710        418.87.00
1.15.11-20200709    1.15.11           19.03.6-ce       4.14.181         v20200709        418.87.00
1.15.11-20200615    1.15.11           19.03.6-ce       4.14.181         v20200615        418.87.00
1.15.11-20200609    1.15.11           19.03.6-ce       4.14.181         v20200609        418.87.00
1.15.11-20200531    1.15.11           18.09.9-ce       4.14.177         v20200531        418.87.00
1.15.11-20200507    1.15.11           18.09.9-ce       4.14.177         v20200507        418.87.00
1.15.11-20200423    1.15.11           18.09.9-ce       4.14.173         v20200423        418.87.00
1.15.10-20200406    1.15.10           18.09.9-ce       4.14.173         v20200406        418.87.00
1.15.10-20200228    1.15.10           18.09.9-ce       4.14.165         v20200228        418.87.00
For more information about the Systems Manager Parameter Store API, see GetParameter. Your user account must have the
ssm:GetParameter IAM permission to retrieve the Amazon EKS-optimized AMI metadata.
Select the name of the tool that you want to retrieve the AMI ID with.
AWS CLI
You can retrieve the image ID of the latest recommended Amazon EKS-optimized Amazon Linux AMI
with the following command by using the sub-parameter image_id. Replace 1.17 with a supported
version (p. 61) and region-code with an Amazon EKS-supported Region for which you want the
AMI ID. Replace amazon-linux-2 with amazon-linux-2-gpu to see the accelerated AMI ID.
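A command along the following lines retrieves the ID (replace 1.17, amazon-linux-2, and region-code as described above):
aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.17/amazon-linux-2/recommended/image_id --region region-code --query "Parameter.Value" --output text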
Example output:
ami-abcd1234efgh5678i
You can query for the recommended Amazon EKS-optimized AMI ID using a URL. The URL opens
the Amazon EC2 Systems Manager console with the value of the ID for the parameter. In the
following URL, replace 1.17 with a supported version (p. 61) and region-code with an Amazon
EKS-supported Region for which you want the AMI ID. Replace amazon-linux-2 with amazon-
linux-2-gpu to see the accelerated AMI ID.
https://console.aws.amazon.com/systems-manager/parameters/%252Faws%252Fservice%252Feks%252Foptimized-ami%252F1.17%252Famazon-linux-2%252Frecommended%252Fimage_id/description?region=region-code
Amazon EKS-optimized Ubuntu AMIs
Canonical delivers a built-for-purpose Kubernetes Node OS image. This minimized Ubuntu image is
optimized for Amazon EKS and includes the custom AWS kernel that is jointly developed with AWS.
For more information, see Ubuntu and Amazon Elastic Kubernetes Service and Optimized support for
Amazon EKS on Ubuntu 18.04.
Amazon EKS-optimized Windows AMIs
The AMI IDs for the latest Amazon EKS-optimized AMI are shown in the following table. Windows Server
2019 is a Long-Term Servicing Channel (LTSC) release and Windows Server, version 1909 is a Semi-
Annual Channel (SAC) release. For more information, see Windows Server servicing channels: LTSC
and SAC in the Microsoft documentation. You can also retrieve the IDs with an AWS Systems Manager
parameter using different tools. For more information, see Retrieving Amazon EKS-optimized Windows
AMI IDs (p. 142).
Select the name of the tool that you want to retrieve the AMI ID with.
AWS CLI
You can retrieve the image ID of the latest recommended Amazon EKS-optimized Windows AMI
with the following command by using the sub-parameter image_id. You can replace 1.17 with
any supported Amazon EKS version and can replace region-code with an Amazon EKS-supported
Region for which you want the AMI ID. Replace Core with Full to see the Windows Server full AMI
ID. You can also replace 2019 with 1909 for the Core version only.
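A command along the following lines retrieves the ID (replace 1.17, Core, 2019, and region-code as described above):
aws ssm get-parameter --name /aws/service/ami-windows-latest/Windows_Server-2019-English-Core-EKS_Optimized-1.17/image_id --region region-code --query "Parameter.Value" --output text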
Example output:
ami-00a053f1635fffea0
You can query for the recommended Amazon EKS-optimized AMI ID using a URL. The URL opens the
Amazon EC2 Systems Manager console with the value of the ID for the parameter. In the following
URL, you can replace 1.17 with any supported Amazon EKS version and can replace region-code
with an Amazon EKS-supported Region for which you want the AMI ID. Replace Core with Full to
see the Windows Server full AMI ID. You can also replace 2019 with 1909 for the Core version only.
https://console.aws.amazon.com/systems-manager/parameters/%252Faws%252Fservice%252Fami-windows-latest%252FWindows_Server-2019-English-Core-EKS_Optimized-1.17%252Fimage_id/description?region=region-code
Storage
This chapter covers storage options for Amazon EKS clusters.
The Storage classes (p. 143) topic uses the in-tree Amazon EBS storage provisioner. The Amazon EBS
CSI driver (p. 144) is available for managing storage in Kubernetes 1.14 and later clusters.
Note
The existing in-tree Amazon EBS plugin is still supported, but by using a CSI driver, you
benefit from the decoupling of Kubernetes upstream release cycle and CSI driver release cycle.
Eventually, the in-tree plugin will be deprecated in favor of the CSI driver.
Topics
• Storage classes (p. 143)
• Amazon EBS CSI driver (p. 144)
• Amazon EFS CSI driver (p. 148)
• Amazon FSx for Lustre CSI driver (p. 152)
Storage classes
Amazon EKS clusters that were created prior to Kubernetes version 1.11 were not created with any
storage classes. You must define storage classes for your cluster to use and you should define a default
storage class for your persistent volume claims. For more information, see Storage classes in the
Kubernetes documentation.
Note
This topic uses the in-tree Amazon EBS storage provisioner. For Kubernetes 1.14 and later
clusters, the Amazon EBS CSI driver (p. 144) is available for managing storage. The existing
in-tree Amazon EBS plugin is still supported, but by using a CSI driver, you benefit from the
decoupling of Kubernetes upstream release cycle and CSI driver release cycle. Eventually, the in-
tree plugin will be deprecated in favor of the CSI driver.
1. Create an AWS storage class manifest file for your storage class. The gp2-storage-class.yaml
example below defines a storage class called gp2 that uses the Amazon EBS gp2 volume type.
For more information about the options available for AWS storage classes, see AWS EBS in the
Kubernetes documentation.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gp2
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
fsType: ext4
2. Use kubectl to create the storage class from the manifest file.
Output:
1. List the existing storage classes for your cluster. A storage class must be defined before you can set it
as a default.
Output:
2. Choose a storage class and set it as your default by setting the storageclass.kubernetes.io/
is-default-class=true annotation.
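For example, to mark the gp2 storage class as the default (gp2 is a placeholder for whichever class you chose):
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'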
Output:
Output:
This topic shows you how to deploy the Amazon EBS CSI Driver to your Amazon EKS cluster and verify
that it works. We recommend using version v0.5.0 of the driver.
Note
This driver is only supported on Kubernetes version 1.14 and later Amazon EKS clusters and
nodes. The driver is not supported on Fargate. Alpha features of the Amazon EBS CSI Driver
are not supported on Amazon EKS clusters. The driver is in Beta release. It is well tested and
supported by Amazon EKS for production use. Support for the driver will not be dropped,
though details may change. If the schema or schematics of the driver changes, instructions for
migrating to the next version will be provided.
For detailed descriptions of the available parameters and complete examples that demonstrate the
driver's features, see the Amazon EBS Container Storage Interface (CSI) driver project on GitHub.
1. Create an IAM policy called Amazon_EBS_CSI_Driver for your node instance profile that allows
the Amazon EBS CSI Driver to make calls to AWS APIs on your behalf. Use the following AWS CLI
commands to create the IAM policy in your AWS account. You can view the policy document on
GitHub.
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/
v0.5.0/docs/example-iam-policy.json
Output:
Name: aws-auth
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
mapRoles:
----
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::111122223333:role/eksctl-alb-nodegroup-ng-b1f603c5-
NodeInstanceRole-GKNS581EASPU
username: system:node:{{EC2PrivateDNSName}}
Events: <none>
Record the role name for any rolearn values that have the system:nodes group assigned to
them. In the previous example output, the role name is eksctl-alb-nodegroup-ng-b1f603c5-
NodeInstanceRole-GKNS581EASPU. You should have one value for each node group in your
cluster.
3. Attach the new Amazon_EBS_CSI_Driver IAM policy to each of the node IAM roles you identified
earlier with the following command, substituting the red text with your own AWS account number
and node IAM role name.
4. Deploy the Amazon EBS CSI Driver with the following command.
Note
This command requires version 1.14 or later of kubectl. You can see your kubectl version
with the following command. To install or upgrade your kubectl version, see Installing
kubectl (p. 221).
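A deployment command along the following lines can be used; the kustomize overlay path and ref are assumptions, so confirm them against the driver's GitHub repository:
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"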
To deploy a sample application and verify that the CSI driver is working
This procedure uses the Dynamic volume provisioning example from the Amazon EBS Container Storage
Interface (CSI) driver GitHub repository to consume a dynamically-provisioned Amazon EBS volume.
1. Clone the Amazon EBS Container Storage Interface (CSI) driver GitHub repository to your local
system.
cd aws-ebs-csi-driver/examples/kubernetes/dynamic-provisioning/
3. Deploy the ebs-sc storage class, ebs-claim persistent volume claim, and app sample application
from the specs directory.
Output:
Name: ebs-sc
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-
configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":
{"annotations":{},"name":"ebs-
sc"},"provisioner":"ebs.csi.aws.com","volumeBindingMode":"WaitForFirstConsumer"}
Provisioner: ebs.csi.aws.com
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
Note that the storage class uses the WaitForFirstConsumer volume binding mode. This means
that volumes are not dynamically provisioned until a pod makes a persistent volume claim. For more
information, see Volume Binding Mode in the Kubernetes documentation.
5. Watch the pods in the default namespace and wait for the app pod to become ready.
6. List the persistent volumes in the default namespace. Look for a persistent volume with the
default/ebs-claim claim.
kubectl get pv
Output:
Output:
Name: pvc-37717cd6-d0dc-11e9-b17f-06fad4858a5a
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: ebs.csi.aws.com
Finalizers: [kubernetes.io/pv-protection external-attacher/ebs-csi-aws-com]
StorageClass: ebs-sc
Status: Bound
Claim: default/ebs-claim
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 4Gi
Node Affinity:
Required Terms:
Term 0: topology.ebs.csi.aws.com/zone in [regiona]
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: ebs.csi.aws.com
VolumeHandle: vol-0d651e157c6d93445
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/
csiProvisionerIdentity=1567792483192-8081-ebs.csi.aws.com
Events: <none>
Output:
9. When you finish experimenting, delete the resources for this sample application to clean up.
This topic shows you how to deploy the Amazon EFS CSI Driver to your Amazon EKS cluster and verify
that it works.
Note
The driver is not supported on Fargate. Alpha features of the Amazon EFS CSI Driver are not
supported on Amazon EKS clusters.
For detailed descriptions of the available parameters and complete examples that demonstrate the
driver's features, see the Amazon EFS Container Storage Interface (CSI) driver project on GitHub.
• Deploy the Amazon EFS CSI driver with the following command.
Note
This command requires kubectl version 1.14 or later. You can see your kubectl version
with the following command. To install or upgrade your kubectl version, see Installing
kubectl (p. 221).
Note
• Starting with the 1.0.0 release, encryption of data in transit using TLS is enabled by
default. Using encryption in transit, data will be encrypted during its transition over
the network to the Amazon EFS service. To disable it and mount volumes using NFSv4,
set the volumeAttributes field encryptInTransit to "false" in your persistent
volume manifest. For an example manifest, see Encryption in Transit example on GitHub.
• Only static volume provisioning is supported. This means that an Amazon EFS file system
needs to be created outside of Amazon EKS before being used by pods in your cluster.
The Amazon EFS CSI driver supports Amazon EFS access points, which are application-specific entry
points into an Amazon EFS file system that make it easier to share a file system between multiple pods.
Access points can enforce a user identity for all file system requests that are made through the access
point, and enforce a root directory for each pod. For more information, see Amazon EFS access points on
GitHub.
To create an Amazon EFS file system for your Amazon EKS cluster
1. Locate the VPC ID for your Amazon EKS cluster. You can find this ID in the Amazon EKS console, or
you can use the following AWS CLI command.
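For example, assuming a cluster named my-cluster:
aws eks describe-cluster --name my-cluster --query "cluster.resourcesVpcConfig.vpcId" --output text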
Output:
vpc-exampledb76d3e813
2. Locate the CIDR range for your cluster's VPC. You can find this in the Amazon VPC console, or you
can use the following AWS CLI command.
Output:
192.168.0.0/16
3. Create a security group that allows inbound NFS traffic for your Amazon EFS mount points.
Important
By default, new Amazon EFS file systems are owned by root:root, and only the root
user (UID 0) has read-write-execute permissions. If your containers are not running as
root, you must change the Amazon EFS file system permissions to allow other users
to modify the file system. For more information, see Working with users, groups, and
permissions at the Network File System (NFS) level in the Amazon Elastic File System
User Guide.
To deploy a sample application and verify that the CSI driver is working
This procedure uses the Multiple Pods Read Write Many example from the Amazon EFS Container
Storage Interface (CSI) driver GitHub repository to consume a statically provisioned Amazon EFS
persistent volume and access it from multiple pods with the ReadWriteMany access mode.
1. Clone the Amazon EFS Container Storage Interface (CSI) driver GitHub repository to your local
system.
cd aws-efs-csi-driver/examples/kubernetes/multiple_pods/
3. Retrieve your Amazon EFS file system ID. You can find this in the Amazon EFS console, or use the
following AWS CLI command.
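For example:
aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text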
Output:
fs-582a03f3
4. Edit the specs/pv.yaml file and replace the volumeHandle value with your Amazon EFS file
system ID.
apiVersion: v1
kind: PersistentVolume
metadata:
name: efs-pv
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: efs-sc
csi:
driver: efs.csi.aws.com
volumeHandle: fs-582a03f3
Note
Because Amazon EFS is an elastic file system, it does not enforce any file system capacity
limits. The actual storage capacity value in persistent volumes and persistent volume claims
is not used when creating the file system. However, since storage capacity is a required field
in Kubernetes, you must specify a valid value, such as 5Gi in this example. This value does
not limit the size of your Amazon EFS file system.
5. Deploy the efs-sc storage class, efs-claim persistent volume claim, efs-pv persistent volume,
and app1 and app2 sample applications from the specs directory.
6. Watch the pods in the default namespace and wait for the app1 and app2 pods' STATUS to become
Running.
Note
It may take a few minutes for the pods to reach the Running status.
7. List the persistent volumes in the default namespace. Look for a persistent volume with the
default/efs-claim claim.
kubectl get pv
Output:
Output:
Name: efs-pv
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":
{"annotations":{},"name":"efs-pv"},"spec":{"accessModes":["ReadWriteMany"],"capaci...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: efs-sc
Status: Bound
Claim: default/efs-claim
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: efs.csi.aws.com
VolumeHandle: fs-582a03f3
ReadOnly: false
VolumeAttributes: <none>
Events: <none>
Output:
10. Verify that the app2 pod shows the same data in the volume.
Output:
11. When you finish experimenting, delete the resources for this sample application to clean up.
This topic shows you how to deploy the Amazon FSx for Lustre CSI Driver to your Amazon EKS cluster
and verify that it works. We recommend using version 0.3.0 of the driver.
Note
This driver is supported on Kubernetes version 1.17 and later Amazon EKS clusters and nodes.
The driver is not supported on Fargate. Alpha features of the Amazon FSx for Lustre CSI Driver
are not supported on Amazon EKS clusters. The driver is in Beta release. It is well tested and
supported by Amazon EKS for production use. Support for the driver will not be dropped,
though details may change. If the schema or schematics of the driver changes, instructions for
migrating to the next version will be provided.
For detailed descriptions of the available parameters and complete examples that demonstrate the
driver's features, see the Amazon FSx for Lustre Container Storage Interface (CSI) driver project on
GitHub.
Prerequisites
You must have:
• Version 1.18.97 or later of the AWS CLI installed. You can check your currently-installed version with
the aws --version command. To install or upgrade the AWS CLI, see Installing the AWS CLI.
• An existing Amazon EKS cluster. If you don't currently have a cluster, see Getting started with Amazon EKS (p. 3) to create one.
• Version 0.25.0-rc.0 or later of eksctl installed. You can check your currently-installed version
with the eksctl version command. To install or upgrade eksctl, see Installing or upgrading
eksctl (p. 226).
• The latest version of kubectl installed that aligns to your cluster version. You can check your
currently-installed version with the kubectl version --short --client command. For more
information, see Installing kubectl (p. 221).
To deploy the Amazon FSx for Lustre CSI driver to an Amazon EKS cluster
1. Create an AWS Identity and Access Management OIDC provider and associate it with your cluster.
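For example, with eksctl (my-cluster and region-code are placeholders for your own cluster name and Region):
eksctl utils associate-iam-oidc-provider --region region-code --cluster my-cluster --approve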
2. Create an IAM policy and service account that allows the driver to make calls to AWS APIs on your
behalf.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole",
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy"
      ],
      "Resource": "arn:aws:iam::*:role/aws-service-role/s3.data-source.lustre.fsx.amazonaws.com/*"
    },
    {
      "Action": "iam:CreateServiceLinkedRole",
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "iam:AWSServiceName": [
            "fsx.amazonaws.com"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "fsx:CreateFileSystem",
        "fsx:DeleteFileSystem",
        "fsx:DescribeFileSystems"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
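For example, assuming you saved the policy document above to a file named fsx-csi-driver.json (both the file name and the policy name here are illustrative):
aws iam create-policy --policy-name Amazon_FSx_Lustre_CSI_Driver --policy-document file://fsx-csi-driver.json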
Take note of the policy Amazon Resource Name (ARN) that is returned.
3. Create a Kubernetes service account for the driver and attach the policy to the service account.
Replace the ARN of the policy with the ARN returned in the previous step.
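For example, with eksctl (the cluster name prod matches the example CloudFormation stack name shown below; the policy ARN is a placeholder for the ARN from step 2):
eksctl create iamserviceaccount \
  --region region-code \
  --name fsx-csi-controller-sa \
  --namespace kube-system \
  --cluster prod \
  --attach-policy-arn arn:aws:iam::111122223333:policy/Amazon_FSx_Lustre_CSI_Driver \
  --approve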
Output:
You'll see several lines of output as the service account is created. The last line of output is similar to
the following example line.
Note the name of the AWS CloudFormation stack that was deployed. In the example output above,
the stack is named eksctl-prod-addon-iamserviceaccount-kube-system-fsx-csi-
controller-sa.
4. Note the Role ARN for the role that was created.
Output
Warning: kubectl apply should be used on resource created by either kubectl create --
save-config or kubectl apply
serviceaccount/fsx-csi-controller-sa configured
clusterrole.rbac.authorization.k8s.io/fsx-csi-external-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/fsx-csi-external-provisioner-binding
created
deployment.apps/fsx-csi-controller created
daemonset.apps/fsx-csi-node created
csidriver.storage.k8s.io/fsx.csi.aws.com created
6. Patch the driver deployment to add the service account that you created in step 3, replacing the ARN
with the ARN that you noted in step 4.
To deploy a Kubernetes storage class, persistent volume claim, and sample application to
verify that the CSI driver is working
This procedure uses the Dynamic volume provisioning for Amazon S3 from the Amazon FSx for Lustre
Container Storage Interface (CSI) driver GitHub repository to consume a dynamically-provisioned
Amazon FSx for Lustre volume.
1. Create an Amazon S3 bucket and a folder within it named export by creating and copying a file to
the bucket.
aws s3 mb s3://fsx-csi
echo test-file >> testfile
aws s3 cp testfile s3://fsx-csi/export/testfile
3. Edit the file and replace the example values with your own.
parameters:
  subnetId: subnet-056da83524edbe641
  securityGroupIds: sg-086f61ea73388fb6b
  s3ImportPath: s3://ml-training-data-000
  s3ExportPath: s3://ml-training-data-000/export
  deploymentType: SCRATCH_2
• subnetId – The subnet ID that the Amazon FSx for Lustre file system should be created in. Amazon
FSx for Lustre is not supported in all Availability Zones. Open the Amazon FSx for Lustre console
at https://console.aws.amazon.com/fsx/ to confirm that the subnet that you want to use is in a
supported Availability Zone. The subnet can include your nodes, or can be a different subnet or
VPC. If the subnet that you specify is not the same subnet that you have nodes in, then your VPCs
must be connected, and you must ensure that you have the necessary ports open in your security
groups.
• securityGroupIds – The security group ID for your nodes.
• s3ImportPath – The Amazon Simple Storage Service data repository that you want to copy data
from to the persistent volume. Specify the fsx-csi bucket that you created in step 1.
• s3ExportPath – The Amazon S3 data repository that you want to export new or modified files to.
Specify the fsx-csi/export folder that you created in step 1.
• deploymentType – The file system deployment type. Valid values are SCRATCH_1, SCRATCH_2,
and PERSISTENT_1. For more information about deployment types, see Create your Amazon FSx
for Lustre file system.
Note
The Amazon S3 bucket for s3ImportPath and s3ExportPath must be the same,
otherwise the driver cannot create the Amazon FSx for Lustre file system. The
s3ImportPath can stand alone. A random path will be created automatically like s3://
ml-training-data-000/FSxLustre20190308T012310Z. The s3ExportPath cannot
be used without specifying a value for S3ImportPath.
6. (Optional) Edit the claim.yaml file. Change the following value to one of the increment values
listed below, based on your storage requirements and the deploymentType that you selected in a
previous step.
storage: 1200Gi
• SCRATCH_2 and PERSISTENT_1 – 1.2 TiB, 2.4 TiB, or increments of 2.4 TiB over 2.4 TiB.
• SCRATCH_1 – 1.2 TiB, 2.4 TiB, 3.6 TiB, or increments of 3.6 TiB over 3.6 TiB.
7. Create the persistent volume claim.
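For example, assuming the claim manifest is the claim.yaml file that you edited earlier:
kubectl apply -f claim.yaml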
Output.
Note
The STATUS may show as Pending for 5-10 minutes, before changing to Bound. Don't
continue with the next step until the STATUS is Bound.
9. Deploy the sample application.
Output
Access Amazon S3 files from the Amazon FSx for Lustre file system
If you only want to import data and read it without modifying or creating files, then you don't need a
value for s3ExportPath in your storageclass.yaml file. Verify that data was written to the Amazon
FSx for Lustre file system by the sample app.
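For example, you might list the mounted volume from inside the sample pod (the pod name fsx-app and the /data mount path are assumptions based on the sample manifests; adjust them to match your deployment):
kubectl exec -ti fsx-app -- ls /data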
Output.
export out.txt
The sample app wrote the out.txt file to the file system.
For new files and modified files, you can use the Lustre user space tool to archive the data back to
Amazon S3 using the value that you specified for s3ExportPath.
Note
• New files aren't synced back to Amazon S3 automatically. In order to sync files to
the s3ExportPath, you need to install the Lustre client in your container image and
manually run the lfs hsm_archive command. The container should run in privileged
mode with the CAP_SYS_ADMIN capability.
• This example uses a lifecycle hook to install the Lustre client for demonstration purposes.
A more typical approach is to build a container image that includes the Lustre client.
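1. Archive the file back to Amazon S3. A sketch, assuming the sample pod is named fsx-app and the volume is mounted at /data (adjust both to your deployment):
kubectl exec -ti fsx-app -- lfs hsm_archive /data/out.txt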
2. Confirm that the out.txt file was written to the s3ExportPath folder in Amazon S3.
aws s3 ls fsx-csi/export/
Output
The following explanations help you understand how components of the diagram relate to each other
and which topics in this guide and other AWS guides you can reference for more information.
• Amazon VPC and subnets – All Amazon EKS resources are deployed to one Region in an existing
subnet in an existing VPC. For more information, see VPCs and subnets in the Amazon VPC User Guide.
Each subnet exists in one Availability Zone. The VPC and subnets must meet requirements such as the
following:
• VPCs and subnets must be tagged appropriately, so that Kubernetes knows that it can use them
for deploying resources, such as load balancers. For more information, see the section called “VPC
tagging requirement” (p. 165) and the section called “Subnet tagging requirement” (p. 165). If
you deploy the VPC using an Amazon EKS-provided AWS CloudFormation template (p. 161) or
using eksctl, then the VPC and subnets are tagged appropriately for you.
• A subnet may or may not have internet access. If a subnet does not have internet access, the pods
deployed within it must be able to access other AWS services, such as Amazon ECR, to pull container
images. For more information about using subnets that don't have internet access, see ??? (p. 82).
• Any public subnets that you use must be configured to auto-assign public IP addresses for
Amazon EC2 instances launched within them. For more information, see the section called “VPC IP
addressing” (p. 164).
• The nodes and control plane must be able to communicate over all ports through appropriately
tagged security groups. For more information, see the section called “Amazon EKS security group
considerations” (p. 166).
• You can implement network segmentation and tenant isolation with network policies.
Network policies are similar to AWS security groups in that you can create network ingress and
egress rules. Instead of assigning instances to a security group, you assign network policies to pods
using pod selectors and labels. For more information, see the section called “Installing Calico on
Amazon EKS” (p. 187).
You can deploy a VPC and subnets that meet the Amazon EKS requirements through manual
configuration, or by deploying the VPC and subnets using eksctl (p. 226), or an Amazon EKS
provided AWS CloudFormation template. Both eksctl and the AWS CloudFormation template create
the VPC and subnets with the required configuration. For more information, see the section called
“Creating a VPC for Amazon EKS” (p. 161).
• Amazon EKS control plane – Deployed and managed by Amazon EKS in an Amazon EKS-managed
VPC. When you create the cluster, Amazon EKS creates and manages requester-managed network
interfaces in the VPC that you specify, which is separate from the control plane VPC. These network
interfaces allow AWS Fargate and Amazon EC2 instances to communicate with the control plane.
By default, the control plane exposes a public endpoint so that clients and nodes can communicate
with the cluster. You can limit the internet client source IP addresses that can communicate with the
public endpoint. Alternatively, you can enable a private endpoint and disable the public endpoint or
enable both the public and private endpoints. To learn more about cluster endpoints, see ??? (p. 46).
Clients in your on-premises network or other VPCs can communicate with the public or private-only
endpoint, if you've configured connectivity between the VPC that the cluster is deployed to and the
other networks. For more information about connecting your VPC to other networks, see the AWS
Network-to-Amazon VPC connectivity options and Amazon VPC-to-Amazon VPC connectivity options
technical papers.
• Amazon EC2 instances – Each Amazon EC2 node is deployed to one subnet. Each node is assigned a
private IP address from a CIDR block assigned to the subnet. If the subnets were created using one of
the Amazon EKS provided AWS CloudFormation templates (p. 161), then nodes deployed to public
subnets are automatically assigned a public IP address by the subnet. Each node is deployed with the
plugin described in the section called “Pod networking (CNI)” (p. 169), which, by default, assigns each pod a private IP
address from the CIDR block assigned to the subnet that the node is in and adds the IP address as a
secondary IP address to one of the elastic network interfaces (ENI) attached to the instance.
For self-managed nodes, you can change this behavior by assigning additional CIDR blocks to your VPC
and enabling the section called “CNI custom networking” (p. 177), which assigns IP addresses to pods
from different subnets than the node is deployed to. To use custom networking, you must enable it
when you launch your self-managed nodes.
By default, the source IP address of each pod that communicates with resources outside of the VPC
is translated through network address translation (NAT) to the primary IP address of the primary ENI
attached to the node. You can change this behavior to instead have a NAT device in a private subnet
translate each pod's IP address to the NAT device's IP address. For more information, see the section
called “External SNAT” (p. 176).
• Fargate pods – Deployed to private subnets only. Each pod is assigned a private IP address from the
CIDR block assigned to the subnet. Fargate does not support all pod networking options. For more
information, see ??? (p. 112).
If you want to use an existing VPC, then it must meet specific requirements for use with Amazon EKS. For
more information, see ??? (p. 163). This topic guides you through creating a VPC for your cluster using
one of the following configurations:
• Public and private subnets – This VPC has two public and two private subnets. One public and one
private subnet are deployed to the same Availability Zone. The other public and private subnets
are deployed to a second Availability Zone in the same Region. We recommend this option for all
production deployments. This option allows you to deploy your nodes to private subnets and allows
Kubernetes to deploy load balancers to the public subnets that can load balance traffic to pods
running on nodes in the private subnets.
Public IP addresses are automatically assigned to resources deployed to one of the public subnets, but
public IP addresses are not assigned to any resources deployed to the private subnets. The nodes in
private subnets can communicate with the cluster and other AWS services, and pods can communicate
outbound to the internet through a NAT gateway that is deployed in each Availability Zone. A security
group is deployed that denies all inbound traffic and allows all outbound traffic. The subnets are
tagged so that Kubernetes is able to deploy load balancers to them. For more information about
subnet tagging, see ??? (p. 165). For more information about this type of VPC, see VPC with public
and private subnets (NAT).
• Only public subnets – This VPC has three public subnets that are deployed into different Availability
Zones in the region. All nodes are automatically assigned public IP addresses and can send and
receive internet traffic through an internet gateway. A security group is deployed that denies all
inbound traffic and allows all outbound traffic. The subnets are tagged so that Kubernetes can deploy
load balancers to them. For more information about subnet tagging, see ??? (p. 165). For more
information about this type of VPC, see VPC with a single public subnet.
• Only private subnets – This VPC has three private subnets that are deployed into different Availability
Zones in the Region. All nodes can optionally send and receive internet traffic through a NAT instance
or NAT gateway. A security group is deployed that denies all inbound traffic and allows all outbound
traffic. The subnets are tagged so that Kubernetes can deploy internal load balancers to them. For
more information about subnet tagging, see ??? (p. 165). For more information about this type of
VPC, see VPC with a private subnet only and AWS Site-to-Site VPN access.
Important
There are additional requirements if the VPC does not have outbound internet access, such
as via a NAT Instance, NAT Gateway, VPN, or Direct Connect. You must bypass the EKS cluster
introspection by providing the cluster certificate authority and cluster API endpoint to the
nodes. You also may need to configure VPC endpoints listed in ??? (p. 46).
Important
If you deployed a VPC using eksctl or by using either of the Amazon EKS AWS CloudFormation
VPC templates:
• On or after 03/26/2020 – Public IPv4 addresses are automatically assigned by public subnets
to new nodes deployed to public subnets.
• Before 03/26/2020 – Public IPv4 addresses are not automatically assigned by public subnets
to new nodes deployed to public subnets.
This change impacts new node groups deployed to public subnets in the following ways:
• Managed node groups (p. 89) – If the node group is deployed to a public subnet on or after
04/22/2020, the public subnet must have automatic assignment of public IP addresses
enabled. For more information, see Modifying the public IPv4 addressing attribute for your
subnet.
• Linux (p. 96), Windows (p. 100), or Arm (p. 73) self-managed node groups – If the node
group is deployed to a public subnet on or after 03/26/2020, the public subnet must have
automatic assignment of public IP addresses enabled or the nodes must be launched with a
public IP address. For more information, see Modifying the public IPv4 addressing attribute
for your subnet or Assigning a public IPv4 address during instance launch.
Choose the tab below that represents your desired VPC configuration.
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-vpc-private-subnets.yaml
6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it eks-vpc.
• VpcBlock: Choose a CIDR range for your VPC. Each worker node, pod, and load balancer that
you deploy is assigned an IP address from this block. The default value provides enough
IP addresses for most implementations, but if it doesn't, then you can change it. For more
information, see VPC and subnet sizing in the Amazon VPC User Guide. You can also add
additional CIDR blocks to the VPC once it's created.
• PublicSubnet01Block: Specify a CIDR block for public subnet 1. The default value provides
enough IP addresses for most implementations, but if it doesn't, then you can change it.
• PublicSubnet02Block: Specify a CIDR block for public subnet 2. The default value provides
enough IP addresses for most implementations, but if it doesn't, then you can change it.
• PrivateSubnet01Block: Specify a CIDR block for private subnet 1. The default value provides
enough IP addresses for most implementations, but if it doesn't, then you can change it.
• PrivateSubnet02Block: Specify a CIDR block for private subnet 2. The default value provides
enough IP addresses for most implementations, but if it doesn't, then you can change it.
7. (Optional) On the Options page, tag your stack resources. Choose Next.
8. On the Review page, choose Create.
9. When your stack is created, select it in the console and choose Outputs.
10. Record the SecurityGroups value for the security group that was created. When you add nodes
to your cluster, you must specify the ID of the security group. The security group is applied to
the cross-account elastic network interfaces that are created in your subnets that allow the
Amazon EKS control plane to communicate with your nodes.
11. Record the VpcId for the VPC that was created. You need this when you launch your node group
template.
12. Record the SubnetIds for the subnets that were created and whether you created them as
public or private subnets. When you add nodes to your cluster, you must specify the IDs of the
subnets that you want to launch the nodes into.
2. From the navigation bar, select a Region that supports Amazon EKS.
3. Choose Create stack.
4. For Choose a template, select Specify an Amazon S3 template URL.
5. Paste the following URL into the text area and choose Next:
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-vpc-sample.yaml
6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it eks-vpc.
• VpcBlock: Choose a CIDR block for your VPC. Each worker node, pod, and load balancer that
you deploy is assigned an IP address from this block. The default value provides enough
IP addresses for most implementations, but if it doesn't, then you can change it. For more
information, see VPC and subnet sizing in the Amazon VPC User Guide. You can also add
additional CIDR blocks to the VPC once it's created.
• Subnet01Block: Specify a CIDR block for subnet 1. The default value provides enough IP
addresses for most implementations, but if it doesn't, then you can change it.
• Subnet02Block: Specify a CIDR block for subnet 2. The default value provides enough IP
addresses for most implementations, but if it doesn't, then you can change it.
• Subnet03Block: Specify a CIDR block for subnet 3. The default value provides enough IP
addresses for most implementations, but if it doesn't, then you can change it.
7. (Optional) On the Options page, tag your stack resources. Choose Next.
8. On the Review page, choose Create.
9. When your stack is created, select it in the console and choose Outputs.
10. Record the SecurityGroups value for the security group that was created. When you add nodes
to your cluster, you must specify the ID of the security group. The security group is applied to
the cross-account elastic network interfaces that are created in your subnets that allow the
Amazon EKS control plane to communicate with your nodes.
11. Record the VpcId for the VPC that was created. You need this when you launch your node group
template.
12. Record the SubnetIds for the subnets that were created. When you add nodes to your cluster,
you must specify the IDs of the subnets that you want to launch the nodes into.
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-fully-private-vpc.yaml
6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it eks-vpc.
• VpcBlock: Choose a CIDR block for your VPC. Each worker node, pod, and load balancer that
you deploy is assigned an IP address from this block. The default value provides enough
IP addresses for most implementations, but if it doesn't, then you can change it. For more
information, see VPC and subnet sizing in the Amazon VPC User Guide. You can also add
additional CIDR blocks to the VPC once it's created.
• PrivateSubnet01Block: Specify a CIDR block for subnet 1. The default value provides enough
IP addresses for most implementations, but if it doesn't, then you can change it.
• PrivateSubnet02Block: Specify a CIDR block for subnet 2. The default value provides enough
IP addresses for most implementations, but if it doesn't, then you can change it.
• PrivateSubnet03Block: Specify a CIDR block for subnet 3. The default value provides enough
IP addresses for most implementations, but if it doesn't, then you can change it.
7. (Optional) On the Options page, tag your stack resources. Choose Next.
8. On the Review page, choose Create.
9. When your stack is created, select it in the console and choose Outputs.
10. Record the SecurityGroups value for the security group that was created. When you add nodes
to your cluster, you must specify the ID of the security group. The security group is applied to
the cross-account elastic network interfaces that are created in your subnets that allow the
Amazon EKS control plane to communicate with your nodes.
11. Record the VpcId for the VPC that was created. You need this when you launch your node group
template.
12. Record the SubnetIds for the subnets that were created. When you add nodes to your cluster,
you must specify the IDs of the subnets that you want to launch the nodes into.
Next steps
After you have created your VPC, you can try the Getting started with Amazon EKS (p. 3) walkthrough,
but you can skip the Create your Amazon EKS cluster VPC (p. 18) section and use these subnets and
security groups for your cluster.
When you create your cluster, specify all of the subnets that will host resources for your cluster (such as
nodes and load balancers).
Note
Internet-facing load balancers require a public subnet in your cluster. By default, nodes also
require outbound internet access to the Amazon EKS APIs for cluster introspection and node
registration at launch time. For clusters without outbound internet access, see ??? (p. 82).
To pull container images, they require access to the Amazon S3 and Amazon ECR APIs (and any
other container registries, such as DockerHub). For more information, see Amazon EKS security
group considerations (p. 166) and AWS IP Address Ranges in the AWS General Reference.
The subnets that you pass when you create the cluster influence where Amazon EKS places elastic
network interfaces that are used for the control plane to node communication.
It is possible to specify only public or private subnets when you create your cluster, but there are some
limitations associated with these configurations:
• Private-only: Everything runs in a private subnet and Kubernetes cannot create internet-facing load
balancers for your pods.
• Public-only: Everything runs in a public subnet, including your nodes.
Amazon EKS creates an elastic network interface in your private subnets to facilitate communication to
your nodes. This communication channel supports Kubernetes functionality such as kubectl exec and
kubectl logs. The security group that you specify when you create your cluster is applied to the elastic
network interfaces that are created for your cluster control plane.
Your VPC must have DNS hostname and DNS resolution support. Otherwise, your nodes cannot register
with your cluster. For more information, see Using DNS with Your VPC in the Amazon VPC User Guide.
VPC IP addressing
Nodes must be able to communicate with the control plane and other AWS services. If your nodes are
deployed in a private subnet, then you must have either:
• Set up a default route for the subnet to a NAT gateway. The NAT gateway must be assigned a public IP
address to provide internet access for the nodes.
• Configured the necessary settings for the subnet and taken the necessary actions listed in
??? (p. 82).
If self-managed nodes are deployed to a public subnet, then the subnet must be configured to auto-
assign public IP addresses or your node instances must be assigned a public IP address when they're
launched. If managed nodes are deployed to a public subnet, then the subnet must be configured to
auto-assign public IP addresses or the nodes will not be assigned a public IP address. Determine whether
your public subnets are configured to auto-assign public IP addresses with the following command.
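For example (replace the VPC ID with your own; the --query expression shown here simply trims the output to the two fields of interest):
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=vpc-0123456789abcdef0" \
  --query "Subnets[*].{SubnetId: SubnetId, MapPublicIpOnLaunch: MapPublicIpOnLaunch}"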
Output
"MapPublicIpOnLaunch": false,
"SubnetId": "subnet-aaaaaaaaaaaaaaaaa",
"MapPublicIpOnLaunch": false,
"SubnetId": "subnet-bbbbbbbbbbbbbbbbb",
For any subnets that have MapPublicIpOnLaunch set to false, change the setting to true.
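For example, for each affected subnet:
aws ec2 modify-subnet-attribute --subnet-id subnet-aaaaaaaaaaaaaaaaa --map-public-ip-on-launch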
Important
If you used an Amazon EKS AWS CloudFormation template (p. 159) to deploy your VPC prior
to 03/26/2020, then you need to change the setting for your public subnets.
You can define both private (RFC 1918), and public (non-RFC 1918) CIDR ranges within the VPC used for
your Amazon EKS cluster. For more information, see Adding IPv4 CIDR blocks to a VPC in the Amazon
VPC User Guide. When choosing the classless inter-domain routing (CIDR) blocks for your VPC and
subnets, make sure that the blocks contain enough IP addresses for all of the Amazon EC2 nodes and
pods that you plan to deploy (one IP address per pod). You can conserve IP address use by implementing
a transit gateway with a shared services VPC. For more information, see Isolated VPCs with shared
services and EKS VPC routable IP address conservation patterns in a hybrid network.
The Amazon EKS control plane creates up to 4 requester-managed network interfaces in your VPC
for each cluster. Be sure that the subnets that you specify have enough available IP addresses for the
requester-managed network interfaces and your pods.
Key Value
kubernetes.io/cluster/<cluster-name> shared
• Key: The <cluster-name> value matches your Amazon EKS cluster's name.
• Value: The shared value allows more than one cluster to use this VPC.
This tag is not required or created by Amazon EKS for 1.15 clusters. If you deploy a 1.15 cluster to a VPC
that already has this tag, the tag is not removed.
Key Value
kubernetes.io/cluster/<cluster-name> shared
Key Value
kubernetes.io/role/internal-elb 1
order by subnet ID). If you use an Amazon EKS AWS CloudFormation template to create your VPC after
03/26/2020, then the subnets created by the template are tagged when they're created. For more
information about the Amazon EKS AWS CloudFormation VPC templates, see ??? (p. 159).
Key Value
kubernetes.io/role/elb 1
You can check for a cluster security group for your cluster in the AWS Management Console under the
cluster's Networking section, or with the following AWS CLI command:
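For example (my-cluster is a placeholder for your cluster name):
aws eks describe-cluster --name my-cluster --query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text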
If your cluster is running Kubernetes version 1.14 and platform version (p. 61) eks.3 or later, then we
recommend that you add the cluster security group to all existing and future node groups. For more
information, see Security Groups for Your VPC in the Amazon VPC User Guide. Amazon EKS managed
node groups (p. 87) are automatically configured to use the cluster security group.
If you need to limit the open ports between the control plane and nodes, the default cluster security
group can be modified to allow only the following required minimum ports. The required minimum ports
are the same as they were in previous Amazon EKS versions.
*Any protocol and ports that you expect your nodes to use for inter-node communication should be
included, if required. Nodes also require outbound internet access to the Amazon EKS APIs for cluster
introspection and node registration at launch time, or that you've implemented the required necessary
settings in ??? (p. 82). To pull container images, they require access to Amazon S3, Amazon ECR APIs,
and any other container registries that they need to pull images from, such as DockerHub. For more
information, see AWS IP address ranges in the AWS General Reference.
You can check the control plane security group for your cluster in the AWS Management Console under
the cluster's Networking section (listed as Additional security groups), or with the following AWS CLI
command:
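For example (my-cluster is a placeholder for your cluster name):
aws eks describe-cluster --name my-cluster --query cluster.resourcesVpcConfig.securityGroupIds --output text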
If you launch nodes with the AWS CloudFormation template in the Getting started with Amazon
EKS (p. 3) walkthrough, AWS CloudFormation modifies the control plane security group to allow
communication with the nodes. Amazon EKS strongly recommends that you use a dedicated security
group for each control plane (one per cluster). If you share a control plane security group with other
Amazon EKS clusters or resources, you may block or disrupt connections to those resources.
The security group for the nodes and the security group for the control plane communication to the
nodes have been set up to prevent communication to privileged ports in the nodes. If your applications
require added inbound or outbound access from the control plane or nodes, you must add these rules to
the security groups associated with your cluster. For more information, see Security Groups for Your VPC
in the Amazon VPC User Guide.
Note
To allow proxy functionality on privileged ports or to run the CNCF conformance tests yourself,
you must edit the security groups for your control plane and the nodes. The security group on
the nodes' side needs to allow inbound access for ports 0-65535 from the control plane, and the
control plane side needs to allow outbound access to the nodes on ports 0-65535.
When cluster endpoint private access (p. 46) is enabled: any security groups that generate API server client traffic (such as kubectl commands on a bastion host within your cluster's VPC).
Minimum inbound traffic (from other nodes): any protocol and any ports that you expect your nodes to use for inter-node communication, from all node security groups.
*Nodes also require access to the Amazon EKS APIs for cluster introspection and node registration at
launch time either through the internet or VPC endpoints. To pull container images, they require access
to the Amazon S3 and Amazon ECR APIs (and any other container registries, such as DockerHub). For
more information, see AWS IP address ranges in the AWS General Reference and the section called
“Private clusters” (p. 82).
If you have more than one security group associated to your nodes, then one of the security groups
must have the following tag applied to it. If you have only one security group associated to your
nodes, then the tag is optional. For more information about tagging, see Working with tags using the
console (p. 237).
Key Value
kubernetes.io/cluster/<cluster-name> owned
The CNI plugin is responsible for allocating VPC IP addresses to Kubernetes nodes and configuring the
necessary networking for pods on each node. The plugin consists of two primary components:
• The L-IPAM daemon is responsible for attaching elastic network interfaces to instances, assigning
secondary IP addresses to elastic network interfaces, and maintaining a "warm pool" of IP addresses on
each node for assignment to Kubernetes pods when they are scheduled.
• The CNI plugin itself is responsible for wiring the host network (for example, configuring the interfaces
and virtual Ethernet pairs) and adding the correct interface to the pod namespace.
For more information about the design and networking configuration, see CNI plugin for Kubernetes
networking over AWS VPC.
Elastic network interface and secondary IP address limitations by Amazon EC2 instance types are
applicable. In general, larger instances can support more IP addresses. For more information, see IP
Addresses Per Network Interface Per Instance Type in the Amazon EC2 User Guide for Linux Instances.
Topics
• CNI configuration variables (p. 170)
• External source network address translation (SNAT) (p. 176)
• CNI custom networking (p. 177)
• CNI metrics helper (p. 180)
• Amazon VPC CNI plugin for Kubernetes upgrades (p. 183)
• Alternate compatible CNI plugins (p. 184)
CNI configuration variables
AWS_VPC_CNI_NODE_PORT_SUPPORT
Type – Boolean
Default – true
Specifies whether NodePort services are enabled on a node's primary network interface. This
requires additional iptables rules and that the kernel's reverse path filter on the primary interface
is set to loose.
AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
Type – Boolean
Default – false
Specifies that your pods may use subnets and security groups, within the same VPC as your control
plane resources, that are independent of your cluster's resourcesVpcConfig. By default, pods
share the same subnet and security groups as the node's primary interface. Setting this variable
to true causes ipamD to use the security groups and subnets in a node's ENIConfig for elastic
network interface allocation. You must create an ENIConfig custom resource definition for each
subnet that your pods will reside in, and then annotate each node to use a specific ENIConfig
(multiple nodes can be annotated with the same ENIConfig). Nodes can only be annotated with a
single ENIConfig at a time, and the subnet in the ENIConfig must belong to the same Availability
Zone that the node resides in. For more information, see CNI custom networking (p. 177).
ENI_CONFIG_ANNOTATION_DEF
Type – String
Default – k8s.amazonaws.com/eniConfig
ENI_CONFIG_LABEL_DEF
Type – String
Default – k8s.amazonaws.com/eniConfig
AWS_VPC_ENI_MTU
Type – Integer
Default – 9001
Used to configure the MTU size for attached ENIs. The valid range is from 576 to 9001.
AWS_VPC_K8S_CNI_EXTERNALSNAT
Type – Boolean
Default – false
Specifies whether an external NAT gateway should be used to provide SNAT of secondary ENI IP
addresses. If set to true, the SNAT iptables rule and off-VPC IP rule are not applied, and these
rules are removed if they have already been applied.
Disable SNAT if you need to allow inbound communication to your pods from external VPNs, direct
connections, and external VPCs, and your pods do not need to access the internet directly via an
Internet Gateway. Your nodes must be running in a private subnet and connected to the internet
through an AWS NAT Gateway or another external NAT device.
For more information, see External source network address translation (SNAT) (p. 176).
AWS_VPC_K8S_CNI_RANDOMIZESNAT
Type – String
Default – hashrandom
Specifies whether the SNAT iptables rule should randomize the outgoing ports for connections.
This should be used when AWS_VPC_K8S_CNI_EXTERNALSNAT=false. When enabled
(hashrandom) the --random flag will be added to the SNAT iptables rule. To use a pseudo
random number generation, rather than hash-based (--random-fully), use prng for the
environment variable. For old versions of iptables that do not support --random-fully, this
option will fall back to --random. Disable (none) this functionality if you rely on sequential port
allocation for outgoing connections.
Note
Any options other than none will cause outbound connections to be assigned a source port
that's not necessarily part of the ephemeral port range set at the OS level (/proc/sys/
net/ipv4/ip_local_port_range). This is relevant if you have NACLs restricting traffic
based on the port range found in ip_local_port_range.
AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS – v1.6.0 and later
Type – String
Default – empty
Specify a comma-separated list of IPv4 CIDRs to exclude from SNAT. For every item in the list, an
iptables rule and off-VPC IP rule will be applied. If an item is not a valid IPv4 range it will be
skipped. This should be used when AWS_VPC_K8S_CNI_EXTERNALSNAT=false.
WARM_ENI_TARGET
Type – Integer
Default – 1
Specifies the number of free elastic network interfaces (and all of their available IP addresses) that
the ipamD daemon should attempt to keep available for pod assignment on the node. By default,
ipamD attempts to keep one elastic network interface and all of its IP addresses available for pod
assignment.
Note
The number of IP addresses per network interface varies by instance type. For more
information, see IP addresses per network interface per instance type in the Amazon EC2
User Guide for Linux Instances.
For example, an m4.4xlarge launches with one network interface and 30 IP addresses. If five pods
are placed on the node and five free IP addresses are removed from the IP address warm pool, then
ipamD attempts to allocate more interfaces until WARM_ENI_TARGET free interfaces are available on
the node.
Note
If WARM_IP_TARGET is set, then this environment variable is ignored and the
WARM_IP_TARGET behavior is used instead.
WARM_IP_TARGET
Type – Integer
Default – None
Specifies the number of free IP addresses that the ipamD daemon should attempt to keep available
for pod assignment on the node. For example, if WARM_IP_TARGET is set to 10, then ipamD
attempts to keep 10 free IP addresses available at all times. If the elastic network interfaces on the
node are unable to provide these free addresses, ipamD attempts to allocate more interfaces until
WARM_IP_TARGET free IP addresses are available.
Note
This environment variable overrides WARM_ENI_TARGET behavior.
MINIMUM_IP_TARGET – v1.6.0 and later
Type – Integer
Default – None
Specifies the number of total IP addresses that the ipamD daemon should attempt to allocate for
pod assignment on the node. MINIMUM_IP_TARGET behaves identically to WARM_IP_TARGET,
except that instead of setting a target number of free IP addresses to keep available at all times, it
sets a target number for a floor on how many total IP addresses are allocated.
This also improves reliability of the cluster by reducing the number of calls necessary to allocate or
deallocate private IP addresses, which may be throttled, especially at scaling-related times.
MAX_ENI
Type – Integer
Default – None
Specifies the maximum number of ENIs that will be attached to the node. When MAX_ENI is unset or
less than or equal to 0, the setting is not used, and the maximum number of ENIs is always equal to
the maximum number for the instance type in question. Even when MAX_ENI is a positive number, it
is limited by the maximum number for the instance type.
AWS_VPC_K8S_CNI_LOGLEVEL
Type – String
Default – DEBUG
Valid values – DEBUG, INFO, WARN, ERROR, or FATAL (not case sensitive)
AWS_VPC_K8S_CNI_LOG_FILE
Type – String
Default – Unset
Specifies where to write the logging output of ipamd. You can specify stdout or override the
default file, such as /var/log/aws-routed-eni/ipamd.log.
AWS_VPC_K8S_PLUGIN_LOG_FILE
Type – String
Default – Unset
Specifies where to write the logging output for the aws-cni plugin. You can specify stdout or
override the default file, such as /var/log/aws-routed-eni/plugin.log.
AWS_VPC_K8S_PLUGIN_LOG_LEVEL
Type – String
Default – DEBUG
Valid values – DEBUG, INFO, WARN, ERROR, or FATAL (not case sensitive)
INTROSPECTION_BIND_ADDRESS
Type – String
Default – 127.0.0.1:61679
Specifies the bind address for the introspection endpoint. A Unix domain socket can be specified
with the unix: prefix before the socket path.
DISABLE_INTROSPECTION
Type – Boolean
Default – false
Specifies whether introspection endpoints are disabled on a node. Setting this to true will reduce
the debugging information you can get from the node when running the aws-cni-support.sh
script.
DISABLE_METRICS
Type – Boolean
Default – false
Specifies whether the Prometheus metrics endpoint is disabled or not for ipamd. By default metrics
are published on :61678/metrics.
AWS_VPC_K8S_CNI_VETHPREFIX
Type – String
Default – eni
Specifies the veth prefix used to generate the host-side veth device name for the CNI. The prefix
can be a maximum of four characters long.
ADDITIONAL_ENI_TAGS – v1.6.0 and later
Type – String
Default – {}
Metadata applied to ENIs helps you categorize and organize your resources for billing or other
purposes. Each tag consists of a custom-defined key and an optional value. Tag keys can have
a maximum character length of 128 characters. Tag values can have a maximum length of 256
characters. The tags will be added to all ENIs on the host.
Important
Custom tags should not contain the k8s.amazonaws.com prefix, because it is reserved. If
the tag contains k8s.amazonaws.com, the tag addition will be ignored.
CLUSTER_NAME
Type – String
Default – ""
The tag cluster.k8s.amazonaws.com/name will be set to the cluster name of the aws-node
daemonset which created the ENI.
Instance ID tag
No manage tag
The L-IPAMD (aws-node DaemonSet) running on every node requires access to the Kubernetes API
server. If it cannot reach the Kubernetes API server, ipamD exits and the CNI cannot assign any IP
addresses to pods. To confirm whether L-IPAMD has access to the Kubernetes API server:
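One way to find the API server's cluster address is to query the kubernetes service in the default namespace; the CLUSTER-IP shown (10.0.0.1 in the example that follows) is the address that the node and L-IPAMD must be able to reach:
kubectl get svc kubernetes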
Output
SSH into a node to check whether the node can reach the API server.
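For example, from the node, test TCP connectivity to the cluster address on the HTTPS port (10.0.0.1 matches the example output that follows; use your own cluster IP):
telnet 10.0.0.1 443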
Output
Trying 10.0.0.1...
Connected to 10.0.0.1.
Escape character is '^]'.
If you receive the last line of output, then the Kubernetes API server is reachable.
• Enables pods to communicate bi-directionally with the internet. The node must be in a public subnet
and have a public or elastic IP address assigned to the primary private IP address of its primary
network interface. The traffic is translated to and from the public or elastic IP address and routed to
and from the internet by an internet gateway, as shown in the following picture.
SNAT is necessary because the internet gateway only knows how to translate between the primary
private and public or elastic IP address assigned to the primary elastic network interface of the
Amazon EC2 instance node that pods are running on.
• Prevents a device in other private IP address spaces (for example, VPC peering, Transit VPC, or Direct
Connect) from communicating directly to a pod that is not assigned the primary private IP address of
the primary elastic network interface of the Amazon EC2 instance node.
If the internet or devices in other private IP address spaces need to communicate with a pod that isn't
assigned the primary private IP address assigned to the primary elastic network interface of the Amazon
EC2 instance node that the pod is running on, then:
• The node must be deployed in a private subnet that has a route to a NAT device in a public subnet.
• You need to enable external SNAT in the CNI plugin aws-node DaemonSet with the following
command:
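A sketch of that setting (the environment variable is described in the CNI configuration variables section above):
kubectl set env daemonset -n kube-system aws-node AWS_VPC_K8S_CNI_EXTERNALSNAT=true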
Once external SNAT is enabled, the CNI plugin does not translate a pod's private IP address to the
primary private IP address assigned to the primary elastic network interface of the Amazon EC2 instance
node that the pod is running on when traffic is destined for an address outside of the VPC. Traffic from
the pod to the internet is externally translated to and from the public IP address of the NAT device and
routed to and from the internet by an internet gateway, as shown in the following picture.
• There are a limited number of IP addresses available in a subnet. This limits the number of pods that
can be created in the cluster. Using different subnets for pods allows you to increase the number of
available IP addresses.
• For security reasons, your pods must use different security groups or subnets than the node's primary
network interface.
• The nodes are configured in public subnets and you want the pods to be placed in private subnets
using a NAT Gateway. For more information, see External source network address translation
(SNAT) (p. 176).
Note
You can configure custom networking for self-managed node groups, but not for managed node
groups. The use cases discussed in this topic require the Amazon VPC CNI plugin for Kubernetes
version 1.4.0 or later. To check your CNI version, and upgrade if necessary, see Amazon VPC CNI
plugin for Kubernetes upgrades (p. 183).
Enabling a custom network effectively removes an available elastic network interface (and all of its
available IP addresses for pods) from each node that uses it. The primary network interface for the node
is not used for pod placement when a custom network is enabled.
1. Associate a secondary CIDR block to your cluster's VPC. For more information, see Associating a
Secondary IPv4 CIDR Block with Your VPC in the Amazon VPC User Guide.
2. Create a subnet in your VPC for each Availability Zone, using your secondary CIDR block. Your
custom subnets must be from a different VPC CIDR block than the subnet that your nodes were
launched into. For more information, see Creating a Subnet in Your VPC in the Amazon VPC User
Guide.
3. Set the AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG environment variable to true in the
aws-node DaemonSet:
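For example:
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
The kubectl describe command that follows reports the installed CNI plugin version; step 5 refers to this version.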
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
Output:
amazon-k8s-cni:1.6.3
5. If you have version 1.3 or later of the CNI installed, you can skip to step 6. Define a new ENIConfig
custom resource for your cluster.
a. Create a file called ENIConfig.yaml and paste the following content into it:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: eniconfigs.crd.k8s.amazonaws.com
spec:
  scope: Cluster
  group: crd.k8s.amazonaws.com
  version: v1alpha1
  names:
    plural: eniconfigs
    singular: eniconfig
    kind: ENIConfig
6. Create an ENIConfig custom resource for each subnet that you want to schedule pods in.
a. Create a unique file for each elastic network interface configuration. Each file must include the
contents below with a unique value for name. We highly recommend using a value for name that
matches the Availability Zone of the subnet, as this makes deployment of multi-AZ Auto Scaling
groups simpler (see step 6c below). In this example, a file named us-west-2a.yaml is created.
Replace the example values for name, subnet, and securityGroups with your own values.
In this example, we follow best practices and set the value for name to the Availability Zone
that the subnet is in. If you don't have a specific security group that you want to attach for your
pods, you can leave that value empty for now. Later, you will specify the node security group in
the ENIConfig.
Note
Each subnet and security group combination requires its own custom resource.
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-west-2a
spec:
  securityGroups:
    - sg-0dff111a1d11c1c11
  subnet: subnet-011b111c1f11fdf11
b. Apply each custom resource file that you created to your cluster with the following command:
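For example, for the file created above:
kubectl apply -f us-west-2a.yaml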
c. (Optional, but recommended for multi-Availability Zone node groups) By default, Kubernetes
applies the Availability Zone of a node to the failure-domain.beta.kubernetes.io/zone
label. If you named your ENIConfig custom resources after each Availability Zone in your VPC,
as recommended in step 6a above, then you can enable Kubernetes to automatically apply the
corresponding ENIConfig for the node's Availability Zone with the following command.
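A sketch of that setting, assuming the Availability Zone label mentioned above:
kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=failure-domain.beta.kubernetes.io/zone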
Note
Ensure that an annotation with the key k8s.amazonaws.com/eniConfig
for the ENI_CONFIG_ANNOTATION_DEF environment variable doesn't exist in
the container spec for the aws-node daemonset. If it exists, it overrides the
ENI_CONFIG_LABEL_DEF value, and should be removed. You can check to see if the
variable is set with the kubectl describe daemonset aws-node -n kube-
system | grep ENI_CONFIG_ANNOTATION_DEF command. If no output is returned,
then the variable is not set.
7. Create a new self-managed node group for each ENIConfig that you configured.
a. Determine the maximum number of pods that can be scheduled on each node using the
following formula (with custom networking, the primary network interface is not used for pods, so it is subtracted out):
maxPods = (number of network interfaces - 1) * (IPv4 addresses per interface - 1) + 2
For example, the m5.large instance type supports three network interfaces and ten IPv4
addresses per interface. Inserting the values into the formula, the instance can support a
maximum of 20 pods, as shown in the following calculation.
maxPods = (3 - 1) * (10 - 1) + 2 = 20
For more information about the maximum number of network interfaces per instance type,
see IP addresses per network interface per instance type in the Amazon EC2 User Guide for
Linux Instances.
b. Follow the steps in the Self-managed nodes tab of Launching self-managed Amazon
Linux 2 nodes (p. 96) to create each new self-managed node group. After you've opened
the AWS CloudFormation template, enter values as described in the instructions. For the
BootstrapArguments field, enter the following value.
8. After your node groups are created, record the security group that was created for each node group
and apply it to its associated ENIConfig. Edit each ENIConfig with the following command,
replacing eniconfig-name with your value:
If you followed best practices from steps 6a and 6c above, the eniconfig-name corresponds to the
Availability Zone name.
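The editing command itself can be sketched as follows, where eniconfig-name is the same placeholder used in step 8 (for example, us-west-2a):
kubectl edit eniconfig/eniconfig-name
Add the node group's security group and verify the subnet, as in the spec snippet that follows.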
spec:
  securityGroups:
    - sg-0dff222a2d22c2c22
  subnet: subnet-022b222c2f22fdf22
9. If you have any nodes in your cluster that had pods placed on them before you completed
this procedure, you should terminate them. Only new nodes that are registered with the
k8s.amazonaws.com/eniConfig label will use the new custom networking feature.
When managing an Amazon EKS cluster, you may want to know how many IP addresses have been
assigned and how many are available. The CNI metrics helper helps you to:
When a node is provisioned, the CNI plugin automatically allocates a pool of secondary IP addresses
from the node’s subnet to the primary elastic network interface (eth0). This pool of IP addresses
is known as the warm pool, and its size is determined by the node’s instance type. For example, a
c4.large instance can support three elastic network interfaces and ten IPv4 addresses per interface.
The number of IP addresses available for pods on each interface is one less than that maximum (nine
of ten) because one of the IP addresses is reserved for the elastic network interface itself. For more information, see IP
Addresses Per Network Interface Per Instance Type in the Amazon EC2 User Guide for Linux Instances.
As the pool of IP addresses is depleted, the plugin automatically attaches another elastic network
interface to the instance and allocates another set of secondary IP addresses to that interface. This
process continues until the node can no longer support additional elastic network interfaces.
The following metrics are collected for your cluster and exported to CloudWatch:
• The maximum number of elastic network interfaces that the cluster can support
1. Create a file called allow_put_metrics_data.json and populate it with the following policy
document.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cloudwatch:PutMetricData",
      "Resource": "*"
    }
  ]
}
2. Create an IAM policy called CNIMetricsHelperPolicy for your node instance profile that
allows the CNI metrics helper to make calls to AWS APIs on your behalf. Use the following AWS CLI
command to create the IAM policy in your AWS account.
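A sketch of the policy creation (the description text is illustrative):
aws iam create-policy \
  --policy-name CNIMetricsHelperPolicy \
  --description "Grants permission to write metrics to CloudWatch" \
  --policy-document file://allow_put_metrics_data.json
The aws-auth ConfigMap listing shown below is the kind of output that describing that ConfigMap produces (for example, kubectl describe configmap -n kube-system aws-auth), and it is used to identify the node IAM roles referenced in the next steps.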
Output:
Name: aws-auth
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
mapRoles:
----
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::111122223333:role/eksctl-prod-nodegroup-standard-wo-
NodeInstanceRole-GKNS581EASPU
username: system:node:{{EC2PrivateDNSName}}
Events: <none>
Record the role name for any rolearn values that have the system:nodes group assigned to
them. In the above example output, the role name is eksctl-prod-nodegroup-standard-wo-
NodeInstanceRole-GKNS581EASPU. You should have one value for each node group in your
cluster.
4. Attach the new CNIMetricsHelperPolicy IAM policy to each of the node IAM roles that you identified
earlier with the following command, substituting your own AWS account number and node IAM role
name.
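For example, using the account ID and role name from the example output above:
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::111122223333:policy/CNIMetricsHelperPolicy \
  --role-name eksctl-prod-nodegroup-standard-wo-NodeInstanceRole-GKNS581EASPU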
• Apply the CNI metrics helper manifest with the following command.
CNI upgrades
The latest version that we recommend is version 1.6.3. You can view the different releases available for
the plugin, and read the release notes for each version on GitHub.
Use the following procedures to check your CNI plugin version and upgrade to the latest recommended
version.
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
Output:
amazon-k8s-cni:1.6.2
In this example output, the CNI version is 1.6.2, which is earlier than the current recommended
version, 1.6.3. Use the following procedure to upgrade the CNI.
• If your CNI version is earlier than 1.6.3, then use the appropriate command below to update your
CNI version to the latest recommended version:
• Replace region-code in the following command with the Region that your cluster is in and
then run the modified command to replace the Region code in the file (currently us-west-2).
Amazon EKS maintains relationships with a network of partners that offer support for alternate
compatible CNI plugins. Refer to the following partners' documentation for details on supported
Kubernetes versions and qualifications and testing performed.
Amazon EKS aims to give you a wide selection of options to cover all use cases. If you develop a
commercially supported Kubernetes CNI plugin that is not listed here, then please contact our partner
team at aws-container-partners@amazon.com for more information.
Installing or upgrading CoreDNS
Amazon EKS clusters that run Kubernetes version 1.11 or later ship with CoreDNS as the default DNS and service discovery provider. If you have updated from a 1.10 cluster and you want to use CoreDNS for DNS and service
discovery, then you must install CoreDNS and remove kube-dns.
To check if your cluster is already running CoreDNS, use the following command.
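For example, list the pods in the kube-system namespace and look for names that start with coredns:
kubectl get pods -n kube-system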
If the output shows coredns in the pod names, then you're already running CoreDNS in your cluster. If
not, use the following procedure to update your DNS and service discovery provider to CoreDNS.
Note
The service for CoreDNS is still called kube-dns for backward compatibility.
export REGION="region-code"
c. Download the CoreDNS manifest from the Amazon EKS resource bucket.
d. Replace the variable placeholders in the dns.yaml file with your environment variable values
and apply the updated manifest to your cluster. The following command completes this in one
step.
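A sketch of steps c and d follows. The resource bucket URL is shown as a placeholder, and the REGION variable is the one exported above.
curl -o dns.yaml <amazon-eks-resource-bucket-url>/dns.yaml
cat dns.yaml | sed -e "s/REGION/$REGION/g" | kubectl apply -f -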
Note
It might take several minutes for the expected output to return properly, depending on
the rate of DNS requests in your cluster.
In the following expected output, the number 23 is the DNS request count total.
3. Upgrade CoreDNS to the recommended version for your cluster by completing the steps in the
section called “Upgrading CoreDNS” (p. 186).
4. Scale down the kube-dns deployment to zero replicas.
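A command of the following form accomplishes this.
kubectl scale deployment/kube-dns --replicas=0 -n kube-system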
Upgrading CoreDNS
1. Check the current version of your cluster's coredns deployment.
kubectl describe deployment coredns --namespace kube-system | grep Image | cut -d "/" -
f 3
Output:
coredns:v1.1.3
The recommended coredns versions for the corresponding Kubernetes versions are as follows:
2. If your current coredns version is 1.5.0 or later, but earlier than the recommended version, then
skip this step. If your current version is earlier than 1.5.0, then you need to modify the config map
for coredns to use the forward plug-in, rather than the proxy plug-in.
b. Replace proxy in the following line with forward. Save the file and exit the editor.
proxy . /etc/resolv.conf
4. Update coredns to the recommended version by taking the output from the previous step and
replacing the version tag with your cluster's recommended coredns version:
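A sketch of that command follows; the image repository is your cluster's existing coredns image repository, and the version tag is the recommended one for your cluster's Kubernetes version.
kubectl set image --namespace kube-system deployment.apps/coredns \
    coredns=<your-coredns-image>:<recommended-version-tag>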
Installing Calico on Amazon EKS
1. Apply the Calico manifest from the aws/amazon-vpc-cni-k8s GitHub project. This manifest
creates DaemonSets in the kube-system namespace.
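A plausible form of that command follows; the manifest path and version in the URL are an assumption.
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.6/calico.yaml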
2. Watch the kube-system DaemonSets and wait for the calico-node DaemonSet to have the
DESIRED number of pods in the READY state. When this happens, Calico is working.
Output:
• If you are done using Calico in your Amazon EKS cluster, you can delete the DaemonSet with the
following command:
Stars policy demo
Before you create any network policies, all services can communicate bidirectionally. After you apply the
network policies, you can see that the client can only communicate with the frontend service, and the
backend only accepts traffic from the frontend.
3. To connect to the management UI, forward your local port 9001 to the management-ui service
running on your cluster:
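For example, assuming that the demo installed the management-ui service into the management-ui namespace:
kubectl port-forward service/management-ui -n management-ui 9001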
4. Open a browser on your local system and point it to http://localhost:9001/. You should see the
management UI. The C node is the client service, the F node is the frontend service, and the B node
is the backend service. Each node has full communication access to all other nodes (as indicated by
the bold, colored lines).
5. Apply the following network policies to isolate the services from each other:
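The policies come from the Calico stars demo. A sketch follows; the documentation version in the URLs is a placeholder.
kubectl apply -n stars -f https://docs.projectcalico.org/<version>/getting-started/kubernetes/tutorials/stars-policy/policies/default-deny.yaml
kubectl apply -n client -f https://docs.projectcalico.org/<version>/getting-started/kubernetes/tutorials/stars-policy/policies/default-deny.yaml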
6. Refresh your browser. You see that the management UI can no longer reach any of the nodes, so
they don't show up in the UI.
7. Apply the following network policies to allow the management UI to access the services:
8. Refresh your browser. You see that the management UI can reach the nodes again, but the nodes
cannot communicate with each other.
9. Apply the following network policy to allow traffic from the frontend service to the backend service:
10. Apply the following network policy to allow traffic from the client namespace to the frontend
service:
11. (Optional) When you are done with the demo, you can delete its resources with the following
commands:
Even after deleting the resources, there can still be iptables rules on the nodes that might
interfere in unexpected ways with networking in your cluster. The only sure way to remove Calico is
to terminate all of the nodes and recycle them. To terminate all nodes, either set the Auto Scaling
Group desired count to 0, then back up to the desired number, or just terminate the nodes. If you
are unable to recycle the nodes, then see Disabling and removing Calico Policy in the Calico GitHub
repository for a last resort procedure.
Applications
Your applications are deployed in containers, which are deployed in pods in Kubernetes. A pod includes
one or more containers. Typically, one or more pods that provide the same service are deployed in a
Kubernetes service. Once you've deployed multiple pods that provide the same service, you can:
• Vertically scale pods up or down with the Kubernetes Vertical Pod Autoscaler (p. 196).
• Horizontally scale the number of pods needed to meet demand up or down with the Kubernetes
Horizontal Pod Autoscaler (p. 201).
• Create an external (for internet-accessible pods) or an internal (for private pods) load
balancer (p. 204) to balance the traffic load across pods. The load balancer routes traffic at Layer 4 of
the OSI model.
• Create an ALB Ingress Controller on Amazon EKS (p. 205) to balance the traffic load across pods. The
Application Load Balancer routes traffic at Layer 7 of the OSI model.
• If you're new to Kubernetes, the Sample deployment (p. 193) in this topic helps you deploy your first
application.
Sample deployment
Prerequisites
• You must have an existing Kubernetes cluster to deploy a sample application. If you don't have an
existing cluster, you can deploy an Amazon EKS cluster using one of the ??? (p. 3) guides.
• You must have kubectl installed on your computer. For more information, see ??? (p. 221).
• kubectl must be configured to communicate with your cluster. For more information, see
??? (p. 213).
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  labels:
    app: my-app
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
To learn more about Kubernetes services and deployments, see the Kubernetes documentation.
The containers in the sample manifest do not use network storage, but they may be able to. For
more information, see ??? (p. 143). Though not implemented in this example, we recommend
that you create Kubernetes service accounts for your pods, and associate them to AWS IAM
accounts. Specifying service accounts enables your pods to have the minimum permissions that
they require to interact with other services. For more information, see ??? (p. 260)
b. Deploy the application.
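Assuming that you saved the manifest above to a file named sample-manifest.yaml (the file name is only an example), the deployment might look like the following. The last command produces output like that shown below.
# Create the namespace if you haven't already.
kubectl create namespace my-namespace
# Deploy the service and deployment defined in the manifest.
kubectl apply -f sample-manifest.yaml
# View the resources that were created.
kubectl get all -n my-namespace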
Output
replicaset.apps/my-deployment-776d8f8fd8 3 3 3 27m
In the output, you see the service and deployment that are specified in the sample manifest
deployed in the previous step. You also see three pods, which are due to specifying 3 for
replicas in the sample manifest. For more information about pods, see Pods in the Kubernetes
documentation. Kubernetes automatically created the replicaset resource, even though it wasn't
specified in the sample manifest. For more information about ReplicaSets, see ReplicaSet in the
Kubernetes documentation.
Note
Kubernetes will maintain the number of replicas specified in the manifest. If this were
a production deployment and you wanted Kubernetes to horizontally scale the number
of replicas or vertically scale the compute resources for the pods, you'd need to use the
Horizontal Pod Autoscaler (p. 201) and the Vertical Pod Autoscaler (p. 196).
4. View the details of the deployed service.
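For example:
kubectl -n my-namespace describe service my-service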
Abbreviated output
Name: my-service
Namespace: my-namespace
Labels: app=my-app
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":
{},"labels":{"app":"my-app"},"name":"my-service","namespace":"my-namespace"}...
Selector: app=my-app
Type: ClusterIP
IP: 10.100.190.12
Port: <unset> 80/TCP
TargetPort: 80/TCP
...
In the output, the value for IP: is a unique IP address that can be reached from any pod within the
cluster.
5. View the details of one of the pods that was deployed.
Abbreviated output
Name: my-deployment-776d8f8fd8-78w66
Namespace: my-namespace
Priority: 0
Node: ip-192-168-9-36.us-west-2.compute.internal/192.168.9.36
...
IP: 192.168.16.57
IPs:
IP: 192.168.16.57
Controlled By: ReplicaSet/my-deployment-776d8f8fd8
...
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
...
Events:
Type Reason Age From
Message
---- ------ ---- ----
-------
Normal Scheduled 3m20s default-scheduler
Successfully assigned my-namespace/my-deployment-776d8f8fd8-78w66 to
ip-192-168-9-36.us-west-2.compute.internal
...
In the output, the value for IP: is a unique IP that is assigned to the pod from the CIDR block
assigned to the subnet that the node is in, by default. If you'd prefer that pods be assigned IP
addresses from different CIDR blocks than the subnet that the node is in, you can change the default
behavior. For more information, see ??? (p. 177). You can also see that the Kubernetes scheduler
scheduled the pod on the node with the IP address 192.168.9.36.
6. Execute a shell on one of the pods by replacing the value below with a value returned for one of
your pods in step 3.
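For example, using one of the pod names from the earlier output:
kubectl exec -it my-deployment-776d8f8fd8-78w66 -n my-namespace -- /bin/bash
From the shell that opens inside the container, run the cat /etc/resolv.conf command that follows.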
cat /etc/resolv.conf
Output
nameserver 10.100.0.10
search my-namespace.svc.cluster.local svc.cluster.local cluster.local us-
west-2.compute.internal
options ndots:5
In the previous output, the value for nameserver is the cluster's nameserver and is automatically
assigned as the name server for any pod deployed to the cluster.
8. Disconnect from the pod by typing exit.
9. Remove the sample service, deployment, pods, and namespace.
Deploy the Vertical Pod Autoscaler
If you have already deployed the metrics server to your cluster, you can move on to the next section. You
can check for the metrics server with the following command.
If this command returns a NotFound error, then you must deploy the metrics server to your Amazon EKS
cluster.
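A sketch of the check, and of a typical metrics server deployment, follows; the metrics server release version in the URL is an assumption.
# Check whether the metrics server is already deployed.
kubectl get deployment metrics-server -n kube-system
# If it is not, deploy it.
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml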
2. Verify that the metrics-server deployment is running the desired number of pods with the
following command.
Output
1. Open a terminal window and navigate to a directory where you would like to download the Vertical
Pod Autoscaler source code.
2. Clone the kubernetes/autoscaler GitHub repository.
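For example:
git clone https://github.com/kubernetes/autoscaler.git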
cd autoscaler/vertical-pod-autoscaler/
4. (Optional) If you have already deployed another version of the Vertical Pod Autoscaler, remove it
with the following command.
./hack/vpa-down.sh
5. Deploy the Vertical Pod Autoscaler to your cluster with the following command.
./hack/vpa-up.sh
6. Verify that the Vertical Pod Autoscaler pods have been created successfully.
Output:
Test your Vertical Pod Autoscaler installation
1. Deploy the hamster.yaml Vertical Pod Autoscaler example with the following command.
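Assuming that you are still in the autoscaler/vertical-pod-autoscaler/ directory from the previous procedure, a plausible form of the command is:
kubectl apply -f examples/hamster.yaml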
Output:
3. Describe one of the pods to view its CPU and memory reservation.
Output:
Name: hamster-c7d89d6db-rglf5
Namespace: default
Priority: 0
Node: ip-192-168-9-44.region-code.compute.internal/192.168.9.44
Start Time: Fri, 27 Sep 2019 10:35:15 -0700
Labels: app=hamster
pod-template-hash=c7d89d6db
Annotations: kubernetes.io/psp: eks.privileged
vpaUpdates: Pod resources updated by hamster-vpa: container 0:
Status: Running
IP: 192.168.23.42
IPs: <none>
Controlled By: ReplicaSet/hamster-c7d89d6db
Containers:
hamster:
Container ID: docker://
e76c2413fc720ac395c33b64588c82094fc8e5d590e373d5f818f3978f577e24
Image: k8s.gcr.io/ubuntu-slim:0.1
You can see that the original pod reserves 100 millicpu of CPU and 50 mebibytes of memory. For
this example application, 100 millicpu is less than the pod needs to run, so it is CPU-constrained.
It also reserves much less memory than it needs. The Vertical Pod Autoscaler vpa-recommender
deployment analyzes the hamster pods to see if the CPU and memory requirements are
appropriate. If adjustments are needed, the vpa-updater relaunches the pods with updated values.
4. Wait for the vpa-updater to launch a new hamster pod. This should take a minute or two. You can
monitor the pods with the following command.
Note
If you are not sure that a new pod has launched, compare the pod names with your previous
list. When the new pod launches, you will see a new pod name.
5. When a new hamster pod is started, describe it and view the updated CPU and memory
reservations.
Output:
Name: hamster-c7d89d6db-jxgfv
Namespace: default
Priority: 0
Node: ip-192-168-9-44.region-code.compute.internal/192.168.9.44
Start Time: Fri, 27 Sep 2019 10:37:08 -0700
Labels: app=hamster
pod-template-hash=c7d89d6db
Annotations: kubernetes.io/psp: eks.privileged
vpaUpdates: Pod resources updated by hamster-vpa: container 0: cpu
request, memory request
Status: Running
IP: 192.168.3.140
IPs: <none>
Controlled By: ReplicaSet/hamster-c7d89d6db
Containers:
hamster:
Container ID:
docker://2c3e7b6fb7ce0d8c86444334df654af6fb3fc88aad4c5d710eac3b1e7c58f7db
Image: k8s.gcr.io/ubuntu-slim:0.1
Image ID: docker-pullable://k8s.gcr.io/ubuntu-
slim@sha256:b6f8c3885f5880a4f1a7cf717c07242eb4858fdd5a84b5ffe35b1cf680ea17b1
Port: <none>
Here you can see that the CPU reservation has increased to 587 millicpu, which is over five times
the original value. The memory has increased to 262,144 Kilobytes, which is around 250 mebibytes,
or five times the original value. This pod was under-resourced, and the Vertical Pod Autoscaler
corrected our estimate with a much more appropriate value.
6. Describe the hamster-vpa resource to view the new recommendation.
Output:
Name: hamster-vpa
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"autoscaling.k8s.io/
v1beta2","kind":"VerticalPodAutoscaler","metadata":{"annotations":{},"name":"hamster-
vpa","namespace":"d...
API Version: autoscaling.k8s.io/v1beta2
Kind: VerticalPodAutoscaler
Metadata:
Creation Timestamp: 2019-09-27T18:22:51Z
Generation: 23
Resource Version: 14411
Self Link: /apis/autoscaling.k8s.io/v1beta2/namespaces/default/
verticalpodautoscalers/hamster-vpa
UID: d0d85fb9-e153-11e9-ae53-0205785d75b0
Spec:
Target Ref:
API Version: apps/v1
Kind: Deployment
Name: hamster
Status:
Conditions:
Last Transition Time: 2019-09-27T18:23:28Z
Status: True
Type: RecommendationProvided
Recommendation:
Container Recommendations:
Container Name: hamster
Lower Bound:
Cpu: 550m
Memory: 262144k
Target:
Cpu: 587m
Memory: 262144k
Uncapped Target:
Cpu: 587m
Memory: 262144k
Upper Bound:
Cpu: 21147m
Memory: 387863636
Events: <none>
7. When you finish experimenting with the example application, you can delete it with the following
command.
Horizontal Pod Autoscaler
The Horizontal Pod Autoscaler is a standard API resource in Kubernetes that simply requires that a
metrics source (such as the Kubernetes metrics server) is installed on your Amazon EKS cluster to work.
You do not need to deploy or install the Horizontal Pod Autoscaler on your cluster to begin scaling your
applications. For more information, see Horizontal Pod Autoscaler in the Kubernetes documentation.
Use this topic to prepare the Horizontal Pod Autoscaler for your Amazon EKS cluster and to verify that it
is working with a sample application.
Note
This topic is based on the Horizontal pod autoscaler walkthrough in the Kubernetes
documentation.
If you have already deployed the metrics server to your cluster, you can move on to the next section. You
can check for the metrics server with the following command.
If this command returns a NotFound error, then you must deploy the metrics server to your Amazon EKS
cluster.
2. Verify that the metrics-server deployment is running the desired number of pods with the
following command.
Output
Run a Horizontal Pod Autoscaler test application
1. Deploy a simple Apache web server application with the following command.
This Apache web server pod is given a 500 millicpu CPU limit and it is serving on port 80.
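A plausible form of that command, taken from the upstream walkthrough that this topic is based on, is:
kubectl apply -f https://k8s.io/examples/application/php-apache.yaml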
2. Create a Horizontal Pod Autoscaler resource for the php-apache deployment.
This command creates an autoscaler that targets 50 percent CPU utilization for the deployment,
with a minimum of one pod and a maximum of ten pods. When the average CPU load is below 50
percent, the autoscaler tries to reduce the number of pods in the deployment, to a minimum of one.
When the load is greater than 50 percent, the autoscaler tries to increase the number of pods in
the deployment, up to a maximum of ten. For more information, see How does the Horizontal Pod
Autoscaler work? in the Kubernetes documentation.
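The command looks like the following.
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10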
3. Describe the autoscaler with the following command to view its details.
Output:
Name: php-apache
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Thu, 11 Jun 2020 16:05:41 -0500
Reference: Deployment/php-apache
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 50%
Min replicas: 1
Max replicas: 10
Deployment pods: 1 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get
the target's current scale
As you can see, the current CPU load is <unknown>, because there's no load on the server yet. The
pod count is already at its lowest boundary (one), so it cannot scale in.
4. Create a load for the web server by running a container.
If you don't receive a command prompt after several seconds, you may need to press Enter. From
the command prompt, enter the following command to generate load and cause the autoscaler to
scale out the deployment.
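A sketch of the load generator, following the upstream walkthrough, is shown below.
kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh
# Then, from the prompt inside the container:
while true; do wget -q -O- http://php-apache; done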
5. To watch the deployment scale out, periodically run the following command in a separate terminal
from the terminal that you ran the previous step in.
Output:
As long as actual CPU percentage is higher than the target percentage, then the replica count
increases, up to 10. In this case, it's 250%, so the number of REPLICAS continues to increase.
Note
It may take a few minutes before you see the replica count reach its maximum. If only 6
replicas, for example, are necessary for the CPU load to remain at or under 50%, then the
load won't scale beyond 6 replicas.
6. Stop the load. In the terminal window you're generating the load in (from step 4), stop the load by
holding down the Ctrl+C keys. You can watch the replicas scale back to 1 by running the following
command again.
Output
Note
The default timeframe for scaling back down is five minutes, so it will take some time
before you see the replica count reach 1 again, even when the current CPU percentage is 0
percent. The timeframe is modifiable. For more information, see Horizontal Pod Autoscaler
in the Kubernetes documentation.
7. When you are done experimenting with your sample application, delete the php-apache resources.
Load balancing
Amazon EKS supports the Network Load Balancer and the Classic Load Balancer for pods running on
Amazon EC2 instance nodes through the Kubernetes service of type LoadBalancer. Classic Load
Balancers and Network Load Balancers are not supported for pods running on AWS Fargate (Fargate).
For Fargate ingress, we recommend that you use the ALB Ingress Controller (p. 205) on Amazon EKS
(minimum version v1.1.8).
The configuration of your load balancer is controlled by annotations that are added to the manifest for
your service. By default, Classic Load Balancers are used for LoadBalancer type services. To use the
Network Load Balancer instead, apply the following annotation to your service:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
For an example service manifest that specifies a load balancer, see Type LoadBalancer in the Kubernetes
documentation. For more information about using Network Load Balancer with Kubernetes, see Network
Load Balancer support on AWS in the Kubernetes documentation.
By default, services of type LoadBalancer create public-facing load balancers. To use an internal load
balancer, apply the following annotation to your service:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
For internal load balancers, your Amazon EKS cluster must be configured to use at least one private
subnet in your VPC. Kubernetes examines the route table for your subnets to identify whether they are
public or private. Public subnets have a route directly to the internet using an internet gateway, but
private subnets do not.
Public subnets must be tagged in the following way so that Kubernetes knows to use only those subnets
for external load balancers.
Key Value
kubernetes.io/role/elb 1
Private subnets must be tagged in the following way so that Kubernetes knows it can use the subnets
for internal load balancers. If you use an Amazon EKS AWS CloudFormation template to create your VPC
after 03/26/2020, then the subnets created by the template are tagged when they're created. For more
information about the Amazon EKS AWS CloudFormation VPC templates, see ??? (p. 159).
Key Value
kubernetes.io/role/internal-elb 1
ALB Ingress Controller on Amazon EKS
To ensure that your ingress objects use the ALB Ingress Controller, add the following annotation to your
Ingress specification. For more information, see Ingress specification in the documentation.
annotations:
kubernetes.io/ingress.class: alb
• Instance – Registers nodes within your cluster as targets for the ALB. Traffic reaching the ALB is routed
to NodePort for your service and then proxied to your pods. This is the default traffic mode. You
can also explicitly specify it with the alb.ingress.kubernetes.io/target-type: instance
annotation.
Note
Your Kubernetes service must specify the NodePort type to use this traffic mode.
• IP – Registers pods as targets for the ALB. Traffic reaching the ALB is directly routed to pods for your
service. You must specify the alb.ingress.kubernetes.io/target-type: ip annotation to use
this traffic mode.
For other available annotations supported by the ALB Ingress Controller, see Ingress annotations.
This topic shows you how to configure the ALB Ingress Controller to work with your Amazon EKS cluster.
Important
You cannot use the ALB Ingress Controller with ??? (p. 82).
1. Tag the subnets in your VPC that you want to use for your load balancers so that the ALB
Ingress Controller knows that it can use them. For more information, see Subnet tagging
requirement (p. 165). If you deployed your cluster with eksctl, then the tags are already applied.
• All subnets in your VPC should be tagged accordingly so that Kubernetes can discover them.
Key Value
kubernetes.io/cluster/<cluster-name> shared
• Public subnets in your VPC should be tagged accordingly so that Kubernetes knows to use only
those subnets for external load balancers.
Key Value
kubernetes.io/role/elb 1
• Private subnets must be tagged in the following way so that Kubernetes knows it can use the
subnets for internal load balancers. If you use an Amazon EKS AWS CloudFormation template
to create your VPC after 03/26/2020, then the subnets created by the template are tagged
when they're created. For more information about the Amazon EKS AWS CloudFormation VPC
templates, see ??? (p. 159).
Key Value
kubernetes.io/role/internal-elb 1
2. Create an IAM OIDC provider and associate it with your cluster. If you don't have eksctl version
0.25.0-rc.0 or later installed, complete the instructions in Installing or upgrading eksctl (p. 226)
to install or upgrade it. You can check your installed version with eksctl version.
3. Download an IAM policy for the ALB Ingress Controller pod that allows it to make calls to AWS APIs
on your behalf. You can view the policy document on GitHub.
6. Create an IAM role for the ALB Ingress Controller and attach the role to the service account created
in the previous step. If you didn't create your cluster with eksctl, then use the instructions on the
AWS Management Console or AWS CLI tabs.
eksctl
The command that follows only works for clusters that were created with eksctl.
AWS Management Console
1. Using the instructions on the AWS Management Console tab in Create an IAM
role (p. 268), create an IAM role named eks-alb-ingress-controller and attach the
ALBIngressControllerIAMPolicy IAM policy that you created in a previous step to it.
Note the Amazon Resource Name (ARN) of the role, once you've created it.
2. Annotate the Kubernetes service account with the ARN of the role that you created with the
following command.
AWS CLI
1. Using the instructions on the AWS CLI tab in Create an IAM role (p. 268),
create an IAM role named eks-alb-ingress-controller and attach the
ALBIngressControllerIAMPolicy IAM policy that you created in a previous step to it.
Note the Amazon Resource Name (ARN) of the role, once you've created it.
2. Annotate the Kubernetes service account with the ARN of the role that you created with the
following command.
8. Open the ALB Ingress Controller deployment manifest for editing with the following command.
9. Add a line for the cluster name after the --ingress-class=alb line. If you're running the ALB
Ingress Controller on Fargate, then you must also add the lines for the VPC ID, and AWS Region
name of your cluster. Once you've added the appropriate lines, save and close the file.
spec:
  containers:
  - args:
    - --ingress-class=alb
    - --cluster-name=prod
    - --aws-vpc-id=vpc-03468a8157edca5bd
    - --aws-region=region-code
10. Confirm that the ALB Ingress Controller is running with the following command.
Expected output:
1. Deploy the game 2048 as a sample application to verify that the ALB Ingress Controller creates an
Application Load Balancer as a result of the Ingress object. You can run the sample application on
a cluster that has Amazon EC2 nodes only, one or more Fargate pods, or a combination of the two.
If your cluster has Amazon EC2 nodes and no Fargate pods, then select the Amazon EC2 nodes
only tab. If your cluster has any existing Fargate pods, or you want to deploy the application to new
Fargate pods, then select the Fargate tab. For more information about Fargate pods, see Getting
started with AWS Fargate using Amazon EKS (p. 113) .
Fargate
Ensure that the cluster that you want to use Fargate in is in the list of supported
Regions (p. 111).
a. Create a Fargate profile that includes the sample application's namespace with the following
command. Replace the example-values with your own values.
Note
The command that follows only works for clusters that were created with eksctl.
If you didn't create your cluster with eksctl, then you can create the profile with
the AWS Management Console (p. 118), using the same values for name and
namespace that are in the command below.
b. Download and apply the manifest files to create the Kubernetes namespace, deployment, and
service with the following commands.
2. After a few minutes, verify that the Ingress resource was created with the following command.
Output:
Note
If your Ingress has not been created after several minutes, run the following command to
view the Ingress controller logs. These logs may contain error messages that can help you
diagnose any issues with your deployment.
3. Open a browser and navigate to the ADDRESS URL from the previous command output to see the
sample application.
4. When you finish experimenting with your sample application, delete it with the following
commands.
Cluster authentication
Amazon EKS uses IAM to provide authentication to your Kubernetes cluster (through the aws eks get-
token command, available in version 1.16.156 or later of the AWS CLI, or the AWS IAM Authenticator for
Kubernetes), but it still relies on native Kubernetes Role Based Access Control (RBAC) for authorization.
This means that IAM is only used for authentication of valid IAM entities. All permissions for interacting
with your Amazon EKS cluster's Kubernetes API are managed through the native Kubernetes RBAC system.
Topics
• Installing aws-iam-authenticator (p. 210)
• Create a kubeconfig for Amazon EKS (p. 213)
• Managing users or IAM roles for your cluster (p. 217)
Installing aws-iam-authenticator
Amazon EKS uses IAM to provide authentication to your Kubernetes cluster through the AWS IAM
authenticator for Kubernetes. You can configure the stock kubectl client to work with Amazon EKS by
installing the AWS IAM authenticator for Kubernetes and modifying your kubectl configuration file to
use it for authentication.
Note
If you're running the AWS CLI version 1.16.156 or later, then you don't need to install the
authenticator. Instead, you can use the aws eks get-token command. For more information,
see the section called “Create kubeconfig manually” (p. 215).
If you're unable to use the AWS CLI version 1.16.156 or later to create the kubeconfig file, then select
the operating system that you want to install the aws-iam-authenticator on.
macOS
To install aws-iam-authenticator with Homebrew
1. If you do not already have Homebrew installed on your Mac, install it with the following command.
aws-iam-authenticator help
You can also install the AWS-vended version of the aws-iam-authenticator by following these steps.
2. (Optional) Verify the downloaded binary with the SHA-256 sum provided in the same bucket prefix.
c. Compare the generated SHA-256 sum in the command output against your downloaded aws-
iam-authenticator.sha256 file. The two should match.
3. Apply execute permissions to the binary.
chmod +x ./aws-iam-authenticator
aws-iam-authenticator help
Linux
To install aws-iam-authenticator on Linux
2. (Optional) Verify the downloaded binary with the SHA-256 sum provided in the same bucket prefix.
a. Download the SHA-256 sum for your system. To download the ARM version, change amd64 to
arm64 before running the command.
c. Compare the generated SHA-256 sum in the command output against your downloaded aws-
iam-authenticator.sha256 file. The two should match.
3. Apply execute permissions to the binary.
chmod +x ./aws-iam-authenticator
aws-iam-authenticator help
Windows
To install aws-iam-authenticator on Windows with Chocolatey
1. If you do not already have Chocolatey installed on your Windows system, see Installing chocolatey.
2. Open a PowerShell terminal window and install the aws-iam-authenticator package with the
following command:
aws-iam-authenticator help
1. Open a PowerShell terminal window and download the Amazon EKS-vended aws-iam-
authenticator binary from Amazon S3:
2. (Optional) Verify the downloaded binary with the SHA-256 sum provided in the same bucket prefix.
Get-FileHash aws-iam-authenticator.exe
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match, although the PowerShell output will be uppercase.
3. Copy the binary to a folder in your PATH. If you have an existing directory in your PATH that you
use for command line utilities, copy the binary to that directory. Otherwise, complete the following
steps.
a. Create a new directory for your command line binaries, such as C:\bin.
b. Copy the aws-iam-authenticator.exe binary to your new directory.
c. Edit your user or system PATH environment variable to add the new directory to your PATH.
d. Close your PowerShell terminal and open a new one to pick up the new PATH variable.
4. Test that the aws-iam-authenticator binary works.
aws-iam-authenticator help
If you have an existing Amazon EKS cluster, create a kubeconfig file for that cluster. For more
information, see Create a kubeconfig for Amazon EKS (p. 213). Otherwise, see Creating an Amazon
EKS cluster (p. 28) to create a new Amazon EKS cluster.
Create a kubeconfig for Amazon EKS
This section offers two procedures to create or update your kubeconfig. You can create or update a
kubeconfig automatically with the AWS CLI update-kubeconfig command, or you can create a kubeconfig
manually using the AWS CLI or the aws-iam-authenticator.
Create kubeconfig automatically
Amazon EKS uses the aws eks get-token command, available in version 1.16.156 or later of the AWS
CLI or the AWS IAM Authenticator for Kubernetes with kubectl for cluster authentication. If you have
installed the AWS CLI on your system, then by default the AWS IAM Authenticator for Kubernetes will
use the same credentials that are returned with the following command:
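That is, the credentials returned by:
aws sts get-caller-identity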
For more information, see Configuring the AWS CLI in the AWS Command Line Interface User Guide.
1. Ensure that you have version 1.16.156 or later of the AWS CLI installed. To install or upgrade the
AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User
Guide.
Note
Your system's Python version must be 2.7.9 or later. Otherwise, you receive hostname
doesn't match errors with AWS CLI calls to Amazon EKS.
You can check your AWS CLI version with the following command:
aws --version
Important
Package managers such as yum, apt-get, or Homebrew for macOS are often behind several
versions of the AWS CLI. To ensure that you have the latest version, see Installing the AWS
Command Line Interface in the AWS Command Line Interface User Guide.
2. Use the AWS CLI update-kubeconfig command to create or update your kubeconfig for your cluster.
• By default, the resulting configuration file is created at the default kubeconfig path (.kube/
config) in your home directory or merged with an existing kubeconfig at that location. You can
specify another path with the --kubeconfig option.
• You can specify an IAM role ARN with the --role-arn option to use for authentication when you
issue kubectl commands. Otherwise, the IAM entity in your default AWS CLI or SDK credential
chain is used. You can view your default AWS CLI or SDK identity by running the aws sts get-
caller-identity command.
• For more information, see the help page with the aws eks update-kubeconfig help command or
see update-kubeconfig in the AWS CLI Command Reference.
Note
To run the following command, you must have permission to use the
eks:DescribeCluster API action with the cluster that you specify. For more information,
see Amazon EKS identity-based policy examples (p. 249).
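A sketch of the command, and of a quick test, follows; replace the placeholders with your own cluster name and Region.
aws eks update-kubeconfig --name <cluster-name> --region <region-code>
# Test your configuration.
kubectl get svc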
Note
If you receive any authorization or resource type errors, see Unauthorized or access denied
(kubectl) (p. 306) in the troubleshooting section.
Output:
Create kubeconfig manually
1. Create the default ~/.kube directory if it does not already exist.
mkdir -p ~/.kube
2. Open your favorite text editor and copy one of the kubeconfig code blocks below into it,
depending on your preferred client token method.
• To use the AWS CLI aws eks get-token command (requires version 1.16.156 or later of the AWS
CLI):
apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args:
        - "eks"
        - "get-token"
        - "--cluster-name"
        - "<cluster-name>"
        # - "--role"
        # - "<role-arn>"
      # env:
        # - name: AWS_PROFILE
        #   value: "<aws-profile>"
apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        # - "-r"
        # - "<role-arn>"
      # env:
        # - name: AWS_PROFILE
        #   value: "<aws-profile>"
3. Replace the <endpoint-url> with the endpoint URL that was created for your cluster.
4. Replace the <base64-encoded-ca-cert> with the certificateAuthority.data that was
created for your cluster.
5. Replace the <cluster-name> with your cluster name.
6. (Optional) To assume an IAM role to perform cluster operations instead of the default AWS
credential provider chain, uncomment the -r or --role and <role-arn> lines and substitute an
IAM role ARN to use with your user.
7. (Optional) To always use a specific named AWS credential profile (instead of the default AWS
credential provider chain), uncomment the env lines and substitute <aws-profile> with the
profile name to use.
8. Save the file to the default kubectl folder, with your cluster name in the file name. For example, if
your cluster name is devel, save the file to ~/.kube/config-devel.
9. Add that file path to your KUBECONFIG environment variable so that kubectl knows where to look
for your cluster configuration.
export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel
10. (Optional) Add the configuration to your shell initialization file so that it is configured when you
open a shell.
[System.Environment]::SetEnvironmentVariable('KUBECONFIG', $ENV:KUBECONFIG,
'Machine')
Note
If you receive any authorization or resource type errors, see Unauthorized or access denied
(kubectl) (p. 306) in the troubleshooting section.
Output:
Managing users or IAM roles for your cluster
The aws-auth ConfigMap is applied as part of the Getting started with Amazon EKS (p. 3) guide, which
provides a complete end-to-end walkthrough from creating an Amazon EKS cluster to deploying a
sample Kubernetes application. It is initially created to allow your nodes to join your cluster, but you
also use this ConfigMap to add RBAC access to IAM users and roles. If you have not launched nodes and
applied the aws-auth ConfigMap, you can do so with the following procedure.
If you receive an error stating "Error from server (NotFound): configmaps "aws-auth"
not found", then proceed with the following steps to apply the stock ConfigMap.
2. Download, edit, and apply the AWS authenticator configuration map.
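A sketch of the download and apply commands follows; the resource bucket URL is shown as a placeholder.
# a. Download the configuration map.
curl -o aws-auth-cm.yaml <amazon-eks-resource-bucket-url>/aws-auth-cm.yaml
# After editing the file as described below, apply the configuration map.
kubectl apply -f aws-auth-cm.yaml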
b. Open the file with your favorite text editor. Replace <ARN of instance role (not
instance profile)> with the Amazon Resource Name (ARN) of the IAM role associated with
your nodes, and save the file. Do not modify any other lines in this file.
Important
The role ARN cannot include a path. The format of the role ARN must be
arn:aws:iam::123456789012:role/role-name. For more information, see aws-
auth ConfigMap does not grant access to the cluster (p. 310).
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
You can inspect the AWS CloudFormation stack outputs for your worker node groups and look
for the following values:
Note
If you receive any authorization or resource type errors, see Unauthorized or access
denied (kubectl) (p. 306) in the troubleshooting section.
3. Watch the status of your nodes and wait for them to reach the Ready status.
1. Ensure that the AWS credentials that kubectl is using are already authorized for your cluster. The
IAM user that created the cluster has these permissions by default.
2. Open the aws-auth ConfigMap.
Note
If you receive an error stating "Error from server (NotFound): configmaps "aws-
auth" not found", then use the previous procedure to apply the stock ConfigMap.
Example ConfigMap:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be reopened with the relevant failures.
#
apiVersion: v1
data:
mapRoles: |
- rolearn: arn:aws:iam::111122223333:role/doc-test-nodes-NodeInstanceRole-
WDO5P42N3ETB
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"mapRoles":"- rolearn: arn:aws:iam::111122223333:role/
doc-test-nodes-NodeInstanceRole-WDO5P42N3ETB\n username: system:node:
{{EC2PrivateDNSName}}\n groups:\n - system:bootstrappers\n -
system:nodes\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"aws-
auth","namespace":"kube-system"}}
creationTimestamp: 2018-04-04T18:49:10Z
name: aws-auth
namespace: kube-system
resourceVersion: "780"
selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
uid: dcc31de5-3838-11e8-af26-02e00430057c
3. Add your IAM users, roles, or AWS accounts to the configMap. You cannot add IAM groups to the
configMap.
• To add an IAM user: add the user details to the mapUsers section of the ConfigMap, under
data. Add this section if it does not already exist in the file. Each entry supports the following
parameters:
• userarn: The ARN of the IAM user to add.
• username: The user name within Kubernetes to map to the IAM user.
• groups: A list of groups within Kubernetes to which the user is mapped. For more
information, see Default Roles and Role Bindings in the Kubernetes documentation.
• To add an IAM role (for example, for federated users): add the role details to the mapRoles
section of the ConfigMap, under data. Add this section if it does not already exist in the file. Each
entry supports the following parameters:
• rolearn: The ARN of the IAM role to add.
• username: The user name within Kubernetes to map to the IAM role.
• groups: A list of groups within Kubernetes to which the role is mapped. For more information,
see Default Roles and Role Bindings in the Kubernetes documentation.
• A mapRoles section that adds the node instance role so that nodes can register themselves with
the cluster.
• A mapUsers section with the AWS users admin from the default AWS account, and ops-user
from another AWS account. Both users are added to the system:masters group.
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be reopened with the relevant failures.
#
apiVersion: v1
data:
mapRoles: |
- rolearn: arn:aws:iam::555555555555:role/devel-nodes-NodeInstanceRole-74RF4UBDUKL6
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
mapUsers: |
- userarn: arn:aws:iam::555555555555:user/admin
username: admin
groups:
- system:masters
- userarn: arn:aws:iam::111122223333:user/ops-user
username: ops-user
groups:
- system:masters
Cluster management
This chapter includes the following topics to help you manage your cluster.
• the section called “Installing kubectl” (p. 221) – Learn how to install kubectl; a command line tool
for managing Kubernetes.
• the section called “eksctl” (p. 226) – Learn how to install a simple command line utility for creating
and managing Kubernetes clusters on Amazon EKS.
• the section called “Tutorial: Deploy Kubernetes Dashboard” (p. 228) – Learn how to install the
dashboard, a web-based user interface for your Kubernetes cluster and applications.
• the section called “Metrics server” (p. 231) – The Kubernetes Metrics Server is an aggregator
of resource usage data in your cluster. It is not deployed by default in your cluster, but is used by
Kubernetes add-ons, such as the Kubernetes Dashboard and the section called “Horizontal Pod
Autoscaler” (p. 201). In this topic you learn how to install the Metrics Server.
• the section called “Prometheus metrics” (p. 232) – The Kubernetes API server exposes a number of
metrics that are useful for monitoring and analysis. This topic explains how to deploy Prometheus and
some of the ways that you can use it to view and analyze what your cluster is doing.
• the section called “Using Helm” (p. 235) – The Helm package manager for Kubernetes helps you
install and manage applications on your Kubernetes cluster. This topic helps you install and run the
Helm binaries so that you can install and manage charts using the Helm CLI on your local computer.
• the section called “Tagging your resources” (p. 235) – To help you manage your Amazon EKS
resources, you can assign your own metadata to each resource in the form of tags. This topic describes
tags and shows you how to create them.
• the section called “Service quotas” (p. 239) – Your AWS account has default quotas, formerly referred
to as limits, for each AWS service. Learn about the quotas for Amazon EKS and how to increase them.
Installing kubectl
Kubernetes uses a command line utility called kubectl for communicating with the cluster API server.
The kubectl binary is available in many operating system package managers, and this option is often
much easier than a manual download and install process. You can follow the instructions for your specific
operating system or package manager in the Kubernetes documentation to install.
This topic helps you to download and install the Amazon EKS-vended kubectl binaries for macOS, Linux,
and Windows operating systems. These binaries are identical to the upstream community versions, and
are not unique to Amazon EKS or AWS.
Note
You must use a kubectl version that is within one minor version difference of your Amazon EKS
cluster control plane. For example, a 1.16 kubectl client should work with Kubernetes 1.15,
1.16 and 1.17 clusters.
macOS
1. Download the Amazon EKS-vended kubectl binary for your cluster's Kubernetes version from
Amazon S3:
• Kubernetes 1.17:
• Kubernetes 1.16:
• Kubernetes 1.15:
• Kubernetes 1.14:
2. (Optional) Verify the downloaded binary with the SHA-256 sum for your binary.
a. Download the SHA-256 sum for your cluster's Kubernetes version for macOS:
• Kubernetes 1.17:
• Kubernetes 1.16:
• Kubernetes 1.15:
• Kubernetes 1.14:
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match.
3. Apply execute permissions to the binary.
chmod +x ./kubectl
4. Copy the binary to a folder in your PATH. If you have already installed a version of kubectl, then
we recommend creating a $HOME/bin/kubectl and ensuring that $HOME/bin comes first in
your $PATH.
5. (Optional) Add the $HOME/bin path to your shell initialization file so that it is configured when
you open a shell.
6. After you install kubectl, you can verify its version with the following command:
Linux
1. Download the Amazon EKS-vended kubectl binary for your cluster's Kubernetes version
from Amazon S3. To download the ARM version, change amd64 to arm64 before running the
command.
• Kubernetes 1.17:
• Kubernetes 1.16:
• Kubernetes 1.15:
• Kubernetes 1.14:
2. (Optional) Verify the downloaded binary with the SHA-256 sum for your binary.
a. Download the SHA-256 sum for your cluster's Kubernetes version for Linux. To download
the ARM version, change amd64 to arm64 before running the command.
• Kubernetes 1.17:
• Kubernetes 1.16:
• Kubernetes 1.15:
• Kubernetes 1.14:
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match.
3. Apply execute permissions to the binary.
chmod +x ./kubectl
4. Copy the binary to a folder in your PATH. If you have already installed a version of kubectl, then
we recommend creating a $HOME/bin/kubectl and ensuring that $HOME/bin comes first in
your $PATH.
5. (Optional) Add the $HOME/bin path to your shell initialization file so that it is configured when
you open a shell.
Note
This step assumes you are using the Bash shell; if you are using another shell, change
the command to use your specific shell initialization file.
6. After you install kubectl, you can verify its version with the following command:
Windows
• Kubernetes 1.17:
• Kubernetes 1.16:
• Kubernetes 1.15:
• Kubernetes 1.14:
3. (Optional) Verify the downloaded binary with the SHA-256 sum for your binary.
a. Download the SHA-256 sum for your cluster's Kubernetes version for Windows:
• Kubernetes 1.17:
• Kubernetes 1.16:
• Kubernetes 1.15:
• Kubernetes 1.14:
Get-FileHash kubectl.exe
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match, although the PowerShell output will be uppercase.
4. Copy the binary to a folder in your PATH. If you have an existing directory in your PATH that
you use for command line utilities, copy the binary to that directory. Otherwise, complete the
following steps.
a. Create a new directory for your command line binaries, such as C:\bin.
b. Copy the kubectl.exe binary to your new directory.
c. Edit your user or system PATH environment variable to add the new directory to your PATH.
d. Close your PowerShell terminal and open a new one to pick up the new PATH variable.
5. After you install kubectl, you can verify its version with the following command:
eksctl
For more information and to see the official documentation, visit https://eksctl.io/.
Installing or upgrading eksctl
Choose the tab below that best represents your client setup.
macOS
The easiest way to get started with Amazon EKS and macOS is by installing eksctl with Homebrew.
The eksctl Homebrew recipe installs eksctl and any other dependencies that are required for
Amazon EKS, such as kubectl. The recipe also installs the aws-iam-authenticator (p. 210),
which is required if you don't have the AWS CLI version 1.16.156 or higher installed.
1. If you do not already have Homebrew installed on macOS, install it with the following
command.
4. Test that your installation was successful with the following command.
eksctl version
Note
The GitTag version should be at least 0.25.0-rc.0. If not, check your terminal
output for any installation or upgrade errors, or manually download an archive of the
release from https://github.com/weaveworks/eksctl/releases/download/0.25.0-rc.0/
eksctl_Darwin_amd64.tar.gz, extract eksctl, and then execute it.
Linux
1. Download and extract the latest release of eksctl with the following command.
3. Test that your installation was successful with the following command.
eksctl version
Note
The GitTag version should be at least 0.25.0-rc.0. If not, check your terminal
output for any installation or upgrade errors, or replace the address in step 1 with
https://github.com/weaveworks/eksctl/releases/download/0.25.0-
rc.0/eksctl_Linux_amd64.tar.gz and complete steps 1-3 again.
Windows
1. If you do not already have Chocolatey installed on your Windows system, see Installing
Chocolatey.
2. Install or upgrade eksctl .
3. Test that your installation was successful with the following command.
eksctl version
Note
The GitTag version should be at least 0.25.0-rc.0. If not, check your terminal
output for any installation or upgrade errors, or manually download an archive of the
release from https://github.com/weaveworks/eksctl/releases/download/0.25.0-rc.0/
eksctl_Windows_amd64.zip, extract eksctl, and then execute it.
Tutorial: Deploy Kubernetes Dashboard
Prerequisites
This tutorial assumes the following:
• You have created an Amazon EKS cluster by following the steps in Getting started with Amazon
EKS (p. 3).
• The security groups for your control plane elastic network interfaces and nodes follow the
recommended settings in Amazon EKS security group considerations (p. 166).
• You are using a kubectl client that is configured to communicate with your Amazon EKS cluster (p. 23).
2. Verify that the metrics-server deployment is running the desired number of pods with the
following command.
Output
Step 2: Deploy the dashboard
Output:
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
1. Create a file called eks-admin-service-account.yaml with the text below. This manifest
defines a service account and cluster role binding called eks-admin.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system
2. Apply the service account and cluster role binding to your cluster.
Output:
1. Retrieve an authentication token for the eks-admin service account. Copy the
<authentication_token> value from the output. You use this token to connect to the dashboard.
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-
admin | awk '{print $1}')
Output:
Name: eks-admin-token-b5zv4
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=eks-admin
kubernetes.io/service-account.uid=bcfe66ac-39be-11e8-97e8-026dce96b6e8
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: <authentication_token>
kubectl proxy
3. To access the dashboard endpoint, open the following link with a web browser: http://
localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/
proxy/#!/login.
4. Choose Token, paste the <authentication_token> output from the previous command into the
Token field, and choose SIGN IN.
Note
It may take a few minutes before CPU and memory metrics appear in the dashboard.
Prometheus metrics
2. Verify that the metrics-server deployment is running the desired number of pods with the
following command.
Output
Example output:
...
# HELP rest_client_requests_total Number of HTTP requests, partitioned by status code, method, and host.
# TYPE rest_client_requests_total counter
rest_client_requests_total{code="200",host="127.0.0.1:21362",method="POST"} 4994
rest_client_requests_total{code="200",host="127.0.0.1:443",method="DELETE"} 1
rest_client_requests_total{code="200",host="127.0.0.1:443",method="GET"} 1.326086e+06
rest_client_requests_total{code="200",host="127.0.0.1:443",method="PUT"} 862173
rest_client_requests_total{code="404",host="127.0.0.1:443",method="GET"} 2
rest_client_requests_total{code="409",host="127.0.0.1:443",method="POST"} 3
rest_client_requests_total{code="409",host="127.0.0.1:443",method="PUT"} 8
# HELP ssh_tunnel_open_count Counter of ssh tunnel total open attempts
# TYPE ssh_tunnel_open_count counter
ssh_tunnel_open_count 0
# HELP ssh_tunnel_open_fail_count Counter of ssh tunnel failed open attempts
# TYPE ssh_tunnel_open_fail_count counter
ssh_tunnel_open_fail_count 0
This raw output returns verbatim what the API server exposes. These metrics are represented in a
Prometheus format. This format allows the API server to expose different metrics broken down by line.
Each line includes a metric name, tags, and a value.
metric_name{"tag"="value"[,...]} value
While this endpoint is useful if you are looking for a specific metric, you typically want to analyze these
metrics over time. To do this, you can deploy Prometheus into your cluster. Prometheus is a monitoring
and time series database that scrapes exposed endpoints and aggregates data, allowing you to filter,
graph, and query the results.
Deploying Prometheus
This topic helps you deploy Prometheus into your cluster with Helm V3. If you already have Helm
installed, you can check your version with the helm version command. Helm is a package manager
for Kubernetes clusters. For more information about Helm and how to install it, see Using Helm with
Amazon EKS (p. 235).
After you configure Helm for your Amazon EKS cluster, you can use it to deploy Prometheus with the
following steps.
2. Deploy Prometheus.
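The deployment commands are not included in this extract. A minimal sketch, assuming the prometheus-community chart repository (the chart name and values used by the original guide may differ):
kubectl create namespace prometheus
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/prometheus --namespace prometheus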
3. Verify that all of the pods in the prometheus namespace are in the READY state.
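A typical check (the exact command is not shown in this extract):
kubectl get pods -n prometheus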
Output:
4. Use kubectl to port forward the Prometheus console to your local machine.
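A typical port forward command, assuming the Prometheus server deployment is named prometheus-server and listens on port 9090:
kubectl port-forward -n prometheus deploy/prometheus-server 9090
You can then point a web browser to http://localhost:9090 to view the Prometheus console.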
All of the Kubernetes endpoints that are connected to Prometheus using service discovery are
displayed.
• If you're using macOS with Homebrew, install the binaries with the following command.
• If you're using Windows with Chocolatey, install the binaries with the following command.
• If you're using Linux, install the binaries with the following commands.
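The install commands themselves are not included in this extract; typical commands for each platform are:
# macOS (Homebrew)
brew install helm
# Windows (Chocolatey)
choco install kubernetes-helm
# Linux (the official Helm installer script)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash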
2. To pick up the new binary in your PATH, close your current terminal window and open a new one.
3. Confirm that Helm is running with the following command.
helm help
4. At this point, you can run any Helm commands (such as helm install chart_name) to install,
modify, delete, or query Helm charts in your cluster. If you're new to Helm and don't have a specific
chart to install, you can:
• Experiment by installing an example chart. See Install an example chart in the Helm Quickstart
guide.
• Install an Amazon EKS chart from the eks-charts GitHub repo or from Helm Hub.
Contents
• Tag basics (p. 236)
• Tagging your resources (p. 236)
• Tag restrictions (p. 237)
• Working with tags using the console (p. 237)
• Working with tags using the CLI, API, or eksctl (p. 238)
Tag basics
A tag is a label that you assign to an AWS resource. Each tag consists of a key and an optional value, both
of which you define.
Tags enable you to categorize your AWS resources by, for example, purpose, owner, or environment.
When you have many resources of the same type, you can quickly identify a specific resource based on
the tags you've assigned to it. For example, you can define a set of tags for your Amazon EKS clusters to
help you track each cluster's owner and stack level. We recommend that you devise a consistent set of
tag keys for each resource type. You can then search and filter the resources based on the tags that you
add.
Tags are not automatically assigned to your resources. After you add a tag, you can edit tag keys and
values or remove tags from a resource at any time. If you delete a resource, any tags for the resource are
also deleted.
Tags don't have any semantic meaning to Amazon EKS and are interpreted strictly as a string of
characters. You can set the value of a tag to an empty string, but you can't set the value of a tag to null.
If you add a tag that has the same key as an existing tag on that resource, the new value overwrites the
earlier value.
You can tag new or existing cluster resources using the AWS Management Console, the AWS CLI, or the
Amazon EKS API. You can tag only new cluster resources using eksctl.
If you use AWS Identity and Access Management (IAM), you can control which users in your AWS account
have permission to manage tags.
If you're using the Amazon EKS console, then you can apply tags to new or existing resources at any time.
You can do this by using the Tags tab on the relevant resource page. If you're using eksctl, then you
can apply tags to resources when they are created using the --tags option.
If you're using the Amazon EKS API, the AWS CLI, or an AWS SDK, you can apply tags to new resources
using the tags parameter on the relevant API action. You can apply tags to existing resources using the
TagResource API action. For more information, see TagResource.
Some resource-creating actions enable you to specify tags for a resource when the resource is created. If
tags cannot be applied while a resource is being created, the resource fails to be created. This mechanism
ensures that resources you intended to tag on creation are either created with specified tags or not
created at all. If you tag resources at the time of creation, you don't need to run custom tagging scripts
after creating a resource.
The following table describes the Amazon EKS resources that can be tagged and the resources that can
be tagged on creation.
Tag restrictions
The following basic restrictions apply to tags:
When you select a resource-specific page in the Amazon EKS console, it displays a list of those resources.
For example, if you select Clusters from the navigation pane, the console displays a list of Amazon EKS
clusters. When you select a resource from one of these lists (for example, a specific cluster) that supports
tags, you can view and manage its tags on the Tags tab.
• To add a tag — choose Add tag and then specify the key and value for each tag.
• To delete a tag — choose Remove tag.
6. Repeat this process for each tag you want to add or delete, and then choose Update to finish.
The following examples show how to tag or untag resources using the AWS CLI.
The following command lists the tags associated with an existing resource.
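The commands themselves are not included in this extract. Typical examples, using an illustrative cluster ARN:
# List the tags on a resource
aws eks list-tags-for-resource --resource-arn arn:aws:eks:region-code:111122223333:cluster/dev
# Add a tag to a resource
aws eks tag-resource --resource-arn arn:aws:eks:region-code:111122223333:cluster/dev --tags team=devs
# Remove a tag from a resource
aws eks untag-resource --resource-arn arn:aws:eks:region-code:111122223333:cluster/dev --tag-keys team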
Some resource-creating actions enable you to specify tags when you create the resource. The following
actions support tagging when creating a resource.
To view Amazon EKS and Fargate service quotas using the AWS Management Console
In the Service quotas list, you can see the service quota name, applied value (if it is available),
AWS default quota, and whether the quota value is adjustable.
4. To view additional information about a service quota, such as the description, choose the quota
name.
5. (Optional) To request a quota increase, select the quota that you want to increase, select
Request quota increase, enter or select the required information, and select Request.
To work more with service quotas using the AWS Management Console, see the Service Quotas
User Guide. To request a quota increase, see Requesting a Quota Increase in the Service Quotas User
Guide.
AWS CLI
To view Amazon EKS and Fargate service quotas using the AWS CLI
aws service-quotas list-aws-default-service-quotas --service-code fargate --output table
Note
The quota returned is the maximum number of Amazon ECS tasks or Amazon EKS pods
running concurrently on Fargate in this account in the current Region.
To work more with service quotas using the AWS CLI, see the Service Quotas AWS CLI Command
Reference. To request a quota increase, see the request-service-quota-increase command in
the AWS CLI Command Reference.
The following tables provide the default quotas (also referred to as limits) for Amazon EKS and AWS
Fargate for an AWS account.
The following quotas are Amazon EKS service quotas. Most of these service quotas are listed under the
Amazon Elastic Kubernetes Service (Amazon EKS) namespace in the Service Quotas console. To request a
quota increase, see Requesting a quota increase in the Service Quotas User Guide.
The following quota is an Amazon EKS on AWS Fargate service quota. The service quota is listed under
the AWS Fargate namespace in the Service Quotas console. To request a quota increase, see Requesting a
quota increase in the Service Quotas User Guide.
Security is a shared responsibility between AWS and you. The shared responsibility model describes this
as security of the cloud and security in the cloud:
• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services
in the AWS Cloud. For Amazon EKS, AWS is responsible for the Kubernetes control plane, which
includes the control plane nodes and etcd database. Third-party auditors regularly test and verify the
effectiveness of our security as part of the AWS compliance programs. To learn about the compliance
programs that apply to Amazon EKS, see AWS Services in Scope by Compliance Program.
• Security in the cloud – Your responsibility includes the following areas.
• The security configuration of the data plane, including the configuration of the security groups that
allow traffic to pass from the Amazon EKS control plane into the customer VPC
• The configuration of the nodes and the containers themselves
• The node's guest operating system (including updates and security patches)
• Other associated application software:
• Setting up and managing network controls, such as firewall rules
• Managing platform-level identity and access management, either with or in addition to IAM
• The sensitivity of your data, your company’s requirements, and applicable laws and regulations
This documentation helps you understand how to apply the shared responsibility model when using
Amazon EKS. The following topics show you how to configure Amazon EKS to meet your security and
compliance objectives. You also learn how to use other AWS services that help you to monitor and secure
your Amazon EKS resources.
Topics
• Identity and access management for Amazon EKS (p. 242)
• Logging and monitoring in Amazon EKS (p. 274)
• Compliance validation for Amazon EKS (p. 274)
• Resilience in Amazon EKS (p. 275)
• Infrastructure security in Amazon EKS (p. 275)
• Configuration and vulnerability analysis in Amazon EKS (p. 276)
• Pod security policy (p. 276)
Audience
How you use AWS Identity and Access Management (IAM) differs, depending on the work you do in
Amazon EKS.
Service user – If you use the Amazon EKS service to do your job, then your administrator provides you
with the credentials and permissions that you need. As you use more Amazon EKS features to do your
work, you might need additional permissions. Understanding how access is managed can help you
request the right permissions from your administrator. If you cannot access a feature in Amazon EKS, see
Troubleshooting Amazon EKS identity and access (p. 273).
Service administrator – If you're in charge of Amazon EKS resources at your company, you probably
have full access to Amazon EKS. It's your job to determine which Amazon EKS features and resources
your employees should access. You must then submit requests to your IAM administrator to change the
permissions of your service users. Review the information on this page to understand the basic concepts
of IAM. To learn more about how your company can use IAM with Amazon EKS, see How Amazon EKS
works with IAM (p. 246).
IAM administrator – If you're an IAM administrator, you might want to learn details about how you can
write policies to manage access to Amazon EKS. To view example Amazon EKS identity-based policies
that you can use in IAM, see Amazon EKS identity-based policy examples (p. 249).
You must be authenticated (signed in to AWS) as the AWS account root user, an IAM user, or by assuming
an IAM role. You can also use your company's single sign-on authentication, or even sign in using Google
or Facebook. In these cases, your administrator previously set up identity federation using IAM roles.
When you access AWS using credentials from another company, you are assuming a role indirectly.
To sign in directly to the AWS Management Console, use your password with your root user email or your
IAM user name. You can access AWS programmatically using your root user or IAM user access keys. AWS
provides SDK and command line tools to cryptographically sign your request using your credentials. If
you don’t use AWS tools, you must sign the request yourself. Do this using Signature Version 4, a protocol
for authenticating inbound API requests. For more information about authenticating requests, see
Signature Version 4 Signing Process in the AWS General Reference.
Regardless of the authentication method that you use, you might also be required to provide additional
security information. For example, AWS recommends that you use multi-factor authentication (MFA) to
increase the security of your account. To learn more, see Using Multi-Factor Authentication (MFA) in AWS
in the IAM User Guide.
The secret access key is available only at the time you create the access key pair. You cannot recover the secret access key in the future. Instead, you must generate a new access key pair.
An IAM group is an identity that specifies a collection of IAM users. You can't sign in as a group. You
can use groups to specify permissions for multiple users at a time. Groups make permissions easier to
manage for large sets of users. For example, you could have a group named IAMAdmins and give that
group permissions to administer IAM resources.
Users are different from roles. A user is uniquely associated with one person or application, but a role
is intended to be assumable by anyone who needs it. Users have permanent long-term credentials, but
roles provide temporary credentials. To learn more, see When to Create an IAM User (Instead of a Role) in
the IAM User Guide.
IAM roles
An IAM role is an identity within your AWS account that has specific permissions. It is similar to an IAM
user, but is not associated with a specific person. You can temporarily assume an IAM role in the AWS
Management Console by switching roles. You can assume a role by calling an AWS CLI or AWS API
operation or by using a custom URL. For more information about methods for using roles, see Using IAM
Roles in the IAM User Guide.
IAM roles with temporary credentials are useful in the following situations:
• Temporary IAM user permissions – An IAM user can assume an IAM role to temporarily take on
different permissions for a specific task.
• Federated user access – Instead of creating an IAM user, you can use existing identities from AWS
Directory Service, your enterprise user directory, or a web identity provider. These are known as
federated users. AWS assigns a role to a federated user when access is requested through an identity
provider. For more information about federated users, see Federated Users and Roles in the IAM User
Guide.
• Cross-account access – You can use an IAM role to allow someone (a trusted principal) in a different
account to access resources in your account. Roles are the primary way to grant cross-account access.
However, with some AWS services, you can attach a policy directly to a resource (instead of using a role
as a proxy). To learn the difference between roles and resource-based policies for cross-account access,
see How IAM Roles Differ from Resource-based Policies in the IAM User Guide.
• AWS service access – A service role is an IAM role that a service assumes to perform actions in your
account on your behalf. When you set up some AWS service environments, you must define a role
for the service to assume. This service role must include all the permissions that are required for the
service to access the AWS resources that it needs. Service roles vary from service to service, but many
allow you to choose your permissions as long as you meet the documented requirements for that
service. Service roles provide access only within your account and cannot be used to grant access
to services in other accounts. You can create, modify, and delete a service role from within IAM. For
example, you can create a role that allows Amazon Redshift to access an Amazon S3 bucket on your
behalf and then load data from that bucket into an Amazon Redshift cluster. For more information, see
Creating a Role to Delegate Permissions to an AWS Service in the IAM User Guide.
• Applications running on Amazon EC2 – You can use an IAM role to manage temporary credentials
for applications that are running on an EC2 instance and making AWS CLI or AWS API requests.
This is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2
instance and make it available to all of its applications, you create an instance profile that is attached
to the instance. An instance profile contains the role and enables programs that are running on the
EC2 instance to get temporary credentials. For more information, see Using an IAM Role to Grant
Permissions to Applications Running on Amazon EC2 Instances in the IAM User Guide.
To learn whether to use IAM roles, see When to Create an IAM Role (Instead of a User) in the IAM User
Guide.
An IAM administrator can use policies to specify who has access to AWS resources, and what actions
they can perform on those resources. Every IAM entity (user or role) starts with no permissions. In other
words, by default, users can do nothing, not even change their own password. To give a user permission
to do something, an administrator must attach a permissions policy to a user. Or the administrator can
add the user to a group that has the intended permissions. When an administrator gives permissions to a
group, all users in that group are granted those permissions.
IAM policies define permissions for an action regardless of the method that you use to perform the
operation. For example, suppose that you have a policy that allows the iam:GetRole action. A user with
that policy can get role information from the AWS Management Console, the AWS CLI, or the AWS API.
Identity-based policies
Identity-based policies are JSON permissions policy documents that you can attach to an identity, such
as an IAM user, role, or group. These policies control what actions that identity can perform, on which
resources, and under what conditions. To learn how to create an identity-based policy, see Creating IAM
Policies in the IAM User Guide.
Identity-based policies can be further categorized as inline policies or managed policies. Inline policies
are embedded directly into a single user, group, or role. Managed policies are standalone policies that
you can attach to multiple users, groups, and roles in your AWS account. Managed policies include AWS
managed policies and customer managed policies. To learn how to choose between a managed policy or
an inline policy, see Choosing Between Managed Policies and Inline Policies in the IAM User Guide.
Resource-based policies
Resource-based policies are JSON policy documents that you attach to a resource such as an Amazon S3
bucket. Service administrators can use these policies to define what actions a specified principal (account
member, user, or role) can perform on that resource and under what conditions. Resource-based policies
are inline policies. There are no managed resource-based policies.
• Permissions boundaries – A permissions boundary is an advanced feature in which you set the
maximum permissions that an identity-based policy can grant to an IAM entity (IAM user or role).
You can set a permissions boundary for an entity. The resulting permissions are the intersection of
an entity's identity-based policies and its permissions boundaries. Resource-based policies that specify
the user or role in the Principal field are not limited by the permissions boundary. An explicit deny
in any of these policies overrides the allow. For more information about permissions boundaries, see
Permissions Boundaries for IAM Entities in the IAM User Guide.
• Service control policies (SCPs) – SCPs are JSON policies that specify the maximum permissions for
an organization or organizational unit (OU) in AWS Organizations. AWS Organizations is a service for
grouping and centrally managing multiple AWS accounts that your business owns. If you enable all
features in an organization, then you can apply service control policies (SCPs) to any or all of your
accounts. The SCP limits permissions for entities in member accounts, including each AWS account
root user. For more information about Organizations and SCPs, see How SCPs Work in the AWS
Organizations User Guide.
• Session policies – Session policies are advanced policies that you pass as a parameter when you
programmatically create a temporary session for a role or federated user. The resulting session's
permissions are the intersection of the user or role's identity-based policies and the session policies.
Permissions can also come from a resource-based policy. An explicit deny in any of these policies
overrides the allow. For more information, see Session Policies in the IAM User Guide.
Topics
• Amazon EKS identity-based policies (p. 246)
• Amazon EKS resource-based policies (p. 248)
• Authorization based on Amazon EKS tags (p. 248)
• Amazon EKS IAM roles (p. 248)
Actions
The Action element of an IAM identity-based policy describes the specific action or actions that will be
allowed or denied by the policy. Policy actions usually have the same name as the associated AWS API
operation. The action is used in a policy to grant permissions to perform the associated operation.
Policy actions in Amazon EKS use the following prefix before the action: eks:. For example, to
grant someone permission to get descriptive information about an Amazon EKS cluster, you include
the DescribeCluster action in their policy. Policy statements must include either an Action or
NotAction element.
To specify multiple actions in a single statement, separate them with commas as follows:
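For example (an illustrative snippet; the original example is not included in this extract):
"Action": [
      "eks:DescribeCluster",
      "eks:ListClusters"
]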
You can specify multiple actions using wildcards (*). For example, to specify all actions that begin with
the word Describe, include the following action:
"Action": "eks:Describe*"
To see a list of Amazon EKS actions, see Actions Defined by Amazon Elastic Kubernetes Service in the
IAM User Guide.
Resources
The Resource element specifies the object or objects to which the action applies. Statements must
include either a Resource or a NotResource element. You specify a resource using an ARN or using the
wildcard (*) to indicate that the statement applies to all resources.
arn:${Partition}:eks:${Region}:${Account}:cluster/${ClusterName}
For more information about the format of ARNs, see Amazon resource names (ARNs) and AWS service
namespaces.
For example, to specify the dev cluster in your statement, use the following ARN:
"Resource": "arn:aws:eks:region-code:123456789012:cluster/dev"
To specify all clusters that belong to a specific account and Region, use the wildcard (*):
"Resource": "arn:aws:eks:region-code:123456789012:cluster/*"
Some Amazon EKS actions, such as those for creating resources, cannot be performed on a specific
resource. In those cases, you must use the wildcard (*).
"Resource": "*"
To see a list of Amazon EKS resource types and their ARNs, see Resources Defined by Amazon Elastic
Kubernetes Service in the IAM User Guide. To learn with which actions you can specify the ARN of each
resource, see Actions Defined by Amazon Elastic Kubernetes Service.
Condition keys
Amazon EKS does not provide any service-specific condition keys, but it does support using some global
condition keys. To see all AWS global condition keys, see AWS global condition context keys in the IAM
User Guide.
Examples
To view examples of Amazon EKS identity-based policies, see Amazon EKS identity-based policy
examples (p. 249).
When you create an Amazon EKS cluster, the IAM entity user or role, such as a federated user that
creates the cluster, is automatically granted system:masters permissions in the cluster's RBAC
configuration. To grant additional AWS users or roles the ability to interact with your cluster, you must
edit the aws-auth ConfigMap within Kubernetes.
For additional information about working with the ConfigMap, see Managing users or IAM roles for your
cluster (p. 217).
Service-linked roles
Service-linked roles allow AWS services to access resources in other services to complete an action on
your behalf. Service-linked roles appear in your IAM account and are owned by the service. An IAM
administrator can view but not edit the permissions for service-linked roles.
Amazon EKS supports service-linked roles. For details about creating or managing Amazon EKS service-
linked roles, see Using Service-Linked Roles for Amazon EKS (p. 251).
Service roles
This feature allows a service to assume a service role on your behalf. This role allows the service to
access resources in other services to complete an action on your behalf. Service roles appear in your
IAM account and are owned by the account. This means that an IAM administrator can change the
permissions for this role. However, doing so might break the functionality of the service.
Amazon EKS supports service roles. For more information, see the section called “Cluster IAM
role” (p. 255) and the section called “Node IAM role” (p. 257).
To learn how to create an IAM identity-based policy using these example JSON policy documents, see
Creating policies on the JSON tab in the IAM User Guide.
When you create an Amazon EKS cluster, the IAM entity user or role, such as a federated user that
creates the cluster, is automatically granted system:masters permissions in the cluster's RBAC
configuration. To grant additional AWS users or roles the ability to interact with your cluster, you must
edit the aws-auth ConfigMap within Kubernetes.
For additional information about working with the ConfigMap, see Managing users or IAM roles for your
cluster (p. 217).
Topics
• Policy best practices (p. 249)
• Using the Amazon EKS console (p. 249)
• Allow users to view their own permissions (p. 250)
• Update a Kubernetes cluster (p. 251)
• List or describe all clusters (p. 251)
• Get Started Using AWS Managed Policies – To start using Amazon EKS quickly, use AWS managed
policies to give your employees the permissions they need. These policies are already available in
your account and are maintained and updated by AWS. For more information, see Get Started Using
Permissions With AWS Managed Policies in the IAM User Guide.
• Grant Least Privilege – When you create custom policies, grant only the permissions required
to perform a task. Start with a minimum set of permissions and grant additional permissions as
necessary. Doing so is more secure than starting with permissions that are too lenient and then trying
to tighten them later. For more information, see Grant Least Privilege in the IAM User Guide.
• Enable MFA for Sensitive Operations – For extra security, require IAM users to use multi-factor
authentication (MFA) to access sensitive resources or API operations. For more information, see Using
Multi-Factor Authentication (MFA) in AWS in the IAM User Guide.
• Use Policy Conditions for Extra Security – To the extent that it's practical, define the conditions under
which your identity-based policies allow access to a resource. For example, you can write conditions to
specify a range of allowable IP addresses that a request must come from. You can also write conditions
to allow requests only within a specified date or time range, or to require the use of SSL or MFA. For
more information, see IAM JSON Policy Elements: Condition in the IAM User Guide.
If you create an identity-based policy that is more restrictive than the minimum required permissions, the console won't function as intended for entities (IAM users or roles) with that policy.
To ensure that those entities can still use the Amazon EKS console, create a policy with your own unique
name, such as AmazonEKSAdminPolicy. Attach the policy to the entities. For more information, see
Adding permissions to a user in the IAM User Guide:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"eks:*"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:PassedToService": "eks.amazonaws.com"
}
}
}
]
}
You don't need to allow minimum console permissions for users that are making calls only to the AWS
CLI or the AWS API. Instead, allow access to only the actions that match the API operation that you're
trying to perform.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ViewOwnUserInfo",
"Effect": "Allow",
"Action": [
"iam:GetUserPolicy",
"iam:ListGroupsForUser",
"iam:ListAttachedUserPolicies",
"iam:ListUserPolicies",
"iam:GetUser"
],
"Resource": ["arn:aws:iam::*:user/${aws:username}"]
},
{
"Sid": "NavigateInConsole",
"Effect": "Allow",
"Action": [
"iam:GetGroupPolicy",
"iam:GetPolicyVersion",
"iam:GetPolicy",
"iam:ListAttachedGroupPolicies",
"iam:ListGroupPolicies",
"iam:ListPolicyVersions",
"iam:ListPolicies",
"iam:ListUsers"
],
"Resource": "*"
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "eks:UpdateClusterVersion",
"Resource": "arn:aws:eks:*:111122223333:cluster/dev"
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"eks:DescribeCluster",
"eks:ListClusters"
],
"Resource": "*"
}
]
}
Topics
• Using Roles for Amazon EKS (p. 252)
• Using Roles for Amazon EKS Nodegroups (p. 253)
A service-linked role makes setting up Amazon EKS easier because you don't have to manually add
the necessary permissions. Amazon EKS defines the permissions of its service-linked roles, and unless
defined otherwise, only Amazon EKS can assume its roles. The defined permissions include the trust
policy and the permissions policy, and that permissions policy cannot be attached to any other IAM
entity.
You can delete a service-linked role only after first deleting its related resources. This protects your
Amazon EKS resources because you can't inadvertently remove permission to access the resources.
For information about other services that support service-linked roles, see AWS Services That Work with
IAM and look for the services that have Yes in the Service-Linked Role column. Choose a Yes with a link
to view the service-linked role documentation for that service.
The AWSServiceRoleForAmazonEKS service-linked role trusts the following services to assume the role:
• eks.amazonaws.com
The role permissions policy allows Amazon EKS to complete the following actions on the specified
resources:
• AWSServiceRoleForAmazonEKS
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or
delete a service-linked role. For more information, see Service-Linked Role Permissions in the IAM User
Guide.
If you delete this service-linked role, and then need to create it again, you can use the same process to
recreate the role in your account. When you create a cluster, Amazon EKS creates the service-linked role
for you again.
After you create a service-linked role, you cannot change the name of the role because various entities might reference the role. However, you can edit the description of the role using IAM. For more information, see Editing a Service-Linked Role in the IAM User Guide.
Before you can use IAM to delete a service-linked role, you must first delete any resources used by the
role.
Note
If the Amazon EKS service is using the role when you try to delete the resources, then the
deletion might fail. If that happens, wait for a few minutes and try the operation again.
Use the IAM console, the AWS CLI, or the AWS API to delete the AWSServiceRoleForAmazonEKS service-
linked role. For more information, see Deleting a Service-Linked Role in the IAM User Guide.
A service-linked role makes setting up Amazon EKS easier because you don't have to manually add
the necessary permissions. Amazon EKS defines the permissions of its service-linked roles, and unless
defined otherwise, only Amazon EKS can assume its roles. The defined permissions include the trust
policy and the permissions policy, and that permissions policy cannot be attached to any other IAM
entity.
You can delete a service-linked role only after first deleting its related resources. This protects your
Amazon EKS resources because you can't inadvertently remove permission to access the resources.
For information about other services that support service-linked roles, see AWS Services That Work with
IAM and look for the services that have Yes in the Service-Linked Role column. Choose a Yes with a link
to view the service-linked role documentation for that service.
• eks-nodegroup.amazonaws.com
The role permissions policy allows Amazon EKS to complete the following actions on the specified
resources:
• AWSServiceRoleForAmazonEKS
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or
delete a service-linked role. For more information, see Service-Linked Role Permissions in the IAM User
Guide.
If you delete this service-linked role, and then need to create it again, you can use the same process to
recreate the role in your account. When you create another managed node group, Amazon EKS creates
the service-linked role for you again.
• AmazonEKSClusterPolicy
Note
Prior to April 16, 2020, AmazonEKSServicePolicy was also required and the suggested name was
eksServiceRole. With the AWSServiceRoleForAmazonEKS service-linked role, that policy is
no longer required.
4. Choose Permissions.
5. Ensure that the AmazonEKSClusterPolicy managed policy is attached to the role. If the policy is
attached, your Amazon EKS cluster role is properly configured.
6. Choose Trust Relationships, Edit Trust Relationship.
7. Verify that the trust relationship contains the following policy. If the trust relationship matches the
policy below, choose Cancel. If the trust relationship does not match, copy the policy into the Policy
Document window and choose Update Trust Policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
AWS CloudFormation
1. Save the following AWS CloudFormation template to a text file on your local system.
---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Amazon EKS Cluster Role'

Resources:

  eksClusterRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - eks.amazonaws.com
          Action:
          - sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

Outputs:

  RoleArn:
    Description: The role that Amazon EKS will use to create AWS resources for Kubernetes clusters
    Value: !GetAtt eksClusterRole.Arn
    Export:
      Name: !Sub "${AWS::StackName}-RoleArn"
Note
Prior to April 16, 2020, ManagedPolicyArns had an entry for
arn:aws:iam::aws:policy/AmazonEKSServicePolicy. With the
AWSServiceRoleForAmazonEKS service-linked role, that policy is no longer required.
2. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation.
3. Choose Create stack.
4. For Specify template, select Upload a template file, and then choose Choose file.
5. Choose the file you created earlier, and then choose Next.
6. For Stack name, enter a name for your role, such as eksClusterRole, and then choose Next.
7. On the Configure stack options page, choose Next.
8. On the Review page, review your information, acknowledge that the stack might create IAM
resources, and then choose Create stack.
• AmazonEKSWorkerNodePolicy
• AmazonEKS_CNI_Policy
• AmazonEC2ContainerRegistryReadOnly
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
AWS CloudFormation
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/amazon-eks-nodegroup-role.yaml
5. On the Specify stack details page, for Stack name enter a name such as eks-node-group-
instance-role and choose Next.
6. (Optional) On the Configure stack options page, you can choose to tag your stack resources.
Choose Next.
7. On the Review page, check the box in the Capabilities section and choose Create stack.
8. When your stack is created, select it in the console and choose Outputs.
9. Record the NodeInstanceRole value for the IAM role that was created. You need this when you
create your node group.
When your cluster creates pods on AWS Fargate infrastructure, the pod needs to make calls to AWS APIs
on your behalf, for example, to pull container images from Amazon ECR. The Amazon EKS pod execution
role provides the IAM permissions to do this.
When you create a Fargate profile, you must specify a pod execution role to use with your pods. This
role is added to the cluster's Kubernetes Role based access control (RBAC) for authorization, so that the
kubelet that is running on the Fargate infrastructure can register with your Amazon EKS cluster. This is
what allows Fargate infrastructure to appear in your cluster as nodes.
Before you create a Fargate profile, you must create an IAM role with the following IAM policy:
• AmazonEKSFargatePodExecutionRolePolicy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks-fargate-pods.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
To create an AWS Fargate pod execution role with the AWS Management Console
Applications must sign their AWS API requests with AWS credentials. This feature provides a strategy for
managing credentials for your applications, similar to the way that Amazon EC2 instance profiles provide
credentials to Amazon EC2 instances. Instead of creating and distributing your AWS credentials to the
containers or using the Amazon EC2 instance’s role, you can associate an IAM role with a Kubernetes
service account. The applications in the pod’s containers can then use an AWS SDK or the AWS CLI to
make API requests to authorized AWS services.
The IAM roles for service accounts feature provides the following benefits:
• Least privilege — By using the IAM roles for service accounts feature, you no longer need to provide
extended permissions to the node IAM role so that pods on that node can call AWS APIs. You can scope
IAM permissions to a service account, and only pods that use that service account have access to those
permissions. This feature also eliminates the need for third-party solutions such as kiam or kube2iam.
• Credential isolation — A container can only retrieve credentials for the IAM role that is associated
with the service account to which it belongs. A container never has access to credentials that are
intended for another container that belongs to another pod.
• Auditability — Access and event logging is available through CloudTrail to help ensure retrospective
auditing.
To get started, see Enabling IAM roles for service accounts on your cluster (p. 265).
For an end-to-end walkthrough using eksctl, see Walkthrough: Updating a DaemonSet to use IAM for
service accounts (p. 271).
Kubernetes has long used service accounts as its own internal identity system. Pods can authenticate
with the Kubernetes API server using an auto-mounted token (which was a non-OIDC JWT) that only
the Kubernetes API server could validate. These legacy service account tokens do not expire, and
rotating the signing key is a difficult process. In Kubernetes version 1.12, support was added for a new
ProjectedServiceAccountToken feature, which is an OIDC JSON web token that also contains the
service account identity, and supports a configurable audience.
Amazon EKS now hosts a public OIDC discovery endpoint per cluster containing the signing keys for the
ProjectedServiceAccountToken JSON web tokens so external systems, like IAM, can validate and
accept the OIDC tokens issued by Kubernetes.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::AWS_ACCOUNT_ID:oidc-provider/OIDC_PROVIDER"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"OIDC_PROVIDER:sub":
"system:serviceaccount:SERVICE_ACCOUNT_NAMESPACE:SERVICE_ACCOUNT_NAME"
}
}
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::AWS_ACCOUNT_ID:oidc-provider/OIDC_PROVIDER"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringLike": {
"OIDC_PROVIDER:sub": "system:serviceaccount:SERVICE_ACCOUNT_NAMESPACE:*"
}
}
}
]
}
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
Pod configuration
The Amazon EKS Pod Identity Webhook on the cluster watches for pods that are associated with service
accounts with this annotation and applies the following environment variables to them.
AWS_ROLE_ARN=arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
Note
Your cluster does not need to use the mutating web hook to configure the environment
variables and token file mounts; you can choose to configure pods to add these environment
variables manually.
Supported versions of the AWS SDK (p. 265) look for these environment variables first in the credential
chain provider. The role credentials are used for pods that meet these criteria.
Note
When a pod uses AWS credentials from an IAM role associated with a service account, the
AWS CLI or other SDKs in the containers for that pod use the credentials provided by that role
exclusively. They no longer inherit any IAM permissions from the node IAM role.
By default, only containers that run as root have the proper file system permissions to read the web
identity token file. You can provide these permissions by having your containers run as root, or by
providing the following security context for the containers in your manifest. The fsGroup ID is arbitrary,
and you can choose any valid group ID. For more information about the implications of setting a security
context for your pods, see Configure a Security Context for a Pod or Container in the Kubernetes
documentation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-app
      containers:
      - name: my-app
        image: my-app:latest
      securityContext:
        fsGroup: 1337
...
The kubelet requests and stores the token on behalf of the pod. By default, the kubelet refreshes
the token if it is older than 80 percent of its total TTL, or if the token is older than 24 hours. You can
modify the expiration duration for any account, except the default service account, with settings in
your pod spec. For more information, see Service Account Token Volume Projection in the Kubernetes
documentation.
Example
In this example, Account A would provide Account B with the OIDC issuer URL from their cluster. Account
B follows the instructions in Enabling IAM roles for service accounts on your cluster (p. 265) and
Creating an IAM role and policy for your service account (p. 266) using the OIDC issuer URL from
Account A's cluster. Then a cluster administrator annotates the service account in Account A's cluster to
use the role from Account B.
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_B_AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
Example
In this example, Account B creates an IAM policy with the permissions to give to pods in Account A's
cluster. Account B attaches that policy to an IAM role with a trust relationship that allows AssumeRole
permissions to Account A (111111111111), as shown below.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111111111111:root"
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
Account A creates a role with a trust policy that gets credentials from the identity provider created with
the cluster's OIDC issuer URL, as shown below.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::111111111111:oidc-provider/oidc.eks.region-
code.amazonaws.com/id/EXAMPLEC061A78C479E31025A21AC4CDE191335D05820BE5CE"
},
"Action": "sts:AssumeRoleWithWebIdentity"
}
]
}
Account A attaches a policy to that role with the following permissions to assume the role that Account B
created.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::222222222222:role/account-b-role"
}
]
}
The application code for pods to assume Account B's role uses two profiles: account_b_role and
account_a_role. The account_b_role profile uses the account_a_role profile as its source. For
the AWS CLI, the ~/.aws/config file would look like the following example.
[profile account_b_role]
source_profile = account_a_role
role_arn=arn:aws:iam::222222222222:role/account-b-role
[profile account_a_role]
web_identity_token_file = /var/run/secrets/eks.amazonaws.com/serviceaccount/token
role_arn=arn:aws:iam::111111111111:role/account-a-role
To specify chained profiles for other AWS SDKs, consult their documentation.
Many popular Kubernetes add-ons, such as the Cluster Autoscaler and the ALB Ingress Controller, support
IAM roles for service accounts. The Amazon VPC CNI plugin for Kubernetes has been updated with a
supported version of the AWS SDK for Go, and you can use the IAM roles for service accounts feature to
provide the required permissions for the CNI to work.
To ensure that you are using a supported SDK, follow the installation instructions for your preferred SDK
at Tools for Amazon Web Services when you build your containers.
If your cluster supports IAM roles for service accounts, it will have an OpenID Connect issuer URL
associated with it. You can view this URL in the Amazon EKS console, or you can use the following AWS
CLI command to retrieve it.
Important
You must use at least version 1.18.97 or 2.0.30 of the AWS CLI to receive the proper output
from this command. For more information, see Installing the AWS CLI in the AWS Command Line
Interface User Guide.
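The command is not shown in this extract; a typical command, substituting your cluster name for cluster_name, is:
aws eks describe-cluster --name cluster_name --query "cluster.identity.oidc.issuer" --output text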
Output:
https://oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E
To use IAM roles for service accounts in your cluster, you must create an OIDC identity provider in the
IAM console.
eksctl
To create an IAM OIDC identity provider for your cluster with eksctl
1. Check your eksctl version with the following command. This procedure assumes that you have
installed eksctl and that your eksctl version is at least 0.25.0-rc.0.
eksctl version
For more information about installing or upgrading eksctl, see Installing or upgrading
eksctl (p. 226).
2. Create your OIDC identity provider for your cluster with the following command. Substitute
cluster_name with your own value.
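A typical command (a hedged sketch; the original command is not included in this extract):
eksctl utils associate-iam-oidc-provider --cluster cluster_name --approve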
To create an IAM OIDC identity provider for your cluster with the AWS Management
Console
1. Retrieve the OIDC issuer URL from the Amazon EKS console description of your cluster or use
the following AWS CLI command.
Important
You must use at least version 1.18.97 or 2.0.30 of the AWS CLI to receive the proper
output from this command. For more information, see Installing the AWS CLI in the
AWS Command Line Interface User Guide.
After you have enabled the IAM OIDC identity provider for your cluster, you can create IAM roles to
associate with a service account in your cluster. For more information, see Creating an IAM role and
policy for your service account (p. 266).
One way to create the IAM policy for your service account is to start from a complete AWS managed policy that already does some of what you're looking for and then customize it to your specific requirements. For more information, see Creating a New Policy in the IAM User Guide.
You must also create an IAM role for your Kubernetes service accounts to use before you associate it with
a service account. The trust relationship is scoped to your cluster and service account so that each cluster
and service account combination requires its own role. You can then attach a specific IAM policy to the
role that gives the containers in your pods the permissions you desire. The following procedures describe
how to do this.
• A policy to allow read-only access to an Amazon S3 bucket. You could store configuration information
or a bootstrap script in this bucket, and the containers in your pod can read the file from the bucket
and load it into your application.
• A policy to allow paid container images from AWS Marketplace.
The example below allows read-only access to objects in the my-pod-secrets-bucket Amazon S3 bucket. You
can modify the policy document to suit your specific needs.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::my-pod-secrets-bucket/*"
]
}
]
}
The example below gives the required permissions to use a paid container image from AWS
Marketplace.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"aws-marketplace:RegisterUsage"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
eksctl
Create the service account and IAM role with the following command. Substitute the example
values with your own values.
Note
This command only works for clusters that were created with eksctl. If you didn't create
your cluster with eksctl, then use the instructions on the AWS Management Console or
AWS CLI tabs.
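The command is not included in this extract; a typical invocation (all names and the policy ARN are placeholders to replace) is:
eksctl create iamserviceaccount \
    --name service_account_name \
    --namespace service_account_namespace \
    --cluster cluster_name \
    --attach-policy-arn IAM_policy_ARN \
    --approve \
    --override-existing-serviceaccounts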
An AWS CloudFormation template was deployed that created an IAM role and attached the IAM
policy to it. The role was associated with a Kubernetes service account.
AWS Management Console
1. Retrieve the OIDC issuer URL from the Amazon EKS console description of your cluster, or use
the following AWS CLI command.
Important
You must use at least version 1.18.97 or 2.0.30 of the AWS CLI to receive the proper
output from this command. For more information, see Installing the AWS CLI in the
AWS Command Line Interface User Guide.
11. Choose the Trust relationships tab, and then choose Edit trust relationship.
1. Edit the OIDC provider suffix and change it from :aud to :sub.
2. Replace sts.amazonaws.com with your service account ID.
3. If necessary, change region-code to the Region code returned in the output from step 1.
"oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E:sub":
"system:serviceaccount:SERVICE_ACCOUNT_NAMESPACE:SERVICE_ACCOUNT_NAME"
AWS CLI
1. Set your AWS account ID to an environment variable with the following command.
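A typical command (not included in this extract):
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)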
2. Set your OIDC identity provider to an environment variable with the following command,
replacing your cluster name.
Important
You must use at least version 1.18.97 or 2.0.30 of the AWS CLI to receive the proper
output from this command. For more information, see Installing the AWS CLI in the
AWS Command Line Interface User Guide.
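A typical command, replacing cluster_name with your own value (the exact command is not shown in this extract):
OIDC_PROVIDER=$(aws eks describe-cluster --name cluster_name --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")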
3. Copy the following code block to your computer and replace namespace and service-
account-name with your own values.
4. Run the modified code block from the previous step to create a file named trust.json.
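The code block itself is not included in this extract. A minimal sketch that writes trust.json, using the AWS_ACCOUNT_ID and OIDC_PROVIDER variables from the previous steps and placeholder namespace and service-account-name values:
cat > trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:namespace:service-account-name"
        }
      }
    }
  ]
}
EOF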
5. Run the following AWS CLI command to create the role, replacing your IAM role name and
description.
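A typical command (the role name and description are placeholders):
aws iam create-role --role-name IAM_ROLE_NAME --assume-role-policy-document file://trust.json --description "IAM_ROLE_DESCRIPTION"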
6. Run the following command to attach your IAM policy to your role, replacing your IAM role
name and policy ARN.
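Likewise, a typical command (placeholders are illustrative):
aws iam attach-role-policy --role-name IAM_ROLE_NAME --policy-arn IAM_POLICY_ARN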
7. Associate the IAM role with a Kubernetes service account. For more information, see Specifying
an IAM role for your service account (p. 270).
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
1. Use the following command to annotate your service account with the ARN of the IAM role that you want to use with your service account. Be sure to substitute your own values for the example values to use with your pods.
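A typical command (the exact command is not shown in this extract):
kubectl annotate serviceaccount -n SERVICE_ACCOUNT_NAMESPACE SERVICE_ACCOUNT_NAME eks.amazonaws.com/role-arn=arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME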
2. Delete and re-create any existing pods that are associated with the service account to apply the credential environment variables. The mutating web hook does not apply them to pods that are already running. The following command deletes the existing aws-node DaemonSet pods and deploys them with the service account annotation. You can modify the namespace, deployment type, and label to update your specific pods.
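A typical command for the aws-node DaemonSet pods (the exact command is not included in this extract):
kubectl delete pods -n kube-system -l k8s-app=aws-node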
4. Describe one of the pods and verify that the AWS_WEB_IDENTITY_TOKEN_FILE and
AWS_ROLE_ARN environment variables exist.
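One way to check, substituting one of your own pod names for the hypothetical name shown:
kubectl exec -n kube-system aws-node-xxxxx -- env | grep AWS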
Output:
AWS_VPC_K8S_CNI_LOGLEVEL=DEBUG
AWS_ROLE_ARN=arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
The IAM role was created by eksctl when you created the Kubernetes service account in a previous
step.
When you implement IAM roles for service accounts for a pod, the containers in the pod have all
permissions assigned to the service account and the node IAM role. If you implement IAM roles for
service accounts for all pods in a cluster, you may want to prevent the containers in the pods from using
the permissions assigned to the node IAM role. Keep in mind however, that there may be certain key
permissions on the node IAM role that pods need to function. It’s important to properly scope your
service account IAM roles so that your pods have all of the necessary permissions. For example, the node
IAM role is assigned permissions to pull container images from Amazon ECR. If a pod isn't assigned those
permissions, then the pod can't pull container images from Amazon ECR.
To prevent all containers in all pods on a node from using the permissions assigned to the node IAM
role (while still allowing the permissions that are assigned to the service account), run the following
iptables commands on your nodes (as root) or include them in your instance bootstrap user data
script.
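The commands themselves are not included in this extract. A sketch of the kind of rules involved, assuming Amazon Linux 2 nodes (verify against your node operating system before use); they drop pod traffic to the EC2 instance metadata service:
yum install -y iptables-services
iptables --insert FORWARD 1 --in-interface eni+ --destination 169.254.169.254/32 --jump DROP
iptables-save | tee /etc/sysconfig/iptables
systemctl enable --now iptables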
Important
• These commands completely block all containers running on a node from querying the
instance metadata service for any metadata, not just the credentials for the node IAM role. Do
not run these commands on nodes that run pods for which you haven't implemented IAM roles for service accounts, or none of the containers on the node will have any of the permissions assigned to the node IAM role.
• If you implement network policy, using a tool such as Calico (p. 187), this rule may be
overridden. When implementing network policy, ensure that it doesn't override this rule, or
that your policy includes this rule.
By default, the AmazonEKS_CNI_Policy managed policy is attached to the node IAM role, so all pods on the nodes have the same permissions as the CNI plugin. You can use the IAM roles for service accounts feature to provide the AmazonEKS_CNI_Policy permissions to a dedicated service account instead, and then remove the policy from the node IAM role.
For ease of use, this topic uses eksctl to configure IAM roles for service accounts. However, if you would
rather use the AWS Management Console, the AWS CLI, or one of the AWS SDKs, the same basic concepts
apply, but you will have to modify the steps to use the procedures in Enabling IAM roles for service
accounts on your cluster (p. 265).
To configure the CNI plugin to use IAM roles for service accounts
1. Check your eksctl version with the following command. This procedure assumes that you have
installed eksctl and that your eksctl version is at least 0.25.0-rc.0.
eksctl version
For more information about installing or upgrading eksctl, see Installing or upgrading
eksctl (p. 226).
2. Check the version of your cluster's Amazon VPC CNI Plugin for Kubernetes. Use the following
command to print your cluster's CNI version.
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
Output:
amazon-k8s-cni:1.6.2
If your CNI version is earlier than 1.6.3, complete the following steps to create a service account and
then upgrade your CNI version to the latest version:
a. Create an OIDC identity provider for your cluster with the following command. Substitute the
cluster name with your own value.
b. Create a Kubernetes service account with the following command. Substitute cluster_name
with your own value. This command deploys an AWS CloudFormation stack that creates an IAM
role, attaches the AmazonEKS_CNI_Policy AWS managed policy to it, and binds the IAM role
to the service account.
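A typical invocation for the aws-node service account (a hedged sketch; substitute cluster_name with your own value):
eksctl create iamserviceaccount \
    --name aws-node \
    --namespace kube-system \
    --cluster cluster_name \
    --attach-policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
    --approve \
    --override-existing-serviceaccounts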
c. Upgrade your CNI version to the latest version. The manifest specifies the aws-node service
account that you created in the previous step.
3. Watch the roll out, and wait for the DESIRED count of the deployment to match the UP-TO-DATE
count. Press Ctrl + c to exit.
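One way to watch the rollout (the exact command is not included in this extract):
kubectl get daemonset aws-node -n kube-system --watch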
5. Check the version of your cluster's Amazon VPC CNI Plugin for Kubernetes again, confirming that the
version is 1.6.3.
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
Output:
amazon-k8s-cni:1.6.3
6. Describe one of the pods and verify that the AWS_WEB_IDENTITY_TOKEN_FILE and
AWS_ROLE_ARN environment variables exist.
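For example, where aws-node-pod-name is a placeholder for one of your aws-node pod names:
# Print the pod's environment and filter for the AWS variables.
kubectl exec --namespace kube-system aws-node-pod-name -- env | grep AWS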
Output:
AWS_VPC_K8S_CNI_LOGLEVEL=DEBUG
AWS_ROLE_ARN=arn:aws:iam::111122223333:role/eksctl-prod-addon-iamserviceaccount-kube-
sys-Role1-V66K5I6JLDGK
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
The IAM role was created by eksctl when you created the Kubernetes service account in a previous
step.
7. Remove the AmazonEKS_CNI_Policy policy from your node IAM role.
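For example, with the AWS CLI, where node-instance-role is a placeholder for your node IAM role name:
aws iam detach-role-policy --role-name node-instance-role --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy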
Now your CNI plugin pods get their IAM permissions from their own role, and the node IAM role can no longer provide those permissions to other pods.
Logging and monitoring
Entries that contain this text are expected. The username is an Amazon EKS internal service role
that performs specific operations for managed node groups and Fargate.
Amazon EKS is integrated with AWS CloudTrail, a service that provides a record of actions taken by a
user, role, or an AWS service in Amazon EKS. CloudTrail captures all API calls for Amazon EKS as events.
The calls captured include calls from the Amazon EKS console and code calls to the Amazon EKS API
operations. For more information, see Logging Amazon EKS API Calls with AWS CloudTrail (p. 282).
The Kubernetes API server exposes a number of metrics that are useful for monitoring and analysis. For more information, see Control plane metrics with Prometheus (p. 232).
For a list of AWS services in scope of specific compliance programs, see AWS services in scope by
compliance program. For general information, see AWS compliance programs.
You can download third-party audit reports using AWS Artifact. For more information, see Downloading
reports in AWS Artifact.
Your compliance responsibility when using Amazon EKS is determined by the sensitivity of your data,
your company's compliance objectives, and applicable laws and regulations. AWS provides the following
resources to help with compliance:
• Security and compliance quick start guides – These deployment guides discuss architectural
considerations and provide steps for deploying security- and compliance-focused baseline
environments on AWS.
• Architecting for HIPAA security and compliance paper – This paper describes how companies can use
AWS to create HIPAA-compliant applications.
• AWS compliance resources – This collection of workbooks and guides might apply to your industry and
location.
• AWS Config – This AWS service assesses how well your resource configurations comply with internal
practices, industry guidelines, and regulations.
• AWS Security Hub – This AWS service provides a comprehensive view of your security state within AWS
that helps you check your compliance with security industry standards and best practices.
Resilience
Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high
availability. Amazon EKS automatically detects and replaces unhealthy control plane instances, and it
provides automated version upgrades and patching for them.
This control plane consists of at least two API server nodes and three etcd nodes that run across three
Availability Zones within a Region. Amazon EKS automatically detects and replaces unhealthy control
plane instances, restarting them across the Availability Zones within the Region as needed. Amazon EKS
leverages the architecture of AWS Regions in order to maintain high availability. Because of this, Amazon
EKS is able to offer an SLA for API server endpoint availability.
For more information about AWS Regions and Availability Zones, see AWS global infrastructure.
You use AWS published API calls to access Amazon EKS through the network. Clients must support
Transport Layer Security (TLS) 1.0 or later. We recommend TLS 1.2 or later. Clients must also support
cipher suites with perfect forward secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve
Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes.
Additionally, requests must be signed by using an access key ID and a secret access key that is associated
with an IAM principal. Or you can use the AWS Security Token Service (AWS STS) to generate temporary
security credentials to sign requests.
When you create an Amazon EKS cluster, you specify the VPC subnets for your cluster to use. Amazon
EKS requires subnets in at least two Availability Zones. We recommend a VPC with public and private
subnets so that Kubernetes can create public load balancers in the public subnets that load balance
traffic to pods running on nodes that are in private subnets.
For more information about VPC considerations, see Cluster VPC considerations (p. 163).
If you create your VPC and node groups with the AWS CloudFormation templates provided in the Getting
started with Amazon EKS (p. 3) walkthrough, then your control plane and node security groups are
configured with our recommended settings.
For more information about security group considerations, see Amazon EKS security group
considerations (p. 166).
When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server
that you use to communicate with your cluster (using Kubernetes management tools such as kubectl).
By default, this API server endpoint is public to the internet, and access to the API server is secured using
a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access
Control (RBAC).
You can enable private access to the Kubernetes API server so that all communication between your
nodes and the API server stays within your VPC. You can limit the IP addresses that can access your API
server from the internet, or completely disable internet access to the API server.
For more information about modifying cluster endpoint access, see Modifying cluster endpoint
access (p. 46).
You can implement network policies with tools such as Project Calico (p. 187). Project Calico is a third-party open source project. For more information, see the Project Calico documentation.
Configuration and vulnerability analysis
The Center for Internet Security (CIS) Kubernetes Benchmark:
• Is applicable to Amazon EC2 nodes (both managed and self-managed) where you are responsible for security configurations of Kubernetes components.
• Provides a standard, community-approved way to ensure that you have configured your Kubernetes
cluster and nodes securely when using Amazon EKS.
• Consists of four sections: control plane logging configuration, node security configurations, policies, and managed services.
• Supports all of the Kubernetes versions currently available in Amazon EKS and can be run using kube-
bench, a standard open source tool for checking configuration using the CIS benchmark on Kubernetes
clusters.
Amazon EKS platform versions represent the capabilities of the cluster control plane, including which
Kubernetes API server flags are enabled and the current Kubernetes patch version. New clusters are
deployed with the latest platform version. For details, see Platform versions (p. 61).
You can update an Amazon EKS cluster (p. 34) to newer Kubernetes versions. As new Kubernetes versions
become available in Amazon EKS, we recommend that you proactively update your clusters to use
the latest available version. For more information about Kubernetes versions in EKS, see Amazon EKS
Kubernetes Versions (p. 58).
Track security or privacy events for Amazon Linux 2 at the Amazon Linux Security Center or subscribe to the associated RSS feed. Security and privacy events include an overview of the issue, affected packages, and instructions for updating your instances to correct the issue.
You can use Amazon Inspector to check for unintended network accessibility of your nodes and for
vulnerabilities on those Amazon EC2 instances.
Amazon EKS default pod security policy
You can view the default policy with the following command.
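A likely form of that command (psp is the short name for the podsecuritypolicy resource):
kubectl get psp eks.privileged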
For more details, you can describe the policy with the following command.
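That command generally looks like the following:
kubectl describe psp eks.privileged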
Output:
Name: eks.privileged
Settings:
Allow Privileged: true
Allow Privilege Escalation: 0xc0004ce5f8
Default Add Capabilities: <none>
Required Drop Capabilities: <none>
Allowed Capabilities: *
Allowed Volume Types: *
Allow Host Network: true
Allow Host Ports: 0-65535
Allow Host PID: true
Allow Host IPC: true
Read Only Root Filesystem: false
SELinux Context Strategy: RunAsAny
User: <none>
Role: <none>
Type: <none>
Level: <none>
Run As User Strategy: RunAsAny
Ranges: <none>
FSGroup Strategy: RunAsAny
Ranges: <none>
Supplemental Groups Strategy: RunAsAny
Ranges: <none>
The following example shows the full YAML file for the eks.privileged pod security policy, its cluster
role, and cluster role binding.
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: eks.privileged
  annotations:
    kubernetes.io/description: 'privileged allows full unrestricted access to pod features, as if the PodSecurityPolicy controller was not enabled.'
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
  labels:
    kubernetes.io/cluster-service: "true"
    eks.amazonaws.com/component: pod-security-policy
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
  readOnlyRootFilesystem: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks:podsecuritypolicy:privileged
  labels:
    kubernetes.io/cluster-service: "true"
    eks.amazonaws.com/component: pod-security-policy
rules:
- apiGroups:
  - policy
  resourceNames:
  - eks.privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks:podsecuritypolicy:authenticated
  annotations:
    kubernetes.io/description: 'Allow all authenticated users to create privileged pods.'
  labels:
    kubernetes.io/cluster-service: "true"
    eks.amazonaws.com/component: pod-security-policy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: eks:podsecuritypolicy:privileged
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:authenticated
After you create custom pod security policies for your cluster, you can delete the default Amazon EKS
eks.privileged pod security policy to enable your custom policies.
If you are upgrading from an earlier version of Kubernetes, or have modified or deleted the default
Amazon EKS eks.privileged pod security policy, you can restore it with the following steps.
1. Create a file called privileged-podsecuritypolicy.yaml and paste the YAML file contents
below into it.
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: eks.privileged
  annotations:
    kubernetes.io/description: 'privileged allows full unrestricted access to pod features, as if the PodSecurityPolicy controller was not enabled.'
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
  labels:
    kubernetes.io/cluster-service: "true"
    eks.amazonaws.com/component: pod-security-policy
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
  readOnlyRootFilesystem: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks:podsecuritypolicy:privileged
  labels:
    kubernetes.io/cluster-service: "true"
    eks.amazonaws.com/component: pod-security-policy
rules:
- apiGroups:
  - policy
  resourceNames:
  - eks.privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks:podsecuritypolicy:authenticated
  annotations:
    kubernetes.io/description: 'Allow all authenticated users to create privileged pods.'
  labels:
    kubernetes.io/cluster-service: "true"
    eks.amazonaws.com/component: pod-security-policy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: eks:podsecuritypolicy:privileged
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:authenticated
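2. Apply the manifest to your cluster. A minimal example, assuming the file name from the previous step:
kubectl apply -f privileged-podsecuritypolicy.yaml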
Creating Amazon EKS resources with AWS CloudFormation
Contents
• Creating Amazon EKS resources with AWS CloudFormation (p. 281)
• Logging Amazon EKS API calls with AWS CloudTrail (p. 282)
• Amazon EKS on AWS Outposts (p. 284)
• Deep Learning Containers (p. 287)
• Tutorial: Configure App Mesh integration with Kubernetes (p. 287)
When you use AWS CloudFormation, you can reuse your template to set up your Amazon EKS resources
consistently and repeatedly. Just describe your resources once, and then provision the same resources
over and over in multiple AWS accounts and Regions.
Amazon EKS supports creating clusters and node groups in AWS CloudFormation. For more information,
including examples of JSON and YAML templates for your Amazon EKS resources, see Amazon EKS
resource type reference in the AWS CloudFormation User Guide.
• AWS CloudFormation
• AWS CloudFormation User Guide
• AWS CloudFormation Command Line Interface User Guide
Logging Amazon EKS API calls with AWS CloudTrail
If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket,
including events for Amazon EKS. If you don't configure a trail, you can still view the most recent events
in the CloudTrail console in Event history. Using the information collected by CloudTrail, you can
determine the request that was made to Amazon EKS, the IP address from which the request was made,
who made the request, when it was made, and additional details.
To learn more about CloudTrail, see the AWS CloudTrail User Guide.
For an ongoing record of events in your AWS account, including events for Amazon EKS, create a trail.
A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a
trail in the console, the trail applies to all AWS Regions. The trail logs events from all Regions in the
AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can
configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs.
For more information, see the following:
All Amazon EKS actions are logged by CloudTrail and are documented in the Amazon EKS API Reference.
For example, calls to the CreateCluster, ListClusters, and DeleteCluster operations generate entries in the CloudTrail log files.
Every event or log entry contains information about who generated the request. The identity
information helps you determine the following:
• Whether the request was made with root or AWS Identity and Access Management (IAM) user
credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.
Understanding Amazon EKS log file entries
An event represents a single request from any source and includes information about the requested action, the date and time of the action, request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so they don't appear in any specific order.
The following example shows a CloudTrail log entry that demonstrates the CreateCluster action.
{
"eventVersion": "1.05",
"userIdentity": {
"type": "IAMUser",
"principalId": "AKIAIOSFODNN7EXAMPLE",
"arn": "arn:aws:iam::111122223333:user/username",
"accountId": "111122223333",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"userName": "username"
},
"eventTime": "2018-05-28T19:16:43Z",
"eventSource": "eks.amazonaws.com",
"eventName": "CreateCluster",
"awsRegion": "region-code",
"sourceIPAddress": "205.251.233.178",
"userAgent": "PostmanRuntime/6.4.0",
"requestParameters": {
"resourcesVpcConfig": {
"subnetIds": [
"subnet-a670c2df",
"subnet-4f8c5004"
]
},
"roleArn": "arn:aws:iam::111122223333:role/AWSServiceRoleForAmazonEKS-CAC1G1VH3ZKZ",
"clusterName": "test"
},
"responseElements": {
"cluster": {
"clusterName": "test",
"status": "CREATING",
"createdAt": 1527535003.208,
"certificateAuthority": {},
"arn": "arn:aws:eks:region-code:111122223333:cluster/test",
"roleArn": "arn:aws:iam::111122223333:role/AWSServiceRoleForAmazonEKS-CAC1G1VH3ZKZ",
"version": "1.10",
"resourcesVpcConfig": {
"securityGroupIds": [],
"vpcId": "vpc-21277358",
"subnetIds": [
"subnet-a670c2df",
"subnet-4f8c5004"
]
}
}
},
"requestID": "a7a0735d-62ab-11e8-9f79-81ce5b2b7d37",
"eventID": "eab22523-174a-499c-9dd6-91e7be3ff8e3",
"readOnly": false,
"eventType": "AwsApiCall",
"recipientAccountId": "111122223333"
}
For more information about Amazon EKS and service linked roles, see the section called “Using Service-
Linked Roles” (p. 251).
The following example shows a CloudTrail log entry that demonstrates a DeleteInstanceProfile
action made by the AWSServiceRoleForAmazonEKSNodegroup service linked role, noted in the
sessionContext.
{
"eventVersion": "1.05",
"userIdentity": {
"type": "AssumedRole",
"principalId": "AROA3WHGPEZ7SJ2CW55C5:EKS",
"arn": "arn:aws:sts::111122223333:assumed-role/AWSServiceRoleForAmazonEKSNodegroup/
EKS",
"accountId": "111122223333",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"sessionContext": {
"sessionIssuer": {
"type": "Role",
"principalId": "AROA3WHGPEZ7SJ2CW55C5",
"arn": "arn:aws:iam::111122223333:role/aws-service-role/eks-
nodegroup.amazonaws.com/AWSServiceRoleForAmazonEKSNodegroup",
"accountId": "111122223333",
"userName": "AWSServiceRoleForAmazonEKSNodegroup"
},
"webIdFederationData": {},
"attributes": {
"mfaAuthenticated": "false",
"creationDate": "2020-02-26T00:56:33Z"
}
},
"invokedBy": "eks-nodegroup.amazonaws.com"
},
"eventTime": "2020-02-26T00:56:34Z",
"eventSource": "iam.amazonaws.com",
"eventName": "DeleteInstanceProfile",
"awsRegion": "us-east-1",
"sourceIPAddress": "eks-nodegroup.amazonaws.com",
"userAgent": "eks-nodegroup.amazonaws.com",
"requestParameters": {
"instanceProfileName": "eks-11111111-2222-3333-4444-abcdef123456"
},
"responseElements": null,
"requestID": "11111111-2222-3333-4444-abcdef123456",
"eventID": "11111111-2222-3333-4444-abcdef123456",
"eventType": "AwsApiCall",
"recipientAccountId": "111122223333"
}
Amazon EKS on AWS Outposts
Prerequisites
The following are the prerequisites for using Amazon EKS nodes on AWS Outposts:
• You must have installed and configured an Outpost in your on-premises data center.
• You must have a reliable network connection between your Outpost and its AWS Region.
• The AWS Region for the Outpost must support Amazon EKS. For a list of supported Regions, see
Amazon EKS service endpoints in the AWS General Reference.
Limitations
The following are the limitations of using Amazon EKS on Outposts:
• AWS Identity and Access Management, Application Load Balancer, Network Load Balancer, Classic Load Balancer, and Amazon Route 53 run in the AWS Region, not on Outposts. This increases latency between those services and containers running on the Outpost.
• AWS Fargate is not available on AWS Outposts.
• If network connectivity between your Outpost and its AWS Region is lost, your nodes continue to run. However, you can't create new nodes or take new actions on existing deployments until connectivity is restored. If an instance fails, it isn't automatically replaced. Because the Kubernetes control plane runs in the Region, missed heartbeats caused by events such as a loss of connectivity to the Availability Zone can lead to failures. The failed heartbeats cause pods on the Outpost to be marked as unhealthy, and eventually the node status times out and the pods are marked for eviction. For more information, see Node Controller in the Kubernetes documentation.
• We recommend that you provide reliable, highly available, and low-latency connectivity between your
Outpost and its AWS Region.
An Outpost is an extension of an AWS Region, and you can extend a VPC in an account to span multiple
Availability Zones and any associated Outpost locations. When you configure your Outpost, you associate
a subnet with it to extend your Regional VPC environment to your on-premises facility. Instances on an
Outpost appear as part of your Regional VPC, similar to an Availability Zone with associated subnets.
Creating Amazon EKS nodes on an Outpost
To create Amazon EKS nodes on an Outpost with the AWS CLI, specify a security group and a subnet
associated with your Outpost.
1. Create a VPC.
2. Create Outpost subnets. The --outpost-arn parameter must be specified for the subnet to be
created for the Outpost. (This step is different for AWS Outposts.)
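A sketch of that call with the AWS CLI; the VPC ID, CIDR block, Availability Zone, and Outpost ARN are all placeholders:
aws ec2 create-subnet \
    --vpc-id vpc-0123456789abcdef0 \
    --cidr-block 10.0.3.0/24 \
    --availability-zone availability-zone \
    --outpost-arn arn:aws:outposts:region-code:111122223333:outpost/op-0123456789abcdef0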
3. Create a cluster, specifying the subnets for the Outpost. (This step is different for AWS Outposts.)
4. Create the node group. Specify an instance type that is available on your Outpost. (This step is
different for AWS Outposts.)
--nodes 3 \
--nodes-min 1 \
--nodes-max 4
Deep Learning Containers
To get started using AWS Deep Learning Containers on Amazon EKS, see AWS Deep Learning Containers
on Amazon EKS in the AWS Deep Learning AMI Developer Guide.
The controller is accompanied by the deployment of the following Kubernetes custom resource
definitions: meshes, virtual services, virtual nodes, and virtual routers. The controller
watches for creation, modification, and deletion of the custom resources and makes changes to the
corresponding App Mesh mesh, virtual service, virtual node, virtual gateway, gateway
route, and virtual router (including route) resources through the App Mesh API. To learn more or
contribute to the controller, see the GitHub project.
The controller also installs a webhook that injects the following containers into Kubernetes pods that are
labeled with a name that you specify.
• App Mesh Envoy proxy – Envoy uses the configuration defined in the App Mesh control plane to
determine where to send your application traffic.
• App Mesh proxy route manager – Updates iptables rules in a pod's network namespace that route
ingress and egress traffic through Envoy. This container runs as a Kubernetes init container inside of
the pod.
Prerequisites
• An existing understanding of App Mesh concepts. For more information, see What is AWS App Mesh.
• An existing Kubernetes cluster running version 1.13 or later. If you don't have an existing cluster,
you can deploy one using the Getting Started with Amazon EKS guide. If you're running your
own Kubernetes cluster on Amazon EC2, then ensure that Docker is authenticated to the Amazon
ECR repository that the Envoy image is in. For more information, see Envoy image and Registry
authentication in the AWS documentation and Pull an Image from a Private Registry in the Kubernetes
documentation.
• The AWS CLI version 1.18.99 or later or 2.0.31 or later installed. To install or upgrade the AWS CLI, see Installing the AWS CLI.
• A kubectl client that is configured to communicate with your Kubernetes cluster. If you're using Amazon Elastic Kubernetes Service, you can use the instructions for installing kubectl and configuring a kubeconfig file.
• Helm version 3.0 or later installed. If you don't have Helm installed, you can install it by completing the instructions in Using Helm with Amazon EKS.
Step 1: Install the integration components
1. The remaining steps of this procedure require a cluster without a pre-release version of the
controller installed. If you have installed a pre-release version, or are not sure whether you have, you
can download and run a script that will check to see whether a pre-release version is installed on
your cluster.
If the script returns Your cluster is ready for upgrade. Please proceed to the
installation instructions then you can proceed to the next step. If a different message is
returned, then you'll need to complete the upgrade steps before continuing. For more information
about upgrading a pre-release version, see Upgrade on GitHub.
2. Add the eks-charts repository to Helm.
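This typically means running the following command:
helm repo add eks https://aws.github.io/eks-charts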
5. Set the following variables for use in later steps. Replace cluster-name and region-code with
the values for your existing cluster.
export CLUSTER_NAME=cluster-name
export AWS_REGION=region-code
6. (Optional) If you want to run the controller on Fargate, then you need to create a Fargate profile. If
you don't have eksctl installed, you can install it with the instructions in Installing or Upgrading
eksctl. If you'd prefer to create the profile using the console, see Creating a Fargate profile.
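A sketch with eksctl, assuming the controller is deployed to the appmesh-system namespace:
eksctl create fargateprofile --cluster $CLUSTER_NAME --name appmesh-system --namespace appmesh-system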
7. Create an OpenID Connect (OIDC) identity provider for your cluster. If you don't have eksctl
installed, you can install it with the instructions in Installing or upgrading eksctl. If you'd prefer to
create the provider using the console, see Enabling IAM roles for service accounts on your cluster.
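With eksctl, this generally takes the following form, using the variables set earlier:
eksctl utils associate-iam-oidc-provider --region $AWS_REGION --cluster $CLUSTER_NAME --approve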
8. Create an IAM role, attach the AWSAppMeshFullAccess and AWSCloudMapFullAccess AWS managed
policies to it, and bind it to the appmesh-controller Kubernetes service account. The role enables
the controller to add, remove, and change App Mesh resources.
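A sketch of that eksctl command, assuming the appmesh-controller service account lives in the appmesh-system namespace:
eksctl create iamserviceaccount \
    --cluster $CLUSTER_NAME \
    --namespace appmesh-system \
    --name appmesh-controller \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSCloudMapFullAccess,arn:aws:iam::aws:policy/AWSAppMeshFullAccess \
    --override-existing-serviceaccounts \
    --approve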
Note
The command creates an AWS IAM role with an auto-generated name. You are not able to
specify the IAM role name that is created.
If you prefer to create the service account using the AWS Management Console or AWS CLI, see
Creating an IAM role and policy for your service account. If you use the AWS Management Console
or AWS CLI to create the account, you also need to map the role to a Kubernetes service account. For
more information, see Specifying an IAM role for your service account.
9. Deploy the App Mesh controller. For a list of all configuration options, see Configuration on GitHub.
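A sketch of the Helm command, assuming the appmesh-system namespace and the service account created in the previous step; check the eks-charts documentation for the full set of values:
helm upgrade -i appmesh-controller eks/appmesh-controller \
    --namespace appmesh-system \
    --set region=$AWS_REGION \
    --set serviceAccount.create=false \
    --set serviceAccount.name=appmesh-controller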
Important
If your cluster is in the me-south-1 or ap-east-1 Regions, then you need to add the
following option to the previous command:
--set sidecar.image.repository=account-id.dkr.ecr.region-code.amazonaws.com/aws-appmesh-envoy
Replace account-id and region-code with one of the appropriate sets of values.
• 772975370895.dkr.ecr.me-south-1.amazonaws.com/aws-appmesh-envoy:v1.12.5.0-prod
• 856666278305.dkr.ecr.ap-east-1.amazonaws.com/aws-appmesh-envoy:v1.12.5.0-prod
10. Confirm that the controller version is v1.0.0 or later. You can review the change log on GitHub.
Note
If you view the log for the running container, you may see a line that includes the following text, which can be safely ignored.
Neither -kubeconfig nor -master was specified. Using the inClusterConfig. This might not work.
Step 2: Deploy App Mesh resources
apiVersion: v1
kind: Namespace
metadata:
  name: my-apps
  labels:
    mesh: my-mesh
    appmesh.k8s.aws/sidecarInjectorWebhook: enabled
a. Save the following contents to a file named mesh.yaml on your computer. The file will be used
to create a mesh resource named my-mesh. A service mesh is a logical boundary for network
traffic between the services that reside within it.
apiVersion: appmesh.k8s.aws/v1beta2
kind: Mesh
metadata:
  name: my-mesh
spec:
  namespaceSelector:
    matchLabels:
      mesh: my-mesh
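b. Apply the manifest to create the mesh, assuming the mesh.yaml file name from the previous sub-step:
kubectl apply -f mesh.yaml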
c. View the details of the Kubernetes mesh resource that was created.
Output
Name: my-mesh
Namespace:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"appmesh.k8s.aws/v1beta2","kind":"Mesh","metadata":
{"annotations":{},"name":"my-mesh"},"spec":{"namespaceSelector":{"matchLa...
API Version: appmesh.k8s.aws/v1beta2
Kind: Mesh
Metadata:
Creation Timestamp: 2020-06-17T14:51:37Z
Finalizers:
finalizers.appmesh.k8s.aws/mesh-members
finalizers.appmesh.k8s.aws/aws-appmesh-resources
Generation: 1
Resource Version: 6295
Self Link: /apis/appmesh.k8s.aws/v1beta2/meshes/my-mesh
UID: 111a11b1-c11d-1e1f-gh1i-j11k1l111m711
Spec:
Aws Name: my-mesh
Namespace Selector:
Match Labels:
Mesh: my-mesh
Status:
Conditions:
Last Transition Time: 2020-06-17T14:51:37Z
Status: True
Type: MeshActive
Mesh ARN: arn:aws:appmesh:us-west-2:111122223333:mesh/my-mesh
Observed Generation: 1
Events: <none>
d. View the details about the App Mesh service mesh that the controller created.
Output
{
"mesh": {
"meshName": "my-mesh",
"metadata": {
"arn": "arn:aws:appmesh:us-west-2:111122223333:mesh/my-mesh",
"createdAt": "2020-06-17T09:51:37.920000-05:00",
"lastUpdatedAt": "2020-06-17T09:51:37.920000-05:00",
"meshOwner": "111122223333",
"resourceOwner": "111122223333",
"uid": "111a11b1-c11d-1e1f-gh1i-j11k1l111m711",
"version": 1
},
"spec": {},
"status": {
"status": "ACTIVE"
}
}
}
3. Create an App Mesh virtual node. A virtual node acts as a logical pointer to a Kubernetes
deployment.
a. Save the following contents to a file named virtual-node.yaml on your computer. The
file will be used to create an App Mesh virtual node named my-service-a in the my-apps
namespace. The virtual node represents a Kubernetes service that is created in a later step. The
value for hostname is the fully qualified DNS hostname of the actual service that this virtual
node represents.
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: my-service-a
  namespace: my-apps
spec:
  podSelector:
    matchLabels:
      app: my-app-1
  listeners:
    - portMapping:
        port: 80
        protocol: http
  serviceDiscovery:
    dns:
      hostname: my-service-a.my-apps.svc.cluster.local
Virtual nodes have capabilities, such as end-to-end encryption and health checks, that aren't
covered in this tutorial. For more information, see Virtual nodes. To see all available settings for
a virtual node that you can set in the preceding spec, run the following command.
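b. Apply the manifest to create the virtual node, assuming the virtual-node.yaml file name from the previous sub-step:
kubectl apply -f virtual-node.yaml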
c. View the details of the Kubernetes virtual node resource that was created.
Output
Name: my-service-a
Namespace: my-apps
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"appmesh.k8s.aws/
v1beta2","kind":"VirtualNode","metadata":{"annotations":{},"name":"my-service-
a","namespace":"my-app-1"},"s...
API Version: appmesh.k8s.aws/v1beta2
Kind: VirtualNode
Metadata:
Creation Timestamp: 2020-06-17T14:57:29Z
Finalizers:
finalizers.appmesh.k8s.aws/aws-appmesh-resources
Generation: 2
d. View the details of the virtual node that the controller created in App Mesh.
Note
Even though the name of the virtual node created in Kubernetes is my-service-a, the name of the virtual node created in App Mesh is my-service-a_my-apps. The controller appends the Kubernetes namespace name to the App Mesh virtual node name when it creates the App Mesh resource. The namespace name is added because in Kubernetes you can create virtual nodes with the same name in different namespaces, but in App Mesh a virtual node name must be unique within a mesh.
Output
{
"virtualNode": {
"meshName": "my-mesh",
"metadata": {
"arn": "arn:aws:appmesh:us-west-2:111122223333:mesh/my-mesh/
virtualNode/my-service-a_my-apps",
"createdAt": "2020-06-17T09:57:29.840000-05:00",
"lastUpdatedAt": "2020-06-17T09:57:29.840000-05:00",
"meshOwner": "111122223333",
"resourceOwner": "111122223333",
"uid": "111a11b1-c11d-1e1f-gh1i-j11k1l111m711",
"version": 1
},
"spec": {
"backends": [],
"listeners": [
{
"portMapping": {
"port": 80,
"protocol": "http"
}
}
],
"serviceDiscovery": {
"dns": {
"hostname": "my-service-a.my-apps.svc.cluster.local"
}
}
},
"status": {
"status": "ACTIVE"
},
"virtualNodeName": "my-service-a_my-apps"
}
}
4. Create an App Mesh virtual router. Virtual routers handle traffic for one or more virtual services
within your mesh.
a. Save the following contents to a file named virtual-router.yaml on your computer. The file
will be used to create a virtual router to route traffic to the virtual node named my-service-a
that was created in the previous step. The controller will create the App Mesh virtual router and
route resources. You can specify many more capabilities for your routes and use protocols other
than http. For more information, see Virtual routers and Routes. Notice that the virtual node
name referenced is the Kubernetes virtual node name, not the App Mesh virtual node name that
was created in App Mesh by the controller.
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  namespace: my-apps
  name: my-service-a-virtual-router
spec:
  listeners:
    - portMapping:
        port: 80
        protocol: http
  routes:
    - name: my-service-a-route
      httpRoute:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeRef:
                name: my-service-a
              weight: 1
(Optional) To see all available settings for a virtual router that you can set in the preceding spec, run any of the following commands.
To see all available settings for a route that you can set in the preceding spec, run any of the following commands.
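b. Apply the manifest to create the virtual router and its route, assuming the virtual-router.yaml file name from the previous sub-step:
kubectl apply -f virtual-router.yaml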
Abbreviated output
Name: my-service-a-virtual-router
Namespace: my-app-1
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"appmesh.k8s.aws/
v1beta2","kind":"VirtualRouter","metadata":{"annotations":{},"name":"my-service-a-
virtual-router","namespac...
API Version: appmesh.k8s.aws/v1beta2
Kind: VirtualRouter
...
Spec:
Aws Name: my-service-a-virtual-router_my-apps
Listeners:
Port Mapping:
Port: 80
Protocol: http
Mesh Ref:
Name: my-mesh
UID: 111a11b1-c11d-1e1f-gh1i-j11k1l111m711
Routes:
Http Route:
Action:
Weighted Targets:
Virtual Node Ref:
Name: my-service-a
Weight: 1
Match:
Prefix: /
Name: my-service-a-route
Status:
Conditions:
Last Transition Time: 2020-06-17T15:14:01Z
Status: True
Type: VirtualRouterActive
Observed Generation: 1
Route AR Ns:
My - Service - A - Route: arn:aws:appmesh:us-west-2:111122223333:mesh/my-mesh/
virtualRouter/my-service-a-virtual-router_my-apps/route/my-service-a-route
Virtual Router ARN: arn:aws:appmesh:us-west-2:111122223333:mesh/my-mesh/
virtualRouter/my-service-a-virtual-router_my-apps
Events: <none>
d. View the virtual router resource that the controller created in App Mesh. You specify my-service-a-virtual-router_my-apps for name, because when the controller created the virtual router in App Mesh, it appended the Kubernetes namespace name to the name of the virtual router.
Output
{
"virtualRouter": {
"meshName": "my-mesh",
"metadata": {
"arn": "arn:aws:appmesh:us-west-2:111122223333:mesh/my-mesh/
virtualRouter/my-service-a-virtual-router_my-apps",
"createdAt": "2020-06-17T10:14:01.547000-05:00",
"lastUpdatedAt": "2020-06-17T10:14:01.547000-05:00",
"meshOwner": "111122223333",
"resourceOwner": "111122223333",
"uid": "111a11b1-c11d-1e1f-gh1i-j11k1l111m711",
"version": 1
},
"spec": {
"listeners": [
{
"portMapping": {
"port": 80,
"protocol": "http"
}
}
]
},
"status": {
"status": "ACTIVE"
},
"virtualRouterName": "my-service-a-virtual-router_my-apps"
}
}
e. View the route resource that the controller created in App Mesh. A route resource was
not created in Kubernetes because the route is part of the virtual router configuration in
Kubernetes. The route information was shown in the Kubernetes resource detail in sub-step c.
The controller did not append the Kubernetes namespace name to the App Mesh route name
when it created the route in App Mesh because route names are unique to a virtual router.
Output
{
"route": {
"meshName": "my-mesh",
"metadata": {
"arn": "arn:aws:appmesh:us-west-2:111122223333:mesh/my-mesh/
virtualRouter/my-service-a-virtual-router_my-apps/route/my-service-a-route",
"createdAt": "2020-06-17T10:14:01.577000-05:00",
"lastUpdatedAt": "2020-06-17T10:14:01.577000-05:00",
"meshOwner": "111122223333",
"resourceOwner": "111122223333",
"uid": "111a11b1-c11d-1e1f-gh1i-j11k1l111m711",
"version": 1
},
"routeName": "my-service-a-route",
"spec": {
"httpRoute": {
"action": {
"weightedTargets": [
{
"virtualNode": "my-service-a_my-apps",
"weight": 1
}
]
},
"match": {
"prefix": "/"
}
}
},
"status": {
"status": "ACTIVE"
},
"virtualRouterName": "my-service-a-virtual-router_my-apps"
}
}
5. Create an App Mesh virtual service. A virtual service is an abstraction of a real service that is
provided by a virtual node directly or indirectly by means of a virtual router. Dependent services
call your virtual service by its name. Though the name doesn't matter to App Mesh, we recommend
naming the virtual service the fully qualified domain name of the actual service that the virtual
service represents. By naming your virtual services this way, you don't need to change your
application code to reference a different name. The requests are routed to the virtual node or virtual
router that is specified as the provider for the virtual service.
a. Save the following contents to a file named virtual-service.yaml on your computer. The
file will be used to create a virtual service that uses a virtual router provider to route traffic
to the virtual node named my-service-a that was created in a previous step. The value for
awsName in the spec is the fully qualified domain name (FQDN) of the actual Kubernetes
service that this virtual service abstracts. The Kubernetes service is created in the section called
“Step 3: Create or update services” (p. 299). For more information, see Virtual services.
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: my-service-a
  namespace: my-apps
spec:
  awsName: my-service-a.my-apps.svc.cluster.local
  provider:
    virtualRouter:
      virtualRouterRef:
        name: my-service-a-virtual-router
To see all available settings for a virtual service that you can set in the preceding spec, run the
following command.
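b. Apply the manifest to create the virtual service, assuming the virtual-service.yaml file name from the previous sub-step:
kubectl apply -f virtual-service.yaml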
c. View the details of the Kubernetes virtual service resource that was created.
Output
Name: my-service-a
Namespace: my-app-1
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"appmesh.k8s.aws/
v1beta2","kind":"VirtualService","metadata":{"annotations":{},"name":"my-service-
a","namespace":"my-app-1"}...
API Version: appmesh.k8s.aws/v1beta2
Kind: VirtualService
Metadata:
Creation Timestamp: 2020-06-17T15:48:40Z
Finalizers:
finalizers.appmesh.k8s.aws/aws-appmesh-resources
Generation: 1
Resource Version: 13598
Self Link: /apis/appmesh.k8s.aws/v1beta2/namespaces/my-apps/
virtualservices/my-service-a
UID: 111a11b1-c11d-1e1f-gh1i-j11k1l111m711
Spec:
Aws Name: my-service-a.my-apps.svc.cluster.local
Mesh Ref:
Name: my-mesh
UID: 111a11b1-c11d-1e1f-gh1i-j11k1l111m711
Provider:
Virtual Router:
Virtual Router Ref:
Name: my-service-a-virtual-router
Status:
Conditions:
Last Transition Time: 2020-06-17T15:48:40Z
Status: True
Type: VirtualServiceActive
Observed Generation: 1
Virtual Service ARN: arn:aws:appmesh:us-west-2:111122223333:mesh/my-mesh/
virtualService/my-service-a.my-apps.svc.cluster.local
Events: <none>
d. View the details of the virtual service resource that the controller created in App Mesh. The
Kubernetes controller did not append the Kubernetes namespace name to the App Mesh virtual
service name when it created the virtual service in App Mesh because the virtual service's name
is a unique FQDN.
Output
{
"virtualService": {
"meshName": "my-mesh",
"metadata": {
"arn": "arn:aws:appmesh:us-west-2:111122223333:mesh/my-mesh/
virtualService/my-service-a.my-apps.svc.cluster.local",
"createdAt": "2020-06-17T10:48:40.182000-05:00",
"lastUpdatedAt": "2020-06-17T10:48:40.182000-05:00",
"meshOwner": "111122223333",
"resourceOwner": "111122223333",
"uid": "111a11b1-c11d-1e1f-gh1i-j11k1l111m711",
"version": 1
},
"spec": {
"provider": {
"virtualRouter": {
"virtualRouterName": "my-service-a-virtual-router_my-apps"
}
}
},
"status": {
"status": "ACTIVE"
},
"virtualServiceName": "my-service-a.my-apps.svc.cluster.local"
}
}
Though not covered in this tutorial, the controller can also deploy App Mesh virtual gateways and
gateway routes. For a walkthrough of deploying these resources with the controller, see Configuring
Ingress Gateway, or a sample manifest that includes the resources on GitHub.
Step 3: Create or update services
1. Enable proxy authorization. We recommend that you enable each Kubernetes deployment to stream only the configuration for its own App Mesh virtual node.
a. Save the following contents to a file named proxy-auth.json on your computer. Make sure to
replace the alternate-colored values with your own.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "appmesh:StreamAggregatedResources",
      "Resource": [
        "arn:aws:appmesh:region-code:111122223333:mesh/my-mesh/virtualNode/my-service-a_my-apps"
      ]
    }
  ]
}
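b. Create an IAM policy from the file. A sketch with the AWS CLI; the policy name is a placeholder:
aws iam create-policy --policy-name my-service-a-proxy-auth --policy-document file://proxy-auth.json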
c. Create an IAM role, attach the policy you created in the previous step to it, create a Kubernetes service account, and bind the policy to the Kubernetes service account. The role enables the Envoy proxy in your pods to stream only the configuration for its own virtual node from App Mesh.
If you prefer to create the service account using the AWS Management Console or AWS CLI,
see Creating an IAM Role and policy for your service account. If you use the AWS Management
Console or AWS CLI to create the account, you also need to map the role to a Kubernetes service
account. For more information, see Specifying an IAM role for your service account.
2. (Optional) If you want to deploy your deployment to Fargate pods, then you need to create a
Fargate profile. If you don't have eksctl installed, you can install it with the instructions in
Installing or Upgrading eksctl. If you'd prefer to create the profile using the console, see Creating a
Fargate profile.
3. Create a Kubernetes service and deployment. If you have an existing deployment that you want to
use with App Mesh, then you need to deploy a virtual node, as you did in sub-step 3 of the section
called “Step 2: Deploy App Mesh resources” (p. 290), and update your deployment to make sure
that its label matches the label that you set on the virtual node, so that the sidecar containers are
automatically added to the pods and the pods are redeployed.
a. Save the following contents to a file named example-service.yaml on your computer. If you
change the namespace name and are using Fargate pods, make sure that the namespace name
matches the namespace name that you defined in your Fargate profile.
apiVersion: v1
kind: Service
metadata:
  name: my-service-a
  namespace: my-apps
  labels:
    app: my-app-1
spec:
  selector:
    app: my-app-1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-a
  namespace: my-apps
  labels:
    app: my-app-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app-1
  template:
    metadata:
      labels:
        app: my-app-1
    spec:
      serviceAccountName: my-service-a
      containers:
      - name: nginx
        image: nginx:1.19.0
        ports:
        - containerPort: 80
Important
The value for the app matchLabels selector in the spec must match the value that
you specified when you created the virtual node in sub-step 3 of the section called
“Step 2: Deploy App Mesh resources” (p. 290), or the sidecar containers won't be
injected into the pod. In the previous example, the value for the label is my-app-1.
If you deploy a virtual gateway, rather than a virtual node, then the Deployment manifest should include only the Envoy container. For more information about the image to use, see Envoy image. For a sample manifest, see the deployment example on GitHub.
b. Deploy the service.
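For example, assuming the example-service.yaml file name from the previous sub-step:
kubectl apply -f example-service.yaml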
d. View the details for one of the pods that was deployed.
Abbreviated output
Name: my-service-a-54776556f6-2cxd9
Namespace: my-app-1
Priority: 0
Node: ip-192-168-44-157.us-west-2.compute.internal/192.168.44.157
Start Time: Wed, 17 Jun 2020 11:08:59 -0500
Labels: app=nginx
pod-template-hash=54776556f6
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.57.134
IPs:
IP: 192.168.57.134
Controlled By: ReplicaSet/my-service-a-54776556f6
Init Containers:
proxyinit:
Container ID: docker://
e0c4810d584c21ae0cb6e40f6119d2508f029094d0e01c9411c6cf2a32d77a59
Image: 111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-proxy-
route-manager:v2
Image ID: docker-pullable://111345817488.dkr.ecr.us-west-2.amazonaws.com/
aws-appmesh-proxy-route-manager
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 26 Jun 2020 08:36:22 -0500
Finished: Fri, 26 Jun 2020 08:36:22 -0500
Ready: True
Restart Count: 0
Requests:
cpu: 10m
memory: 32Mi
Environment:
APPMESH_START_ENABLED: 1
APPMESH_IGNORE_UID: 1337
APPMESH_ENVOY_INGRESS_PORT: 15000
APPMESH_ENVOY_EGRESS_PORT: 15001
APPMESH_APP_PORTS: 80
APPMESH_EGRESS_IGNORED_IP: 169.254.169.254
APPMESH_EGRESS_IGNORED_PORTS: 22
AWS_ROLE_ARN: arn:aws:iam::111122223333:role/eksctl-app-
mesh-addon-iamserviceaccount-my-a-Role1-NMNCVWB6PL0N
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/
serviceaccount/token
...
Containers:
nginx:
Container ID: docker://
be6359dc6ecd3f18a1c87df7b57c2093e1f9db17d5b3a77f22585ce3bcab137a
Image: nginx:1.19.0
Image ID: docker-pullable://nginx
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 26 Jun 2020 08:36:28 -0500
Ready: True
Restart Count: 0
Environment:
AWS_ROLE_ARN: arn:aws:iam::111122223333:role/eksctl-app-mesh-
addon-iamserviceaccount-my-a-Role1-NMNCVWB6PL0N
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/
serviceaccount/token
...
envoy:
Container ID:
docker://905b55cbf33ef3b3debc51cb448401d24e2e7c2dbfc6a9754a2c49dd55a216b6
Image: 840364872350.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-
envoy:v1.12.4.0-prod
Image ID: docker-pullable://840364872350.dkr.ecr.us-west-2.amazonaws.com/
aws-appmesh-envoy
Port: 9901/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 26 Jun 2020 08:36:36 -0500
Ready: True
Restart Count: 0
Requests:
cpu: 10m
memory: 32Mi
Environment:
APPMESH_VIRTUAL_NODE_NAME: mesh/my-mesh/virtualNode/my-service-a_my-apps
APPMESH_PREVIEW: 0
ENVOY_LOG_LEVEL: info
AWS_REGION: us-west-2
AWS_ROLE_ARN: arn:aws:iam::111122223333:role/eksctl-app-mesh-
addon-iamserviceaccount-my-a-Role1-NMNCVWB6PL0N
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/
serviceaccount/token
...
Events:
Type Reason Age From
Message
In the preceding output, you can see that the proxyinit and envoy containers were added
to the pod by the controller. If you deployed the example service to Fargate, then the envoy
container was added to the pod by the controller, but the proxyinit container was not.
4. (Optional) Install add-ons such as Prometheus, Grafana, AWS X-Ray, Jaeger, and Datadog. For more
information, see App Mesh add-ons on GitHub.
Step 4: Clean up
Remove all of the example resources created in this tutorial. The controller also removes the resources
that were created in the my-mesh App Mesh service mesh.
If you created a Fargate profile for the example service, then remove it.
(Optional) If you deployed the Kubernetes integration components to Fargate, then delete the Fargate
profile.
Insufficient capacity
If you receive the following error while attempting to create an Amazon EKS cluster, then one of the
Availability Zones you specified does not have sufficient capacity to support a cluster.
Retry creating your cluster with subnets in your cluster VPC that are hosted in the Availability Zones
returned by this error message.
Nodes fail to join cluster
There are a few common reasons that prevent nodes from joining the cluster:
• The aws-auth-cm.yaml file does not have the correct IAM role ARN for your nodes. Ensure that the node IAM role ARN (not the instance profile ARN) is specified in your aws-auth-cm.yaml file. For more information, see Launching self-managed Amazon Linux 2 nodes (p. 96).
• The ClusterName in your node AWS CloudFormation template does not exactly match the name of
the cluster you want your nodes to join. Passing an incorrect value to this field results in an incorrect
configuration of the node's /var/lib/kubelet/kubeconfig file, and the nodes will not join the
cluster.
• The node is not tagged as being owned by the cluster. Your nodes must have the following tag applied to them, where <cluster-name> is replaced with the name of your cluster.
Key: kubernetes.io/cluster/<cluster-name>
Value: owned
• The nodes may not be able to access the cluster using a public IP address. Ensure that nodes deployed
in public subnets are assigned a public IP address. If not, you can associate an elastic IP address to a
node after it's launched. For more information, see Associating an elastic IP address with a running
instance or network interface. If the public subnet is not set to automatically assign public IP addresses
to instances deployed to it, then we recommend enabling that setting. For more information, see
Modifying the public IPv4 addressing attribute for your subnet. If the node is deployed to a private
subnet, then the subnet must have a route to a NAT gateway that has a public IP address assigned to
it.
• The STS endpoint for the Region that you're deploying the nodes to is not enabled for your account. To
enable the region, see Activating and deactivating AWS STS in an AWS Region.
Unauthorized or access denied (kubectl)
This could be because the cluster was created with one set of AWS credentials (from an IAM user or role),
and kubectl is using a different set of credentials.
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added
to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions).
Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. For more
information, see Managing users or IAM roles for your cluster (p. 217). If you use the console to create
the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain
when you are running kubectl commands on your cluster.
If you install and configure the AWS CLI, you can configure the IAM credentials for your user. For more
information, see Configuring the AWS CLI in the AWS Command Line Interface User Guide.
If you assumed a role to create the Amazon EKS cluster, you must ensure that kubectl is configured to
assume the same role. Use the following command to update your kubeconfig file to use an IAM role. For
more information, see Create a kubeconfig for Amazon EKS (p. 213).
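A sketch of that command; cluster_name and the role ARN are placeholders:
aws eks update-kubeconfig --name cluster_name --role-arn arn:aws:iam::111122223333:role/eks-admin-role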
To map an IAM user to a Kubernetes RBAC user, see Managing users or IAM roles for your cluster (p. 217)
or watch a video about how to map a user.
getsockopt: no route to host
Error: : error upgrading connection: error dialing backend: dial tcp 172.17.nn.nn:10250: getsockopt: no route to host
Managed node group errors
If your managed node group encounters a health issue, Amazon EKS returns an error message to help you diagnose the issue. The following are the possible error messages and their associated descriptions.
• AutoScalingGroupNotFound: We couldn't find the Auto Scaling group associated with the managed
node group. You may be able to recreate an Auto Scaling group with the same settings to recover.
• Ec2SecurityGroupNotFound: We couldn't find the cluster security group for the cluster. You must
recreate your cluster.
• Ec2SecurityGroupDeletionFailure: We could not delete the remote access security group for your
managed node group. Remove any dependencies from the security group.
• Ec2LaunchTemplateNotFound: We couldn't find the Amazon EC2 launch template for your managed
node group. You may be able to recreate a launch template with the same settings to recover.
• Ec2LaunchTemplateVersionMismatch: The Amazon EC2 launch template version for your managed
node group does not match the version that Amazon EKS created. You may be able to revert to the
version that Amazon EKS created to recover.
• IamInstanceProfileNotFound: We couldn't find the IAM instance profile for your managed node group.
You may be able to recreate an instance profile with the same settings to recover.
• IamNodeRoleNotFound: We couldn't find the IAM role for your managed node group. You may be able
to recreate an IAM role with the same settings to recover.
• AsgInstanceLaunchFailures: Your Auto Scaling group is experiencing failures while attempting to
launch instances.
• NodeCreationFailure: Your launched instances are unable to register with your Amazon EKS cluster. Common causes of this failure are insufficient node IAM role (p. 257) permissions or lack of outbound internet access for the nodes. Your nodes must be able to access the internet using a public IP address to function properly. For more information, see Cluster VPC considerations (p. 163). Your nodes must also have ports open to the internet. For more information, see Amazon EKS security group considerations (p. 166).
• InstanceLimitExceeded: Your AWS account is unable to launch any more instances of the specified
instance type. You may be able to request an Amazon EC2 instance limit increase to recover.
• InsufficientFreeAddresses: One or more of the subnets associated with your managed node group
does not have enough available IP addresses for new nodes.
• AccessDenied: Amazon EKS or one or more of your managed nodes is unable to communicate with
your cluster API server.
• InternalFailure: These errors are usually caused by an Amazon EKS server-side issue.
CNI log collection tool
Note
If the script is not present at that location, then the CNI container failed to run. You can
manually download and run the script with the following command:
curl -O https://raw.githubusercontent.com/awslabs/amazon-eks-ami/master/log-collector-script/linux/eks-log-collector.sh
sudo bash eks-log-collector.sh
The script collects the following diagnostic information. The CNI version that you have deployed can be
earlier than the script version.
/var/log/eks_i-0717c9d54b6cfaa19_2020-03-24_0103-UTC_0.6.1.tar.gz
The errors are most likely related to the AWS IAM Authenticator configuration map not being applied
to the nodes. The configuration map provides the system:bootstrappers and system:nodes
Kubernetes RBAC permissions for nodes to register to the cluster. For more information, see To enable
nodes to join your cluster on the Self-managed nodes tab of Launching self-managed Amazon Linux
2 nodes (p. 96). Ensure that you specify the Role ARN of the instance role in the configuration map, not
the Instance Profile ARN.
The authenticator does not recognize a Role ARN if it includes a path other than /, such as the following
example:
arn:aws:iam::111122223333:role/development/apps/prod-iam-role-NodeInstanceRole-621LVEXAMPLE
When specifying a Role ARN in the configuration map that includes a path other than /, you must drop
the path. The ARN above would be specified as the following:
arn:aws:iam::111122223333:role/prod-iam-role-NodeInstanceRole-621LVEXAMPLE
TLS handshake timeout
server.go:233] failed to run Kubelet: could not init cloud provider "aws": error finding instance i-1111f2222f333e44c: "error listing AWS instances: \"RequestError: send request failed\\ncaused by: Post net/http: TLS handshake timeout\""
The kubelet process will continually respawn and test the API server endpoint. The error can also occur
temporarily during any procedure that performs a rolling update of the cluster in the control plane, such
as a configuration change or version update.
To resolve the issue, check the route table and security groups to ensure that traffic from the nodes can
reach the public endpoint.
Troubleshooting IAM
This topic covers some common errors that you may see while using Amazon EKS with IAM and how to
work around them.
AccessDeniedException
If you receive an AccessDeniedException when calling an AWS API operation, then the AWS Identity
and Access Management (IAM) user or role credentials that you are using do not have the required
permissions to make that call.
In the above example message, the user does not have permissions to call the Amazon EKS
DescribeCluster API operation. To provide Amazon EKS admin permissions to a user, see Amazon EKS
identity-based policy examples (p. 249).
For more general information about IAM, see Controlling access using policies in the IAM User Guide.
Some AWS services allow you to pass an existing role to that service, instead of creating a new service
role or service-linked role. To do this, you must have permissions to pass the role to the service.
The following example error occurs when an IAM user named marymajor tries to use the console to
perform an action in Amazon EKS. However, the action requires the service to have permissions granted
by a service role. Mary does not have permissions to pass the role to the service.
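The error message is similar to the following (the account ID is a placeholder):

User: arn:aws:iam::111122223333:user/marymajor is not authorized to perform: iam:PassRole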
In this case, Mary asks her administrator to update her policies to allow her to perform the
iam:PassRole action.
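A policy statement granting that permission might look similar to the following sketch; the role ARN is a placeholder and should be scoped to the specific role that needs to be passed to the service:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::111122223333:role/my-eks-service-role"
    }
  ]
}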
Access keys consist of two parts: an access key ID (for example, AKIAIOSFODNN7EXAMPLE) and a secret
access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY). Like a user name and
password, you must use both the access key ID and secret access key together to authenticate your
requests. Manage your access keys as securely as you do your user name and password.
Important
Do not provide your access keys to a third party, even to help find your canonical user ID. By
doing this, you might give someone permanent access to your account.
When you create an access key pair, you are prompted to save the access key ID and secret access key in
a secure location. The secret access key is available only at the time you create it. If you lose your secret
access key, you must add new access keys to your IAM user. You can have a maximum of two access keys.
If you already have two, you must delete one key pair before creating a new one. To view instructions,
see Managing Access Keys in the IAM User Guide.
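For example, you can list, create, and delete access keys for an IAM user with the AWS CLI; the user name and access key ID below are placeholders:

aws iam list-access-keys --user-name marymajor
aws iam create-access-key --user-name marymajor
aws iam delete-access-key --user-name marymajor --access-key-id AKIAIOSFODNN7EXAMPLE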
I'm an administrator and want to allow others to access Amazon EKS
To get started right away, see Creating Your First IAM Delegated User and Group in the IAM User Guide.
• To learn whether Amazon EKS supports these features, see How Amazon EKS works with IAM (p. 246).
• To learn how to provide access to your resources across AWS accounts that you own, see Providing
Access to an IAM User in Another AWS Account That You Own in the IAM User Guide.
• To learn how to provide access to your resources to third-party AWS accounts, see Providing Access to
AWS Accounts Owned by Third Parties in the IAM User Guide.
• To learn how to provide access through identity federation, see Providing Access to Externally
Authenticated Users (Identity Federation) in the IAM User Guide.
• To learn the difference between using roles and resource-based policies for cross-account access, see
How IAM Roles Differ from Resource-based Policies in the IAM User Guide.
Related projects
These open source projects extend the functionality of Kubernetes clusters running on AWS, including
clusters managed by Amazon EKS.
Management tools
Related management tools for Amazon EKS and Kubernetes clusters.
eksctl
eksctl is a simple CLI tool for creating clusters on Amazon EKS.
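For example, a basic cluster with a managed node group can be created with a single command similar to the following; the cluster name, Region, and node count are illustrative:

eksctl create cluster --name my-cluster --region us-west-2 --nodes 3 --managed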
Networking
Related networking projects for Amazon EKS and Kubernetes clusters.
ExternalDNS
ExternalDNS synchronizes exposed Kubernetes services and ingresses with DNS providers including
Amazon Route 53 and AWS Service Discovery.
Security
Related security projects for Amazon EKS and Kubernetes clusters.
Machine learning
Related machine learning projects for Amazon EKS and Kubernetes clusters.
Kubeflow
A machine learning toolkit for Kubernetes.
Auto Scaling
Related auto scaling projects for Amazon EKS and Kubernetes clusters.
Cluster autoscaler
Cluster Autoscaler is a tool that automatically adjusts the number of nodes in your cluster when pods fail to launch because of insufficient resources or when nodes are underutilized.
Escalator
Escalator is a batch or job optimized horizontal autoscaler for Kubernetes.
Monitoring
Related monitoring projects for Amazon EKS and Kubernetes clusters.
Prometheus
Prometheus is an open-source systems monitoring and alerting toolkit.
CI/CD
Related CI/CD projects for Amazon EKS and Kubernetes clusters.
Jenkins X
CI/CD solution for modern cloud applications on Amazon EKS and Kubernetes clusters.
• Create and manage App Mesh resources from within Kubernetes with the App Mesh controller for Kubernetes (June 18, 2020): You can create and manage App Mesh resources from within Kubernetes. The controller also automatically injects the Envoy proxy and init containers into pods that you deploy.
• Amazon EKS now supports Amazon EC2 Inf1 nodes (June 4, 2020): You can add Amazon EC2 Inf1 nodes to your cluster.
• Amazon EKS Region expansion (p. 315) (May 13, 2020): Amazon EKS is now available in the AWS GovCloud (US-East) (us-gov-east-1) and AWS GovCloud (US-West) (us-gov-west-1) Regions.
• Kubernetes version 1.16 (April 30, 2020): Added Kubernetes version 1.16 support for new clusters and version upgrades.
• Kubernetes version 1.15 (March 10, 2020): Added Kubernetes version 1.15 support for new clusters and version upgrades.
• Amazon EKS Region expansion (p. 315) (February 26, 2020): Amazon EKS is now available in the Beijing (cn-north-1) and Ningxia (cn-northwest-1) Regions.
• Amazon FSx for Lustre CSI driver (December 23, 2019): Added topic for installing the Amazon FSx for Lustre CSI Driver on Kubernetes 1.14 Amazon EKS clusters.
• Restrict network access to the public access endpoint of a cluster (December 20, 2019): Amazon EKS now enables you to restrict the CIDR ranges that can communicate to the public access endpoint of the Kubernetes API server.
• Resolve the private access endpoint address for a cluster from outside of a VPC (December 13, 2019): Amazon EKS now enables you to resolve the private access endpoint of the Kubernetes API server from outside of a VPC.
• (Beta) Amazon EC2 A1 Amazon EC2 instance nodes (December 4, 2019): Launch Amazon EC2 A1 Amazon EC2 instance nodes that register with your Amazon EKS cluster.
• AWS Fargate on Amazon EKS (December 3, 2019): Amazon EKS Kubernetes clusters now support running pods on Fargate.
• Amazon EKS Region expansion (p. 315) (November 21, 2019): Amazon EKS is now available in the Canada (Central) (ca-central-1) Region.
• Amazon EKS Region expansion (p. 315) (October 16, 2019): Amazon EKS is now available in the South America (São Paulo) (sa-east-1) Region.
• Kubernetes Dashboard update (September 28, 2019): Updated topic for installing the Kubernetes Dashboard on Amazon EKS clusters to use the beta 2.0 version.
• Amazon EFS CSI driver (September 19, 2019): Added topic for installing the Amazon EFS CSI Driver on Kubernetes 1.14 Amazon EKS clusters.
• Amazon EC2 Systems Manager parameter for Amazon EKS-optimized AMI ID (September 18, 2019): Added topic for retrieving the Amazon EKS-optimized AMI ID using an Amazon EC2 Systems Manager parameter. The parameter eliminates the need for you to look up AMI IDs.
• Amazon EKS resource tagging (September 16, 2019): Manage tagging of your Amazon EKS clusters.
• Amazon EBS CSI driver (September 9, 2019): Added topic for installing the Amazon EBS CSI driver on Kubernetes 1.14 Amazon EKS clusters.
• New Amazon EKS-optimized AMI patched for CVE-2019-9512 and CVE-2019-9514 (September 6, 2019): Amazon EKS has updated the Amazon EKS-optimized AMI to address CVE-2019-9512 and CVE-2019-9514.
• IAM roles for service accounts (September 3, 2019): With IAM roles for service accounts on Amazon EKS clusters, you can associate an IAM role with a Kubernetes service account. With this feature, you no longer need to provide extended permissions to the node IAM role so that pods on that node can call AWS APIs.
• Amazon EKS Region expansion (p. 315) (August 29, 2019): Amazon EKS is now available in the Middle East (Bahrain) (me-south-1) Region.
• Amazon EKS platform version update (August 28, 2019): New platform versions to address CVE-2019-9512 and CVE-2019-9514.
• Amazon EKS Region expansion (p. 315) (July 31, 2019): Amazon EKS is now available in the Asia Pacific (Hong Kong) (ap-east-1) Region.
• Added topic on ALB ingress controller (July 11, 2019): The AWS ALB Ingress Controller for Kubernetes is a controller that triggers the creation of an Application Load Balancer when ingress resources are created.
• Kubernetes version 1.13 (June 18, 2019): Added Kubernetes version 1.13 support for new clusters and version upgrades.
• New Amazon EKS-optimized AMI patched for AWS-2019-005 (June 17, 2019): Amazon EKS has updated the Amazon EKS-optimized AMI to address the vulnerabilities described in AWS-2019-005.
• Amazon EKS platform version update (May 21, 2019): New platform version for Kubernetes 1.11 and 1.10 clusters to support custom DNS names in the Kubelet certificate and improve etcd performance.
• Getting started with eksctl (May 10, 2019): This getting started guide helps you to install all of the required resources to get started with Amazon EKS using eksctl, a simple command line utility for creating and managing Kubernetes clusters on Amazon EKS.
• AWS CLI get-token command (p. 315) (May 10, 2019): The aws eks get-token command was added to the AWS CLI so that you no longer need to install the AWS IAM Authenticator for Kubernetes to create client security tokens for cluster API server communication. Upgrade your AWS CLI installation to the latest version to take advantage of this new functionality. For more information, see Installing the AWS command line interface in the AWS Command Line Interface User Guide.
• Amazon EKS platform version update (May 8, 2019): New platform version for Kubernetes 1.12 clusters to support custom DNS names in the Kubelet certificate and improve etcd performance. This fixes a bug that caused node Kubelet daemons to request a new certificate every few seconds.
• Amazon EKS control plane logging (April 4, 2019): Amazon EKS control plane logging makes it easy for you to secure and run your clusters by providing audit and diagnostic logs directly from the Amazon EKS control plane to CloudWatch Logs in your account.
• Added App Mesh getting started guide (March 27, 2019): Added documentation for getting started with App Mesh and Kubernetes.
• Amazon EKS API server endpoint private access (March 19, 2019): Added documentation for disabling public access for your Amazon EKS cluster's Kubernetes API server endpoint.
• Added topic for installing the Kubernetes Metrics Server (March 18, 2019): The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster.
• Added list of related open source projects (March 15, 2019): These open source projects extend the functionality of Kubernetes clusters running on AWS, including clusters managed by Amazon EKS.
• Added topic for installing Helm locally (March 11, 2019): The helm package manager for Kubernetes helps you install and manage applications on your Kubernetes cluster. This topic helps you install and run the helm and tiller binaries locally so that you can install and manage charts using the helm CLI on your local system.
• Amazon EKS platform version update (March 8, 2019): New platform version updating Amazon EKS Kubernetes 1.11 clusters to patch level 1.11.8 to address CVE-2019-1002100.
• Increased cluster limit (February 13, 2019): Amazon EKS has increased the number of clusters that you can create in a Region from 3 to 50.
• Amazon EKS Region expansion (p. 315) (February 13, 2019): Amazon EKS is now available in the Europe (London) (eu-west-2), Europe (Paris) (eu-west-3), and Asia Pacific (Mumbai) (ap-south-1) Regions.
• New Amazon EKS-optimized AMI patched for ALAS-2019-1156 (February 11, 2019): Amazon EKS has updated the Amazon EKS-optimized AMI to address the vulnerability described in ALAS-2019-1156.
• New Amazon EKS-optimized AMI patched for ALAS2-2019-1141 (January 9, 2019): Amazon EKS has updated the Amazon EKS-optimized AMI to address the CVEs referenced in ALAS2-2019-1141.
• Amazon EKS region expansion (p. 315) (December 19, 2018): Amazon EKS is now available in the following additional regions: Europe (Frankfurt) (eu-central-1), Asia Pacific (Tokyo) (ap-northeast-1), Asia Pacific (Singapore) (ap-southeast-1), and Asia Pacific (Sydney) (ap-southeast-2).
• Amazon EKS cluster updates (December 12, 2018): Added documentation for Amazon EKS cluster Kubernetes version updates and node replacement.
• Amazon EKS Region expansion (p. 315) (December 11, 2018): Amazon EKS is now available in the Europe (Stockholm) (eu-north-1) Region.
• Added version 1.0.0 support for the Application Load Balancer ingress controller (November 20, 2018): The Application Load Balancer ingress controller releases version 1.0.0 with formal support from AWS.
• Added support for CNI network configuration (October 16, 2018): The Amazon VPC CNI plugin for Kubernetes version 1.2.1 now supports custom network configuration for secondary pod network interfaces.
• Added support for MutatingAdmissionWebhook and ValidatingAdmissionWebhook (October 10, 2018): Amazon EKS platform version 1.10-eks.2 now supports MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controllers.
• Added partner AMI information (October 3, 2018): Canonical has partnered with Amazon EKS to create node AMIs that you can use in your clusters.
• Added instructions for AWS CLI update-kubeconfig command (September 21, 2018): Amazon EKS has added the update-kubeconfig command to the AWS CLI to simplify the process of creating a kubeconfig file for accessing your cluster.
• New Amazon EKS-optimized AMIs (September 13, 2018): Amazon EKS has updated the Amazon EKS-optimized AMIs (with and without GPU support) to provide various security fixes and AMI optimizations.
• Amazon EKS platform version update (August 31, 2018): New platform version with support for Kubernetes aggregation layer and the Horizontal Pod Autoscaler (HPA).
• New Amazon EKS-optimized AMIs and GPU support (August 22, 2018): Amazon EKS has updated the Amazon EKS-optimized AMI to use a new AWS CloudFormation node template and bootstrap script. In addition, a new Amazon EKS-optimized AMI with GPU support is available.
• New Amazon EKS-optimized AMI patched for ALAS2-2018-1058 (August 14, 2018): Amazon EKS has updated the Amazon EKS-optimized AMI to address the CVEs referenced in ALAS2-2018-1058.
• Amazon EKS-optimized AMI build scripts (July 10, 2018): Amazon EKS has open-sourced the build scripts that are used to build the Amazon EKS-optimized AMI. These build scripts are now available on GitHub.