This document provides an overview of a workshop on running Kubernetes on AWS. It outlines the prerequisites, including installing Git, the AWS CLI, and kubectl, and cloning a GitHub repository. The workshop covers basic Kubernetes concepts such as pods, labels, replication controllers, deployments, and services, and demonstrates how to build a Kubernetes cluster on AWS using CloudFormation for infrastructure as code. Hands-on portions include deploying containers, creating services, and observing the cluster architecture and networking. Additional topics include cluster add-ons such as the Kubernetes Dashboard and DNS, deploying applications, and cleaning up resources.
10. www.container-solutions.com | info@container-solutions.com
Pods
● A pod is one or more containers
● Ensures co-location / shared fate
● Pods are scheduled, then do not move between nodes
● Containers share resources within the pod:
➔ Volumes
➔ Network / IP
➔ Port space
➔ CPU / Memory allocations
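The shared resources above can be seen in a minimal pod manifest. This is an illustrative sketch (the names, images, and paths are hypothetical, not part of the workshop materials): both containers share the pod's network namespace and a common volume.

```yaml
# Hypothetical pod: two containers co-located, sharing a volume and the pod IP
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-sync
    image: busybox
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}
```

Because both containers share the pod's port space, the sidecar could also reach the web container on `localhost:80`.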
Replication Controllers
● Define the number of replicas of a pod
● Will be scheduled across all applicable nodes
● Can change replica value to scale up/down
● Which pods are scaled depends on RC selector
● Labels and selectors are used for grouping
● Can do quite complex things with RCs and labels
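A Replication Controller sketch tying the pieces together — the `replicas` count, the label-based `selector`, and the pod template it stamps out (names and image are illustrative):

```yaml
# Hypothetical RC keeping three nginx pods running at all times
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3
  selector:
    app: nginx        # any pod with this label counts toward the replica total
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```

Changing the replica value scales the group, e.g. `kubectl scale rc nginx-rc --replicas=5`.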
Replica Set
A Replica Set is the next-generation Replication Controller. The only difference
between a Replica Set and a Replication Controller right now is the selector
support. Replica Sets support the new set-based selectors, which allow filtering
keys according to a set of values:
- in
- notin
- exists (only the key identifier)
For example:
environment in (production, qa)
tier notin (frontend, backend)
partition
!partition
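A sketch of how those set-based expressions appear in a Replica Set selector (labels and names are illustrative; the `apps/v1` API group is the current form — older clusters used `extensions/v1beta1`):

```yaml
# Hypothetical ReplicaSet using set-based matchExpressions
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 2
  selector:
    matchExpressions:
    - {key: environment, operator: In, values: [production, qa]}
    - {key: tier, operator: NotIn, values: [frontend, backend]}
  template:
    metadata:
      labels:
        environment: production
        tier: cache
    spec:
      containers:
      - name: web
        image: nginx
```

The same expressions work on the command line, e.g. `kubectl get pods -l 'environment in (production,qa)'`.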
Deployments
A Deployment is responsible for creating
and updating instances of your
application
● Create a Deployment to bring up Pods and a
replica set.
● Check the status of a Deployment to see if it
succeeds or not.
● Later, update that Deployment to recreate the
Pods (for example, to use a new image).
● Rollback to an earlier Deployment revision if
the current Deployment isn’t stable.
● Pause and resume a Deployment.
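The lifecycle above can be sketched with a minimal Deployment manifest (names and image tags are illustrative): editing `spec.template` — for example the image — is what triggers a new rollout.

```yaml
# Hypothetical Deployment; it creates and manages a replica set and its pods
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # change this tag to roll out a new version
```

The bullet points map onto kubectl subcommands: `kubectl rollout status deployment/web` to check progress, `kubectl set image deployment/web web=nginx:1.26` to update, `kubectl rollout undo deployment/web` to roll back, and `kubectl rollout pause` / `kubectl rollout resume` to pause and resume.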
Services
“defines a logical set of Pods and a
policy by which to access them”
● As Pods are ephemeral, we can't depend
on Pod IPs
● Services find pods that match certain
selection criteria
● Services can load balance between
multiple Pods
● Services can have a single IP that doesn’t
change
Services
A group of pods that act as one == Service
- group == selector
Defines access policy
- LoadBalancer, NodePort
Gets a stable virtual IP and port
- Called the service portal
- Also a DNS name
- On-prem, an additional load balancer is needed
VIP is handled by kube-proxy
- Watches the service for changes
- Updates rules when the backends change
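Putting the two Services slides together, a minimal manifest (selector and names are illustrative) — the `selector` defines the group, `type` the access policy, and the service gets a stable VIP plus a DNS name of the form `web.<namespace>.svc.cluster.local`:

```yaml
# Hypothetical Service: stable VIP in front of all pods labeled app=web
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort        # access policy; LoadBalancer would provision an external LB
  selector:
    app: web            # the group: any pod matching this label
  ports:
  - port: 80            # the service (VIP) port
    targetPort: 80      # the container port on the backing pods
```

Traffic to the VIP is spread across all matching pods, so individual pod IPs never need to be known.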
Architecture
Master Node (“Control Plane”)
API server
- Point of interaction with the cluster
- Exposes an HTTP endpoint
Controller Manager
- Responsible for most of the important stuff
- Interacts with the api server to retrieve cluster state
- Responsible for configuring networking
- Allocates node CIDRs
- Ensures correct number of pods are running
- Reacts to Nodes being added / deleted
- Manages Service Accounts and security tokens
Scheduler
- Schedules newly created pods to a Node
Architecture
Worker Node
kubelet
- Agent for running Pods
- Mounts volumes for Pods where required
- Reports the status of Pods back to the rest of the system
kube-proxy
- Enforces network rules on each Node (uses iptables)
- Responsible for forwarding packets to the correct destination
Architecture
Networking
Master Node (api-server)
- Takes an argument for the etcd servers
Master Node (controller-manager)
- Takes an argument for the API server
- Creates/defines virtual networks for containers and services
- Takes an argument for the cluster node CIDR
- Takes an argument for the service CIDR
kubelet
- Configures the Docker bridge
- Takes an address for the cluster DNS
kube-proxy
- Takes an argument for the cluster node CIDR
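Those arguments correspond to component flags along these lines. The CIDRs and addresses below are example values, and exact flag sets vary by Kubernetes version:

```
# Illustrative component flags (values are examples, not the workshop's actual config)
kube-apiserver          --etcd-servers=http://127.0.0.1:2379
kube-controller-manager --master=http://127.0.0.1:8080 \
                        --allocate-node-cidrs=true \
                        --cluster-cidr=10.244.0.0/16 \
                        --service-cluster-ip-range=10.0.0.0/16
kubelet                 --cluster-dns=10.0.0.10 --cluster-domain=cluster.local
kube-proxy              --cluster-cidr=10.244.0.0/16
```

The node CIDR given to the controller-manager is what it carves per-node pod subnets from; the service range must not overlap it.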
AWS
Today: EC2, VPC and CloudFormation
CloudFormation:
- A method of keeping Infrastructure as Code
- JSON-based templates that define AWS resources
We will use CloudFormation to:
- Launch EC2 instances into an existing VPC
- Create a subnet for each Kubernetes cluster
- Create a route table for each subnet
- Create Security Groups (firewall rules) for each cluster
- Create Auto Scaling Groups for Master and Worker nodes
Instance configuration:
- Userdata: instructions run by the AWS cloud-init system after boot
- Chef: userdata will instruct instances to bootstrap to a Chef server
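A fragment of what such a template looks like — this is a minimal sketch, not the workshop's actual template, and the parameter names and CIDR are hypothetical:

```json
{
  "Parameters": {
    "VpcId": { "Type": "AWS::EC2::VPC::Id" }
  },
  "Resources": {
    "ClusterSubnet": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": { "Ref": "VpcId" },
        "CidrBlock": "10.10.0.0/24"
      }
    },
    "ClusterSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Kubernetes cluster nodes",
        "VpcId": { "Ref": "VpcId" }
      }
    }
  }
}
```

Referencing an existing `VpcId` as a parameter is what lets the stack launch into an existing VPC rather than creating its own.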
AWS
Other ways to build
Getting Started guide: http://kubernetes.io/docs/getting-started-guides/aws/
- $ set=something ; wget something | bash
- Great for getting a cluster up and running quickly
- Inflexible for integration into existing VPCs
- Fussy if you put anything else in the VPC it creates
Kops: https://github.com/kubernetes/kops
- “kubectl for clusters”
- Will become the standard way to launch onto AWS
- Still in alpha
Roll your own: https://github.com/kelseyhightower/kubernetes-the-hard-way
- Takes some time
- Expect to reverse-engineer
- You will know exactly how the cluster is put together
AWS and Kubernetes
Kubernetes is able to configure AWS
Using the --cloud-provider=aws flag, the Kubernetes components can be instructed
to leverage AWS IaaS features.
Master instances (running the controller-manager) must have an appropriate IAM role
assigned.
Kubernetes can then:
- Create and destroy Elastic Load Balancers (ELBs)
- Add and delete routes from the cluster Route Table
- Add and delete firewall rules on the cluster Security Group
Relevant resources must be appropriately tagged:
- Tag key: KubernetesCluster
- Tag value: ClusterId
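An illustrative IAM policy covering the actions listed above — a sketch only, broader than a production policy should be; a real deployment would scope the ELB permissions and resources more tightly:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:CreateRoute",
        "ec2:DeleteRoute",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:RevokeSecurityGroupIngress",
        "elasticloadbalancing:*"
      ],
      "Resource": "*"
    }
  ]
}
```

Attached to the master's instance role, this lets the controller-manager manage routes, security group rules, and ELBs for Services of type LoadBalancer.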
What’s happening?
$ ./find-nodes [cluster-id]
x.x.x.x
x.x.x.x
$ ssh ubuntu@x.x.x.x
$ ip route list
$ route -n
$ sudo iptables -L -t nat
# view route table in AWS, note that the pod CIDRs are routed directly to an EC2 NIC