This document provides an introduction and overview of Kubernetes. It begins by discussing the state of modern infrastructure and applications and how microservices architectures address their challenges. It then introduces containers and Docker as a way to package and run applications. Kubernetes is presented as an open source tool for orchestrating and managing containers at scale. The rest of the document outlines key Kubernetes concepts such as pods, replication controllers, services, labels and selectors. It concludes by noting that there will be a demo of these Kubernetes features.
2. Agenda
• State of Infrastructure & Applications today
• Microservices and two pizza teams
• Orchestration challenge
• Containers and Docker
• Kubernetes to the rescue
• Kubernetes Architecture 101
• Demo
3. State of Infra & Apps today
• Application-specific machine pools
• Usually one application per machine
• Inefficient and painful to manage at scale
• Applications are tightly coupled with the underlying infrastructure
• Patching and configuration updates can cause application outages
• Machines are the primary key of the datacenter
4. Microservices & 2 Pizza Teams
“The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery.” - Martin Fowler
“Small cross-functional teams are responsible for building, testing, deploying, monitoring, scaling and operating each of these services; teams communicate with each other via API contracts.”
“In our conversations with microservice practitioners, we see a range of sizes
of services. The largest sizes reported follow Amazon's notion of the Two
Pizza Team (i.e. the whole team can be fed by two pizzas)” - Martin Fowler
6. Linux Containers & Docker
• Containers are Linux processes, not lightweight virtual machines
• Application + dependencies = image
• Runtime environment (cgroups, namespaces, environment variables)
• Docker made containers easy to build, share and run (see the sketch below)
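A minimal sketch of that build/share/run workflow using the Docker SDK for Python (the `docker` package); the image tag `myapp:1.0` and the registry path are illustrative placeholders, not from the deck:

```python
import docker

# Connect to the local Docker daemon (assumes Docker is installed and running).
client = docker.from_env()

# Build: package the application and its dependencies, as described by a
# Dockerfile in the current directory, into an image (tag is a placeholder).
image, build_logs = client.images.build(path=".", tag="myapp:1.0")

# Run: start the image as a container; it is just an isolated Linux process,
# not a virtual machine.
container = client.containers.run("myapp:1.0", detach=True)
print(container.id, container.status)

# Share: push the image to a registry (repository name is a placeholder).
# client.images.push("registry.example.com/myapp", tag="1.0")
```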
9. Kubernetes aka k8s
• Greek for “helmsman”; person who steers a ship
• Runs and manages containers at scale
• Inspired by Google's experience running containers at scale
• Open source and supports almost all platforms
• IP owned by the Cloud Native Computing Foundation
• Provides a Unified Compute Substrate
• Applications become the primary key of our datacenter / cloud infrastructure
11. Pod
• A pod is a group of one or more application containers that are relatively tightly coupled
• Containers within a Pod are always co-located and co-scheduled, and run in a shared context, i.e. they share the same storage, IP address and port space, and can find each other via localhost
Useful Reading
The Distributed System ToolKit: Patterns for Composite Containers
http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html
12. Pod: Resource sharing and communication
Why not just run multiple programs in a single container?
To decouple software dependencies: the individual containers can be versioned, rebuilt and redeployed independently (see the sketch below).
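A minimal sketch of a two-container pod using the official Kubernetes Python client; the names (`web`, the nginx and busybox images) are illustrative placeholders, not from the deck:

```python
from kubernetes import client, config

config.load_kube_config()          # use the current kubeconfig context
core_v1 = client.CoreV1Api()

# One pod, two tightly coupled containers: they are co-scheduled onto the
# same node and share the pod's IP address, port space and volumes.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(name="app", image="nginx:1.25",
                               ports=[client.V1ContainerPort(container_port=80)]),
            # Sidecar container: it can reach "app" via localhost:80 and can be
            # versioned, rebuilt and redeployed independently of the app image.
            client.V1Container(name="sidecar", image="busybox",
                               command=["sh", "-c", "sleep 3600"]),
        ]
    ),
)
core_v1.create_namespaced_pod(namespace="default", body=pod)
```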
13. Labels & Selectors
• Labels are key/value pairs attached to objects such as pods; they specify attributes that are meaningful and relevant to users
• Via a label selector, the client/user can identify a set of objects; the label selector is the core grouping primitive in Kubernetes (see the sketch below)
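A minimal sketch of grouping objects with a label selector via the Python client, reusing the hypothetical `app=web` label from the pod example above:

```python
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

# The selector "app=web" identifies every pod carrying that label,
# regardless of how or by whom the pod was created.
web_pods = core_v1.list_namespaced_pod(namespace="default",
                                       label_selector="app=web")
for pod in web_pods.items:
    print(pod.metadata.name, pod.metadata.labels)
```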
14. Replication Controller
• A replication controller ensures that a specified number of pod “replicas” are running at any one time
• If there are too many pods, it will kill some; if there are too few, the replication controller will start more
• Pods maintained by a replication controller are automatically replaced if they fail, get deleted, or are terminated (see the sketch below)
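A minimal sketch of a replication controller keeping three replicas of the hypothetical `app=web` pod running, again using the Python client; the controller name and image are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

# Keep three replicas of the pod template running at all times; pods that
# fail, get deleted, or are terminated are replaced automatically.
rc = client.V1ReplicationController(
    metadata=client.V1ObjectMeta(name="web-rc"),
    spec=client.V1ReplicationControllerSpec(
        replicas=3,
        selector={"app": "web"},                # which pods the controller manages
        template=client.V1PodTemplateSpec(      # how to create new replicas
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="app", image="nginx:1.25"),
            ]),
        ),
    ),
)
core_v1.create_namespaced_replication_controller(namespace="default", body=rc)
```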
16. Services
• Pods are mortal; they are automatically replaced by the replication controller if they fail, get deleted, or are terminated
• A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them
• Virtual IP per service
• Think of them as internal load balancers with built-in service discovery (see the sketch below)
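A minimal sketch of a service fronting the hypothetical `app=web` pods; the service name and port numbers are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

# The service gets a stable virtual IP and load-balances traffic on port 80
# across whichever pods currently match the "app=web" label selector, even
# as individual pods come and go.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-svc"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)
core_v1.create_namespaced_service(namespace="default", body=svc)
```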