Kubernetes (k8s) Basics: Components
Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. With Kubernetes, the goal is to foster an ecosystem of components and tools that relieve the burden of running applications in public and private clouds.
Components:
Nodes – minions
Pods
Labels
Selectors
Controllers
Services
Control Plane
API
Architecture:
Each minion runs etcd (a key-value store and communication service) for exchanging messages and reporting on cluster status.
Pods:
A Pod consists of one or more containers; these containers are co-located on the same host.
Labels:
Key/value pairs attached to objects (for example app: nginx); Selectors use them to group and target Pods (see the kubectl sketch just below).
Controllers:
Controllers ensure the desired number of Pod replicas is running, e.g. a ReplicationController.
Service:
A set of Pods can work together to define and implement a service, e.g. MySQL or Apache.
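As a quick illustration of how labels and selectors are used from the command line (a sketch; assumes pods labelled app=nginx, as in the examples later in these notes):

# List only the pods carrying the label app=nginx (the selector)
kubectl get pods -l app=nginx
# Show the labels attached to every pod
kubectl get pods --show-labels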
vi /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
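The install step itself is not captured in these notes; on this kind of CentOS 7 setup the packages would typically be pulled from the repo defined above (package list assumed from the standard virt7 walkthrough):

# package names as in the usual CentOS/virt7 guide (assumption)
yum install -y --enablerepo=virt7-docker-common-release kubernetes etcd flannel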
Configuring Master:
=================
Add the information about the master controller (in /etc/kubernetes/config):
KUBE_MASTER="--master=http://master:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://master:2379"
vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
Uncomment:
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
On master
systemctl enable etcd kube-apiserver kube-controller-manager kube-scheduler
systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler
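A quick sanity check once the master services are up (a sketch; the -s flag simply points kubectl at the API server configured above):

systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler
kubectl -s http://master:8080 get componentstatuses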
Configuring Minions:
=================
Add the same master information (in /etc/kubernetes/config):
KUBE_MASTER="--master=http://master:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://master:2379"
vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
Uncomment the kubelet port:
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=minion1"
KUBELET_API_SERVER="--api-servers=http://master:8080"
Comment out KUBELET_POD_INFRA_CONTAINER
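The notes jump straight to the node details below; presumably the minion services were enabled and started first and the node was then inspected from the master, roughly as follows (a sketch of the usual steps; the exact service list is an assumption):

# On each minion
systemctl enable kube-proxy kubelet docker
systemctl start kube-proxy kubelet docker

# On the master, the new node should register itself
kubectl get nodes
kubectl describe node minion2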
Name:                   minion2
Role:
Labels:                 beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/hostname=minion2
Taints:                 <none>
CreationTimestamp:      Sun, 15 Oct 2017 03:33:10 +0000
Phase:
Conditions:
  Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
  ----            ------  -----------------                ------------------               ------                      -------
  OutOfDisk       False   Sun, 15 Oct 2017 03:37:41 +0000  Sun, 15 Oct 2017 03:33:10 +0000  KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure  False   Sun, 15 Oct 2017 03:37:41 +0000  Sun, 15 Oct 2017 03:33:10 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Sun, 15 Oct 2017 03:37:41 +0000  Sun, 15 Oct 2017 03:33:10 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
  Ready           True    Sun, 15 Oct 2017 03:37:41 +0000  Sun, 15 Oct 2017 03:33:20 +0000  KubeletReady                kubelet is posting ready status
Addresses:              10.0.0.61,10.0.0.61,minion2
Capacity:
  alpha.kubernetes.io/nvidia-gpu:  0
  cpu:                             2
  memory:                          500248Ki
  pods:                            110
Allocatable:
  alpha.kubernetes.io/nvidia-gpu:  0
  cpu:                             2
  memory:                          500248Ki
  pods:                            110
System Info:
  Machine ID:                 980d6e7da5004216ade783778573fd3b
  System UUID:                D5CE9FAF-C19F-43AF-A2B8-8D2D07799368
  Boot ID:                    7f676c28-3494-472c-a900-e57bad33f042
  Kernel Version:             3.10.0-514.26.2.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://1.12.6
  Kubelet Version:            v1.5.2
  Kube-Proxy Version:         v1.5.2
ExternalID:             minion2
Non-terminated Pods:    (0 in total)
  Namespace  Name  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------  ----  ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  0 (0%)        0 (0%)      0 (0%)           0 (0%)
Events:
  FirstSeen  LastSeen  Count  From                  SubObjectPath  Type     Reason                   Message
  ---------  --------  -----  ----                  -------------  -------  ------                   -------
  4m         4m        1      {kube-proxy minion2}                 Normal   Starting                 Starting kube-proxy.
  4m         4m        1      {kubelet minion2}                    Normal   Starting                 Starting kubelet.
  4m         4m        1      {kubelet minion2}                    Warning  ImageGCFailed            unable to find data for container /
  4m         4m        2      {kubelet minion2}                    Normal   NodeHasSufficientDisk    Node minion2 status is now: NodeHasSufficientDisk
  4m         4m        2      {kubelet minion2}                    Normal   NodeHasSufficientMemory  Node minion2 status is now: NodeHasSufficientMemory
  4m         4m        2      {kubelet minion2}                    Normal   NodeHasNoDiskPressure    Node minion2 status is now: NodeHasNoDiskPressure
  4m         4m        1      {kubelet minion2}                    Normal   NodeReady                Node minion2 status is now: NodeReady
=================================
https://github.com/kubernetes/kubernetes/issues/11355#issuecomment-127378691
Description of pods
Deployments:
For production:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
      - name: nginx-deploy
        image: nginx:1.7.9
        ports:
        - containerPort: 80
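A minimal sketch of creating and inspecting this Deployment (the file name nginx_deploy.yaml is an assumption, not from the original notes):

# save the Deployment above as nginx_deploy.yaml (file name assumed)
kubectl create -f nginx_deploy.yaml
kubectl get deployments
kubectl get pods -l app=nginx-deploy
# Deployments also support declarative image updates, e.g. rolling to a newer nginx
kubectl set image deployment/nginx-deploy nginx-deploy=nginx:1.9.1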
Services:
Kubernetes Pods are mortal. They are born and when they die, they are not resurrected.
ReplicationControllers in particular create and destroy Pods dynamically (e.g. when scaling up or down
or when doing rolling updates). While each Pod gets its own IP address, even those IP addresses cannot
be relied upon to be stable over time.
This leads to a problem: if some set of Pods (let’s call them backends) provides functionality to other
Pods (let’s call them frontends) inside the Kubernetes cluster,
how do those frontends find out and keep track of which backends are in that set?
The solution is Services.
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access
them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined
by a Label Selector (see below for why you might want a Service without a selector).
As an example, consider an image-processing backend which is running with 3 replicas. Those replicas
are fungible - frontends do not care which backend they use. While the actual Pods that compose the
backend set may change, the frontend clients should not need to be aware of that or keep track of the
list of backends themselves. The Service abstraction enables this decoupling.
For Kubernetes-native applications, Kubernetes offers a simple Endpoints API that is updated whenever
the set of Pods in a Service changes. For non-native applications, Kubernetes offers a virtual-IP-based
bridge to Services which redirects to the backend Pods.
Documentation:
https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
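For reference, a sketch of what a NodePort Service (the type referenced in the link above) might look like for the nginx pods used later in these notes; the name and nodePort value are arbitrary examples, not from the original notes:

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport   # name is an arbitrary example
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080      # nodePort chosen arbitrarily for illustration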
Example:
$ vi nginx_pod.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Another example, a Service definition:
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: wardle
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: 443
  selector:
    apiserver: "true"
[root@minion1 ~]# docker ps
CONTAINER ID   IMAGE                                      COMMAND                  CREATED          STATUS          PORTS   NAMES
ae3fc2bb7b68   nginx                                      "nginx -g 'daemon off"   35 minutes ago   Up 35 minutes           k8s_nginx.a6022f15_nginx-gr8ck_default_0535328e-b73e-11e7-8a52-080027ce8866_70008e87
2d3378f028ca   gcr.io/google_containers/pause-amd64:3.0   "/pause"                 35 minutes ago   Up 35 minutes           k8s_POD.b2390301_nginx-gr8ck_default_0535328e-b73e-11e7-8a52-080027ce8866_0cb243ea
[root@minion1 ~]#
Why?
In Kubernetes, each pod has an IP address. Within a pod there is a so-called infrastructure container: the first container that the kubelet instantiates. It acquires the pod's IP and sets up the network namespace, and all the other containers in the pod then join its network and IPC namespaces. The infra container has network bridge mode enabled, and all the other containers in the pod share its namespace via container mode. The initial process that runs in the infra container does effectively nothing, since its sole purpose is to act as the home for the namespaces.
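This can be checked against the docker ps output above: the nginx container's network mode points at the pause (infra) container rather than at the docker bridge. A small sketch, using the container IDs captured earlier:

# The nginx container joins the pause container's network namespace
docker inspect -f '{{ .HostConfig.NetworkMode }}' ae3fc2bb7b68
# expected output is roughly: container:2d3378f028ca... (the full ID of the pause container)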
Nginx Server Deployment using Kubernetes
=======================================
1. Create the nginx pod definition
vi nginx_pod.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
2. Create pod
kubectl create -f nginx_pod.yaml
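To confirm the replication controller brought up its two replicas (a simple check, not part of the original notes):

kubectl get rc nginx
kubectl get pods -l app=nginx -o wide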
3. Deploy the nginx service using a yaml file in order to expose the nginx pods on port "82"
$ vi nginx_service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginxservice
  name: nginxservice
spec:
  ports:
  - port: 82
  # Label keys and values that must match in order to receive traffic for this service.
  selector:
    app: nginx
  type: LoadBalancer
services/nginxservice
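The notes end with the created object name above; presumably the service was created and then inspected roughly as follows (a sketch):

kubectl create -f nginx_service.yaml
kubectl get services nginxservice
kubectl describe service nginxservice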