Kubernetes Practical Commands
# An orphan pod is a pod that is not created by any controller, such as a ReplicationController, ReplicaSet, or Deployment controller. We will see these controllers in the next sessions.
# Running the first container. By default every pod goes into the "default" namespace. A namespace is essentially just a project name.
kubectl run test --image=docker.io/nginx
# Get pod details from a specific namespace. In the following example we list the pods running in the kube-system namespace.
kubectl get pod -n kube-system
#delete a pod
kubectl delete pod <podname>
## Lab 2: creating pods using YAML (in the example below we run two containers inside one pod)
# Save the file as pod-2con.yaml
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - name: webserver
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: webwatcher
    image: afakharany/watcher:latest
########## Replication Controller ##########
A ReplicationController provides:
- pod state maintenance (desired number of replicas)
- scalability
- high availability
# Pod template section (used inside the ReplicationController manifest shown later):
template:
  metadata:
    name: nginx
    labels:
      app: nginx
  spec:
    containers:
    - name: nginx
      image: nginx
# What if you set the replica count to 0? In this case the RC will remain, but the pods will be terminated.
kubectl scale --replicas=0 rc amartest
kubectl get rc
kubectl get pod
# The following command autoscales pods managed by a replication controller based on CPU utilization. It becomes much more useful once you know how to allocate CPU and RAM to pods and namespaces.
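A minimal sketch, assuming the RC is named amartest and that CPU requests are set on its pods (required for CPU-based autoscaling):
kubectl autoscale rc amartest --min=2 --max=5 --cpu-percent=80   # creates a HorizontalPodAutoscaler for the RC
kubectl get hpa                                                  # check the autoscaler status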
Labels on pods: labels are used to categorize pods, and a single pod can carry multiple labels.
=========================================================================================
a. YAML for the frontend web pod
apiVersion: v1
kind: Pod
metadata:
  labels:
    tier: frontend
  name: web
spec:
  containers:
  - name: web        # container name is a required field; "web" assumed here
    image: nginx
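The commands below also reference a tier=backend pod whose manifest is not included above; a minimal sketch, assuming the backend pod is simply named db and also runs nginx:
b. YAML for the backend pod (assumed)
apiVersion: v1
kind: Pod
metadata:
  labels:
    tier: backend
  name: db
spec:
  containers:
  - name: db
    image: nginx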
# After creating both YAML files, run: kubectl create -f <yaml file>
kubectl get pod
kubectl get pod --show-labels
kubectl get pod -l tier=frontend
kubectl get pod -l tier=backend
kubectl delete pod -l tier=frontend
kubectl delete pod -l tier=backend
kubectl delete pod --all
## ReplicationController YAML
apiVersion: v1
kind: ReplicationController
metadata:
  name: amartest
spec:
  replicas: 3
  selector:
    app: nginx        # the replication controller manages all pods that carry this label
  template:
    metadata:
      name: nginx
      labels:
        app: nginx    # this is the label given to the pods created by the replication controller
    spec:
      containers:
      - name: nginx
        image: nginx
#################### ReplicaSet ####################
ReplicaSet has replaced the ReplicationController in industry use. In the YAML file we change the kind to ReplicaSet. The ReplicaSet below manages pods that carry the label tier=frontend.
## ReplicaSet YAML
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
# YAML 2: this manifest uses a set-based selector; the ReplicaSet manages pods that carry the labels env=prod and tier=frontend.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  replicas: 3
  selector:
    matchExpressions:
    - key: tier
      operator: In
      values:
      - frontend
    - key: env
      operator: In
      values:
      - prod
  template:
    metadata:
      labels:
        tier: frontend
        env: prod
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
Note: if a pod with the same labels as those defined in the replication controller already exists, the replication controller subtracts that pod from the defined replica count. For example, if you define 3 replicas in the replication controller YAML and you have already launched one pod manually with the same label, the replication controller will deploy only 2 additional pods instead of 3.
## Note: if any pod goes down or is deleted, the replication controller launches a new one by itself, because it maintains the desired state of the pods.
The terms rollout and rollback refer to applying changes to a running application. If we want to apply a new version with some changes to a running application, that is called a rollout. If the new changes do not work properly and we go back to the old version, that is called a rollback. This process can be done with zero downtime.
Note: using an RC or RS you can maintain a single version of the application, but you cannot rollout and rollback.
Deployment controller: this controller has all the features of RS and RC and, in addition, provides rollout and rollback. In production we use the Deployment controller instead of RC and RS. It also provides scalability and replica maintenance.
Deployment strategy:
1a) RollingUpdate: this is the default. It creates up to 25% new pods on top of the running pods and then removes up to 25% of the old running pods, repeating until the rollout is complete.
1b) Recreate: all old pods are removed first, then the new pods are created.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
##deployment command
kubectl create -f deployment.yaml
kubectl get deploy
kubectl scale --replicas=4 deploy nginx-deployment   # nginx-deployment is the name of the deployment
kubectl scale --replicas=2 deploy nginx-deployment
kubectl get pod
kubectl delete pod < podname>
kubectl get pod
kubectl delete deploy nginx-deployment
kubectl describe deploy nginx-deployment |grep -i max
kubectl get deploy nginx-deployment -o yaml|grep -i grace
# The commands above show the maxUnavailable and maxSurge percentages for the deployment (default 25%) and the termination grace period (default 30 seconds).
Note: kube-proxy is a Layer 4 (TCP/UDP) load balancer; we will study it during the networking section.
#rolling update
kubectl describe deploy nginx-deployment | grep -i image   # check the running version of the app
kubectl rollout history deploy nginx-deployment            # check whether any rollout or change has been applied to the running pods in the deployment
kubectl set image deploy nginx-deployment nginx=nginx:1.20            # push a new application version to the running pods via the deployment controller
kubectl set image deploy nginx-deployment nginx=nginx:1.20 --record   # --record keeps the command in the deployment's rollout history
kubectl set image deploy nginx-deployment nginx=nginx:1.21 --record
kubectl rollout history deploy nginx-deployment
#rollback
kubectl rollout undo deploy nginx-deployment --to-revision=1   # the revision number comes from the rollout history
kubectl describe deploy nginx-deployment | grep -i image
## To change the default maxSurge and maxUnavailable values, or to switch the deployment strategy between RollingUpdate and Recreate, edit the existing deployment:
kubectl edit deploy nginx-deployment
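For reference, a minimal sketch of the strategy stanza you would edit inside the Deployment spec (values here are illustrative):
spec:
  strategy:
    type: RollingUpdate        # or Recreate
    rollingUpdate:
      maxSurge: 50%            # extra pods allowed above the desired replica count during a rollout
      maxUnavailable: 10%      # pods allowed to be unavailable during a rollout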
########## Resource and Compute Quota in Kubernetes ##########
The methods we have used so far to launch pods do not limit how much CPU and RAM the pods can consume. Now we will see how to allocate CPU and memory to a pod and to a namespace.
===> 1 CPU equals 1000 millicores, so the CPU unit in Kubernetes is m (millicores).
==> The units for RAM are Ki (kibibytes), Mi (mebibytes), and Gi (gibibytes).
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: nginx
    name: test
    resources:
      requests:          # this is the guaranteed amount reserved for the pod at creation time
        memory: 200Mi
        cpu: 200m
      limits:            # this is the ceiling; the pod can never use more than this
        memory: 400Mi
        cpu: 300m
# The limit is not guaranteed memory: usage between the request and the limit is only available if the node has memory to spare.
vim onlylimit.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - image: nginx
    name: test
    resources:
      limits:
        memory: 400Mi
        cpu: 300m
### Note: in the same way you can define resource requests and limits in the pod template of a ReplicaSet or a Deployment, as you did for the replication controller.
vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - image: nginx
    name: test
    resources:
      limits:
        memory: 200Mi
        cpu: 200m
# Note: if we set a resource quota on a namespace (a sketch is shown below), we can no longer launch any orphan pod inside that namespace without defining its CPU and memory limits.
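A minimal sketch of a namespace ResourceQuota, with illustrative names and values (quota name and namespace assumed):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota      # name assumed
  namespace: dev           # namespace assumed
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
    pods: "10"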
LimitRange: this is another compute-resource mechanism. It lets us set minimum and maximum CPU and RAM limits that apply when pods are launched. It is a kind just like Deployment, ReplicationController, and ReplicaSet.
a. Default values can be injected into pods.
b. Users are restricted from requesting resources above or below the bounds defined in the LimitRange.
vim limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-demo-lr
spec:
  limits:
  - max:
      cpu: "800m"
      memory: "400Mi"
    min:
      cpu: "200m"
      memory: "300Mi"
    type: Container
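A LimitRange is namespaced, so apply it to the namespace you want to constrain; a short usage sketch (namespace name assumed):
kubectl create -f limitrange.yaml -n dev
kubectl describe limitrange cpu-min-max-demo-lr -n dev   # view the enforced min/max values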
Previously we created many pods in the Kubernetes cluster using different methods (orphan pod, Deployment, ReplicaSet, ReplicationController), but in all those cases we had no control over which node in the cluster a pod would be created on. With scheduling we can create pods on the specific node we want.
Benefits:
1. protect from resource crunch
2. protect from eviction of running pods
3. protect from downtime due to eviction in production.
a. NodeName-based pod scheduling (in this method you can target only one node, so high availability is at risk if that node goes down)
1. Example with a normal pod: create a YAML using the nodeName option under the spec section.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: worker1.example.com
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
#kubectl create -f pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeName: worker2.example.com
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
b. NodeSelector-based pod scheduling. Benefits:
a. High availability
b. Protects us from resource crunch.
c. Pods are deployed on nodes whose labels match the labels given in nodeSelector in the deployment (the key/value pairs must be the same).
Example: in the following example we give the compute=db label to worker1, worker2 and worker3 (we will change worker3's label in the next step), and after that we use the nodeSelector field in the deployment. The labelling commands are sketched below.
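A sketch of the labelling commands, assuming the node names used elsewhere in these notes:
kubectl label node worker1.example.com compute=db
kubectl label node worker2.example.com compute=db
kubectl label node worker3.example.com compute=db
kubectl get nodes --show-labels | grep compute    # verify the labels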
Note: you can overwrite the existing label of a particular node. Suppose I want to change the label of worker3; use the following command with the --overwrite option.
#kubectl label node worker3.example.com compute=app --overwrite
vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:        # this is where nodeSelector is used
        compute: db        # this is the node label we applied in the commands above
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
### Scheduling on a namespace
label = disk=ssd
kubectl label node worker1.example.com disk=ssd      # label node1 with disk=ssd
vim /etc/kubernetes/manifests/kube-apiserver.yaml    # the PodNodeSelector admission plugin is not enabled by default; we need to enable it in the API server first
--enable-admission-plugins=NodeRestriction,PodNodeSelector    # save and quit
kubectl edit namespace dev    # after enabling PodNodeSelector in the API server as above, edit the namespace and put the following parameter under the metadata section
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: disk=ssd    # save and quit
## NOTE: the nodeSelector method launches your pods on a specific worker node, but that does not mean other deployments will not land on the same worker node. If you want to prohibit other deployments from landing on that node, you need to use the taint and toleration method of scheduling.
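The pod YAML below only shows the toleration; the node itself must carry a matching taint. A sketch of the taint commands, assuming the node name and the app=monitoring key/value used in the toleration:
kubectl taint nodes worker1.example.com app=monitoring:NoSchedule
kubectl describe node worker1.example.com | grep -i taint          # verify the taint
# to remove the taint later, append a minus sign:
kubectl taint nodes worker1.example.com app=monitoring:NoSchedule-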
vim Pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
####################
  tolerations:
  - key: app
    operator: Equal       # if the operator is Equal then a value must be defined; if you use Exists then a value is not necessary
    value: monitoring
    effect: NoSchedule
# or
  tolerations:
  - key: app
    operator: Exists      # if you use the Exists operator then a value is not necessary
    effect: NoSchedule
########################
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
### Taint effect types #####
NoSchedule: this does not evict pods (without matching tolerations) that were already running on the worker node before the taint rule was applied; it only blocks new pods from being scheduled.
NoExecute: this does not allow new pods and also evicts pods already running on the worker node. When the taint rule is applied, every running pod without a matching toleration is removed.
PreferNoSchedule: this is a soft taint. If a pod without a toleration cannot find the compute resources it needs anywhere else, a worker with this soft taint that has free compute resources is still allowed to run the pod.
NETWORKING
==========
In this section we will see how networking works in a Kubernetes cluster. In our case we will use the Calico network plugin, as it is the one most used in production environments.
Pod networking works on the CNI (Container Network Interface) concept. It is a kind of software-defined network that provides OSI Layer 3 capabilities such as routing and policies. There are multiple CNI flavours, such as Flannel and Calico; Calico is recommended in production environments. Calico is configured with a subnet pool and uses the BGP routing protocol.
Requirements the pod network must satisfy:
1. Every pod should have a unique IP address (assigned by default).
2. Two pods on the same worker should be able to communicate with each other (works by default).
3. Two pods on different workers (remote containers) should be able to communicate with each other (allowed by the default network).
- name: CALICO_IPV4POOL_CIDR    # you can search for this string while editing the calico.yaml file
  value: "192.168.0.0/24"
- blockSize: 26
Note: both settings above are applied on the master node, and the API server deploys the Calico network by running one calico-node pod on every worker node as well as the master node. You can check this with the following command.
#kubectl get pod -n kube-system|grep calico-node
#############################################################
apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: my-custom-pool    # change this value to change the name of the pool
  spec:
    blockSize: 22           # change the block size; a lower blockSize value means more IPs per block
    cidr: 192.168.0.0/16    # define the subnet and CIDR value here
    ipipMode: Always
    natOutgoing: true
    nodeSelector: all()
    vxlanMode: Never
kind: IPPoolList
###########################################################
NOTE: we can have multiple pools in a Kubernetes cluster, but only one pool can be active at a single point in time.
Note: by default the Calico network works in IPIP (tunnel) mode; you can see the following parameters in the calico.yaml file.
- name: CALICO_IPV4POOL_IPIP
  value: "Always"    # if you do not want IPIP tunnelling (pure BGP routing instead), change this value to Never
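To apply or inspect a custom IPPool like the one above, calicoctl is typically used; a sketch, assuming calicoctl is installed and the manifest is saved as pool.yaml:
calicoctl apply -f pool.yaml        # create/update the custom IP pool
calicoctl get ippool -o wide        # list pools and see each pool's CIDR, block size and mode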
Features of the service IP
##########################
1. It is a static IP for a deployment.
2. Service IPs use a different network pool (the service CIDR).
3. A service IP is a reserved IP address for a deployment, managed by kube-proxy.
4. kube-proxy acts as a software-defined frontend balancer/proxy for your pods.
5. Because the service IP works through kube-proxy, it also provides load balancing.
# Commands to expose your deployment on port 80. In the following we expose the deployment named test on port 80 and get a service IP with the name db.
1. kubectl create deploy test --image=nginx
2. kubectl expose deploy test --port=80 --name=db    # if the target port is not defined, it defaults to the same value as --port
   kubectl expose deploy test --name=db --port=8080 --target-port=80    # --port is the frontend port, --target-port is the backend (container) port
3. kubectl get service    # check the service IP named db at cluster level
4. kubectl describe service db | grep -i end    # shows the Endpoints, i.e. the IP addresses of the pods launched by the deployment test; if a pod is stopped and restarted, the endpoints change automatically
5. kubectl delete deploy test
6. kubectl delete service db
Note: if you delete the deployment named test and then deploy an orphan pod with the same label, that orphan pod will reclaim the service IP. If only the deployment and its pods are deleted, the service IP remains on the cluster and another resource can reclaim it. This all happens because the service uses a label selector to match pods.
###Service IP Types###
ref link: https://medium.com/devops-mojo/kubernetes-service-types-overview-introduction-to-k8s-service-types-what-are-types-of-kubernetes-services-ea6db72c3f8c
1.ClusterIP
===========
.ClusterIP is the default and most common service type.
.Kubernetes will assign a cluster-internal IP address to ClusterIP service. This
makes the service only reachable within the cluster.
.You cannot make requests to service (pods) from outside the cluster.
.You can optionally set cluster IP in the service definition file.
###Use Cases####
Inter service communication within the cluster. For example, communication between
the front-end and back-end components of your app.
###Example###
apiVersion: v1
kind: Service
metadata:
  name: my-backend-service
spec:
  type: ClusterIP          # Optional field (default)
  clusterIP: 10.10.0.1     # within the service cluster IP range
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
2.NodePort
==========
a. NodePort service is an extension of ClusterIP service. A ClusterIP Service, to which the NodePort Service routes, is automatically created.
b. It exposes the service outside of the cluster by adding a cluster-wide port on top of ClusterIP.
c. NodePort exposes the service on each Node’s IP at a static port (the NodePort). Each node proxies that port into your Service, so external traffic has access to a fixed port on each Node. Any request to your cluster on that port gets forwarded to the service.
d. You can contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>.
e. The node port must be in the range 30000–32767. Manually allocating a port to the service is optional; if it is undefined, Kubernetes will automatically assign one.
f. If you are going to choose the node port explicitly, ensure that the port is not already used by another service.
###Use Cases##
a. When you want to enable external connectivity to your service.
b. Using a NodePort gives you the freedom to set up your own load-balancing solution, to configure environments that are not fully supported by Kubernetes, or even to expose one or more nodes' IPs directly.
c. Prefer to place a load balancer above your nodes to avoid the impact of a node failure.
###example###
apiVersion: v1
kind: Service
metadata:
  name: my-frontend-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30000    # 30000-32767, optional field
example 2:
kubectl create deploy test --image=nginx
kubectl expose deploy test --port=80 --type=NodePort
kubectl get service    # you will see that a node port has been allocated which forwards traffic to port 80
netstat -tlnp | grep <nodeport>
You can now reach the pod's service from outside the cluster using <worker IP address>:<node port>.
Note: you can also expose a pod's service to the outside world using an ingress controller; see the sketch below.
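A minimal sketch of an Ingress resource, assuming an NGINX ingress controller is already installed in the cluster, the ClusterIP service is named test on port 80, and the hostname test.example.com is used for illustration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress                 # name assumed
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: test.example.com           # hostname assumed
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test               # the ClusterIP service created by kubectl expose
            port:
              number: 80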
3.LoadBalancer
===============
a. LoadBalancer service is an extension of NodePort service. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
b. It integrates NodePort with cloud-based load balancers.
c. It exposes the Service externally using a cloud provider’s load balancer.
d. Each cloud provider (AWS, Azure, GCP, etc.) has its own native load balancer implementation. The cloud provider will create a load balancer, which then automatically routes requests to your Kubernetes Service.
e. Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.
f. The actual creation of the load balancer happens asynchronously.
g. Every time you want to expose a service to the outside world, you have to create a new LoadBalancer and get an IP address.
###Use Cases###
When you are using a cloud provider to host your Kubernetes cluster. This type of service is typically heavily dependent on the cloud provider.
###Example#####
apiVersion: v1
kind: Service
metadata:
  name: my-frontend-service
spec:
  type: LoadBalancer
  clusterIP: 10.0.171.123
  loadBalancerIP: 123.123.123.123
  selector:
    app: web
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
4.ExternalName
==============
a.Services of type ExternalName map a Service to a DNS name, not to a typical
selector such as my-service.
b.You specify these Services with the `spec.externalName` parameter.
c.It maps the Service to the contents of the externalName field (e.g.
foo.bar.example.com), by returning a CNAME record with its value.
d.No proxying of any kind is established.
###Use Cases###
a.This is commonly used to create a service within Kubernetes to represent an
external datastore like a database that runs externally to Kubernetes.
b. You can use that ExternalName service (as a local service) when Pods from one namespace need to talk to a service in another namespace.
###Example####
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: my.database.example.com
example lab
In the same namespace there are three pods (web/db/app).
db should be reachable only from app.
=================================
kubectl delete deploy --all --force
kubectl delete pod --all
kubectl run web --image=nginx
kubectl run db --image=nginx
kubectl run app --image=nginx
vim policy-db-app.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-app
  namespace: default    # always use the namespace where your pods are running
spec:
  podSelector:
    matchLabels:
      run: db           # this is the label of the db pod; the policy applies to it
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: app
    ports:
    - protocol: TCP
      port: 80
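A quick way to verify the policy, assuming a curl binary is available in the pod images (install one or swap the image if not):
kubectl get pod db -o wide                                       # note the db pod IP
kubectl exec app -- curl -s --max-time 5 http://<db-pod-ip>      # should succeed
kubectl exec web -- curl -s --max-time 5 http://<db-pod-ip>      # should time out, blocked by the policy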
Example 2:
Different pods in different namespaces.
db should be reachable only from the app pods.
##############
kubectl run app --image=nginx -n webprod -l env=webprod    # you can also give a label to the namespace itself by editing it with the following command
kubectl edit ns webprod    # and set the same label, env: webprod
vim policy-db-app1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-app1
  namespace: default    # always use the namespace where your db pod is running
spec:
  podSelector:
    matchLabels:
      run: db           # this is the label of the db pod; the policy applies to it
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:           # condition one: the pod label
        matchLabels:
          run: app
      namespaceSelector:     # and the namespace label (both must match within this from entry)
        matchLabels:
          env: webprod
1. PVC (PersistentVolumeClaim): we claim a volume using a PVC through the Kubernetes API. It is the request an administrator (or user) makes for external storage.
2. PV (PersistentVolume): it exists at cluster level and represents the storage made available to the Kubernetes cluster.
Volume provisioning
-------------------
a. static
b. dynamic
6. Make nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany                        # this mode should be the same in the PVC
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow                 # this class name must also appear in the PVC file
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /data/exports
    server: 192.168.30.35
#kubectl create -f pv.yaml
#kubectl get pv
#kubectl get pvc
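A minimal sketch of the claim that binds to this static PV, assuming a claim name of nfs-claim and the slow storage class used above:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim          # name assumed
spec:
  accessModes:
  - ReadWriteMany          # must match the PV's access mode
  storageClassName: slow   # must match the PV's storageClassName
  resources:
    requests:
      storage: 5Gi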
vim class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfsclass
provisioner: example.com/nfs
parameters:
  archiveOnDelete: "false"
vim deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: example.com/nfs
        - name: NFS_SERVER
          value: 192.168.30.35
        - name: NFS_PATH
          value: /data/exports
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.30.35
          path: /data/exports
# On Kubernetes 1.20+ this older provisioner also needs the following flag added to the kube-apiserver manifest (/etc/kubernetes/manifests/kube-apiserver.yaml):
--feature-gates=RemoveSelfLink=false
vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim1    # this name should be unique
spec:
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  storageClassName: nfsclass    # this is the class we created while enabling dynamic provisioning
  resources:
    requests:
      storage: 5Gi
vim poddeploymentusingdynamicprovisioning.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nfsvolume
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: nfsvolume
        persistentVolumeClaim:
          claimName: myclaim1    # this is the name of the PVC we created earlier
HELM INSTALLATION
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
helm
What is Helm?
Helm is the package manager for Kubernetes clusters. It provides the following features:
1. deployment creation
2. upgrade
3. rollback
4. deletion
Helm chart: this is a bundle that holds the installation files (manifests). Charts come in two types: custom and community.
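A short sketch of working with a community chart, assuming the Bitnami repository as the example source and an illustrative release name:
helm repo add bitnami https://charts.bitnami.com/bitnami    # register a community chart repository
helm repo update
helm search repo nginx                                      # find charts in the configured repositories
helm install mynginx bitnami/nginx -n webprod1 --create-namespace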
Custom chart
=============
#kubectl create ns webprod1
#helm create nginx    # this creates the nginx helm chart skeleton
#cd nginx
#ls
Note: the templates folder and values.yaml are the most important parts. templates holds all the manifests, such as deployment, quota, replicaset/replication controller, service account, RBAC and so on.
# Modify the values.yaml file and pass your values as per your requirement, then run the following command:
#helm install krnetwork nginx -n webprod1    # nginx is the name of the helm chart we created, webprod1 is the namespace
#helm list -n webprod1    # list the releases in the webprod1 namespace
# Run the following command after modifying the values in values.yaml:
helm upgrade krnetwork nginx -n webprod1
helm delete krnetwork -n webprod1    # delete the whole deployment in one go
kubectl get all -n webprod1
#cat ca.crt | base64 -w0     # these commands convert the output to base64 on a single line
#cat jeff.key | base64 -w0
#cat jeff.crt | base64 -w0
# Put the output of the commands above into the jeff.conf file, then scp this file to the bastion node and place it as .kube/config inside the home directory.
#kubectl get node    # this command will throw a permission error, because we have not delegated any role to this user yet
2. Roles: a set of access rules in the API.
a. default roles
b. custom roles
Note: a cluster-scoped role (ClusterRole) can be bound at cluster level as well as at namespace level, but a namespace-scoped role (Role) can only be bound at namespace level.
Binding scopes:
a. cluster level
b. namespace level
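For a quick imperative alternative to the YAML below, the built-in admin ClusterRole can be bound to the user jeff in a single namespace; a sketch:
kubectl create rolebinding jeff-admin --clusterrole=admin --user=jeff -n webprod
kubectl auth can-i list pods -n webprod --as=jeff    # verify the permission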
vim role.yaml
## In this example we enable jeff for multiple namespaces using RoleBindings in a single YAML file, instead of running a separate command for each namespace ##
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: test-admin
  namespace: webprod
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: jeff
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: test-admin
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: jeff
## CUSTOM ROLE ###
### During custom role creation we need to remember the following things ####
API resources:
1. pod
2. replicaset
3. replication controller
4. deployment
5. PV/PVC
Verbs (actions):
1. get
2. create
3. delete
4. patch
5. scale
6. describe
vim cluster-admin.yaml
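A minimal sketch of what a custom ClusterRole could contain, built from the resources and verbs listed above (the name and exact rule set are assumptions):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-cluster-admin    # name assumed
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "replicasets", "replicationcontrollers", "deployments", "persistentvolumes", "persistentvolumeclaims"]
  verbs: ["get", "list", "create", "delete", "patch"]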
4. Service account: this account is used by cluster services that need to read data from the API. For example, Prometheus is a service that needs information from the cluster, so we create a service account for it and grant it access to read cluster-level information. It is then referenced in the pod spec:
spec:
  serviceAccountName: xyz
  containers:
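A sketch of creating such an account and granting it read access, assuming a monitoring namespace and the built-in view ClusterRole:
kubectl create namespace monitoring
kubectl create serviceaccount prometheus -n monitoring
kubectl create clusterrolebinding prometheus-view --clusterrole=view --serviceaccount=monitoring:prometheus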
StatefulSet: a StatefulSet keeps a unique, stable identity for each pod it manages, and it reuses the same identity whenever it needs to reschedule those pods.
Headless service: a headless service is a service without a cluster IP; instead of a single virtual IP, stable DNS names are used for the individual pods, so traffic is not spun through a load-balanced service IP.
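A minimal sketch of a headless service, assuming it fronts StatefulSet pods labelled app: web:
apiVersion: v1
kind: Service
metadata:
  name: web-headless       # name assumed
spec:
  clusterIP: None          # this is what makes the service headless
  selector:
    app: web
  ports:
  - port: 80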
================================================================
There are two ways to map a secret into a deployment:
a. as an environment variable: literal-type secrets are usually consumed this way.
b. as a volume: TLS certificates and keys are usually mapped using a volume (see the sketch after the example below).
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tst
  name: tst
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tst
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tst
    spec:
      containers:
      - image: mysql
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: super-secret
              key: mypass
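For the volume approach (option b above), a minimal sketch of a pod that mounts a TLS secret as files, assuming a secret named example-tls (created in the TLS section below) and an illustrative mount path:
apiVersion: v1
kind: Pod
metadata:
  name: tls-demo                    # name assumed
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: tls-certs
      mountPath: /etc/tls           # mount path assumed; tls.crt and tls.key appear here as files
      readOnly: true
  volumes:
  - name: tls-certs
    secret:
      secretName: example-tls       # TLS secret name assumed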
TLS secret
==========
mkdir certs
openssl genrsa -out example.key 2048
openssl req -new -key example.key -out example.csr -subj "/C=US/ST=CA/L=Los Angeles/O=Example/OU=IT/CN=test.example.com"
openssl x509 -req -days 365 -in example.csr -signkey example.key -out example.crt
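A sketch of turning these files into a Kubernetes TLS secret, assuming the secret name example-tls:
kubectl create secret tls example-tls --cert=example.crt --key=example.key
kubectl get secret example-tls -o yaml    # the crt/key appear base64-encoded under data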
ConfigMap
=========
A ConfigMap is the same idea as a Secret, but it holds its values in plain text, while a Secret holds values in encoded (base64) form.
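A minimal sketch of the envFrom example that the note below refers to, assuming the intent was to inject every key/value from the super-configmap ConfigMap (the closing restartPolicy line follows):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
  - name: test-container
    image: nginx
    envFrom:
    - configMapRef:
        name: super-configmap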
  restartPolicy: Never
Note: in the example above we pass every key and value stored in the ConfigMap named super-configmap. What if we want to pass only a single value (the db-serverip key) into the DB_HOST environment variable during deployment? See below.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
  - name: test-container
    image: nginx
    env:
    # Define the environment variable
    - name: DB_HOST
      valueFrom:
        configMapKeyRef:
          name: super-configmap
          key: db-serverip
  restartPolicy: Never
# Note down the base64-encoded values of the user and password from the output of the command above, then run the following commands to decode them:
echo -n <value of username> | base64 -d
echo -n <value of password> | base64 -d
Note: after successfully logging in, integrate Prometheus with Grafana using the data-source URL http://<prometheus-service-ip>
EFK (Elasticsearch, Fluentd, Kibana) INSTALLATION
=================================================
In Kubernetes this comes as a stateful deployment. Elasticsearch always runs in an active-passive mode in the cluster. You can also install this stack with a Helm chart, but here we will use the manual method.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.seed_hosts
          value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0,es-cluster-1,es-cluster-2"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: do-block-storage    # this is the storage class we enabled for dynamic provisioning; it must exist before the deployment
      resources:
        requests:
          storage: 5Gi
Make kibana.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.2.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch.kube-logging.svc.cluster.local"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENTD_SYSTEMD_CONF
          value: disable
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
Note: to access Kibana from outside the cluster, change the service type of the Kibana service to NodePort, as you did for Grafana and Prometheus; a patch sketch follows.
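One way to do this, assuming the service name and namespace used above:
kubectl patch svc kibana -n kube-logging -p '{"spec":{"type":"NodePort"}}'
kubectl get svc kibana -n kube-logging    # note the allocated node port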
######################### monitoring END #########################
example:
sudo mkdir /mnt/data
sudo sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"
vim hostpath-pv.yaml
====================
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
#kubectl create -f hostpath-pv.yaml
vim hostpath-pvc.yaml
======================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual    # this class name must be the same one we gave during PV creation
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Note: you can use both the local and hostPath volume types to mount a local folder into a pod. The difference is that with hostPath we do not need to create the folder in advance (it is created automatically), while with local we need to create the directory manually. With the local type we must also define node affinity (which node holds the local folder we will use). Dynamic provisioning is not supported for the local volume method.
Go to worker node 2 and create the directory /opt/application, then use the following YAML file. The matching local PersistentVolume is sketched after the claim below.
vim local-pvc.yaml
==================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pv-claim
spec:
  storageClassName: local-storage    # this class name must be the same one we gave during PV creation
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
#kubectl create -f local-pvc.yaml
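A minimal sketch of the local PersistentVolume itself, assuming worker2's node name and the /opt/application directory created above:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv                      # name assumed
spec:
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage     # must match the claim above
  local:
    path: /opt/application
  nodeAffinity:                       # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker2.example.com       # node name assumed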
2. Init containers:
Init containers always run to completion.
Each init container must complete successfully before the next one starts; see the sketch below.
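A minimal sketch of a pod with an init container, assuming we just want to wait for a service named db to resolve before starting nginx:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                     # name assumed
spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    command: ["sh", "-c", "until nslookup db; do echo waiting for db; sleep 2; done"]
  containers:
  - name: nginx
    image: nginx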