Running MongoDB Enterprise
Database Services on
Kubernetes
At MongoDB.local, you'll learn about the technologies, tools, and best practices that make it easy to build data-driven applications without distraction.
Jason Mimick
Technical Director, MongoDB
@jmimick
jasonmimick
Agenda
Kubernetes & MongoDB Target Architecture
Demo: installing the MongoDB Operator & Running a replica set
Advanced configurations
Accessing MongoDB services
Release plans: Target GA, Additional Features, OpenShift 3.11
Technologies - Containers
A standardized unit of software: a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
(https://www.docker.com/resources/what-container)
Technologies - Kubernetes
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
(https://kubernetes.io/)
Important Concepts: Master Node, Worker Nodes, Pods, Image Repo, API
Requirement: >= v1.9
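A quick way to check that an existing cluster meets this minimum version (a sketch; the exact output format varies across kubectl releases):
# Verify the cluster meets the operator's minimum Kubernetes version (>= v1.9)
kubectl version --short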
Technologies - MongoDB Ops Manager
Ops Manager is a package for managing MongoDB deployments.
Management (Automation)
Monitoring
Backups
Cloud Manager
Technologies - Kubernetes Operators
An Operator is a method of packaging, deploying and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl tooling.
(https://coreos.com/operators/)
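In practice, the operator registers custom resource types with the Kubernetes API and then watches for the MongoDB resources you create. A hedged sketch of what that looks like (the exact CRD and resource names may differ by operator version):
# List the custom resource types the operator registered
kubectl get crd | grep -i mongodb
# MongoDB deployments are then managed with ordinary kubectl tooling
kubectl get mongodbreplicasets -n mongodb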
Architecture: Kubernetes + MongoDB Ops Manager + Operator (architecture diagram)
Part 2 - Using the Operator
Getting the operator
Official container images hosted on quay.io
Public GitHub repository: https://github.com/mongodb/mongodb-enterprise-kubernetes
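To work from a local copy of the manifest used in the next demo (assumes git is available on your workstation):
# Clone the repository to get the operator manifest and samples locally
git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git
cd mongodb-enterprise-kubernetes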
Demo
Installing the Operator
Spin up k8s & install operator
# Start cluster
minikube start
# Install operator
kubectl create -f ./mongodb-enterprise.yaml
# set default namespace (op creates & installs to 'mongodb')
kubectl config set-context $(kubectl config current-context) --namespace=mongodb
# See all operator stuff
kubectl get all --selector=app=mongodb-enterprise-operator
https://docs.opsmanager.mongodb.com/current/tutorial/install-k8s-operator/
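Before creating any MongoDB resources, it's worth verifying the operator itself is running; a sketch using the same selector as above:
# Wait for the operator pod to reach Running
kubectl get pods -n mongodb -l app=mongodb-enterprise-operator
# Tail the operator logs if anything looks off
kubectl logs -n mongodb -l app=mongodb-enterprise-operator --tail=20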
Configuration - Connection to Ops Manager
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: dot-local
data:
  projectId: 5b76d1750bd66b7ea136427f
  baseUrl: https://cloud.mongodb.com/
---
apiVersion: v1
kind: Secret
metadata:
  name: opsmgr-credentials
stringData:
  user: jason.mimick
  publicApiKey: 02b9674b-e912-4bf5-bec3-43687832a6cd
Create Secret & ConfigMap
kubectl create -f opsmgr-credentials.yaml
kubectl create -f dot-local-opsmgr-project.yaml
kubectl get configmaps,secrets
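A quick sanity check that the project ConfigMap and Ops Manager credentials landed as expected (a sketch; Secret values come back base64-encoded):
kubectl describe configmap dot-local
kubectl get secret opsmgr-credentials -o yaml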
Demo
Deploying a MongoDB Replica Set
Deploying a Replica Set
apiVersion: mongodb.com/v1
kind: MongoDbReplicaSet
metadata:
  name: chicago-rs1
  namespace: mongodb
spec:
  members: 3
  version: 4.0.0
  project: dot-local
  credentials: opsmgr-credentials
  podSpec:
    storageClass: standard
kubectl create -f chicago-rs1.yaml
kubectl get all --selector=app=chicago-rs1-svc
NAME READY STATUS RESTARTS AGE
pod/chicago-rs1-0 1/1 Running 12 13h
pod/chicago-rs1-1 1/1 Running 12 13h
pod/chicago-rs1-2 1/1 Running 12 13h
NAME                              TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)          AGE
service/chicago-rs1-svc           ClusterIP  None          <none>       27017/TCP        13h
service/chicago-rs1-svc-external  NodePort   10.102.95.40  <none>       27017:30780/TCP  13h
NAME DESIRED CURRENT AGE
statefulset.apps/chicago-rs1 3 3 13h
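You can also ask the replica set itself whether all three members are healthy; a sketch using the mongo shell path shown in the connection demo later in this deck, and assuming authentication is not yet enabled (it is still a roadmap item in this beta):
# Print each member's name and replica set state
kubectl exec -it chicago-rs1-0 -- \
  /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.0.0/bin/mongo --quiet \
  --eval 'rs.status().members.forEach(function(m) { print(m.name, m.stateStr) })'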
High Availability - replication + StatefulSet
kubectl delete pod chicago-rs1-1
pod "chicago-rs1-1" deleted
root@ip-172-31-19-43:~# kubectl get all --selector=app=chicago-rs1-svc
NAME READY STATUS RESTARTS AGE
pod/chicago-rs1-0 1/1 Running 12 13h
pod/chicago-rs1-1 1/1 Running 0 1m
pod/chicago-rs1-2 1/1 Running 12 13h
NAME                              TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)          AGE
service/chicago-rs1-svc           ClusterIP  None          <none>       27017/TCP        13h
service/chicago-rs1-svc-external  NodePort   10.102.95.40  <none>       27017:30780/TCP  13h
NAME DESIRED CURRENT AGE
statefulset.apps/chicago-rs1 3 3 13h
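The StatefulSet re-creates chicago-rs1-1 with the same name and storage, and Ops Manager automation brings it back into the replica set. To watch this happen in real time, run the following in a second terminal before deleting the pod:
# Stream pod changes for this replica set as the member is re-created
kubectl get pods --selector=app=chicago-rs1-svc -w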
Scaling
# vim chicago-rs1.yaml
# kubectl apply -f chicago-rs1.yaml
mongodbreplicaset.mongodb.com/chicago-rs1 configured
# kubectl get all --selector=app=chicago-rs1-svc
NAME READY STATUS RESTARTS AGE
pod/chicago-rs1-0 1/1 Running 12 13h
pod/chicago-rs1-1 1/1 Running 0 3m
pod/chicago-rs1-2 1/1 Running 12 13h
pod/chicago-rs1-3 1/1 Running 0 10s
NAME                              TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)          AGE
service/chicago-rs1-svc           ClusterIP  None          <none>       27017/TCP        13h
service/chicago-rs1-svc-external  NodePort   10.102.95.40  <none>       27017:30780/TCP  13h
NAME DESIRED CURRENT AGE
statefulset.apps/chicago-rs1 4 4 13h
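Scaling is just a one-field change to the custom resource, so instead of editing the file you can patch it directly; an equivalent sketch, assuming the same resource names as above:
# Bump the replica set from 3 to 4 members without editing the YAML file
kubectl patch mongodbreplicaset chicago-rs1 -n mongodb --type=merge -p '{"spec":{"members":4}}'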
Part 2 - Advanced Configuration
Resource requests: memory, cpu
Persistent Storage - it's a database, right?
apiVersion: mongodb.com/v1
kind: MongoDbReplicaSet
metadata:
  name: chicago-rs1
  namespace: mongodb
spec:
  members: 3
  version: 4.0.0
  project: dotlocal-chicago
  credentials: opsmgr-credentials
  podSpec:
    storage: 5G
    storageClass: managed-premium
Persistent Storage - What's created
kubectl get sc,pv,pvc --output=custom-columns=KIND:.kind,NAME:.metadata.name,STATUS:.status.phase
KIND NAME STATUS
StorageClass default <none>
StorageClass managed-premium <none>
PersistentVolume pvc-2428e963-b5ff-11e8-bef4-0a58ac1f12f3 Bound
PersistentVolume pvc-877ac893-b5ff-11e8-bef4-0a58ac1f12f3 Bound
PersistentVolume pvc-c8993a87-b5ff-11e8-bef4-0a58ac1f12f3 Bound
PersistentVolumeClaim data-chicago-rs1-0 Bound
PersistentVolumeClaim data-chicago-rs1-1 Bound
PersistentVolumeClaim data-chicago-rs1-2 Bound
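Each member gets its own PersistentVolumeClaim, so data survives pod restarts. To see which volume and storage class back a given member (a sketch, in the same custom-columns style as above):
# Inspect the claim backing one member
kubectl describe pvc data-chicago-rs1-0
# List capacity and storage class for all claims
kubectl get pvc --output=custom-columns=NAME:.metadata.name,CLASS:.spec.storageClassName,CAPACITY:.status.capacity.storage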
Node Affinity & Anti-Affinity
---
apiVersion: mongodb.com/v1
kind: MongoDbReplicaSet
metadata:
  name: chicago-rs1-happy-nodes
  namespace: mongodb
spec:
  members: 3
  version: 4.0.0
  project: dotlocal-chicago
  credentials: opsmgr-credentials
  podSpec:
    storage: 5G
    storageClass: managed-premium
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: mood
            operator: In
            values:
            - happy
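For the nodeAffinity rule above to match anything, the nodes need the 'mood' label; a hedged sketch of how the labels shown on the next slide would have been applied:
# Label the demo nodes so the affinity rule has something to select
kubectl label node aks-nodepool1-21637950-0 mood=happy
kubectl label node aks-nodepool1-21637950-1 mood=happy
kubectl label node aks-nodepool1-21637950-2 mood=content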
Node Affinity & Anti-Affinity
[ec2-user@ip-172-31-58-143 mongodb-enterprise-kubernetes]$ kubectl describe node aks-nodepool1-21637950-{0,1,2} | grep -E "^Name:|mood"
Name:    aks-nodepool1-21637950-0
         mood=happy
Name:    aks-nodepool1-21637950-1
         mood=happy
Name:    aks-nodepool1-21637950-2
         mood=content
[ec2-user@ip-172-31-58-143 mongodb-enterprise-kubernetes]$ kubectl describe node aks-nodepool1-21637950-{0,1,2} | grep -E "^ mongodb|^Name:"
Name:    aks-nodepool1-21637950-0
  mongodb   chicago-rs1-2                                  0 (0%)  0 (0%)  0 (0%)  0 (0%)
  mongodb   chicago-rs1-happy-2                            0 (0%)  0 (0%)  0 (0%)  0 (0%)
  mongodb   chicago-rs1-happy-nodes-0                      0 (0%)  0 (0%)  0 (0%)  0 (0%)
  mongodb   chicago-rs1-happy-nodes-2                      0 (0%)  0 (0%)  0 (0%)  0 (0%)
  mongodb   mongodb-enterprise-operator-57bdc5c59f-lq6gt   0 (0%)  0 (0%)  0 (0%)  0 (0%)
Name:    aks-nodepool1-21637950-1
  mongodb   chicago-rs1-0                                  0 (0%)  0 (0%)  0 (0%)  0 (0%)
  mongodb   chicago-rs1-happy-0                            0 (0%)  0 (0%)  0 (0%)  0 (0%)
  mongodb   chicago-rs1-happy-nodes-1                      0 (0%)  0 (0%)  0 (0%)  0 (0%)
Name:    aks-nodepool1-21637950-2
  mongodb   chicago-rs1-1                                  0 (0%)  0 (0%)  0 (0%)  0 (0%)
  mongodb   chicago-rs1-happy-1                            0 (0%)  0 (0%)  0 (0%)  0 (0%)
Demo
Connecting to your MongoDB deployment
intra-k8s cluster connections
1. Jump into cluster: kubectl exec -it chicago-rs1-0 -- /bin/bash
2. Find mongo binaries - run shell
ps -ef | grep mongo
/var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.0.0/bin/mongo
"mongodb://chicago-rs1-0.chicago-rs1-svc.mongodb.svc.cluster.local,chicago-rs1-1.chicago-rs1-svc.mongodb.svc.cluster.local,chicago-rs1-2.chicago-rs1-svc.mongodb.svc.cluster.local/?replicaSet=chicago-rs1"
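With that URI, the bundled mongo shell can connect directly from inside any pod in the cluster; a sketch, assuming authentication is not yet enabled in this beta setup:
# Connect with the replica set URI and print the current primary
/var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.0.0/bin/mongo \
  "mongodb://chicago-rs1-0.chicago-rs1-svc.mongodb.svc.cluster.local,chicago-rs1-1.chicago-rs1-svc.mongodb.svc.cluster.local,chicago-rs1-2.chicago-rs1-svc.mongodb.svc.cluster.local/?replicaSet=chicago-rs1" \
  --quiet --eval 'db.isMaster().primary'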
mongodb+srv://
#!/bin/bash
# Usage: ./connection-srv-demo.sh <MongoDB Service Name>
: ${1?"Usage: $0 <MongoDB Service Name>"}

POD_STATE_WAIT_SECONDS=5
SERVICE=$1
DNS_SRV_POD="mongodb-${SERVICE}-dns-srv-test"
SIMPLE_CONNECTION_POD="mongodb-${SERVICE}-connection-test"

echo "MongoDB Enterprise Kubernetes - DNS SRV Connection Test - START"
echo "Testing service '${SERVICE}'"
echo "DNS SRV pod '${DNS_SRV_POD}'"
echo "Connection test pod '${SIMPLE_CONNECTION_POD}'"

# Look up the SRV records for the headless service from inside the cluster
kubectl run ${DNS_SRV_POD} --restart=Never --image=tutum/dnsutils -- host -t srv ${SERVICE}
STATE=""
while [ "${STATE}" != "Terminated" ]
do
    STATE=$(kubectl describe pod ${DNS_SRV_POD} | grep "State:" | cut -d":" -f2 | tr -d '[:space:]')
    if [ "${STATE}" = "Terminated" ]; then break; fi
    echo "Polling state for pod '${DNS_SRV_POD}': ${STATE} ...sleeping ${POD_STATE_WAIT_SECONDS}"
    sleep ${POD_STATE_WAIT_SECONDS}
done
kubectl logs ${DNS_SRV_POD}
SRV_HOST=$(kubectl logs ${DNS_SRV_POD} | cut -d' ' -f1 | head -1)
echo "Found SRV hostname: '${SRV_HOST}'"

# Run a throwaway pod that connects to the discovered host and inserts test documents
kubectl run ${SIMPLE_CONNECTION_POD} --restart=Never --image=jmimick/simple-mongodb-connection-tester "${SRV_HOST}"
STATE=""
while [ "${STATE}" != "Terminated" ]
do
    STATE=$(kubectl describe pod ${SIMPLE_CONNECTION_POD} | grep "State:" | cut -d":" -f2 | tr -d '[:space:]')
    if [ "${STATE}" = "Terminated" ]; then break; fi
    echo "Polling state for pod '${SIMPLE_CONNECTION_POD}': ${STATE} ...sleeping ${POD_STATE_WAIT_SECONDS}"
    sleep ${POD_STATE_WAIT_SECONDS}
done
kubectl logs ${SIMPLE_CONNECTION_POD} | head
echo "...truncating logs, 100 documents should've been inserted..."
kubectl logs ${SIMPLE_CONNECTION_POD} | tail

# Clean up the test pods
kubectl delete pod ${DNS_SRV_POD}
kubectl delete pod ${SIMPLE_CONNECTION_POD}
./connection-srv-demo.sh chicago-rs1-svc
MongoDB Enterprise Kubernetes - DNS SRV Connection Test - START
Testing service 'chicago-rs1-svc'
DNS SRV pod 'mongodb-chicago-rs1-svc-dns-srv-test'
Connection test pod 'mongodb-chicago-rs1-svc-connection-test'
pod/mongodb-chicago-rs1-svc-dns-srv-test created
Polling state for pod 'mongodb-chicago-rs1-svc-dns-srv-test': Waiting ...sleeping 5
chicago-rs1-svc.mongodb.svc.cluster.local has SRV record 10 33 0 chicago-rs1-0.chicago-rs1-svc.mongodb.svc.cluster.local.
chicago-rs1-svc.mongodb.svc.cluster.local has SRV record 10 33 0 chicago-rs1-1.chicago-rs1-svc.mongodb.svc.cluster.local.
chicago-rs1-svc.mongodb.svc.cluster.local has SRV record 10 33 0 chicago-rs1-2.chicago-rs1-svc.mongodb.svc.cluster.local.
Found SRV hostname: 'chicago-rs1-svc.mongodb.svc.cluster.local'
pod/mongodb-chicago-rs1-svc-connection-test created
Polling state for pod 'mongodb-chicago-rs1-svc-connection-test': Waiting ...sleeping 5
simple-connection-test: testing connection to chicago-rs1-svc.mongodb.svc.cluster.local
Creating and reading 100 docs in the 'test-6b676382.foo' namespace
Database(MongoClient(host=['chicago-rs1-svc.mongodb.svc.cluster.local:27017'], document_class=dict, tz_aware=False, connect=True), u'test-6b676382')
{u'i': 0, u'_id': ObjectId('5b98992c62b04e00014b7d5a')}
{u'i': 1, u'_id': ObjectId('5b98992d62b04e00014b7d5b')}
{u'i': 2, u'_id': ObjectId('5b98992d62b04e00014b7d5c')}
...truncating logs, 100 documents should've been inserted...
{u'i': 97, u'_id': ObjectId('5b98992d62b04e00014b7dbb')}
{u'i': 98, u'_id': ObjectId('5b98992d62b04e00014b7dbc')}
{u'i': 99, u'_id': ObjectId('5b98992d62b04e00014b7dbd')}
Dropped db 'test-6b676382'
pod "mongodb-chicago-rs1-svc-dns-srv-test" deleted
pod "mongodb-chicago-rs1-svc-connection-test" deleted
[ec2-user@ip-172-31-58-143 mongodb-enterprise-kubernetes]$
Part 3 - Release plans & coming features
Roadmap
Additional Features: SSL/TLS, Authentication, Enabling Backups, Running Ops Manager inside Kubernetes
Release Plans: Beta now - target Q1 2019 for GA (https://docs.opsmanager.mongodb.com/current/reference/known-issues-k8s-beta/), OpenShift 3.11
Community Resources
Where to get involved and learn more
GitHub: https://github.com/mongodb/mongodb-enterprise-kubernetes
Slack: https://launchpass.com/mongo-db #enterprise-kubernetes
Talk to me!
Thank You!
