Kubernetes: Installing MongoDB ReplicaSet on Azure using Helm

by Ivan Fioravanti, September 10th, 2017


It’s time to install a MongoDB ReplicaSet on a Kubernetes cluster on Azure and try to kill it in all possible ways!

Azure Kubernetes Cluster configuration

Let’s install a cluster with 1 master and 3 nodes, all running Linux, using ACS Engine with the following commands.

I described detailed steps on using ACS Engine to install a cluster in my Kubernetes Adventures on Azure — Part 3 (ACS Engine & Hybrid Cluster) article. Please refer to it for details. Here I quickly list the commands to be used.

Creation of Resource Group

This is needed to group all resources for this tutorial in a single logical group, so that everything can be deleted with a single command at the end.

az group create --name k8sMongoTestGroup --location westeurope

Cluster provisioning

I usually create an SSH key pair for my tests on ACS. Please check my article here on how to do it. I will change the examples/kubernetes.json file to use the previously created SSH key pair, dnsPrefix and servicePrincipalProfile (following the Deploy a Kubernetes Cluster suggestions from Microsoft).
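If you still need to generate the key pair, a minimal sketch (the file path here is just an example) is:

# generate a 2048-bit RSA key pair for the cluster VMs (path is an example)
ssh-keygen -t rsa -b 2048 -f ~/.ssh/acs_k8s_rsa
# print the public key, to be pasted into the keyData field of kubernetes.json
cat ~/.ssh/acs_k8s_rsa.pub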

My kubernetes.json file with the changes is:

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.7"
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "ivank8stest",
      "vmSize": "Standard_D2_v2"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_D2_v2",
        "availabilityProfile": "AvailabilitySet"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "ssh-rsa yourpubkeyhere"
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "yourappclientid",
      "secret": "yourappsecret"
    }
  }
}

Create your cluster with:

acs-engine deploy --subscription-id 
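The command above is truncated. A fuller invocation would look roughly like the sketch below; the flag names are taken from the ACS Engine documentation of that period and the values are placeholders, so verify against acs-engine deploy --help:

acs-engine deploy --subscription-id <your-subscription-id> \
  --resource-group k8sMongoTestGroup \
  --location westeurope \
  --api-model examples/kubernetes.json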

Wait for the cluster to be up and running:


INFO[0010] Starting ARM Deployment (k8sMongoGroup-2051810234). This will take some time…
INFO[0651] Finished ARM Deployment (k8sMongoGroup-2051810234).

Connect to it using the kubeconfig file generated during deployment in the _output folder.

export KUBECONFIG=~/acs/acs-engine/_output/ivank8stest/kubeconfig/kubeconfig.westeurope.json

The following commands can be used to determine when the cluster is ready:


kubectl cluster-info
kubectl get nodes





NAME                        STATUS    AGE       VERSION
k8s-agentpool1-33584487-0   Ready     46m       v1.7.4
k8s-agentpool1-33584487-1   Ready     46m       v1.7.4
k8s-agentpool1-33584487-2   Ready     46m       v1.7.4
k8s-master-33584487-0       Ready     46m       v1.7.4

Now you can open the Kubernetes Dashboard if you want to use a UI to check your cluster status: run kubectl proxy and then open a browser at http://127.0.0.1:8001/ui.
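In practice that is just two steps:

# start a local proxy to the Kubernetes API server
kubectl proxy
# then browse to http://127.0.0.1:8001/ui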

Helm MongoDB Charts

Helm is the Package Manager for Kubernetes. It simplifies installation and maintenance of products and services like:

  • MongoDB
  • Redis
  • RabbitMQ
  • and many others

We will use it to install and configure a MongoDB Replica Set.
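As a quick taste of how simple Helm is, once it is installed (next section) finding the chart we need is a one-liner, since helm init configures the stable repository by default:

# search the configured chart repositories for MongoDB-related charts
helm search mongodb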

Helm installation

Prerequisites, installation steps and details can be found in the Use Helm to deploy containers on a Kubernetes cluster article from Microsoft.

With all prerequisites in place, Helm installation is as simple as running the command:

helm init --upgrade
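To confirm that both the local client and the in-cluster Tiller component are up, you can run:

# prints both Client and Server (Tiller) versions once Tiller is ready
helm version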

Clone charts repository

Let’s clone the charts repository to be able to examine and change the MongoDB chart files before deploying everything on our cluster:

git clone https://github.com/kubernetes/charts.git


Now go into the charts/stable/mongodb-replicaset folder. Here you will find all the artifacts composing a Helm chart. If needed, you can change the values.yaml file to tailor the installation to your needs. For now, let’s try a standard installation.
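As an illustration only, an override file could tweak a couple of settings; the exact field names below are assumptions and must be checked against the chart's values.yaml:

# myvalues.yaml - illustrative overrides, verify the field names in the chart's values.yaml
replicas: 3
persistentVolume:
  size: 10Gi

You would then install with helm install -f myvalues.yaml . instead of the plain command below.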

Run the following command: helm install . and wait for the following output:




NAME: foppish-angelfish
LAST DEPLOYED: Sun Sep 10 20:42:42 2017
NAMESPACE: default
STATUS: DEPLOYED




RESOURCES:
==> v1/Service
NAME                                   CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
foppish-angelfish-mongodb-replicaset   None         <none>        27017/TCP   5s



==> v1beta1/StatefulSet
NAME                                   DESIRED   CURRENT   AGE
foppish-angelfish-mongodb-replicaset   3         1         5s




==> v1/ConfigMap
NAME                                         DATA   AGE
foppish-angelfish-mongodb-replicaset         1      5s
foppish-angelfish-mongodb-replicaset-tests   1      5s


NOTES:...

DONE!

MongoDB ReplicaSet is up and running! Helm is ultra easy and powerful!


MongoDB installation test

From the output of helm install, pick up the NAME of your release and use it in the following command:

export RELEASE_NAME=foppish-angelfish
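If you did not copy the name from the install output, Helm can list every deployed release:

# show deployed releases, their revision and status
helm list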

Here we follow a different path from the Helm chart notes. Let’s open an interactive shell session with the remote Mongo server!

kubectl exec -it $RELEASE_NAME-mongodb-replicaset-0 -- mongo

OUTPUT










MongoDB shell version v3.4.8
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.8
type "help" for help
...
...A LOT OF WARNINGS (I will check these in a future post to clean them up, if possible)...
...
rs0:PRIMARY>

Who is the primary?

In theory Pod 0 should be the primary, as you can see from the rs0:PRIMARY> prompt. If this is not the case, run the following command to find the primary:

rs0:SECONDARY> db.isMaster().primary

Take note of the primary Pod, because we are going to kill it soon; connect to it using the same kubectl exec command used above.
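Another way to see the role of every member at a glance, still from the mongo shell, is:

// print each replica set member with its current state (PRIMARY/SECONDARY)
rs.status().members.forEach(function (m) { print(m.name, m.stateStr); })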

Failover testing

We need to create some data to check persistence across failures. We are already connected to the mongo shell; creating a document and leaving the session is as simple as:


rs0:PRIMARY> db.test.insert({key1: 'value1'})
rs0:PRIMARY> exit

Use the following command to monitor changes in the replica set:

kubectl run --attach bbox --image=mongo:3.4 --restart=Never --env="RELEASE_NAME=$RELEASE_NAME" -- sh -c 'while true; do for i in 0 1 2; do echo $RELEASE_NAME-mongodb-replicaset-$i $(mongo --host=$RELEASE_NAME-mongodb-replicaset-$i.$RELEASE_NAME-mongodb-replicaset --eval="printjson(rs.isMaster())" | grep primary); sleep 1; done; done';







OUTPUT
foppish-angelfish-mongodb-replicaset-0 "primary" : "foppish-angelfish-mongodb-replicaset-0.foppish-angelfish-mongodb-replicaset.default.svc.cluster.local:27017",
foppish-angelfish-mongodb-replicaset-1 "primary" : "foppish-angelfish-mongodb-replicaset-0.foppish-angelfish-mongodb-replicaset.default.svc.cluster.local:27017",
foppish-angelfish-mongodb-replicaset-2 "primary" : "foppish-angelfish-mongodb-replicaset-0.foppish-angelfish-mongodb-replicaset.default.svc.cluster.local:27017",
...

Kill the primary!

Here it is: kubectl delete pod $RELEASE_NAME-mongodb-replicaset-0

MongoDB will start an election and another Pod will become the primary:

foppish-angelfish-mongodb-replicaset-1 "primary" : "foppish-angelfish-mongodb-replicaset-1.foppish-angelfish-mongodb-

In the meantime, Kubernetes will immediately take corrective action, instantiating a new Pod 0.

Kill’em all

Now we have to simulate a real disaster: let’s kill all the Pods and watch the StatefulSet magically recreate everything with all data available.


kubectl delete po -l "app=mongodb-replicaset,release=$RELEASE_NAME"
kubectl get po --watch-only

After a few minutes our MongoDB replica set will be back online, and we can test it again to see if our data is still there.

Final check and clean up

Run the following command to verify that the key we created is still there:

kubectl exec $RELEASE_NAME-mongodb-replicaset-1 -- mongo --eval="rs.slaveOk(); db.test.find({key1:{\$exists:true}}).forEach(printjson)"

As always, you can delete everything with a single Azure CLI 2.0 command: az group delete --name k8sMongoTestGroup --yes --no-wait

How to expose this ReplicaSet externally?


This is a topic for a future post. It seems as trivial as a kubectl expose, but it is not so easy. If you expose the existing service, traffic will be load balanced across the 3 Pods behind it, and that is wrong for a replica set. We need 3 load balancers, exposing 3 services, one for each Pod in the StatefulSet. Moreover, we have to activate authentication and SSL to follow security best practices.
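As a hedged sketch, one of those three Services could look like the manifest below, assuming your Kubernetes version labels StatefulSet Pods with statefulset.kubernetes.io/pod-name (names match the release used above; authentication and SSL are still missing):

# mongo-0-external.yaml - illustrative per-Pod LoadBalancer Service (one of three)
apiVersion: v1
kind: Service
metadata:
  name: mongo-0-external
spec:
  type: LoadBalancer
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    statefulset.kubernetes.io/pod-name: foppish-angelfish-mongodb-replicaset-0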

I will find the best way to do it while playing with Helm, Kubernetes, MongoDB and Azure!