In this tutorial we are going to expand on our earlier examples by deploying a more complex microservice. The idea is to make you more comfortable with the platform and to show you how you can leverage it for more advanced scenarios. In this example we are going to see the deployment of:

- A Redis master
- Multiple Redis slaves
- A sample guestbook application that uses Redis as a store

We assume that you have already set up a Platform9 cluster with at least one node, and that the cluster is ready. Let's start with the Redis parts.

Deploying and exposing a Redis Cluster

Redis is an in-memory key-value store that is used mainly as a cache service. In order to set up clustering for data replication, we need one Redis instance that acts as master, together with additional instances acting as slaves. The guestbook application can then use the master to store data, and the master will propagate writes to the slave nodes.

We can initiate a Redis master deployment in a few different ways: using the kubectl tool, the Platform9 UI, or the Kubernetes UI. For convenience, we use the kubectl tool, as it's the most commonly used in tutorials.

First we need to create a Redis cluster deployment. Looking at the documentation here, to set up a cluster we need some configuration properties. We can leverage Kubernetes ConfigMaps to store them and reference them in the deployment spec. We need to save a script and a redis.conf file that are going to be used to configure the master and slave nodes.

Create the following config, redis-cluster.config.yml, with these values:

```
$ cat redis-cluster.config.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster-config
data:
  update-ip.sh: |
    #!/bin/sh
    sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${IP}/" /data/nodes.conf
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-config-file /data/nodes.conf
    appendonly yes
```

Here we define a script that inserts the pod's IP value into the nodes.conf file.
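To see what the sed rewrite in update-ip.sh actually does, here is a minimal local sketch using a made-up nodes.conf line and pod IP (the file path and all values are illustrative, not taken from a live cluster):

```shell
# Fake nodes.conf entry: node id, address, flags (all values are made up)
cat > /tmp/nodes.conf <<'EOF'
abc123 10.1.0.5:6379@16379 myself,master - 0 0 1 connected 0-5460
EOF

# The same substitution the ConfigMap script runs: on the line containing
# "myself", replace the recorded IP with the pod's own IP from the IP variable
IP=10.1.0.9
sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${IP}/" /tmp/nodes.conf

cat /tmp/nodes.conf
# → abc123 10.1.0.9:6379@16379 myself,master - 0 0 1 connected 0-5460
```

Only the line matching `myself` (the node's own entry) is rewritten; the other numeric fields are untouched because the pattern requires four dot-separated number groups.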
This works around a known issue with Redis in containers, as referenced here. We run this script every time we deploy a new Redis image. Then we have redis.conf, which applies the minimal cluster configuration.

Apply this spec to the cluster:

```
$ kubectl apply -f redis-cluster.config.yml
```

Then verify that it exists in the list of ConfigMaps:

```
$ kubectl get configmaps
```

Next we need to define a spec for the Redis cluster instances. We can use a Deployment or a StatefulSet to define the instances; here we use a StatefulSet with six replicas (which will become three masters and three slaves once we bootstrap the cluster). Here is the spec, redis-cluster.statefulset.yml:

```
$ cat redis-cluster.statefulset.yml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:5.0.7-alpine
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: [ "/conf/update-ip.sh", "redis-server", "/conf/redis.conf" ]
        env:
        - name: IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster-config
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```

In the above spec we defined a few things:

- An IP environment variable that we need in the update-ip.sh script defined in the ConfigMap earlier. This is the pod-specific IP address, obtained using the Downward API.
- Some shared volumes, including the ConfigMap that we defined earlier.
- Two container ports: 6379 for clients and 16379 for the gossip protocol.

With this spec we can deploy the Redis cluster instances:

```
$ kubectl apply -f redis-cluster.statefulset.yml
```

Once we verify that the deployment is ready, we need to perform the last step, which is bootstrapping the cluster.
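Before bootstrapping, it is worth confirming that all replicas are up. A couple of standard kubectl checks would look like this (output omitted, as it depends on your cluster):

```shell
# Wait for the StatefulSet rollout to complete
kubectl rollout status statefulset/redis-cluster

# List the pods with their IPs; all six should be Running
kubectl get pods -l app=redis-cluster -o wide
```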
Consulting the documentation here for creating the cluster, we need to exec into one of the instances and run the redis-cli --cluster create command. For example, taken from the docs:

```
$ redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 \
127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
--cluster-replicas 1
```

To do that in our case, we need to get the local pod IPs of the instances and feed them to that command. We can query the IPs using this command:

```
$ kubectl get pods -l app=redis-cluster -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}'
```

If we save them in a variable, we can pass them at the end of the redis-cli command:

```
$ POD_IPS=$(kubectl get pods -l app=redis-cluster -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}')
$ kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $POD_IPS
```

If everything is OK, you will see the following prompt. Enter 'yes' to accept and continue:

```
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
........
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```
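To make the plumbing above concrete, here is a local sketch of how a list of pod IPs becomes the space-separated host:port arguments that redis-cli expects. The IPs are mocked, since this snippet does not talk to a cluster:

```shell
# Mocked pod IPs standing in for the output of the kubectl jsonpath query
MOCK_IPS="10.1.0.5 10.1.0.6 10.1.0.7"

# Build "ip:6379 ip:6379 ..." the same way the jsonpath template does
POD_IPS=$(for ip in $MOCK_IPS; do printf '%s:6379 ' "$ip"; done)
POD_IPS=${POD_IPS% }  # trim the trailing space

echo "redis-cli --cluster create --cluster-replicas 1 $POD_IPS"
# → redis-cli --cluster create --cluster-replicas 1 10.1.0.5:6379 10.1.0.6:6379 10.1.0.7:6379
```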
Then we can verify the cluster state by running the cluster info command:

```
$ kubectl exec -it redis-cluster-0 -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:28
cluster_stats_messages_pong_sent:34
cluster_stats_messages_sent:62
cluster_stats_messages_ping_received:29
cluster_stats_messages_pong_received:28
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:62
```

Before we continue deploying the guestbook app, we need to offer a unified service front end for the Redis cluster so that it's easily discoverable within the cluster. Here is the service spec, redis-cluster.service.yml:

```
$ cat redis-cluster.service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
```

We expose the cluster as redis-master here, as the guestbook app will be looking for a service host with that name to connect to. Once we apply this service spec, we can move on to deploying and exposing the guestbook application:

```
$ kubectl apply -f redis-cluster.service.yml
```

Deploying and exposing a GuestBook Application

The guestbook application is a simple PHP script that shows a form for submitting a message. On startup it will attempt to connect to either the redis-master host or the redis-slave hosts.
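With the service applied, a quick way to sanity-check that the redis-master service resolves inside the cluster is a throwaway pod running redis-cli (the pod name redis-ping is our own choice; expect a PONG reply if everything is wired up):

```shell
# One-off pod that pings the redis-master service and is removed afterwards
kubectl run redis-ping --rm -it --restart=Never \
  --image=redis:5.0.7-alpine -- redis-cli -h redis-master ping
```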
It needs the GET_HOSTS_FROM environment variable set (to env here, so that hosts are read from environment variables), together with the following variables:

- REDIS_MASTER_SERVICE_HOST: the host of the master
- REDIS_SLAVE_SERVICE_HOST: the host of the slave

First, let's define the deployment spec below, php-guestbook.deployment.yml:

```
$ cat php-guestbook.deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: guestbook
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 150m
            memory: 150Mi
        env:
        - name: GET_HOSTS_FROM
          value: env
        - name: REDIS_MASTER_SERVICE_HOST
          value: "redis-master"
        - name: REDIS_SLAVE_SERVICE_HOST
          value: "redis-master"
        ports:
        - containerPort: 80
```

The code of the gb-frontend image is located here.

Next is the associated service spec:

```
---
apiVersion: v1
kind: Service
metadata:
  name: guestbook-lb
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: guestbook
```

Note: NodePort will assign a random port on the public IP of the node. In either case, we get a public host:port pair where we can inspect the application. Here is a screenshot of the app after we deployed it:

Cleaning up

Once we have finished experimenting with the application, we can clean up the resources by issuing kubectl delete statements. A convenient way is to delete by labels.

**Please note I am an employee of Platform9 and my team helped contribute to this guide**
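As one possible shape for the label-based cleanup mentioned above (this sketch assumes you also add matching app labels to the StatefulSet, Service, and ConfigMap metadata themselves; the specs above only label the pod templates):

```shell
# Delete the Redis pieces along with their persistent volume claims
kubectl delete statefulset,service,configmap,pvc -l app=redis-cluster

# Delete the guestbook pieces
kubectl delete deployment,service -l app=guestbook
```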