Table of Contents
- Part 1: Creating a Micro-Service and Deploying in Minikube
- Kubernetes Deployment Strategies
- Initial Set up
- BusinessWorks Greeting Micro Service
- Getting Minikube ready
- Initial Deployment into Kubernetes
Much of the discussion about deploying applications in containers centres on how an orchestration platform such as Kubernetes provides Elastic Scalability, allowing incoming demand to scale your application up and down appropriately. This can be extremely cost effective when you're paying hourly usage rates for cloud hosting, and it provides one of the biggest cost savings over data-centre hosting when you have a proper cloud-native deployment. To do this properly, the application in your container needs to expose the appropriate metrics that Kubernetes can query to determine the scale-up and scale-down points.
After scalability, the next topic is Self-Healing: when a micro service is no longer responding, a new container is started, traffic is routed to the new container, and the old container is unceremoniously removed. To avoid any downtime, this means you need at least two instances of your micro service running (n+1); that way, there should be no interruption of service.
In this post, I want to talk about another aspect of Containerisation that I think has been overlooked of late, but has a significant impact on how organisations manage platforms that are deployed in Kubernetes, and in particular, how to handle upgrades of the container runtime as well as the micro service running in the container.
In times gone by, if you had a non-containerised deployment of an application that underpins your business, an upgrade of the platform took time and planning. No-one wants to simply upgrade the platform, restart the services and hope for the best, so it was typical for a new platform to be deployed and for applications to be rebuilt and deployed in Test/QA before finally reaching Production, with the implied interruption of service at a suitably quiet time, and everyone on hand in case things went wrong and needed to be fixed or rolled back.
Thankfully, Containers and Orchestration platforms such as Kubernetes have consigned those long nights and war rooms to the past, where they belong!
Kubernetes supports a number of different deployment strategies, and choosing the right one depends on your needs. Some of the strategies you can adopt are:
- Recreate: All existing Pods are killed and new Pods are created. The entire application is redeployed, but this requires downtime
- Ramped/Rolling Update: Release a new version, route traffic to it and terminate an existing instance, then move on to the next. This removes downtime whilst Pods are updated, but you have no control over traffic and you will need to ensure API compatibility across versions
- Blue/Green: Release a new version alongside the older version and then switch traffic over. This avoids API compatibility issues and gives you instant rollback, but double the resources are required
- Canary: Release a new version to a subset of users. Quite easy to set up, but rollouts can be slow depending on how you manage traffic between Pods
- A/B: Similar to Canary, except that traffic is routed on specific conditions such as header values. Requires the use of an intelligent load balancer
In this series of posts, I'll look at using Recreate, Ramped, Blue/Green, and Canary with a micro service created using TIBCO BusinessWorks™ Container Edition.
I'm starting with a simple setup on Ubuntu that includes:
The Docker and YAML files above are included to provide some example configurations that you might like to use, and they are straightforward to adapt to your needs.
I have created a simple Greeting Service in TIBCO BusinessWorks, ready for deployment into a container. As you can see, it's very simple: all it does is return a greeting to whatever query parameter was sent to the REST service. It also returns the version number of the application from the application's Module Properties (handy for testing our rollout strategy!)
I've generated two versions of this micro service, 1.0 and 1.1, and generated the EAR files ready for the creation of the Docker images. I've also created two Docker files, one for each version.
Docker File for version 1.0
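A minimal sketch of what this file could look like. The base image name (`tibco/bwce`) and the EAR filename (`greeting-app_1.0.ear`) are assumptions; adjust them to match your own build.

```dockerfile
# Assumed BWCE base image; your registry/tag may differ
FROM tibco/bwce:latest

# Add the 1.0 EAR where the base image picks up applications
ADD greeting-app_1.0.ear /

# 8081 is the main REST service, 7777 is used by the readiness probe
EXPOSE 8081 7777
```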
Docker File for version 1.1
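A sketch for the 1.1 image; only the EAR changes. Again, the base image name and EAR filename are assumptions.

```dockerfile
# Assumed BWCE base image; your registry/tag may differ
FROM tibco/bwce:latest

# Add the 1.1 EAR where the base image picks up applications
ADD greeting-app_1.1.ear /

# 8081 is the main REST service, 7777 is used by the readiness probe
EXPOSE 8081 7777
```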
You can see that I'm exposing two ports: 8081 is the main REST service, whilst 7777 is the port used for the readiness probe.
The readiness probe is important, as you don't want Kubernetes to send requests to a container that isn't yet able to process them.
Once you have Minikube installed on your platform, there are a few commands you want to keep handy:
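For example, the standard Minikube lifecycle commands (a sketch; run them against your own installation):

```shell
minikube start       # start the local single-node cluster
minikube status      # check that the cluster is healthy
minikube dashboard   # open the Kubernetes Dashboard in a browser
minikube stop        # stop the cluster, keeping its state
minikube delete      # throw the cluster away entirely
```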
Once Minikube is up and running, we need to configure our local environment to use the docker daemon within Minikube. This is done with the following command:
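This is the standard Minikube docker-env command:

```shell
# Point the local docker CLI at the daemon inside Minikube,
# so images we build are immediately visible to the cluster
eval $(minikube docker-env)
```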
We're now at the stage where we can create the two versions of the Greeting App Docker images. To do this, I'm going to use the two EAR files and the two Docker files, and run the following commands:
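A sketch of those build commands; the Dockerfile names (`Dockerfile-1.0`, `Dockerfile-1.1`) are assumptions, so use whatever you named yours.

```shell
# Build and tag both versions of the image against the Minikube daemon
docker build -t greeting-app:1.0 -f Dockerfile-1.0 .
docker build -t greeting-app:1.1 -f Dockerfile-1.1 .
```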
Now we can deploy our application into Minikube.
To deploy into Kubernetes you need a manifest file that describes the deployment. You can create the files in either YAML or JSON format; I'll be using YAML, as it seems to be the most popular and there are more examples out there.
My 1.0 version of the manifest file looks like the following:
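A sketch of what that manifest could look like. The resource names, labels and the NodePort Service are assumptions; the replica count, image tag and readiness-probe settings follow the points called out below.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting-app
spec:
  replicas: 2                      # two Pods behind the service
  selector:
    matchLabels:
      app: greeting-app
  template:
    metadata:
      labels:
        app: greeting-app
    spec:
      containers:
      - name: greeting-app
        image: greeting-app:1.0    # pin the version for this deployment
        imagePullPolicy: Never     # use the image built inside Minikube
        ports:
        - containerPort: 8081
        readinessProbe:
          httpGet:
            path: /
            port: 7777
          initialDelaySeconds: 60  # give BusinessWorks time to start
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: greeting-app
spec:
  type: NodePort                   # expose via kube-proxy on a node port
  selector:
    app: greeting-app
  ports:
  - port: 8081
    targetPort: 8081
```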
There's a fair amount of detail there, but most of it is self explanatory. A couple of things I'll point out:
- I've set up the readiness probe to start after 60 seconds, to give the BusinessWorks container time to start.
- I've also set the image to greeting-app:1.0, specifying the particular version to use in this deployment.
- I've set Replicas to 2, which means that two Pods will be deployed.
To create the deployment in Kubernetes, it's a simple command:
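A sketch, assuming the 1.0 manifest was saved as `greeting-app-1.0.yaml` (the filename is an assumption):

```shell
kubectl apply -f greeting-app-1.0.yaml

# Optionally watch the two replicas come up from the command line
kubectl get pods
```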
Then we can switch to the Kubernetes Dashboard and watch our deployment come up.
You'll see we have two Pods, each with its own unique name, which BusinessWorks uses as the hostname, so we'll be able to see which Pod is handling our request.
In the base deployment of Minikube, kube-proxy is used to proxy requests into the Pods themselves, and so it performs load balancing between those Pods. To get the URL of my micro service, I use the command:
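This is the standard Minikube service command; the service name `greeting-app` is an assumption matching the manifest.

```shell
# Print the externally reachable URL of the NodePort service
minikube service greeting-app --url
```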
You can see that my URL would be http://192.168.49.2:30914. From BusinessWorks, I know that my REST endpoint is /greeting and I can pass in a parameter.
So my full URL would be http://192.168.49.2:30914/greeting/Dave
To make testing easier, I have a quick script that I can run on the command line:
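A sketch of what that script could be, using the URL from above; the loop repeatedly calls the service so that responses from both Pods show up.

```shell
# Hit the Greeting service once a second; the host and port come
# from the `minikube service --url` output shown earlier
while true; do
  curl -s http://192.168.49.2:30914/greeting/Dave
  echo
  sleep 1
done
```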
You can see from the screen-grab that the REST API is returning my greeting, along with the hostname of the Pod and the version of the application. Kubernetes is sending requests to both of the Pods we saw earlier in the Kubernetes Dashboard.
Cool, version 1.0 is up and running!
In the following posts, I'll go through the different upgrade strategies we discussed earlier, starting with the Recreate strategy.
In the meantime, I've also created a