Zero Downtime Deployment of BusinessWorks Container Edition with Kubernetes - Part 1


    Manoj Chaurasia


    Part 1: Creating a Micro-Service and Deploying in Minikube


    Much of the discussion about deploying applications in containers centers around how an orchestration platform such as Kubernetes provides elastic scalability, allowing your application to scale up and down to meet incoming demand. This can be extremely cost effective when you're paying hourly usage rates for cloud hosting, and it provides one of the biggest cost savings over data center hosting when you have a proper cloud-native deployment. To do this properly, the application in your container needs to expose the appropriate metrics so that Kubernetes can determine when to scale up or down.
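    As an example of what that wiring can look like, here is a minimal sketch of a HorizontalPodAutoscaler scaling on CPU. The Deployment name greeting-app is borrowed from later in this post, the CPU target is an assumption, and it requires metrics-server in the cluster:

        apiVersion: autoscaling/v2
        kind: HorizontalPodAutoscaler
        metadata:
          name: greeting-app
        spec:
          scaleTargetRef:
            apiVersion: apps/v1
            kind: Deployment
            name: greeting-app
          minReplicas: 2
          maxReplicas: 5
          metrics:
            - type: Resource
              resource:
                name: cpu
                target:
                  type: Utilization
                  averageUtilization: 70   # assumed threshold for scaling up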

    After scalability, the next topic is Self-Healing: when a micro service is no longer responding, a new container is started, traffic is routed to the new container, and the old container is unceremoniously removed. To avoid any downtime, this means you need at least 2 instances of your micro service running (n+1); that way, there should be no interruption of service.

    In this post, I want to talk about another aspect of Containerisation that I think has been overlooked of late, but has a significant impact on how organisations manage platforms that are deployed in Kubernetes, and in particular, how to handle upgrades of the container runtime as well as the micro service running in the container.

    In times gone by, if you had a non-containerised deployment of an application that underpins your business, then an upgrade of the platform took time and planning. No-one wants to just upgrade the platform, restart the services and hope it works, so it was typical that a new platform was deployed, and applications were rebuilt and deployed in Test/QA platforms before finally being deployed in Production, with the implied interruption of service at an appropriate non-busy time and everyone on hand in case things went wrong and needed to be fixed or rolled back.

    Thankfully, Containers and Orchestration platforms such as Kubernetes have consigned those long nights and war rooms to the past, where they belong!

    Kubernetes Deployment Strategies

    Kubernetes supports a number of different deployment strategies, and choosing the right one depends on your needs. Some of the strategies you can adopt are listed below (a sketch of how the strategy is declared in a Deployment follows the list):

    • Recreate: All existing Pods are killed and new Pods are created. The entire application is redeployed, but this requires downtime.
    • Ramped/Rolling Update: Release a new version, route traffic to it and terminate an existing instance, then move on to the next instance. Avoids downtime whilst Pods are updated, but you have no control over traffic and you will need to ensure API compatibility across versions.
    • Blue/Green: Release a new version alongside the older version and then switch traffic. Avoids API compatibility issues and allows instant rollback, but double the resources are required.
    • Canary: Release a new version to a subset of users. Quite easy to set up, but rollouts can be slower depending on how you manage traffic between Pods.
    • A/B: Similar to Canary, except that traffic is routed by specific conditions such as header values. Requires the use of an intelligent load balancer.
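    As a rough sketch, only Recreate and RollingUpdate are built-in Deployment strategy types; Blue/Green, Canary and A/B are usually achieved with labels, Services or an intelligent load balancer on top. The strategy itself is declared in the Deployment spec:

        spec:
          replicas: 2
          strategy:
            type: RollingUpdate      # or Recreate
            rollingUpdate:
              maxSurge: 1            # how many extra Pods may be created during the update
              maxUnavailable: 0      # keep the existing Pods serving until replacements are ready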

    In this series of posts, I'll look at using Recreate, Ramped, Blue/Green, and Canary with a micro service created using TIBCO BusinessWorks™ Container Edition.

    Initial Set up

    I'm starting with a simple setup on Ubuntu with Docker, Minikube and kubectl installed, plus TIBCO BusinessWorks Container Edition for building the application images.

    The Docker and Yaml files referenced in this post are included to provide some example configurations that you might like to use, and they are pretty straightforward to change for your needs.

    BusinessWorks Greeting Micro Service

    I have created a simple Greeting service in TIBCO BusinessWorks, ready for deployment into a container. As you can see, it's very simple: all it does is return a greeting for whatever query parameter is sent to the REST service. It also returns the version number of the application from the application's Module Properties (handy for testing our rollout strategy!).

    [Screenshot: the Greeting REST service process in TIBCO BusinessWorks]

    I've generated two versions of this micro service, 1.0 and 1.1, and generated the EAR files ready for the creation of the Docker images. I've also created two Docker files, one for each version.

    Docker File for version 1.0

    [Screenshot: Docker file for version 1.0]

    Docker File for version 1.1

    [Screenshot: Docker file for version 1.1]
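    The Docker files aren't legible in the screenshots, so here is a rough sketch of what such a Docker file typically looks like. The base image tag and EAR file name are assumptions; the 1.1 version differs only in the EAR it adds.

        # Base image built from the BWCE runtime (tag is an assumption - use your local BWCE base image)
        FROM tibco/bwce:latest

        # Add the generated application EAR for this version (file name is an assumption)
        ADD greetingapp_1.0.application.ear /

        # 8081 is the REST service port, 7777 is the port used by the readiness probe
        EXPOSE 8081 7777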

    You can see that I'm exposing two ports: 8081 is the main REST service, whilst 7777 is the port used for the Readiness Probe.

    The Readiness Probe is important because you don't want Kubernetes to send requests to a container that isn't ready to process them yet.

    Getting Minikube ready

    Once you have Minikube installed on your platform, there are a few commands you want to keep handy:

    [Screenshot: handy Minikube commands]
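    The exact list in the screenshot isn't reproduced here, but the commands I keep handy are along these lines:

        minikube start           # start the local cluster
        minikube status          # check that the cluster is healthy
        minikube dashboard       # open the Kubernetes Dashboard in a browser
        minikube service list    # list services and their URLs
        minikube stop            # stop the cluster
        minikube delete          # throw the cluster away and start again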

    Once Minikube is up and running, we need to configure our local environment to use the docker daemon within Minikube. This is done with the following command:

    [Screenshot: command to point the local Docker client at Minikube's Docker daemon]
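    The command isn't legible in the screenshot; the standard way to point your shell's Docker client at Minikube's daemon is:

        eval $(minikube docker-env)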

    We're now at a stage where we can create the two versions of the Greeting App Docker images. To do this, I'm going to use the two EAR files and the two Docker files, and run the following commands:

    [Screenshot: docker build commands for the two image versions]
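    As a sketch of those commands (the Docker file names are assumptions; the greeting-app:1.0 tag matches the image referenced in the manifest below, and 1.1 follows the same pattern):

        # Build version 1.0 of the image
        docker build -f Dockerfile-1.0 -t greeting-app:1.0 .

        # Build version 1.1 of the image
        docker build -f Dockerfile-1.1 -t greeting-app:1.1 .

        # Confirm both images are available to Minikube's Docker daemon
        docker images | grep greeting-app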

    Now we can deploy our application into Minikube.

    Initial Deployment into Kubernetes

    To deploy into Kubernetes you need a manifest file that describes the deployment. I'll be using the Yaml format, as that seems to be the most popular and there are more examples out there, although you can also create json format files.

    My 1.0 version of the manifest file looks like the following:

    [Screenshot: Kubernetes deployment manifest for version 1.0]
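    The manifest isn't legible in the screenshot; a minimal sketch of such a manifest, based on the points called out below, might look like this. The Deployment and Service names, the probe type, and the NodePort Service are assumptions.

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: greeting-app
        spec:
          replicas: 2                        # 2 Pods deployed
          selector:
            matchLabels:
              app: greeting-app
          template:
            metadata:
              labels:
                app: greeting-app
            spec:
              containers:
                - name: greeting-app
                  image: greeting-app:1.0    # pin the specific version for this deployment
                  imagePullPolicy: Never     # assumption: use the image built inside Minikube
                  ports:
                    - containerPort: 8081
                  readinessProbe:
                    tcpSocket:               # probe type is an assumption; 7777 is the probe port from the Docker file
                      port: 7777
                    initialDelaySeconds: 60  # give the BusinessWorks container time to start
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: greeting-app
        spec:
          type: NodePort                     # assumption: expose the service outside the cluster via kube-proxy
          selector:
            app: greeting-app
          ports:
            - port: 8081
              targetPort: 8081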

    There's a fair amount of detail there, but most of it is self-explanatory. A couple of things I'll point out:

    • I've set up the readiness probe to start after 60 seconds, to give the BusinessWorks container time to start.
    • I've set the image to be greeting-app:1.0, specifying the particular version to use in this deployment.
    • I've set Replicas to 2, which means there will be 2 Pods deployed.

    To create the deployment in Kubernetes, it's a simple command:

    [Screenshot: kubectl command used to create the deployment]
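    The command in the screenshot is most likely along these lines (the manifest file name and resource names are assumptions, matching the sketch above):

        kubectl apply -f greeting-app-1.0.yaml

        # Watch the rollout until both replicas are ready
        kubectl rollout status deployment/greeting-app
        kubectl get pods -l app=greeting-app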

    Then we can switch to the Kubernetes Dashboard and watch our deployment come up.

    [Screenshot: the deployment in the Kubernetes Dashboard showing 2 Pods]

    You'll see we have 2 Pods, each with its own unique name, which BusinessWorks will use as the hostname, so we'll be able to see which Pod is handling our request.

    The base deployment of Minikube uses kube-proxy to proxy requests into the Pods, so it will load balance between them. To get the URL of my micro service, I use the command:

    [Screenshot: command used to get the service URL from Minikube]
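    A likely form of that command (the Service name is an assumption, matching the manifest sketch above):

        minikube service greeting-app --url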

    [Screenshot: output showing the service URL]

    You can see that my URL would be http://192.168.49.2:30914. From BusinessWorks, I know that my REST endpoint is /greeting and I can pass in a parameter.

    So my full URL would be http://192.168.49.2:30914/greeting/Dave

    To make testing easier, I have a quick script that I can run on the command line:

    [Screenshot: the quick test script]
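    The script isn't legible in the screenshot, but a simple loop along these lines produces similar output (the URL is the one shown above):

        #!/bin/bash
        # Call the greeting service once a second so we can see which Pod answers each request
        while true; do
          curl -s http://192.168.49.2:30914/greeting/Dave
          echo ""
          sleep 1
        done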

    [Screenshot: script output showing greetings returned from both Pods]

    You can see from the screen-grab that the REST API is returning my greeting, along with the Hostname of the Pod and the Version of the application. You can see that Kubernetes is sending requests to both Pods that we saw earlier in the Kubernetes Dashboard.

    Cool! Version 1.0 is up and running!

    In the following posts, I'll be going through the different upgrade strategies we discussed earlier, starting with the Recreate strategy.

    In the meantime, I've also created a , so you can follow along and get your environment ready for the next post.
