Upgrading TIBCO BusinessWorks Container Engine Runtime in a Kubernetes Environment


    Dave Winstone

    1. Summary

    2. Building the BusinessWorks Container Edition Base Image

    3. Building the Application Container Image

    4. Deploying to Kubernetes

    5. Updating the Base Image

    6. Conclusion


    1. Summary

In this article I want to show you how to upgrade the TIBCO BusinessWorks Container Edition runtime (the BWCE base image) for an application/microservice deployed in a Kubernetes environment, achieving a zero-downtime upgrade without having to rebuild your application EAR file. This significantly reduces the time needed to upgrade your BusinessWorks Container Edition version, and it also means much less testing is required: you are not changing any of your business logic, and the application itself does not need to be rebuilt.

I'll be using a simple Greeting microservice: a REST API that responds with a message, the container hostname and the BusinessWorks runtime version:

[Screenshot: sample Greeting API response showing the message, container hostname and BWCE runtime version]


    Building the BusinessWorks Container Edition Base Image


There are plenty of instructions and examples of how to do this at TIBCO's official GitHub repo here, which I'd recommend you take a look at.

I'm going to start with my application built using version 2.7.3 of BusinessWorks Container Edition, then upgrade the runtime to 2.8.0.

Once you've followed the instructions from the TIBCO GitHub to clone the BWCE repo, and have downloaded the BWCE runtime from TIBCO's download site, you are ready to build the base image using a command similar to the following, run from the cloned bwce repo:
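The original command was lost from the article. A sketch matching the pattern described below (runtime zip first, image tag second) might look like this; the script name is an assumption, so check the cloned repo for the exact build script it ships:

```shell
# Build the BWCE base image from the cloned bwce repo.
# First argument: the runtime zip downloaded from TIBCO.
# Second argument: the image tag (registry address/port from this article).
# Script name is hypothetical — use the build script provided in the repo.
./createDockerImage.sh bwce-runtime-2.7.3.zip 127.0.0.1:3200/bwce:v2.7.3
```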

The first argument is the BWCE runtime zip file that you downloaded from TIBCO, and the second argument is the tag we'll be using. You'll notice that my tag includes an IP address and port; that's because I want to push this image to a registry. In this case I'm using microk8s, so once the base image is built, I can push it to the registry using this command:
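The push command itself was lost from the article; assuming the tag used above, it would be:

```shell
# Push the freshly built base image to the local microk8s registry
docker push 127.0.0.1:3200/bwce:v2.7.3
```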

    Building the Application Container Image

I'll adjust my application Dockerfile to include the correct base image tag as follows:
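The Dockerfile itself was not preserved in the article. A minimal sketch, assuming the base image tag used earlier and a hypothetical EAR file name for the Greeting application:

```dockerfile
# Base image built and pushed in the previous step
FROM 127.0.0.1:3200/bwce:v2.7.3

# Add the application EAR (file name is hypothetical) to the image root,
# where the BWCE container startup picks it up
ADD Greeting_1.0.0.ear /
```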

    Now to build the application and push it to the registry too:
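The build-and-push commands were lost from the article; assuming a hypothetical application image name, they would look like:

```shell
# Build the application image from the directory containing the Dockerfile,
# then push it to the microk8s registry (image name is an assumption)
docker build -t 127.0.0.1:3200/greeting:1.0 .
docker push 127.0.0.1:3200/greeting:1.0
```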

Deploying to Kubernetes (microk8s)

    To deploy my application to K8s, I need to create a yaml file that configures my application, creates a service and also defines the update strategy:
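The original manifest was not preserved. A minimal sketch consistent with the details given in the article (2 replicas, a NodePort service on 30092, and a rolling-update strategy); resource names, labels and the container port are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting
spec:
  replicas: 2
  selector:
    matchLabels:
      app: greeting
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: greeting
    spec:
      containers:
        - name: greeting
          image: 127.0.0.1:3200/greeting:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: greeting
spec:
  type: NodePort
  selector:
    app: greeting
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30092
```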

    To deploy the application from the command line, I can execute the following:
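The command was lost from the article; assuming the manifest is saved as greeting.yaml (filename is an assumption):

```shell
# Create the deployment and service from the manifest
kubectl apply -f greeting.yaml
```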

    When the application is ready (kubectl get deployments) you can test the application with a curl command (notice we defined the nodeport as 30092 in the yaml file):
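The curl command itself was not preserved; assuming the NodePort defined above and a hypothetical resource path, it would be along these lines:

```shell
# NodePort 30092 was defined in the service; the resource path is hypothetical
curl http://localhost:30092/greeting
```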

[Screenshot: curl response from the deployed Greeting service]

As we have 2 replicas, if we keep issuing the curl command we'll see that both pods are responding:
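A simple way to keep the requests flowing (as the article does later during the rolling update) is a shell loop; the URL follows the assumptions above:

```shell
# Call the API once a second to watch both replicas answering in turn
while true; do curl -s http://localhost:30092/greeting; echo; sleep 1; done
```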

So we're all good: we have our microservice API up and running, with 2 instances available and processing our messages. Now we want to keep the application running while we update the base image to a new version of the container runtime.

Updating the Base Image

To rebuild the base image to use v2.8.0 of BusinessWorks Container Edition, we first need to remove bwce-runtime-2.7.3.zip from the resources folder of the cloned bwce repo, then run the build command again (assuming we have downloaded the bwce-runtime-2.8.0.zip file from TIBCO), followed by a docker push:
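The commands were lost from the article. A sketch following the same two-argument build pattern as before (the script name and resources path are assumptions):

```shell
# Remove the old runtime zip from the resources folder of the cloned repo,
# then rebuild the base image with the 2.8.0 zip and push it to the registry
rm resources/bwce-runtime-2.7.3.zip
./createDockerImage.sh bwce-runtime-2.8.0.zip 127.0.0.1:3200/bwce:v2.8.0
docker push 127.0.0.1:3200/bwce:v2.8.0
```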

Then make a small change to our Dockerfile to use this new version (the tag is 127.0.0.1:3200/bwce:v2.8.0):
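Only the FROM line changes; the application EAR (name hypothetical, as before) stays exactly as it was:

```dockerfile
# Updated base image tag — this is the only change
FROM 127.0.0.1:3200/bwce:v2.8.0

# The application EAR is unchanged; no rebuild of the EAR is needed
ADD Greeting_1.0.0.ear /
```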

To rebuild the application, use the following (and don't forget to push to the registry):
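The commands were lost from the article; assuming a new application image tag to distinguish the rebuilt image (tag is an assumption):

```shell
# Rebuild the application image on the new base and push it
docker build -t 127.0.0.1:3200/greeting:2.0 .
docker push 127.0.0.1:3200/greeting:2.0
```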

Next, we need to update the yaml file that describes the deployment in Kubernetes to point to the new version of the image, and then re-apply it to the deployment using the kubectl command:
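Assuming the manifest filename and image tag used in the earlier sketches, the edit and re-apply would look like:

```shell
# In greeting.yaml, point the container at the new image tag, e.g.
#   image: 127.0.0.1:3200/greeting:2.0
# then re-apply the manifest and watch the rolling update progress
kubectl apply -f greeting.yaml
kubectl rollout status deployment/greeting
```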

[Screenshot: kubectl apply of the updated yaml triggering the rolling update]

I deliberately left my curl command running so that, as Kubernetes performs the rolling update, we can see the new pod start responding with the new version of the BWCE container runtime while another instance of the pod is still being started. When the update completes, you can see that all instances are running the latest version:

[Screenshot: curl output showing all instances responding with the new BWCE runtime version]

    Conclusion

So there we have it: we have successfully upgraded our BWCE container runtime in a Kubernetes environment without having to upgrade the entire infrastructure, saving a lot of time, effort and, of course, money.

When you start applying DevOps principles to this approach, you can see how this level of automation can significantly and positively impact your development and operations costs.

As you can see from the screenshot below, this approach does not force you to upgrade every microservice to the same runtime version at the same time. Different versions can co-exist, which allows you to plan upgrades alongside existing development and maintenance activities rather than scheduling a separate activity just for upgrading the platform. Co-existence of containers running different versions of the core runtime makes much better use of developer and operations time.

[Screenshot: responses from containers running different BWCE runtime versions co-existing in the same cluster]

