Deploying and running TIBCO Hawk® Container Edition on Google Kubernetes Engine


    Manoj Chaurasia

    Introduction

    Google Cloud Platform (GCP) is a suite of cloud computing services that runs on the same infrastructure Google uses internally for its end-user products, such as Google Search and YouTube. Alongside a set of management tools, it provides a series of modular cloud services including computing, data storage, data analytics, and machine learning.

    Google Kubernetes Engine (GKE) is a management and orchestration system for Docker containers and container clusters that run within Google's public cloud services. Google Kubernetes Engine is based on Kubernetes, Google's open source container management system.

    Prerequisites

    [Note: Skip this section if you have already installed and configured gcloud, kubectl, and the other required tools, and have built Docker images for applications such as TIBCO Hawk® Container Edition, TIBCO BusinessWorks™ Container Edition, etc.] Steps for getting started with GKE and deploying applications like TIBCO Hawk Container Edition:

    1. Install Docker for your OS

    2. Install kubectl for your OS. This is the main CLI tool for interacting with Kubernetes.

    3. Install and configure the gcloud CLI. macOS download: https://cloud.google.com/sdk/docs/quickstart-macos

      1. Extract the tar.gz file and run ./google-cloud-sdk/install.sh

        The installer guides you through authenticating to your Google Cloud account and lets you select the cloud project (the equivalent manual commands are sketched after this list).

      2. It will also ask you to select a default Google Compute Engine zone. For Pune, India, the closest option is #37 [asia-south1-a].

      3. Commands to check your environment:

         $ gcloud auth list     # shows your account info
         $ gcloud config list
         $ gcloud info

    4. Build the TIBCO Hawk Container Edition Docker images (https://docs.tibco.com/pub/hkce/2.0.0/doc/html/GUID-C51B12F3-4C9F-4FC0-9FEC-2F754AF626D5.html)

    5. Build the TIBCO BusinessWorks™ Container Edition image with the TIBCO Hawk® Microagent for TIBCO BusinessWorks™ Container Edition plugin (https://docs.tibco.com/pub/hkbwce/2.0.0/doc/html/GUID-764188FE-EA50-4E7F-BB99-EA8309A4A18A.html)

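    If you prefer to skip the interactive installer prompts, or need to re-configure the SDK later, the same setup can be done manually. A minimal sketch (the project ID and zone values are placeholders to substitute with your own):

     $ gcloud auth login
     $ gcloud config set project <PROJECT_ID>
     $ gcloud config set compute/zone asia-south1-a
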
    Getting Started

    Now that you have gcloud, kubectl, and the Docker images for your applications (TIBCO Hawk Container Edition / TIBCO BusinessWorks Container Edition) all set, it is time to deploy the applications on GKE.

    Create Cluster

    A Kubernetes cluster is a managed group of VM instances for running containerized applications.

    The procedure for creating a cluster is documented here: https://cloud.google.com/kubernetes-engine/docs/quickstart (choose the local shell option)

     $ export CLUSTER_NAME=<prefix>-hkce
     $ gcloud container clusters create ${CLUSTER_NAME}

     
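    By default, gcloud creates a three-node cluster in the zone configured earlier. If you want to pin these settings explicitly, the create command accepts flags such as --zone and --num-nodes (the values below are only examples):

     $ gcloud container clusters create ${CLUSTER_NAME} --zone asia-south1-a --num-nodes 3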

    Configuring kubectl for GKE

     $ gcloud container clusters get-credentials ${CLUSTER_NAME}

     
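    To confirm that kubectl now points at the new cluster, check the active context and list the nodes:

     $ kubectl config current-context
     $ kubectl get nodes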

    Get the project ID

     $ gcloud projects list
     $ export PROJECT_ID=<from previous command>

     

    Configure docker for using Google Cloud Repository (GCR)

     $ gcloud auth configure-docker

     

    Tag the docker images with GCR

    Show all the docker images on your local shell

     $ docker image list
     REPOSITORY            TAG   IMAGE ID       CREATED        SIZE
     hkce_console          2.0   e43536e8b94b   2 months ago   332MB
     hkce_clustermanager   2.0   19c4c9c39fa5   2 months ago   296MB
     hkce_agent            2.0   0726ab0f44f3   2 months ago   316MB
     bwcehk                2.0   367d96449b3b   2 months ago   316MB

     

    Tagging format:

     gcr.io/<PROJECT_NAME>/image_name[:version]

     e.g.

     $ docker tag hkce_agent:2.0 gcr.io/hawk-engineering/hkce_agent:2.0

     

    Do it for all other images:

     $ docker tag hkce_clustermanager:2.0 gcr.io/hawk-engineering/hkce_clustermanager:2.0
     $ docker tag hkce_console:2.0 gcr.io/hawk-engineering/hkce_console:2.0
     $ docker tag bwcehk:2.0 gcr.io/hawk-engineering/bwcehk:2.0

     

    Push the tagged images to GCR

     $ docker push gcr.io/hawk-engineering/hkce_agent:2.0
     $ docker push gcr.io/hawk-engineering/hkce_clustermanager:2.0
     $ docker push gcr.io/hawk-engineering/hkce_console:2.0
     $ docker push gcr.io/hawk-engineering/bwcehk:2.0

     

    Check the images on GCR

     $ gcloud container images list
     NAME
     gcr.io/hawk-engineering/fluentd-uldp
     gcr.io/hawk-engineering/hkbwcenew
     gcr.io/hawk-engineering/hkce_agent
     gcr.io/hawk-engineering/hkce_clustermanager
     gcr.io/hawk-engineering/hkce_console
     Only listing images in gcr.io/hawk-engineering. Use --repository to list images in other repositories.

     
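    To inspect the versions available for a particular image, gcloud can also list its tags (the image below is one of those pushed earlier):

     $ gcloud container images list-tags gcr.io/hawk-engineering/hkce_agent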

    Note: In clusters running GKE v1.11.x and older, there is a limitation that Cloud IAM cannot grant the ability to create a Kubernetes RBAC Role or ClusterRole, so you need to bind your account to the cluster-admin role yourself:

     $ export USER_ACCOUNT=<your email in lowercase>
     $ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user ${USER_ACCOUNT}

     

    Deploying TIBCO Hawk Container Edition applications on GKE

    [Figure: hkce_k8_0_0.png]

    Deploy TIBCO Hawk ClusterManager as a StatefulSet. StatefulSets represent a set of Pods with unique, persistent identities and stable hostnames that GKE maintains regardless of where they are scheduled. The state information and other resilient data for any given StatefulSet Pod are kept in persistent storage associated with the StatefulSet.

    Service and StatefulSet manifest file hkce_clustermanager.yaml:

     apiVersion: v1
     kind: Service
     metadata:
       name: hkce-service
       labels:
         app: hkce-service
     spec:
       ports:
       - port: 2561
         protocol: TCP
         targetPort: 2561
       selector:
         app: hkce-clustermanager
       clusterIP: None
     ---
     apiVersion: apps/v1
     kind: StatefulSet
     metadata:
       labels:
         app: hkce-clustermanager
       name: hkce-clustermanager-set
     spec:
       serviceName: hkce-service
       selector:
         matchLabels:
           app: hkce-clustermanager
       template:
         metadata:
           labels:
             app: hkce-clustermanager
         spec:
           containers:
           - name: hkce-clustermanager
             image: gcr.io/hawk-engineering/hkce_clustermanager:2.0
             imagePullPolicy: Always
             env:
             - name: tcp_self_url
               value: hkce-clustermanager-set-0.hkce-service:2561
             - name: tcp_daemon_url
               value: hkce-clustermanager-set-0.hkce-service:2561
             ports:
             - containerPort: 2561
               protocol: TCP

     

    In this manifest:

    • A Service object named hkce-service is created. The Service targets the application hkce-clustermanager, indicated by selector: app: hkce-clustermanager, and exposes port 2561. Because clusterIP is None, this is a headless Service; it controls the network domain used to reach the Pods deployed by the StatefulSet.
    • A StatefulSet named hkce-clustermanager-set is created.
    • The Pod template (spec: template) indicates that its Pods are labeled app: hkce-clustermanager.
    • The Pod specification (template: spec) indicates that the StatefulSet's Pods run one container, hkce-clustermanager, which runs the hkce_clustermanager image at version 2.0. The container image is hosted by Container Registry.

    Create this Service and StatefulSet using:

     $ kubectl apply -f hkce_clustermanager.yaml

     
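    To verify the rollout, check the StatefulSet and its Pod (the names come from the manifest above); the Pod should reach the Running state:

     $ kubectl get statefulset hkce-clustermanager-set
     $ kubectl get pods -l app=hkce-clustermanager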

    Deploy the TIBCO Hawk Agent as a DaemonSet. DaemonSets manage groups of replicated Pods; however, they attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes.

    DaemonSet manifest file hkce_agent.yaml:

     apiVersion: apps/v1
     kind: DaemonSet
     metadata:
       name: hkce-agent-set
       labels:
         app: hkce-agent
     spec:
       selector:
         matchLabels:
           name: hkce-agent-set
       template:
         metadata:
           labels:
             name: hkce-agent-set
         spec:
           containers:
           - name: hkce-agent
             image: gcr.io/hawk-engineering/hkce_agent:2.0
             imagePullPolicy: Always
             env:
             - name: HOST_NAME
               valueFrom:
                 fieldRef:
                   apiVersion: v1
                   fieldPath: status.podIP
             - name: HOST_IP
               valueFrom:
                 fieldRef:
                   fieldPath: status.hostIP
             - name: tcp_self_url
               value: ${HOST_NAME}:2551
             - name: tcp_daemon_url
               value: hkce-clustermanager-set-0.hkce-service:2561
             - name: ami_tcp_session
               value: ${HOST_IP}:2571
             - name: config_path
               value: /tibco.home/hkce/2.0/config/
             - name: DOCKER_HOST
               value: unix:///var/run/docker.sock
             volumeMounts:
             - mountPath: /var/run/docker.sock
               name: docker-sock-volume
             ports:
             - containerPort: 2551
               name: agentport
               protocol: TCP
             - containerPort: 2571
               hostPort: 2571
               protocol: TCP
           volumes:
           - name: docker-sock-volume
             hostPath:
               path: /var/run/docker.sock

     

    Create this DaemonSet using:

     $ kubectl apply -f hkce_agent.yaml

     
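    Because a DaemonSet schedules one agent Pod per node, the DESIRED and READY counts reported below should match the number of nodes in the cluster:

     $ kubectl get daemonset hkce-agent-set
     $ kubectl get pods -l name=hkce-agent-set -o wide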

    Deploy TIBCO BusinessWorks Container Edition (with the app and the TIBCO Hawk® Microagent for TIBCO BusinessWorks™ Container Edition plugin) as a Pod. Manifest file hkbwce.yml:

     apiVersion: v1
     kind: Pod
     metadata:
       name: bwcehawkadapter
       labels:
         name: bwcehawkadapter
         app: hkce
     spec:
       containers:
       - name: bwcehawkadapter
         image: gcr.io/hawk-engineering/hkbwcenew:2.0
         imagePullPolicy: Always
         env:
         - name: HOST_NAME
           valueFrom:
             fieldRef:
               apiVersion: v1
               fieldPath: status.podIP
         - name: HOST_IP
           valueFrom:
             fieldRef:
               fieldPath: status.hostIP
         - name: tcp_self_url
           value: ${HOST_NAME}:2551
         - name: ami_agent_url
           value: ${HOST_IP}:2571
         ports:
         - containerPort: 2551
           protocol: TCP

     

    Create this Pod using:

     $ kubectl apply -f hkbwce.yml

     
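    To confirm that the Pod started and that the Hawk microagent came up, check its status and logs (the exact log content depends on the application):

     $ kubectl get pod bwcehawkadapter
     $ kubectl logs bwcehawkadapter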

    Deploy TIBCO Hawk Console as a Pod and create a load balancer

    Load balancer and Pod manifest file hkce_console.yaml:

     apiVersion: v1
     kind: Service
     metadata:
       name: hkce-console-service
     spec:
       type: LoadBalancer
       ports:
       - port: 8083
         targetPort: 8083
       selector:
         app: hkce-console
     ---
     apiVersion: v1
     kind: Pod
     metadata:
       name: hkce-console
       labels:
         name: hkce-console
         app: hkce-console
     spec:
       containers:
       - name: hkce-console
         image: gcr.io/hawk-engineering/hkce_console:2.0
         imagePullPolicy: Always
         env:
         - name: HOST_NAME
           valueFrom:
             fieldRef:
               apiVersion: v1
               fieldPath: status.podIP
         - name: tcp_self_url
           value: ${HOST_NAME}:2551
         - name: tcp_daemon_url
           value: hkce-clustermanager-set-0.hkce-service:2561
         - name: hawk_domain
           value: default
         - name: hawk_console_repository_path
           value: /tibco.home/hkce/2.0/repo
         ports:
         - containerPort: 2551
           name: consoleport
           protocol: TCP
         - containerPort: 8083

     

    Create this load balancer and Pod using:

     $ kubectl apply -f hkce_console.yaml

     

    Access the Hawk Console UI

     $ kubectl get services
     NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
     hkce-console-service   LoadBalancer   10.63.246.76   35.244.49.218   8083:30901/TCP   6h
     hkce-service           ClusterIP      None           <none>          2561/TCP         15h
     kubernetes             ClusterIP      10.63.240.1    <none>          443/TCP          1d

     
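    The EXTERNAL-IP may show <pending> for a minute or two while GCP provisions the load balancer; --watch re-prints the line when it changes. Once the address is assigned, the Hawk Console UI is reachable in a browser at http://<EXTERNAL-IP>:8083 (here, http://35.244.49.218:8083):

     $ kubectl get service hkce-console-service --watch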

    Storage Options in GKE

    So far, the TIBCO Hawk Container Edition deployments above have not used a Persistent Volume to store the state of the Rulebase Repository so that it remains accessible even if the TIBCO Hawk Container Edition Console Pod is restarted or re-provisioned.

    There are various storage options provided by GKE. The concepts are covered here: https://cloud.google.com/kubernetes-engine/docs/concepts/volumes and https://kubernetes.io/docs/concepts/storage/persistent-volumes/

    Note: There is no need to create an explicit Persistent Volume (PV). When the Persistent Volume Claim (PVC) is created, GKE automatically creates a Persistent Volume and binds the PVC to it.

    Creating Persistent Volume Claim

    The manifest file for the persistent volume claim:

     kind: PersistentVolumeClaim
     apiVersion: v1
     metadata:
       name: hkce-pv-claim
     spec:
       storageClassName: standard
       accessModes:
       - ReadWriteOnce
       resources:
         requests:
           storage: 500Mi

     
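    Create the claim with kubectl apply and wait for its STATUS to show Bound, which confirms that GKE has provisioned a PV and attached it to the claim. The file name hkce_pv_claim.yml below is an assumption; use whatever name you saved the manifest under:

     $ kubectl apply -f hkce_pv_claim.yml      # file name assumed; adjust to your manifest
     $ kubectl get pvc hkce-pv-claim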

    Note: The storageClassName should be "standard"; otherwise, the claim cannot attach to the Persistent Volume.

    Using the persistent volume claim in TIBCO Hawk Container Edition Console

    The manifest hkce_console_pv.yml is updated as:

     apiVersion: v1
     kind: Service
     metadata:
       name: hkce-console-service
     spec:
       type: LoadBalancer
       ports:
       - port: 8083
         targetPort: 8083
       selector:
         app: hkce-console
     ---
     apiVersion: v1
     kind: Pod
     metadata:
       name: hkce-console
       labels:
         name: hkce-console
         app: hkce-console
     spec:
       volumes:
       - name: hkce-pv-storage
         persistentVolumeClaim:
           claimName: hkce-pv-claim
       containers:
       - name: hkce-console
         image: gcr.io/hawk-engineering/hkce_console:2.0
         imagePullPolicy: Always
         env:
         - name: HOST_NAME
           valueFrom:
             fieldRef:
               apiVersion: v1
               fieldPath: status.podIP
         - name: tcp_self_url
           value: ${HOST_NAME}:2551
         - name: tcp_daemon_url
           value: hkce-clustermanager-set-0.hkce-service:2561
         - name: hawk_domain
           value: default
         - name: hawk_console_repository_path
           value: /tibco.home/hkce/2.0/repo
         ports:
         - containerPort: 2551
           name: consoleport
           protocol: TCP
         - containerPort: 8083
         volumeMounts:
         - mountPath: "/tibco.home/hkce/2.0/repo"
           name: hkce-pv-storage

     
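    Most fields of a running Pod cannot be changed in place, so one way to roll out the updated manifest is to delete the earlier console deployment and re-apply (this assumes the original manifest was saved as hkce_console.yaml, as in the previous section):

     $ kubectl delete -f hkce_console.yaml     # removes the old console Pod and Service
     $ kubectl apply -f hkce_console_pv.yml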

    Note that the volume must reference the same PVC created earlier, and the mount path must point to wherever the application stores its data. For example, in TIBCO Hawk Container Edition the mount point is the Rulebase Repository path.

