Deploying and running TIBCO Hawk® Container Edition in Kubernetes Cluster


    Manoj Chaurasia



    TIBCO Hawk® provides the industry's best, most sophisticated tool for monitoring and managing distributed applications and systems throughout the enterprise. With TIBCO Hawk, system administrators can monitor application parameters, behavior, and loading activities for all nodes in a local or wide-area network and take action when pre-defined conditions occur. In many cases, runtime failures or slowdowns can be repaired automatically within seconds of their discovery, reducing unscheduled outages and slowdowns of critical business systems. 


    Kubernetes is an open-source system for managing containerized applications across multiple hosts, providing basic mechanisms for the deployment, maintenance, and scaling of applications.

    For more information about Kubernetes, refer to the Kubernetes documentation at https://kubernetes.io/docs/concepts/

    TIBCO Hawk Container Edition 1.0.0 is a lightweight, low-footprint, containerized Hawk Agent along with other separate Hawk components such as the Cluster Manager and the TIBCO Hawk Console. TIBCO Hawk Container Edition helps you carry the same TIBCO Hawk monitoring and management experience from the enterprise world to the container world.

    Kubernetes is one of the most popular choices for PaaS deployments. This article describes how you can deploy TIBCO Hawk Container Edition Components in a Kubernetes Cluster.

    Cluster Design Approach

    TIBCO Hawk Container Edition has a few components that must interact in a certain way to form the TIBCO Hawk Domain that monitors TIBCO BusinessWorks™ Container Edition containers/applications. The following design approach builds this cluster by using Kubernetes features:

    Hawk Container Edition ClusterManager:

    • Seed node for the TIBCO Hawk Cluster; must be reachable from other containers
    • Stable network address, regardless of restarts
    • Kubernetes StatefulSet with a headless service is a perfect fit
    • Discoverable as pod_name.headless_service_name (DNS entry)
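The stable address produced by this pattern can be sketched as follows; the pod, service, and port names below match the sample Cluster Manager YAML later in this article:

```shell
# Sketch: how a StatefulSet pod behind a headless service gets a stable DNS name.
# Pod N of the set is always addressable as <statefulset-name>-N.<service-name>,
# regardless of pod restarts or rescheduling.
pod_name=hkce-clustermanager-set-0
headless_service=hkce-service
port=2561
echo "${pod_name}.${headless_service}:${port}"
```

This is exactly the value the other components use for their tcp_daemon_url setting.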

    Hawk Container Edition Agents:

    • Deployed on each node
    • Kubernetes DaemonSet is a perfect fit
    • Self URL = status.podIP (Kubernetes Downward API) - unique IP for each TIBCO Hawk Container Edition Agent
    • AMI_Session_URL = status.hostIP (for AMI applications to connect to Agent)

    TIBCO BusinessWorks Container Edition and TIBCO Hawk Agent communication

    • TIBCO BusinessWorks Container Edition instances on a node connect to the TIBCO Hawk Container Edition Agent pod on that node
    • AMI_Session_URL = status.hostIP

    [Figure: TIBCO Hawk Container Edition deployment in a Kubernetes cluster]

    Prerequisites

    Download and install the following CLI tools on your system:

    CLI        Download and Installation Instructions
    kops       https://github.com/kubernetes/kops/blob/master/docs/aws.md
    kubectl    https://kubernetes.io/docs/tasks/tools/install-kubectl/
    aws        https://aws.amazon.com/cli/


    Steps for deployment

    Step 1: Set up a Kubernetes Cluster on Amazon Web Services (AWS)

    1. Create an S3 bucket to store the cluster configuration and state. You can use either the AWS CLI or the AWS console to create the bucket.

    2. The sample AWS CLI command for creating the S3 bucket is:  aws s3 mb s3://hkce-bucket

    For more information on Amazon Simple Storage Service (Amazon S3), see the Amazon S3 Documentation at https://aws.amazon.com/documentation/s3/

    3. Create the Kubernetes cluster on AWS by using the following kops command:

     kops create cluster --zones us-west-2a --master-zones us-west-2a --master-size t2.large --node-size t2.large --name hkcecluster.k8s.local --state s3://<s3-bucket-name> --yes

    Where,

    • s3-bucket-name is the name of the S3 bucket created earlier (hkce-bucket).
    • hkcecluster.k8s.local is the name of the cluster being created. Use the k8s.local suffix to create a gossip-based Kubernetes cluster so that you can skip DNS configuration.

    For more information on the kops create cluster command either use the help parameter or refer to the kops tool documentation at https://github.com/kubernetes/kops/tree/master/docs
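The gossip-vs-DNS decision above hinges entirely on the cluster name's suffix; a small shell sketch (the helper function is ours, not part of kops) makes the rule explicit:

```shell
# Sketch: kops treats cluster names ending in .k8s.local as gossip-based,
# removing the need for a hosted DNS zone.
is_gossip() {
  case "$1" in
    *.k8s.local) echo "gossip" ;;
    *)           echo "dns" ;;
  esac
}

is_gossip hkcecluster.k8s.local    # gossip: no DNS zone required
is_gossip hkcecluster.example.com  # dns: requires a hosted DNS zone
```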

    4. Validate your cluster using the validate command:

     kops validate cluster --state s3://<s3-bucket-name>

    5. The node and master must be in the Ready state. The kops utility exports the cluster connection information to the kubeconfig file at ~/.kube/config, and kubectl uses this information to connect to the cluster.

    6. If needed, you can delete the cluster using the following command:

     kops delete cluster hkcecluster.k8s.local --state=s3://<s3-bucket-name> --yes

    Step 2: Create Docker image of Hawk Container Edition components

    Refer to the documentation: Building Hawk Container Edition Components Docker Images

    Step 3: Create AWS Repository

    Go to the Amazon EC2 Container Service (ECS) dashboard and create a repository with the same name as the Docker image of the Hawk Container Edition component. Upload the component image to the repository; the View Push Commands button shows the required commands.

     

    Note: AWS Repository name must be the same as the Docker image name of Hawk Container Edition component. For more information on how to create a repository in Amazon AWS, refer to https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html
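The push commands follow a fixed pattern; the sketch below constructs the repository URI as a dry run. ACCOUNT_ID and REGION are placeholders, so substitute your own values or simply copy the exact commands from the View Push Commands button:

```shell
# Sketch: tagging and pushing a component image to its ECR repository.
# The ACCOUNT_ID and REGION values are placeholders for illustration only.
ACCOUNT_ID=123456789012
REGION=us-west-2
IMAGE=hkce_agent
TAG=1.0
REPO_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${IMAGE}"
# Echoed as a dry run; drop the echo to actually tag and push.
echo "docker tag ${IMAGE}:${TAG} ${REPO_URI}:${TAG}"
echo "docker push ${REPO_URI}:${TAG}"
```

Note that the last path segment of REPO_URI is the image name, which is why the repository must share the image's name.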

    Step 4: Deploy Hawk Container Edition components on AWS

    1. Create the Kubernetes resources required to deploy the Hawk Container Edition cluster by using YAML files. These resources include deployments and services for the cluster. To deploy a Hawk Container Edition cluster, create:

    • a Hawk Cluster Manager node (pod) to start the cluster
    • a service to connect to Hawk Cluster Manager node
    • a Hawk agent node which connects to the Hawk Cluster Manager node service
    • a Hawk console node which connects to the Hawk Cluster Manager node service

    Sample YAML configurations for the Hawk Container Edition components are as follows:

    Hawk Cluster Manager YAML file:

     apiVersion: v1
     kind: Service
     metadata:
       name: hkce-service
     spec:
       ports:
       - port: 2561
         protocol: TCP
         targetPort: 2561
       selector:
         app: hkce-clustermanager
       clusterIP: None
     ---
     apiVersion: apps/v1
     kind: StatefulSet
     metadata:
       labels:
         app: hkce-clustermanager
       name: hkce-clustermanager-set
     spec:
       serviceName: hkce-service
       selector:
         matchLabels:
           app: hkce-clustermanager
       template:
         metadata:
           labels:
             app: hkce-clustermanager
         spec:
           containers:
           - name: hkce-clustermanager
             image: <aws_docker_registry>/hkce_clustermanager:1.0
             imagePullPolicy: Always
             env:
             - name: tcp_self_url
               value: hkce-clustermanager-set-0.hkce-service:2561
             - name: tcp_daemon_url
               value: hkce-clustermanager-set-0.hkce-service:2561
             ports:
             - containerPort: 2561
               protocol: TCP

     

    Hawk Agent YAML file:

     apiVersion: apps/v1
     kind: DaemonSet
     metadata:
       name: hkce-agent-set
       labels:
         app: hkce-agent
     spec:
       selector:
         matchLabels:
           name: hkce-agent-set
       template:
         metadata:
           labels:
             name: hkce-agent-set
         spec:
           containers:
           - name: hkce-agent
             image: <aws_docker_registry>/hkce_agent:1.0
             imagePullPolicy: Always
             env:
             - name: HOST_NAME
               valueFrom:
                 fieldRef:
                   apiVersion: v1
                   fieldPath: status.podIP
             - name: HOST_IP
               valueFrom:
                 fieldRef:
                   fieldPath: status.hostIP
             - name: tcp_self_url
               value: ${HOST_NAME}:2551
             - name: tcp_daemon_url
               value: hkce-clustermanager-set-0.hkce-service:2561
             - name: ami_tcp_session
               value: ${HOST_IP}:2571
             - name: auto_config_dir
               value: /tibco.home/hkce/1.0/autoconfig/
             - name: DOCKER_HOST
               value: unix:///var/run/docker.sock
             volumeMounts:
             - mountPath: /var/run/docker.sock
               name: docker-sock-volume
             ports:
             - containerPort: 2551
               name: agentport
               protocol: TCP
             - containerPort: 2571
               hostPort: 2571
               protocol: TCP
           volumes:
           - name: docker-sock-volume
             hostPath:
               path: /var/run/docker.sock

     

    Hawk Console YAML file:

     apiVersion: v1
     kind: Service
     metadata:
       name: hkcc-console-service
     spec:
       type: LoadBalancer
       ports:
       - port: 8083
         targetPort: 8083
       selector:
         app: hkcc-console
     ---
     apiVersion: v1
     kind: Pod
     metadata:
       name: hkcc-console
       labels:
         name: hkcc-console
         app: hkcc-console
     spec:
       containers:
       - name: hkcc-console
         image: <aws_docker_registry>/hkcc_console:1.0
         imagePullPolicy: Always
         env:
         - name: HOST_NAME
           valueFrom:
             fieldRef:
               apiVersion: v1
               fieldPath: status.podIP
         - name: tcp_self_url
           value: ${HOST_NAME}:2551
         - name: tcp_daemon_url
           value: hkce-clustermanager-set-0.hkce-service:2561
         - name: hawk_domain
           value: default
         ports:
         - containerPort: 2551
           name: consoleport
           protocol: TCP
         - containerPort: 8083

     

    BWCE YAML file:

     apiVersion: v1
     kind: Pod
     metadata:
       name: bwcehawkadapter
       labels:
         name: bwcehawkadapter
         app: hkce
     spec:
       containers:
       - name: bwcehawkadapter
         image: <aws_docker_registry>/hawkadapter:improved
         imagePullPolicy: Always
         env:
         - name: HOST_NAME
           valueFrom:
             fieldRef:
               apiVersion: v1
               fieldPath: status.podIP
         - name: HOST_IP
           valueFrom:
             fieldRef:
               fieldPath: status.hostIP
         - name: tcp_self_url
           value: ${HOST_NAME}:2551
         - name: ami_agent_url
           value: ${HOST_IP}:2571
         ports:
         - containerPort: 2551
           protocol: TCP

     

    For more details on the deployment strategy for each of the HKCE components, refer to: Hawk Container Edition Components YAML Files

    2. Run the kubectl create command with the YAML files to deploy the Hawk Container Edition cluster.

     kubectl create -f <component_file>.yml

    For example, the following are the YAML files for Hawk Container Edition components:

    • daemonstateful.yml - Hawk Cluster Manager
    • agentdaemonset.yml - Hawk Agent
    • consolepod.yml - Hawk Console

    Run the kubectl create command for each file to deploy the Hawk Container Edition cluster:

     kubectl create -f daemonstateful.yml
     kubectl create -f agentdaemonset.yml
     kubectl create -f consolepod.yml

    3. Get the external IP of the console service by using the get services command, and use that IP to connect to the console. For example:

     kubectl get services hkcc-console-service
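Because the console service is of type LoadBalancer, the address in the EXTERNAL-IP column combined with the service port gives the console URL. A small sketch, using a hypothetical ELB hostname as a stand-in for the real value:

```shell
# Sketch: forming the Hawk Console URL from the service's external address.
# EXTERNAL_HOST below is a made-up placeholder; read the real value from the
# EXTERNAL-IP column of the kubectl get services output.
EXTERNAL_HOST=a1b2c3d4.us-west-2.elb.amazonaws.com
CONSOLE_PORT=8083
echo "http://${EXTERNAL_HOST}:${CONSOLE_PORT}"
```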

    4. You can check the logs of individual Hawk component container pods using the following command:

     kubectl logs <pod>

