  • Manoj Chaurasia
This is a short guide on how to run the JDBC basic sample using TIBCO BusinessWorks™ Container Edition with Docker.
The complete information can be found in the TIBCO BusinessWorks™ documentation; the purpose of this post is only to add extra information that can be helpful when running the sample. The sample uses Oracle by default. We use MySQL because it is the database used by the monitoring application by default, so we already have the container running on the machine.
    The first step is to start the mysql container.
We use the docker-compose.yml file provided with the software. This compose file specifies four containers (the monitoring app, mysql, postgres, and mongodb). Only mysql is required for this sample, so feel free to comment out or delete the other containers or any lines not needed.
Note:
We have modified the default file to add a network (my_network) so that the mysql and monitoring app containers are on the same network and the monitoring app can talk to mysql directly. The monitoring app is not used in this sample, but the same concept is used to link the sample JDBC application to the mysql db container at runtime.
```yaml
version: '3.0'
services:
  mysql_db:
    image: mysql:5.5
    container_name: mon-mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: bwcemon
      MYSQL_ROOT_PASSWORD: admin
    volumes:
      - mysql_data:/var/lib/mysql
      - ./dbscripts/mysql:/docker-entrypoint-initdb.d
    networks:
      - my_network
  postgres_db:
    image: postgres:latest
    container_name: mon-postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: bwcemon
      POSTGRES_PASSWORD: admin
    volumes:
      - postgres_data:/var/lib/postgres
      - ./dbscripts/postgres:/docker-entrypoint-initdb.d
    networks:
      - my_network
  mon_app:
    build: .
    ports:
      - "8080:8080"
    #links:
    #  - mysql_db
    #  - postgres_db
    environment:
      DB_URL: mysql://admin:admin@mon-mysql:3306/bwcemon
      PERSISTENCE_TYPE: mysql
      #DB_URL: postgresql://admin:admin@mon-postgres:5432/bwcemon
      #PERSISTENCE_TYPE: postgres
    networks:
      - my_network
volumes:
  #mongo:
  mysql_data:
  postgres_data:
networks:
  my_network:
```

To start the containers defined in the file, run this from the folder containing the yml file:

```shell
docker-compose up -d
```
    Note:
In this case, the folder containing the file is bwce-mon, so the folder name is used as a prefix for the network name. Run docker-compose up in that folder so that the relative paths used for the db scripts resolve correctly.
We can see the mysql container running:
```shell
docker container ls
CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS          PORTS                    NAMES
d8f187397115   mysql:latest      "docker-entrypoint.s…"   39 minutes ago   Up 39 minutes   0.0.0.0:3306->3306/tcp   mon-mysql
3dd22c7b8ab6   bwcemon_mon_app   "npm start"              39 minutes ago   Up 39 minutes   0.0.0.0:8080->8080/tcp   bwcemon_mon_app_1
```
The compose file exposes the mysql port 3306 on the host. In this way, the db can be accessed externally by applications that are not in the same Docker network (bwcemon_my_network). You can use SQL Developer to browse the db. Now we can use Business Studio to run our sample. Attached is the zip file with the sample modified to use the mysql db. The only changes are to use the mysql driver and the URL string:
    jdbc:mysql://localhost:3306/bwcemon  
The hostname is localhost. This is important: it means our application is connecting to mysql on the port exposed on the host.
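A hypothetical helper (not part of the sample project) makes the point explicit: the only thing that changes between local debugging and the container deployment is the host portion of the JDBC URL.

```python
# Illustrative only: shows how the JDBC URL differs between debug time in
# Business Studio (connecting through the port exposed on the host) and the
# container deployment (connecting to the mysql container by name).
def jdbc_mysql_url(host: str, port: int = 3306, database: str = "bwcemon") -> str:
    """Build a MySQL JDBC URL from its parts."""
    return f"jdbc:mysql://{host}:{port}/{database}"

# In Business Studio (debug time) the database is reached via the host:
studio_url = jdbc_mysql_url("localhost")     # jdbc:mysql://localhost:3306/bwcemon

# Inside the Docker network the mysql container's name is the hostname:
container_url = jdbc_mysql_url("mon-mysql")  # jdbc:mysql://mon-mysql:3306/bwcemon
```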
Note:
We're using the bwcemon database used by the monitoring app. This is just for simplicity; feel free to create another one.
How to install the mysql driver for local testing is not shown here. It is the same procedure used for TIBCO BusinessWorks 6, and it is explained in the documentation.
Once you have checked that the sample runs fine at debug time, we can move to the next step and create a container for our application. Remember to set Docker as the container platform before creating the ear file, and set the docker profile as default. To use the JDBC driver in our container we need to add it to the TIBCO BusinessWorks™ Container Edition runtime image (instructions on how to build this image for the first time are in the documentation).
Move to the folder:

```
/bwce/2.3/config/drivers/shells/jdbc.mysql.runtime/runtime/plugins
```

and copy the folder com.tibco.bw.jdbc.datasourcefactory.mysql
    in the same directory where you have the following Dockerfile :
```dockerfile
FROM tibco/bwce:latest
COPY com.tibco.bw.jdbc.datasourcefactory.mysql /resources/addons/jars/com.tibco.bw.jdbc.datasourcefactory.mysql
```
This is only to avoid having to put the full path in the COPY statement when the Dockerfile is in a different folder.
tibco/bwce:latest is the default TIBCO BusinessWorks™ Container Edition image. We are going to create a new image by adding another layer.
    docker build -t tibco/bwce_jdbc .  
tibco/bwce_jdbc is the name we chose for this image. The '.' specifies that the Dockerfile in the current folder should be used. Now we can create a new image (or modify the existing one, your choice) by adding the ear file. As with the previous image, the simplest way is to have the Dockerfile and the ear in the same folder:
```dockerfile
FROM tibco/bwce_jdbc:latest
MAINTAINER Tibco
ADD tibco.bwce.sample.palette.jdbc.Basic.application_1.0.0.ear /
```
    So:
    docker build -t jdbc_mysql .
In this case, I called my image jdbc_mysql; the name can of course be changed. Now that we have an image with the JDBC drivers and the ear, we can create a container. In this case I also use a compose file:
```yaml
version: '3.0'
services:
  bwce-jdbc-basic-app:
    image: jdbc_mysql
    container_name: bwce-jdbc-basic-app
    environment:
      DB_USERNAME: admin
      DB_PASSWORD: admin
      DB_URL: jdbc:mysql://mon-mysql:3306/bwcemon
networks:
  default:
    external:
      name: bwcemon_my_network
```
There are 3 important things to note:
- The image name is jdbc_mysql. If you changed the image name in the previous step, update the value in the compose file.
- In the DB_URL jdbc:mysql://mon-mysql:3306/bwcemon, mon-mysql is used as the hostname (in Studio it was localhost). In this case the container connects directly to the mysql container; this is possible because they are on the same network. It also works if the mysql port is not externally exposed.
- bwcemon_my_network is added at the end of the file to specify the use of an existing network.
    So let's run this container:
    docker-compose up -d  
To check it is running:
```shell
docker container ls
CONTAINER ID   IMAGE             COMMAND                  CREATED             STATUS             PORTS                    NAMES
51dcafe10386   jdbc_mysql        "/scripts/start.sh"      46 seconds ago      Up 45 seconds                               bwce-jdbc-basic-app
d8f187397115   mysql:latest      "docker-entrypoint.s…"   About an hour ago   Up About an hour   0.0.0.0:3306->3306/tcp   mon-mysql
3dd22c7b8ab6   bwcemon_mon_app   "npm start"              About an hour ago   Up About an hour   0.0.0.0:8080->8080/tcp   bwcemon_mon_app_1
```
    We can check the appnode logs:
    docker container logs bwce-jdbc-basic-app
It's possible to check that the containers are in the same network:
    docker network inspect bwcemon_my_network  
A subset of the output of the previous command shows that the containers mon-mysql and bwce-jdbc-basic-app are in the same network:
```json
"Containers": {
    "3dd22c7b8ab6a73798057f9357f421bc0192c2ccee85f9b3968cd30423058dcc": {
        "Name": "bwcemon_mon_app_1",
        "EndpointID": "363b6509560449dcb660654f707d3a6e309ae9777b4bba487d5569343793486f",
        "MacAddress": "02:42:ac:17:00:03",
        "IPv4Address": "172.23.0.3/16",
        "IPv6Address": ""
    },
    "51dcafe103867f4712a35952a26b85f90c42dd54f9820ab313a9ab8e94d928fd": {
        "Name": "bwce-jdbc-basic-app",
        "EndpointID": "92b542ca3f33edea791fa5321a482b43c05b0917f1fa75f6fb3232fe5308289e",
        "MacAddress": "02:42:ac:17:00:05",
        "IPv4Address": "172.23.0.5/16",
        "IPv6Address": ""
    },
    "c2ff062453e8fee8c465bce700eaf196e48c445acb1b8f993a7dbece76ea0717": {
        "Name": "mon-postgres",
        "EndpointID": "58b5907acc900fd40d85c28dc980e4bcab0a8eea2c24eb9ee8b792a7d7ac3ba6",
        "MacAddress": "02:42:ac:17:00:02",
        "IPv4Address": "172.23.0.2/16",
        "IPv6Address": ""
    },
    "d8f18739711506581c4338acb599284c859d16a52b3698d5fac8a1aab3b9b5ce": {
        "Name": "mon-mysql",
        "EndpointID": "5d093629cb99c30450580ee001b2bb9fdb3cb638eb723d7ed501fbeae7f376ee",
        "MacAddress": "02:42:ac:17:00:04",
        "IPv4Address": "172.23.0.4/16",
        "IPv6Address": ""
    }
}
```
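If you want to script this membership check instead of reading the output by eye, the JSON printed by `docker network inspect` can be parsed directly. A minimal sketch (the trimmed-down sample string below mimics the layout of the output above):

```python
import json

def containers_in_network(inspect_output: str) -> set:
    """Return the container names attached to a network, given the JSON
    printed by `docker network inspect <network>` (a list with one entry)."""
    network = json.loads(inspect_output)[0]
    return {c["Name"] for c in network.get("Containers", {}).values()}

# Example with a trimmed-down version of the output shown above:
sample = '[{"Containers": {"51dc": {"Name": "bwce-jdbc-basic-app"}, "d8f1": {"Name": "mon-mysql"}}}]'
names = containers_in_network(sample)
assert {"mon-mysql", "bwce-jdbc-basic-app"} <= names
```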
This is only one of the possible configurations for running the sample. Having both containers in the same network is an easy way for them to communicate in a simple setup. Using a compose file is the best option for running a container, as you have more control over the parameters used, and the same file can also be used in a multi-node environment with Docker Swarm.
    Hope this guide is helpful.
    bwce-mon.7z
    tibco.bwce_.sample.palette.jdbc_.7z
    tibco.bwce_.sample.palette.jdbc_.basic_.7z

    Manoj Chaurasia
TIBCO BusinessWorks™ is a Java-based platform; however, normally very little development is done in Java. At its heart, TIBCO BusinessWorks is an XSLT processing engine with lots of connectivity components.
    REPROCESSING OF FAILED TRANSACTIONS
    Write a Rulebase to verify the log for reprocessing failed transactions.

    Select the TraceLevel method in EventLog microagent for logging event.

Provide values for conditions to be monitored in Test Editor.

    The alert message is set to display errors.

    Note:
    .hrb File created for the Reprocessing of failed transactions:

    MONITORING OF JVM PARAMETERS
Monitoring of JVM parameters in TIBCO requires a procedure similar to the one used for:
    Monitoring of memory and virtual memory.
    Please refer to the previous post MONITORING OF MEMORY, VIRTUAL MEMORY
    Monitoring of Threads.

    Manoj Chaurasia
    This article is focused on setting up an EKS cluster and the possible pitfalls that you may experience while doing so. Hopefully, this will be helpful in setting up your own EKS cluster! We will focus on some of the major milestones in the setup.
To get started, we suggest looking at the official documentation, https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html. If you follow this, you should be able to get everything set up, but issues may arise, so I'll list the most common ones below.
    Possible issues:
- The Access Key and Secret Access Key haven't been set yet. In order for your computer to connect to the EKS cluster, it needs these keys to authenticate you as the actual user. These keys can be set by running "aws configure" in your terminal. Please keep in mind that this stores your keys, so only do this on a private computer that only you have access to; your keys essentially give access to your account.
- Have the proper versions of the required CLIs. This mainly concerns kubectl (1.10) and the AWS CLI (1.15). Older versions of the AWS CLI do not support EKS functions. Upgrading the AWS CLI can be a pain if you do not have the newest version of Python 2 or 3 along with pip, but it must be done.
- Make sure you don't skip the heptio-authenticator step in the getting-started guide. This is very important to install, or else your cluster won't authenticate your CLI requests.
- Make sure the name of your config file matches the name of your cluster. This makes it easy to manage in case you have multiple K8s config files. Also, make sure to export that config file to KUBECONFIG in either your bash_profile or bashrc file; that way you don't need to export it every time you open a new terminal session.
- Create proper policies and roles for security reasons. Don't assign your cluster administrative rights because you are being lazy and can't be bothered to create a new policy. Protect your cluster! Create appropriate policies!
These are just a few things that may come up.
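The minimum-version check above (kubectl 1.10, AWS CLI 1.15) is easy to get wrong with plain string comparison, since "1.9" sorts after "1.15" lexically. Here is a small illustrative sketch, assuming plain dotted version strings; it is not an official AWS or Kubernetes utility:

```python
def meets_minimum(installed: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, e.g. '1.15.2' >= '1.15'."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    inst, mini = to_tuple(installed), to_tuple(minimum)
    # Pad the shorter tuple with zeros so '1.15' compares like '1.15.0'
    width = max(len(inst), len(mini))
    pad = lambda t: t + (0,) * (width - len(t))
    return pad(inst) >= pad(mini)

# The guide calls for at least kubectl 1.10 and AWS CLI 1.15:
assert meets_minimum("1.15.2", "1.15")      # new enough AWS CLI
assert not meets_minimum("1.9.7", "1.10")   # too-old kubectl is caught
```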
If you're a beginner, we suggest just using the WebUI to create your cluster and set up your roles and policies. This simplifies the process and makes it much more intuitive. Also, you have the choice to create a new VPC or use an existing one. We suggest using an existing one, since it has everything you need on it (you don't want to accidentally forget something). After you've set up your control plane, you should see something like this.


We will use our certificate authority, cluster ARN, and API server endpoint for some of the config files, so keep note of them (follow the getting started guide).
After you set that up, you will need to deploy your worker nodes on your AWS account. This is done with a CloudFormation script (provided on the getting started page). Just fill in the parameters it asks for. This should take 5-10 minutes to deploy. Once done, on the CloudFormation page, navigate to the Outputs tab. Keep a note of the value there, as you will need it when binding your worker nodes to your control plane.
Continue following the getting started guide. At the end of it, you should be able to run "kubectl get svc" and get an output that shows your Kubernetes service. If not, or if you get an error, check that you've downloaded and installed the heptio-authenticator correctly, and that whatever role/policy combination you are using has the right permissions. If you do see a service, your EKS cluster is up and running and you can start deploying projects onto it.
    If you wish to have a UI to work with, follow this guide: https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html I suggest it for beginners. It's also easier to demo/talk about (more interesting).

    Manoj Chaurasia
    AWS recently made Amazon EKS generally available to the public in us-east-1 (N. Virginia) and us-west-2 (Oregon) with more regions to come in the future. Essentially, EKS is an easy way to deploy a Kubernetes cluster on AWS, where you don't have to manage the Control Plane nodes; all you need to worry about are the worker nodes. This makes it a lot easier to handle while simplifying the process.  Also, other AWS services integrate directly with EKS, so if you plan to use ECR as your repository you no longer need to worry about access tokens. Or maybe you want to use Cloudwatch for more control on the management/logging side. Either way, you are staying within the AWS ecosystem.
TIBCO BusinessWorks™ Container Edition (BWCE) was built to work on any PaaS/IaaS, and Amazon EKS is no different. If you've built BWCE applications for other PaaS environments (Kubernetes or something else) and want to deploy them to EKS, it's just a matter of taking the EAR file generated from BWCE and pushing it to EKS. There is no need to go back into BusinessWorks Studio to refactor or rebuild; it already works natively as built. This way you get the benefits of Amazon's cloud deployment knowledge and experience coupled with the same CI/CD pipeline you use today, regardless of deployment location.
    Here's a short community post on setting up your EKS cluster with some notes on possible issues you may face.
The video below goes over how to deploy your BusinessWorks Container Edition application to Amazon EKS. In the future we will also post more advanced videos that highlight certain features!
    More Advanced Topics:
    Config Maps on EKS:

    Manoj Chaurasia
TIBCO BusinessWorks™ provides REST samples, but they are pretty complicated. Here are much simpler examples using only the File and XML palettes. There are step-by-step instructions and also ready-made project examples.
    1. Simple REST example: Download the word document with pictures and the project in the zip file testrestexample.zip from Resources below.
2. A more complicated example with a multi-operation subprocess. This example continues from Step #1. Download the word document with pictures and the project in the zip file testrestexamplewithsubprocess.zip from Resources below.
    testrestexample.zip
    testrestexamplewithsubprosess.zip

    Manoj Chaurasia
Amazon recently announced AWS Fargate during re:Invent 2017. With Fargate, instead of having to use EC2 instances (VMs), you can use just a container. Fargate provisions a container within the platform itself for your applications without your having to deal with all the underlying infrastructure. By using a combination of ECS and Fargate, you no longer have to worry about keeping your EC2 instances up to date with the latest security patches. Amazon manages the Fargate platform, while still allowing some control to manage your applications.


    Of course, there are some use cases where Fargate won't be the right choice. Let's say your application requires bridge networking, Fargate doesn't support that so you would have to use the traditional ECS + EC2 instances model for those container deployments. Or if you want to have control of the instances that are running your containers, EC2 would be a better choice. But Amazon has done a good job allowing users to use both Fargate and traditional EC2 instance deployments on the same cluster at the same time.
That being said, TIBCO BusinessWorks™ Container Edition allows you to deploy applications to ECS using both "backend" models: ECS + EC2 and ECS + Fargate can be used as deployment platforms with minimal changes. This ties into the idea that TIBCO BusinessWorks Container Edition was built to work on your PaaS and IaaS of choice, and even though Fargate was just announced/released a few days ago (November 30th), TIBCO BusinessWorks Container Edition applications work on it from day one.
    Here's a simple video that walks through this process from application design to deployment on ECS with Fargate: 

    Manoj Chaurasia
Prerequisite
Create a sample TIBCO BusinessWorks™ Container Edition application and create a Docker image of the same application.
    Push the application image to the Docker hub
    Alibaba cloud account with Alibaba container service enabled.
    Procedure
    Create Application
    Create Cluster
    Log on to the Container Service console.
    Click Clusters in the left navigation pane, and then click Create Cluster in the upper-right corner.
    Enter the basic information of the cluster. Cluster Name: The name of the cluster to be created.
    Set the network type of the cluster.
You can set the network type to Classic or VPC. Corresponding ECS instances and other cloud resources are managed under the corresponding network environment.
If you select Classic, no additional configuration is required. The classic network is a public basic network uniformly planned by Alibaba Cloud; the network address and topology are assigned by Alibaba Cloud and can be used without special configuration.
If you select VPC, you need to configure the relevant information. VPC enables you to build an isolated network environment based on Alibaba Cloud. You will have full control over your own virtual network, including a free IP address range, network segment division, route table, and gateway configuration. You need to specify a VPC, a VSwitchId, and the starting network segment of a container (the subnet segment to which the Docker containers belong; for the convenience of IP management, the containers of each virtual machine belong to a different network segment, and the container subnet segment must not conflict with the virtual machine segment). It is recommended that you build an exclusive VPC/VSwitchId for the container cluster to prevent network conflicts.
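The warning about the container subnet conflicting with the virtual machine segment can be checked up front. A small sketch using Python's standard ipaddress module (the example CIDR ranges are made up for illustration):

```python
import ipaddress

def subnets_conflict(container_cidr: str, vswitch_cidr: str) -> bool:
    """True if the container subnet overlaps the VSwitch (VM) segment."""
    return ipaddress.ip_network(container_cidr).overlaps(
        ipaddress.ip_network(vswitch_cidr))

# A container segment inside the VSwitch range would conflict:
assert subnets_conflict("192.168.1.0/24", "192.168.0.0/16")
# A disjoint range is safe to use for the containers:
assert not subnets_conflict("172.18.0.0/16", "192.168.0.0/16")
```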
    Add nodes.
You can create a cluster with nodes, or create a zero-node cluster and then add existing nodes to it. For information about how to add existing nodes to the cluster, refer to Add an existing ECS instance.
Set the operating system of the node. Operating systems such as 64-bit Ubuntu 14.04 and 64-bit CentOS 7.0 are supported.
    Configure the ECS instance specifications
You can specify different instance types and quantities, the capacity of the data disk (the ECS instance has a 20 GB system disk by default), and the logon password.
    If you set the network type to VPC, by default, the Container Service configures an EIP for each ECS instance under the VPC. If this is not required, select Do not Configure Public EIP. However, you will then need to configure the SNAT gateway.
    Configure EIP.
    Create a Server Load Balancer instance.
    When a cluster is created, a public network Server Load Balancer instance is created by default. You can access the container applications in the cluster through this Server Load Balancer.
    This is a Pay-As-You-Go Server Load Balancer instance.
    Click Create Cluster.
After the cluster is successfully created, you can see it in the cluster list.
    Log on to the Container Service console.
    Click Applications in the left navigation pane and click Create Application in the upper-right corner. Set the basic application information.
    Name: The name of the application to be created. It must contain 1~64 characters and can be composed of numbers, Chinese characters, English letters, and hyphens (-).
    Version: The version of the application to be created. By default, the version is 1.0.
Cluster: The cluster to which the application will be deployed.
    Update Method: The release method of the application. You can select Standard Release or Blue-Green Release.
    Description: Information on the application. It can be left blank and, if entered, cannot exceed 1,024 characters. This information will be displayed on the Application List page.
    Pull Docker Image: When selected, Container Service pulls the latest Docker image in the registry to create the application, even when the tag of the image does not change.
In order to improve efficiency, Container Service caches the image; at deployment, if the tag of the image is consistent with that of the local cache, Container Service uses the cached image instead of pulling it from the registry. Therefore, if you modify your code and image but do not modify the image tag, Container Service will use the old locally cached image to deploy the application. When this option is selected, Container Service ignores the cached image and re-pulls the image from the registry whether or not the tag matches the cached one, ensuring that the latest image and code are always used.
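The caching behaviour described above amounts to a simple decision rule. The following is an illustrative toy sketch of that rule, not Container Service's actual implementation:

```python
def should_pull(image_tag: str, cached_tags: set, pull_latest_selected: bool) -> bool:
    """Decide whether to pull from the registry or reuse the local cache.

    Mirrors the behaviour described in the text: without 'Pull Docker Image'
    selected, a matching cached tag short-circuits the pull, so a re-pushed
    image with an unchanged tag would NOT be fetched.
    """
    if pull_latest_selected:
        return True  # always re-pull, ignoring the cache
    return image_tag not in cached_tags

cache = {"myapp:1.0"}
assert should_pull("myapp:1.0", cache, True)        # option selected: forced re-pull
assert not should_pull("myapp:1.0", cache, False)   # cached tag wins, stale image risk
assert should_pull("myapp:2.0", cache, False)       # new tag is always pulled
```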
    Click Create with Image
    Set the Image Name and Image Version.
Set the Image Name to the image that we have already pushed to the Docker Hub.
    Set the number of containers (Scale).
    Set the Network Mode.
    Currently, the Container Service supports two network modes: Default and host. If you do not set this parameter, the Default mode is used by default.
    Set the Restart parameter, namely whether to restart the container automatically in case of exception.
    Set the launch command (Command and Entrypoint) of the container.
    If specified, this will overwrite the image configuration.
    Set the resource limits (CPU Limit and Memory Limit) of the container.
    Set the Port Mapping, Web Routing, and Load Balancer parameters.
Note: Add web routing and map the container port to a domain name of your choice, so that once the container is running, users can access the application by a domain name of the form <domain name>.<region name>.alicontainer.com.
    Set the container Data Volume.
    Set the Environment variables.
Set the container Labels.
    Set whether to enable container Smooth Upgrade.
    Set the container Across Multiple zones settings.
You can select Ensure to distribute the containers in two different zones; if you select this option, container creation fails if there are fewer than two zones in the current cluster, or if the containers cannot be distributed in two different zones due to limited machine resources. If you select Try best, the Container Service will distribute the containers across two different zones where possible, and the containers will still be created successfully even if they cannot be deployed in two different zones.
    If you do not set this setting, the Container Service will distribute the containers in a single zone by default.
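The difference between the Ensure and Try best policies can be summed up in one predicate. This is an illustrative sketch of the rule as described above, with made-up policy names, not Alibaba's actual code:

```python
def can_create(zones_available: int, policy: str) -> bool:
    """Whether container creation succeeds under the cross-zone policy.

    'ensure'  : fail unless the containers can span two different zones.
    'try_best': spread across two zones when possible, but always create.
    """
    if policy == "ensure":
        return zones_available >= 2
    return True  # 'try_best' (or the unset single-zone default) never blocks creation

assert not can_create(1, "ensure")  # Ensure fails with a single zone
assert can_create(1, "try_best")    # Try best still creates the containers
assert can_create(2, "ensure")      # two zones satisfy Ensure
```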
    Set the container Auto Scaling rules.
    Click Create and the Container Service creates the application according to the preceding settings
Once the container is started, you will see the service being created along with the application.
Click on Services and you will get details about the service.
Click on the endpoint and append swagger to the URL.
    Provide input in Swagger and check the output
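Putting the last two steps together: the Swagger UI is reached by appending swagger to the service endpoint URL. A trivial hypothetical helper (the endpoint below is a placeholder, not a real deployment):

```python
def swagger_url(endpoint: str) -> str:
    """Append the swagger path to a service endpoint URL."""
    return endpoint.rstrip("/") + "/swagger"

# Hypothetical endpoint following the <name>.<region>.alicontainer.com pattern:
url = swagger_url("http://bwce-app.us-west-1.alicontainer.com/")
assert url == "http://bwce-app.us-west-1.alicontainer.com/swagger"
```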

    Manoj Chaurasia
This document highlights various components included in the TIBCO ActiveMatrix BusinessWorks™ Managed File Transfer palette and is intended to help garner an understanding of how to use the palette. It is supplementary material and is not intended to replace existing documentation.
    This document is applicable to TIBCO® Managed File Transfer Command Center, Internet Server, Platform Server, and ActiveMatrix BusinessWorks
    Microsoft Word - TIBCO MFT - BusinessWorks MFT Palette.docx.pdf
