  • Richard Flather
    Configuration of EMS to bridge to FTL
    Configuration of FTL to bridge to TIBCO Quasar powered by Apache Pulsar
    Configuration of bidirectional (sink and source) connectors to bridge Pulsar and FTL for simple string messages
    Simple tests to ensure bidirectional message transfer between EMS via FTL and Pulsar. For sending/receiving more complex message structures between the components, refer to the appropriate TIBCO messaging component documentation.
    How_to_Configure_EMS_Pulsar_Bridge.pdf

    Manoj Chaurasia
    How to parse a large XML document is a common problem in XML applications. A large XML document typically has many repeatable elements, and the application needs to handle these elements iteratively. The problem is obtaining the elements from the document with the least possible overhead. Sometimes XML documents are so large (100MB or more) that they are difficult to handle with traditional XML parsers.
    One traditional parser is Document Object Model (DOM) based. It is easy to use, supports navigation in any direction (e.g., to a parent or previous sibling), and allows arbitrary modifications. But in exchange, DOM parses the whole document and constructs a complete document tree in memory before we can obtain any elements. It may consume large amounts of memory when parsing large XML documents.
    TIBCO ActiveMatrix BusinessWorks™ uses XML in a similar way to DOM. It loads the entire XML document into memory as a tree. Generally this is good, as it provides a convenient way to navigate, manipulate, and map XML with XPath and XSLT. But it also shares the drawback of DOM: with large XML files, it may occupy too much memory and, in extreme situations, may cause an OutOfMemory error.
    Simple API for XML (SAX) may be a solution, but as a pure push model it gives the application little control and may be too complicated for this specific task. With StAX, you can split large XML documents into chunks efficiently without the drawbacks of traditional push parsers.
    This article shows how to retrieve repeatable information from XML documents and handle it separately. It also shows how to implement a solution for large XML files in BW with StAX, the Java Code activity, and the File Poller activity.
    What is StAX
    Streaming API for XML (StAX) is an application programming interface (API) to read and write XML documents in the Java programming language.
    StAX offers a pull parser that gives client applications full control over the parsing process. The StAX parser provides a "cursor" in the XML document. The application moves the "cursor" forward, pulling the information from the parser as needed.
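    As a minimal sketch of the cursor API (class and element names here are illustrative, not from the article's project), the following moves the cursor through a document and counts occurrences of one element:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class CursorDemo {
    // Moves the StAX "cursor" forward and counts START_ELEMENT events
    // whose local name matches the one we are looking for.
    static int countElements(String xml, String localName) throws Exception {
        XMLStreamReader reader =
                XMLInputFactory.newInstance().createXMLStreamReader(new StringReader(xml));
        int count = 0;
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && reader.getLocalName().equals(localName)) {
                count++;
            }
        }
        reader.close();
        return count;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<orders><order id=\"1\"/><order id=\"2\"/></orders>";
        System.out.println(countElements(xml, "order")); // prints 2
    }
}
```

    The application stays in control: nothing past the current cursor position is parsed until the application asks for it.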
    StAX Event
    StAX also provides an event-based API, layered on the cursor API. The application pulls events from the parser one by one and handles each as needed, until the end of the stream or until the application stops.
    The XMLEventReader interface is the main interface for reading an XML document. It iterates over the document as a stream.
    The XMLEventWriter interface is the main interface for writing.
    Now, let's see how to split a large XML file using StAX:
    Initializing Factories
    XMLInputFactory inputFactory = XMLInputFactory.newInstance();
    XMLOutputFactory outputFactory = XMLOutputFactory.newInstance();
    outputFactory.setProperty(XMLOutputFactory.IS_REPAIRING_NAMESPACES, Boolean.TRUE);
    With XMLInputFactory.newInstance(), we get an instance of XMLInputFactory with the default implementation. It can be used to create an XMLEventReader to read XML files.
    With XMLOutputFactory.newInstance(), we get an instance of XMLOutputFactory with the default implementation. It can be used to create an XMLEventWriter. We also set the IS_REPAIRING_NAMESPACES property to Boolean.TRUE because we want to keep the namespaces in the output XML files.
    Creating XMLEventReader
    String xmlFile = "...";
    XMLEventReader reader = inputFactory.createXMLEventReader(new FileReader(xmlFile));
    In this way, we build an XMLEventReader to read the XML file.
    Using XMLEventReader To Go Through XML File
    int count = 0;
    QName name = new QName(namespaceURI, localName);
    try {
        while (true) {
            XMLEvent event = reader.nextEvent();
            if (event.isStartElement()) {
                StartElement element = event.asStartElement();
                if (element.getName().equals(name)) {
                    writeToFile(reader, event, outputFilePrefix + (count++) + ".xml");
                }
            }
            if (event.isEndDocument())
                break;
        }
    } catch (XMLStreamException e) {
        throw e;
    } finally {
        reader.close();
    }

    With XMLEventReader.nextEvent(), we get the next XMLEvent in the XML file. An XMLEvent can be a StartElement, EndElement, StartDocument, EndDocument, etc. Here, we check the QName of the StartElement. If it is the same as the target QName (which in this case is the repeatable element in the XML file), we write this element and its content into an output file with writeToFile(). Below is the code for writeToFile().
    Writing the Selected Element into a File with XMLEventWriter
    private void writeToFile(XMLEventReader reader, XMLEvent startEvent, String filename)
            throws XMLStreamException, IOException {
        StartElement element = startEvent.asStartElement();
        QName name = element.getName();
        int stack = 1;
        XMLEventWriter writer = outputFactory.createXMLEventWriter(new FileWriter(filename));
        writer.add(element);
        while (true) {
            XMLEvent event = reader.nextEvent();
            if (event.isStartElement() && event.asStartElement().getName().equals(name))
                stack++;
            if (event.isEndElement()) {
                EndElement end = event.asEndElement();
                if (end.getName().equals(name)) {
                    stack--;
                    if (stack == 0) {
                        writer.add(event);
                        break;
                    }
                }
            }
            writer.add(event);
        }
        writer.close();
    }

    We create an XMLEventWriter with XMLOutputFactory.createXMLEventWriter(). With XMLEventWriter.add(), we can write an XMLEvent to the target XML file. It is the user's responsibility to make sure that the output XML is well-formed, so the user must track EndElement events and make sure each one matches its StartElement. This completes all the code required to split an XML file into chunks.
    Build a Solution with StAX in ActiveMatrix BusinessWorks
    Integrating StAX in ActiveMatrix BusinessWorks
    First, choose an implementation of StAX. There are several open-source implementations to choose from; one is Woodstox and another is the StAX Reference Implementation (RI).
    Next, follow these steps to integrate StAX with ActiveMatrix BusinessWorks into a solution that handles large XML files:
    Copy the .jar file into /lib.
    Create a new project in Designer named StAXSplitter and add a new process to it named splitXMLFile.
    Add a Java Code activity to the process and define some input parameters.
    Copy and paste all the code into Java Code Activity > Code, and in invoke(), add the following call:
    splitXmlFile(inputFileName, targetElementLocalName , targetElementNamespace, outputFileFullPath );
    Compile the code by clicking the Compile button. This process can be used to split a large XML file into small chunks for processing.
    Create another process to handle each chunk file separately. The File Poller starter can be used to trigger the event. The process can be similar to the following:
    When should I use the StAX solution?
    If you have to parse a large XML file and the XML document has many repeatable elements.
    How do I know if the XML file is too large for parsers like DOM?
    Your OS will tell you. Monitor CPU and memory usage. The most obvious sign is the DOM parser failing with an OutOfMemory error.
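    A rough up-front heuristic is to compare the file size against the JVM's maximum heap. The 5x multiplier below is only an assumed rule of thumb for DOM tree overhead, not a measured constant, and the file name is illustrative:

```java
public class DomSizeCheck {
    // DOM trees typically need several times the raw file size in memory;
    // the factor 5 here is an assumed rule of thumb, not a guarantee.
    static boolean likelyTooLargeForDom(long fileSizeBytes, long maxHeapBytes) {
        return fileSizeBytes * 5 > maxHeapBytes;
    }

    public static void main(String[] args) {
        long fileSize = new java.io.File("large.xml").length(); // 0 if the file does not exist
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.println(likelyTooLargeForDom(fileSize, maxHeap) ? "use StAX" : "DOM may be fine");
    }
}
```

    If the check trips, fall back to the StAX splitting approach described above rather than risking an OutOfMemory error.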
    Information to be sent to TIBCO Support
    Please open a Support Request (SR) with TIBCO Support and upload the following:
    Project folder with all the necessary files.
    A simplified project demonstrating the issue always helps.

    Manoj Chaurasia
    In a previous article, Paul Varley showed how you can implement a Version Control System for maps. Today we will talk about deployment pipelines and Continuous Delivery for TIBCO Cloud Integration solutions.
    Having separate environments for testing and production is a useful development and release management best practice that can prevent bugs in production. At the same time, this practice creates another problem: moving changes between testing and production and ensuring they stay in sync over time. Of course, it can be done manually, but this is error-prone and inefficient. Here's where Continuous Delivery (CD) comes in. In this article, we will explore one of the ways we can apply CD to TIBCO Cloud Integration solutions with the help of the API Connector.
    Use Case
    We have two TIBCO Cloud Integration organizations: Testing and Production. We want to automatically clone solutions from Testing to Production when they're ready for production.
    First, we should install the Scribe Platform API connector from the Marketplace and establish the connection to Scribe Platform API.
    Next, let's create an integration solution with the name "Continuous Delivery" in the Testing organization. This solution will clone other solutions to the Production organization when they are ready. "Integration Solution" is one of the solutions we want cloned.
    The following diagram illustrates the initial state:

    Iteration #1: The basis
    How will we determine that a solution is ready to deploy into Production? We can use a convention: for example, we put "Production Ready" in the description of solutions ready to deploy into production. So, let's create in the "Continuous Delivery" solution a new map with the name "Clone Production Ready Maps to Production Org" which will:
    Query all solutions from the Testing organization
    Clone any solutions whose description equals "Production Ready" to Production (note that this comparison is case-sensitive, so use exactly the same string as the solution's description, without extra whitespace)
    The CloneSolution command requires that you fill in the following fields:
    DestinationOrganizationId: ID of the target Organization the Solution is being copied to.
    DestinationAgentId: ID of the Agent in the target Organization to associate with the copied Solution.
    OrganizationId: ID of the source Organization.
    SolutionId: ID of the source Solution.
    In our example, we use a hard-coded DestinationAgentId, but you could also use a Fetch or Lookup block with the Agent entity.
    Iteration #2: Redefine the production readiness
    Let's run the map. Whoops, we got an error: "All maps must be valid to Clone a Solution". According to the API documentation: "To successfully clone a Solution, all Maps in the source Solution must be valid, and the destination Organization must use the same Connection types and names. The cloned Solution is incomplete until a Solutions POST prepare command is issued against it".
    Based on this, it seems we should filter out all incomplete solutions, because we can't clone them.

    Let's try to put "Production Ready" in the description of "Integration Solution", run the "Continuous Delivery" solution, and check the Production organization...
    Whoo-hoo, our first solution has successfully passed through our deployment pipeline!
    Iteration #3: Preventing duplicate cloning
    What if we run our "Continuous Delivery" solution multiple times? What will happen with already-cloned solutions? As it stands, our map clones all production-ready solutions on every run.

    We implemented a basic scenario for cloning solutions from our Testing organization to Production. Here are some ideas for those of you who want to take this a step further. Consider implementing one or both of the following:
    Fetch the most suitable agent (for example, by name) instead of hard-coding Agent IDs
    Use Lookup Tables instead of the hardcoded "Production Ready" description. Both options should work: the good old formula editor, or a Fetch block for Lookup Table Values of the Scribe Platform API Connector
    I hope that this article piqued your interest in exploring the features of the Platform API Connector and its possibilities. You've got this!
    This blog post was created by Aquiva Labs. Learn more about their services here.

    Manoj Chaurasia
    Logging is a must-have feature for every production-level application or service. Without it, it's pretty hard to get insight into what's going wrong when errors or exceptions arise. However, the larger an application is, the greater the quantity of logs it will produce. This feedback is valuable, but we don't want to drown in it. That's why it's important to effectively monitor and analyze logs. In the world of distributed applications and serverless architecture, it is best to use centralized log storage, so we can see the logs of every application in one place.
    In this article we will show you how to store and analyze the execution history of TIBCO Cloud Integration solutions with the help of the Scribe Platform API Connector.
    Use Case
    Consider a scenario where we have many TIBCO Cloud Integration organizations with a lot of solutions in each of them. We want to store the execution history of each solution in one place (e.g., a relational database) so we can analyze it easily.
    How do we do this in practice?
    As a prerequisite, we should install the Scribe Platform API connector from the Marketplace, establish the connection to Scribe Platform API, and create a new integration solution called "Logger" in your TIBCO Cloud Integration organization.

    For the target connection, we will use the PostgreSQL Connector. Let?s create a table to store execution history with the following SQL command:
    CREATE TABLE public.scribe_logs (
        id BIGSERIAL PRIMARY KEY,
        organization_id INT,
        solution_id UUID,
        start TIMESTAMP,
        stop TIMESTAMP,
        records_processed INT,
        records_failed INT,
        result VARCHAR(64),
        details TEXT,
        reprocess_records_number INT,
        is_reprocess_job BOOLEAN,
        duration REAL,
        source_data_local BOOLEAN,
        process_execution_id UUID
    );
    Just as in my previous article, we'll implement the solution step-by-step.
    Iteration #1: Get All Execution History
    Let's create a simple map that will iterate over each solution of each organization you have access to (more precisely, that the connection's user has access to), and save its execution history to PostgreSQL.

    Minor notes about the above map:
    If you want to grab execution history from a single organization, you can add a filter by Id to the Query Organization block
    The picture above doesn't contain a comprehensive field mapping list for "Create publicscribe_logs"
    In PostgreSQL your table is named public.scribe_logs, but the TIBCO Cloud Integration UI likes dots and eats them like Pacman, hence the block name
    Let's run the map, and after it finishes, execute the following SQL query in your favorite PostgreSQL client:
    SELECT id, details, duration, result FROM scribe_logs  
    If everything goes fine, the following query in PostgreSQL will return our successfully saved execution history!

    Iteration #2: Reinventing Net Change with Lookup Tables
    But, what if you run the map again? It will go through the executions starting from the beginning of time. This causes some negative consequences:
    •  It's slow, since the map re-iterates all the history records again and again
    •  It can eat up your API limits (15,000 calls per day)
    •  It can create a lot of row errors
    •  If the scribe_logs.id column is declared as the primary key, you will get a lot of row errors
    •  It can create a lot of duplicate data in your target table. For example, when you link History.Id to a column that is not declared as unique
    Ideally, we want to process only new execution histories since the last run. Unfortunately, the Net Change feature is not available in Fetch blocks, but Lookup Tables come to the rescue. With the help of the Platform API connector, we can insert/update lookup tables and their values.
    The idea: we can reinvent the Net Change functionality using a Lookup Table Value, which will serve as storage for the last execution history date.
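    The watermark idea behind this reinvented Net Change can be sketched in plain Java (the names below are illustrative, not part of the connector):

```java
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;

public class NetChange {
    // Keep only records newer than the stored watermark (our LaterThanDate).
    static List<Instant> newSince(List<Instant> executions, Instant watermark) {
        return executions.stream()
                .filter(t -> t.isAfter(watermark))
                .collect(Collectors.toList());
    }

    // After a run, the watermark advances to the newest processed timestamp.
    static Instant advance(Instant watermark, List<Instant> processed) {
        return processed.stream().max(Instant::compareTo).orElse(watermark);
    }
}
```

    Each run filters with the current watermark, then stores the advanced watermark back, which is exactly the role the LaterThanDate Lookup Table Value plays below.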
    Let's create a new Lookup Table named LoggingSettings:
    1. More → Lookup Tables → click the + sign

    2. Create a new Lookup Table Value with LaterThanDate in Value1 and nothing in Value2
    After that, we can create a new map "Update LaterThanDate in LoggingSettings" in the "Logger" solution, which will update the LaterThanDate lookup table value based on the latest execution date in the PostgreSQL data. Then we need to change the execution order of the maps in the "Logger" solution, so that "Update LaterThanDate in LoggingSettings" is executed before "Save execution history to database".

    A few comments:
    • In this map we're using the Native Query block to select the latest possible timestamp across all saved start and stop values from the execution history, with the help of the max aggregate function and the greatest conditional expression
    • After the Update block we don't need to iterate through all Lookup Tables and Lookup Table Values, because we know that only one such Lookup Table Value exists
    Optionally, to improve the performance of the map, you can remove all the Fetch and Condition blocks and use raw IDs in the Update block. You can get the IDs in Query Preview or in the API tester
    (tip: you can set includeLookupTableValues to true to get lookup tables with all their values).

    Iteration #3: Consume dynamically updated Lookup Table Value
    Finally, we should use the Lookup Table Value in the map "Save execution history to database".

    As you can see, the previous version of the map was updated:
    • We added a new condition in the Fetch History filter
    • The Platform API provides a LaterThanDate parameter, which filters out all executions older than the parameter's value. Of course, the Platform API Connector also supports it!
    • On the right side of the condition we use the LOOKUPTABLEVALUE2 function to get Value2 by Value1, which is LaterThanDate
    • We changed the Create block to an Update/Insert block, so we can update existing execution history records in PostgreSQL
    • Example: the execution history status can change between runs of the "Save execution history to database" map
    It's time to execute the whole "Logger" solution. It will process only new solution executions since the last run, and we won't have any row errors. Perfect!
    I showed you an approach to implementing centralized logging of TIBCO Cloud Integration solution execution history, but you can go further:
    • Try other connectors as the target for your execution history entries
    • Use your favorite log analysis tool to get more value (statistics, free-text search, etc.) from logs
    • Reprocess errors with the help of the Command block
    • Control the log verbosity by using the result field in the Fetch History block. The possible values are:
        • CompletedSuccessfully
        • FatalError
        • CompletedWithErrors
        • InProgress
        • RecordErrorsAndFatalErrors
    • For Developers: Build a connector for a logging service like Kibana, Splunk, or Seq, so you can monitor the health of your solutions in real time

    Manoj Chaurasia
    Failures, errors, and outages are unavoidable parts of any technical system. Of course, as engineers, we should do our best to design solutions with failures in mind. Regardless of our best intentions and planning, situations sometimes come up that we had not anticipated, which makes elegant recovery difficult. Sometimes all we can do is retry and hope that connectivity is restored. One example of this is the so-called heisenbug.
    The Connect capability of TIBCO Cloud Integration provides the ability to reprocess failed records. When an execution fails with record errors, a copy of each source record with an error is stored, either in the cloud or locally in the on-premise agent database. It gives us the ability to retry the processing of these failed records.
    In this article, we will show you how we can automate reprocessing of solution errors with the help of the Scribe Platform API Connector.
    Short on time? Check out this video on how to reprocess solution errors!

    Use Case
    Consider the case where you have an unstable connection to one of your source or target systems in a solution. We want to automate the reprocessing of all failed records in this solution.
    As a prerequisite, you should have one unstable solution. For demo purposes, let's use a solution with a single map, as follows:

    This map will only succeed in 50% of the cases. Let?s see why:
    We're using a fictional entity called SelectOne from the Scribe Labs Tools Connector. It just provides a single row with the current datetime in it. It can be very handy if you want to start the map without querying an external data source.
    The IF block checks the seconds part of the current datetime using the DATEPART function and compares it with 30 (this is how we get a 50% success rate). You can replace 30 with another value if you want a different success rate.
    We're using the GETUTCDATETIME function to get the current datetime instead of the UtcNow property, because in the latter case TIBCO Cloud Integration would use the same datetime value during reprocessing, leaving no chance of successful reprocessing. GETUTCDATETIME always provides the current datetime.
    In the ELSE clause, we put an Execute command with a Dates entity, which will always fail because we put invalid values into the target connection fields.
    After you finish with the map, note the Id and OrganizationId of this solution (you can get them from the URI). In this article, I will use the following values:
    OrganizationId = 3531
    SolutionId = "6c6bac38-4447-4ce3-a841-8621a3f72f9b"
    Also, I encourage you to check out the Scribe Labs Tools Connector. It provides other useful blocks such as SHA1, which can help with GDPR compliance in some cases.
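    The IF condition in the demo map boils down to this check (a plain-Java sketch; the 30-second threshold matches the map, everything else is illustrative):

```java
public class FlakyMap {
    // Succeeds when the seconds part of the current time is below the
    // threshold -- roughly a 50% success rate for a threshold of 30.
    static boolean succeeds(int secondsPart, int threshold) {
        return secondsPart < threshold;
    }

    public static void main(String[] args) {
        int second = java.time.LocalTime.now().getSecond();
        System.out.println(succeeds(second, 30) ? "map succeeds" : "map fails");
    }
}
```

    Because the seconds value is re-read on every run, repeated executions (and reprocessing attempts) get fresh chances to succeed, which is exactly the behavior we want to demonstrate.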
    Iteration #1: Getting solutions with errors
    The execution history of the solution can be retrieved either from the API directly, or from an external system as shown in a previous article. For simplicity, I will use the first approach, since it doesn't require any additional connectors:

    A few notes about the map above:
    We want to reprocess only the latest solution history. That's why the Query block sorts histories by the Start column in descending order (the possible values for the ExecutionHistoryColumnSort and SortOrder columns can be seen in the API tester), and we use a Map Exit block to guarantee that no more than one execution history is reprocessed.
    We want to reprocess only the histories that contain errors. For this reason, we're using an If/Else control block which filters out histories by the Result value. If you want to reprocess only fatal and/or record errors, you can change the condition.
    Iteration #2: Marking solution errors for reprocessing
    To reprocess errors, first, we should mark all the errors for reprocessing. Scribe Platform API provides two REST resources to accomplish this task:
    POST /v1/orgs/{orgId}/solutions/{solutionId}/history/{id}/mark: marks all errors from the solution execution history for reprocessing
    POST /v1/orgs/{orgId}/solutions/{solutionId}/history/{historyId}/errors/{id}/mark: marks particular errors from the solution execution history for reprocessing
    Currently, the Scribe Platform API connector supports only the first resource, via the MarkAllErrors command.

    Iteration #3: Reprocessing solution errors
    The next step after marking all the errors is reprocessing. We will use the ReprocessAllErrors command block, which will reprocess all marked errors from the solution execution. An important note from the documentation: this command will be ignored if the solution is running.

    Iteration #4: Retries
    If you want more attempts at solving errors by reprocessing, we can add retry logic into the map itself. However, this requires refactoring our map a bit.

    Notable changes:
    We added a Loop and an If/Else control block which uses the SEQNUM function as a retry counter. As an alternative to SEQNUM, you can try the Scribe Labs Variables Connector.
    On every retry, we want to work with the latest Execution History record. That's why the initial root block was decomposed into two: a new root Query block which works with Solutions, and a Lookup History block which retrieves the latest possible history record.
    Iteration #5: Truncated Exponential Backoff
    On the other hand, straightforward retries can be a source of accidental Denial-of-Service. It's a classic example of the "road to hell is paved with good intentions" anti-pattern.
    To avoid this pitfall, we can implement the truncated exponential backoff algorithm. It's not as hard as it sounds. The idea is to exponentially increase the delay time between retries until we reach the maximum retry count or the maximum backoff time.

    Optionally, we can add some randomness when computing the delay time, but it's not needed for our case.
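    The computation itself can be sketched in plain Java (this is not the connector's formula syntax; the base delay and cap values are made up for illustration):

```java
public class Backoff {
    // Truncated exponential backoff: the delay doubles with each attempt
    // until it hits the maximum backoff time.
    static long delaySeconds(int attempt, long baseSeconds, long maxSeconds) {
        // Cap the shift so the multiplication cannot overflow.
        long exp = 1L << Math.min(attempt, 30);
        return Math.min(baseSeconds * exp, maxSeconds);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 6; attempt++) {
            System.out.println("attempt " + attempt + ": wait "
                    + delaySeconds(attempt, 1, 16) + "s");
        }
    }
}
```

    With a base of 1 second and a cap of 16 seconds, the delays grow 1, 2, 4, 8, 16 and then stay truncated at 16.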

    At the time of writing, the Connect capability of TIBCO Cloud Integration doesn't support a POW function (you can check that here). But we can emulate it with precomputed Lookup Table Values, since we know all the possible retry counter values. This is so-called memoization.

    And here's the updated map:

    Notable Changes:
    We used the Sleep block from the Scribe Labs Tools Connector to suspend the work of the map.
    The SEQNUM function was replaced by the SEQNUMN function: we created a "RetryCounter" named sequence, which we can use in any further map blocks.
    With the help of SEQNUMNGET, we can peek at the current value of our named sequence without incrementing it (just as with a stack!).
    The LookupTableValue2 function gets the precomputed power of 2 from the corresponding Lookup Table.
    Summary
    In this article we learned:
    How to mark and reprocess all errors from a particular solution execution with the help of the Command block from the Scribe Platform API Connector
    How to implement retries with exponential backoff to prevent accidental Denial-of-Service
    How the Sleep block helped us pause the solution
    How Lookup Tables helped us overcome the absence of a POW function

    Manoj Chaurasia
    Table of Contents
    Case 1
    Case 2
    Case 3
    Document References
    Troubleshooting
    Information to be sent to TIBCO Support
    Case 1
    Consider the scenario where you are using a JMS Queue Requestor which sends a request and waits for a reply. Additionally, you have a corresponding process (say a JMSQueue Receiver) that receives these requests and sends back replies (Reply To JMS Message).
    The JMS request/reply activity uses temporary destinations to ensure that reply messages are received only by the process instance that sent the request. While sending each request the JMS Queue Requestor creates a temporary queue for the reply. It then sends the temporary reply queue name along with the request message. The temporary queue name is unique for each process instance.
    If the replyToQueue queue (static) is specified then all replies will be sent to the same queue and there will be no guarantee that the correct reply will be received by the process instance that sent the request.
    You can use an expression for the replyToQueue to create different replyToDestinations for each request.
    Case 2
    In Case 1, if you need to use constant destinations for all replies and you do not want to use temporary destinations, then instead of using JMSQueueRequestor you need to do the following:
    use a pair of "JMSQueueSender" and "Wait for JMSQueueMessage" activities
    map the messageID of the JMSSender as the event key of the "Wait for JMS" activity
    use the JMSCorrelationID header of the input message as the Candidate Event Key
    Case 3
    In a multi-engine environment, where you have multiple "Wait for JMS Message" activities listening on the same queue for reply messages, you should consider using the Get JMS Queue Message activity.
    In a multi-engine environment, with multiple "Wait For" activities listening on the same queue, it is likely that the first requestor will wait for a reply it will never receive, because the second requestor has already consumed the reply message. Since the candidate event key does not match the incoming message's event key, the message is discarded. In this case, the first requestor, which sent out the request, will never receive the reply.
    This is the default behavior of "Wait For" activities. When using "Wait For JMS Message" activities, a listener consumes all messages from the queue at engine startup and stores them in process memory. In the case of multiple "Wait For" activities listening on the same queue, if one listener has already consumed a message, the other listener on the same queue will never receive it.
    The correct design would be to use the "Get JMS Message" activity instead of the "Wait For JMS" activity. You can set the "selector" property of the "Get JMS Queue Message" activity to use the following XPath expression to correlate the request message with its reply message.
    concat("JMSCorrelationID = '", $JMS-Queue-Sender/pfx:aEmptyOutputClass/pfx:MessageID, "'")
    When using a message selector, the EMS server filters messages based on the selector and determines whether a message can be delivered to the particular "Get JMS Queue Message" activity.
    With the "Wait for JMS" activity, by contrast, the message is delivered as soon as it arrives on the queue, and the filtering is done at the job level, where the Candidate Event Key is matched against the incoming message's event key.
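    The XPath concat above just builds a JMS selector string. A hypothetical Java equivalent of that string-building step (the quoting rule, doubling single quotes inside string literals, follows JMS selector syntax):

```java
public class SelectorBuilder {
    // Builds a JMS selector matching replies whose JMSCorrelationID equals
    // the request's message ID. Single quotes inside the ID are doubled,
    // as required by JMS selector string literals.
    static String correlationSelector(String messageId) {
        return "JMSCorrelationID = '" + messageId.replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        System.out.println(correlationSelector("ID:EMS-SERVER.1A2B3C"));
    }
}
```

    The resulting string is what the EMS server evaluates on its side before delivering a message to the activity.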
    Document References
    For details, please refer to the following TIBCO ActiveMatrix BusinessWorks™ documentation:
    Palette Reference --> Chapter 9 JMS Palette
    Troubleshooting
    If the correct replies are not received, review the process design.
    You can connect to tibemsadmin tool and check for the number of receivers on the queue by using
    Show queue  
    You can enable tracing for message IDs and correlation IDs in the tibemsd.conf
    track_message_ids = enabled track_correlation_ids = enabled  
    Additionally, you can turn on detailed tracing for both the EMS server and client as follows:
    set server log_trace=DEFAULT,+PRODCONS,+MSG
    set server client_trace=enabled
    Addprop queue trace=body (for both the request and reply queues)
    Then check the messages that are sent by the server and received by the client.
    Information to be sent to TIBCO Support
    Confirm the Admin/TRA/BW/EMS versions with hotfixes, if any.
    Please send the multi-file project and the deployed .ear file.
    EMS configuration files.
    Other output of EMS admin commands as and when requested by TIBCO Support.

    Manoj Chaurasia
    Attached is a PPT in PDF form that covers a good amount of ground on X.509, PKI, and TLS/SSL.
    All browsers will validate a chain, but when a browser goes to build the chain, it picks the first certificate based on the Distinguished Name.  Many CA cert vendors are re-releasing 'same-named' CA certs, so the chain can be a 'false chain'.  Why is this?  It is cryptographically cheaper to parse a public key and certificate than it is to validate the signature, and it is not always possible to trace serial numbers, so browser vendors look to the DN/CN and pick the first one they find... Bob is Bob, even if the DNA is different? No.
    Sites are not under any obligation to send the full chain.  I have many examples of partial chains, usually missing the self-signed ROOT.
    Some sites are 'rooted' (pun intended) with a very old CA (X.509v1-based), and modern infrastructure may reject them for valid security reasons.

    Manoj Chaurasia
    Use Case
    This article focuses on customer onboarding and how companies can leverage TIBCO's hybrid integration platform to digitize their customer onboarding process. The demo runs on a Kubernetes cluster and showcases strengths such as DevOps compliance, elastic scaling, and API-led design. The key components of this demo can be found below:
    CustomerOnBoarding Kubernetes Setup & UseCase: This video explains how the flow of the use case is set up and how microservices running in containers are used to provide compelling customer experiences.
    Elastic Scaling: Optimizing infrastructure costs and attaining operational excellence is something every customer is looking for, and elastic scaling delivers both by matching capacity to demand.
    Hystrix Monitoring: A demo that walks through how you can set up circuit-breaker patterns with zero coding and ensure your system is ready for failures.
    Configuration Management & Service Discovery: Microservices are more than just containers; it is about embracing an entire ecosystem that includes open-source tooling like Consul & Eureka for service discovery & configuration management.
    Microservices Patterns
    Polyglot Persistence: multiple data sources (Cassandra & PostgreSQL)
    Service Discovery: discover distributed services by name (Consul)
    Config Management: manage deployment configuration outside the application (Consul)
    Circuit Breaker: prevent service failures from cascading to others (Hystrix)
    CI/CD Pipeline: automated deployment (Maven & Jenkins)
    Runtime Considerations
    Container Management System: automated deployments to Kubernetes
    Elastic Scaling: scale on demand, both horizontally & vertically (Auto Scaling groups in AWS & Kubernetes)
    Key Technology components: 
    1) TIBCO BusinessWorks Container Edition
    2) TIBCO Mashery
    3) TIBCO BusinessWorks 6.X
    4) Consul
    5) Jenkins
    6) Kubernetes on AWS
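The circuit-breaker pattern listed above can be sketched in a few lines of plain Java. This is an illustrative toy, not Hystrix itself (which adds timeouts, half-open probing, metrics, and thread isolation); the threshold, class, and method names are invented for the example.

```java
import java.util.function.Supplier;

// Minimal illustrative circuit breaker: after a number of consecutive
// failures, the breaker opens and calls go straight to the fallback.
public class CircuitBreaker {
    private final int failureThreshold;
    private int failures = 0;
    private boolean open = false;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public <T> T call(Supplier<T> action, Supplier<T> fallback) {
        if (open) return fallback.get();      // fail fast while open
        try {
            T result = action.get();
            failures = 0;                     // a success resets the count
            return result;
        } catch (RuntimeException e) {
            if (++failures >= failureThreshold) open = true;
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        CircuitBreaker cb = new CircuitBreaker(2);
        Supplier<String> failing = () -> { throw new RuntimeException("downstream error"); };
        Supplier<String> fallback = () -> "fallback";

        System.out.println(cb.call(failing, fallback));   // 1st failure: fallback
        System.out.println(cb.call(failing, fallback));   // 2nd failure trips the breaker
        System.out.println(cb.call(() -> "ok", fallback)); // breaker open: still fallback
    }
}
```

A production breaker would also re-close after a cool-down period (the "half-open" state), which is one of the things Hystrix handles for you with zero coding.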

    Manoj Chaurasia
    Table of Contents
    Getting Started
    Development - TIBCO BusinessWorks Container Edition
    Deployment - BusinessWorks Container Edition
    Development - TIBCO Cloud Integration
    This article walks through an example using TIBCO Cloud™ Messaging, TIBCO Cloud™ Integration, and TIBCO BusinessWorks™ Container Edition together to create a basic pub/sub app. Knowledge of TIBCO Cloud Integration and TIBCO BusinessWorks Container Edition will be helpful.
    Getting Started:
    There are a few things we need to do before we can develop our applications. The first is making sure that we have all the necessary components. Free trials of TIBCO Cloud Messaging and TIBCO Cloud Integration are available, so sign up for those. Also, make sure you have the TIBCO BusinessWorks Container Edition studio available. Your TIBCO Cloud Messaging trial broker may take a little while to become active; make sure the status says Active before starting.

    Generate a key (under Authentication Keys); we will use this key to connect to TIBCO Cloud Messaging. Now, under Download SDKs, download the Java/Android SDK. Unzip the file; under the lib folder there is a jar file called 'tibeftl.jar'. We will be using this later.
    Development - TIBCO BusinessWorks Container Edition:
    Let's start by opening our TIBCO BusinessWorks Container Edition studio. Create a new BusinessWorks application; you can call it whatever you want, but in this example I will call it 'tcm.publish'. Next, let's convert our project to a Java project. Right-click on the .module project, navigate down to 'Configure', and select 'Convert to Java Project'.

    Once converted, you should see a folder/library under the .module project called 'JRE System Library [TIBCO JRE]'. Right-click on it and navigate down to Properties. This will open a window where you can select your JRE System Library. Change the execution environment to J2SE-1.5 (TIBCO JRE) and hit OK.

    Now, under your .module project, right-click on the lib folder. Navigate to import->import, this will open up a window prompting you to import certain files. In this case, we will import a 'File System', so let's select that. A new window will pop up that will let you import a file system from a local directory. Select the directory that has your TIBCO Cloud Messaging client that you downloaded earlier (eftl-3.3.2-java). Within that directory, find the tibeftl.jar file (should be in the lib folder). Select that file. Once done, hit finish.

    Now, under your .module project, expand the Module Descriptors. You will see a descriptor called 'Dependencies'; double-click it. This will open a new window that lets you add packages to your project. Click on Add and a window should pop up for 'Package Selection'. Type in '' and select the palette that appears. Click OK and save; your project should now have that jar file under 'Plug-in Dependencies'.
    Let's create our REST service. Click on the little globe with a cloud on the left-hand side of your screen. This will open your REST service wizard. Give your resource a name (I chose 'tcmpublish') and set the Resource Service Path to '/tcmpublish/{text}' (don't include the quotes). Change the operation from POST to GET (only GET should be selected). Once done, hit Finish. Your REST service will now be generated; we will need to configure it further, but for now let's leave it be. Now let's drag and drop the JavaInvoke activity, which can be found in your palette library. Your project should now look something like this.

    Click on your JavaInvoke activity, you should see the properties for the activity. Under the general tab within the properties tab, you should see a variable called 'Java Global Instance', click on the magnifying glass on the other side of it. This will bring up a new window to create a Java Global Resource. Create the resource. You should now see your 'Java Global Instance' variable filled with your Global Resource. Save your project.
    Now, under your .module project, find the src folder, right-click it and select new -> package. This will bring up a window to create a new java package. In this example I called it ''.

    Let's go back to our 'Java Invoke' activity properties. Under the general tab (the same place you created the Java Global Instance), create a new class. This is done by clicking on the green C within the 'Class Name' parameter. This will cause a new window to pop up where you will configure your Java class. We need to fill out the Package (should be the name of the Java Package you just created) and the Name (can be anything) value. Once done, hit finish. Example in the screenshot below.

    Now, under the src folder, you should see your class. Double-click on your .java file. This will open it in the studio; it should be relatively empty, showing only the package name and class. Now let's edit this file so that we can use it. Assuming you have followed the guide step by step (with the same names), you can just copy and paste this:
    package;

    import java.util.HashMap;
    import java.util.Properties;

    import com.tibco.eftl.Connection;
    import com.tibco.eftl.ConnectionListener;
    import com.tibco.eftl.EFTL;
    import com.tibco.eftl.Message;

    public class TCMConnection {

        HashMap<String, Object> moduleProperties = new HashMap<String, Object>();
        Properties tcmProps = new Properties();

        private String authKey = "";  // TCM authKey
        private String clientId = ""; // TCM clientId (this can be anything)
        private String url = "";      // TCM connection URL
        private Connection tcmConnection;

        public TCMConnection() {
            final Properties props = new Properties();

            // set the password using your authentication key
            props.setProperty(EFTL.PROPERTY_PASSWORD, authKey);

            // provide a unique client identifier
            props.setProperty(EFTL.PROPERTY_CLIENT_ID, clientId);

            // connect to TIBCO Cloud Messaging
            EFTL.connect(url, props, new ConnectionListener() {
                public void onConnect(Connection connection) {
                    if (connection != null) {
                        tcmConnection = connection;
                    }
                    System.out.printf("connected\n");
                }

                public void onDisconnect(Connection connection, int code, String reason) {
                    System.out.printf("disconnected: %s\n", reason);
                }

                public void onReconnect(Connection connection) {
                    System.out.printf("reconnected\n");
                }

                public void onError(Connection connection, int code, String reason) {
                    System.out.printf("error: %s\n", reason);
                }
            });
        }

        public String getAuthKey() { return authKey; }

        public String getClientId() { return clientId; }

        public String getUrl() { return url; }

        public Connection getTCMConnection() { return tcmConnection; }

        public void sendMessage(String event, String text) {
            final Message message = tcmConnection.createMessage();
            message.setString("event", event);
            message.setString("text", text);
            tcmConnection.publish(message, null);
        }
    }
    We need to edit the following variables in the code: authKey, clientId, and url.  The authKey and url come from your TCM authentication keys, while the clientId can be anything as long as it's unique.
    Now, back under the .module application, find the resources folder, expand it and click on the Java Global resource. A new tab/window should open up that will let you configure the instance.  Next to the class variable on that window, click on browse.

    A new window will pop up. Search for the class you created; if you named everything the same as this guide, you can search 'com.t' and it should appear. Select it and hit Finish; you should now see your class parameter filled. Now select the Method (you should only have one choice). Once done, save your project.
    Navigate back to your Java Invoke properties. Under the general tab (where you configured the Java Global Instance), you'll want to hit the reload button on the same line as the Class Name variable. Once reloaded, you should have the Class Name variable filled with something like '' and under the Method drop-down menu, you should be able to select the sendMessage method.

    Let's now configure the input. Go to the Input tab within the Java Invoke activity. We need to map two parameters: event and text. For event, we can type the value 'lambdainvoke'; for text, we will drag and drop the 'text' data source from the GET invoke text parameter. Save your project. Example below.

    You should no longer have an error message for your Java Invoke activity. Let's finish up by mapping the input for the REST service. Click on your Reply activity on the design canvas and, within the Properties, go to the Input tab. Here we need to map the response item; for the sake of simplicity, you can just copy the following input (as long as you followed along): concat("Published message: ", $get/parameters/tns1:tcmpublishGetParameters/tns1:text, " to topic called demotopic")

    Let's configure our HTTP connection now. Go to the Resources folder within the .module project and click on the HTTP connection resource. Change the port property from a literal value to a module property. You should now see the port value replaced with 'BW.CLOUD.PORT'. Save your project. The design portion of the project is done.
    Deployment - BusinessWorks Container Edition
    So now that we've built our BWCE app, we need to deploy it. There are a large number of options for which platform you can deploy it on. If you don't have any PaaS set up, I would recommend just using Docker, as it's easy to install and run on your computer. This app deployment is just like any other BWCE application deployment, so I won't spend much time explaining it; if you need more information, check out some of the videos I've posted on YouTube.
    Flow for docker: Have base BWCE image -> Export EAR -> Create Dockerfile -> docker build (builds image) -> docker run
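A minimal Dockerfile for the flow above might look like the sketch below. The base-image tag and the convention of adding the EAR at the image root are assumptions; verify them against the BWCE base image you built and your version's documentation.

```dockerfile
# Sketch only: 'bwce:latest' assumes you have already built a local BWCE base image.
FROM bwce:latest

# Add the EAR exported from the studio; the BWCE base image picks up an EAR
# placed at the image root (check your BWCE version's documentation).
ADD tcm.publish.ear /

# The HTTP connection in this project listens on BW.CLOUD.PORT (8080 by default).
EXPOSE 8080
```

Build and run with `docker build -t tcm.publish .` followed by `docker run -p 8080:8080 tcm.publish`.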
    If deployed correctly, you should see your project running on port 8080. You can test the REST service by entering some value for the text parameter and you should get a response message with a 200 response code.
    Development - TIBCO Cloud Integration
    Open TIBCO Cloud Integration (TCI) and navigate to the Connections tab. We need to make a connection for our TCM instance. Click on 'Add Connection'; this will pop up a window for a TIBCO Cloud Messaging Connector. We need to provide values for the Connection Name (can be anything), Connection URL (the URL of your TCM instance), and Authentication Key (the authentication key you created at the start and used in your BWCE project). Once these have been filled in, hit Save. You should now be able to connect to your TCM instance.
    Now let's start building our TCI app!
    Create a new TCI app by clicking the 'Create' button. A pop-up will appear asking you for a name; let's call this app 'TCM-Application'. Now choose to create a Flogo app; afterwards, click on the option to 'Create a flow'. A window will pop up where you can enter the name of the flow; in this case, let's call it 'TCM Subscriber', and hit Next. Afterward, you will have the option of starting your flow as a blank or with a trigger; pick a trigger, select "Message Subscriber", and hit Next. Choose the connection (there should only be one) and finish. You should now see something like this:

    Click on your TCM Subscriber flow. You should see one activity called "MessageSubscriber" that will have one error. Essentially it's telling you that it still needs to be configured. Click on that activity and go to Output Settings and enter the following schema: { "text": "String" }.

    The subscriber activity should now be configured fully (the error goes away). Now let's add an activity to this flow. Next to the MessageSubscriber you should see a blue box (you may need to move your mouse around), click on it and you will get the choice to add a new activity. Choose the Log Message activity under the general tab. Now let's configure it. Click on the newly created log activity and go to the Input tab. Set the message value to $TriggerData.message.text.

    Now we can push our app. Try it out; it should take less than a minute. If successful, you should see the app status shown as "running".
    Now let's test the entire flow. Go back to where your BWCE application was deployed (Docker, a different PaaS, etc.) and run a test command in the Swagger interface. You should get a 200 response (just like when you tested it the first time). Now let's check the logs of our TCI project. We should see the message we wrote within the TCI logs. If you do, then everything was set up correctly and is running.

    This is just a simple example of how you could use BWCE, TCM, and TCI together to build a pub/sub project. In more real-life solutions you could and would do more with it, but this guide gives an idea of how the pieces fit together. You can also do some other cool things with this sample project, such as wrapping the BWCE endpoint in a Lambda call using Flogo. The ideas are endless!
