Manoj Chaurasia
Everything posted by Manoj Chaurasia
-
TIBCO BusinessWorks Container Edition Release Notes
TIBCO ActiveMatrix BusinessWorks v6.x Release Notes
TIBCO ActiveMatrix BusinessWorks v5.x Release Notes
-
How to parse a large XML document is a common problem in XML applications. A large XML document usually has many repeatable elements, and the application needs to handle these elements iteratively. The problem is obtaining the elements from the document with the least possible overhead. Sometimes XML documents are so large (100MB or more) that they are difficult to handle with traditional XML parsers. One traditional parser is Document Object Model (DOM) based. It is easy to use, supports navigation in any direction (e.g., parent and previous sibling) and allows arbitrary modifications. But in exchange, DOM parses the whole document and constructs a complete document tree in memory before we can obtain the elements. It may therefore consume large amounts of memory when parsing large XML documents. TIBCO ActiveMatrix BusinessWorks™ uses XML in a similar way to DOM: it loads the entire XML document into memory as a tree. Generally this is good, as it provides a convenient way to navigate, manipulate, and map XML with XPath and XSLT. But it also shares the drawback of DOM. With large XML files, it may occupy too much memory and in some extreme situations may cause an OutOfMemory error. Simple API for XML (SAX) may be a solution, but as a push model it can be too complicated for this specific task. With StAX, you can split large XML documents into chunks efficiently without the drawbacks of traditional push parsers. This article shows how to retrieve repeatable information from XML documents and handle it separately. It will also show how to implement a solution for large XML files in BW with StAX, the Java Code activity, and the File Poller activity.

What is StAX
Streaming API for XML (StAX) is an application programming interface (API) to read and write XML documents in the Java programming language. StAX offers a pull parser that gives client applications full control over the parsing process. The StAX parser provides a "cursor" in the XML document. The application moves the "cursor" forward, pulling the information from the parser as needed.

StAX Event
StAX also provides an event-based (on top of the cursor-based) pulling API. The application pulls events instead of a cursor from the parser one by one and deals with each if needed, until the end of the stream or until the application stops. The XMLEventReader interface is the major interface for reading an XML document; it iterates over the document as a stream. The XMLEventWriter interface is the major interface for writing purposes. Now, let's see how to split a large XML file using StAX.

Initializing Factories

XMLInputFactory inputFactory = XMLInputFactory.newInstance();
XMLOutputFactory outputFactory = XMLOutputFactory.newInstance();
outputFactory.setProperty("javax.xml.stream.isRepairingNamespaces", Boolean.TRUE);

With XMLInputFactory.newInstance(), we get an instance of XMLInputFactory with the default implementation. It can be used to create an XMLEventReader to read XML files. With XMLOutputFactory.newInstance(), we get an instance of XMLOutputFactory with the default implementation. It can be used to create an XMLEventWriter. We also set "javax.xml.stream.isRepairingNamespaces" to Boolean.TRUE because we want to keep the namespaces in the output XML files.

Creating XMLEventReader

String xmlFile = "...";
XMLEventReader reader = inputFactory.createXMLEventReader(new FileReader(xmlFile));

In this way, we build an XMLEventReader to read the XML file.
Using XMLEventReader To Go Through the XML File

int count = 0;
QName name = new QName(namespaceURI, localName);
try {
    while (true) {
        XMLEvent event = reader.nextEvent();
        if (event.isStartElement()) {
            StartElement element = event.asStartElement();
            if (element.getName().equals(name)) {
                writeToFile(reader, event, outputFilePrefix + (count++) + ".xml");
            }
        }
        if (event.isEndDocument())
            break;
    }
} catch (XMLStreamException e) {
    throw e;
} finally {
    reader.close();
}

With XMLEventReader.nextEvent(), we get the next XMLEvent in the XML file. An XMLEvent can be a StartElement, EndElement, StartDocument, EndDocument, etc. Here, we check the QName of the StartElement. If it is the same as the target QName (in this case, the repeatable element in the XML file), we write this element and its content into an output file with writeToFile(). Below is the code for writeToFile().

Writing the Selected Element into a File with XMLEventWriter

private void writeToFile(XMLEventReader reader, XMLEvent startEvent, String filename)
        throws XMLStreamException, IOException {
    StartElement element = startEvent.asStartElement();
    QName name = element.getName();
    int stack = 1;
    XMLEventWriter writer = outputFactory.createXMLEventWriter(new FileWriter(filename));
    writer.add(element);
    while (true) {
        XMLEvent event = reader.nextEvent();
        if (event.isStartElement() && event.asStartElement().getName().equals(name))
            stack++;
        if (event.isEndElement()) {
            EndElement end = event.asEndElement();
            if (end.getName().equals(name)) {
                stack--;
                if (stack == 0) {
                    writer.add(event);
                    break;
                }
            }
        }
        writer.add(event);
    }
    writer.close();
}

We create an XMLEventWriter with XMLOutputFactory.createXMLEventWriter(). With XMLEventWriter.add(), we can write an XMLEvent/XMLElement to the target XML file. It is the user's responsibility to make sure that the output XML is well-formed, so the user must check the EndElement events and make sure they match the StartElements in pairs. This completes the code required to split an XML file into chunks.

Build a Solution with StAX in ActiveMatrix BusinessWorks

Integrating StAX in ActiveMatrix BusinessWorks
First, choose an implementation of StAX. There are some open-source implementations you can choose from; one is Woodstox and another is the StAX Reference Implementation (RI). Next, the steps to integrate StAX with ActiveMatrix BusinessWorks for a solution that handles large XML files:

- Copy the .jar file into /lib.
- Create a new project in Designer named StAXSplitter and add a new process to it named splitXMLFile.
- Select a Java Code activity in the process and add some input parameters.
- Copy and paste all the code into Java Code Activity > Code and, in invoke(), add the following call:
  splitXmlFile(inputFileName, targetElementLocalName, targetElementNamespace, outputFileFullPath);
- Compile the code by clicking the Compile button.

This process can be used to split a large XML file into small chunks for processing.
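The steps above call splitXmlFile(...) from invoke(), but the article never lists that method itself. Below is a minimal sketch (not from the original sample) of what it could look like inside the Java Code activity's class; it only wires together the factory setup, the event loop, and the writeToFile(...) method shown above, with parameters matching the call in invoke(). The import statements go at the top of the generated class.

import java.io.FileReader;
import javax.xml.namespace.QName;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.events.XMLEvent;

// field used by writeToFile(...)
private XMLOutputFactory outputFactory;

public void splitXmlFile(String inputFileName, String targetElementLocalName,
                         String targetElementNamespace, String outputFileFullPath) throws Exception {
    XMLInputFactory inputFactory = XMLInputFactory.newInstance();
    outputFactory = XMLOutputFactory.newInstance();
    outputFactory.setProperty("javax.xml.stream.isRepairingNamespaces", Boolean.TRUE);

    XMLEventReader reader = inputFactory.createXMLEventReader(new FileReader(inputFileName));
    QName target = new QName(targetElementNamespace, targetElementLocalName);
    int count = 0;
    try {
        while (true) {
            XMLEvent event = reader.nextEvent();
            if (event.isStartElement() && event.asStartElement().getName().equals(target)) {
                // writeToFile(...) is the method listed earlier in this article
                writeToFile(reader, event, outputFileFullPath + (count++) + ".xml");
            }
            if (event.isEndDocument()) {
                break;
            }
        }
    } finally {
        reader.close();
    }
}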
Create another process to handle each chunk file separately; a File Poller starter can be used to trigger it.

FAQs
When should I use the StAX solution? If you have to parse a large XML file and the XML document has many repeatable elements.
How do I know if the XML file is too large for parsers like DOM? Your OS will tell you. Monitor CPU and memory usage. The obvious sign is the DOM parser failing with an OutOfMemory error.

Information to be sent to TIBCO Support
Please open a Support Request (SR) with TIBCO Support and upload the project folder with all the necessary files. A simplified project demonstrating the issue always helps.
-
In a previous article, Paul Varley showed how you can implement a Version Control System for maps. Today we will talk about deployment pipelines and Continuous Delivery for TIBCO Cloud Integration solutions. Having separate environments for testing and production is a useful development and release management best practice that can prevent bugs in production. At the same time, this practice creates another problem: moving changes between testing and production and ensuring they stay in sync over time. Of course, it can be done manually, but this is error-prone and inefficient. This is where Continuous Delivery (CD) comes in. In this article, we will explore one way to apply CD to TIBCO Cloud Integration solutions with the help of the API Connector.

Use Case
We have two TIBCO Cloud Integration organizations: Testing and Production. We want to automatically clone solutions from Testing to Production when they're ready for production.

Implementation
First, we should install the Scribe Platform API connector from the Marketplace and establish the connection to the Scribe Platform API. Next, let's create an integration solution with the name "Continuous Delivery" in the Testing organization. This solution will clone other solutions to the Production organization when they are ready. The Testing organization has solutions that we want to clone to the Production organization when they're ready; "Integration Solution" is one of them. The following diagram illustrates the initial state:

Iteration #1: The basics
How will we determine that a solution is ready to deploy into Production? We can use several conventions: for example, we put "Production Ready" in the description of solutions ready to deploy into production. So, let's create in the "Continuous Delivery" solution a new map with the name "Clone Production Ready Maps to Production Org" which will:

- Query all solutions from the Testing organization
- Clone any solutions whose description equals "Production Ready" to Production (note that this comparison is case-sensitive, so use exactly the same string in the solution's description, without extra whitespace)

The CloneSolution command requires that you fill in the following fields:

- DestinationOrganizationId - ID of the target Organization the Solution is being copied to
- DestinationAgentId - ID of the Agent in the target Organization to associate with the copied Solution
- OrganizationId - ID of the source Organization
- SolutionId - ID of the source Solution

In our example, we use a hard-coded DestinationAgentId, but you could also use a Fetch or Lookup block with the Agent entity.

Iteration #2: Redefine production readiness
Let's run the map. Whoops, we got an error: "All maps must be valid to Clone a Solution". According to the API documentation, "To successfully clone a Solution, all Maps in the source Solution must be valid, and the destination Organization must use the same Connection types and names. The cloned Solution is incomplete until a Solutions POST prepare command is issued against it". Based on this, it looks like we should filter out all incomplete solutions, because we can't clone them. Let's put "Production Ready" in the description of "Integration Solution", run the "Continuous Delivery" solution, and check the Production organization... Whoo-hoo, our first solution has successfully passed through our deployment pipeline!

Iteration #3: Preventing duplicate cloning
What if we run our "Continuous Delivery" solution multiple times? What will happen with already-cloned solutions?
It looks like our map clones all production-ready solutions on every run.

Summary
We implemented a basic scenario for cloning solutions from our Testing organization to Production. Here are some ideas for those of you who want to take this a step further. Consider implementing one or both of the following:

- Fetch the most suitable agent (for example, by name) instead of hard-coding Agent IDs
- Use Lookup Tables instead of the hardcoded "Production Ready" description

Both options should work: the good old formula editor, or a Fetch block for the Lookup Table Values of the Scribe Platform API Connector. I hope that this article piqued your interest in exploring the features of the Platform API Connector and its possibilities. You've got this! This blog post was created by Aquiva Labs. Learn more about their services here.
-
Logging is a must-have feature for every production-level application or service. Without it, it's pretty hard to get insight into what's going wrong when errors or exceptions arise. However, the larger an application is, the greater the quantity of logs it will produce. This feedback is valuable, but we don't want to drown in it. That's why it's important to effectively monitor and analyze logs. In the world of distributed applications and serverless architecture, it is best to use centralized log storage, so we can see the logs of every application in one place. In this article we will show you how to store and analyze the execution history of TIBCO Cloud Integration solutions with the help of the Scribe Platform API Connector.

Use Case
Consider a scenario where we have many TIBCO Cloud Integration organizations with a lot of solutions in each of them. We want to store the execution history of each solution in one place (e.g., a relational database) so we can analyze it easily. How do we do this in practice?

Implementation
As a prerequisite, we should install the Scribe Platform API connector from the Marketplace, establish the connection to the Scribe Platform API, and create a new integration solution called "Logger" in your TIBCO Cloud Integration organization. For the target connection, we will use the PostgreSQL Connector. Let's create a table to store the execution history with the following SQL command:

CREATE TABLE public.scribe_logs (
    id BIGSERIAL PRIMARY KEY,
    organization_id INT,
    solution_id UUID,
    start TIMESTAMP,
    stop TIMESTAMP,
    records_processed INT,
    records_failed INT,
    result VARCHAR(64),
    details TEXT,
    reprocess_records_number INT,
    is_reprocess_job BOOLEAN,
    duration REAL,
    source_data_local BOOLEAN,
    process_execution_id UUID
);

Just as in my previous article, we'll implement the solution step by step.

Iteration #1: Get All Execution History
Let's create a simple map that will iterate over each solution of each organization you have access to (more precisely, that the user you put in the connection has access to) and save its execution history to PostgreSQL. Minor notes about the above map:

- If you want to grab execution history from a single organization, you can add a filter by Id to the Query Organization block
- The picture above doesn't contain a comprehensive field mapping list for "Create publicscribe_logs"
- In PostgreSQL your table should be named public.scribe_logs, but the TIBCO Cloud Integration UI likes dots and eats them like Pacman

Let's run the map and, after it finishes, execute the following SQL query in your favorite PostgreSQL client:

SELECT id, details, duration, result FROM scribe_logs

If everything goes fine, the query will return our successfully saved execution history!

Iteration #2: Reinventing Net Change with Lookup Tables
But what if you run the map again? It will go through the executions starting from the beginning of time. This has some negative consequences:

- It's slow, since the map re-iterates all the history records again and again
- It can eat up your API limits (15,000 calls per day)
- It can create a lot of row errors: if the scribe_logs.id column is declared as the primary key, you will get a lot of row errors
- It can create a lot of duplicate data in your target table, for example if you link History.Id to a column which is not declared as unique

Ideally, we want to process only new execution histories since the last run.
Unfortunately, the Net Change feature is not available in Fetch blocks, but we have Lookup Tables to the rescue. With the help of the Platform API connector we can insert/update lookup tables and their values. The idea: we can reinvent the Net Change functionality using a Lookup Table Value, which will be used as storage for the last execution history date. Let's create a new Lookup Table named LoggingSettings:

1. More > Lookup Tables > click on the + sign
2. Create a new Lookup Table Value with LaterThanDate in Value1 and nothing in Value2

After that we can create a new map "Update LaterThanDate in LoggingSettings" in the "Logger" solution, which will update the LaterThanDate lookup table value based on the latest execution date in the PostgreSQL data. Then we need to change the execution order of the maps in the "Logger" solution, so that "Update LaterThanDate in LoggingSettings" is executed before "Save execution history to database". A few comments:

- In this map we're using the Native Query block to select the latest possible timestamp across all saved start and stop values from the execution history, with the help of the max aggregate function and the greatest conditional expression
- After the Update block we don't need to iterate through all Lookup Tables and Lookup Table Values, because we know that only one such Lookup Table Value exists

Optionally, to improve the performance of the map, you can remove all the Fetch and Condition blocks and use raw IDs in the Update block. You can get the IDs in Query Preview or in the API tester (tip: you can set includeLookupTableValues to true to get lookup tables with all their values).

Iteration #3: Consume the dynamically updated Lookup Table Value
Finally, we should use the Lookup Table Value in the map "Save execution history to database". As you can see, the previous version of the map was updated:

- We added a new condition in the Fetch History filter: the Platform API provides a LaterThanDate parameter which filters out all executions older than the parameter's value. Of course, the Platform API Connector also supports it!
- On the right side of the condition we use the LOOKUPTABLEVALUE2 function to get Value2 by Value1, which is LaterThanDate
- We changed the Create block to an Update/Insert block, so we can update existing execution history records in PostgreSQL (for example, the execution history status can change between runs of the "Save execution history to database" map)

It's time to execute the whole "Logger" solution. It will process only new solution executions since the last run, and we don't have any row errors. Perfect!

Summary
I showed you an approach to implement centralized logging of TIBCO Cloud Integration solution execution history, but you can go further:

- Try other connectors as the target for your execution history entries
- Use your favorite log analysis tool to get more value (statistics, free-text search, etc.) from the logs
- Reprocess errors with the help of the Command block
- Control the log verbosity by using the result field in the Fetch History block. The possible values are: CompletedSuccessfully, FatalError, CompletedWithErrors, InProgress, RecordErrorsAndFatalErrors
- For developers: build a connector for a logging service like Kibana, Splunk, or Seq, so you can monitor the health of your solutions in real time
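Once the history lands in PostgreSQL, any JDBC client can be used for the analysis mentioned above. The following is a minimal sketch (not part of the original article) that summarizes executions per solution from the scribe_logs table created earlier; the connection URL and credentials are placeholders, and the PostgreSQL JDBC driver is assumed to be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ScribeLogReport {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- point them at the database holding scribe_logs
        String url = "jdbc:postgresql://localhost:5432/postgres";
        try (Connection conn = DriverManager.getConnection(url, "postgres", "postgres");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT solution_id, COUNT(*) AS runs, "
                   + "SUM(records_failed) AS failed, AVG(duration) AS avg_duration "
                   + "FROM public.scribe_logs "
                   + "GROUP BY solution_id "
                   + "ORDER BY failed DESC");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // Print a small per-solution report: executions, failed records, average duration
                System.out.printf("%s runs=%d failed=%d avg=%.1fs%n",
                        rs.getString("solution_id"), rs.getLong("runs"),
                        rs.getLong("failed"), rs.getDouble("avg_duration"));
            }
        }
    }
}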
-
Overview
Failures, errors, and outages are unavoidable parts of any technical system. Of course, as engineers, we should do our best to design solutions with failures in mind. Regardless of our best intentions and planning, situations sometimes come up that we had not anticipated, which makes elegant recovery difficult. All we can do is re-attempt and hope that connectivity is restored. One such example is the so-called heisenbugs. The Connect capability of TIBCO Cloud Integration provides the ability to reprocess failed records. When an execution fails with record errors, a copy of each source record with an error is stored, either in the cloud or locally in the on-premise agent database. This gives us the ability to retry the processing of these failed records. In this article, we will show you how to automate reprocessing of solution errors with the help of the Scribe Platform API Connector. Short on time? Check out this video on how to reprocess solution errors!

Use Case
Consider the case where you have an unstable connection to one of the source or target systems in a solution. We want to automate reprocessing of all failed records in this solution.

Prerequisites
As a prerequisite, you should have one unstable solution. For demo purposes let's use a solution with a single map, as follows. This map will only succeed in 50% of cases. Let's see why:

- We're using a fictional entity called SelectOne from the Scribe Labs Tools Connector. It just provides a single row with the current datetime in it. It can be very handy if you just want to start the map without querying an external data source.
- The IF block checks the seconds part of the current datetime using the DATEPART function and compares it with 30 (this is how we get the 50% success rate). You can replace 30 with another value if you want a different success rate.
- We're using the GETUTCDATETIME function to get the current datetime instead of the UtcNow property, because in the latter case TIBCO Cloud Integration will use the same datetime value during reprocessing, which leaves no chance of successful reprocessing. GETUTCDATETIME, however, always provides the current datetime.
- In the ELSE clause, we put an Execute command with a Dates entity, which will always fail because we put invalid values into the target connection fields.

After you finish with the map, note the Id and OrganizationId of this solution (you can get them from the URI). In this article, I will use the following values:

OrganizationId = 3531
SolutionId = "6c6bac38-4447-4ce3-a841-8621a3f72f9b"

Also, I encourage you to check out the Scribe Labs Tools Connector. It provides other useful blocks such as SHA1, which can help with GDPR compliance in some cases.

Iteration #1: Getting solutions with errors
The execution history of the solution can be retrieved either from the API directly, or from an external system as shown in a previous article. For simplicity, I will use the first approach since it doesn't require any additional connectors. A few notes about the map above:

- We want to reprocess only the latest solution history; that's why the Query block sorts histories by the Start column in descending order (the possible values for the ExecutionHistoryColumnSort and SortOrder columns can be seen in the API tester), and we use a Map Exit block to guarantee that no more than one execution history is reprocessed.
- We want to reprocess only the histories that contain errors.
For this reason, we're using an If/Else control block which filters out histories by the Result value. If you want to reprocess only fatal and/or record errors, you can change the condition.

Iteration #2: Marking solution errors for reprocessing
To reprocess errors, we should first mark all the errors for reprocessing. The Scribe Platform API provides two REST resources to accomplish this task:

- POST /v1/orgs/{orgId}/solutions/{solutionId}/history/{id}/mark - mark all errors from the solution execution history for reprocessing
- POST /v1/orgs/{orgId}/solutions/{solutionId}/history/{historyId}/errors/{id}/mark - mark particular errors from the solution execution history for reprocessing

Currently, the Scribe Platform API connector supports only the first resource, via the MarkAllErrors command.

Iteration #3: Reprocessing solution errors
The next step after marking all the errors is reprocessing. We will use the ReprocessAllErrors command block, which will reprocess all marked errors from the solution execution. An important note from the documentation: this command will be ignored if the solution is running.

Iteration #4: Retries
If you want more attempts at solving errors by reprocessing, we can add retry logic into the map itself. However, it will require refactoring our map a bit. Notable changes:

- We added a Loop with an If/Else control block which uses the SEQNUM function as a retry counter. As an alternative to the SEQNUM function you can try the Scribe Labs Variables Connector.
- On every retry, we want to work with the latest Execution History record. That's why the initial root block is decomposed into two: a new root Query block which works with Solutions, and a Lookup History block which retrieves the latest possible history record.

Iteration #5: Truncated Exponential Backoff
On the other hand, straightforward retries can be a source of accidental Denial-of-Service. It's a classic example of the "The road to hell is paved with good intentions" anti-pattern. To avoid this pitfall we can implement the truncated exponential backoff algorithm. It's not as hard as it sounds. The idea is to exponentially increase the delay time between retries until we reach the maximum retry count or the maximum backoff time. Optionally, we can add some amount of randomness when we compute the delay time, but that's not needed for our case. At the time of writing, the Connect capability of TIBCO Cloud Integration doesn't support a POW function (you can check that here). But we can emulate it with precomputed Lookup Table Values, since we know all the possible retry counter values. This is so-called memoization. And here's the updated map. Notable changes:

- We used the Sleep block from the Scribe Labs Tools Connector for suspending the work of the map
- The SEQNUM function was replaced by the SEQNUMN function: we created a "RetryCounter" named sequence, which we can work with in any further map blocks
- With the help of SEQNUMNGET we can peek at the current value of our named sequence without incrementing it (just like peeking a stack!)
- The LOOKUPTABLEVALUE2 function gets the precomputed power of 2 from the corresponding Lookup Table

Summary
In this article we learned:

- How to mark and reprocess all errors from a particular solution execution with the help of the Command block from the Scribe Platform API Connector
- How to implement retries with exponential backoff to prevent accidental Denial-of-Service: the Sleep block helped us with pausing the solution, and with Lookup Tables we overcame the absence of a POW function
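For readers who prefer to see the backoff logic outside of a map, here is a plain-Java sketch of the same truncated exponential backoff idea (the map above emulates the power-of-two step with a Lookup Table and the Sleep block). The retry count, base delay, and cap are illustrative values, and reprocessErrors() is a hypothetical stand-in for the MarkAllErrors/ReprocessAllErrors commands.

public final class TruncatedBackoff {

    public static void main(String[] args) throws InterruptedException {
        int maxRetries = 5;             // illustrative values, not from the article
        long baseDelayMillis = 1_000;   // first delay: 1 second
        long maxBackoffMillis = 16_000; // truncation cap

        for (int attempt = 0; attempt < maxRetries; attempt++) {
            if (reprocessErrors()) {
                System.out.println("Reprocessing succeeded on attempt " + (attempt + 1));
                return;
            }
            // delay = min(base * 2^attempt, cap) -- the "truncated" part
            long delay = Math.min(baseDelayMillis * (1L << attempt), maxBackoffMillis);
            System.out.println("Attempt " + (attempt + 1) + " failed, sleeping " + delay + " ms");
            Thread.sleep(delay);
        }
        System.out.println("Giving up after " + maxRetries + " attempts");
    }

    // Stand-in for marking and reprocessing errors via the Platform API;
    // succeeds roughly half the time, like the demo map.
    private static boolean reprocessErrors() {
        return Math.random() > 0.5;
    }
}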
-
How to correlate EMS messages in a request response scenario
Manoj Chaurasia posted an article in BusinessWorks
Table of Contents: Case 1, Case 2, Case 3, Document References, Troubleshooting, Information to be sent to TIBCO Support

Case 1
Consider the scenario where you are using a JMS Queue Requestor which sends a request and waits for a reply. Additionally, you have a corresponding process (say a JMS Queue Receiver) that receives these requests and sends back replies (Reply To JMS Message). The JMS request/reply activity uses temporary destinations to ensure that reply messages are received only by the process instance that sent the request. While sending each request, the JMS Queue Requestor creates a temporary queue for the reply. It then sends the temporary reply queue name along with the request message. The temporary queue name is unique for each process instance. If a static replyToQueue is specified, then all replies will be sent to the same queue and there is no guarantee that the correct reply will be received by the process instance that sent the request. You can use an expression for the replyToQueue to create different replyToDestinations for each request.

Case 2
In the scenario of Case 1, if you need to use constant destinations for all replies and you do not want to use temporary destinations, then instead of using a JMSQueueRequestor you need to do the following:

- use a pair of "JMS Queue Sender" and "Wait for JMS Queue Message" activities
- map the messageID of the JMS Sender as the event key of the "Wait for JMS" activity
- use the JMSCorrelationID header of the input message as the Candidate Event Key

Case 3
In a multi-engine environment, where you have multiple "Wait for JMS Message" activities listening on the same queue for reply messages, you should consider using "Get JMS Queue Message" instead. With multiple "Wait For" activities listening on the same queue, it is likely that the first requestor will be waiting for a reply it will never receive because the second requestor has already consumed the reply message. Since the candidate event key does not match the incoming message's event key, the message is discarded. In this case, the first requestor who sent out the request will never receive the reply. This is the default behavior of "Wait For" activities: when using "Wait For JMS Message" activities, a listener consumes all messages from the queue at engine startup and stores them in process memory. If one listener has already consumed the message, the other listener on the same queue will never receive it. The correct design is to use the "Get JMS Message" activity instead of the "Wait For JMS" activity. You can set the "selector" property of the "Get JMS Queue Message" activity to use the following XPath to correlate the request message with its reply message:

concat("JMSCorrelationID = '", $JMS-Queue-Sender/pfx:aEmptyOutputClass/pfx:MessageID, "'")

When using a message selector, the EMS server does the filtering of the message based on the selector and determines whether the message can be delivered to the particular "Get JMS Queue Message" activity. With the "Wait for JMS" activity, on the other hand, the message is delivered as soon as it arrives on the queue and the filtering is done at the job level, where the Candidate Event Key is matched against the incoming message's event key.
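For comparison, here is what the same correlation looks like in plain JMS code rather than BusinessWorks configuration. It is a minimal sketch: the requester sends a request, then consumes from the reply queue with a selector on JMSCorrelationID, assuming the replier copies the request's JMSMessageID into the reply's JMSCorrelationID (which is what the Reply To JMS Message activity is expected to do).

import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class CorrelatedRequestReply {

    // Send a request and receive only the reply whose JMSCorrelationID matches
    // the request's JMSMessageID -- the same filtering the selector above performs.
    public static String requestReply(Session session, Queue requestQueue,
                                      Queue replyQueue, String payload) throws JMSException {
        MessageProducer producer = session.createProducer(requestQueue);
        TextMessage request = session.createTextMessage(payload);
        request.setJMSReplyTo(replyQueue);
        producer.send(request);   // the provider assigns JMSMessageID on send

        String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
        MessageConsumer consumer = session.createConsumer(replyQueue, selector);
        TextMessage reply = (TextMessage) consumer.receive(30_000);   // wait up to 30 seconds

        consumer.close();
        producer.close();
        return reply == null ? null : reply.getText();
    }
}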
Document References
For details, please refer to the TIBCO ActiveMatrix BusinessWorks™ documentation: Palette Reference --> Chapter 9, JMS Palette.

Troubleshooting
If the correct replies are not received, review the process design. You can connect with the tibemsadmin tool and check the number of receivers on the queue by using:

show queue

You can enable tracing for message IDs and correlation IDs in tibemsd.conf:

track_message_ids = enabled
track_correlation_ids = enabled

Additionally, you can turn on detailed tracing for both the EMS server and the client as follows:

set server log_trace=DEFAULT,+PRODCONS,+MSG
set server client_trace=enabled
addprop queue trace=body (for both the request and reply queues)

Then check the messages that are sent by the server and received by the client.

Information to be sent to TIBCO Support
- Confirm the Admin/TRA/BW/EMS versions with hotfixes, if any.
- Please send the multi-file project and the deployed .ear file.
- EMS configuration files.
- Other output of EMS admin commands as and when requested by TIBCO Support.
-
Attached is a PPT in PDF form that covers a good amount of ground on X.509, PKI, and TLS/SSL. All browsers will validate a chain, but when building the chain, browsers pick the first certificate that matches based on the Distinguished Name. Many CA cert vendors are re-releasing 'same-named' CA certs, so the chain can be a 'false chain'. Why is this? It is cryptographically cheaper to parse a public key and certificate than it is to validate the signature, and it is not always possible to trace serial numbers, so browser vendors look at the DN/CN and pick the first one they find... Bob is Bob, even if the DNA is different? No. Sites are not under any obligation to send the full chain. I have many examples of partial chains, usually missing the self-signed root. Some sites are 'rooted' (pun intended) with a very old CA - X.509v1-based - and modern infrastructure may reject them for valid security reasons. TLS-TIBCOmmunity.pdf
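If you want to check a chain programmatically instead of trusting a browser's reconstruction, the standard Java PKIX validator makes the 'false chain' problem visible: validation fails unless the signatures actually chain to a trusted root. This is a minimal sketch, assuming the JRE's default cacerts trust store with its default password, and with revocation checking disabled for brevity.

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.CertPath;
import java.security.cert.CertPathValidator;
import java.security.cert.CertificateFactory;
import java.security.cert.PKIXParameters;
import java.security.cert.X509Certificate;
import java.util.List;

public class ChainCheck {

    // Validates a server chain (leaf first, then intermediates) against the JRE trust store.
    public static void validate(List<X509Certificate> chain) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        CertPath certPath = cf.generateCertPath(chain);

        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(
                System.getProperty("java.home") + "/lib/security/cacerts")) {
            trustStore.load(in, "changeit".toCharArray());   // default cacerts password
        }

        PKIXParameters params = new PKIXParameters(trustStore);
        params.setRevocationEnabled(false);   // keep the sketch simple; enable CRL/OCSP in practice

        CertPathValidator validator = CertPathValidator.getInstance("PKIX");
        validator.validate(certPath, params); // throws CertPathValidatorException on a bad chain
        System.out.println("Chain validates to a trusted root");
    }
}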
-
Digitizing Customer Experiences - A Microservices Perspective
Manoj Chaurasia posted an article in BusinessWorks
Use Case
This article focuses on Customer OnBoarding and how companies can leverage TIBCO's hybrid integration platform to digitize their Customer OnBoarding process. The demo runs on a Kubernetes cluster and showcases strengths such as DevOps compliance, elastic scaling, API-led design, and many other factors. The key components of this demo can be found below.

Assets
- CustomerOnBoarding Kubernetes Setup & Use Case: this video explains how the flow of the use case is set up and how microservices running in containers are used to provide compelling customer experiences
- Elastic Scaling: optimizing infrastructure costs and attaining operational excellence is something every customer is looking for, and elastic scaling addresses both
- Hystrix Monitoring: a demo that walks through how you can set up circuit breaker patterns with zero coding and ensure your system is ready for failures
- Configuration Management & Service Discovery: microservices are more than just containers; it is about embracing an entire ecosystem that includes open-source tooling like Consul & Eureka for service discovery & configuration management

Microservices Patterns
- Polyglot Persistence - multiple data sources (Cassandra & PostgreSQL)
- Service Discovery - discover distributed services by name (Consul)
- Config Management - manage deployment configuration outside the application (Consul)
- Circuit Breaker - prevent service failures from cascading to others (Hystrix)
- DevOps - CI/CD pipeline: automated deployment (Maven & Jenkins)

Runtime Considerations
- Container Management System - automated deployments to Kubernetes
- Elastic Scaling - scale on demand both horizontally & vertically (Auto Scaling Groups in AWS & Kubernetes)

Key Technology components:
1) TIBCO BusinessWorks Container Edition
2) TIBCO Mashery
3) TIBCO BusinessWorks 6.X
4) Consul
5) Jenkins
6) Kubernetes on AWS
-
Table of Contents: Getting Started, Development - TIBCO BusinessWorks Container Edition, Deployment - BusinessWorks Container Edition, Development - TIBCO Cloud Integration

This article walks through an example using TIBCO Cloud™ Messaging, TIBCO Cloud™ Integration, and TIBCO BusinessWorks™ Container Edition together to create a basic pub/sub app. Knowledge of TIBCO Cloud Integration and TIBCO BusinessWorks Container Edition will be helpful.

Getting Started:
There are a few things we need to do before we can develop our applications. The first is making sure that we have all the necessary components. You can get a free trial of TIBCO Cloud Messaging and TIBCO Cloud Integration from cloud.tibco.com, so sign up for those. Also, make sure you have the TIBCO BusinessWorks Container Edition studio available. Your TIBCO Cloud Messaging trial broker may take a little while to become active; make sure the status says active before starting. Generate a key (under Authentication Keys); we will be using this key to connect with TIBCO Cloud Messaging. Now, under Download SDKs, download the Java/Android SDK. Unzip the file; under the lib folder there's a jar file called 'tibeftl.jar'. We will be using this later.

Development - TIBCO BusinessWorks Container Edition:
Let's start by opening up our TIBCO BusinessWorks Container Edition studio. Create a new BusinessWorks application; you can call it whatever you want, but in this example I will call it 'tcm.publish'. Next, let's convert our project to a Java project. Right-click on the .module project, navigate down to 'Configure' and select 'Convert to Java Project'. Once converted, you should see a folder/library under the .module project called 'JRE System Library [TIBCO JRE]'. Right-click on it and navigate down to Properties. This will pop up a window where you can select your JRE System Library. Change the execution environment to J2SE-1.5 (TIBCO JRE) and hit OK. Now, under your .module project, right-click on the lib folder. Navigate to Import -> Import; this will open a window prompting you to import certain files. In this case, we will import a 'File System', so let's select that. A new window will pop up that lets you import a file system from a local directory. Select the directory that has the TIBCO Cloud Messaging client you downloaded earlier (eftl-3.3.2-java). Within that directory, find the tibeftl.jar file (it should be in the lib folder) and select it. Once done, hit Finish. Now under your .module project, expand the Module Descriptors. You will see a descriptor called 'Dependencies'; double-click it. This will open a new window that lets you add packages to your project. Click on Add and a 'Package Selection' window should pop up. Type in 'com.tibco.bw.palette.sh' and select the palette that appears. Click OK and save; your project should now have that jar file under 'Plug-in Dependencies'. Let's create our REST service. Click on the little globe with a cloud on the left-hand side of your screen. This will open up the REST service wizard. Give your resource a name (I chose 'tcmpublish') and set the Resource Service Path to '/tcmpublish/{text}' (don't include the quotes). Change the operation from POST to GET (only GET should be selected). Once done, hit Finish. Your REST service will now be generated; we will need to configure it more, but for now let's leave it be. Now let's drag and drop the JavaInvoke activity, which can be found within your palette library.
Your project should now look something like this. Click on your JavaInvoke activity; you should see the properties for the activity. Under the General tab within the Properties tab, you should see a field called 'Java Global Instance'; click on the magnifying glass next to it. This will bring up a new window to create a Java Global Resource. Create the resource. You should now see your 'Java Global Instance' field filled with your Global Resource. Save your project. Now, under your .module project, find the src folder, right-click it and select New -> Package. This will bring up a window to create a new Java package. In this example I called it 'com.tibco.bw.palette.tcm'. Let's go back to our 'Java Invoke' activity properties. Under the General tab (the same place you created the Java Global Instance), create a new class. This is done by clicking on the green C within the 'Class Name' parameter. A new window will pop up where you configure your Java class. We need to fill out the Package (should be the name of the Java package you just created) and the Name (can be anything) values. Once done, hit Finish (see the example in the screenshot below). Now under the src folder you should see your class. Double-click on the .java file. This will open it up within the studio; it should be relatively empty, showing only the package name and class. Now let's edit this file so that we can use it. Assuming you have followed the guide step by step (with the same names), you can just copy and paste this:

package com.tibco.bw.palette.tcm;

import java.util.HashMap;
import java.util.Properties;

import com.tibco.eftl.Connection;
import com.tibco.eftl.ConnectionListener;
import com.tibco.eftl.EFTL;
import com.tibco.eftl.Message;

public class TCMConnection {

    HashMap<String, Object> moduleProperties = new HashMap<String, Object>();
    Properties tcmProps = new Properties();

    private String authKey = "";   // TCM authKey
    private String clientId = "";  // TCM clientId (this can be anything)
    private String url = "";       // TCM connection URL

    private Connection tcmConnection;

    public TCMConnection() {
        final Properties props = new Properties();

        // set the password using your authentication key
        props.setProperty(EFTL.PROPERTY_PASSWORD, authKey);

        // provide a unique client identifier
        props.setProperty(EFTL.PROPERTY_CLIENT_ID, clientId);

        // connect to TIBCO Cloud Messaging
        EFTL.connect(url, props, new ConnectionListener() {
            public void onConnect(Connection connection) {
                if (connection != null) {
                    tcmConnection = connection;
                }
                System.out.printf("connected\n");
            }

            public void onDisconnect(Connection connection, int code, String reason) {
                System.out.printf("disconnected: %s\n", reason);
            }

            public void onReconnect(Connection connection) {
                System.out.printf("reconnected\n");
            }

            public void onError(Connection connection, int code, String reason) {
                System.out.printf("error: %s\n", reason);
            }
        });
    }

    public String getAuthKey() {
        return authKey;
    }

    public String getClientId() {
        return clientId;
    }

    public String getUrl() {
        return url;
    }

    public Connection getTCMConnection() {
        return tcmConnection;
    }

    public void sendMessage(String event, String text) {
        final Message message = tcmConnection.createMessage();
        message.setString("event", event);
        message.setString("text", text);
        tcmConnection.publish(message, null);
    }
}

We need to edit the following variables in the code: authKey, clientId, and url. The authKey and url come from your TCM authentication keys, while the clientId can be anything as long as it's unique.
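Before wiring the class into the Java Invoke activity, you can sanity-check the authKey, clientId, and url values with a small stand-alone test. This is a hypothetical sketch, not part of the original guide; it only uses the methods defined in TCMConnection above, and the short sleep simply gives the asynchronous EFTL.connect callback time to fire.

package com.tibco.bw.palette.tcm;

public class TCMConnectionTest {

    public static void main(String[] args) throws InterruptedException {
        TCMConnection connection = new TCMConnection();  // connects in the constructor
        Thread.sleep(2000);                              // give the async connect a moment

        if (connection.getTCMConnection() != null) {
            // same field names the BW process maps later: event and text
            connection.sendMessage("lambdainvoke", "hello from a local test");
            System.out.println("Test message published");
        } else {
            System.out.println("Not connected - check authKey, clientId and url");
        }
    }
}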
Now, back under the .module application, find the Resources folder, expand it and click on the Java Global resource. A new tab/window should open up that lets you configure the instance. Next to the Class field in that window, click on Browse. A new window will pop up; search for the class you created (if you named everything the same as this guide, you can search 'com.t' and it should pop up). Select it and hit Finish; you should now see your Class parameter filled. Now select the Method (you should only have one choice). Once done, save your project. Navigate back to your Java Invoke properties. Under the General tab (where you configured the Java Global Instance), hit the reload button on the same line as the Class Name field. Once reloaded, the Class Name field should be filled with something like 'com.tibco.bw...' and under the Method drop-down menu you should be able to select the sendMessage method. Let's now configure the input. Go to the Input tab within the Java Invoke activity. We need to map two parameters, event and text. For event we can type the value 'lambdainvoke'; for text, we will drag and drop the 'text' data source from the GET invoke text parameter. Save your project (example below). You should no longer have an error message on your Java Invoke activity. Let's finish this up by mapping the input for the REST service. Click on your Reply activity on the design canvas and, within the properties, go to the Input tab. Here we need to map the response item; for the sake of simplicity you can just copy the following input (as long as you followed along):

concat("Published message: ", $get/parameters/tns1:tcmpublishGetParameters/tns1:text, " to topic called demotopic")

Let's configure our HTTP connection now. Go to the Resources folder within the .module project and click on the HTTP connection resource. Change the port property from a literal value to a module property. You should now see the port value replaced with 'BW.CLOUD.PORT'. Save your project. The design portion of the project is done.
Now let's start building our TCI app! Create a new TCI app by clicking the 'Create' button. A pop-up will appear asking you to fill in a name; let's call this app 'TCM-Application'. Choose to create a Flogo app, and afterwards click on the option to 'Create a flow'. A window will pop up where you can enter the name of the flow; in this case, let's call it 'TCM Subscriber', and hit Next. Afterward, you will have the option of starting your flow blank or with a trigger; pick a trigger, select "Message Subscriber", and hit Next. Choose the connection (there should only be one) and finish. You should now see something like this: Click on your TCM Subscriber flow. You should see one activity called "MessageSubscriber" with one error; essentially it's telling you that it still needs to be configured. Click on that activity, go to Output Settings, and enter the following schema: { "text": "String" }. The subscriber activity should now be fully configured (the error goes away). Now let's add an activity to this flow. Next to the MessageSubscriber you should see a blue box (you may need to move your mouse around); click on it and you will get the choice to add a new activity. Choose the Log Message activity under the General tab. Now let's configure it. Click on the newly created Log activity and go to the Input tab. Set the message value to $TriggerData.message.text. Now we can push our app; it should take less than a minute. If successful, you should see the app marked as "running". Now let's test the entire flow. Go back to where your BWCE application was deployed (Docker, a different PaaS, etc.) and run a test command in the Swagger interface. You should get a 200 response (just like when you tested it the first time). Now let's check the logs of our TCI project. We should see the message that we wrote within the TCI logs. If you do, then everything was set up correctly and is running. This is just a simple example of how you could use BWCE, TCM, and TCI together to build a pub/sub project. Obviously, in real-life solutions you could and would do more with it, but this guide gives an idea of how the pieces fit together. And we can do some other cool things with this sample project, maybe wrap the BWCE endpoint within a lambda call using Flogo. The ideas are endless!
-
This is a short guide on how to run the JDBC basic sample using TIBCO BusinessWorks™ Container Edition with Docker. The complete info can be found in the TIBCO BusinessWorks documentation; the purpose of this post is only to add extra info that can be helpful for running the sample. The sample uses Oracle by default; we use MySQL because it is the database used by the monitoring application by default, so we already have the container running on the machine.

The first step is to start the MySQL container. We use the docker-compose.yml file provided with the software. This compose file specifies four containers (the monitoring app, MySQL, Postgres, and MongoDB). Only MySQL is required for this sample, so feel free to comment out or delete the other containers or any lines not needed.

Note: we have modified the default file to add a network (my_network) so that the MySQL and monitoring app containers are on the same network and the monitoring app can talk to MySQL directly. The monitoring app is not used in this sample, but the same concept is used to link the sample JDBC application with the MySQL container at runtime.

version: '3.0'
services:
  mysql_db:
    image: mysql:5.5
    container_name: mon-mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: bwcemon
      MYSQL_ROOT_PASSWORD: admin
    volumes:
      - mysql_data:/var/lib/mysql
      - ./dbscripts/mysql:/docker-entrypoint-initdb.d
    networks:
      - my_network
  postgres_db:
    image: postgres:latest
    container_name: mon-postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: bwcemon
      POSTGRES_PASSWORD: admin
    volumes:
      - postgres_data:/var/lib/postgres
      - ./dbscripts/postgres:/docker-entrypoint-initdb.d
    networks:
      - my_network
  mon_app:
    build: .
    ports:
      - "8080:8080"
    #links:
    #  - mysql_db
    #  - postgres_db
    environment:
      DB_URL: mysql://admin:admin@mon-mysql:3306/bwcemon
      PERSISTENCE_TYPE: mysql
      #DB_URL: postgresql://admin:admin@mon-postgres:5432/bwcemon
      #PERSISTENCE_TYPE: postgres
    networks:
      - my_network

volumes:
  #mongo:
  mysql_data:
  postgres_data:

networks:
  my_network:

To start the containers defined in the file, run the following from the folder containing the yml file:

docker-compose up -d

Note: in this case, the folder containing the file is bwce-mon, so the folder name is used as a prefix for the network. Run docker-compose up in that folder so you can use the relative paths used for the db scripts. We can see the MySQL container running:

docker container ls
CONTAINER ID   IMAGE             COMMAND                   CREATED          STATUS          PORTS                    NAMES
d8f187397115   mysql:latest      "docker-entrypoint.s…"    39 minutes ago   Up 39 minutes   0.0.0.0:3306->3306/tcp   mon-mysql
3dd22c7b8ab6   bwcemon_mon_app   "npm start"               39 minutes ago   Up 39 minutes   0.0.0.0:8080->8080/tcp   bwcemon_mon_app_1

The compose file exposes the MySQL port 3306 on the host. In this way, the database can be accessed externally by applications that are not in the same Docker network (bwcemon_my_network). You can use SQL Developer to browse the database. Now we can use the Business Studio to run our sample. The attached zip file contains the sample modified to use the MySQL database. The only changes are to use the MySQL driver and the URL string:

jdbc:mysql://localhost:3306/bwcemon

The hostname is localhost. This is important: it means our application is connecting to MySQL on the exposed port on the host.

Note: we're using the bwcemon database used by the monitoring app. This is just for simplicity; feel free to create another one.
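As a quick sanity check (not part of the sample), you can verify from plain Java that the MySQL container is reachable on the exposed host port before running the sample in the Studio. The root/admin credentials come from the compose file above, and the MySQL Connector/J driver is assumed to be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MySqlCheck {
    public static void main(String[] args) throws Exception {
        // Same URL used by the modified sample; root password set in docker-compose.yml
        String url = "jdbc:mysql://localhost:3306/bwcemon";
        try (Connection conn = DriverManager.getConnection(url, "root", "admin");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT VERSION()")) {
            if (rs.next()) {
                System.out.println("Connected to MySQL " + rs.getString(1));
            }
        }
    }
}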
How to install the MySQL driver for local testing is not shown here; it's the same procedure used for TIBCO BusinessWorks 6 and it's explained in the documentation. Once we've checked that the sample runs fine at debug time, we can move to the next step and create a container for our application. Remember to set Docker as the container platform before creating the ear file, and set the docker profile as default. To use the JDBC driver in our container we need to add it to the TIBCO BusinessWorks Container Edition runtime image (instructions on how to build this image the first time are in the doc). Move to the folder /bwce/2.3/config/drivers/shells/jdbc.mysql.runtime/runtime/plugins and copy the folder com.tibco.bw.jdbc.datasourcefactory.mysql into the same directory where you have the following Dockerfile:

FROM tibco/bwce:latest
COPY com.tibco.bw.jdbc.datasourcefactory.mysql /resources/addons/jars/com.tibco.bw.jdbc.datasourcefactory.mysql

This is only done to avoid inserting the full path in the COPY statement if the Dockerfile is in a different folder. tibco/bwce:latest is the default TIBCO BusinessWorks Container Edition image. We are going to create a new image adding another layer:

docker build -t tibco/bwce_jdbc .

tibco/bwce_jdbc is the name we chose for this image. The '.' specifies that the Dockerfile in the current folder should be used. Now we can create a new image (or modify the existing one, your choice) by adding the ear file. As with the previous image, the simplest way is to have the Dockerfile and the ear in the same folder:

FROM tibco/bwce_jdbc:latest
MAINTAINER Tibco
ADD tibco.bwce.sample.palette.jdbc.Basic.application_1.0.0.ear /

So:

docker build -t jdbc_mysql .

In this case, I called my image jdbc_mysql. The name can of course be changed. Now that we have an image with the JDBC driver and the ear, we can create a container. Also in this case I use a compose file:

version: '3.0'
services:
  bwce-jdbc-basic-app:
    image: jdbc_mysql
    container_name: bwce-jdbc-basic-app
    environment:
      DB_USERNAME: admin
      DB_PASSWORD: admin
      DB_URL: jdbc:mysql://mon-mysql:3306/bwcemon
networks:
  default:
    external:
      name: bwcemon_my_network

There are 3 important things to note:

- The image name is jdbc_mysql. If you changed it in the previous step, update the value in the compose file.
- In the DB URL jdbc:mysql://mon-mysql:3306/bwcemon, mon-mysql is used as the hostname (in the Studio it was localhost). In this case the container connects directly to the MySQL container, which is possible because they are on the same network. It works even if the MySQL port is not externally exposed.
- bwcemon_my_network is added at the end of the file to specify the use of an existing network.

So let's run this container:

docker-compose up -d

To check that it is running:

docker container ls
CONTAINER ID   IMAGE             COMMAND                   CREATED             STATUS             PORTS                    NAMES
51dcafe10386   jdbc_mysql        "/scripts/start.sh"       46 seconds ago      Up 45 seconds                               bwce-jdbc-basic-app
d8f187397115   mysql:latest      "docker-entrypoint.s…"    About an hour ago   Up About an hour   0.0.0.0:3306->3306/tcp   mon-mysql
3dd22c7b8ab6   bwcemon_mon_app   "npm start"               About an hour ago   Up About an hour   0.0.0.0:8080->8080/tcp   bwcemon_mon_app_1
We can check the appnode logs:

docker container logs bwce-jdbc-basic-app

It's also possible to check that the containers are in the same network:

docker network inspect bwcemon_my_network

A subset of the output of the previous command shows that the containers mon-mysql and bwce-jdbc-basic-app are in the same network:

"Containers": {
    "3dd22c7b8ab6a73798057f9357f421bc0192c2ccee85f9b3968cd30423058dcc": {
        "Name": "bwcemon_mon_app_1",
        "EndpointID": "363b6509560449dcb660654f707d3a6e309ae9777b4bba487d5569343793486f",
        "MacAddress": "02:42:ac:17:00:03",
        "IPv4Address": "172.23.0.3/16",
        "IPv6Address": ""
    },
    "51dcafe103867f4712a35952a26b85f90c42dd54f9820ab313a9ab8e94d928fd": {
        "Name": "bwce-jdbc-basic-app",
        "EndpointID": "92b542ca3f33edea791fa5321a482b43c05b0917f1fa75f6fb3232fe5308289e",
        "MacAddress": "02:42:ac:17:00:05",
        "IPv4Address": "172.23.0.5/16",
        "IPv6Address": ""
    },
    "c2ff062453e8fee8c465bce700eaf196e48c445acb1b8f993a7dbece76ea0717": {
        "Name": "mon-postgres",
        "EndpointID": "58b5907acc900fd40d85c28dc980e4bcab0a8eea2c24eb9ee8b792a7d7ac3ba6",
        "MacAddress": "02:42:ac:17:00:02",
        "IPv4Address": "172.23.0.2/16",
        "IPv6Address": ""
    },
    "d8f18739711506581c4338acb599284c859d16a52b3698d5fac8a1aab3b9b5ce": {
        "Name": "mon-mysql",
        "EndpointID": "5d093629cb99c30450580ee001b2bb9fdb3cb638eb723d7ed501fbeae7f376ee",
        "MacAddress": "02:42:ac:17:00:04",
        "IPv4Address": "172.23.0.4/16",
        "IPv6Address": ""
    }
}

This is only one of the possible configurations for running the sample. Having both containers in the same network is an easy way for them to communicate in a simple setup. Using a compose file is the best option for running a container: you have more control over the parameters used, and the same file can also be used in a multi-node environment using Docker Swarm. Hope this guide is helpful. bwce-mon.7z tibco.bwce_.sample.palette.jdbc_.7z tibco.bwce_.sample.palette.jdbc_.basic_.7z
-
How to Reprocess Failed Transactions in TIBCO BusinessWorks™
Manoj Chaurasia posted an article in BusinessWorks
Table of Contents: Reprocessing of Failed Transactions, Monitoring of JVM Parameters

TIBCO BusinessWorks™ is a Java-based platform; however, normally very little development is done in Java. At its heart, TIBCO BusinessWorks is an XSLT processing engine with lots of connectivity components.

REPROCESSING OF FAILED TRANSACTIONS
- Write a rulebase to verify the log for reprocessing failed transactions.
- Select the TraceLevel method in the EventLog microagent for the logging event.
- Provide values for the conditions to be monitored in the Test Editor.
- The alert message is set to display errors.

Note: a .hrb file is created for the reprocessing of failed transactions.

MONITORING OF JVM PARAMETERS
Monitoring of JVM parameters in TIBCO requires a procedure similar to the one used for monitoring memory and virtual memory. Please refer to the previous posts: MONITORING OF MEMORY, VIRTUAL MEMORY and Monitoring of Threads. -
This article is focused on setting up an EKS cluster and the possible pitfalls that you may experience while doing so. Hopefully, this will be helpful in setting up your own EKS cluster! We will focus on some of the major milestones in the setup. To get started, we suggest looking at the official documentation, https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html. If you follow this, you should be able to get everything set up, but issues may arise, so I'll list the most common ones below.

Possible issues:
- The access key and secret access key haven't been set yet. In order for your computer to connect to the EKS cluster it needs these keys to authenticate you as the actual user. These keys can be set by running "aws configure" within your terminal. Please keep in mind that this stores your keys, so only do this on a private computer that only you have access to; your keys essentially give access to your account.
- Have the proper versions of the required CLIs. This is mainly focused on kubectl (1.10) and the AWS CLI (1.15). Older versions of the AWS CLI do not support EKS functions. Upgrading the AWS CLI can be a pain if you do not have the newest versions of Python 2 or 3 along with pip, but it must be done.
- Make sure you don't skip the heptio-authenticator step in the getting-started guide. This is very important to install, or else your cluster won't authenticate your CLI requests.
- Make sure the name of your config file matches the name of your cluster. This makes it easy to manage in case you have multiple K8s config files. Also, make sure to export that config file to KUBECONFIG in either your bash_profile or bashrc file. That way you don't need to export it every time you open up a new terminal session.
- Create proper policies and roles for security reasons. Don't assign your cluster administrative rights because you are being lazy and can't be bothered to create a new policy. Protect your cluster! Create appropriate policies!

These are just a few things that may come up. If you're a beginner, we suggest just using the web UI to create your cluster and set up your roles and policies. This simplifies the process and makes it much more intuitive. Also, you have the choice to create a new VPC or use an existing one. We suggest using an existing one since it has everything you need on it (you don't want to accidentally forget something). After you've set up your control plane, you should see something like this. We will use the certificate authority, cluster ARN, and API server endpoint for some of the config files, so keep note of them (follow the getting-started guide). After you set that up, you will need to deploy your worker nodes on your AWS account. This is done with a CloudFormation script (provided on the getting-started page); just fill in the parameters it asks for. This should take 5-10 minutes to deploy. Once done, on the CloudFormation page, navigate to the Outputs tab. Keep a note of this value as you will need it when binding your worker nodes to your control plane. Continue following the getting-started guide. At the end of it, you should be able to run "kubectl get svc" and get an output that shows your Kubernetes service. If not, or if you get an error, check to make sure you've downloaded and installed the heptio-authenticator correctly, and that whatever role/policy combination you are using has the right permissions.
If you do see a service, that means your EKS cluster is up and running and you can start deploying projects onto it. If you would like a UI to work with, follow this guide: https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html. I suggest it for beginners; it also makes the cluster easier to demo and talk about (more interesting).
-
TIBCO BusinessWorks™ Container Edition with Amazon EKS
Manoj Chaurasia posted an article in BusinessWorks
AWS recently made Amazon EKS generally available to the public in us-east-1 (N. Virginia) and us-west-2 (Oregon), with more regions to come in the future. Essentially, EKS is an easy way to deploy a Kubernetes cluster on AWS where you don't have to manage the control plane nodes; all you need to worry about are the worker nodes, which makes the cluster a lot easier to handle. Also, other AWS services integrate directly with EKS: if you plan to use ECR as your repository, you no longer need to worry about access tokens, and you can use CloudWatch for more control on the management/logging side. Either way, you are staying within the AWS ecosystem. TIBCO BusinessWorks™ Container Edition (BWCE) was built to work on any PaaS/IaaS, and Amazon EKS is no different. If you've built BWCE applications for other PaaS environments (Kubernetes or something else) and want to deploy them to EKS, it's just a matter of taking the EAR file generated from BWCE and pushing it to EKS. There is no need to go back into BusinessWorks Studio to refactor or rebuild; the application natively works as built. This way you get the benefits of Amazon's cloud deployment knowledge and experience coupled with the same CI/CD pipeline you use today, regardless of deployment location. Here's a short community post on setting up your EKS cluster with notes on possible issues you may face. The video below goes over how to deploy your BusinessWorks Container Edition application to Amazon EKS. In the future we will also post more advanced videos that highlight specific features! More Advanced Topics: Config Maps on EKS:
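If you want a rough idea of what this looks like from the command line, here is a sketch; the image names, manifest file, account ID, and region are placeholders, and the ECR login step assumes AWS CLI v1 syntax.

# Build the BWCE application image from the EAR you already have
docker build -t bwce-demo:1.0 .

# Log in to ECR (AWS CLI v1) and push the image; account ID and region are placeholders
$(aws ecr get-login --no-include-email --region us-east-1)
docker tag bwce-demo:1.0 <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/bwce-demo:1.0
docker push <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/bwce-demo:1.0

# Deploy to EKS with your own Kubernetes manifest that references the pushed image
kubectl apply -f bwce-demo-deployment.yaml
kubectl get pods
-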
TIBCO BusinessWorks™ provides REST samples, but they are fairly complicated. Here are much simpler examples using only the File and XML palettes, with step-by-step instructions and ready-made project examples. 1. Simple REST example: download the Word document with pictures and the project in the zip file testrestexample.zip from Resources below. 2. A more complicated example with a multi-operation subprocess; this example continues from step 1. Download the Word document with pictures and the project in the zip file testrestexamplewithsubprocess.zip from Resources below. testrestexample.zip testrestexamplewithsubprosess.zip
-
Amazon recently announced AWS Fargate during re:Invent 2017. With Fargate, instead of having to use EC2 instances (VMs), you can use just a container: Fargate provisions containers within the platform itself for your applications, without making you deal with the underlying infrastructure. By using a combination of ECS and Fargate, you no longer have to worry about keeping EC2 instances up to date with the latest security patches. Amazon manages the Fargate platform while still allowing you some control to manage your applications. Of course, there are use cases where Fargate won't be the right choice. If your application requires bridge networking, Fargate doesn't support that, so you would have to use the traditional ECS + EC2 instances model for those container deployments. Or, if you want control of the instances that are running your containers, EC2 would be a better choice. But Amazon has done a good job of allowing users to run both Fargate and traditional EC2 instance deployments on the same cluster at the same time. That being said, TIBCO BusinessWorks™ Container Edition lets you deploy applications to ECS using both "backend" models: ECS + EC2 and ECS + Fargate can be used as deployment platforms with few changes. This ties into the idea that TIBCO BusinessWorks Container Edition was built to work on your PaaS and IaaS of choice, and even though Fargate was announced and released only a few days ago (November 30th), TIBCO BusinessWorks™ Container Edition applications work on it from day one. Here's a simple video that walks through this process from application design to deployment on ECS with Fargate:
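For a rough idea of what an ECS + Fargate deployment looks like from the AWS CLI, here is a sketch; the cluster name, task definition file, subnet, and security group IDs are placeholders, and the task definition must use the awsvpc network mode since, as noted above, bridge mode is not supported on Fargate.

# Create (or reuse) an ECS cluster; with Fargate there are no EC2 instances to manage
aws ecs create-cluster --cluster-name bwce-fargate-demo

# Register a task definition that points at your BWCE container image (file name is a placeholder)
aws ecs register-task-definition --cli-input-json file://bwce-task-definition.json

# Create the service with the FARGATE launch type; subnets and security groups are placeholders
aws ecs create-service --cluster bwce-fargate-demo --service-name bwce-app \
  --task-definition bwce-task-definition --desired-count 1 --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123abcd],securityGroups=[sg-0123abcd],assignPublicIp=ENABLED}"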
-
TIBCO BusinessWorks™ Container Edition on Alicloud
Manoj Chaurasia posted an article in BusinessWorks
Table of Contents
Prerequisite
Procedure
Create Application
Create Cluster
Set the basic application information
Click Create with Image
Once the container is started you will see the service getting created along with the application
Click on Services and you will get details about the service
Click on the endpoint and append swagger to the URL
Provide input in Swagger and check the output

Prerequisite
- Create a sample TIBCO BusinessWorks™ Container Edition application and create a Docker image of that application.
- Push the application image to Docker Hub.
- An Alibaba Cloud account with Alibaba Container Service enabled.

Procedure

Create Application

Create Cluster
1. Log on to the Container Service console.
2. Click Clusters in the left navigation pane, and then click Create Cluster in the upper-right corner.
3. Enter the basic information of the cluster. Cluster Name: the name of the cluster to be created.
4. Set the network type of the cluster. You can set the network type to Classic or VPC; the corresponding ECS instances and other cloud resources are managed under the corresponding network environment. If you select Classic, no additional configuration is required. The classic network is a public basic network uniformly planned by Alibaba Cloud; the network address and topology are assigned by Alibaba Cloud and can be used without special configuration. If you select VPC, you need to configure the relevant information. VPC enables you to build an isolated network environment based on Alibaba Cloud. You have full control over your own virtual network, including a free IP address range, network segment division, route table, gateway configuration, and so on. You need to specify a VPC, a VSwitchId, and the starting network segment of a container (the subnet segment to which the Docker containers belong; for convenience of IP management, the containers of each virtual machine belong to a different network segment, and the container subnet segment should not conflict with the virtual machine segment). It is recommended that you build an exclusive VPC/VSwitchId for the container cluster to prevent network conflicts.
5. Add nodes. You can create a cluster with nodes, or create a zero-node cluster and then add existing nodes to it. For information about how to add existing nodes to the cluster, refer to Add an existing ECS instance.
6. Set the operating system of the nodes. Operating systems such as 64-bit Ubuntu 14.04 and 64-bit CentOS 7.0 are supported.
7. Configure the ECS instance specifications. You can specify different instance types and quantities, the capacity of the data disk (the ECS instance has a 20 GB system disk by default), and the logon password. If you set the network type to VPC, the Container Service configures an EIP for each ECS instance under the VPC by default. If this is not required, select Do Not Configure Public EIP; however, you will then need to configure the SNAT gateway.
8. Create a Server Load Balancer instance. When a cluster is created, a public network Server Load Balancer instance is created by default. You can access the container applications in the cluster through this Server Load Balancer. This is a Pay-As-You-Go Server Load Balancer instance.
9. Click Create Cluster. After the cluster is successfully created, you can see it in the cluster list.

Create the application:
1. Log on to the Container Service console.
2. Click Applications in the left navigation pane and click Create Application in the upper-right corner.
3. Set the basic application information. Name: the name of the application to be created.
It must contain 1 to 64 characters and can be composed of numbers, Chinese characters, English letters, and hyphens (-).
Version: the version of the application to be created. By default, the version is 1.0.
Cluster: the cluster to which the application will be deployed.
Update Method: the release method of the application. You can select Standard Release or Blue-Green Release.
Description: information about the application. It can be left blank and, if entered, cannot exceed 1,024 characters. This information is displayed on the Application List page.
Pull Docker Image: when selected, Container Service pulls the latest Docker image from the registry to create the application, even when the tag of the image has not changed. To improve efficiency, Container Service caches the image; at deployment, if the tag of the image matches that of the local cache, Container Service uses the cached image instead of pulling it from the registry. Therefore, if you modify your code and image but do not change the image tag, Container Service will use the old image cached locally to deploy the application. When this option is selected, Container Service ignores the cached image and re-pulls the image from the registry whether or not the tag matches the cached image, ensuring that the latest image and code are always used.

Click Create with Image
- Set the Image Name and Image Version. Set the Image Name to the Docker Hub image that we have already pushed to Docker Hub.
- Set the number of containers (Scale).
- Set the Network Mode. Currently, the Container Service supports two network modes: Default and host. If you do not set this parameter, the Default mode is used.
- Set the Restart parameter, namely whether to restart the container automatically in case of an exception.
- Set the launch command (Command and Entrypoint) of the container. If specified, this overwrites the image configuration.
- Set the resource limits (CPU Limit and Memory Limit) of the container.
- Set the Port Mapping, Web Routing, and Load Balancer parameters. Note: add web routing and map the container port to the domain name of your choice, so that once the container is running, users can access the application by domain name (a domain under <region-name>.alicontainer.com).
- Set the container Data Volume.
- Set the Environment variables.
- Set the container Labels.
- Set whether to enable container Smooth Upgrade.
- Set the container Across Multiple Zones settings. You can select Ensure to distribute the containers in two different zones; if you select this option, container creation fails if there are fewer than two zones in the current cluster or if the containers cannot be distributed in two different zones due to limited machine resources. If you select Try best, the Container Service will distribute the containers across two different zones as long as possible, and the containers will still be created successfully even if they cannot be deployed in two different zones. If you do not set this setting, the Container Service distributes the containers in a single zone by default.
- Set the container Auto Scaling rules.
- Click Create, and the Container Service creates the application according to the preceding settings.

Once the container is started, you will see the service getting created along with the application. Click on Services and you will get details about the service. Click on the endpoint and append swagger to the URL. Provide input in Swagger and check the output.
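For reference, the prerequisite image push and the final Swagger check can be scripted roughly as below; the repository name and application domain are placeholders (the actual domain is whatever you mapped under Web Routing).

# Build the BWCE application image and push it to Docker Hub (prerequisite)
docker build -t <dockerhub_username>/bwce-alicloud-demo:1.0 .
docker login
docker push <dockerhub_username>/bwce-alicloud-demo:1.0

# Once the application is running, append /swagger to the service endpoint to open the Swagger UI
curl http://<app-name>.<region-name>.alicontainer.com/swagger
-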
This document highlights the various components included in the TIBCO ActiveMatrix BusinessWorks™ Managed File Transfer palette and is intended to help you understand how to use the palette. It is supplementary material and is not intended to replace the existing documentation. This document is applicable to TIBCO® Managed File Transfer Command Center, Internet Server, Platform Server, and ActiveMatrix BusinessWorks. Microsoft Word - TIBCO MFT - BusinessWorks MFT Palette.docx.pdf
-
TIBCO ActiveMatrix BusinessWorks™ does not natively support Web Services Addressing (WS-Addressing) as of version 5.7.x. The underlying XML engines are flexible, however, and you can create WS-Addressing-compatible WSDLs and map into and out of the WS-Addressing elements and attributes. Creating a new listener is not automatic, and any of the underlying functions will have to be architected into a solution. This document only covers the WS-Addressing schema structure within ActiveMatrix BusinessWorks. It covers the specific case of testing ActiveMatrix BusinessWorks against the Axis2 (v1.4.1) WsaMappingTest, which is basically an "echo" model. This approach can also be used to build out other Web Services functions, such as SAML or Web Services Security. WS-Addressing_in_BW.docx
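To make the header structure concrete, here is a minimal sketch of a WS-Addressing 1.0 request sent with curl; the endpoint URL, action URI, and echo payload are hypothetical, and only the standard wsa elements from the http://www.w3.org/2005/08/addressing namespace are shown.

# Hypothetical echo endpoint; the wsa header elements are what you would map into/out of in BW
curl -s -X POST http://localhost:9000/EchoService -H 'Content-Type: text/xml' --data '
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <soapenv:Header>
    <wsa:To>http://localhost:9000/EchoService</wsa:To>
    <wsa:Action>urn:echo</wsa:Action>
    <wsa:MessageID>urn:uuid:11111111-2222-3333-4444-555555555555</wsa:MessageID>
    <wsa:ReplyTo>
      <wsa:Address>http://www.w3.org/2005/08/addressing/anonymous</wsa:Address>
    </wsa:ReplyTo>
  </soapenv:Header>
  <soapenv:Body>
    <echo>hello</echo>
  </soapenv:Body>
</soapenv:Envelope>'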
-
TIBCO ActiveMatrix BusinessWorks™ v5.x Deployment Process
Manoj Chaurasia posted an article in BusinessWorks
The attached document lists the procedures for deploying TIBCO ActiveMatrix BusinessWorks™ 5.x projects onto all environments (Development, Staging, Beta, and Production). The intended audience is project administrators and Production Control personnel; developers can also refer to it to understand the deployment process. The document covers the initial release and subsequent deployments of projects, but not the initial installation or setup of the administration server, domain, or adapters, nor any application-specific deployment procedures (e.g., database changes, application server changes, etc.). It also does not cover the deployment and configuration of other TIBCO components, such as TIBCO BusinessConnect™, TIBCO Enterprise Message Service™, and TIBCO Hawk®; readers should consult the appropriate documentation for those components. Readers are assumed to have basic familiarity with TIBCO products, including TIBCO Rendezvous®, TIBCO Administrator™, and ActiveMatrix BusinessWorks. Documentation on setting up the development environment and how the files are maintained within ClearCase is described in the TIBCO development/version control document. 14052962-TIBCO-BW-5x-Deployment-Process.pdf -
Table of Contents
Introduction
Pre-Requisite
Development: Create a BusinessWorks Application, Create a Docker Image
Deployment: Setting up Docker Swarm in AWS

Introduction
This guide provides the steps to follow in order to deploy a TIBCO BusinessWorks™ Container Edition application as a Docker service in a Docker swarm, with the swarm set up in Amazon Web Services (AWS). We carry out the setup in two phases: development and deployment. To demonstrate this, we develop a simple TIBCO BusinessWorks REST application and create a Docker image from it in the development phase, then deploy it as a Docker service in the swarm in the deployment phase.

Pre-Requisite
- TIBCO BusinessWorks Container Edition
- Access to an AWS account with permissions to use CloudFormation and create the following objects: EC2 instances + Auto Scaling groups, IAM profiles, DynamoDB tables, SQS queue, VPC + subnets and security groups, ELB, CloudWatch Log Group
- An SSH key in AWS in the region where you want to deploy (required to access the completed Docker install)
To see the full set of required permissions, check the Docker for AWS IAM permissions here: https://docs.docker.com/docker-for-aws/iam-permissions

Development

Create a BusinessWorks Application
- Open the REST Service wizard to create a new REST service and fill it in as shown. Click Next.
- In the configuration of the GET operation, provide the Response name as "user" and the type as "XSD Element", and select Create New Schema.
- Create the new schema as shown below. Click OK and then Finish.
- Use the configuration below in the HTTP Connection resource. Please make sure that the port value of the module property is changed from 8080 so as to avoid any port conflicts.

Create a Docker Image
- Export the EAR file.
- Create a Dockerfile with your favorite text editor that uses the base TIBCO BusinessWorks Container Edition image and adds the new EAR file. Sample Dockerfile:
FROM tibco/bwce:2.2
ADD SFDemo_1.0.0.ear /
- Build your Docker image in your local repository, for example: docker build -t tibco/sfdemo .
- Log in to your Docker Hub account by entering "docker login" in the CLI and providing the username and password of your Docker Hub account.
- Tag the image: docker tag <image> <dockerhub_username/repository:tag>. Notice that the notation for associating a local image with a repository on a registry is username/repository:tag.
- Check the newly created image tag: docker image ls
- Publish the image to your Docker Hub account: docker push <dockerhub_username/repository:tag>. Now your image is publicly available, and from now on you can pull and run it from the remote repository.

Deployment

Setting up Docker Swarm in AWS
- Go to https://docs.docker.com/docker-for-aws/#quickstart and click on Deploy Docker Community Edition for AWS. It will open a CloudFormation window with a template.
- Click Next.
- Fill in the parameters below (you can also use values as per your requirements):
  Number of Swarm managers = 1
  Number of Swarm workers = 2
  Which SSH key to use = the SSH key in AWS in the region where you want to set up the swarm
  Enable daily resource cleanup = yes
  Use CloudWatch for container logging = yes
  Swarm manager instance type = t2.small
  Manager ephemeral storage volume size = 20 GB
  Manager ephemeral storage volume type = standard
  Agent worker instance type = t2.micro
  Worker ephemeral storage volume size = 20 GB
  Worker ephemeral storage volume type = gp2
- Click Next. You can skip the options page that opens by clicking Next again.
- Click the acknowledgment radio button and then click the Create button.
- When created, you will see the create complete status in the AWS CloudFormation console.
- Click on the Outputs tab. The different keys mean the following: DefaultDNSTarget is the AWS load balancer; Managers lists all the managers working in the swarm.
- Click on the link (value) provided against Managers and copy the public IPv4 address of the manager.
- SSH into the manager node of the swarm:
  ssh -i "Path_to_AWS_SSH_key_filename" docker@IP_AddressOfManager
  The -i option is used to provide the SSH key filename; docker@IP_AddressOfManager is used to log in to the swarm manager as the docker user.
- Now we are ready to deploy our BW application as a service in the swarm. Use the following command syntax to create a service:
  docker service create --name <service_name> --publish mode=host,target=<internal_port_used_in_BW_app>,published=<port_exposed_to_outside_world> <dockerhub_username/repository:tag>
- Once the service is created, you can check the logs in the CloudWatch console to verify that the BW application started successfully. Once the app is running, you can access it at DefaultDNSTarget:ExposedPort/swagger.
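As a concrete (hypothetical) instance of that syntax, assuming the module HTTP port was changed to 8980 and the image pushed earlier to Docker Hub:

# Run on the swarm manager (from the SSH session opened above); names and ports are placeholders
docker service create --name sfdemo \
  --publish mode=host,target=8980,published=80 \
  <dockerhub_username>/sfdemo:1.0

# Check that the service and its tasks are running
docker service ls
docker service ps sfdemo

# Once the application has started, the Swagger UI is reachable through the AWS load balancer
curl http://<DefaultDNSTarget>:80/swagger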
-
TIBCO BusinessWorks™ Process Monitor, a Video Overview
Manoj Chaurasia posted an article in BusinessWorks
Here's a video of TIBCO BusinessWorks™ ProcessMonitor, an integration process monitoring solution for TIBCO ActiveMatrix BusinessWorks™ that we recently released. You can also find more information about BusinessWorks ProcessMonitor here. -
Table of Contents
Introduction
Building the docker image
Deployment
Configuring the server
Accessing turbine server on hystrix dashboard
Monitoring applications in projects/namespaces other than current
Using Attached Examples

Introduction
The Hystrix dashboard is used to monitor circuit status when the circuit breaker is enabled in TIBCO BusinessWorks™ Container Edition. To monitor multiple applications on the same Hystrix dashboard, a stream aggregator is required. The turbine server is a stream aggregator that collects the required data from all pods that have the circuit breaker enabled. The server can be configured to run on both Kubernetes and Openshift platforms. The project uses the Netflix turbine-core 1.0.0 library. Custom code is written to fetch the pod details on the K8s/Openshift platform and populate the endpoints in the turbine initializer. The application runs as a container, periodically updating the pod endpoints. It listens at the 8090/hystrix.stream endpoint of the pods; this endpoint is specific to TIBCO BusinessWorks Container Edition applications.

Building the docker image
Extract the attached project and execute the following maven goal: mvn clean package docker:build
This creates a docker image: tibco/turbine-server:1.0.0-SNAPSHOT

Deployment
- Deploy this turbine server to the Kubernetes/Openshift platform.
- Expose the turbine-server deployment as a service.
- Deploy the Hystrix dashboard [docker image: fabric8/hystrix-dashboard] to the same namespace.
- Expose the dashboard service so that it is accessible externally (Type: NodePort or LoadBalancer).

Configuring the server
Environment Variables:
- PLATFORM = Openshift/Kubernetes (Default: Kubernetes)
- NAMESPACE = the k8s namespace to monitor (Default: current namespace)
Enable Collection
Add the following label to BWCE app deployments: hystrix.enabled=true
Ensure that the label is added at the pod level, i.e. spec/template/metadata/labels in a deployment yaml file (a deployment sketch appears at the end of this article).

Accessing turbine server on hystrix dashboard
Open the Hystrix dashboard in the browser using the service URL configured above. On the dashboard, configure the following URL to listen to the turbine stream: http://turbine-server/turbine.stream [replace turbine-server with the appropriate service name]. On clicking the Monitor Stream button, the circuit breaker status for all the deployed TIBCO BusinessWorks Container Edition applications should be visible.

Monitoring applications in projects/namespaces other than current
Permissions need to be provided to view services from other projects/namespaces.
Kubernetes (tested on minikube) - add a clusterrolebinding:
kubectl create clusterrolebinding [hystrix-namespace] --clusterrole cluster-admin --serviceaccount=[hystrix-namespace]:[user]
Openshift:
oc policy add-role-to-user view system:serviceaccount:[hystrix-namespace]:[user]

Using Attached Examples
Examples are attached to quickly deploy and test the turbine server along with the Hystrix dashboard and BWCE applications on Kubernetes/Openshift.
turbine-demo-client: import the zip into the studio to check the code. The client code invokes turbine-demo-server and has the circuit breaker enabled in httpClientResource. It exposes a GET /checkservice API which returns the client hostname and the server hostname. In case the server is not reachable, it returns an appropriate error message given by the circuit breaker.
turbine-demo-server: import the zip into the studio to check the code. The server code simply returns the hostname of the container at the GET /server API.
In order to open the circuit in the client, scale the pod count of the server to 0. Build the docker images for both turbine-demo-client and turbine-demo-server using the maven goal clean package initialize docker:build at the project's parent location. Use the attached platform-specific yaml files to deploy the applications:
- turbine-demo-client.yaml - deploys 2 applications, client-app-1 and client-app-2. Ensure that the image field in the yaml file is updated to reflect the client docker image created above. Expose the service externally using NodePort/LoadBalancer/Ingress/Route as applicable.
- turbine-demo-server.yaml - deploys the turbine-demo-server application.
- turbine-hystrix-dashboard.yml - deploys the turbine server and hystrix-dashboard along with the services. Expose the hystrix-dashboard service so that it is accessible externally. Update the turbine-server docker image name in the yaml file (use the image created in the Building the docker image section).
- For Kubernetes - namespace-clusterrolebinding.yaml - creates the turbine namespace and a cluster role binding that adds the admin role to the default user, which enables the turbine server to list the applications.
- For Openshift - execute the following commands:
oc new-project turbine
oc policy add-role-to-user view system:serviceaccount:turbine:default
turbine-server.rar turbine.demo_.server.zip turbine.demo_.client.zip openshift.rar kubernetes_1.rar
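Putting the Kubernetes steps together, a rough end-to-end sketch follows; it assumes the resources land in the turbine namespace created by namespace-clusterrolebinding.yaml, and the deployment names (client-app-1, turbine-demo-server) are taken from the descriptions above but may differ in your yaml files.

# Create the namespace and role binding, then deploy the dashboard, server, and clients
kubectl apply -f namespace-clusterrolebinding.yaml
kubectl apply -f turbine-hystrix-dashboard.yml -n turbine
kubectl apply -f turbine-demo-server.yaml -n turbine
kubectl apply -f turbine-demo-client.yaml -n turbine

# Add the hystrix.enabled label at spec/template/metadata/labels if it is not already in the yaml;
# the deployment name here is an assumption
kubectl patch deployment client-app-1 -n turbine \
  -p '{"spec":{"template":{"metadata":{"labels":{"hystrix.enabled":"true"}}}}}'

# Open the circuit breaker in the client by scaling the server down to zero replicas
kubectl scale deployment turbine-demo-server -n turbine --replicas=0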