Logging is a must-have feature for every production-level application or service. Without it, it's pretty hard to get insight into what's going wrong when errors or exceptions arise. However, the larger an application is, the more logs it produces. This feedback is valuable, but we don't want to drown in it. That's why it's important to monitor and analyze logs effectively. In the world of distributed applications and serverless architecture, it is best to use centralized log storage, so we can see the logs of every application in one place.
In this article we will show you how to store and analyze the execution history of TIBCO Cloud Integration solutions with the help of the Scribe Platform API Connector.
Consider a scenario where we have many TIBCO Cloud Integration organizations, each with a lot of solutions. We want to store the execution history of every solution in one place (e.g. a relational database) so we can analyze it easily.
How do we do this in practice?
As a prerequisite, we should install the Scribe Platform API connector from the Marketplace, establish a connection to the Scribe Platform API, and create a new integration solution called "Logger" in your TIBCO Cloud Integration organization.
For the target connection, we will use the PostgreSQL Connector. Let's create a table to store execution history with the following SQL command:
CREATE TABLE public.scribe_logs (
    id BIGSERIAL PRIMARY KEY,
    organization_id INT,
    solution_id UUID,
    start TIMESTAMP,
    stop TIMESTAMP,
    records_processed INT,
    records_failed INT,
    result VARCHAR(64),
    details TEXT,
    reprocess_records_number INT,
    is_reprocess_job BOOLEAN,
    duration REAL,
    source_data_local BOOLEAN,
    process_execution_id UUID
);
Just as in my previous article, we'll implement the solution step by step.
Iteration #1: Get All Execution History
Let's create a simple map that will iterate over each solution of every organization you have access to (more precisely, that the user specified in the connection has access to), and save its execution history to PostgreSQL.
Minor notes about the above map:
- If you want to grab execution history from a single organization, you can add a filter by Id to the Query Organization block
- The picture above doesn't contain the complete field mapping list for "Create publicscribe_logs"
- In PostgreSQL your table should be named public.scribe_logs, but the TIBCO Cloud Integration UI likes dots and eats them like Pac-Man, hence the block name without the dot
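To make the map's flow concrete, here is a minimal Python sketch of the same nested iteration (organizations, then solutions, then execution history). Everything here (the function names, record fields, and the insert_row callback) is a hypothetical stand-in for the actual TIBCO Cloud Integration blocks, not real API calls:

```python
# Hypothetical sketch of the "Save execution history to database" map.
# fetch_solutions / fetch_history / insert_row stand in for the real
# TIBCO Cloud Integration blocks; the field names are illustrative.

def save_history(organizations, fetch_solutions, fetch_history, insert_row):
    """Iterate every solution of every organization and persist its history."""
    saved = 0
    for org in organizations:
        for solution in fetch_solutions(org["id"]):
            for h in fetch_history(org["id"], solution["id"]):
                insert_row({
                    "organization_id": org["id"],
                    "solution_id": solution["id"],
                    "start": h["start"],
                    "stop": h["stop"],
                    "result": h["result"],
                })
                saved += 1
    return saved
```

The three nested loops mirror the Query Organization, solution, and Fetch History blocks of the map; insert_row plays the role of the PostgreSQL Create block.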
Let's run the map and, after it finishes, execute the following SQL query in your favorite PostgreSQL client:
SELECT id, details, duration, result FROM scribe_logs
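Once the table is populated you can start analyzing it, for instance counting executions per result, the equivalent of SELECT result, count(*) FROM scribe_logs GROUP BY result. A sketch in plain Python (the sample rows below are made up for illustration):

```python
from collections import Counter

# Made-up sample of rows, as if returned by "SELECT result FROM scribe_logs"
rows = [
    {"result": "Completed"},
    {"result": "Completed"},
    {"result": "Failed"},
]

# Count executions per result, like GROUP BY result with count(*)
executions_per_result = Counter(r["result"] for r in rows)
```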
Iteration #2: Reinventing Net Change with Lookup Tables
But what if you run the map again? It will go through the executions starting from the beginning of time, which has some negative consequences:
- It's slow, since the map re-iterates all the history records again and again
- It can eat up your API limits (15,000 calls per day)
- It can create a lot of row errors: if you map History.Id to a column declared as the primary key, repeated inserts of the same record will fail
- Or it can create a lot of duplicate data in your target table, when History.Id is mapped to a column that is not declared as unique
Ideally, we want to process only the execution histories that are new since the last run. Unfortunately, the Net Change feature is not available in Fetch blocks, but we have Lookup Tables to the rescue: with the help of the Platform API connector we can insert and update lookup tables and their values.
The idea: we can reinvent the Net Change functionality using a Lookup Table Value, which will serve as storage for the date of the last processed execution.
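Stripped of platform specifics, the watermark pattern we are rebuilding looks like this (an in-memory Python sketch; in the actual solution the watermark lives in the LoggingSettings Lookup Table Value, and the field names are assumptions):

```python
def process_new(history, watermark):
    """Return only records newer than the watermark, plus the advanced watermark."""
    new = [h for h in history if watermark is None or h["stop"] > watermark]
    if new:
        # Advance the watermark to the newest record we just processed
        watermark = max(h["stop"] for h in new)
    return new, watermark
```

The first run (watermark is None) sees everything; subsequent runs see only records whose stop time is later than the stored date.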
Let's create a new Lookup Table named LoggingSettings:
1. More → Lookup Tables → click on the + sign
2. Create a new Lookup Table Value with LaterThanDate in Value1 and nothing in Value2
After that we can create a new map, "Update LaterThanDate in LoggingSettings", in the "Logger" solution, which will update the LaterThanDate lookup table value based on the latest execution date stored in PostgreSQL. Then we need to change the execution order of the maps in the "Logger" solution, so that "Update LaterThanDate in LoggingSettings" is executed before "Save execution history to database".
- In this map we're using the Native Query block to select the latest possible timestamp across all saved start and stop values from the execution history, with the help of the max aggregate function and the greatest conditional expression
- After the Update block we don't need to iterate through all Lookup Tables and Lookup Table Values, because we know that only one such Lookup Table Value exists
Optionally, to improve the performance of the map, you can remove all the Fetch and Condition blocks and use raw IDs in the Update block. You can get the IDs in Query Preview or in the API tester.
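The native query described above presumably boils down to something like SELECT greatest(max(start), max(stop)) FROM public.scribe_logs (my reconstruction from the description, not the article's verbatim SQL). The same computation in Python over sample rows:

```python
def latest_timestamp(rows):
    """Latest timestamp across all start/stop values, like
    greatest(max(start), max(stop)) in PostgreSQL."""
    if not rows:
        return None
    return max(max(r["start"] for r in rows), max(r["stop"] for r in rows))
```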
Iteration #3: Consume the Dynamically Updated Lookup Table Value
As you can see, the previous version of the map was updated:
- We added a new condition in the Fetch History filter: the Platform API provides a LaterThanDate parameter, which filters out all executions older than the parameter's value, and of course the Platform API Connector supports it too
- On the right side of the condition we use the LOOKUPTABLEVALUE2 function to get Value2 by Value1, where Value1 is LaterThanDate
- We changed the Create block to an Update/Insert block, so we can update existing execution history records in PostgreSQL (for example, the status of an execution can change between runs of the "Save execution history to database" map)
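Update/Insert semantics can be pictured as a keyed upsert: an unseen execution id inserts a new row, while a known id overwrites the stored one, so a status that changed between runs is picked up. A toy in-memory version (in PostgreSQL the same effect comes from INSERT ... ON CONFLICT ... DO UPDATE):

```python
def upsert_history(store, records):
    """Upsert execution history records into store, keyed by execution id."""
    for rec in records:
        store[rec["id"]] = rec  # known id: update; unknown id: insert
    return store
```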
It's time to execute the whole "Logger" solution. It will process only the solution executions that are new since the last run, and we won't get any row errors. Perfect!
I have shown you an approach to implementing centralized logging of TIBCO Cloud Integration solution execution history, but you can go further:
- Try other connectors as the target for your execution history entries
- Use your favorite log analysis tool to get more value (statistics, full-text search, etc.) out of your logs
- Reprocess errors with the help of the Command block
- Control the log verbosity by filtering on the result field in the Fetch History block
- For developers: build a connector for a logging service like Kibana, Splunk, or Seq, so you can monitor the health of your solutions in real time