Exchange
24 files
-
Adobe Analytics online Training course in Hyderabad
By GoLogica
With GoLogica's extensive online training curriculum for Adobe Analytics, you can advance your career. This course gives you the in-demand skills you need to succeed in the digital world, whether you're a seasoned professional looking to expand your horizons or a fledgling business analyst.
Explore the realm of insights derived from data in depth:
Discover the secrets of internet marketing: go beyond conversions and clicks. Learn how Adobe Analytics enables you to monitor, assess, and improve each facet of your marketing initiatives in real-time.
Become an expert in data: Learn how to use key tools including Tableau, VBA, and Excel. To fully realize the promise of data science and business analytics, acquire sophisticated skills such as SAS, R, and machine learning.
Create individualized client experiences: Discover how to define important KPIs, segment your audience, and construct dashboards that are targeted. Make better use of these facts to target marketing campaigns and cultivate enduring connections with clients.
Obtain practical experience: GoLogica offers a hands-on learning environment with its demo projects and real-time case studies. Develop your real-time analytics and predictive marketing abilities to better equip yourself for the demands of the digital age.
This course is a wonderful fit if you're:
• A novice looking to start a career in business analytics, or a professional moving into business development roles.
• A sales or marketing expert ready to strengthen your analytical skills.
• An owner of a medium-sized to big company looking to leverage data insights.
No prior knowledge is required! If you have a basic understanding of technology and a strong desire to study, you're prepared to start this life-changing adventure.
Invest in your future.
Increase your earning potential: according to Indeed, the average salary for an Adobe analyst is an impressive $125,000.
Upskill for in-demand positions: Demand for qualified data analysts is rising in all sectors of the economy.
Obtain a competitive advantage: Make a lasting impression by showcasing your proficiency with the Adobe Analytics Course, a prominent player in the digital analytics arena.
Take action now! Take advantage of GoLogica's Adobe Analytics course to learn the keys to success powered by data.
0 downloads
- online training
- freedemo
- (and 3 more)
Submitted
-
Data Transformation with GoLogica AB INITIO Online Training course
By GoLogica
GoLogica extends an invitation for you to delve into the realm of data transformation with our extensive AB INITIO Training curriculum.
0 downloads
- online training
- course
- (and 2 more)
Submitted
-
Tibco_BW_Common_FileTransfer_Tool_Implementation.docx
By AmitSoni123
This file describes the implementation of a Common File Transfer tool using TIBCO BW.
3 downloads
Submitted
-
Tibco_BW_oData_Service_Implementation_Limitations.docx
By AmitSoni123
This file describes limitations encountered while implementing an oData Service, and the corresponding workaround.
0 downloads
Submitted
-
Tibco_BW_ExternalCommand_Activity_Timeout_Workaround.docx
By AmitSoni123
This file describes workarounds for implementing a timeout feature for the External Command Activity.
3 downloads
Submitted
-
Healthcare Interoperability Accelerator
Overview:
The lack of seamless data exchange in healthcare has historically detracted from patient care, leading to poor health outcomes and higher costs. The recent CMS rule established policies that break down barriers in the nation's health system to enable better patient access to their health information, improve interoperability, and unleash innovation, while reducing burden on payers and providers.
The new policies mandate that payers make personal health data available via application programming interfaces (APIs) to third-party app developers and exchange such data with other payers:
Patient Access API: provide access to patient health records
Provider Directory API: provide access to the provider directory
Payer-to-Payer Data Exchange: send/receive patient health records
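To make the shape of these APIs concrete, here is a minimal Python sketch of how a third-party app might call a payer's FHIR R4 endpoints; the base URL, token handling, and choice of resources are illustrative assumptions, not part of the CMS rule or the accelerator itself.

```python
# Minimal sketch of calling a payer's "Patient Access" style FHIR R4 API.
# The base URL and token are hypothetical; real access uses an OAuth2/SMART-on-FHIR consent flow.
import requests

FHIR_BASE = "https://payer.example.com/fhir/r4"   # hypothetical FHIR endpoint
TOKEN = "<oauth2-access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/fhir+json"}

def get_patient(patient_id: str) -> dict:
    """Fetch a single FHIR Patient resource."""
    resp = requests.get(f"{FHIR_BASE}/Patient/{patient_id}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

def get_explanations_of_benefit(patient_id: str) -> dict:
    """Search claims data (ExplanationOfBenefit resources) for a patient; returns a FHIR Bundle."""
    resp = requests.get(f"{FHIR_BASE}/ExplanationOfBenefit",
                        params={"patient": patient_id}, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()
```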
Support Details
The list of Supported Versions represents the TIBCO product versions that were used to build the currently released version of this accelerator. We expect newer versions of the TIBCO products will also work. Please see the wiki page for the accelerator for possible further details around product versions.
Accelerators are provided as fast start templates and design pattern examples and are supported as delivered. Please join the Community to discuss the use and implementation of the Healthcare Interoperability Accelerator.
Reference Info:
Increasing global healthcare data stored in multiple databases, formats, and systems of record makes its secure, seamless exchange difficult. Patient demand for transparent access to health records anywhere, anytime, along with time-sensitive and urgent government mandates such as, in the US, the Interoperability and Patient Access final rule (CMS-9115-F), are compelling payers and providers to deploy an accelerated digital transformation agenda to meet these demands.
The Interoperability and Patient Access Final rule
The Healthcare Interoperability Accelerator helps payers meet these CMS mandates while building a modern healthcare architecture that provides better healthcare and increases their revenue stream.
Business Scenario
TIBCO can help Payers become compliant with the Interoperability mandates.
Value Drivers and Business Benefits
API Driven Architecture: create/host/manage APIs & provide a developer portal; seamlessly connect data sources and requests through APIs.
Data Transformation: access & convert source data to FHIR; facilitates real-time integration with producers & consumers of data.
Data Governance & Management: clean, accurate & secure data policies & procedures; produces a 360° view of patients, providers & members.
Data Virtualization: unifies versions across all data sources & types; access controls & one true source.
Typical Use Cases
As A/AN
I WANT TO
SO THAT…
member/patient
See my clinical history (diagnostic reports, lab orders/results, vital signs, etc.)
I have access to all my clinical history from all the providers on a single mobile app or web portal.
member/patient
See my encounters
I can see information about my interaction with healthcare providers on a mobile or web app of my choice. I can have a record of the purpose of my visit, the type of visit (ER, inpatient, office, etc.)
member/patient
See the cost of drugs
I can see the brand and generic prescription drugs covered fully or partially by my plan. I have access to the cost of the medication as per my plan, deductibles, coinsurance, and copay.
member/patient
Have access to my claims and their status
I can see the status of my claims, deductibles, out-of-pocket, copay, coinsurance, and EOBs.
member/patient
Search for a provider
I can search for a provider based on gender, specialty, language, distance, plan, etc.
provider (physician or hospitals)
Make clinical and encounter information available to patients/members
Patients/members can access their medical history in a seamless way.
payer (health plans)
Make claims and formulary information available to patients/members
Patients/members can access medical and pharmacy benefits, EOBs, etc.
Concepts
FHIR: Fast Healthcare Interoperability Resources (FHIR, pronounced "fire") is a standard describing data formats and elements (known as "resources") and an application programming interface (API) for exchanging electronic health records (EHRs).
Benefits and Business Value
In compliance with the 21st Century Cures Act
Accelerated mapping of source data to FHIR resources
Implementation of FHIR APIs
Unification of siloed data from different environments into a single data layer using TIBCO Data Virtualization, with master data managed in TIBCO EBX
Reduced development time
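As a rough illustration of the source-data-to-FHIR mapping step, here is a minimal Python sketch that turns a hypothetical member-table row into a FHIR R4 Patient resource; the column names are invented for the example, and the accelerator performs this mapping with TIBCO tooling rather than Python.

```python
# Minimal sketch: map one hypothetical member/patient database row to a FHIR R4 Patient resource.
def row_to_fhir_patient(row: dict) -> dict:
    return {
        "resourceType": "Patient",
        "id": str(row["member_id"]),
        "identifier": [{"system": "urn:example:member-id", "value": str(row["member_id"])}],
        "name": [{"family": row["last_name"], "given": [row["first_name"]]}],
        "gender": row["gender"].lower(),    # FHIR expects male | female | other | unknown
        "birthDate": row["date_of_birth"],  # ISO-8601 date string, e.g. "1980-04-17"
        "address": [{"line": [row["address_line1"]], "city": row["city"],
                     "state": row["state"], "postalCode": row["zip"]}],
    }

patient = row_to_fhir_patient({
    "member_id": 1001, "first_name": "Jane", "last_name": "Doe", "gender": "Female",
    "date_of_birth": "1980-04-17", "address_line1": "1 Main St",
    "city": "Springfield", "state": "IL", "zip": "62701",
})
```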
Technical Scenario
As A/AN
I WANT TO
SO THAT…
solutions engineer/developer
Convert a patient record in the database into a FHIR patient resource
I have patient data in the form of FHIR patient resource.
solutions engineer/developer
Parse FHIR patient resource
Use parsed FHIR patient resource in downstream processing (logging, saving in the FHIR server's repository, etc.)
solutions engineer/developer
Persist FHIR patient resource in the FHIR server's repository
The 'Patient Access' API can be used to query patient resource from the FHIR server
solutions engineer/developer
Update FHIR patient resource
The changes to the patient resource (address, phone, etc.) are reflected in the FHIR server's repository. Updated changes are available for querying via the patient API.
solutions engineer/developer
Delete FHIR patient resource from the FHIR server's repository
The FHIR patient resource is deleted from the FHIR server.
solutions engineer/developer
Develop a nightly job to load clinical, claims, formulary, encounter and provider resources in the FHIR server
Make clinical, claims, encounter, formulary and provider data available via 'Patient Access' and 'Provider Directory' API.
solutions engineer/developer
Convert an HL7v2 Admit/Discharge/Transfer (ADT) message to FHIR patient and encounter resources
ADT events are quickly made available via the 'Patient Access' and 'Encounter Access' APIs.
solutions engineer/developer
Convert FHIR Patient and Encounter events to an HL7v2 Admit/Discharge/Transfer message
ADT events can be sent to adjudication/measure systems via API.
solutions engineer/developer
Data virtualization of provider directory data sources
I have a 360° view of providers.
solutions engineer/developer
Do master data management of provider information
Updated and verified provider information is always available via 'Provider Directory' API.
solutions engineer/developer
API management of 'Patient Access' and 'Provider Directory' APIs
Patient Access and Provider Directory APIs are secured and have quotas and policies in place for the access of APIs
Machine Requirements
This Accelerator demonstrates the capabilities to address the CMS mandates. In order to run this Accelerator, the following are the minimum recommended guidelines for installation:
Operating System: Windows 10/2016, 64-bit
CPU: minimum 8 cores (more will give better performance)
Memory: minimum 24 GB (more will give better performance)
Disk: minimum 30 GB
For AWS deployment, the minimum configuration is:
AMI: Microsoft Windows Server 2016 Base
Instance Type: m4.2xlarge (8 vCPUs, 32 GB RAM)
Storage: minimum 30 GB
Security: open port 8080
Distribution
The accelerator distribution is a ZIP file. Within the zip is a Quick Start guide that includes installation and configuration details. NOTE: the distribution does NOT contain any of the required underlying software packages. The end user is responsible for licensing the underlying software in order to run this Accelerator.
TIBCO Software Requirements
The accelerator has the following runtime software dependencies, which are mandatory for all installation types. TIBCO software dependencies:
Third-Party Software Requirements:
Software, source, and purpose:
MySQL database (https://www.mysql.com/): used as the data source from which data is imported to the FHIR server.
Apache Tomcat (https://tomcat.apache.org/): used to host the EBX server and FHIR server webapps.
FindBugs Annotations jar (https://mvnrepository.com/artifact/com.google.code.findbugs/annotations/...): TIBCO ActiveMatrix BusinessWorks Plug-in for HL7 with FHIR depends on this jar, and it is required during installation.
Note that the versions specified here were current as of the date of release of the Accelerator on the TIBCO Community. In most cases using later versions is expected to work, but the installation scripts may need to be adjusted, and the code may need to be recompiled on the newer versions.
0 downloads
Submitted
-
Business Activity Monitoring Accelerator
The Business Activity Monitoring Accelerator models processes using a no-coding template configuration approach. At runtime the accelerator uses a decision table and the template configuration to create processes instances. Processes need not be automated using any sort of BPM system, and indeed it is particularly useful in distributed environments where there is no overall system of control. The accelerator allows for real-time tracking of process performance to determine if SLAs are being met, giving businesses the opportunity to take proactive action to correct exception situations before they escalate. In addition, a repository of current process state along with a running log of activity events when combined with a sophisticated BI tool can be used to determine opportunities for business process improvement.
Business Scenario
Most businesses are a collection of business processes. These processes often interact with one another, and oftentimes there is little visibility into their execution. This is particularly so in cases where there is little automation or minimal overall control of process execution. Tracking processes is the key to monitoring overall business performance, and the lack of oversight is a missed opportunity for business process improvement.
Concepts
The Business Activity Monitoring Accelerator models processes using design-time entities called Templates and Workflows. At runtime, the Event Manager will convert the Template and Workflow into a Process instance which then contains a series of actions or Activities. These Activities have Milestones which are points of interest during the action. They also have Sections which represent the period of time between adjacent Milestones. Activities can be grouped together logically into Tasks. Dependencies between Activities are modelled using Transitions. Finally, SLAs measure performance between two milestones.
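To illustrate how these entities relate, here is a minimal sketch of the model as Python dataclasses; this is an illustration only (the accelerator implements the model in TIBCO BusinessEvents), and the field names are simplified assumptions.

```python
# Illustrative sketch of the BAM entity model: Processes contain Activities with Milestones,
# and SLAs measure elapsed time between two Milestones.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Milestone:
    name: str
    reached_at: Optional[datetime] = None   # set when the milestone is observed

@dataclass
class Activity:
    name: str
    milestones: list[Milestone] = field(default_factory=list)

@dataclass
class SLA:
    start_milestone: str
    end_milestone: str
    max_duration_secs: int

@dataclass
class Process:                        # runtime instance created from a Template + Workflow
    template: str
    workflow: str
    activities: list[Activity] = field(default_factory=list)
    slas: list[SLA] = field(default_factory=list)

    def sla_breached(self, sla: SLA) -> Optional[bool]:
        """True/False once both milestones have been reached; None while still pending."""
        reached = {m.name: m.reached_at for a in self.activities for m in a.milestones}
        start, end = reached.get(sla.start_milestone), reached.get(sla.end_milestone)
        if start is None or end is None:
            return None
        return (end - start).total_seconds() > sla.max_duration_secs
```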
Benefits and Business Value
The Business Activity Monitoring Accelerator can be used to model processes whether or not they are automated using a BPM system. It allows for real-time tracking of process performance to determine if SLAs are being met, giving businesses the opportunity to take proactive action to correct exception situations before they escalate. In addition, a repository of current process state along with a running log of activity events when combined with a sophisticated BI tool can be used to determine opportunities for business process improvement.
Some of the challenges that businesses face when monitoring processes:
Diverse set of systems, each producing their own output
Existing tracking systems that are rigid and inflexible
Historical-based reporting, minimal real-time view
Lacking predictive capability
Technical Scenario
The accelerator includes demos showing various types of processes and illustrates both milestone-based and transition-based addressing for process reports.
At the heart of the accelerator is the Event Manager which is implemented using BusinessEvents. It receives messages from the systems doing the business process work in the form of Process Reports. It puts the report in context of the associated template and workflow, and monitors execution of the process instance. The Event Manager also produces outbound messages called Notifications which are then used to store data in the Repository, trigger alerts, enforce business rules, etc.
The Real Time Dashboard is implemented using Live Datamart and it captures the current state of the network from the Event Manager notifications. It displays this information on a fully-interactive, HTML5 application. This displays a summary dashboard as well as detailed information about workflows and processes.
Underlying all the components is a service bus, implemented using Enterprise Message Service and StreamBase. This provides the connectivity between components, and with other systems.
Components
0 downloads
Submitted
-
Intelligent Equipment Accelerator
The Intelligent Equipment Accelerator provides a reference architecture and code assets for building telemetry monitoring solutions inside of equipment hierarchies. It is primarily configuration-driven which allows a flexible object hierarchy based on the generic concept of Entities. Attached to these Entities are Devices which represent data producing sensors. The platform illustrates how capturing sensor telemetry can be used to gain business insights.
And here's a video showing the Accelerator in action.
Business Scenario
Most modern equipment is instrumented in some way, with a variety of telemetry captured from sensors - from cars to electronics to lightbulbs. Gathering this data and making sense of it all is a key problem for owners of this equipment. Once data is captured, either on edge devices or within a core infrastructure, it then becomes a challenge to detect patterns and meaningful behaviours in the noise. Through the use of rule-based systems and data science models, actionable insights can be gleaned. This makes it possible to take action in developing situations, or simply to capture the data to refine models for future improvements to the system.
Concepts
The Intelligent Equipment Accelerator has a generic data model that is configuration driven. At the top level there are two main concepts:
Devices -- are anything that produce a stream of data. Also known as sensors. Typically produce data triplets at high frequency, consisting of a unique identifier, a timestamp, and a data value. Devices are attached to a single Entity, but an Entity can have multiple Devices.
Entities -- are anything else. This can be factories, production lines, equipment, aircraft, buses, ovens, drilling rigs... anything. Organized into hierarchies, one Entity may have a single parent, but multiple children.
To help with configuration, the Accelerator also separates configuration into Templates and Instances.
Instances -- are physical examples of Devices or Entities, equivalent in object-oriented programming to an Object Instance. They are linked to a single Template, have a physical location, and a unique identifier like a serial number.
Templates -- definition of common properties for all Instances of a given Template, equivalent in object-oriented programming to a Class. May also be known as a type. Will not have a physical location or a unique identifier like a serial number (but could be a unique model number).
Since Devices often send only single data points at a time, it is often useful to aggregate these together into virtual rows of data for processing.
Features -- are linked to a single Device Instance or to a Device Template associated with an Entity Template. Each Feature represents a single value in a virtual row.
Feature Sets -- are logical groupings of Features into a single virtual row. This virtual row can then be passed to rules and data science models to evaluate multi-variate conditions and states.
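As a rough sketch of how these concepts fit together, the following Python dataclasses model Templates, Entities, Devices, and Feature Sets; the names and the completeness check are illustrative assumptions, not the accelerator's actual configuration schema.

```python
# Illustrative sketch: Entities form a hierarchy, Devices attach to one Entity,
# and a Feature Set assembles individual readings into a "virtual row".
from dataclasses import dataclass
from typing import Optional

@dataclass
class EntityTemplate:          # a class of equipment, e.g. "Electric Submersible Pump"
    name: str

@dataclass
class Entity:                  # a physical instance with a unique identifier and optional parent
    entity_id: str
    template: EntityTemplate
    parent: Optional["Entity"] = None

@dataclass
class Device:                  # a sensor producing (id, timestamp, value) triplets
    device_id: str
    entity: Entity

@dataclass
class FeatureSet:
    name: str
    features: dict[str, Optional[float]]   # feature name -> latest value (None until seen)

    def update(self, feature: str, value: float) -> bool:
        """Record one reading; return True once every Feature has a value (row is complete)."""
        self.features[feature] = value
        return all(v is not None for v in self.features.values())

pump = Entity("ESP-001", EntityTemplate("Electric Submersible Pump"))
row = FeatureSet("esp_row", {"intake_pressure": None, "motor_current": None})
row.update("intake_pressure", 212.0)   # False: virtual row not yet complete
row.update("motor_current", 38.5)      # True: row can be passed to rules/models
```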
This configuration looks like this:
In addition, users can configure Modules which link to physical EventFlow application modules implementing specific business rules or interfaces. These may be implemented as Validation Modules, Cleansing Modules, or Rule Modules. These modules are then linked to Devices, Device Templates, and Feature Sets so they are called during the processing of data from these data sources.
The Accelerator captures data feeds from external systems as reports.
Alert -- reports from external systems of alert conditions.
Reading -- device reading reports consisting of a triplet of unique identifier, reading date and time, and the value.
Status -- a condition status for a given Device or Entity. For example, a production line or pump operating status.
Part -- a part produced report used as part of operational metrics.
Position -- a physical location for a given Entity that may change over time.
After processing the inbound reports, the Accelerator produces external actions.
Alerts -- similar to Alert reports, indicates an alert condition on a Device or Entity.
Status -- similar to Status reports, indicates the status of a given Device or Entity has changed.
Readings -- certain rules may produce additional Feature values as part of the rule execution. For example, an autoencoder rule may generate a cluster number and a reconstruction error. These Feature values are produced as new Readings.
The dynamic data model looks like this:
Benefits and Business Value
Most modern equipment today is instrumented with some sort of sensor. We can use the streaming data from these sensors, combined with context information from various systems, to gain a complete real-time view of all operations in order to rapidly resolve current issues and intervene to address preventable problems before they occur.
Technical Scenario
The Accelerator provides a generic data model for building entity and device hierarchies with a configuration interface. The included demos capture sensor data from a number of devices installed on equipment in their respective environments. These demo scenarios are:
Production oilfield with a series of wells using electric submersible pumps (ESP). The Accelerator captures telemetry and attempts to identify a failure pattern and alert when this looks likely.
Heavy equipment monitoring engine signals for preventative maintenance.
Power plant where the overall state of the generation lines in the plant is computed using both an R model using a K-means clustering algorithm, and an H2O model using an Autoencoder algorithm.
Servers showing monitoring of an IT infrastructure hierarchy, with infrastructure, platform, and service level monitoring.
Widgets showing operational analytics monitoring the production of parts from various factories and production lines.
The Accelerator is based around a single TIBCO Streaming engine called the Event Manager. This engine receives a defined set of reports from multiple sources, either through directly enqueued stream data or through a JMS receiver. In the demo the Simulator connects to the Event Manager through the internal messaging bus. In a real implementation the integration of data sources will always be a project and will likely require development of adapters and ingress EventFlow to transform the data into the Accelerator canonical formats.
As device readings flow through the Event Manager, they are subjected to several stages of analysis: validation to ensure the data is correct, cleansing, business rules, summarization, and statistics calculation. The results of these are pushed through to Live Datamart as appropriate, and a fully custom HTML5 application, as well as Spotfire, can be used to view the contents.
Components
0 downloads
Submitted
-
Continuous Supply Chain Accelerator
The Continuous Supply Chain Accelerator allows users to evaluate historical sales to generate inventory ordering models based on Economic Order Quantity and Safety Stock principles. It also provides tools to optimize allocation of stores to distribution centers based on constraints, as well as generating real-time routing for deliveries using integration with TIBCO Geoanalytics.
Here's a video showing the real-time delivery tracking in action.
Business Scenario
Today, everything is connected and every participant in a global supply chain must access data. So it is essential to lower the barrier between artificial intelligence and human intelligence. With open source at the core and democratizing business intelligence through self-service, an intelligent nervous system is now available to anyone. This augmented intelligence enables a shift from reactive to proactive management of all supply chain areas. Digital twins allow us to predict future system states, anticipate problems, model alternative scenarios and choose an optimal solution. Humans better understand that digital fabric and are able to act in real time.
Typically, planning is the most data-driven process in the supply chain, using a wide range of inputs from Enterprise Resource Planning (ERP) and Supply Chain Management (SCM) planning tools. There is now significant potential to truly redefine the planning process to sense and respond to billions of events a day, in collaboration with suppliers, to make real-time demand and supply adaptation a reality.
Transportation firms have used analytics to improve operations for years, to optimize routing and reduce wait times. But most existing analytics are based on historical data. New possibilities help companies monitor and respond to changing conditions using real-time data from connected land, air and sea vehicles, shipping environment sensors, real-time order flow, supply chain geoanalytics, live traffic patterns, continuous weather forecasting, and the rescoring of predictive models.
Concepts
Benefits and Business Value
Making a supply chain more real-time gives business the ability to be more agile and react to competitive and market pressures. Being able to make changes quickly and innovate through a phased implementation approach can deliver near-term value by leveraging existing ERP and SCM infrastructure and tools, and forms the foundation for future projects.
Supply chains are constantly in motion, so the first phase of supply chain nervous system adoption is obtaining a 360-degree streaming or near real-time view of the data that impacts supply chain assumptions and forecasts. Real-time analytics and simulation tools give stakeholders streaming or near real-time insight into any element that can impact the supply chain, including orders, package scans, and inventory updates. Predictive data science models can be scored against this real-time feed of data and explored by supply chain management experts.
Virtualized data and real-time visibility are just the start. The next phase introduces key elements for scale: dynamic learning, data curation, and automation. Dynamic learning is the secret sauce of supply chain innovation. Algorithms applied to streaming data yield smarter supply chain decisions and situational awareness. This algorithmic awareness is the pinnacle of supply chain innovation power. Data curation introduces a culture of curation to metadata management, to trace lineage and manage assets and analytics assumptions. With real-time analytics in place, automation with streaming data can begin. The best place to start is automating insights that empower business users to see and act on the changing factors that impact the supply chain.
The scaling phase of this nervous system focuses on how to scale the center of excellence, enterprise architecture and cloud-hybrid architecture, and edge computing. Enterprise scaling is outside the scope of this paper, but is considered insofar as the technology innovations below are expressly designed to future-proof the evolution of data, automation, and AI in a global enterprise.
Technical Scenario
The accelerator includes a demonstration called Distribution Logistics. It consists of a series of static Spotfire DXP analyses that evaluate historical sales to build models for ordering. It provides impact analysis for promotions and forecasting for next month sales. Once this forecasted demand is known for each store, an optimization model allocates retail stores to distribution centers using constraints such as maximum capacity and minimizing total distance driven.
The real-time component of the accelerator tracks actual unit sales in stores and triggers automated re-ordering once safety stock thresholds are reached. Orders are sent to the allocated distribution center. At the start of the day the system takes all orders for each distribution center and builds a series of delivery routes based on constraints such as vehicle size and minimizing distance travelled. It then tracks the vehicle deliveries to ensure on-time performance.
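For reference, the standard Economic Order Quantity, safety stock, and reorder point formulas that this style of ordering model is built on can be written in a few lines of Python; the numbers below are illustrative only and are not taken from the accelerator's dataset.

```python
# Textbook EOQ / safety stock / reorder point calculations (illustrative values).
from math import sqrt

def economic_order_quantity(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """EOQ: the order size that minimises combined ordering and holding cost."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

def safety_stock(z_service: float, demand_std_daily: float, lead_time_days: float) -> float:
    """Buffer stock for a target service level (z_service ~ 1.65 for roughly 95%)."""
    return z_service * demand_std_daily * sqrt(lead_time_days)

def reorder_point(avg_daily_demand: float, lead_time_days: float, ss: float) -> float:
    """Trigger replenishment when on-hand inventory falls to this level."""
    return avg_daily_demand * lead_time_days + ss

eoq = economic_order_quantity(12_000, 50, 2)                      # ~775 units per order
ss = safety_stock(1.65, demand_std_daily=8, lead_time_days=4)     # ~26 units
rop = reorder_point(avg_daily_demand=33, lead_time_days=4, ss=ss) # ~158 units
```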
Components
1 download
Submitted
-
Grid Monitoring Accelerator
The Grid Monitoring Accelerator provides a reference architecture and code assets for monitoring and managing computational data grids. It makes use of rule processing and data science models to alert and predict anomalies before they cause issues with completing a processing run, allowing operational staff the opportunity to intervene in a timely manner.
Business Scenario
A data grid is a software architecture that allows for highly distributed processing. It is often applied in situations where there are large amounts of data, and computations can be broken down into small, individual units of work. The individual computation results are then aggregated together to produce a final computed result. Data grids can be located on a single site with many physical or virtual machines, or geographically distributed. Monitoring and managing the performance of data grids is a complex problem.
Data grids are managed by supervising software, such as TIBCO GridServer®. They can capture telemetry about the performance of individual engines, brokers, and drivers that compose the grid, and present this information for analytics purposes. This telemetry can provide insights into the grid health and performance.
Concepts
The Accelerator was written specifically with TIBCO GridServer® as the data source, but the principles can apply to any generic grid supervisor, provided the data can be supplied in the correct format.
For TIBCO GridServer®, the following components are involved:
Grid Clients -- the components that submit service requests into the grid, also known as Drivers
Engines -- processes that host and run services on grid nodes, the workers
Brokers -- provide request queuing, scheduling, and load-balancing, as well as Engine management
Directors -- components that assign Grid Clients to Brokers based on policies, such as the installed capabilities of the Brokers' Engines and how busy those Engines are
The Accelerator captures telemetry from each of these components and transforms it into a standard data format. The data can then be viewed on live dashboards implemented using TIBCO Spotfire®. In addition, the Accelerator builds a task state model for each of the submitted tasks. There are 3 different task notifications used to determine state:
Task Submitted -- the task has been submitted to the grid for processing
Task Assigned -- the task has been allocated to an engine for execution
Task Completed -- the engine has completed executing the task
Under normal processing these 3 events will occur in sequence in a timely manner. If there is a gap between Submitted and Assigned, this means the task was queued and the grid was too busy to accept it at that time. Tasks can also experience rescheduling and reassigning, both of which are indicators of non-optimal grid health.
Since data grids produce different types of events, with many dozens of parameters per individual event, it becomes difficult to manually inspect the data, or even build simple rules-based systems to detect anomalies. The use of data science models can automate this process through the use of anomaly detection models. By using unsupervised model techniques against grid data streams, outliers can be identified and flagged to operations staff for investigation.
Benefits and Business Value
Data grids are used for complex calculations in large global financial institutions. These platforms are critical for nightly reconciliation of positions and reporting to government regulators. Failure to report in a timely manner can result in fines and costly adverse publicity.
When grids go wrong, it's often a difficult task to detect this early enough to take corrective action. Since the underlying engines are executing code created by data analysts and programmers, it is subject to the same quality control issues as any other piece of software. Memory leaks, crashing nodes, and incomplete calculations are all issues that can adversely impact grid health. The Accelerator provides an intelligent platform for capturing grid telemetry and presenting it to operations staff in a manner to flag potential issues before they consume a large amount of time and processing power.
Technical Scenario
The Accelerator demonstrates grid monitoring using a recorded dataset produced from a real TIBCO GridServer® implementation. Using a recorded dataset allows users to try out the Accelerator without having to spin up an entire data grid. In a real implementation an integration between the data grid and the Accelerator would be necessary.
A Spotfire® dashboard is provided to show key grid metrics and task states. The Accelerator also executes an anomaly detection model in Python to produce an anomaly score called Loss MAE. Once this value exceeds a configurable threshold, the grid state is declared anomalous, which is a flag for operations staff to begin investigating.
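A minimal sketch of that scoring step is shown below: computing a mean-absolute-error reconstruction score from an autoencoder's output and comparing it to a threshold. The threshold value and array shapes are assumptions for illustration, not the accelerator's shipped configuration.

```python
# Illustrative Loss MAE scoring: flag the grid as anomalous when the autoencoder's
# reconstruction error for a telemetry window exceeds a configurable threshold.
import numpy as np

LOSS_MAE_THRESHOLD = 0.35   # hypothetical configurable threshold

def loss_mae(original: np.ndarray, reconstructed: np.ndarray) -> float:
    return float(np.mean(np.abs(original - reconstructed)))

def grid_state(original: np.ndarray, reconstructed: np.ndarray) -> str:
    return "ANOMALOUS" if loss_mae(original, reconstructed) > LOSS_MAE_THRESHOLD else "NORMAL"
```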
Components
1 download
Submitted
-
Track and Trace Accelerator
The Track and Trace Accelerator is a fully cloud-native application that runs on products in TIBCO Cloud. It captures data feeds related to the lifecycle of a parcel as it transits from collection point through to delivery point. Proactive SLA monitoring recalculates whether or not a parcel will be delivered in time and publishes alerts and notifications in the event of SLA violation. Parcel containerization allows for rolling up parcel events into containers and gives visibility of affected packages in the event of container delays.
This video shows a walk-through of the Accelerator in action.
Business Scenario
The logistics industry has experienced a great deal of change over the past decade. Traditional postal products are dropping off the radar, giving a new importance to parcel delivery. The proliferation of eCommerce and online shopping has made efficient and timely deliveries key to both vendor and customer satisfaction. Retailers entering the logistics market like Amazon and Ocado have increased pressure on incumbent operators to improve their offerings. These new entrants are not necessarily better, but they can be cheaper and more agile, and combined with modern information systems this helps them be more efficient. But customers ultimately drive revenue, and if they're unhappy with a delivery they will complain to the retailer. Retailers want happy customers, and a healthy logistics marketplace is required to improve services and maintain competitive pricing.
Concepts
The Accelerator is an in-memory model of a parcel delivery lifecycle. It receives two types of external events which are used to build an internal state model.
Announcements -- this is the initial notification that a parcel is ready for collection
Observations -- this represents a barcode scan at various points during the delivery lifecycle
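A minimal Python sketch of how such a state model might be kept, recording each observation and re-checking the delivery SLA on every scan, is shown below; the fields and the estimation rule are simplified assumptions rather than the accelerator's implementation.

```python
# Illustrative parcel state model: each barcode scan updates the state and re-estimates delivery.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Parcel:
    parcel_id: str
    sla_deadline: datetime                       # promised delivery time for the product
    observations: list[tuple[str, datetime]] = field(default_factory=list)

    def observe(self, scan_point: str, scanned_at: datetime,
                remaining_legs: int, avg_leg_duration: timedelta) -> bool:
        """Record a scan; return True if the parcel is now predicted to miss its SLA."""
        self.observations.append((scan_point, scanned_at))
        estimated_delivery = scanned_at + remaining_legs * avg_leg_duration
        return estimated_delivery > self.sla_deadline
```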
Benefits and Business Value
The Track and Trace Accelerator is a reference architecture for a new generation tracking technology based on event processing software. The process is configuration-driven which allows for easy modification of states and allows introduction of new logistics products in days or hours rather than weeks. The real-time dashboard for operations staff gives an at-a-glance view of network health while flagging up exceptions that require attention. With active SLA management the system can predict when parcels will be late, upwards of 8 hours before this actually happens. This also provides tools for adaptable last mile delivery changes based on customer preference.
Technical Scenario
The Accelerator shows several use cases tracking a package throughout its lifecycle from collection through to delivery. The parcel is announced to the system, and then a series of barcode scans known as observations occur as the parcel transits the network. The estimated delivery is recalculated at every step so that active monitoring of SLA violations can determine whether the parcel will be delivered within the guidelines for a given product.
1 download
Submitted
-
FX Dealing Accelerator
The FX Dealing Accelerator (FXDA) is a reusable set of software components that gives TIBCO Foreign Exchange (FX) customers a "fast start" to deploying FX Market Data/Dealing solutions based on the TIBCO Fast Data platform. The FXDA is available to TIBCO customers in open source format for customization and rapid deployment of highly customizable FX pricing/trading platforms.
The FXDA provides FX Venue connectivity, Market Data and Execution venue handlers, customized spread calculations/distribution and execution modules, simulation, trading execution and live monitoring, in one continuous loop. The FXDA provides a template for FX system implementation that reduces time to market from months or years to weeks.
Here's a video showing how the Accelerator works.
In addition to the introductory video above, a longer, more detailed 12 minute video which describes the FX Dealing Accelerator in more depth and gives a quick demo is available here.
Business Scenario
The Foreign Exchange (FX) business is the exchange of one currency for another. Currencies are traded over the counter (OTC) at an agreed exchange rate. Unlike the stock market, there are no centralized venues/exchanges. Parties agree on a rate and trade directly. The market operates 24/5, that is from 0100 GMT on Monday (Hong Kong) to 2300 GMT on Friday (Chicago). The market operates a combination of machine and human (voice) trading. The approximate daily turnover in FX trading is $5 trillion, far higher than in Equity (stock) markets.
Benefits and Business Value
The TIBCO FX Dealing Accelerator offers the ability to create an FX trading/pricing application that is both flexible and fast to deploy. An FX platform will typically source prices from a number of venues, or Liquidity Providers (LPs). These providers will typically have a FIX API or a proprietary API; for the most part a FIX API is now becoming standard. The TIBCO FX Accelerator provides a framework that connects to LPs and manages the lifecycle of the connection. The burden of maintaining the connection for both Market Data prices and Execution handling is removed. The developer is free to concentrate on adding business-specific logic and rules that add value to their organisation.
Functional Objectives
The TIBCO FX Dealing Accelerator and Demo, as described, provide an FX Dealing (Pricing and Execution) application. The stages in this application are outlined in the figure below:
In summary, the processing steps are:
Ingest Market Data from Liquidity Providers (LPs) via the Trading Components Framework
Produce an aggregated view of these prices
Create an average calculation of these prices (in our case VWAP, volume-weighted average price)
Apply custom spreads, as loaded in the reference data section (more later)
Publish these rates to interested subscribers via TIBCO Live Datamart
Display Market Data and Position in TIBCO Spotfire
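A minimal Python sketch of the aggregation and spread steps (a VWAP across liquidity provider quotes, followed by a symmetric custom spread) is shown below; in the accelerator these calculations are implemented in StreamBase EventFlow, and the quotes are illustrative.

```python
# Illustrative VWAP aggregation across LP quotes, then a custom spread applied around the mid.
def vwap(quotes: list[tuple[float, float]]) -> float:
    """quotes = [(price, size), ...] -> volume-weighted average price."""
    total_size = sum(size for _, size in quotes)
    return sum(price * size for price, size in quotes) / total_size

def apply_spread(mid: float, spread_pips: float, pip: float = 0.0001) -> tuple[float, float]:
    """Return (bid, ask) with a symmetric spread of spread_pips around the mid rate."""
    half = spread_pips * pip / 2
    return mid - half, mid + half

mid = vwap([(1.1001, 5_000_000), (1.1003, 3_000_000), (1.1000, 2_000_000)])  # EUR/USD example
bid, ask = apply_spread(mid, spread_pips=1.5)
```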
Technical Scenario
The FX Dealing Accelerator (FXDA) provides FX Venue connectivity, Market Data and Execution venue handlers, customised spread calculations/distribution and execution modules, simulation, algorithmic trading execution and live monitoring, in one continuous loop. The FXDA provides a template for FX system implementation that reduces time to market from months or years to weeks while enabling the customer to express their unique IP and/or business model.
The accelerator is written using TIBCO StreamBase, TIBCO Live Datamart, the TIBCO StreamBase component exchange LV Angular Bridge, and a customised JavaScript UI. The Trading Components framework ships with StreamBase and provides the connectivity and venue handling, along with samples for the following FX venues: 360T, SuperSonic, Barclays BARX, CitiFX, Currenex, Deutsche Bank Autobahn FX, Digitec D3 Streaming Interface, EBS, Exegy Input, FXall, FXSpotStream, GAIN GTX, Goldman Sachs Electronic Trading FX, Hotspot FX Trading System, HSBC FIX, Integral FX Inside, KCG Hotspot FX, LavaFX, MarketFactory, Morgan Stanley, Nomura FX, Saxo Bank, Thomson Reuters Enterprise, UBS, Wall Street Systems.
The demo illustrates using the building blocks of StreamBase, the StreamBase Trading Components Framework, Live Datamart, and a sample JavaScript UI. The market data is provided by means of a built-in simulator that generates prices based on current market exchange rates.
Live Datamart is used to capture the current state of market data and display information on an interactive, custom developed HTML5 application. This is all built on top of the LDM JS API, which is fully supported.
Components
The FX Dealing Accelerator is built on TIBCO Streaming and TIBCO Spotfire.
The StreamBase Trading Components Framework simplifies creating foreign exchange trading applications by providing a set of modules and schemas that include market data and execution handlers for more than a dozen FX venues. The framework's packaged modules, parameterized properties, and consistent interfaces simplify many of the complexities normally associated with creating FX trading applications.
At the highest level, the Trading Components Framework packages its modules into two types of venue-specific handlers:
Market Data Handlers
Modules that access streaming market currency exchange data.
Execution Handlers
Modules that communicate trades with execution venues.
Supported Venues
A Trading Components venue is the source of a data feed. All supported venues are FIX-based. Market Data handlers for the following venues are currently available in Trading Components. Nearly all venues also have execution handlers, as indicated in the second column.
Note
To connect to venues you must have purchased their associated premium adapters and in some cases downloaded them from tibco.com. Whether packaged with StreamBase software or separately, you are only entitled to use premium adapters that are listed in your contract. If the Separate column in the table contains Yes, the associated adapter comes as a separate download. For a complete list of standard and premium adapters and their usage restrictions, click here.
Venue | Execution | Streaming | RFQ Types | Folder name | Separate
Barclays BARX FIX | Yes | N/A | Spot, NDF | barclays-barx | No
Bloomberg Tradebook FIX | No | Spot | N/A | bbg-tradebook | Yes
CitiFX ESP | Yes | N/A | Spot, Forward, NDF | citifxesp | No
CitiFX Options | No | N/A | N/A | citifxoptions | No
Currenex Market Data | Yes | Spot, Forward | N/A | currenex | No
Deutsche Bank AutobahnFX Classic | Yes | Spot, Forward, Swap | N/A | db-classic-fix | No
Deutsche Bank AutobahnFX Rapid | Yes | Spot | N/A | db-rapid-fix | No
ICAP/EBS | Yes | Spot, NDF | N/A | ebs | No
FXSpotStream FIX | Yes | Spot, Forward | N/A | fxspotstream | No
GAIN GTX | Yes | Spot, Forward | N/A | gain-gtx | Yes
Goldman Sachs | Yes | Spot, Forward | Spot, Forward, NDF, Swap | gs | No
Morgan Stanley | Yes | N/A | Spot, Forward, NDF, Swap | ms | No
Nomura | Yes | Spot | Spot, Forward, Swap | nomura | No
Saxo Bank FIX | Yes | N/A | Spot | saxo | No
UBS Investment Bank | Yes | Spot, Forward, NDF | Spot, Forward, NDF, Swap | ubs | No
Additional Resources
The Readme for the FX Dealing Accelerator is here
0 downloads
Submitted
-
Next Best Action Accelerator
The Next Best Action Accelerator provides a reference architecture and code assets for building an event-driven marketing platform for customer engagement. It is configuration-driven through a custom web interface based on TIBCO Spotfire which allows marketing personnel to define target audiences and offers for customer engagement.
Here's a video showing the Accelerator in action:
Business Scenario
Event-driven marketing is based on capturing customer events and then determining audience membership in real-time. Linked offers are then issued to customers who can then take some action to receive the benefits of those offers. Various types of real-time events can be handled, such as purchases, entering a store, or completing a qualifying context for an offer. The event then triggers an audience selection based on who the customer is and what they have done in the past. This may include the use of data science models for segmentation and/or propensity. A customer may match several audiences, which in turn may match several offers. A series of best action rules then apply to ensure the optimal offer for the customer is selected and issued. Again, this may involve the use of data science propensity models, as well as business rules.
Concepts
The Accelerator configuration is driven by two primary static data objects:
Audience -- a targeted customer group defined by selection attributes and triggering events
Engagement -- any interaction between the platform and customers
Audiences are composed of a series of filters that narrow selection down to specific criteria. Currently this includes customer demographics and customer segmentation using a PMML model. Audiences also include a triggering event that causes them to be evaluated. This could be a Purchase, Position, Campaign Trigger, or other arbitrary Generic Event.
Engagements are used to make offers to Audiences. When a triggering event occurs, it may trigger off zero to many Engagements based on audience membership. Default functionality is to examine all these offers and determine which is the best based on business rules, and then issue only that offer to the customer.
An offer follows this lifecycle:
Matched -- the customer and event match one of the included Audiences, but none of the excluded Audiences
Issued -- the offer is determined to be the best for a given set of offers triggered by an event, or the Engagement is marked as Engage All Matched
Qualified -- the customer has completed one of the qualifying contexts for a given offer and they are awarded the next action
The business logic to determine the best offer from the set can be customized. In the Accelerator the following decision path is taken:
The steps are:
For each matched Engagement, check whether there are one to many qualifying contexts with a propensity model
If the Engagement has qualifying contexts with propensity models: calculate the propensity for each qualifying context that has a model; the propensity for the offer is the maximum propensity of all those qualifying contexts
Calculate the value of the engagement to the customer: for Coupon type, the monetary value of the coupon; for Points type, a nominal 0.02 currency units per point
Sort the Engagement list first by descending propensity for those that have propensity values, then by offer value for those that do not
The Engagement at the top of the list is the best action and is issued to the customer
For example, this set of offers has a mix of propensity and offer values:
Engagement | Propensity | Offer Value
Bonus Points for buying Chalk | 0.430 | 0.05
Bonus Points for buying Home or Clothing Accessories | 0.120 | 0.25
Bonus Points for Toy Wagon for Male Customers | - | 2.50
Those offers with propensity are ranked higher than those with only offer value. In this case Bonus Points for buying Chalk is the offer the customer is most likely to engage with, therefore it is issued as best offer.
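A minimal Python sketch of this default ranking rule, using the offers from the table above, might look like the following (illustration only; the accelerator implements the rule in its event processing layer):

```python
# Offers with a propensity score outrank offers with only a value; higher is better within each group.
def best_action(offers: list[dict]) -> dict:
    with_propensity = [o for o in offers if o.get("propensity") is not None]
    value_only = [o for o in offers if o.get("propensity") is None]
    ranked = (sorted(with_propensity, key=lambda o: o["propensity"], reverse=True)
              + sorted(value_only, key=lambda o: o["offer_value"], reverse=True))
    return ranked[0]

offers = [
    {"engagement": "Bonus Points for buying Chalk", "propensity": 0.430, "offer_value": 0.05},
    {"engagement": "Bonus Points for buying Home or Clothing Accessories", "propensity": 0.120, "offer_value": 0.25},
    {"engagement": "Bonus Points for Toy Wagon for Male Customers", "propensity": None, "offer_value": 2.50},
]
assert best_action(offers)["engagement"] == "Bonus Points for buying Chalk"
```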
The dynamic data model for the Accelerator takes the form of report events:
Order -- customer purchases one to many order lines of products
Position -- customer position is recorded through consent via a mobile app
Generic -- this is a catch-all event that can be used for arbitrary named events with an optional single value
Offer -- an offer has been made to a customer; this is generated by the Accelerator or externally
Campaign -- trigger for a campaign
Any of these event types may trigger off an Audience selection and customer Engagement cycle.
Benefits and Business Value
The accelerator provides immersive visual analytics and machine learning to magnify the power of the marketer resulting in dramatically improved effectiveness of customer interactions.
Technical Scenario
The accelerator demonstrates several scenarios for event-driven marketing along with some sample audience and offer configurations. There are two different contexts provided: Retail and Telco.
In the Retail case, the first scenario demonstrates order processing with offers being generated in response to customer purchase events. Audiences are identified and offers made to customers based on rules and model-driven propensity analysis. Customers then complete qualifying contexts to receive awards like points and discount coupons.
The second scenario shows how location events and geofences can be used to detect when customers are within certain defined areas, or within a specified radius of a store. Offers are generated based on location and customer attributes.
In the Telco case, these two scenarios are also implemented. There is also a scenario to show how a campaign trigger can be used to target a broader audience, and examples of how arbitrary events like churn risk and dropped calls can be used to personalize offers for customers to improve engagement.
Components
0 downloads
Submitted
-
Smart Transport Accelerator
Smart Transport Accelerator (Cloud Edition) is a 100% cloud-based solution that gives Transport & Logistics companies a fast start to solving two key business challenges:
Real-time operational awareness of moving assets & incidents to help improve network and operational efficiency. It does this by consuming GTFS data in Protobuf format and providing real-time visualizations to identify and respond immediately to any operational incidents.
Event-driven actions when an incident, such as an accident or breakdown, occurs, to help improve customer experience and quality of service. It does this by connecting to event feeds delivered through modern messaging formats - for example, events detected through AI-enabled smart cameras, as seen in TIBCO Can Do That. When an incident event occurs, the accelerator creates a case for the management of the incident.
What's New
In the 1.0.0 release:
the Smart Transport Accelerator uses TIBCO Cloud™ Integration to transform the GTFS feed from Protobuf format into JSON format. The JSON data is then processed by TIBCO Cloud™ Events to execute a set of defined business rules and correlate the events in real-time. TIBCO Cloud™ LiveApps automates the creation of incidents for notifications about any accidents or medical emergency incidents. TIBCO Cloud™ Spotfire generates real-time visualizations of all these correlated events.
Technology Overview
The General Transit Feed Specification (GTFS) is a public data specification, originating from Google, that allows public transportation companies to publish their transit data in a standard format. This allows the data to be consumed by a variety of software applications; thousands of transportation providers around the world use this specification on a daily basis. More information is available here. The GTFS specification is composed of static data feeds that are generally updated on a daily basis, and GTFS-RT, real-time feeds that are updated every few seconds. This gives a true real-time view of operations inside the transport provider's network.
The Accelerator uses TIBCO Cloud™ Integration to subscribe to real-time position and trip update feeds delivered through GTFS-RT. In this case, we are using the real-time position data feed from the Open Data Hub provided by Transport for NSW (TfNSW). The Accelerator uses TIBCO Cloud™ Integration to transform the Protobuf format into JSON format. The JSON data is then processed by TIBCO Cloud™ Events to execute a set of defined business rules and correlate the events in real-time. TIBCO Cloud™ LiveApps automates the creation of incidents for notifications about any accidents or medical emergency incidents. TIBCO Cloud™ Spotfire generates real-time visualizations of all these correlated events.
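For readers curious what consuming a GTFS-RT Protobuf feed looks like outside of TCI Flogo, here is a minimal Python sketch using the public gtfs-realtime-bindings package; the feed URL and authentication header are assumptions modelled on the TfNSW Open Data Hub and should be checked against its documentation.

```python
# Illustrative GTFS-RT consumption: download a vehicle-position feed (Protobuf) and
# convert each entity to a JSON-friendly dict. Requires: pip install gtfs-realtime-bindings requests
import requests
from google.transit import gtfs_realtime_pb2
from google.protobuf.json_format import MessageToDict

FEED_URL = "https://api.transport.nsw.gov.au/v1/gtfs/vehiclepos/buses"  # assumed TfNSW endpoint
API_KEY = "<open-data-hub-api-key>"

def fetch_vehicle_positions() -> list[dict]:
    resp = requests.get(FEED_URL, headers={"Authorization": f"apikey {API_KEY}"}, timeout=30)
    resp.raise_for_status()
    feed = gtfs_realtime_pb2.FeedMessage()
    feed.ParseFromString(resp.content)           # parse the Protobuf payload
    return [MessageToDict(entity) for entity in feed.entity]
```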
Components
TIBCO Software products and versions used:
Software | Minimum Version
TIBCO Cloud Integration | Latest
TIBCO Cloud LiveApps | Latest
TIBCO Cloud Events | Latest
TIBCO Cloud Messaging | Latest
TIBCO Cloud Data Streams | Latest
TIBCO Cloud Spotfire | Latest
Solution Design
This accelerator demonstrates end-to-end real-time data integration with open-source custom extensions, messaging, cloud events, streaming, and visual analytics features using TIBCO's Connect and Predict capabilities, driven by real-time GTFS feeds provided by TfNSW's Open Data Hub data services.
Architecture Overview
This Accelerator:
Uses TIBCO Cloud™ Integration (TCI Flogo) to extract and transform the real-time GTFS position and trip update feeds from Transport for NSW's Open Data Hub data services, then publishes the streaming data in JSON format to the TIBCO Cloud™ Messaging eFTL service (TCM eFTL).
TIBCO Cloud™ Events subscribes to the incoming stream of messages and runs it through a series of prescribed business rules. Based on these business rules, some incidents will cause TCI Flogo to create a new case within TIBCO Cloud™ Live Apps. TCI Flogo will also generate new incident messages and publish these back to TIBCO Cloud™ Messaging.
TIBCO Cloud™ Data Streams is used as a durable subscriber to the incidents_data and general_feed messages within the TCM eFTL service, and provides the live streaming data directly to TIBCO Cloud™ Spotfire, which displays the visual analytics.
Documentation
A step-by-step guide to setting up the Smart Transport Accelerator is available to download, and the artifacts can be downloaded from here under the Releases section. The documents are available to download in the Resources section of the Smart Transport Accelerator page.
Demo
Check out the short demo video of the Smart Transport Accelerator.
0 downloads
Submitted
-
Data Historian Accelerator
The Data Historian Accelerator captures real-time telemetry from data historians like OPC UA and OSI PI. A custom HTML5 web interface provides the user the ability to visualize the object hierarchy of the historian, and create subscriptions on the nodes or tags of interest. The accelerator receives this data in real-time and assembles the points into logical data sets that can then be passed to business rule modules that implement decision tables or data science models. The data and model output are streamed into Live Datamart for visualization in Spotfire.
The Intelligent Equipment Accelerator is a similar offering, but is a generic platform for any data provider, whereas the Data Historian Accelerator focuses specifically on systems like OPC UA and OSI PI.
And here's a video showing how the Accelerator works with OSI PI.
Business Scenario
Data Historians are applications that retrieve production and process data from manufacturing and other process-oriented systems. They store data in an efficient database, reducing the requirement for large amounts of disk space. They also provide quick access to the data through API-based queries.
Historians are a mature technology; OSI PI is over 40 years old, for example. They have large existing install bases and are well integrated with manufacturing and process-based technologies through DCS and PLC control systems. However, they do not natively provide advanced analytics capabilities or the ability to execute machine learning models.
Concepts
The Event Manager implementation handles the connection to the historian systems and full processing of the data. The following concepts and terms apply to this component.
A Data Source is an implementation of a connection to a historian system. It includes some standard web services that allow the user to browse the hierarchy on the historian and set up Subscriptions to data points of interest. The Accelerator refers to these data points as Nodes, but this maps onto different concepts depending on the historian being connected to. In OPC UA these are also referred to as Nodes, but in OSI PI they are called Tags.
Once a Subscription is setup to one or more Nodes, we can assemble these into a logical set of data points called a Feature Set, with each data point mapping to a single Feature. Each Feature Set may have one to many Features, which typically will all come from the same Subscription or Data Source, but may span different Subscriptions and Data Sources if necessary.
A Feature Set can then be directed to call one to many Indicators. An Indicator is an implementation of a business rule that requires certain data to operate, which is modelled as the Feature Set. Indicators are called in a pre-defined order, with the output of earlier Indicators passed into the subsequent Indicators, along with all Features from the initial Feature Set, unless these have been updated by preceding Indicators.
An Indicator can implement any kind of arbitrary business logic as required. The Accelerator provides examples of Indicators that compute the mean of a set of Features using TERR™ and Python. There is also an example that computes a cluster using PMML. Indicators may implement any other standard EventFlow logic, including Decision Tables. They may output a Feature Set of Features which can include new Feature values or updates to Features passed into the Indicator.
All raw data from a Data Source are passed to Data Sinks for storage. The Accelerator implements a single Data Sink for TIBCO Live Datamart, but this could be extended to other sinks such as CSV files. All Features in a Feature Set are also sent to the same Data Sinks for storage. This includes both original Feature Sets as well as new and updated Features generated by Indicators.
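To make the Indicator contract concrete, here is a minimal Python sketch of a mean Indicator. It is illustrative only: the Accelerator implements Indicators as EventFlow, TERR™, or Python modules, and the FeatureSet class and mean_indicator function below are hypothetical stand-ins for the concepts described above.

# Illustrative sketch only; FeatureSet and mean_indicator are hypothetical names.
from dataclasses import dataclass
from statistics import mean
from typing import Dict

@dataclass
class FeatureSet:
    name: str
    features: Dict[str, float]   # Feature name -> latest value from a Subscription

def mean_indicator(feature_set: FeatureSet) -> FeatureSet:
    """Compute the mean of all Features and emit it as a new Feature."""
    avg = mean(feature_set.features.values())
    # Output may add new Features or update ones passed in; here we add one.
    return FeatureSet(name=feature_set.name,
                      features={**feature_set.features, "mean": avg})

# Example: two pump telemetry Features assembled into one Feature Set
fs = FeatureSet("esp_pump", {"intake_pressure": 212.0, "current_draw": 38.5})
print(mean_indicator(fs).features["mean"])   # 125.25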
Benefits and Business Value
By integrating TIBCO Data Science, TIBCO Spotfire® and TIBCO Streaming with these Data Historians, process data can be captured and analysed to detect patterns. Models can be developed using languages like R or Python, or using more advanced tools like Statistica™, and then deployed to a running TIBCO Streaming engine for real-time model execution. These models could be used to detect anomalies, detect production quality issues, or do predictive or condition-based maintenance.
Technical Scenario
The Accelerator shows how to integrate Spotfire, Statistica, and TIBCO Streaming to capture historian data and execute against some simple models.
Two datasets are provided:
Electric Submersible Pump (ESP) telemetry for intake pressure and current draw
Power plant telemetry for a gas-fired turbine plant which generates electricity
Simple model implementations in R and Python are provided to show how to integrate with inbound historian data. These models simply compute a mean of all provided features.
A more sophisticated K-Means Clustering model is provided using PMML that can be used to detect anomalies in the power plant telemetry data.
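As a rough illustration of how a k-means model can flag anomalies, the following Python sketch uses scikit-learn rather than the provided PMML model; the telemetry values, feature layout, and distance threshold are made up for the example.

# A minimal sketch of the k-means anomaly idea, not the Accelerator's PMML model.
import numpy as np
from sklearn.cluster import KMeans

# Historical power plant telemetry: rows of [turbine_speed, exhaust_temp]
history = np.array([[3000, 550], [3010, 548], [2995, 552], [3005, 551],
                    [1500, 300], [1510, 305], [1495, 298], [1505, 302]])
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(history)

def is_anomaly(point, threshold=50.0):
    """Flag a reading whose distance to its nearest cluster centroid exceeds threshold."""
    distances = np.linalg.norm(model.cluster_centers_ - point, axis=1)
    return distances.min() > threshold

print(is_anomaly(np.array([3002, 550])))   # False: close to a known operating mode
print(is_anomaly(np.array([2200, 700])))   # True: far from both centroids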
Components
0 downloads
Submitted
-
TIBCO LiveView™ Recovery from Interruption
This component demonstrates TIBCO LiveView™ recovering from a service interruption by using table persistence and a data publisher that uses either TIBCO FTL® or the open-source Kafka message bus.
0 downloads
Submitted
-
Simple TableProvider Example for TIBCO® Live Datamart
This component provides TIBCO® Live Datamart configuration and TableProvider sample source code. The TableProvider sample code reads Excel files, just as an example. While it's convenient to simply read files, this does mean that effectively only LiveView SNAPSHOT queries are supported.
A TableProvider is intended to front a data source that has some native querying capability. As reading Excel files doesn't natively provide any querying capability, this sample simply provides predicate support to read row ranges from the Excel spreadsheets.
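The shipped sample implements this in Java; the following Python sketch, using pandas and a hypothetical read_row_range helper, just illustrates the idea of serving a row range from an Excel file in response to a "start,end" predicate.

# Python sketch of the row-range predicate idea; not the component's Java code.
import pandas as pd

def read_row_range(xls_path: str, predicate: str = None) -> pd.DataFrame:
    """Return all rows, or the 1-based inclusive range given as 'start,end'."""
    df = pd.read_excel(xls_path)          # snapshot read of the whole sheet
    if predicate:
        start, end = (int(v) for v in predicate.split(","))
        df = df.iloc[start - 1:end]       # e.g. '1,20' -> first 20 rows
    return df

# Roughly mirrors: lv-client select "* from ItemsSales.xls where 1,20"
rows = read_row_range("ItemsSales.xls", "1,20")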
Files Included
A file called ExcelLiveview.lvconf configures the TableProvider class and passes parameters - the name of Excel files to present as tables - to this class.
Running The Component
You can start this LiveView project from Studio by right-clicking on the project and selecting Run As -> LiveView Project.
This project is configured to run only a services layer and the default port of 10080 is used.
Once started, you should see two tables presented: ItemsSales.xls and ItemsInventory.xls. If you have access to LiveView Desktop, download the workspace and you will see a grid of the ItemsSales data and a pie chart for the ItemsInventory data.
You can also query the tables via lv-client by:
lv-client select "* from ItemsInventory.xls"
lv-client select "* from ItemsSales.xls where 1,20"
lv-client select "* from ItemsSales.xls limit 10"
The last query has a limit of 10 rows, which will cause the query to terminate early. To see all 3,000 rows of ItemsSales.xls, remove the limit.
0 downloads
Submitted
-
Iframe Custom Card for TIBCO LiveView™ Web
This component provides a custom visualization for LiveView Web.
Once loaded, the card gives the user a text box in which to enter a URL. The contents of the web page named by the URL are then displayed in the visualization.
This card is built for version 1.1.1 of LiveView Web.
This card does not require (nor take into account) any query in the query builder portion of LiveView Web.
STEPS TO USE:
1) Unzip the file. Drag the folder to a LiveView Web project and place it in the plugins folder under the lv-web folder of the project.
2) Start your LiveView Web project
3) Once this custom card is loaded as a plugin, the visualization choices in the editor include an iframe.
4) Choose the iframe visualization type: a text box is presented
5) Enter a URL into the text box and click "Save" in the visualization editor of LiveView Web. The content of the web page located at the URL is displayed in the LiveView Web card.
Version History:
1.0 Initial release.
0 downloads
Submitted
-
Connected Vehicles Accelerator
The Connected Vehicles Accelerator contains components to allow tracking of vehicles and trips based on the GTFS format for transit vehicles. Although based on a transit data model it allows tracking of any kind of vehicle moving to a defined schedule. It includes components for visualization of real-time moving vehicles, rules to detect delays and classify occupancy, and integration components to link all of them together.
This video explains how the Accelerator works.
Business Scenario
Traditionally, transportation companies relied on routes, schedules, work assignments and other isolated systems to model their business. Much of the data is historical, making it difficult or impossible to predict future state. Plus, with the data in silos there is no overall holistic view of what's going on across the entire network. Stale, batch-oriented feeds mean that the data is in the wrong place at the wrong time, degrading its value. Getting the data to the right people is also a challenge. Backwards-facing data means that exceptions are always surprises and handling them is always a reactive process often resulting in sub-optimal outcomes.
In the modern world of Internet of Things (IoT), vehicles have become mobile devices, leading to the Internet of Trains, Boats, or Airplanes. These new information sources provide an opportunity to increase the available operational intelligence, both quantity and quality. Of course the data volume increase can be both a benefit and a hindrance if you can't find the signal in the noise. But the clever use of smart event processing technology and predictive analytics allows you to cut through the clutter to find the events that matter. Now with forward-looking data, exceptions can be proactively handled with the best possible outcome, for the company, and its customers and partners, improving their experience. Plus it opens up new avenues to monetize the value of the data through real-time APIs that can be exposed and marketed to third parties.
Concepts
At the heart of the Connected Vehicles Accelerator is the Trip. This is a journey consisting of several stops operating on a schedule. There are three resources that a Trip depends on: Vehicle, Crew, and Passengers/Cargo. The Trip also has a dependency on the Processes that make it happen.
The Connected Vehicles Accelerator captures data from existing systems, and combines it with real-time feeds from these resources and processes. In addition, it can capture real-time feeds from third party data providers such as weather and traffic. Accelerator rules analyse this data and produce automated actions, advisories to operations staff, and alerts to outside parties. The current state of the network is displayed in true real-time on an operations dashboard, and near real-time using analytics tools.
By aggregating all this information in one place, the accelerator gives unique insight into network operations that is simply not available in any other single system.
Benefits and Business Value
The Connected Vehicles platform acts as a single source of truth for all trip and vehicle data. Using an in-memory model exposed through integration services, the data is available to any system that needs it, reducing the need for data silos. As a real-time data repository, it is fed directly by data streams from vehicles and systems, so the information is guaranteed to be timely and accurate.
The business rules are primarily configuration-driven which allows decision table changes to be deployed in hours rather than weeks. This means a more agile system, able to adapt to business needs quicker and more effectively. By detecting anomalies and sending alerts, the accelerator acts as an efficient and fast first check on network health. It decides when something needs operations input and alerts them quickly and effectively. Using the real-time operation dashboard, operations staff has visual confirmation of network health at a glance, helping them quickly identify critical business moments.
Better operational intelligence with predictive capability means a single view of resources, all updated in real-time, with more timely and more accurate data. The net result is a more agile business, able to react on both the micro and macro scale more effectively.
The platform deployment is naturally scalable giving better data distribution and the ability to meet growth targets and beyond. The event-based architecture and in-memory network model support large scale deployments both on premise and in the cloud. Exposing this data using APIs empowers employees, customers, and partners.
Technical Scenario
Connected Vehicles is organized into contexts, with each representing a particular business or industry scenario. Within each context there will be several different test cases which can be run as demos to show various accelerator features.
The accelerator has the following demo contexts:
Distribution Logistics -- logistics company providing deliveries to stores in the Bay Area
Railway -- passenger railway operating in the Netherlands called Virtual Train
Secure Logistics -- logistics company providing secure delivery services in Madrid
In all cases a simulator is used in place of actual vehicles, publishing data directly into the accelerator environment. This includes information about vehicle speed, direction, distance, and position, as well as occupancy.
The accelerator is based around a Network Model which is an in-memory representation of static data. It is based on GTFS (General Transit Feed Specification) for trips and routing, and extended further with Extension data for scheduling, vehicles, and crews. This static reference data is used by the Event Manager to model the transportation network. It combines the static reference data with dynamic data feeds that arrive as Report events. This allows the Event Manager to track the existing state of the network.
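As a simplified illustration of combining static GTFS reference data with dynamic Report events, consider the following Python sketch. It is not the Accelerator's EventFlow implementation: the on_report function and the Report fields shown are hypothetical, although trip_id and route_id follow the GTFS trips.txt specification.

# Minimal sketch: static GTFS reference data plus dynamic Report events.
import csv

# Static reference data: trip_id -> route_id from a GTFS trips.txt file
with open("trips.txt", newline="") as f:
    trips = {row["trip_id"]: row["route_id"] for row in csv.DictReader(f)}

network_state = {}   # trip_id -> latest dynamic report

def on_report(report: dict):
    """Combine a dynamic vehicle Report with the static network model."""
    trip_id = report["trip_id"]
    if trip_id not in trips:
        return                            # unknown trip: ignore or raise an advisory
    network_state[trip_id] = {"route_id": trips[trip_id], **report}

on_report({"trip_id": "T100", "lat": 52.37, "lon": 4.90,
           "speed_kmh": 73.0, "delay_s": 120})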
Components
0 downloads
Submitted
-
Open Banking Accelerator
The Open Banking Accelerator gives a 360-degree view of a customer's financial health through the use of open banking APIs. It masters the data, allowing for cleansing and governance, and then exposes this data in a standardized way using virtualization. It also provides lightweight fraud detection and customer offer management to illustrate how these concepts could be addressed. Refer to the TIBCO Risk Management Accelerator and TIBCO Next Best Action Accelerator for more detailed implementations of these concepts.
And here's a video showing the Accelerator in action.
Business Scenario
There have been fundamental changes in the financial services industry over the past few years. This has been driven by a combination of new regulatory requirements and enabling technologies that give customers better rights over their personal information.
Open Banking typically gives consumers the right to access certain data about themselves and have this information safely transferred to trusted third parties. It is part of a move towards an Open API Economy that will span multiple industries, not just financial services. It is related to data privacy initiatives like GDPR and Open Data, but is also driven by regulatory requirements like PSD2 in the UK and CDR in Australia.
Concepts
Benefits and Business Value
The benefits to customers are two-fold. Firstly, it reduces the friction of changing financial service providers, which ultimately leads to better service and greater competition. Secondly, it gives access to new and innovative financial products and services, including better financial tools.
Technical Scenario
The Accelerator shows how customer financial data can be aggregated from disparate data sources and then cleansed and governed in a master data management repository. Transactions flow through the system, which uses the master data to enrich and contextualize individual customer transactions. Then lightweight fraud detection and personalized offer rules are applied and suitable actions taken in response.
The demonstration uses the Australian CDR format for master data, but other formats such as PSD2 could be added as an additional virtualization layer.
Customer master data is captured from a number of sources via APIs. In this demonstration the integration component is implemented in TIBCO Streaming, and it pulls bank products and details from four major Australian banks through their publicly available APIs. This data is then mastered in TIBCO EBX™ where it can be cleansed and governed. Additional customer master data is stored in various repositories in EBX™.
This master data is then captured and virtualized, combined with data from other sources using TIBCO Data Virtualization. This data is then available for internal systems to access and apply.
The main business rules engine is also implemented using TIBCO Streaming as the Event Manager. This captures a stream of financial transactions, enriches it using the virtualized data and then stores it in a live repository. A data science model is applied against the transaction stream to determine the likelihood that a given transaction is fraudulent, and if this exceeds a configurable level the transaction is flagged for investigation. In addition, the full customer context is pulled from the virtualization platform and then analysed by the offer platform to make personalized offers to the customer for products that they may be interested in.
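The following Python sketch illustrates the fraud-check step described above. The scoring logic and the 0.8 threshold are placeholders for the deployed data science model and its configurable level, not the Accelerator's actual implementation.

# Minimal sketch of threshold-based fraud flagging; score_transaction is a stand-in
# for the deployed model, and all field names and the threshold are hypothetical.
FRAUD_THRESHOLD = 0.8   # configurable level above which a transaction is flagged

def score_transaction(txn: dict, customer_context: dict) -> float:
    """Stand-in for the model: returns the likelihood the transaction is fraudulent."""
    unusual_amount = txn["amount"] > 10 * customer_context["avg_txn_amount"]
    foreign = txn["country"] != customer_context["home_country"]
    return 0.45 * unusual_amount + 0.45 * foreign + 0.05

def process(txn: dict, customer_context: dict) -> dict:
    likelihood = score_transaction(txn, customer_context)
    txn["fraud_likelihood"] = likelihood
    txn["flagged_for_investigation"] = likelihood > FRAUD_THRESHOLD
    return txn

enriched = process({"amount": 9500.0, "country": "GB"},
                   {"avg_txn_amount": 80.0, "home_country": "AU"})
print(enriched["flagged_for_investigation"])   # True (0.95 > 0.8)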
Components
0 downloads
Submitted
-
TIBCO LiveView™ Desktop Custom View for TIBCO ActiveSpaces® Paint Demo
This component adds a custom view to TIBCO LiveView™ Desktop which brings the TIBCO ActiveSpaces® Paint demo to Desktop. It can also execute a query against the drawing's shape data to display a subset of the shapes.
0 downloads
Submitted
-
Live Dashboard for TIBCO Fulfillment Order Management
Using this shared component, you can display real-time analytics using TIBCO® Live DataMart.
You must have the following software installed to use this feature:
TIBCO® Fulfillment Order Management 4.0.1
TIBCO Live Datamart 10.2
TIBCO Enterprise Messaging Service™ 8.3
Apache Maven 3.x (or above)
Configuring for the Live Dashboard for TIBCO Fulfillment Order Management (LD4FOM)
Complete the following configurations before running the Live Datamart dashboard.
Procedure
Setting the Environment Variables
Set the TIBCO_EP_HOME environment variable to the file system location of your TIBCO StreamBase installation. This is the same location identified by the STREAMBASE_HOME environment variable. For example, $TIBCO_HOME/sb-cep/10.2. Add TIBCO_EP_HOME/distrib/tibco/bin to the PATH environment variable.
Set the environment variable SB_MAVEN_REPO to $TIBCO_EP_HOME/sdk/maven/repo.
Installing External Dependencies
The dashboard project requires the jms-2.0.jar and tibjms.jar files to be in the local Apache Maven repository, which is specific to the current user on the current machine. By default, this is the .m2 directory of the user's home directory. As per the POM provided with the fom-notification-dashboard project, Maven will try to resolve the dependencies by checking $TIBCO_EP_HOME/sdk/maven/repo, but you can add multiple repositories in the POM.
Procedure
Use the following command syntax (the command is shown on multiple lines, but must be entered as one long command):
mvn install:install-file -Dfile=<path-to-file> -DgroupId=<group-id> -DartifactId=<artifact-id> -Dversion=<version> -Dpackaging=jar -DgeneratePom=true
Setting the Outbound Notifications
To generate the live or real-time analytical reports, the Live DataMart dashboard uses outbound TIBCO Enterprise Messaging Service messages from TIBCO® Fulfillment Order Management. Set the following properties in Fulfillment Order Management Configurator so the application can send the outbound notifications.
Procedure
Set the following properties in Fulfillment Order Management Configurator to true:
For the provided dashboard reports, only these outbound notifications are used.
Order Status Change Notification
Plan Status Change Notification
PlanItem Status Change Notification
OrderLine Status Change Notification
Setting TIBCO Enterprise Messaging Service Configurations
Update the fom-notification-dashboard/src/main/resources/adapter-configurations.xml file with the respective TIBCO Enterprise Messaging Service configurations.
Procedure
Under the "jms-server" section, update the following attributes:
When you modify the password field, it is recommended that you encrypt your password using the sbcipher command from TIBCO StreamBase. For example, to encrypt the password "admin":
C:\tibco\sb-cep\10.2\bin>sbcipher -c "admin"
Vzz+hSVqZjrTyiA08RL87YgAnPxI/AygAnrXVWeS0IspfKihwJ/YJ8hCzyTLpFVlIg/6eD/EvYaEmVNmB1NFRQ==
Update the provider-url attribute with the TIBCO Enterprise Messaging Service server's IP and port.
Update the username attribute with the TIBCO Enterprise Messaging Service username.
Update the password attribute with the TIBCO Enterprise Messaging Service password.
Under the "destination" section, update the message-selector attribute with the value of your specific tenant. The default TENANTID is kept as TIBCO. For example, if you have a tenant ID 'A', the message-selector attribute should be updated as message-selector="TENANTID='A'".
Setting Configuration Parameters
Change the configuration parameters in fom-notification-dashboard/src/main/resources/configurationparameters file before running the dashboard.
Procedure
Set the following configuration parameters:
The values of the parameters in this file must be updated, and the configuration-parameters file reference must be passed in the epadmin command. For more information on epadmin commands and how they work, refer to the TIBCO StreamBase documentation on epadmin.
clean_up_completed_orders_days = 1
For more information on this parameter, see CLEAN_UP_COMPLETED_ORDERS_DAYS in the "Automatic Deletion" topic.
completed_orders_clean_up_frequency_seconds = 3600
For more information on this parameter, see COMPLETED_ORDERS_CLEAN_UP_FREQUENCY_SECONDS in the "Automatic Deletion" topic.
eventflow_engine_tcp_port = 10000
This is the default port where the TIBCO StreamBase engine starts.
liveview_tcp_port = 10080
This is the default port where the LiveView engine starts.
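Putting these together, a configuration-parameters file passed via the substitutionfile option might look like the following; the values shown are the defaults listed above, and the exact file layout may differ in your installation.

clean_up_completed_orders_days = 1
completed_orders_clean_up_frequency_seconds = 3600
eventflow_engine_tcp_port = 10000
liveview_tcp_port = 10080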
Creating the TIBCO Enterprise Messaging Service Channels
Setting up JMS routing is not mandatory, but it is recommended in order to reduce the load on the TIBCO Enterprise Messaging Service server used by TIBCO® Fulfillment Order Management: enable routing on the application's TIBCO Enterprise Messaging Service server and use a second, separate TIBCO Enterprise Messaging Service server for the Live Datamart dashboard.
Procedure
Execute the channel creation script fom-reporting-dashboard/bin/af-dashboard-create-ems-channel.txt.
This script creates all the required queues and bridges between the TIBCO® Fulfillment Order Management application's topics and the dashboard's queues.
TIBCO Live Datamart Dashboard Deployment
Import the provided source code into TIBCO StreamBase Studio, make the required configuration changes as stated above, and then deploy the TIBCO Live Datamart dashboard to start the dashboard service with the provided reports.
Deploying the Live Datamart Dashboard with Provided Source Archives
This deployment option lets you extend the existing dashboard functionality by adding new reports to the dashboard.
Procedure
The following are the two project source directories:
fom-reporting-dashboard/src/deploy-fom-dashboard
fom-reporting-dashboard/src/fom-notification-dashboard
Import the fom-notification-dashboard and deploy-fom-dashboard projects into TIBCO StreamBase Studio.
Result
fom-notification-dashboard is ready to run in your designer.
Building the source from command prompt
You can use the ant script fom-reporting-dashboard/src/build.xml to trigger the build and generate the deployable archive.
$ fom-reporting-dashboard/src> ant
Starting TIBCO Live Datamart Dashboard
After making the above-mentioned changes, you are ready to start the dashboard using epadmin.
Install the application as a service and create a node by running the following command:
$> epadmin install node application=fom-reporting-dashboard/src/deploy-fom-dashboard/target/deploy-fom-dashboard-0.0.1-SNAPSHOT-ep-application.zip nodename=A.fom_notification_dashboard nodedirectory=<any-tmp-directory> substitutionfile=<path-to-configuration-parameters-file>
When running this command, the nodename identifies a unique service name. A service name consists of the following parts: a node name, an optional grouping, and a cluster name:
servicename = <nodename-label>.[<group-label>.]*<clustername-label>
The following are some example service names, which uniquely identify five different nodes, all in the same cluster:
applicationcluster
eastcoast.applicationcluster
eastcoast.applicationcluster
westcoast.applicationcluster
westcoast.applicationcluster
Start the created node by running the following command:
$> epadmin servicename=A.fom_notification_dashboard start node
You can stop the node by running the following command:
$> epadmin servicename=A.fom_notification_dashboard stop node
You can remove the node by running the following command:
$> epadmin servicename=A.fom_notification_dashboard remove node
Access the dashboard in a browser using http://<IP-Address>:<port>. The default port to access the dashboard is 10080.
LDM Dashboard Reports
The dashboard provides out-of-the-box reports based on the outbound notifications.
The following table describes the provided reports:
Orders Inflow Rate: This graph shows the number of orders currently coming in to TIBCO® Fulfillment Order Management. By default the order inflow is shown on a per-minute scale, but you can change the scale to per hour or per day. To customize this graph, see Customizing the Orders Inflow Rate Graph.
Process Component Average Completion Time: This graph shows the average completion time for all the completed process components (Y-axis) on a scale of milliseconds (X-axis). To find the average, this graph considers the last 100 process components which are marked as complete in the system.
Order Average Completion Rate Time: This graph shows the average number of orders completed in TIBCO® Fulfillment Order Management on a time scale. By default this is shown on a per-minute scale, but you can change the scale to per hour or per day. To customize this graph, see Customizing the Average Order Completion Rate Graph.
Order Completion Rate: This graph shows the number of orders completed in TIBCO® Fulfillment Order Management on a time scale. By default this is shown on a per-minute scale, but you can change the scale to per hour or per day. To customize this graph, see Customizing the Order Completion Rate Graph.
Ordered Products: This pie chart displays the number of ordered products and the percentage each has against other products.
Order Status: This table shows all the orders' current state. When you click on any of the orders, the corresponding Plan, Plan Item and Order Line grids are populated.
Order Line Status: This table shows order-lines, with their current state, that are related to the order selected in the Order Status table.
Plan Status: This table shows all plans, with their current state, that are related to the order selected in the Order Status table.
Plan Item Status: This table shows all plan-items, with their current state, that are related to the order selected in the Order Status table.
Process Component Completion Rate: After clicking any of the plans in the Plan Status table, this bar graph is populated for the process components that are completed.
Stuck Plan Items Grid: This table shows all plan-items with a state of ERROR or ERROR_HANDLER.
Long Running Orders: This table shows all orders in the system that exceeded a specified time. You can form multiple grids of this type with different specified times. To customize this table, see Customizing the Long Running Orders Grid.
Customer and Location: This heat chart displays the density of customers in every country. To create your own custom heat chart on a country level, see Customizing the Customer and Location Heat Chart.
Customizing Provided LDM Dashboard Reports
Customizing the Orders Inflow Rate Graph
You can change the order's inflow on a per minute (default), per hour, or per day scale.
Click the edit button on the Orders Inflow Rate graph. Change the x-axis based on the time scale you want shown on the graph:
Per Minute (default): From the Data tab, change the query to SELECT count_distinct(orderID) AS Orders, roundOffTimestampToMinutes FROM SubmittedOrders GROUP BY roundOffTimestampToMinutes. Select the Visualization tab. From the Field drop-down menu, select roundOffTimestampToMinutes.
Per Hour: From the Data tab, change the query to SELECT count_distinct(orderID) AS Orders, roundOffTimestampToHours FROM SubmittedOrders GROUP BY roundOffTimestampToHours. Select the Visualization tab. From the Field drop-down menu, select roundOffTimestampToHours.
Per Day: From the Data tab, change the query to SELECT count_distinct(orderID) AS Orders, roundOffTimestampToDay FROM SubmittedOrders GROUP BY roundOffTimestampToDay. Select the Visualization tab. From the Field drop-down menu, select roundOffTimestampToDay.
Update the Axis Name to Time. Click Save.
Customizing the Average Order Completion Rate Graph
Click the edit button on the Average Order Completion Rate graph. Change the x-axis based on the time scale you want shown on the graph:
Per Minute (default): From the Data tab, change the query to SELECT count_distinct(orderID) AS Orders, roundOffTimestampToMinutes FROM CompletedOrders GROUP BY roundOffTimestampToMinutes. Select the Visualization tab. From the Field drop-down menu, select roundOffTimestampToMinutes.
Per Hour: From the Data tab, change the query to SELECT count_distinct(orderID) AS Orders, roundOffTimestampToHours FROM CompletedOrders GROUP BY roundOffTimestampToHours. Select the Visualization tab. From the Field drop-down menu, select roundOffTimestampToHours.
Per Day: From the Data tab, change the query to SELECT count_distinct(orderID) AS Orders, roundOffTimestampToDay FROM CompletedOrders GROUP BY roundOffTimestampToDay. Select the Visualization tab. From the Field drop-down menu, select roundOffTimestampToDay.
Update the Axis Name to Time. Click Save.
Customizing the Order Completion Rate Graph
Click the edit button on the Order Completion Rate graph. Change the x-axis based on the time scale you want shown on the graph:
Per Minute (default): From the Data tab, change the query to SELECT count_distinct(orderID) AS Orders, roundOffTimestampToMinutes FROM SubmittedAndCompletedOrders GROUP BY roundOffTimestampToMinutes. Select the Visualization tab. From the Field drop-down menu, select roundOffTimestampToMinutes.
Per Hour: From the Data tab, change the query to SELECT count_distinct(orderID) AS Orders, roundOffTimestampToHours FROM SubmittedAndCompletedOrders GROUP BY roundOffTimestampToHours. Select the Visualization tab. From the Field drop-down menu, select roundOffTimestampToHours.
Per Day: From the Data tab, change the query to SELECT count_distinct(orderID) AS Orders, roundOffTimestampToDay FROM SubmittedAndCompletedOrders GROUP BY roundOffTimestampToDay. Select the Visualization tab. From the Field drop-down menu, select roundOffTimestampToDay.
Update the Axis Name to Time. Click Save.
Customizing the Long Running Orders Grid
Click the edit button on the Long Running Orders grid. Change the query based on the time threshold you want used for the grid:
Per Minute (default): From the Data tab, change the query to SELECT orderID, status, eventTimestamp, orderRef FROM OrderNotifications WHERE status != 'COMPLETE' WHEN eventTimestamp BETWEEN epoch() and now()-minutes(1).
Per Hour: From the Data tab, change the query to SELECT orderID, status, eventTimestamp, orderRef FROM OrderNotifications WHERE status != 'COMPLETE' WHEN eventTimestamp BETWEEN epoch() and now()-hours(1).
Per Day: From the Data tab, change the query to SELECT orderID, status, eventTimestamp, orderRef FROM OrderNotifications WHERE status != 'COMPLETE' WHEN eventTimestamp BETWEEN epoch() and now()-days(1).
Click Save.
Customizing the Customer and Location Heat Chart
Click New Card. Paste the following query in the "Type your LiveQL" section: SELECT region, count(customerID) as customers from CustomerHeatMap WHERE region != "null" GROUP BY region
Click the Visualization tab. Select Region Map. Select region. In the Region field, select the desired country. Name the card as per your region. Click Save and close the edit mode.
Apart from the out-of-the-box reports, you can also create your own analytical reports based on the notification data TIBCO Fulfillment Order Management sends.
LDM Data Deletion
There are two ways to delete data from the LDM tables: automatic and manual. Manual trimming is only for the tables that do not have the auto-trimming mechanism.
Automatic Deletion
Periodic trimming of the LiveView reports' older data ensures that the LDM service does not run out of memory.
Automatic deletion of LDM reports data is already set up for deleting completed orders, but the default values can be overridden.
The following two parameters are used for deleting data:
CLEAN_UP_COMPLETED_ORDERS_DAYS - This variable defines how old the completed orders should be when deleted from the reports. The unit of value is days, and the default is set to 1 day. So any order which is in COMPLETE status for more than 1 day is removed from the reports.
COMPLETED_ORDERS_CLEAN_UP_FREQUENCY_SECONDS - This variable defines how frequently older completed orders should be deleted. The unit of value is seconds, and the default is set to 3600 seconds. So the completed orders deletion is triggered every 3600 seconds (1 hour).
If you want to override the default value for either of these variables, use substitutions as one of the parameters passed to the epadmin command when installing the node.
Manual Deletion
For the tables that do not have the auto-trimming mechanism, such as the Ordered Products pie chart, you can use the lv-client delete command for ad-hoc manual deletes. For more information, see the TIBCO StreamBase LiveView Command Reference documentation on lv-client. The Ordered Products pie chart data can be cleaned up from the productID, orderLine, and orderRef columns by running the following command:
lv-client -u lv://lvserver:<port> "delete from ProductPieChart where predicate"
Multi-Tenant Environment for LDM Dashboard
The LDM dashboard supports multi-tenancy through the TIBCO EMS message-selector property. You can have one node started for a single tenant. The following diagram shows one-to-one mapping between nodes and tenants based on TIBCO EMS message selector:
To use a multi-tenant LDM dashboard, configure the message selector in TIBCO EMS before starting the node that runs the dashboard, as described in the "Setting TIBCO EMS Configurations" topic.
Data Persistence for LiveView Table
Data persistence allows you to restore data even if the node goes down and gets removed. Data persistence for the LiveView table is enabled by default. Navigate to the node directory (the directory you defined while installing the node, as described in the "Deploying the LDM Dashboard" topic), and then navigate to the engine directory: lv_tablespace/persisted_data. The LDM table-wise directories are created inside the persisted_data directory and contain the database files. It is highly recommended to periodically take a backup of the persisted_data directory, so that it can be restored whenever required.
0 downloads
Submitted
-
TIBCO LiveView™ Configuration Files for TIBCO ActiveSpaces® Demos
This component demonstrates connectivity between TIBCO ActiveSpaces® and TIBCO LiveView™. It contains LiveView configuration files for the ActiveSpaces Paint and ActiveSpaces Operations Demos as well as preconfigured launchers for LiveView and both demos.
0 downloads
Submitted
-
TIBCO LiveView™ Plane Tracker
This component demonstrates how to display real-time positions of planes on a map. Using the TIBCO LiveView™ JavaScript API, maps are updated in real-time. The sample illustrates the real-time updates by displaying airplane locations which are fed in from the PlanesToLiveView C# client application and a Software Defined Radio (SDR). The C# client application and USB drivers for the SDR are provided separately, and are not bundled in this project.
0 downloads
Submitted