What's New in TIBCO Spotfire®
Last updated: 1:15am Nov 05, 2018

TIBCO Spotfire® 10.0

Spotfire 10.0 represents a major milestone in the evolution of Business Intelligence and Analytics tools. For the first time, a single product covers all four main paradigms of interaction with BI data while also providing the full capabilities of a modern BI platform. Spotfire 10 covers all consumption of BI information: dashboards, ad-hoc visual analytics, canned analytic apps, predictive analytics, mobile apps, static reports and web portal mashups. With Spotfire 10 you are no longer restricted to one interaction paradigm, such as direct manipulation. You can analyze your data using search, automatic insights or direct manipulation, or even modify and build your application using a workflow-based approach. Each of these user interface paradigms has its own merits and drawbacks, so you should be free to choose the one that works best for the task at hand.

But Spotfire 10 does not stop at providing a state-of-the-art user experience for BI and Analytics; it also pushes the boundary of BI by letting you analyze streaming data from dozens of streaming data sources, such as Kafka, MQTT, Salesforce Streaming, WITS, OsiPi and many Capital Market Exchanges, as well as other TIBCO technologies such as FTL and EMS. With Spotfire 10, you can build real-time dashboards just as easily as dashboards based on static data. In fact, Spotfire lets you mash up streaming and historic data to show both what is happening now and how that compares with what happened historically, or to bring in static data as context to the streaming data.

Read more about Spotfire 10.0 below, and also check out the Spotfire X webinar series – a series of webinars covering the different aspects in detail.

Search

Search for visualizations

Spotfire 10.0 has a powerful search feature that lets you search for columns or values in your data, to get recommended visualizations, as well as for Spotfire features. This makes it even easier to visualize and analyze your data the way you want to, and it makes visual analytics accessible to anyone who knows how to search.

Type, for example, "Sales per Region and Year" and Spotfire responds by recommending the corresponding visualizations. You can click or drag a visualization from the search interface to the visualization canvas.

Search for values in your data

It is also possible to search for values in your data. For example, if you are looking for the sales of a specific sales rep or the sales in a specific country, you can just type the name and Spotfire will help you mark it.

Search to drive your analysis

The search feature also lets you search for Spotfire functionality, such as adding a calculated column, and even for data stored in the library.

AI-Powered Visualization Recommendations

Spotfire 10.0 provides visualization recommendations powered by machine learning that help you find relationships in the data. If you select a column in the Data in analysis flyout, Spotfire now shows recommended visualizations that also include other columns likely to be related to the selected column. This makes it easier and faster to analyze your data.

 


New Spotfire UX

The Spotfire user interface has been redesigned; it is now visually cleaner and groups functionality more logically.

Top Menu

The top menu has sections for File, Data, Tools, Visualizations, Help, etc., that contain access points for many features. On the right-hand side, there are icons for filters, bookmarks, collaboration, search, etc.

Files and data

The + icon lets you browse and search for all kinds of data sources: files, data functions, data connections, custom data sources, etc., both locally and in the Spotfire library.

Data in analysis flyout

The Data in analysis flyout lets you see details about the selected column, such as the distribution of values, max, min, unique values, etc., and access various functions for columns.

Visualization types flyout

The Visualization types flyout lets you choose a visualization type and drag it to the desired position on the visualization canvas.

Data canvas

The Data canvas is the new place to review and author the data pipeline for each data table (the Source View). It is accessed through the icon at the bottom of the Authoring bar to the left.

Visual Analytics for real-time data

By connecting to the new Spotfire Data Streams product, or to TIBCO Live Datamart, Spotfire Analyst can now visualize data from dozens of streaming data sources, such as Kafka, MQTT, Salesforce Streaming, WITS, OsiPi and many Capital Market Exchanges, as well as other TIBCO technologies such as FTL and EMS.


Spotfire 10.0 makes creating real-time applications and dashboards just as easy as with traditional data. This enables organizations to react more quickly to issues and opportunities and to optimize their business further. With this capability, Spotfire 10 pushes the boundary of BI and Analytics into use cases that previously could only be served by expensive custom applications.

Limit streaming data to a time range

With streaming data, it is possible to limit the time range shown in a visualization to the last X minutes, or to any other time range you want to see. Viewing just the last X minutes in a visualization with time on the X-axis is easily done by clicking the X-axis scale.

 

Note that it is also possible to limit the data from the Data section of the Visualization Properties dialog. There, you are not limited to the last X minutes, but can configure arbitrary time intervals, such as showing the data from 5 to 10 minutes ago.

Visual Analytics

Drag and drop to place new visuals on the visualization area

When creating a visualization from the visualizations flyout, Search or Recommended Visualizations, you can now drag it and place it on the canvas where you would like it. This makes it quicker to create dashboards with the layout you want.

Configurable preferred aggregation method in Spotfire visualizations

It is now possible to set a preferred aggregation method both for individual columns and globally for numerical columns. This means that Spotfire will use the preferred aggregation method instead of "Sum" whenever it needs to default to an aggregation method in a visualization. This is useful, for example, when working with data that does not make sense to sum up, such as Temperature or Age. The global setting is used unless there is a specific setting for the concerned column.

The global setting is configured from Tools > Options, in the Visualizations page, and requires Spotfire Analyst.

For individual columns, the preferred aggregation method can be specified in the Data in analysis flyout, using Spotfire Business Author or Spotfire Analyst.

Add user name and/or timestamp to the prepared PDF report header or footer

The prepared PDF reports can now be configured to display the current user and/or the current time in the header or footer.

Change coordinate reference system in Spotfire Business Author

Spotfire Business Author users can now configure the coordinate reference system (CRS) and projection of a map chart and its associated layers.
Analyst users also benefit from a nicer, faster UI for setting CRS/projection preferences.

Geocoding coverage updates

Over 160,000 new cities are now available worldwide.

New administrative areas are supported for the US, including:

  • Area Code
  • Borough
  • CBSA
  • Congressional District
  • School District

Data Wrangling

The summary view

When adding and replacing data in Spotfire, the summary view not only shows you how your data will be added, it also recommends how to add it. Individual features of the summary view are described in separate sections below.

Define add columns and add rows without waiting for data to load

Since the summary view is displayed before any data is loaded, you can now define Add Columns and Add Rows operations without loading one of the data tables into memory first. Combined with previews and smart UIs, this allows you to get your operations configured right from the start, before starting to load potentially large amounts of data.

Recommended add rows, while adding data

The recommendation engine is built right into the summary view and will run across all listed tables. This allows the recommendation engine to work across tables not yet loaded into the analysis file. For example, you can let the recommendation engine union 12 Excel files with sales per month without loading them one by one. In addition to the previous Business Author client support, recommendations for Add Rows are now available in the Windows client, Spotfire Analyst.

Add columns with recommended column matches, while adding data

Just like with Add rows above, you can configure Add columns in one step, before loading data. As in previous Spotfire releases, you are guided by recommended column matches and a smart web-based dialog with a preview. The Windows client, Spotfire Analyst, now uses only web UIs for configuring Add Columns operations.

Recommendation engine support for links between tables

As previously mentioned, Spotfire's recommendation engine for visualizations is now an integrated part of the data panel.

There is also a new type of recommendation available in Spotfire 10, recommended table links. By linking tables together, you can use shared marking between visualizations representing data from different data tables.

This recommendation provides a shortcut to create a Spotfire relation, which is much faster than using the Data Table Properties Relations tab.

Categorization of in-db columns in the data in analysis flyout

As with in-memory data tables before, in-database data tables are now easier to navigate because their columns are automatically categorized and grouped by Numbers, Categories, Currency, Time, Location, Identifiers and Binary. 

Data Access

Apache Drill support

Spotfire now includes a native connector for Apache Drill. It's available in the Connect to list in the Windows client Spotfire Analyst. The connector works in conjunction with the MapR Drill ODBC driver. Install the driver on your Windows client and on the Windows Server Spotfire Node Manager to activate this connector in the Spotfire on-premises platform.

Together with Dremio, this is the first connector that fetches the connection string details from DSNs and stores them in a Spotfire data source object. This enables you to use all the granular settings found in the DSN configuration UI.

Dremio Support

Spotfire now includes a native connector for Dremio. It's available in the Connect to list in the Windows client Spotfire Analyst. The connector works in conjunction with the Dremio ODBC driver. Install the driver on your Windows client and on the Windows Server Spotfire Node Manager to activate this connector in the Spotfire on-premises platform.

Together with Apache Drill, this is the first connector that fetches the connection string details from DSNs and stores them in a Spotfire data source object. This enables you to use all the granular settings found in the DSN configuration UI.

New connector list

The new way Spotfire lists connectors makes it easier to find the data source you would like to connect to. Each connector also has an updated help text which allows you to instantly find the details you need, for example, to activate a connector by downloading and installing the corresponding driver. The new list also saves clicks. If you select the wrong connector, the list is still available so that you can switch to the connector you intended to use.

In the Apache Drill image above, we can see how two existing data connections in the library are listed together with the option to establish a new connection. This encourages reuse of data connections that are already stored in the library, instead of connecting from scratch with each analysis created ad hoc.

The new list is also searchable from Spotfire's updated search function (see the Search section at the beginning of this article).

Data tools consolidated into one menu

All data-related menu items are now gathered under a Data menu.

Server and Administration

Scheduling Automation Services jobs

In the new Automation Services area in the administration interface, you can schedule Automation Services jobs, and monitor the activity of all Automation Services jobs that are run in your Spotfire environment.

The TERR service

The TERR service is now available as part of the Spotfire environment, along with the existing Web Player service and Automation Services.

Node managers can now be installed on Linux computers to run the TERR service; see Installing a node manager (RPM Linux) or Installing a node manager (Tarball Linux).

Improved scheduled update job handling

Scheduled update jobs that cannot be immediately run are now all queued on the Spotfire Server for distribution to Spotfire Web Player instances as they become available. This results in a more robust routing of jobs than previously, where each service maintained its own job queue after its maximum number of concurrent updates was reached. Administrators can still set the maximum number of concurrent jobs per service; for more information, see the concurrentUpdates setting in Spotfire.Dxp.Worker.Web.config file.

You can change how often the server deletes the scheduled job history by editing the configuration.xml file; see Changing how often the scheduled job history is cleared.

Node manager performance monitoring

At the DEBUG logging level, the node manager now produces a performance.monitoring.log file that is similar to the server log file with the same name.

New commands

create-scheduled-jobs creates scheduled Automation Services jobs from a local JSON file that is created by the administrator.

remove-config-property modifies the configuration.xml file to remove the value(s) of a specific configuration property.

Developer

Preferred aggregation method

An API to get or set the preferred aggregation method to be used by plot heuristics when creating aggregated expressions from a data column.

API reference: 

Default layers for map chart

An API to load default layers, for example a base map layer, TMS layer, or feature layer, when configuring a map chart.

API reference:

 

TIBCO Spotfire® 7.14

Spotfire® 7.14 contains support for cascading filters when working with external relational data, additional editing capabilities in the data table workflow, numerous improvements to our OLAP and big data connectors (SAP Hana, Oracle Essbase, Microsoft SQL Server Analysis Services), an all new Salesforce connector with support for federated authentication, and the ability to automatically set the coordinate reference system when importing Shape files. 

Note that Spotfire® 7.14 is a mainstream version. Fixes to critical issues discovered after the release will only be made to the most current version and to any long term supported versions. For more information on the difference between mainstream versions and long term supported versions, see the documentation.

Visual Analytics

Cascading filters in-db

Spotfire now also supports cascading filters when working with external data in relational databases. This means that it is easier to find specific values in, for example, list box filters, since the contents of one filter are affected by what is filtered out by other filters.

The behavior can be switched on per data connection by selecting the 'Enable cascading filters for in-database data tables for this connection' check box in the data connection settings (see the screenshot above). This works for all relational database connectors.

Set coordinate reference system from Shape file

Spotfire is now able to set coordinate reference systems automatically by recognizing the projection formats (.prj file) associated with Shape files.

More detailed, updated list of supported coordinate reference systems 

Spotfire now supports even more coordinate reference systems and provides more details for each of them, to help you select the correct CRS faster.

Data Wrangling

Insert rows, columns and data transformations before other nodes

With this release of Spotfire, you have access to yet another improvement of the data table editing workflow in the source view: the ability to add rows, columns and data transformations anywhere within an existing data table structure. This can save you days of data wrangling time when maintaining and developing analysis files. For example, you can now insert your sales data rows for the last month at the beginning of your data structure, before joins with other data. Previously, the risk was high that you would have to rebuild the data table from scratch to obtain the desired structure, or use workarounds like data table data sources (data table from current analysis).

In the example below, we have an existing analysis with a sales transactions data set (Video_Game_Sales_Numbers_0-8000) joined with a dimensions data set. The dimensions provide more information about each transaction. We also have two data transformations on the Added Columns node and one operation on the final data table.

Now, when more sales transactions come in over time, we would like to add them as rows to our data. With Spotfire 7.14 this is easy to do with the new editing capability, which allows us to insert rows (and columns and data transformations) where we need them to go. In this example, the new sales transactions rows must be added to the Video_Game_Sales_Numbers_0-8000 data source. There are two access points in the source view for this. The image below shows the first alternative, on the arrow that connects two nodes.

The image below shows the access point in the node operations list. In addition to inserting rows and columns, this access point can also be used to insert data transformations.

The end result is a final data table with more sales transactions. The already existing join with the dimensions table, the two data transformations and the final data table operation are all intact.

Data Access

A new and improved Salesforce connector

Native, self-service support for analyzing data from Salesforce was introduced in Spotfire 7.5. This release brings support for the two most frequently requested enhancements for this connector: federated authentication, and removal of the need for an ODBC driver. The new connector also uses the Salesforce bulk API for quick access to millions of Salesforce records. You are now also able to load more than 2000 rows from Salesforce reports. You may notice that the new connector lacks the .com extension in its name:

The new connector's data source UI has a blue link which is used to login with federated authentication:

Once the blue link is selected, your default web browser will open the Salesforce login page:

If your organization is using a custom domain you will use that option when logging in:

If you sign in with, for example, Google, you can select which account you are using:

Spotfire needs access to certain information to be able to load your Salesforce data:

As always when loading data from Salesforce, it is recommended to deselect all columns (available as a right-click option in the column list) and then pick only the columns you need for your analysis. Once done, define one or more prompts to limit data on, for example, State, as in the example below using the Account table.

You can always go back and add or remove columns later on. This is done from the Source view:

You can control whether users should be prompted when opening the analysis (or when reloading the Salesforce data) by selecting or clearing the Prompt for new settings before loading check box in the Data Table Properties:

For useful information about compatibility and using the updated connector to open Salesforce connections that you created in earlier versions of Spotfire, see https://community.tibco.com/wiki/compatibility-information-tibco-spotfirer-connector-salesforce.

Aggregate calculated measures in Oracle Essbase

With this Spotfire release, you can create visualizations that aggregate calculated Essbase measures. This was previously not possible because not all calculated measures are additive. However, if you know that you are working with additive measures, you can now configure your Oracle Essbase Spotfire data source to allow aggregated measures. In previous versions of Spotfire you would get an error message if you tried to aggregate calculated measures in a visualization:

In Spotfire 7.14 you can now allow aggregation of calculated measures by selecting the check box in the settings panel of the Data Selection in Connection dialog, as seen in the image below.

Once allowed, your visualization will display data as expected.

Import spatial objects with connectors for Oracle, Microsoft SQL Server, and PostgreSQL

The connectors for Oracle, Microsoft SQL Server, and PostgreSQL now support geographical data types. This allows you to connect to and extract geographical row level data into Spotfire's in-memory data engine with just a few configuration steps. The image below shows an example using all spatial data types in SQL Server:

The SAP HANA connector now supports all connection settings

In addition to the new connection timeout setting for SAP HANA you can now set any connection string parameter from within Spotfire, for example the fetch size.

Please note that it is not possible to enter properties already available in the dialog. If you, for example, try to enter a user name, you will be notified of this:

Microsoft Analysis Services command timeout support

You can now analyze more data and ask more complex questions in SSAS by raising the maximum MDX query timeout.

Microsoft Analysis Services username and password authentication support

In addition to Windows Authentication, you can now authenticate with username and password against your Analysis Services instances.

Microsoft Azure Analysis Services support

With the added support for username and password authentication you can now connect Spotfire directly to Microsoft Azure Analysis Services.

Amazon RDS SQL Server support in Cloud Business Author

TIBCO Cloud Spotfire and the Spotfire on-premises platform can now connect to Amazon RDS SQL Server data. This means you can store analysis files in the Spotfire (Cloud) library and let them query Amazon RDS SQL Server directly from the web-based clients, Spotfire Business Author and Consumer. You use the Microsoft SQL Server connector to connect to Amazon RDS SQL Server.

A new connector query log

As a Spotfire administrator, you probably use Spotfire's user action logs for an overall view of the queries generated by the Spotfire ecosystem towards your data sources. In addition to this, you might also be asked to investigate certain query-related issues reported by end users. An example could be visualizations that "take forever" to render. In this case, the user action logs might not be the ideal tool to work with, as they only provide a view of historical data and not a real-time view of currently running queries.

With this release of Spotfire, you now have access to a query log dedicated to connectors. By loading the log into Spotfire you can locate the MDX/SQL query in question, and copy it and run it in your favorite database tool. This allows you to instantly determine whether it is the complexity of the Spotfire visualizations that needs to be adapted to better suit the data engine of the data source, or, whether you should ask the DBA to tune the database.

The log file collects queries from Spotfire Analyst, Node Managers and Automation Services. Each row in the log represents a query, which was generated from a data connector running on the Spotfire instance and sent to an external data source. By default, the logging level is set to OFF.

  • Level: The logging level.
  • HostName: The name of the computer running the Spotfire service.
  • TimeStamp: The date and time, in the local time of the computer running the service, when the query was generated in Spotfire.
  • UTCTimeStamp: The date and time, in UTC, when the query was generated in Spotfire.
  • QueryId: The unique identifier of the query, as assigned by Spotfire.
  • UserName: The Spotfire username of the logged-in user.
  • Status: Specifies whether the query succeeded, failed, or was canceled by the user.
  • DurationMs: The amount of time, in milliseconds, that the query took to execute in the external data source.
  • RowCount: The number of rows in the query result.
  • ColumnCount: The number of columns in the query result.
  • DataSourceType: The type of Spotfire connector that was used in the connection.
  • DatabaseServer: The URL or IP address of the server of the external data source.
  • Database: The name of the database in the external data source.
  • DatabaseUser: The database user that was used to log in to the external data source.
  • Analysis: The name of the Spotfire analysis file.
  • Visualization: The name of the visualization in the analysis that generated the query.
  • Operation: The type of operation that generated the query.
  • DataSourceInfo: Connector-type-specific information regarding the data connection.
  • Parameters: Any parameters in the query.
  • QueryString: The full query string sent from Spotfire to the external data source.

As always in Spotfire, logging is controlled from the Help menu > Support Diagnostics and Logging:

Go to the Logging tab and select the DEBUG or TRACE log level. Notice the path to where your log file is stored because we will open the log file in Spotfire and analyze it later on. The log file is named Spotfire.Dxp.QueryLog.log.

Go to File > Add data tables > Add > File... and select your log file. You will then see the Import Settings dialog:

Go to the Advanced settings and select Allow newline characters in quoted fields:

Once data is loaded you can visualize, for example, the number of times each query has been pushed to the underlying data source:

If you add the log file data table to an existing analysis you can analyze queries while you are using your analysis file:

 

Administration

Nodes & Services

The following updates have been made to the Nodes & Services app on the Administration page:

On the "Resource pools" page, when adding instances to a resource pool, the dialog now shows the total number of existing instances and the name of the resource pool.

The "Untrusted nodes" page now includes port information for untrusted nodes.

Scheduling & Routing

The following updates have been made to the Scheduling & Routing:

The CLI command config-scheduled-updates-retries has a new option, stop-updates-destination-unavailable. Using this option, you can indicate whether scheduled updates should be retried if the destination is offline or unavailable. By default, this option is set to "true", so scheduled updates are not retried when the destination is unavailable.

When creating a rule, if you do not first enter a rule name, the Rule name field is auto-populated with the name of the file, group, or user that you select. You can then edit the name as you see fit.

For further information on changes in functionality and a list of items that will be deprecated, please see the Spotfire Server 7.14 release notes.

 

Developer

JavaScript API: New authentication mechanism supports external/web authentication

It is now possible to use the JavaScript API on a Spotfire Server that is configured with any external/web authentication. For example, you can now create a mashup with a .dxp file located in the TIBCO Cloud Spotfire library.

The code sample below shows a simple mashup and illustrates the differences that come with 7.14 compared with previous versions:

<html>
<head>
    <meta charset="utf-8"/>
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <title>Simple mashup example</title>
    <script src="https://spotfire.tcsdev.tcie.pro/spotfire/js-api/loader.js"></script>
</head>
<body>
    There are three changes you need to make to previous JS-API tools.
    <ol>
        <li>Change the script src to https://spotfire-environment.example.com/spotfire/js-api/loader.js</li>
        <li>Change the script to use the new spotfire.webPlayer.createApplication API</li>
        <li>Create the callbacks onReadyCallback, onError, onCreateLoginElement</li>
    </ol>

    <div id="renderAnalysis"></div>
</body>
<script>
    var app;
    var doc;
    var webPlayerServerRootUrl = "https://spotfire-next.cloud.tibco.com/spotfire/wp/";
    var customizationInfo = { showToolBar: false, showStatusBar: false, showPageNavigation: false };
    var analysisPath = "/Samples/Expense Analyzer Dashboard";
    var parameters = '';
    var reloadInstances = true;
    var apiVersion = "7.14";
 
    // This is basically an asynchronous version of the spotfire.webPlayer.Application constructor
    spotfire.webPlayer.createApplication(
        webPlayerServerRootUrl,
        customizationInfo,
        analysisPath,
        parameters,
        reloadInstances,
        apiVersion, // New. String specifying the api version. Should perhaps be optional with latest as default.
        onReadyCallback, //New. Callback with signature: function(response, app)
        onCreateLoginElement // New. Optional function reference to create a custom login element wrapper.
        );
 
    function onReadyCallback(response, newApp)
    {
        app = newApp;
        if(response.status === "OK")
        {
            // The application is ready, meaning that the api is loaded and that the analysis path is validated for the current session (anonymous or logged in user)
      console.log("OK received. Opening document to page 0 in element renderAnalysis")
            doc = app.openDocument("renderAnalysis", 0);
        }
        else
        {
            console.log("Status not OK. " + response.status + ": " + response.message)
        }
    }
 
    function onError(error)
    {
        console.log("Error: " + error);
    }
 
    function onCreateLoginElement()
    {
        console.log("Creating the login element");
        // Optionally create and return a div to host the login button
        return null;
    }
</script>
</html>

 

API to insert data operations

It is now possible to add data operations (AddRowsOperation, AddColumnsOperation or DataTransformationOperation) to any location within the data table structure (SourceView).

TIBCO Spotfire® 7.13

Spotfire 7.13 introduces a new automatic responsive page layout feature that adapts Spotfire pages to fit smaller screens. This makes it easier for authors to create Spotfire dashboards and applications that work well on any device, whether it is a desktop, laptop, tablet or phone. In addition, this release continues to improve self-service data wrangling by making the Add Columns (join) feature available in the web authoring client (Spotfire Business Author), and by making it possible to edit the add columns and on-demand settings from the visual data source view. Administrators and power users may appreciate the new ability to easily run more than one version of Spotfire in parallel, for example, during an upgrade process. For developers, the much requested API to trigger Automation Services Jobs is now available.

Note that Spotfire® 7.13 is a mainstream version. Fixes to critical issues discovered after the release will only be made to the most current version and to any long term supported versions. For more information on the difference between mainstream versions and long term supported versions, see the documentation.

Visual Analytics

Responsive page layout for mobile devices

The Spotfire page layout is now responsive, so that when a page is viewed on a small device like a phone, the layout reorganizes to suit the screen. The responsive layout enables vertical scrolling if the page is too large to view on the screen directly. This means that Spotfire analysis files can now easily be used on any device, whether it is a desktop computer, a laptop, a tablet, or a phone.

The responsive behavior is automatic by default, but can be configured and enabled/disabled by the author of the analysis.

Specifically, when right-clicking a page tab and selecting "Page layout options", there is a threshold value for the screen width; if the screen is narrower than this threshold, then the layout reorganizes to a vertically stacked layout with scroll. The threshold value can be configured per page, and the default value is configurable through a server preference.

This video illustrates how it works by showing the behavior when resizing the browser window.

 

More responsive marking in OLAP and big data visualizations

Big data visualizations using live queries, especially towards large OLAP data sources like SAP BW and Oracle Essbase, are now much quicker to respond to marking changes you make.

When you mark a selection of data points in a visualization representing billions of rows of data, it can sometimes take a while for the analysis to refresh the visualizations in the analysis file. Spotfire now computes the marking column used behind the scenes in its in-memory engine, thus improving the marking performance significantly.

Spotfire does this by splitting live visualization and live marking queries into two separate queries. The two query results are then joined in the fast in-memory data engine. This allows faster query execution in the external database or cube, allowing visualizations to refresh significantly faster and adapt to marking changes. This feature allows Spotfire end users to do more data discovery and find insight faster, as big data visualizations will be refreshed more quickly and be more responsive, especially when using Spotfire brush linking between many visualizations and different markings.

Aggregations are done once, and then the marking is applied. Before, the aggregations were done once again after marking. This saves one aggregation calculation that could potentially be expensive. The result is visualizations that refresh more quickly.

The image below shows a comparison of a SAP BW analysis used in Spotfire 7.13 and in the previous release, Spotfire 7.12. In many cases, you should expect fewer and shorter-running queries when marking data:

This video shows a comparison of a SAP BW analysis used in Spotfire 7.13 and in the previous release, Spotfire 7.12. In many cases, you should expect fewer and shorter-running queries when marking data.

 

Auto-zoom for zoom-sliders that are at the end of the range

Visualizations with zoom-sliders that are "open", that is, when the sliders are at the end of their range, now auto-zoom when the data changes (for example, when filtering). If you have adjusted a zoom slider so that it is not at the end of the range, the zoom slider will keep its position even when filtering.

If a zoom slider is set to a position which is less than the full range, and you move it back towards the end of the range, then the zoom slider will once again be "open", and thus will adjust the zoom when data changes through filtering, data reload or different kinds of data limiting.

See this video for an illustration of how it works.

 

Color picker in Spotfire Analyst

Spotfire Analyst now has a color picker that makes it very easy to create analyses that follow the corporate color scheme, the color scheme of a website or similar. Just click the picker icon under the colored squares (see screenshot below), then pick the color you want from anywhere on your screen, and use it in the custom theme editor or in the color axis of the visualizations.

 

Data Wrangling

Join data in the Spotfire Business Author web client

Adding data to your analysis by inserting columns (joins) is a core part of data preparation. With this release of Spotfire, you have the option to add columns to a data table using the Spotfire Business Author web client, in addition to using the Windows client, Spotfire Analyst. This means that more users will be able to do their visual data discovery directly in their web browser, without having to install the Windows client. It also means that if you discover issues with your joins while using the web client, you can instantly fix them, without first switching to the Windows client.

A new web-based user interface, based on the recently added Add rows feature in Spotfire Business Author, makes it really easy to understand the different join methods through illustrations. The result is previewed before finishing the operation. The user interface is smart in the sense that it shortlists and recommends ID and categorical columns that share name and data type. This makes it quicker to find the columns you are most probably looking for.

In the image below, the F1 race calendar has been loaded into the web client TIBCO Spotfire Business Author. The data set contains all race locations, but wouldn't it be nice to also include a link to the details of each circuit? With this new feature you can quickly add more columns to your data table. In this example, a column with links to the Wikipedia page of each circuit is added.

The access point for adding columns is found by expanding the data panel and displaying the Data table view.

In this example, the additional data was stored as a data extract file in the Spotfire Cloud library.

Once the new data is loaded you will see a new Add columns dialog. It's based on the Add rows dialog released in Spotfire 7.7, but adapted for joining data.

Note that the new dialog provides the following features that will help you create a successful join:

The recommendation engine of Spotfire has already defined three column matches for the join, based on data heuristics.

You can add and remove column matches and get a live preview of the result directly in the dialog.

The preview includes color coding of columns. Light blue indicates existing columns, dark blue indicates matched columns, and medium blue indicates the added columns.

The Number of input rows for the preview can be changed. This is useful, for example, when you need more rows to get a representative preview of your matched column values.

In this image, two out of three suggested column matches have been removed, and the preview is updated accordingly.

The Columns from new data section lets you specify which columns to add or exclude.

In the image below, you can see that the default join gives duplicate columns for Circuit and Locality. This can be addressed by simply excluding these columns under Columns from new data when doing the match (or by editing the Added columns node afterwards).

The preview also shows that USA in the existing data table (to the left of the dark blue join column) is not present in the new data table (to the right of the dark blue join column). Also, United States of America in the new data table is not present in the existing data table, which results in new rows being added to the final data table. You can work with the Number of input rows setting to find mismatches like these between the two tables. You can also change the join type to achieve a join result better aligned with what you need. If you would like to transform your values to get a better match, you can do so in the Spotfire Analyst client by joining on a calculated column created with the Calculated Column data transformation. You can of course also modify the values directly in the data source, for example in an Excel sheet, if that is what you prefer.

In this example, we are only interested in adding the column containing Wikipedia links for each circuit. Therefore, the other available columns from the new data have been excluded, as seen below.

The Join settings section displays the new preview of the Spotfire join types. Select a join type and hover with the mouse pointer over the join type example to see how each join works.

With the new column added, you can click on the link for a circuit in a Spotfire table visualization and read more about the circuit on Wikipedia.

 

Edit joins from the source view

The TIBCO Spotfire data wrangling vision to edit everything continues, and with this release you can edit previously specified Add columns (Join) operations. This makes it really easy to adapt your analysis files to changes in your data sources over time. Broken joins can easily be fixed and thanks to the built-in smart indications in the Spotfire source view, you will instantly see when there are issues with your Add columns operations that need to be addressed. A new web-based user interface, based on the recently added Add rows feature in Spotfire Business Author, makes it easy to understand the different join methods through illustrations, and the result is previewed before finishing the edit.

The image below shows the gear icon on the Added columns node, which is the entry point for editing a join.

The new dialog makes it easy to, for example, change the join type as needed.

Edit on-demand settings on data source level

With this release of Spotfire, you can quickly edit on-demand settings for each individual data source in a data table. The setting is available in the source view once you have selected a data source node, as seen in the image below.

Previously, the on-demand setting applied to the final data table, but with this release of Spotfire you can control on-demand settings for each source.

The image below shows the entry point for the setting on a data connection.

This new feature also allows you to switch to on-demand data loading even though it was not specified as such from the beginning. The image below shows the On-Demand Settings dialog, which allows you to switch from the All data at once mode to the Data on demand mode.

Data Access

Cloudera Impala query timeout setting

With this release of Spotfire, a timeout setting has been added to the Cloudera Impala connector. This means that you can allow Impala queries to run for longer, for example, so that running queries can complete when you are extracting result data sets into the in-memory data engine of Spotfire.

Amazon EMR support

TIBCO Cloud Spotfire and the Spotfire on-premises platform now support Amazon EMR via Hive and Apache Spark SQL.

This means that you can store analysis files in the Spotfire (Cloud) Library and query Amazon EMR directly from the web-based clients Spotfire Business Author and Consumer.

Use the Hortonworks connector of Spotfire Cloud Analyst and the ODBC driver for Hive from Cloudera to connect to EMR Hive.

Use the Apache Spark SQL connector of Spotfire Cloud Analyst and the TIBCO ODBC Driver for Apache Spark SQL to connect to EMR Spark SQL.

Apache Spark SQL support in TIBCO Cloud Spotfire

TIBCO Cloud Spotfire now supports Databricks Cloud and Apache Spark SQL.

This means that you can store analysis files in the TIBCO Cloud Spotfire library and query Databricks Cloud and Apache Spark SQL directly from the web-based clients Spotfire Business Author and Consumer.

Use the Databricks ODBC driver to connect to Databricks and use the TIBCO ODBC Driver for Apache Spark SQL to connect to generic Apache Spark SQL.

Both drivers are used with the Apache Spark SQL connector of Spotfire Analyst.

Microsoft HDInsight Hive support in TIBCO Cloud Spotfire

TIBCO Cloud Spotfire now connects to Microsoft HDInsight via Hive.

This means that you can store analysis files in the Spotfire Cloud Library and let them query Microsoft HDInsight directly from the web-based clients Spotfire Business Author and Consumer.

Use the Hortonworks connector of Spotfire Cloud Analyst and the ODBC driver for Hive from Cloudera to connect to Microsoft HDInsight Hive.

Administration

Work with multiple versions of Spotfire Analyst

The Spotfire deployment mechanism now supports both upgrading and downgrading of the installed Spotfire client when you connect to a server (and a specific deployment area). This makes it easier to work with multiple Spotfire versions at the same time.

Server Database Support for Oracle 12c

Spotfire now supports Oracle 12c as the server database.

Developer

REST API to run Automation Services jobs

It is now possible to trigger execution of Automation Services jobs from an external application using a REST API. A job can either be stored in the Spotfire library or passed as an argument. The API uses an OAuth2 based authentication/authorization mechanism.

See REST API Reference for more details.
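
As a rough illustration, the sketch below shows how an external application could call this API from Node.js (version 18 or later, where fetch is built in) using the OAuth2 client credentials flow. The server URL, client id and secret, library path, scope name and endpoint paths are assumptions made for illustration only; verify the exact endpoints and scopes against the REST API Reference for your Spotfire version before relying on them.

// Minimal sketch, not a definitive implementation. The endpoint paths, scope name,
// client credentials and library path below are assumptions for illustration.
const server = "https://spotfire.example.com";   // hypothetical Spotfire Server URL
const clientId = "my-api-client";                // hypothetical registered API client
const clientSecret = "my-api-client-secret";

async function runLibraryJob(jobPath) {
    // 1. Request an OAuth2 access token using the client credentials grant.
    const tokenResponse = await fetch(server + "/spotfire/oauth2/token", {
        method: "POST",
        headers: {
            "Content-Type": "application/x-www-form-urlencoded",
            "Authorization": "Basic " + Buffer.from(clientId + ":" + clientSecret).toString("base64")
        },
        body: "grant_type=client_credentials&scope=api.rest.automation-services-job.execute"
    });
    const token = (await tokenResponse.json()).access_token;

    // 2. Trigger the Automation Services job stored in the Spotfire library.
    const jobResponse = await fetch(
        server + "/spotfire/api/rest/as/job/start-library?path=" + encodeURIComponent(jobPath),
        { method: "POST", headers: { "Authorization": "Bearer " + token } }
    );
    console.log(jobResponse.status, await jobResponse.text());  // job status/identifier
}

runLibraryJob("/Automation Jobs/Nightly Sales Report").catch(console.error);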

OAuth2 based authentication for the Web Service (SOAP) API

The Web Service (SOAP) APIs (LibraryService, UserDirectoryService, UpdateAnalysisService, InformationModelService, LicenseService and SecurityService) now use an OAuth2-based authentication/authorization mechanism. This means that the API client only needs to support a single authentication method that works with any Spotfire Server authentication configuration.

See Web Services API Reference for more details.

Simplified workflow when building Spotfire .NET extensions

With this release comes an updated and simplified procedure for building .NET extensions for Spotfire. The package building functionality is now integrated with Visual Studio®. Templates are provided so that the configuration needed for a third party developer is kept to a minimum.

See TIBCO Spotfire Developer Documentation for a tutorial on how it works.

Spotfire distribution files

With this release, it is possible to ship a bundled solution containing several Spotfire packages as a single distribution file (.sdn).

See TIBCO Spotfire Developer Documentation for more details.

 

TIBCO Spotfire® Analytics for Apple iOS 2.10

Like the Android App, the Spotfire Apple iOS App now lets you temporarily maximize a single visualization on the screen, and then restore the original view. When viewing a Spotfire page using the iOS App, you can now double tap a visualization to view it full screen. By swiping right or left, the view shifts to the next visualization on the page, and double tapping in the maximized mode returns the view to the original full page layout. Read more at the Spotfire Mobile What's New page.

TIBCO Spotfire® Analytics for Android 1.1

TIBCO Spotfire Analytics for Android version 1.1.0 introduces touch gesture support for temporarily maximizing a visualization and then restoring the original layout when used together with Spotfire 7.12. When viewing a Spotfire page using the Android App, the user can now double tap a visualization to view it full screen. By swiping right or left, the view shifts to the next visualization on the page, and double tapping in the maximized mode returns the view to the original full page layout. Also, the Android App now has broader support for authentication methods. Read more at the Spotfire Mobile What's New page.

TIBCO Spotfire® 7.12

Spotfire® 7.12 brings a whole new export capability to Spotfire: the ability to prepare professional PDF reports based on the visualizations and pages in the Spotfire DXP file. The prepared PDF reports can easily be exported by end users, or exported and distributed automatically through TIBCO Spotfire® Automation Services. In addition, Spotfire 7.12 adds important improvements in Visual Analytics, such as fixed height/width visuals and text areas and the ability to maximize a single visual and then restore it to its original size, and it also continues on the Data Wrangling track of "Edit everything" by introducing the ability to remove individual data sources. View the What's new webcast at: https://www.tibco.com/resources/demand-webinar/whats-new-tibco-spotfire

Note that Spotfire® 7.12 is a mainstream version. Fixes to critical issues discovered after the release will only be made to the most current version and to any long term supported versions. For more information on the difference between mainstream versions and long term supported versions, see the documentation.

Export prepared reports to PDF

Users can now create and save one or more prepared reports that contain the settings for how to export a Spotfire DXP file to PDF. It is now possible to select pages or individual visualizations from the Spotfire DXP file and mix them into a PDF report, if needed using repetitions over filter values or bookmarks, and exporting all rows from tables or trellised visualizations.

 

The reports are named and stored inside the DXP and can very easily be exported by users of web or installed clients by selecting the prepared report.

 

The prepared reports may contain selected pages and individual visualizations from the DXP file. 

Note that it is possible to select an individual visualization and export it to its own page in the PDF. This is especially useful for Tables and "Trellis by pages" visualizations, where it is possible to export all rows/pages, even those not visible on the screen.

 

Repeat pages or visualizations per filter

In a static report, there is sometimes the need to repeat a page or visualization several times, for each value of a column. For example, if we are looking at sales data for some stores of different types, it may be interesting to first look at total sales per store type for the whole country, then have the same visualization repeated and filtered to contain data for only one sales region at a time.

 

Repeat pages or visualizations per bookmark

As with filters, it is sometimes necessary to export pages or individual visualizations with the settings of a specific bookmark that applies, for example, a marking or a specific filtering that cannot be expressed as a simple repetition over categorical values.

Header and footer

The header and the footer are very flexible and consist of, in total, 8 different fields. At the top and the bottom of the page there are the left, center and right header/footer fields. Below the top header fields there is a field for additional information, and there is an additional field above the three footer fields. It is easy to configure the font, size, color and background color of the text in each of the fields, which makes it possible to give the report a professional look and feel. All fields are optional and can contain selected content such as free text, page number, date, report name, page title, etc., but they can also contain dynamic information, such as the value of a filter or bookmark used to repeat the current page.

 

Example report with header and footer

 

Exporting a prepared report from the menu

When one or more prepared reports are available in a DXP file, it is easy for an end user to do an export. The user just needs to access the Export menu and select PDF, then choose which of the prepared reports to export and then click Export.

Exporting a prepared report from a text area button or link

An Author can make it even easier for Consumers to export a prepared report by providing a text area button or link that triggers the export of a specified prepared report. This uses a new action available in Text Area Action Controls that lets the Author create a button, link or image and select a corresponding Prepared Report, which is exported when a user clicks the control.

Export prepared reports in Automation Services

It is now possible to export a prepared PDF report through TIBCO Spotfire Automation Services. The Export to PDF task in the Automation Services job builder now lets the user select a prepared report from the open DXP file and export it to a defined location.

 

Export to PowerPoint and Export to Image

The Export to PowerPoint feature now uses the visual theme when exporting, and also uses a higher resolution making the visualizations and text clearer.

When exporting a visualization to an image, the image now uses the visual theme of the analysis and is exported in a higher resolution than before. There are also right-click context menu options to export an image to a .png file, and to copy the visualization image to the clipboard so it can be pasted into a document or presentation.

Visual Analytics

Maximize a visualization and then restore to original size

It is now possible to temporarily maximize a visualization, and then return to the previous page layout again. This makes it easier to take a closer look at a particular visualization without distorting the original visualization layout on the page. In addition, in the maximized view there are controls that let you rotate the view between all the visualizations on the page. To see it in action, take a look at this video.

The visuals can be maximized and restored from the expanding arrows button in the visualization's title bar, or from the visualization's right-click context menu.

 

Fixed height or width of text areas and visualizations

Spotfire now lets you fix the width or height of a text area or visualization in certain positions on the page. For example, you can easily lock the height of a text area located at the top or the bottom of the page, or the width of a text area at the left or right edge of the page. The same capability exists for all visualizations, but an important use is to prevent text areas from getting scroll bars when viewed on small screens.

In the illustration below, you can see a dashboard on a big screen in the background and, in the foreground, the same dashboard on a smaller screen; the top and left text areas still retain their height and width, respectively.

To lock the height or width, click the lock icon in the toolbar. All splitters between visualizations light up, and a lock icon appears wherever you are able to fix the width or height. On the lock icon, a small menu shows the available options for locking the height or width.

The screenshot below shows that the top text area has a locked height from the top of the page, and that the user has clicked the splitter for the left text area and is presented with the option to lock the width of the text area to the left of the page.

 

Improved scrolling on mobile devices

In Spotfire web clients, it is now possible to use two-finger scroll on mobile devices. This is useful when viewing a page which is larger than the screen of the device and looking at a visualization with its own scroll capability, for instance a table or a map. By using two fingers to scroll you can make the page scroll rather than the visualization. When using one finger, the visualization itself scrolls. For visualizations that do not have internal scroll there is no difference; both types of scrolling will scroll the page. In addition, the scrolling on pages on mobile devices now is generally smoother (known as "Momentum" on iOS).

Increased number of labels on scatter plots and maps

The maximum limit for the number of labels displayed in a scatter plot or a map chart has been increased. You can now use up to 3000 labels in your visualizations (compared to the previous limit of 200). This is mainly useful when having centered labels for data where markers do not overlap significantly.

 

WMS Layer Authentication

WMS (Web Map Services) are usually public and do not require authentication, but it is now also possible to add map layers to Spotfire from WMS services that do require a username and password.

 

Data Wrangling

Remove individual data sources

The latest data wrangling and editing capability in Spotfire is the ability to remove individual data sources using the source view.

The new trash can button (as seen below on the POSDetails.csv file data source node) allows you to remove a node in the source view tree.

In the two images below, notice how Spotfire provides you with a preview of the result as you hover with the mouse pointer over the trash can icon on a node. This makes it easy to see what the result will be after the node has been removed. You can of course also use Undo, should you change your mind after removing a node.

In the image below, we have removed the POSDetails.csv data source, which means that SalesTransactions from Teradata is now the only data source providing input to our final data table. Notice how the Add columns operation is removed as well as the data source node.

In the image below, notice that Spotfire provides a warning if you remove the first data source in the source view:

Smart movement of data transformations when removing data sources

You may wonder what happens when you remove a data source that has data transformations applied to it. Spotfire has a smart way of managing this.

Below is an example of a source view with transformations on nodes.

When you hover over a node's trash can icon, you will notice that the badge indicating the number of data transformations might be dimmed during the preview. If the transformations will be removed together with the node, the badge is dimmed. If the transformations will remain after the node is removed, the badge is not dimmed. Spotfire automatically moves the data transformations to the appropriate nodes whenever possible.

For example, if the added rows in the image below are removed, the data transformation previously applied on the added rows node will remain.

The result is a single data source with 3 + 1 = 4 data transformations, as seen in the image below.

Another example of when Spotfire's smart movement of data transformations is valuable is when you remove the first data source but want to keep only the columns added in the Added columns operation. You can achieve this by adding an Exclude columns data transformation before you remove the first data source. In the example below, the first data source is removed and the Exclude columns data transformation is kept on the remaining data source.

The data source information now lists added and ignored columns

When you join data tables in Spotfire using the Insert Columns feature, Spotfire stores information about which columns you chose to ignore from the new data table. These ignored columns have also been listed in the source information details in the source view and in the data table properties dialog.

With this Spotfire release, the columns you chose to add are listed together with the ignored columns. This is particularly useful when working with complex data that involves many data transformations and multiple Insert Columns operations.

In the image below, the new Added columns section is shown in the Information pane. Seven columns are added and five columns are ignored.

The image below shows that all columns but one have been ignored. Only the Customer column was added.

Delete multiple data tables at once

This is a small but time-saving feature for cleaning up data tables you no longer need in an analysis file. From the Edit > Data Table Properties dialog it is now possible to select multiple data tables and delete them all at once. Simply select multiple data tables by shift-clicking them in the list of data tables and then click the Delete button.

Data Access

The Teradata connector now supports macros

With Spotfire's connector for Teradata, you can now browse and connect to data represented by Teradata macros. A macro in Teradata lets you name a set of one or more SQL statements, which provides a convenient shortcut for executing groups of frequently run SQL statements in Teradata.

The image below shows how Spotfire presents Teradata Macros as stored procedures in the Views in Connection dialog.

The Apache Spark SQL connector now supports temporary views

As a Spark developer you might publish results of Spark jobs in temporary views. With this release of Spotfire, you can connect directly to those views using Spotfire's self-service live query connector and instantly visualize the result.

The image below shows that temporary views are displayed under a separate category in the list of available tables in the Views in Connection dialog.

Apache Spark SQL temporary views and tables in custom queries

If you are creating a custom query and you want to use data from an Apache Spark SQL temporary table or view, you must refer to those objects using their qualified names, specifying both the name and the location of the object. The qualified names required have the following format: 

databaseName.tempViewName

By default, global temporary views are stored in the global_temp database. The database name can vary, and you can see it in the hierarchy of available database tables in Spotfire. To select all columns from a global temporary view named myGlobalTempView, that is stored in the global_temp database:

SELECT * FROM global_temp.myGlobalTempView

Temporary views/tables (listed in Spotfire under ‘Temporary views’ or ‘Temporary tables’) are always located in the #temp database. To select all columns in a temporary view named myTempView: 

SELECT * FROM #temp.myTempView

 

Developer

Export to PDF with the Spotfire .Net API

It is now possible to export a prepared PDF report with the Spotfire .Net API, which makes it possible to include PDF report exports in custom workflows built with C# extensions or IronPython scripts.
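As a rough illustration only, the IronPython sketch below shows how such an export could be wired into a script. The method name ExportToPdf, its parameters and the report name "Monthly Report" are placeholders, not confirmed members of the Spotfire .Net API; consult the API reference for the actual classes and signatures.

# Hypothetical sketch: the export call and the report name are placeholders,
# not verified Spotfire .Net API members.
from System.IO import File

stream = File.OpenWrite("C:/Temp/MonthlyReport.pdf")   # destination for the exported PDF
Document.ExportToPdf("Monthly Report", stream)         # placeholder for the new export entry point
stream.Close()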

Export to PDF with the Spotfire JavaScript API

It is now possible to export a prepared PDF report with the Spotfire JavaScript API, so that you can take advantage of Spotfire's PDF report capabilities in a web mashup environment.

In addition, there is an option in the JavaScript API to launch a dialog for exporting to PDF without having a prepared report. This option now has the same capabilities as in the Spotfire clients: the dialog provides a preview, the exported visualizations use the visual theme in the analysis, and the exported PDF is of higher graphical quality.

API to render pages and visualizations to PNG images

The Spotfire .Net API has new capabilities for rendering pages and visualizations. The resulting PNG images use the visual theme, and the API includes settings to adjust the resolution as well as the visibility of visual attributes such as annotations, axis labels, legend and title.

This API is useful when creating export tools to support customized layouts or output formats.

API to remove data operation

The Data Table Source View API (introduced with the 7.10 release) now lets you remove individual data sources or other operations, such as add columns operations or add rows operations.
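The sketch below hints at how this could look from IronPython, building on the source view pattern shown elsewhere in this document (GenerateSourceView and GetAllOperations). The table name "Sales", the AddRowsOperation type and the RemoveOperations call are assumptions made for illustration, not verified signatures; check the API documentation for the exact members.

# Sketch only: 'Sales', AddRowsOperation and RemoveOperations are assumptions
# used for illustration; verify the exact types and method names in the API reference.
from Spotfire.Dxp.Data import *
from Spotfire.Dxp.Data.DataOperations import *

table = Document.Data.Tables["Sales"]                      # assumed data table name
sourceView = table.GenerateSourceView()                    # overview of the table's operations
addRows = sourceView.GetAllOperations[AddRowsOperation]()[0]
sourceView.RemoveOperations(addRows)                       # placeholder for the new remove capability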

API to maximize a visualization

The Spotfire .Net API now supports the ability to temporarily maximize a visualization, and then return to the previous page layout again.

API to set WMS layer username and password

The Spotfire .Net API now supports configuring a WMS layer with a username and password.

 

TIBCO Spotfire® Data Catalog 5.5.0

Spotfire® Data Catalog 5.5.0 introduces support for the new add-on licenses Spotfire® Data Catalog Language Packs. The language pack licenses extend the rich capabilities for content analytics to include support for a wide range of languages other than English. The extended language support is distributed as five separate language pack licenses covering languages from distinct regions.

In addition to this and many other content analytics improvements, this new release brings an overhaul of the SAIL search interface, and Spotfire Data Catalog now leverages machine learning to enable higher quality search results. Read more about Spotfire Data Catalog on the product wiki page.

TIBCO Drivers 1.4.0 

Version 1.4.0 of TIBCO Drivers is now available! This release brings you updated and improved versions of the included data source drivers for Apache Spark SQL, Salesforce, MongoDB and Apache Cassandra.

TIBCO Spotfire Analytics for Android® 1.0

Users of Android devices can now also benefit from a native app for their device. Similar to the Spotfire App for Apple iOS, the Android App makes it easier to keep track of business facts or monitor business performance from anywhere using your tablet or phone. Read more about the Spotfire Android App at the Mobile what's new page for Spotfire. Download the app from the Google Play Store.

TIBCO Spotfire® 7.11 LTS

Spotfire 7.11 brings highly requested improvements in data wrangling, cross tables, tables and maps, and it also makes the life of the Spotfire Administrator easier through improvements in scheduled updates and management of multiple Sites. Developers and application builders will enjoy the upgraded IronPython engine that now supports the latest (2.7.7) IronPython version.

In addition to the new features, Spotfire 7.11 has been designated as a Long Term Support (LTS) version.  LTS versions are typically supported for up to 36 months from release. For LTS versions, defect corrections will typically be delivered as hotfixes or service packs while for regular releases they will be delivered in subsequent releases.

Visual Analytics

Calculate the subtotals and grand totals in cross tables based on the aggregated values displayed in the cells

It is now possible to configure the cross table to calculate subtotals and grand totals based on the aggregated values visualized in the table, as an alternative to calculating them from the underlying row-level data. This is useful, for example, when you want to visualize the sum of the absolute values of the categories displayed in the table.

In the screenshot above, you can see the Properties dialog where you can select, for each column, whether to calculate the subtotal and grand total on the underlying row values, or as the sum of the values displayed in the cross table cells. This is useful, for example, when one wants to compare the sum of absolute values in the subtotals.

Conditional color of the text in tables and cross tables

It is now possible to color the text in tables and cross tables through color rules, as an alternative to coloring the cell background. This provides more freedom in the visual expression of the tables.

Search and zoom to a location

You can now search for a geographic location on the map and quickly zoom in to its geographic area. When you start typing a location name, Spotfire suggests locations you can select to zoom to on the map.

Switch data table now keeps the visualization configuration

Visualizations now keep their configuration when you switch to another data table, provided that the new and the old data table include the same columns. This saves time when switching back and forth between tables with the same structure.

Data wrangling

Replace Data source

As a Spotfire user, you are used to working with multiple data sources mashed together, to provide more answers from your data. With this release of Spotfire, you can easily replace one of those data sources with another data source, without compromising the data wrangling and data mashup you have done.

Example: Going from test to production

The picture below shows the source view in an analysis file. Three data sources are used and mashed together using Insert Columns (joins).

The first data source is a linked data table containing sales sample data, stored in a local Spotfire Binary Data Format file (SalesOrderDetailSample.sbdf).

By working with an alternative, local data source, you can develop an analysis file without access to the production data source. This is convenient, for example, when working off-site, or when you have work in progress that you do not want to introduce into your production environment (for performance or other reasons).

Once you are ready to switch to the production data source, you can access the new replace data source feature from the data source menu in the source view:

The picture below shows the new Replace Data Source dialog. In this example, we select to switch to the corresponding data table in Microsoft SQL Server.

In the image below, the sample data source has been replaced. The data source type is now a data connection instead of an sbdf file.

Add transformation to existing data source

In addition to the capability to replace data sources, this release of Spotfire also enables you to add data transformations to existing data sources. Previously, data transformations could only be added when creating a new data source or when editing data transformations already part of the data source.

There are certain situations when it's beneficial to attach transformations to data sources. The benefits are based on the fact that Spotfire doesn't save the original data in the analysis file, only the transformed result.

Let's assume you prefer to store a copy of your data in your analysis file, so that it is available offline and you can decide when a reload is needed. Let's also assume that you load 200M rows into Spotfire and then define a pivot data transformation to reduce the size of the data table. Having the pivot data transformation as part of the data source means that only the pivoted result table is stored, and the 200M original rows are discarded. This dramatically reduces the size of the analysis file. If the transformation were performed as a separate step, the original 200M rows would be stored.

This will also reduce the loading time when opening the analysis, since the pivoted table is already available. If the transformation was performed as a separate step, the pivot operation would have had to be performed as part of loading the analysis file.

Custom data transformations may also benefit from being performed as part of the data source.

The image below shows the new access point to insert a transformation on a data source.

Edit replace value transformations

It is now possible to edit replaced values without creating additional transformation steps within the analysis. This means that you can go back and modify previously added replacement operations, if they are no longer applicable. By editing already created operations, you can avoid having a large number of transformations for replacing the same value over time, and make the analysis cleaner.

The image below shows the entry point for editing two replace specific value transformations. Click the Edit button to open the new edit dialog.

 

The image below shows the new edit dialog for Replace Specific Value:

Since we have replaced a specific value, we have defined both a new value and a primary key column (PermitNumber). You can add more key columns, and you can replace the currently used key column and/or value in the dialog.

You can also insert a new replace value transformation (using the new Replace Value and Replace Specific Value dialogs) into an existing transformation group by clicking Insert in the Edit Transformations dialog:

Edit relational data connection data sources from the source view

Previously, Spotfire users had source view access to make quick changes to data connection configurations. This made it possible to add and remove tables and columns, add or modify custom queries, modify prompts, change column names and other settings that are part of data connections.

With this release, it is just as easy to make changes to the data source used by the data connection. The data source holds information regarding source IP, authentication method, time-outs and database, all of which are now easy to modify.

For example, it has never been as easy to move from a test database to a production database. With a few clicks from the source view, you can now point the data source to another database, maybe even to a database with another type of authentication method. If different table names are used in the databases, for example, 'dbo.test.transaction' in the test database and 'dbo.prod.transaction' in the production database, Spotfire highlights these differences in the data connection and makes it easy to select the corresponding table in the production database.

The image below shows a data connection data source being displayed in the source view. Click on the settings button (the gear icon) on the data source node to edit the data connection. 

The image below shows the Views in Connection dialog reached from the settings button (the gear icon). From here, you can enable full editing of the data connection by clicking the button in the lower left corner of the dialog.

The image below shows the new Edit Data Source Settings button. This is a new feature in 7.11 and provides a shortcut to editing your data source.

The image below shows the Microsoft SQL Server Connection dialog which contains the settings for the connection data source. From here, it is easy to, for example, switch from a test to production server or database. You can also switch authentication method.

 

Data Access

Option to query SAP BW directly using the SAP BAPI API

The SAP BW connector now has the option to query SAP BW using the native SAP BAPI API, without going through the ODBO API used until now. If you choose to enable the BAPI API integration, you can expect a boost in performance and more detailed messages from SAP BW should something go wrong. If you choose not to enable the BAPI API, the SAP BW connector uses the ODBO API as before.

We are convinced that the BAPI API will provide a better user experience and allow us to develop new features over time. We have therefore decided to deprecate support for the ODBO API in a future Spotfire release. However, both APIs will be available for a period of time, to allow you to upgrade your SAP BW client driver installation to the BAPI API at your own pace.

The image below shows the title of the SAP BW Connection dialog, where it is indicated that the SAP BAPI API is being used.

Load more than one million SAP BW data cells

Note: This feature becomes available when you have enabled SAP BW's BAPI API on Spotfire clients and servers. Please see the "Option to query SAP BW directly using the SAP BAPI API" feature above for more details.

SAP BW limits the number of non-empty cells that can be retrieved in metadata and in result data sets. This limit is configurable in SAP BW, and common limits are between 500k and 1M non-empty cells. By leveraging the SAP BW BAPI API, Spotfire is no longer bound by this limitation and allows you to analyze more data than the limit permits. This means that you can connect to BEx queries representing more data, thus extending the number of use cases you can implement with Spotfire.

Only Spotfire administrators can enable this capability in the Spotfire platform.

Specify SAP BW operation timeout

It is common for SAP BW BEx queries to represent very large amounts of data. This means that Spotfire data import queries towards BEx queries sometimes need some extra time to complete. You can now increase the default 10 minute timeout as part of the SAP BW data connection. This allows you to import and analyze larger data volumes without queries timing out before your data is available.

The images below show the SAP BW Connection dialog. Click the new Advanced tab to reach the operation timeout setting.

 

Increased SAP HANA function support

Spotfire's SAP HANA connector now supports the following additional functions: 

Median

Stddev_Pop

Stddev_Samp

Var_Pop

Var_Samp

Bitcount

Months_Between

Years_Between

The image below shows a few of the new functions in Spotfire's Custom Expression user interface. Note that the details for how to use these functions are documented by SAP and are subject to change over time.

Support for new thrift transport modes in Apache Spark SQL

Spotfire's connector for Apache Spark SQL now supports the thrift transport modes Binary, SASL and HTTP. The TLS security settings are now on the first page of the dialog and turned on by default for new connections, which makes it quicker to configure your data connections in a secure way, for example to Databricks data sources.

Support for Teradata 16

The Teradata connector and Information Services now support Teradata 16.

The images below show the different tabs available in the updated Teradata Connection dialog.

Spotfire Cloud access to data from TIBCO Spotfire Data Catalog in Spotfire Cloud Business Author and Consumer

Analysis files opened in Spotfire Cloud Business Author and Consumer can now load data directly from publicly available TIBCO Spotfire Data Catalogs.

Analysis files are authored in Spotfire Analyst, saved to the Spotfire Cloud Library and are instantly available for Business Author and Consumer users.

As a Business Author and Consumer user, you will receive fresh data when the analysis is opened. You can manually refresh data from individual data sources in the source view of Spotfire Business Author.

The image below shows the library browser of the Spotfire Cloud web client.

When you open an analysis based on data from TIBCO Spotfire Data Catalog, it is now possible to refresh the data directly from the source view:

 

Spotfire Server

LDAP and Spotfire Authentication

Spotfire 7.11 allows users to access Spotfire even though they are not part of the external user directory.

If you configure authentication against an external user directory, such as an LDAP directory or a Windows NT Domain, you can combine this with manually adding users to the Spotfire database, so that you do not have to add them to the LDAP directory.

To see more on this feature, go here.

Scheduling & Routing

Spotfire 7.11 provides three new features for scheduling and routing, to help administrators more easily manage routing rules and analysis files that are not cached.

  1. You can now prevent users from opening analysis files that are not cached by scheduled updates. This is useful when certain analysis files require significant resources to load initially; by not allowing users to open an uncached analysis file, you prevent that load from happening. To see more on how to do this, read more here.
  2. You can now recover a rule if it was automatically disabled. When an analysis file is deleted from the library, the routing rule associated with it will fail and the rule will become disabled. Now, if the analysis file is imported back to its previous location, the rule is recovered and can automatically be re-enabled by updating a setting in the server configuration file, enable-recovered-rules-automatically. To see more on this feature, go here.
  3. You can now copy routing rules and schedules from one site to another. For details on how to use this feature, go here.

Update to the Library Browser Page

The Spotfire library browser now provides a left-hand navigation section that lets you view recently opened files as well as quickly browse for other files of interest.

 

Developer

IronPython support updated to version 2.7.7

TIBCO Spotfire 7.11 supports the latest version of IronPython (2.7.7), enabling more powerful language features and libraries.

IronPython is an implementation of the Python programming language that is tightly integrated with the .NET Framework. Using IronPython scripts with Spotfire, you can utilize not only .Net Framework and Python libraries, but also the full Spotfire C# API. This makes IronPython scripting a powerful tool when creating advanced analytic applications in Spotfire. If you need to run certain scripts with the older version of IronPython, this is still supported by selecting the older version in the drop-down list shown in the image below.
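As a small, self-contained illustration of this mix, the script below combines a Python standard-library module, a .NET Framework class and Spotfire document properties. The property names "Threshold" and "LastUpdated" are just assumed examples of string document properties defined in the analysis.

# Minimal sketch: combines a Python standard-library module, a .NET class and
# Spotfire document properties. "Threshold" and "LastUpdated" are assumed to be
# string document properties defined in the analysis.
import math                              # Python standard library
from System import DateTime              # .NET Framework class

# Read a document property, derive a new value and write it back.
threshold = float(Document.Properties["Threshold"])
Document.Properties["Threshold"] = str(math.ceil(threshold * 1.1))

# Timestamp the change in another document property using .NET.
Document.Properties["LastUpdated"] = DateTime.Now.ToString("yyyy-MM-dd HH:mm")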

For tutorials and examples, see https://community.tibco.com/wiki/ironpython-scripting-tibco-spotfire.

 

TIBCO Spotfire® 7.10

Spotfire 7.10 provides a new, high-resolution, export to PDF feature, including a modern, user-friendly UI, with a live preview. In addition, there are also very useful improvements in data access (especially for SAP BW users), visual analytics, and, for administrators, we have made the node manager upgrade process simpler. 

New and improved Export to PDF

With Spotfire 7.10, the new Export to PDF feature includes the following main improvements over the legacy implementation:

  • The exported visualizations use the visual theme in the analysis.
  • The exported PDF is of a higher resolution.
  • The modern user interface makes it easier to configure the export, to get the result you need.
  • The dialog provides a preview that lets you see the result of your settings.

You access the new Export to PDF dialog from the File > Export menu in Spotfire Analyst, or, from the menu in the top right of the web client. 

In the left-hand panel, there are controls that let you configure what to export and what type of content to include (such as page numbers, date, annotations, etc.). You can also find basic settings here, such as the paper size and page orientation. Just to the right of the panel is the preview area, which is dynamically updated as you change the export settings in the left-side panel, so you can see the effect of your selections directly. The preview can also be zoomed using the controls in the upper right corner.

 

Control the proportions of the exported content

You can now easily control the proportions of exported visualizations. In the Proportions part of the user interface, you can choose to use one of three options:

As it is on your screen

This setting is the default option in Spotfire. Choosing this option ensures that the PDF page displays the content exactly the way it looks on your screen. However, you might not use all the available space on the paper with this choice, as shown below. (In this image, the paper is A4 landscape-oriented, so a portion of the paper at the bottom is not used.)

Fit to PDF page

If you want to utilize your paper dimensions, choose Fit to PDF page. When you select this option, check to make sure no labels are truncated, because the aspect ratio for the dashboard may be changed significantly. (The image below shows how this can happen.)

Notice that, in the picture above, the labels in the lower line chart to the right do not fit the space, the labels in the bar chart look sub-optimal, and only parts of the company names in the table in the center are visible. In this case, using the feature Relative text size is a great option.

Relative text size

Use the Relative text size slider to scale the text to the best size. Below, you can see the result: the text is smaller, but the labels and the text in the table fit and are easily readable. Watch this video showing how Relative text size works.

Custom

If the options Fit to PDF page or As it is on your screen cannot give you what you need, you can use the custom proportions to define any desired aspect ratio for the exported content.

 

Exporting all rows from a table, or all trellis pages from a trellised visualization

To export all rows from a table (not only the rows visible on the screen), or to export all trellis pages for a trellis-by-pages visualization, you first select the visualization to export. A new and very convenient way to do this is from the right-click menu in the visualization:

The What to export – Active visualization option is then automatically selected, and you can select Export entire table, as shown below.

The process is very similar for export of all trellis pages in a trellis-by-pages visualization.

 

Visualize direction on maps or scatter plots by setting the rotation of markers

It is now possible to rotate markers on maps and scatter plots, based on values in a column or a custom expression. For example, using this new feature, you can configure the direction of markers to indicate wind direction, a ship's heading, or similar direction information.

Set the rotation using a column or a custom expression on the new Rotation axis of the scatter plot or map chart marker layer. The rotation is described in degrees, where 0 is North, 90 is East, 180 is South, and so on.

 

Scroll in a cross table hierarchy

When the hierarchy of a cross table is very large, you can now scroll in the hierarchy, as well as in the values. This new feature makes it easier to work with cross tables that have a large hierarchy.

The Appearance tab in the cross table properties now has a new section that is called Horizontal scrolling.

There are three options in the Horizontal scrolling section:

Freeze row headers keeps the row header frozen, so you can scroll only the column values. This was the behavior in version 7.9 of Spotfire, and earlier.

Scroll row headers specifies that the cross table always scrolls both row headers and column values.

Adjust automatically is the new default: The cross table sets the best of the two other options automatically, depending on the width of the row headers relative to the width of the whole cross table.

 

Quick auto-zoom on maps

You can now enable or disable auto-zoom much more quickly, directly from the right-click menu in the map chart.

 

Improved performance when filtering data in-db

When you use Spotfire to analyze data in a database with a live connection (data still kept in the database, not brought into memory with Spotfire), filtering performance is much better in Spotfire 7.10 compared to earlier versions.

 

Spotfire business users can now limit data using SAP BW BEx variables

Spotfire has had native self-service support for SAP BW since version 5.5. The number of Spotfire users on SAP BW has grown rapidly since then. We are happy to announce that the most-requested SAP BW integration feature is available in Spotfire 7.10.

In SAP BW BEx queries, variables are used to limit the data to be loaded. With Spotfire 7.10, it is now possible to connect SAP BW BEx query variables to Spotfire prompts automatically. Now, you can build analysis files that are directly connected to the variables that are already part of your BEx queries and your business processes. SAP BW users will be familiar with the variables, and can, for example, very easily narrow down the data being analyzed to certain time frames, product categories, equipment maintenance areas, oil wells, accounts, employees, and so on. 

The following  image shows the updated Data Selection in Connection dialog box for defining SAP BW views in Spotfire.

The panel to the right has a new section for prompt settings. To activate prompts for a variable, select the Prompt for values check box.

The prompt type adapts to the prompt type defined in the BEx variable. In the example above, the prompt type for the TYPE variable is locked to Single selection, because that is the input allowed for that BEx variable.

To make it easier for business users to understand the prompt, you can provide a description of the prompt.

The following image shows that it is possible to control in which order the prompts are displayed to business users. This feature is important because variables are related, and limiting one variable affects the values available for the following variables.

The following image displays an example of the first prompt, for the variable Tax:

The following image displays the second prompt, TYPE. Users can enter variable values manually, or (as in this example), they can load the unique values in a list from SAP BW.

The following image displays the third and last prompt, Region, with a value which has been entered manually. 

The following image depicts the data analysis as a bar chart, where the data is limited by the selections previously made using the prompts.

In BEx queries, variables are used to limit the data to be loaded. Some variables are mandatory, and values must be defined before a user can open the query. By establishing prompting, you can let the end user define the variable value, instead of defining it in the connection configuration.

Note: You can define both a value and prompting for the same BEx variable. The variable value you define in the connection is the default selection in the prompt dialog for the variable when the connection is opened. This can be useful if you save the connection in the library for reuse. However, if you create an analysis with prompts and save it to the library, then the selections you made in the prompts when creating the analysis will be stored in the analysis. In that case, it will be your selections in the prompts, rather than the variable values defined in the connection, that are the default selections in prompts shown to the end users.

Compared to working with relational data sources, BEx queries are more restrictive regarding how you can set up prompting. When a variable is defined in the query, it is designed to accept only certain input. For example, it can be a single value, a multiple value or a range. In Spotfire, the accepted input determines the prompt types you can use for a BEx variable.

Note: Unless Load values automatically is selected, the prompts for BEx variables let users enter values manually by default. When a user enters variable values manually, Spotfire supports entering values as text (captions). Entering values as keys is not supported.

 

SAP Message Server support for SAP BW

You can now connect Spotfire to your cluster of SAP BW systems using the SAP Message Server load balancer. Previously, you had to connect Spotfire directly to a certain SAP BW instance.

The SAP Message Server allows IT to assign application servers to workgroups or specific applications. Users are automatically logged in to the server that currently has the best performance statistics and/or the fewest users.

The image below shows the updated SAP BW connection dialog, with the new fields for entering SAP Message Server connection details.

Single Sign On to SAP HANA with SAP SSO 3.0

It is common that SAP HANA deployments use the SAP SSO 3.0 Kerberos solution. The Spotfire SAP HANA integration now supports Kerberos authentication in combination with SAP SSO 3.0, in all clients and servers. This change enables Spotfire users analyzing SAP HANA data to access data without entering their SAP HANA credentials manually. It also provides a central location for users and roles administration, for SAP HANA administrators.

Configurable Essbase Measure dimension

If you are connecting to an Oracle Essbase cube that does not have a dimension tagged as the accounts dimension, you can now specify which dimension contains the measures. In previous releases, this was not possible, and some users could not connect to their Essbase cubes.

The following image shows the dialog that is displayed when you create a connection to such a cube.

You can manually specify which dimension to use as the measure (accounts) dimension in your connection.

 

API access to Spotfire's data wrangling operations

Spotfire's Source View (available by expanding the data panel) has been extremely well received by the Spotfire user community. It provides an overview of your data wrangling steps, and it also has access points for going back and editing data wrangling steps.

With this release, you can now get the same overview and the same editing capabilities using an API.

With API control of how data is wrangled, you can unlock new ways of building analytics applications. For example, an analytic scenario can be adapted on the fly by letting business users change join type (API control of the add columns operation), which instantly changes how data is blended, and thus, is presented in the analysis file.

Using the API, you can extract data wrangling and cleansing steps from Spotfire. For example, all uses of the replace values data transformation on your data (also new with this release) can be exported. This means that you can convert the steps taken to cleanse your data into, for example, SQL or Spark code.

Below is an example of how you can set the join type so that it is controlled by a document property.

# The variables 'table' (a data table), 'joinType' (a string naming a JoinType value,
# read from a document property) and 'matchOnNull' (a boolean) are assumed to be
# defined as script parameters.
from Spotfire.Dxp.Data import *
from Spotfire.Dxp.Data.DataOperations import *
from System import *

# Get the source view of the data table and find the first add columns (join) operation.
sourceView = table.GenerateSourceView()
op = sourceView.GetAllOperations[AddColumnsOperation]()[0]

# Parse the join type from the document property and apply the updated settings.
newJoinType = Enum.Parse(JoinType, joinType)
op.AddColumnsSettings = op.AddColumnsSettings.WithJoinType(newJoinType).WithTreatEmptyValuesAsEqual(matchOnNull)

The following image shows how a text area input field could be used to control the join type through a document property.

Learn more about the API here.

Easier debugging of TERR data functions

There is now a way to see debug information that is generated at runtime when a TERR data function executes, such as parameter values and also your own free-text output. The same mechanism is used whether you run the data function locally using the embedded TERR engine in Spotfire Analyst, or using the TERR engine in TIBCO Spotfire Statistics Services. To enable the debug output, select Tools > Options > Data Functions and select the Enable Data Function debugging check box:

This makes Spotfire show additional debug information from the execution of the data function, such as, input and output parameter values. The debug information is viewed in the notifications window that you access from the lower left notification message (click the yellow triangle):

 

Here is one example of debug output:

If you want, it is easy to add your own custom debug information in a data function, in the script body:

cat("My debug output: the input value for Multiplier was: ")
cat(Multiplier)
cat("\n")
OutputColumn <- InputColumn*Multiplier

 

Administration

Easier upgrades of Node Managers

Spotfire 7.10 improves the node manager upgrade process:

  1. You can upgrade the node manager from the administration UI.
  2. The node manager upgrade is now part of the rollback process in case there is an issue or error with the upgrade.

Quick deployment of package updates

To deploy Spotfire software, the administrator places software packages in a deployment area and assigns the deployment area to particular groups. 

If a new deployment is available when a user logs in to a Spotfire client, the software packages are downloaded from the server to the client. 

Deployments are used: 

  • To set up a new Spotfire system. 
  • To install a product upgrade, extension, or hotfix provided by Spotfire. 
  • To install a custom tool or extension.

With one click, you can now update, roll back, or delete your deployment packages.

Pagination for Viewing Scheduled Updates

The Scheduling & Routing page now has pagination. By default, you see 100 scheduled updates and routing rules per page, but you can switch the view to 50 or 150 items per page.

 

TIBCO Spotfire Analytics for iOS 2.9

Version 2.9 of the Spotfire iPhone/iPad App adds user notification when new data is available through Scheduled Updates and the ability to synchronize the App settings between multiple iOS devices using iCloud. Read more about this and other Spotfire Mobile releases here.

 

TIBCO Spotfire® Data Catalog

TIBCO Spotfire Data Catalog

The Data Catalog makes handling, searching and accessing data from across your organization a natural and fast experience. Even if your data is scattered in disparate data sources – in databases, data warehouses, or elsewhere – you can make all these data sources readily available for self-service access in one unified data catalog. Using Attivio intelligent technology, your data is profiled, organized and semantically enriched so that you can search with natural language across all your data sources, whether the contents are structured or unstructured. Discover relationships between your data with the patented ‘Join Finder’ and bundle just the right, relevant information in self-service data marts. Then start uncovering insights through seamless integration with the Spotfire visualization platform. 

TIBCO Spotfire® 7.9

The main highlights in Spotfire® 7.9 are significant new inline data wrangling features.

Spotfire® 7.9 On Demand Webinar 

Inline data wrangling

Edit data transformations

Spotfire 7.6 introduced the Source View which provides an overview of your data transformations, calculations and how your data tables are derived from rows and columns combined from multiple data sources. Spotfire 7.7 made add rows (unions) editable and smart by usage of the Spotfire recommendations engine.

With Spotfire 7.9, one of the most anticipated new features of all time is now available: the ability to change data transformation settings. This saves you a lot of time, for example, when a recently added data transformation needs further editing, or if an existing data transformation needs to be adapted to changes in the data source.

Access points for editing data transformations

The image below shows an example of details in the Source View. There are two access points for editing data transformations: one for editing data transformations that are part of a data source, and one for editing data transformations inserted as separate steps.

The image below shows the dialog for working with data transformations and how to gain access to the settings dialogs for each data transformation.

Available edit features

The following editing features are available from the Source View:

  1. Edit a data transformation. (Edit...)
  2. Delete a complete data transformation group. (The waste basket icon.)
  3. Delete a data transformation from a group (including deletion of a data transformation in data source step). (Remove)
  4. Insert a data transformation into an existing transformation group before or after existing data transformations. (Insert menu).
  5. Change the order in which data transformations are applied. (Move Up/Move Down)

Certain non-editable use cases

In some cases, it is not possible to edit a data transformation. In summary, if a data source (column producer) cannot be refreshed, it cannot be edited. This happens in two cases:

If the final data table is (top) embedded.

If a data source includes data transformations but its data is stored (embedded) rather than linked or cached.

A stored data table with a disabled access point for edit data transformation:

A linked data table with an available access point for edit data transformation:

Indications when something goes wrong in data transformations

With Spotfire 7.9, you will be notified when a data preparation step cannot be applied as expected, or if a data transformation is no longer necessary.

The image below shows an example of the three levels of indications, depending on severity.

An Error indication

An Error indication is displayed if a data transformation cannot be applied.

For example, if a column is missing for a calculation (if it has changed or has been removed in the data source, or, if it has been removed when editing a previous data preparation step in Spotfire), you will see an error. With Spotfire 7.9, and the ability to edit data transformations, many errors can be resolved in Spotfire. Once fixed, the error indication will be reevaluated, and hopefully disappear.

A Warning indication

A Warning indication is displayed if, for example, a defined value formatting step no longer can be applied.

For example, this happens if a column's data type (Real) and formatting (Percentage) have been changed using Spotfire's Data panel.

Now, if the data type changes to Real in the data source, Spotfire will not apply the data type change and thus cannot apply the Percentage formatting. A Warning highlights that you need to redefine the formatting on the column again.

An Information indication

An Information indication is displayed, for example, if a data type is changed to the same data type that the column already has, using a data transformation. This can happen if the data type has been wrong before, but now has been corrected in the data source. The data transformation in Spotfire is then no longer necessary, and this is highlighted using the Information indication.

Inline data cleaning

Spotfire now provides an easy way to clean up issues in your data, right when you see them. It is when you visualize data that you spot errors, so why not fix them right there and then? The new Replace value feature lets you change incorrect data values by double-clicking in a table, in the Details-on-Demand, or in the expanded Data panel. There are two flavors of the replace value feature: the ability to replace a single value only, or to replace all occurrences of that value in the column.

Replace all occurrences of the value

For some types of data issues, the natural way to fix them is to replace all occurrences of the incorrect value. This helps you solve issues caused by alternative (mis)spellings like Tomatoes|Tomatos or Color|Colour, or cases where some rows of data use acronyms such as CA while other rows use the full name California. It can also be used to group categorical values into different "buckets", such as grouping states into arbitrary regions.

Replace a single data value

Replacing a single data value is useful, for example, when you find issues in numerical data. Perhaps the decimal point is in the wrong place, or some other type of error. 

Replace specific value from a table details visualization

Replace specific value from the Details-on-Demand

Replacing only the single value requires that there is a defined key that can be used to identify this specific row of data. In the screenshot above, you can see a link to "Select key columns". The link leads to the dialog below, which lets you define one or more columns that uniquely identify each row of the data table.

How does it work?

Underneath the surface, the changes are implemented using two new data transformations, Replace value and Replace specific value. This means that no data is changed in the original data source. Instead, the value is replaced when the data is brought into Spotfire. It also means that when data is reloaded, the same corrections are applied again, and in the Replace value case, new instances of the value in question are also replaced.

The logic in the Replace specific value case is to replace the value only if it is the same value as when the transformation was created. Thus, if the value is changed in the data source after the transformation was defined, the transformation will no longer have any effect.

Review all changes

The visual Data source view lets you inspect and, if needed, remove the Replace value transformations.

Above, you can see how replace value transformations are shown in the source view.

Recommendations for add rows prefix and postfix support

Before Spotfire 7.9, Spotfire's recommendation engine would automatically detect if new data should be added as rows to existing data. With Spotfire 7.9, the recommendation engine for add rows also automatically matches columns with common names but different prefixes and/or postfixes. For example, the new column 'Sales (2016)' will match the existing column 'Sales (2015)'.

Columns that have the same prefix/postfix will have the prefix/postfix removed from the column name. In the example above, the column name will be 'Sales'.

The prefix/postfix will automatically be entered on all rows in the origin column. In the example above, the origin column will contain '2016' and '2015' for the respective data sources.

Access Amazon Redshift data from Spotfire Cloud web clients

Amazon Redshift is now supported in Spotfire Cloud Business Author and Consumer. This means that when you open an analysis file with data from Amazon Redshift in Spotfire Cloud Business Author and Consumer, you can now load data directly from your Amazon Redshift instance. Both in-database live queries and in-memory data import are supported.

Analysis files with Amazon Redshift connections are authored in Spotfire Cloud Analyst, saved to the Spotfire Cloud Library and are then available for Spotfire Cloud Business Author and Consumer users.

You can manually refresh data from individual data sources from Business Author's Source View.

Note: You might have to allow the Spotfire Cloud servers to access your Amazon Redshift data by whitelisting the servers' IP addresses. More information is available in the TIBCO Cloud Spotfire help.

Access Azure SQL data from Spotfire Cloud web clients

Azure SQL is now supported in Spotfire Cloud Business Author and Consumer. This means that when you open an analysis file with data from Azure SQL in Spotfire Cloud Business Author and Consumer, you can now load data directly from your Azure SQL instance. Both in-database live queries and in-memory data import are supported.

Analysis files with Azure SQL connections are authored in Spotfire Cloud Analyst, saved to the Spotfire Cloud Library and are then available for Spotfire Cloud Business Author and Consumer users.

You can manually refresh data from individual data sources from Business Author's Source View.

Note: You might have to allow the Spotfire Cloud servers to access your Azure SQL data by whitelisting the servers' IP addresses. More information is available in the TIBCO Cloud Spotfire help.

Access OData provider data from Spotfire Cloud web clients

Tutorial: https://community.tibco.com/wiki/access-odata-provider-data-spotfire-clo...

OData is now supported in Spotfire Cloud Business Author and Consumer. This means that when you open an analysis file with data from OData in Spotfire Cloud Business Author and Consumer, you can now load data directly from your OData instance. The OData connector supports in-memory data import.

Analysis files with OData connections are authored in Spotfire Cloud Analyst, saved to the Spotfire Cloud Library and are then available for Spotfire Cloud Business Author and Consumer users.

You can manually refresh data from individual data sources from Business Author's Source View.

Note: You might have to allow the Spotfire Cloud servers to access your OData providers by whitelisting the servers' IP addresses. More information is available in the TIBCO Cloud Spotfire help.

Connectors and live query data tables

Microsoft Azure HDInsight is now supported

Starting with Spotfire 7.9, the Hortonworks Hive connector now supports Microsoft Azure HDInsight.

For more information about Microsoft Azure HDInsight, see: https://azure.microsoft.com/en-us/services/hdinsight/

Apache KNOX is now supported

Starting with Spotfire 7.9, the Hortonworks Hive connector now supports Apache KNOX, with or without Kerberos.

For more details about Apache KNOX, see: https://knox.apache.org

SAP SSO is now supported with the SAP BW connector

It is common that SAP BW deployments use SAP's SSO solution. Spotfire's SAP BW integration now supports this authentication method in all clients and servers. This enables Spotfire users to analyze SAP BW data without entering their SAP BW credentials manually. It also provides a central location for users and roles administration for SAP BW administrators.

Instructions for how to configure Spotfire for SAP BW SSO are available here: https://community.tibco.com/wiki/single-sign-tibco-spotfire-sap-bw-conne...

Configurable maximum allowed number of rows in live query results

Spotfire 7.9 introduces a new safety setting that allows system administrators to set a limit on how large the data tables loaded using live queries (in-database tables) can be. This protects against, for example, ad hoc analysts splitting a bar chart on a fact table's ID column, which could result in a gigabyte-sized data table being loaded into client and Web Player memory.

Google Analytics system web browser authentication

Spotfire's Google Analytics connector now supports Google's new modernized OAuth implementation. The system web browser is now used for user authorization, instead of a built-in Spotfire dialog. This means that if a user is already logged in to Google in the system web browser, the login step is performed automatically.

For more details about the reason for this change, see: https://developers.googleblog.com/2016/08/modernizing-oauth-interactions...

Support for new data source versions

Analysis Services 2016 is now supported

Spotfire 7.9 (and later) now supports Analysis Services 2016.

For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#ssas

PostgreSQL 9.5 and 9.6 are now supported

Spotfire 7.9 (and later) now supports PostgreSQL 9.5 and 9.6.

For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#postgresql

MySQL 5.7 is now supported

Spotfire 7.9 (and later) now supports MySQL 5.7.

For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#oraclemysql

SAP BW 7.5 is now supported

Spotfire 7.9 (and later) now supports SAP BW 7.5.

For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#sapnetweaver

Apache Spark SQL 2.0 is now supported

Spotfire's Spark SQL connector now supports Spark 1.6.0 to 2.0.2.

NOTE: The latest TIBCO ODBC Driver for Apache Spark SQL must be used in combination with the connector.

For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#apachesparksql

Information Services now supports constrained Kerberos delegation

Spotfire Information Services now supports constrained Kerberos delegation in combination with compatible JDBC drivers.

Location Analytics

Nautical Miles unit (new feature)

Nautical miles has been added as a unit of measurement, in addition to the existing imperial and metric units, when using radius and rectangle selection.

Get the coordinates of a location (new feature)

You can now right-click anywhere on a map and get geographic coordinates (latitude and longitude) for a location.

Easier access to map layer (enhancement)

It is now much easier to enable access to the map layer when Spotfire cannot access the Internet or runs in a restricted environment. Now, only one unique domain needs to be allowed.

Advanced Analytics

  • Continued work towards broader R compatibility, to enable more and more potential applications to be run on TERR. As of this release, 99% of packages on CRAN, almost 10,000 community packages, can be loaded in TERR. (Well done, TERR Team!). Full details on compatibility are available on the TERR Documentation site.
  • Significant improvements to TERR performance in many areas.
  • TERR can now be used in RStudio to create interactive R Markdown notebooks. R Notebooks allow for direct interaction with R while producing a reproducible document with publication-quality output.
  • A new Guide to Graphics in TERR, which provides tips and examples on using Javascript-enabled packages, certain open-source R packages, and the TERR RinR package to create graphics from TERR.

Server

Log4J2

For Spotfire Server 7.9, the logging framework has been upgraded from Log4j to Log4j2. The benefits of upgrading to Log4j2 include the following:

  • You can manage logging from the UI. For example, you can start debug logging during runtime, without having to manually edit configuration files.
  • Log4J2 is garbage-free, which reduces the pressure on the garbage collector.
  • Java 8 feature sets are fully supported, including lazy logging.

If you used a custom-modified log4j.properties file in any Spotfire Server version between 7.5 and 7.8, you must manually add these modifications to the new log4j2.xml file. 

Sites

You can now create multiple Spotfire environments that share the same Spotfire database, including the library and user directory. These environments, which are called sites, can be configured to reduce latency for multi-geographic deployments. Sites also enable the use of a variety of authentication methods, along with different user directories, within the same deployment. 

Each site includes one or more Spotfire Servers along with their connected nodes and services. A site's servers, nodes, and services can only communicate within the site, but because the Spotfire database is shared among the sites, all of the sites have access to the users, groups, and library in your Spotfire implementation.

The benefits of using sites include the following:

  1. You can route user requests from a particular office to the servers and nodes that are physically closest to that office. This reduces the impact of network latency between servers that are located in different geographic regions. 
  2. You can enable different authentication methods for different sets of users who share a Spotfire implementation. For example, internal users can be authenticated with Kerberos authentication while external users, such as customers and partners, can be authenticated with a username and password method.

 

TIBCO Spotfire® 7.8

Spotfire 7.8 extends the reach of the Spotfire Recommendation engine into the data space, making it easier than ever to add more rows of data to your analysis. For administrators, Spotfire 7.8 adds support for authentication through OpenID Connect (OIDC). And for IronPython and C# developers, there are new APIs that enable you to create easier-to-use and more powerful analytic applications with Spotfire.

Recommendations for Add rows

In Spotfire Business Author, when adding new data, the user can now get a recommendation to add the data as rows to an existing data table, if the Spotfire Recommendation engine determines that this is suitable. Further, Spotfire can automatically match the columns from the original and the new data sets. See how this works in this video, and for more details see this article.

Data Access

Improvements to the SQL Server connector

The Spotfire SQL Server data connector now supports SQL Server 2016, Azure SQL and Azure SQL Data Warehouse.

Configure the maximum number of in-database rows in the table visualization

In earlier versions of Spotfire, when you kept the data in-database, as opposed to loading it into the Spotfire in-memory engine, table visualizations were limited to showing at most 10,000 rows. An administrator can now configure the maximum number of rows to display in a table visualization when running against in-database data.

The setting, called TableVisualizationExternalRowLimit, is reached through the Administration Manager.

Location Analytics

WMS 1.3.0 Support

Spotfire map charts now support version 1.3.0 of the WMS standard.

For Developers - new APIs

KPI Chart API

The KPI chart API allows authors and developers to automatically configure KPI charts from IronPython scripts or custom tools. This enables creating more user-friendly and powerful visual analytics applications for end users. See this article for further details and examples.
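
For orientation, here is a minimal IronPython sketch of the pattern. The class and member names used on the KPI objects (KpiChart, KPIs, AddNew, Title) are assumptions made for illustration; verify them against the linked article and the API documentation.

  # IronPython script: add a KPI chart to the active page.
  # KPI-level member names below are assumptions; see the API docs for the exact names.
  from Spotfire.Dxp.Application.Visuals import VisualTypeIdentifiers, KpiChart

  page = Document.ActivePageReference

  # Create the visual and get the strongly typed KpiChart object.
  kpiChart = page.Visuals.AddNew(VisualTypeIdentifiers.KpiChart).As[KpiChart]()
  kpiChart.Title = "Sales KPIs"

  # Add one KPI tile and give it a title (assumed members).
  kpi = kpiChart.KPIs.AddNew()
  kpi.Title = "Total Sales"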

LayoutDefinition API

IronPython and C# developers can now define the layout of visuals on a page in more detail. The new API allows specifying vertical and horizontal proportions to lay out the visuals on a page. This means you can now achieve similar layouts using the API as you can when manually arranging visuals on a page. See this article for further details and examples.
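
A minimal IronPython sketch of the idea follows, assuming the page already contains at least two visuals. The namespace and the two-argument Add overload used for proportions are assumptions; check the linked article and the API documentation for the exact signatures.

  # IronPython sketch: place the first two visuals on the active page side by side.
  from Spotfire.Dxp.Application.Layout import LayoutDefinition

  page = Document.ActivePageReference
  visuals = list(page.Visuals)           # assumes at least two visuals on the page

  layout = LayoutDefinition()
  layout.BeginSideBySideSection()
  layout.Add(visuals[0], 70)             # assumed overload: second argument is the proportion
  layout.Add(visuals[1], 30)
  layout.EndSection()

  page.ApplyLayout(layout)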

Administration Improvements - Federated Authentication: OpenID Connect (OIDC)

Spotfire Server now supports the use of OpenID Connect. OpenID Connect is an open standard and decentralized authentication protocol. Using OpenID Connect, a customer can set things up so that their users can log in with an account they already have. For example, a user can log into Spotfire with Google, Yahoo, or Salesforce. This eliminates the need for administrators to provide their own login systems (such as LDAP or AD), and it reduces the number of usernames and passwords their users need to remember.

To set up OpenID Connect with Spotfire Server, there are two prerequisites:

  • You must configure a public address URL within Spotfire Server.
  • You must register a client at the provider with a return endpoint URL, and receive a client ID and a client secret from the provider.

 

New Solutions and Extensions

Spotfire Templates, Data Functions, Accelerators, Extensions and Custom Datasources are available for a wide range of industry vertical and horizontal use cases.  Most are provided as free downloads.  The most popular, recent offerings are shown below.  For a complete list, view all analytics components on the TIBCO Exchange.

Alerting

The Spotfire Plug-in for Alerting allows you to configure alerts directly from any Spotfire analysis file and can be used to alert when thresholds or rules on any chart are violated.  It is an extension for TIBCO Spotfire that integrates with Automation Services via an alerting task that can generate e-mail, text or pop-up alerts.

Live Datamart Custom Data Source

This Custom Datasource is a TIBCO Spotfire® Extension that enables users to build interactive Spotfire visualizations using data stored in TIBCO® Live Datamart.

Customer Analytics and Marketing

The Customer Analytics template series is used to analyze customers' purchase behavior. It includes Spotfire analysis templates for segmentation, propensity and affinity.

A/B Testing data functions provide analysis for a number of marketing use cases where the goal is to compare the effect of different “treatments” on a response, such as click-through rates, orders or sales dollars. These treatments can be different web pages, different email designs, copy, or promotions.
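
At their core, these comparisons come down to a test on the difference between response rates. The Python sketch below is only a conceptual illustration of such a two-proportion z-test, with made-up numbers; it is not the Spotfire data function itself.

  # Conceptual illustration only, not the Spotfire A/B testing data function:
  # compare the click-through rates of two email designs with a two-proportion z-test.
  import math

  def two_proportion_z(clicks_a, sends_a, clicks_b, sends_b):
      p_a, p_b = clicks_a / float(sends_a), clicks_b / float(sends_b)
      p_pool = (clicks_a + clicks_b) / float(sends_a + sends_b)
      se = math.sqrt(p_pool * (1 - p_pool) * (1.0 / sends_a + 1.0 / sends_b))
      z = (p_a - p_b) / se
      p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
      return z, p_value

  z, p = two_proportion_z(120, 5000, 90, 5000)     # hypothetical example numbers
  print("z = %.2f, p-value = %.4f" % (z, p))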

Machine Learning

The Gradient Boosting Machine analysis template and data function are used to create a GBM machine learning model to understand the effects of predictor variables on a single response.  Examples of business problems that can be addressed include understanding causes of financial fraud, product quality problems, equipment failures, customer behavior, fuel efficiency, missing luggage and many others. 
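
As a conceptual parallel only (not the Spotfire template or its data function), fitting a gradient boosting model and ranking predictor influence looks roughly like this in scikit-learn:

  # Conceptual parallel using scikit-learn, not the Spotfire GBM data function:
  # fit a gradient boosting model and rank predictors by their importance.
  from sklearn.datasets import make_regression
  from sklearn.ensemble import GradientBoostingRegressor

  X, y = make_regression(n_samples=500, n_features=5, random_state=0)
  model = GradientBoostingRegressor(random_state=0).fit(X, y)

  # Larger importance values indicate predictors with more influence on the response.
  for i, importance in enumerate(model.feature_importances_):
      print("predictor %d: importance %.3f" % (i, importance))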

The Clustering with Variable Importance Data Function clusters objects together based on similarities between the objects and ranks the input variables according to their influence on cluster formation.

Fraud

The Financial Crime Buster Analysis Template guides the user through the tasks of ad hoc data discovery, supervised model creation and unsupervised model creation to build a strategy for combating financial crime.

Geoanalytics and Energy

The Contour Plot Data Function generates a contour plot as a feature layer on any map chart.

The Decline Curve Analysis Data Function calculates a hyperbolic decline curve analysis using oil and gas production data.
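
For reference, hyperbolic decline is commonly modeled with the Arps equation, q(t) = qi / (1 + b * Di * t)^(1/b). The short Python sketch below is a generic illustration of that formula, not the Spotfire data function itself.

  # Generic illustration of the Arps hyperbolic decline equation (not the data function).
  def hyperbolic_rate(qi, di, b, t):
      """Production rate at time t for initial rate qi, initial decline rate di
      and hyperbolic exponent b (0 < b <= 1)."""
      return qi / (1.0 + b * di * t) ** (1.0 / b)

  # Hypothetical example: 1000 bbl/day initial rate, 30%/year initial decline, b = 0.5.
  print([round(hyperbolic_rate(1000.0, 0.3, 0.5, t), 1) for t in range(6)])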

TIBCO Spotfire® 7.7

Version 7.7 further extends the capabilities of TIBCO Spotfire. The main areas of improvement are the ability to develop mobile applications, web authoring, data wrangling and management, and administration improvements for scheduled updates, resource pool management and Automation Services.

Below you will find more information and articles about specific features.

KPI chart and Mobile

In Spotfire 7.7, it is now a lot easier to create mobile applications with all types of visualizations. The minimum page size option enables vertical scrolling, so users on a small screen can view one or a few visuals at a time, while users on a larger screen can see more (or all) visuals at once. In addition, the KPI chart now has sparklines to give more context to the KPIs.

Also read Best practices for designing mobile applications in TIBCO Spotfire.

Data Access

Spotfire 7.7 provides a brand new self-service connector to Attivio, extending to business users the ability to create analysis files based on Attivio data lake data and unstructured content. With Spotfire 7.7, business users can even author analysis files that use the power of Attivio's full-text search engine. Data is brought into Spotfire on demand, based on what end users search for. SAP BW continues to be a very popular source of data, and Spotfire 7.7 delivers some of the most frequently requested features in this area. Both new and enhanced self-service data connectors benefit from the ease of use in Spotfire 7.7: by decreasing the number of steps users need to take to edit data connections and deploy connectors to the Spotfire ecosystem, valuable time is saved.

Data wrangling

Spotfire 7.7 continues to make it easier to prepare your data. Now it is possible to edit settings for add rows and data sources directly from the visual data source view.

Web authoring improvements

Spotfire Business Author has a number of new capabilities, such as creating and configuring KPI charts, creating multi-layer maps, adding color rules and, as mentioned above, adding rows to data tables.

Administration Improvements

The main improvements in administration are new TIBCO Spotfire Automation Services jobs for sending emails with attachments and for saving data to a file, improved management of resource pools for the web player and Automation Services, and monitoring of Scheduled Updates.

Custom panel API for Spotfire web clients

With Spotfire 7.7, developers can add custom panels to the Spotfire web clients.

Other API Improvements

Spotfire 7.7 adds APIs for the following (a combined IronPython sketch follows the list):

  • Cross Table sort mode (get/set); Global or Leaf: crossTablePlot.SortRowsMode = SortMode.Global;
  • Cross Table empty cell text (get/set): crossTablePlot.EmptyCellText = "-";
  • Get and set minimum page dimensions: page.MinimumWidth = 713; page.MinimumHeight = 446;
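
A combined IronPython sketch of these properties is shown below. It assumes the SortMode enumeration and the cross table type identifier are found in the Spotfire.Dxp.Application.Visuals namespace; verify the exact names against the API documentation.

  # IronPython sketch combining the new 7.7 properties on the active page.
  from Spotfire.Dxp.Application.Visuals import VisualTypeIdentifiers, CrossTablePlot, SortMode

  page = Document.ActivePageReference

  # New minimum page dimensions (get/set).
  page.MinimumWidth = 713
  page.MinimumHeight = 446

  # Apply the new cross table settings to every cross table on the page.
  for visual in page.Visuals:
      if visual.TypeId == VisualTypeIdentifiers.CrossTable:   # identifier name assumed
          crossTablePlot = visual.As[CrossTablePlot]()
          crossTablePlot.SortRowsMode = SortMode.Global        # or SortMode.Leaf
          crossTablePlot.EmptyCellText = "-"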

Location Analytics

Spotfire 7.7 improves the use of map charts through improved zoom visibility and improved map chart access when there is no internet connection. 

TIBCO Spotfire® 7.6

7.6 is an important release for TIBCO Spotfire, thanks to the modernized client and server architecture. This new foundation makes it easier and faster for us to deliver visualization improvements, and it significantly simplifies server administration and manageability. This document summarizes the cool new features in TIBCO Spotfire 7.6.

Below are tutorials and video links to learn more about a selection of the new features, and more.

KPI chart and Mobile:

The new KPI chart is a big fan favorite in TIBCO Spotfire 7.6. It is now easier than ever to configure a Key Performance Indicator dashboard in TIBCO Spotfire and make it available to consumers using the TIBCO Spotfire iOS app for mobile devices or the TIBCO Spotfire web client. Create dashboards that let users browse their KPIs, tapping a KPI to view more detailed KPIs or further details in regular TIBCO Spotfire visuals.

Waterfall Charts

Another great new visualization is the waterfall chart, which works with TIBCO Spotfire Cloud 3.6 and TIBCO Spotfire 7.6. Waterfall charts are useful when you need to show how different component factors contribute to a final result. They are commonly used in financial analysis, but are useful for other use cases as well. If you're unfamiliar with why you would use a waterfall chart in the first place, start by reading this post on why to use a waterfall chart. Then, explore how to create a waterfall chart in TIBCO Spotfire with the related tutorials.

Show top N vs the rest

It can be useful to visualize the top N of something versus "the rest". This is a great visualization technique to improve chart readability when you have a few large groups and many smaller ones. This article is relevant for 7.6 as well as for older versions.

Inline Data Preparation and Data Wrangling

Below is a selection of new, easy-to-use tools for preparing and wrangling data. This video shows how they can be used:

https://www.youtube.com/watch?v=hr1vZgbUcGQ

Visual overview of data table structures

It is sometimes challenging to understand which data sources and what methods have been used to create combined data tables. To solve this problem, data table data sources and operations can now easily be viewed in the Source view of the expanded data panel. It is possible to see detailed information about operations and preview intermediate resulting data tables after individual steps.

Split columns into new columns based on column values

Sometimes, column values contain multiple pieces of information. Examples are first and last name, or city and zip code. It's now easy to split columns of this type into separate columns containing the individual values from the original column. The original column can then be hidden from the analysis, so that it does not distract or take up valuable space (in, for example, the Data panel).

Unpivot from the data panel

Data can be organized in different ways, for example, in a short/wide or tall/skinny format, but still contain the same information. Often, it is easier to visualize data organized in a tall/skinny format, that is, when the values are collected in just a few value columns. Unpivoting is one way to transform data from a short/wide to a tall/skinny format, so the data can be presented the way you want it in the visualizations. The Data panel (both in TIBCO Spotfire Analyst and TIBCO Spotfire Business Author) now has a built-in unpivot tool on the right-click menu.
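
As a quick illustration of the concept (using pandas for the example, rather than the Spotfire unpivot tool itself), unpivoting turns a set of value columns into name/value pairs:

  # Conceptual illustration of unpivoting with pandas (not the Spotfire unpivot tool):
  # turn a short/wide table into a tall/skinny one.
  import pandas as pd

  wide = pd.DataFrame({
      "Region": ["North", "South"],
      "2015":   [100, 80],
      "2016":   [120, 95],
  })

  # One row per Region/Year combination, with a single Sales value column.
  tall = wide.melt(id_vars="Region", var_name="Year", value_name="Sales")
  print(tall)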


Using multiple screens when analyzing data

When you want to simultaneously view more visualizations than will fit on a single screen, you can now analyze your data using multiple screens!

New Google Analytics connector in TIBCO Spotfire Business Author and TIBCO Spotfire Analyst

TIBCO Spotfire Business Author and TIBCO Spotfire Analyst now support direct access to, and analysis of, data from Google Analytics.


Video: https://www.youtube.com/watch?v=Prju49PPRQ4

New Salesforce.com connector in TIBCO Spotfire Business Author

TIBCO Spotfire Business Author now supports direct access to, and analysis of, Salesforce.com data, without using the installed TIBCO Spotfire Analyst client.


Caching Data using Automation Services

Performance can often be improved by periodically loading data from databases and caching it, so that TIBCO Spotfire analyses requiring the data can be opened quickly and without each analysis hitting the database with queries. 

Custom/External Authentication in TIBCO Spotfire 7.5/7.6

Many customers want to embed TIBCO Spotfire Web Player into a portal or other web application and secure access by passing authentication information from the portal to TIBCO Spotfire. Customers also have internal web application security standards that require single sign-on to all web applications, which would include TIBCO Spotfire Web Player. TIBCO Spotfire supports these scenarios via custom and external authentication. As of TIBCO Spotfire 7.5, the architecture has changed such that support for these scenarios has moved from the TIBCO Spotfire Web Player to the TIBCO Spotfire Server.


 

 

 
