What's New in TIBCO Spotfire®
Last updated:
4:02am Aug 11, 2017

TIBCO Spotfire® 7.10

Spotfire 7.10 provides a new, high-resolution export to PDF feature, including a modern, user-friendly UI with a live preview. In addition, there are useful improvements in data access (especially for SAP BW users) and visual analytics, and for administrators, the node manager upgrade process has been made simpler.

New and improved Export to PDF

With Spotfire 7.10, the new Export to PDF feature includes the following main improvements over the legacy implementation:

  • The exported visualizations use the visual theme in the analysis.
  • The exported PDF is of a higher resolution.
  • The modern user interface makes it easier to configure the export and get the result you need.
  • The dialog provides a preview that lets you see the result of your settings.

You access the new Export to PDF dialog from the File > Export menu in Spotfire Analyst, or, from the menu in the top right of the web client. 

In the left-hand panel, there are controls that let you configure what to export and what type of content to include (such as page numbers, date, and annotations). You can also find basic settings here, such as the paper size and page orientation. Just to the right of the panel is the preview area, which is dynamically updated as you change the export settings in the left-side panel, so you can see the effect of your selections directly. The preview can also be zoomed using the controls in the upper right corner.


Control the proportions of the exported content

You can now easily control the proportions of exported visualizations. In the Proportions part of the user interface, you can choose to use one of three options:

As it is on your screen

This setting is the default option in Spotfire. Choosing this option ensures that the PDF page displays the content exactly the way it looks on your screen. However, with this choice, you might not use all the available space on the paper, as shown below. (In this image, the paper is A4 landscape-oriented, so a portion of the paper at the bottom is not used.)

Fit to PDF page

If you want to utilize your paper dimensions, choose Fit to PDF page. When you select this option, check to make sure no labels are truncated, because the aspect ratio for the dashboard may be changed significantly. (The image below shows how this can happen.)

Notice that, in the picture above, the labels in the lower line chart to the right do not fit the space, the labels in the bar chart look sub-optimal, and only parts of the company names in the table in the center are visible. In this case, using the feature Relative text size is a great option.

Relative text size

Use the Relative text size slider to scale the text to the best size. Below, you can see the result: the text is smaller, but the labels and the text in the table fit and are easily readable. Watch the video showing how Relative text size works.


If the options Fit to PDF page or As it is on your screen cannot give you what you need, you can use custom proportions to define any desired aspect ratio for the exported content.


Exporting all rows from a table, or all trellis pages from a trellised visualization

To export all rows from a table (not only the rows visible on the screen), or to export all trellis pages for a trellis-by-pages visualization, you first select the visualization to export. A new, convenient way to do this is from the right-click menu in the visualization:

The What to export – Active visualization option is then automatically selected, and you can select Export entire table, as shown below.

The process is very similar for export of all trellis pages in a trellis-by-pages visualization.


Visualize direction on maps or scatter plots by setting the rotation of markers

It is now possible to rotate markers on maps and scatter plots, based on values in a column or a custom expression. For example, using this new feature, you can configure the direction of markers to indicate wind direction, a ship's heading, or similar direction information.

Set the rotation using a column or a custom expression on the new Rotation axis of the scatter plot or map chart marker layer. The rotation is described in degrees, where 0 is North, 90 is East, 180 is South, and so on.
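If your source data contains direction as vector components rather than as a ready-made bearing column, the value for the Rotation axis can be computed with standard trigonometry. Below is a minimal sketch in plain Python; the component names, and the idea of deriving the bearing outside Spotfire, are assumptions for illustration (in Spotfire itself you would use a calculated column or a custom expression):

```python
import math

def compass_bearing(east, north):
    """Convert a direction vector into a compass bearing in degrees,
    matching the Rotation axis convention: 0 = North, 90 = East,
    180 = South, 270 = West."""
    return (math.degrees(math.atan2(east, north)) + 360.0) % 360.0

# A vector pointing due east yields a bearing of 90 degrees.
print(compass_bearing(1.0, 0.0))  # 90.0
```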


Scroll in a cross table hierarchy

When the hierarchy of a cross table is very large, you can now scroll in the hierarchy, as well as in the values. This new feature makes it easier to work with cross tables that have a large hierarchy.

The Appearance tab in the cross table properties now has a new section that is called Horizontal scrolling.

There are three options in the Horizontal scrolling section:

Freeze row headers keeps the row header frozen, so you can scroll only the column values. This was the behavior in version 7.9 of Spotfire, and earlier.

Scroll row headers specifies that the cross table always scrolls both row headers and column values.

Adjust automatically is the new default: The cross table sets the best of the two other options automatically, depending on the width of the row headers relative to the width of the whole cross table.
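The automatic choice described above can be sketched as a simple width comparison. This is an illustrative model only, assuming a threshold of half the cross table width; Spotfire's actual heuristic is not documented here:

```python
def choose_scroll_mode(row_header_width, table_width, threshold=0.5):
    """Mimic 'Adjust automatically': freeze the row headers while they
    take up a small share of the cross table width, otherwise scroll
    them together with the column values."""
    if row_header_width / table_width < threshold:
        return "Freeze row headers"
    return "Scroll row headers"

print(choose_scroll_mode(120, 800))  # Freeze row headers
print(choose_scroll_mode(600, 800))  # Scroll row headers
```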


Quick auto-zoom on maps

You can now enable or disable auto-zoom much more quickly, directly from the right-click menu in the map chart.


Improved performance when filtering data in-db

When you use Spotfire to analyze data in a database with a live connection (data still kept in the database, not brought into memory with Spotfire), filtering performance is much better in Spotfire 7.10 compared to earlier versions.


Spotfire business users can now limit data using SAP BW BEx variables

Spotfire has had native self-service support for SAP BW since version 5.5. The number of Spotfire users on SAP BW has grown rapidly since then. We are happy to announce that the most-requested SAP BW integration feature is available in Spotfire 7.10.

In SAP BW BEx queries, variables are used to limit the data to be loaded. With Spotfire 7.10, it is now possible to connect SAP BW BEx query variables to Spotfire prompts automatically. Now, you can build analysis files that are directly connected to the variables that are already part of your BEx queries and your business processes. SAP BW users will be familiar with the variables, and can, for example, very easily narrow down the data being analyzed to certain time frames, product categories, equipment maintenance areas, oil wells, accounts, employees, and so on. 

The following image shows the updated Data Selection in Connection dialog box for defining SAP BW views in Spotfire.

The panel to the right has a new section for prompt settings. To activate prompts for a variable, select the Prompt for values check box.

The prompt type adapts to the prompt type defined in the BEx variable. In the example above, the prompt type for the TYPE variable is locked to Single selection, because that is the input allowed for that BEx variable.

To make it easier for business users to understand the prompt, you can provide a description of the prompt.

The following image shows that it is possible to control in which order the prompts are displayed to business users. This feature is important because variables are related, and limiting one variable affects the values available for the following variables.

The following image displays an example of the first prompt, for the variable Tax:

The following image displays the second prompt, TYPE. Users can enter variable values manually, or (as in this example), they can load the unique values in a list from SAP BW.

The following image displays the third and last prompt, Region, with a value which has been entered manually. 

The following image depicts the data analysis as a bar chart, where the data is limited by the selections previously made using the prompts.

Some variables are mandatory, and values must be defined before a user can open the query. By enabling prompting, you can let the end user define the variable value, instead of defining it in the connection configuration.

Note: You can define both a value and prompting for the same BEx variable. The variable value you define in the connection is the default selection in the prompt dialog for the variable when the connection is opened. This can be useful if you save the connection in the library for reuse. However, if you create an analysis with prompts and save it to the library, then the selections you made in the prompts when creating the analysis will be stored in the analysis. In that case, it will be your selections in the prompts, rather than the variable values defined in the connection, that are the default selections in prompts shown to the end users.

Compared to working with relational data sources, BEx queries are more restrictive regarding how you can set up prompting. When a variable is defined in the query, it is designed to accept only certain input; for example, a single value, multiple values, or a range. In Spotfire, the accepted input determines the prompt types you can use for a BEx variable.

Note: Unless Load values automatically is selected, the prompts for BEx variables let users enter values manually by default. When a user enters variable values manually, Spotfire supports entering values as text (captions). Entering values as keys is not supported.


SAP Message Server support for SAP BW

You can now connect Spotfire to your cluster of SAP BW systems using the SAP Message Server load balancer. Previously, you had to connect Spotfire directly to a certain SAP BW instance.

The SAP Message Server allows IT to assign application servers to workgroups or specific applications. Users are automatically logged in to the server that currently has the best performance statistics and/or the fewest users.

The image below shows the updated SAP BW connection dialog, with the new fields for entering SAP Message Server connection details.

Single Sign On to SAP HANA with SAP SSO 3.0

It is common that SAP HANA deployments use the SAP SSO 3.0 Kerberos solution. The Spotfire SAP HANA integration now supports Kerberos authentication in combination with SAP SSO 3.0, in all clients and servers. This change enables Spotfire users analyzing SAP HANA data to access data without entering their SAP HANA credentials manually. It also provides a central location for users and roles administration, for SAP HANA administrators.

Configurable Essbase Measure dimension

If you are connecting to an Oracle Essbase cube that does not have a dimension tagged as the accounts dimension, you can now specify which dimension contains the measures. In previous releases, this was not possible, and some users could not connect to their Essbase cubes.

The following image shows the dialog that is displayed when you create a connection to such a cube.

You can manually specify which dimension to use as the measure (accounts) dimension in your connection.


API access to Spotfire's data wrangling operations

Spotfire's Source View (available by expanding the data panel) has been extremely well received by the Spotfire user community. It provides an overview of your data wrangling steps, and it also has access points for going back and editing data wrangling steps.

With this release, you can now get the same overview and the same editing capabilities using an API.

With API control of how data is wrangled, you can unlock new ways of building analytics applications. For example, an analytic scenario can be adapted on the fly by letting business users change join type (API control of the add columns operation), which instantly changes how data is blended, and thus, is presented in the analysis file.

Using the API, you can extract data wrangling and cleansing steps from Spotfire. For example, all usage of the replace values data transformation on your data (which is also new with this release) can be exported. This means that you can, for example, translate all the steps taken to cleanse the data into SQL or Spark code.

Below is an example of how you can write the join type so it is controlled by a document property.

# 'table', 'joinType', and 'matchOnNull' are assumed to be script parameters:
# the data table to modify, a join type name taken from a document property
# (for example "LeftOuterJoin"), and a boolean setting.
from Spotfire.Dxp.Data import *
from Spotfire.Dxp.Data.DataOperations import *
from System import *

# Get the source view of the data table and find its first add columns operation.
sourceView = table.GenerateSourceView()
op = sourceView.GetAllOperations[AddColumnsOperation]()[0]

# Parse the join type name from the document property into the enum value.
newJoinType = Enum.Parse(JoinType, joinType)

# Update the operation's settings with the new join type.
op.AddColumnsSettings = op.AddColumnsSettings.WithJoinType(newJoinType).WithTreatEmptyValuesAsEqual(matchOnNull)

The following image shows how a text area input field could be used to control the join type through a document property.

Learn more about the API here.

Easier debugging of TERR data functions

There is now a way to see debug information that is generated at runtime when a TERR data function executes, such as parameter values and your own free-text output. The same mechanism is used whether you run the data function locally using the embedded TERR engine in Spotfire Analyst, or using the TERR engine in TIBCO Spotfire Statistics Services. To enable the debug output, select Tools > Options > Data Functions and select the Enable Data Function debugging check box:

This makes Spotfire show additional debug information from the execution of the data function, such as input and output parameter values. The debug information is viewed in the notifications window that you access from the lower left notification message (click the yellow triangle):


Here is one example of debug output:

You can also easily add your own custom debug information in the script body of a data function:

cat("My debug output: the input value for Multiplier was: ", Multiplier, "\n")
OutputColumn <- InputColumn * Multiplier



Easier upgrades of Node Managers

Spotfire 7.10 improves the node manager upgrade process:

  1. You can upgrade the node manager from the administration UI.
  2. If there is an issue or error with the upgrade, the node manager upgrade can now be rolled back.

Quick deployment of package updates

To deploy Spotfire software, the administrator places software packages in a deployment area and assigns the deployment area to particular groups. 

If a new deployment is available when a user logs in to a Spotfire client, the software packages are downloaded from the server to the client. 

Deployments are used: 

  • To set up a new Spotfire system. 
  • To install a product upgrade, extension, or hotfix provided by Spotfire. 
  • To install a custom tool or extension.

With one click, you can now update, roll back, or delete your deployment packages.

Pagination for Viewing Scheduled Updates

The Scheduling & Routing page now has pagination. By default, you will see 100 scheduled updates and routing rules, but you can switch the view to 50 or 150 items per page.


TIBCO Spotfire Analytics for iOS 2.9

Version 2.9 of the Spotfire iPhone/iPad App adds user notification when new data is available through Scheduled Updates and the ability to synchronize the App settings between multiple iOS devices using iCloud. Read more about this and other Spotfire Mobile releases here.


TIBCO Spotfire® Data Catalog


The Data Catalog makes handling, searching and accessing data from across your organization a natural and fast experience. Even if your data is scattered in disparate data sources – in databases, data warehouses, or elsewhere – you can make all these data sources readily available for self-service access in one unified data catalog. Using Attivio intelligent technology, your data is profiled, organized and semantically enriched so that you can search with natural language across all your data sources, whether the contents are structured or unstructured. Discover relationships between your data with the patented ‘Join Finder’ and bundle just the right, relevant information in self-service data marts. Then start uncovering insights through seamless integration with the Spotfire visualization platform. 

TIBCO Spotfire® 7.9

The main highlights in Spotfire® 7.9 are significant new inline data wrangling features.

Spotfire® 7.9 On Demand Webinar 

Inline data wrangling

Edit data transformations

Spotfire 7.6 introduced the Source View, which provides an overview of your data transformations and calculations, and of how your data tables are derived from rows and columns combined from multiple data sources. Spotfire 7.7 made add rows (unions) editable and smart by using the Spotfire recommendations engine.

With Spotfire 7.9, one of the most anticipated new features of all time is now available: the ability to change data transformation settings. This saves you a lot of time, for example, when a recently added data transformation needs further editing, or when an existing data transformation needs to be adapted to changes in the data source.

Access points for editing data transformations

The image below shows an example of details in the Source View. There are two access points for editing data transformations: one for editing data transformations that are part of a data source, and one for editing data transformations inserted as separate steps.

The image below shows the dialog for working with data transformations and how to gain access to the settings dialogs for each data transformation.

Available edit features

The following editing features are available from the Source View:

  1. Edit a data transformation. (Edit...)
  2. Delete a complete data transformation group. (The waste basket icon.)
  3. Delete a data transformation from a group (including deletion of a data transformation in data source step). (Remove)
  4. Insert a data transformation into an existing transformation group before or after existing data transformations. (Insert menu).
  5. Change the order in which data transformations are applied. (Move Up/Move Down)

Certain non-editable use cases

In some cases, editing a data transformation is not possible. In summary, if a data source (column producer) cannot be refreshed, it cannot be edited. There are two cases when this happens:

If the final data table is (top) embedded.

If a data source includes data transformations and its data is stored (embedded) rather than linked or cached.

A stored data table with a disabled access point for edit data transformation:

A linked data table with an available access point for edit data transformation:

Indications when something goes wrong in data transformations

With Spotfire 7.9, you will be notified when a data preparation step cannot be applied as expected, or, if a data transformation is no longer necessary.

The image below shows an example of the three levels of indications, depending on severity.

An Error indication

An Error indication is displayed if a data transformation cannot be applied.

For example, if a column is missing for a calculation (if it has changed or has been removed in the data source, or, if it has been removed when editing a previous data preparation step in Spotfire), you will see an error. With Spotfire 7.9, and the ability to edit data transformations, many errors can be resolved in Spotfire. Once fixed, the error indication will be reevaluated, and hopefully disappear.

A Warning indication

A Warning indication is displayed if, for example, a defined value formatting step can no longer be applied.

For example, this happens if a column's data type has been changed to Real, with Percentage formatting, using Spotfire's Data panel. If the data type later changes to Real in the data source itself, Spotfire will not apply the data type change and thus cannot apply the Percentage formatting. A Warning highlights that you need to redefine the formatting on the column.

An Information indication

An Information indication is displayed, for example, if a data type is changed to the same data type that the column already has, using a data transformation. This can happen if the data type has been wrong before, but now has been corrected in the data source. The data transformation in Spotfire is then no longer necessary, and this is highlighted using the Information indication.

Inline data cleaning

Spotfire now provides an easy way to clean up issues in your data, right when you see them. It is when you visualize data that you spot errors, so why not fix them right there and then? The new Replace value feature lets you change incorrect data values by double-clicking in a table, in the Details-on-Demand, or in the expanded Data panel. There are two flavors of the replace value feature: replacing a single value only, or replacing all occurrences of that value in the column.

Replace all occurrences of the value

For some types of data issues, the natural way to fix it is to replace all occurrences of the incorrect value. This helps you solve issues caused by alternative (mis)spellings like Tomatoes|Tomatos, Color|Colour or even if some rows of data use acronyms such as CA and some rows use the full name California.  It can also be used to group categorical values into different "buckets", such as grouping states into arbitrary regions.
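Conceptually, a "replace all occurrences" fix behaves like a recorded value mapping that is re-applied to the column every time the data is loaded. The sketch below is a plain-Python illustration of that idea, not Spotfire's implementation:

```python
def replace_all(values, fixes):
    """Apply recorded replace-all fixes to a column. Because the mapping
    runs on every load, new rows that contain a bad value are corrected
    too, and values can also be grouped into buckets."""
    return [fixes.get(v, v) for v in values]

fixes = {"Tomatos": "Tomatoes", "CA": "California"}
print(replace_all(["Tomatos", "Tomatoes", "CA"], fixes))
# ['Tomatoes', 'Tomatoes', 'California']
```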

Replace a single data value

Replacing a single data value is useful, for example, when you find issues in numerical data. Perhaps the decimal point is in the wrong place, or some other type of error. 

Replace specific value from a table details visualization

Replace specific value from the Details-on-Demand

Replacing only a single value requires that there is a defined key that can be used to identify the specific row of data. In the above screenshot, you can see a link, Select key columns. The link leads to the dialog below, which lets you define one or more columns that uniquely identify each row of the data table.

How does it work?

Underneath the surface, the changes are implemented using two new data transformations, Replace value and Replace specific value. This means that no data is changed in the original data source. Instead, the value is replaced when the data is brought into Spotfire. It also means that when data is reloaded, the same corrections are applied again, and in the Replace value case, new instances of the value in question are also replaced.

The logic in the Replace specific value case is to replace the value only if it is the same value as when the transformation was created. Thus, if the value is changed in the data source after the transformation was defined, the transformation will no longer have any effect.
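That keyed, only-if-unchanged behavior can be sketched as follows (a plain-Python illustration of the described logic; the row structure and column names are hypothetical):

```python
def replace_specific(rows, key_cols, key, column, old_value, new_value):
    """Replace `column` with new_value in the single row identified by
    `key` over `key_cols`, but only if the row still holds old_value,
    the value seen when the transformation was created."""
    for row in rows:
        if tuple(row[c] for c in key_cols) == key and row[column] == old_value:
            row[column] = new_value
    return rows

rows = [{"id": 1, "price": 105.0}, {"id": 2, "price": 10.5}]
# Fix a misplaced decimal point in the row with id 1.
replace_specific(rows, ("id",), (1,), "price", 105.0, 10.5)
print(rows[0]["price"])  # 10.5
```

If the value in the source has changed since the transformation was defined, the condition no longer matches and the row is left untouched, exactly as described above.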

Review all changes

The visual Data source view lets you inspect and, if needed, remove the Replace value transformations.

Above, you can see how replace value transformations are shown in the source view.

Recommendations for add rows prefix and postfix support

Before Spotfire 7.9, Spotfire's recommendation engine would automatically detect if new data should be added as rows to existing data. With Spotfire 7.9, the recommendation engine for add rows also automatically matches columns with common names but different prefixes and/or postfixes. For example, the new column 'Sales (2016)' will match the existing column 'Sales (2015)'.

Columns that have the same prefix/postfix will have the prefix/postfix removed from the column name. In the example above, the column name will be 'Sales'.

The prefix/postfix will automatically be entered on all rows in the origin column. In the example above, the origin column will contain '2016' and '2015' for the respective data sources.
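As a rough illustration of the matching described above (not Spotfire's actual algorithm), a token-level comparison of column names can recover the common base name and the per-source values for the origin column:

```python
import re

def match_with_affixes(existing, new):
    """Compare two column names token by token: shared tokens form the
    matched base name; tokens unique to each name become the values
    written to the origin column for the respective data source."""
    t_existing = re.findall(r"\w+", existing)
    t_new = re.findall(r"\w+", new)
    base = " ".join(t for t in t_existing if t in t_new)
    origin_existing = " ".join(t for t in t_existing if t not in t_new)
    origin_new = " ".join(t for t in t_new if t not in t_existing)
    return base, origin_existing, origin_new

print(match_with_affixes("Sales (2015)", "Sales (2016)"))
# ('Sales', '2015', '2016')
```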

Access Amazon Redshift data from Spotfire Cloud web clients

Amazon Redshift is now supported in Spotfire Cloud Business Author and Consumer. This means that when you open an analysis file with data from Amazon Redshift in Spotfire Cloud Business Author and Consumer, you can now load data directly from your Amazon Redshift instance. Both in-database live queries and in-memory data import are supported.

Analysis files with Amazon Redshift connections are authored in Spotfire Cloud Analyst, saved to the Spotfire Cloud Library and are then available for Spotfire Cloud Business Author and Consumer users.

You can manually refresh data from individual data sources from Business Author's Source View.

Note: You might have to allow the Spotfire Cloud servers to access your Amazon Redshift data by whitelisting the servers' IP addresses. More information is available in the TIBCO Spotfire Cloud help.

Access Azure SQL data from Spotfire Cloud web clients

Azure SQL is now supported in Spotfire Cloud Business Author and Consumer. This means that when you open an analysis file with data from Azure SQL in Spotfire Cloud Business Author and Consumer, you can now load data directly from your Azure SQL instance. Both in-database live queries and in-memory data import are supported.

Analysis files with Azure SQL connections are authored in Spotfire Cloud Analyst, saved to the Spotfire Cloud Library and are then available for Spotfire Cloud Business Author and Consumer users.

You can manually refresh data from individual data sources from Business Author's Source View.

Note: You might have to allow the Spotfire Cloud servers to access your Azure SQL data by whitelisting the servers' IP addresses. More information is available in the TIBCO Spotfire Cloud help.

Access OData provider data from Spotfire Cloud web clients

Tutorial: https://community.tibco.com/wiki/access-odata-provider-data-spotfire-clo...

OData is now supported in Spotfire Cloud Business Author and Consumer. This means that when you open an analysis file with data from OData in Spotfire Cloud Business Author and Consumer, you can now load data directly from your OData instance. The OData connector supports in-memory data import.

Analysis files with OData connections are authored in Spotfire Cloud Analyst, saved to the Spotfire Cloud Library and are then available for Spotfire Cloud Business Author and Consumer users.

You can manually refresh data from individual data sources from Business Author's Source View.

Note: You might have to allow the Spotfire Cloud servers to access your OData providers by whitelisting the servers' IP addresses. More information is available in the TIBCO Spotfire Cloud help.

Connectors and live query data tables

Microsoft Azure HDInsight is now supported

Starting with Spotfire 7.9, the Hortonworks Hive connector now supports Microsoft Azure HDInsight.

For more information about Microsoft Azure HDInsight, see: https://azure.microsoft.com/en-us/services/hdinsight/

Apache KNOX is now supported

Starting with Spotfire 7.9, the Hortonworks Hive connector now supports Apache KNOX, with or without Kerberos.

For more details about Apache KNOX, see: https://knox.apache.org

SAP SSO is now supported with the SAP BW connector

It is common that SAP BW deployments use SAP's SSO solution. Spotfire's SAP BW integration now supports this authentication method in all clients and servers. This enables Spotfire users to analyze SAP BW data without entering their SAP BW credentials manually. It also provides a central location for users and roles administration for SAP BW administrators.

Instructions for how to configure Spotfire for SAP BW SSO are available here: https://community.tibco.com/wiki/single-sign-tibco-spotfire-sap-bw-conne...

Configurable maximum allowed number of rows in live query results

Spotfire 7.9 introduces a new safety setting that allows system administrators to set a limit on how large the data tables loaded using live queries (in-database tables) can be. This protects against, for example, an ad hoc analyst splitting a bar chart on a fact table's ID column, which could result in gigabytes of data being loaded into client and Web Player memory.

Google Analytics system web browser authentication

Spotfire's Google Analytics connector now supports Google's new modernized OAuth implementation. The system web browser is now used for user authorization, instead of a built-in Spotfire dialog. This means that if a user is already logged into Google in the system web browser, the login step is performed automatically.

For more details about the reason for this change, see: https://developers.googleblog.com/2016/08/modernizing-oauth-interactions...

New data source versions support

Analysis Services 2016 is now supported

Spotfire 7.9 (and later) now supports Analysis Services 2016.

For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#ssas

PostgreSQL 9.5 and 9.6 are now supported

Spotfire 7.9 (and later) now supports PostgreSQL 9.5 and 9.6.

For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#postgresql

MySQL 5.7 is now supported

Spotfire 7.9 (and later) now supports MySQL 5.7.

For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#oraclemysql

SAP BW 7.5 is now supported

Spotfire 7.9 (and later) now supports SAP BW 7.5.

For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#sapnetweaver

Apache Spark SQL 2.0 is now supported

Spotfire's Spark SQL connector now supports Spark 1.6.0 to 2.0.2.

NOTE: The latest TIBCO ODBC Driver for Apache Spark SQL must be used in combination with the connector.

For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#apachesparksql

Information Services now supports constrained Kerberos delegation

Spotfire Information Services now supports constrained Kerberos delegation in combination with compatible JDBC drivers.

Location Analytics

Nautical Miles unit (new feature)

Nautical Miles is added as a unit of measurement, in addition to the existing imperial and metric units, when using radius and rectangle selection.

Get the coordinates of a location (new feature)

You can now right-click anywhere on a map and get geographic coordinates (latitude and longitude) for a location.

Easier access to map layer (enhancement)

It is now much easier to enable access to the map layer when Spotfire cannot access the Internet or runs in a restricted environment: only one unique domain needs to be allowed.

Advanced Analytics

  • Continued work towards broader R compatibility, to enable more and more potential applications to be run on TERR. As of this release, 99% of packages on CRAN, almost 10,000 community packages, can be loaded in TERR. (Well done, TERR Team!). Full details on compatibility are available on the TERR Documentation site.
  • Significant improvements to TERR performance in many areas.
  • TERR can now be used in RStudio to create interactive R Markdown notebooks. R Notebooks allow for direct interaction with R while producing a reproducible document with publication-quality output.
  • A new Guide to Graphics in TERR, which provides tips and examples on using JavaScript-enabled packages, certain open-source R packages, and the TERR RinR package to create graphics from TERR.



Upgraded logging framework

For Spotfire Server 7.9, the logging framework has been upgraded from Log4j to Log4j2. The benefits of upgrading to Log4j2 include the following:

  • You can manage logging from the UI. For example, you can start debug logging during runtime, without having to manually edit configuration files.
  • Log4j2 is garbage-free, which reduces the pressure on the garbage collector.
  • Java 8 feature sets are fully supported, including lazy logging.

If you used a custom-modified log4j.properties file in any Spotfire Server version between 7.5 and 7.8, you must manually add these modifications to the new log4j2.xml file. To learn more about the procedure, please go here.


Sites

You can now create multiple Spotfire environments that share the same Spotfire database, including the library and user directory. These environments, which are called sites, can be configured to reduce latency for multi-geographic deployments. Sites also enable the use of a variety of authentication methods, along with different user directories, within the same deployment.

Each site includes one or more Spotfire Servers along with their connected nodes and services. A site's servers, nodes, and services can only communicate within the site, but because the Spotfire database is shared among the sites, all of the sites have access to the users, groups, and library in your Spotfire implementation.

The benefits of using sites include the following:

  1. You can route user requests from a particular office to the servers and nodes that are physically closest to that office. This reduces the impact of network latency between servers that are located in different geographic regions. 
  2. You can enable different authentication methods for different sets of users who share a Spotfire implementation. For example, internal users can be authenticated with Kerberos authentication while external users, such as customers and partners, can be authenticated with a username and password method.

To learn how to implement sites, please go here.

TIBCO Spotfire® 7.8

Spotfire 7.8 extends the reach of the Spotfire Recommendation engine into the data space, making it easier than ever to add more rows of data to your analysis. For administrators, Spotfire 7.8 adds support for authentication through OpenID Connect (OIDC). And for IronPython and C# developers, there are new APIs that enable you to create easier-to-use, more powerful analytic applications using Spotfire.

Recommendations for Add rows

In Spotfire Business Author, when adding new data, the user can now get a recommendation to add the data as rows to an existing data table, if the Spotfire Recommendation engine determines that this is suitable. Furthermore, Spotfire can automatically match the columns from the original and the new data sets. See how this works in this video, and for more details see this article.

Data Access

Improvements to the SQL server connector

The Spotfire SQL Server data connector now adds support for SQL Server 2016, Azure SQL Database, and Azure SQL Data Warehouse.

Configure the maximum amount of in-database rows in the table visualization

In earlier versions of Spotfire, when you kept the data in-database, as opposed to loading it into the Spotfire in-memory engine, Table visualizations were limited to showing at most 10,000 rows. An administrator can now configure the maximum number of rows to display in a Table visualization when running against in-db data.

The setting, which is called TableVisualizationExternalRowLimit, is reached through the Administration Manager.

Location Analytics

WMS 1.3.0 Support

Spotfire map charts now support version 1.3.0 of the WMS standard.

For Developers - new APIs


KPI chart API

The KPI chart API allows authors and developers to automatically configure KPI charts from IronPython scripts or custom tools. This enables creating more user-friendly and powerful visual analytics applications for end users. See this article for further details and examples.

LayoutDefinition API

IronPython and C# developers can now define the layout of visuals on a page in more detail. The new API allows specifying vertical and horizontal proportions to lay out the visuals on a page. This means you can now achieve similar layouts using the API as you can when manually arranging visuals on a page. See this article for further details and examples.
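For illustration, a minimal IronPython sketch of such a script might look as follows. This is a sketch under assumptions, not a definitive implementation: the visual variables (barChart, table1, mapChart) are hypothetical script parameters, and the exact namespace and method overloads should be checked against the API documentation shipped with your Spotfire version.

```python
# Illustrative IronPython sketch for the Spotfire script editor; it will not run
# as standalone Python. Assumes script parameters: 'page' (a Page) plus three
# visuals on that page named barChart, table1, and mapChart (hypothetical names).
from Spotfire.Dxp.Application.Layout import LayoutDefinition

layout = LayoutDefinition()
layout.BeginStackedSection()        # outer section: children stacked vertically
layout.AddVisual(barChart)          # top row: one visual across the full width
layout.BeginSideBySideSection()     # bottom row: two visuals side by side
layout.AddVisual(table1)
layout.AddVisual(mapChart)
layout.EndSection()                 # close the side-by-side section
layout.EndSection()                 # close the stacked section
page.ApplyLayout(layout)            # apply the layout to the page
```

The Begin/End section calls nest, so more complex grids can be built by nesting stacked and side-by-side sections inside each other.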

Administration Improvements - Federated Authentication: OpenID Connect (OIDC)

Spotfire Server now supports the use of OpenID Connect, an open standard and decentralized authentication protocol. Using OpenID Connect, a customer can set up Spotfire so that users can log in with an account they already have; for example, a user can log into Spotfire with Google, Yahoo, or Salesforce. This eliminates the need for administrators to provide their own login systems (such as LDAP or AD), and reduces the number of usernames and passwords their users need to remember.

To set up OpenID Connect with Spotfire Server, there are two prerequisites:

  • You must configure a public address URL within Spotfire Server.
  • You must register a client at the provider with a return endpoint URL, and receive a client ID and a client secret from the provider.

Read more about this in the Spotfire Server documentation.

New Solutions and Extensions

Spotfire Templates, Data Functions, Accelerators, Extensions and Custom Datasources are available for a wide range of industry vertical and horizontal use cases.  Most are provided as free downloads.  The most popular, recent offerings are shown below.  For a complete list, view all analytics components on the TIBCO Exchange.


Spotfire Plug-in for Alerting

The Spotfire Plug-in for Alerting allows you to configure alerts directly from any Spotfire analysis file, and can be used to send an alert when thresholds or rules on any chart are violated. It is an extension for TIBCO Spotfire that integrates with Automation Services via an alerting task that can generate email, text, or pop-up alerts.

Live Datamart Custom Data Source

This Custom Datasource is a TIBCO Spotfire® Extension that enables users to build interactive Spotfire visualizations using data stored in TIBCO® Live Datamart.

Customer Analytics and Marketing

The Customer Analytics template series is used to analyze customers' purchase behavior. It includes Spotfire analysis templates for segmentation, propensity, and affinity.

A/B Testing data functions provide analysis for a number of marketing use cases where the goal is to compare the effect of different “treatments” on a response, such as click-through rates, orders or sales dollars. These treatments can be different web pages, different email designs, copy, or promotions.

Machine Learning

The Gradient Boosting Machine analysis template and data function are used to create a GBM machine learning model to understand the effects of predictor variables on a single response.  Examples of business problems that can be addressed include understanding causes of financial fraud, product quality problems, equipment failures, customer behavior, fuel efficiency, missing luggage and many others. 

The Clustering with Variable Importance Data Function clusters objects together based on similarities between the objects and ranks the input variables according to their influence on cluster formation.


The Financial Crime Buster Analysis Template guides the user through the tasks of ad hoc data discovery, supervised model creation, and unsupervised model creation to build a strategy for combating financial crime.

Geoanalytics and Energy

The Contour Plot Data Function generates a contour plot as a feature layer on any map chart.

The Decline Curve Analysis Data Function calculates a Hyperbolic Decline Curve Analysis using production oil and gas data. 

TIBCO Spotfire® 7.7

Version 7.7 further extends the capabilities of TIBCO Spotfire. The main areas of improvement are mobile application development, web authoring, data wrangling and management, and administration improvements for scheduled updates, resource pool management, and Automation Services.

Below you can find more information and articles about specific features.

KPI chart and Mobile

In Spotfire 7.7 it is now much easier to create mobile applications with all types of visualizations. The minimum page size option enables vertical scrolling, so that users can see one or a few visuals at a time on a small screen, while users on a larger screen can see more (or all) visuals at once. In addition, the KPI chart now has sparklines to give more context to the KPI.

Also read the Best practices for designing mobile applications in TIBCO Spotfire

Data Access

Spotfire 7.7 provides a brand new self-service connector for Attivio, expanding to business users the ability to create analysis files based on Attivio data lake data and unstructured content. With Spotfire 7.7, business users can even author analysis files that use the power of Attivio's full-text search engine. Data is brought into Spotfire on demand, based on what end users search for. SAP BW continues to be a very popular source of data, and Spotfire 7.7 delivers some of the most frequently requested features in this area. Both new and enhanced self-service data connectors benefit from the ease of use in Spotfire 7.7: by decreasing the number of steps users need to edit data connections and deploy connectors to the Spotfire ecosystem, valuable time is saved.

Data wrangling

Spotfire 7.7 continues to make it easier to prepare your data. Now it is possible to edit settings directly from the visual data source view.

Web authoring improvements

Spotfire Business Author has a number of new capabilities, such as creating and configuring KPI charts, creating multi-layer maps, adding color rules, and, as mentioned above, adding rows to data tables.

Administration Improvements

The main improvements in administration features are new TIBCO Spotfire Automation Services jobs for sending emails with attachments and for saving data to a file, improved management of resource pools for the web player and Automation Services, and improved monitoring of so-called Scheduled Updates.

Custom panel API for Spotfire web clients

With Spotfire 7.7, developers can add custom panels to the Spotfire web clients.

Other API Improvements

Spotfire 7.7 adds APIs for:

  • Cross Table sort mode (get/set); Global or Leaf: crossTablePlot.SortRowsMode = SortMode.Global;
  • Cross Table empty cell text (get/set): crossTablePlot.EmptyCellText = "-";
  • Get and set minimum page dimensions: page.MinimumWidth = 713; page.MinimumHeight = 446;
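Taken together, the one-liners above can be combined into a single IronPython script. The following is a minimal sketch, assuming a Page script parameter named page that contains a cross table; the namespace locations and the VisualTypeIdentifiers lookup are assumptions to be checked against the API documentation for your Spotfire version.

```python
# Illustrative IronPython sketch for the Spotfire script editor; not standalone
# Python. Assumes a script parameter 'page' (a Page) containing a cross table.
from Spotfire.Dxp.Application.Visuals import (CrossTablePlot, SortMode,
                                              VisualTypeIdentifiers)

for visual in page.Visuals:
    if visual.TypeId == VisualTypeIdentifiers.CrossTable:
        crossTable = visual.As[CrossTablePlot]()
        crossTable.SortRowsMode = SortMode.Global  # sort globally, not per leaf
        crossTable.EmptyCellText = "-"             # text shown in empty cells

# Minimum page dimensions enable scrolling on small (e.g. mobile) screens
page.MinimumWidth = 713
page.MinimumHeight = 446
```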

Location Analytics

Spotfire 7.7 improves the use of map charts.

TIBCO Spotfire® 7.6

7.6 is an important release for TIBCO Spotfire, thanks to the modernized client and server architecture. This new foundation is helping make it easier and faster for us to make visualization improvements, and significantly simplifies server administration and manageability. This page summarizes the cool new features in TIBCO Spotfire 7.6.

Below are tutorials and video links to learn more about a selection of the new features.

KPI chart and Mobile:

The new KPI chart is a big fan favorite in TIBCO Spotfire 7.6.  It is now easier than ever to configure a Key Performance Indicator dashboard in TIBCO Spotfire and make it available to consumers using the TIBCO Spotfire iOS app for mobile devices or the TIBCO Spotfire web client.  Create dashboards that let the user browse their KPIs, tapping a KPI to view more detailed KPIs, or to view more details in regular TIBCO Spotfire visuals.

Waterfall Charts

Another great new visualization is the Waterfall chart, which works with TIBCO Spotfire Cloud 3.6 and TIBCO Spotfire 7.6. Waterfall charts are useful when you need to show how different component factors contribute to a final result. They are commonly used in financial analysis, but are useful for other use cases as well. So if you're unfamiliar with why you would use a waterfall chart in the first place, start by reading this post on why to use a waterfall chart. Then explore how to create a waterfall chart in TIBCO Spotfire with the following tutorials:

Show top N vs the rest

It can be useful to visualize the Top N of something, versus "the rest".  This is a great visualization technique to improve chart readability when you have a few large groups and many smaller ones. This article is relevant for 7.6 and older versions as well:

Inline Data Preparation and Data Wrangling

Below are a selection of new, easy to use tools to prepare and wrangle data. This video shows how they can be used:


Visual overview of data table structures

It is sometimes challenging to understand which data sources and what methods have been used to create combined data tables. To solve this problem, data table data sources and operations can now easily be viewed in the Source view of the expanded data panel. It is possible to see detailed information about operations and preview intermediate resulting data tables after individual steps.


Split columns into new columns based on column values

Sometimes, column values contain multiple pieces of information. Examples are first and last name, or city and zip code. It's now easy to split columns of this type into separate columns containing the individual values from the original column. The original column can then be hidden from the analysis, so that it does not distract or take up valuable space (in, for example, the Data panel).


Unpivot from the data panel

Data can be organized in different ways, for example, in a short/wide or tall/skinny format, but still contain the same information. Often, it is easier to visualize data organized in a tall/skinny format, that is, when the values are collected in just a few value columns. Unpivoting is one way to transform data from a short/wide to a tall/skinny format, so the data can be presented the way you want it in the visualizations. The Data panel (both in TIBCO Spotfire Analyst and TIBCO Spotfire Business Author) now has a built-in unpivot tool on the right-click menu.


Using multiple screens when analyzing data

When you want to simultaneously view more visualizations than will fit on a single screen, you can now analyze your data using multiple screens!

New Google Analytics connector in TIBCO Spotfire Business Author and TIBCO Spotfire Analyst

TIBCO Spotfire Business Author and TIBCO Spotfire Analyst now support direct access to, and analysis of, data from Google Analytics.


Video: https://www.youtube.com/watch?v=Prju49PPRQ4

New Salesforce.com connector in TIBCO Spotfire Business Author

TIBCO Spotfire Business Author now supports direct access to, and analysis of, Salesforce.com data, without using the installed TIBCO Spotfire Analyst client.


Caching Data using Automation Services

Performance can often be improved by periodically loading data from databases and caching it, so that TIBCO Spotfire analyses requiring the data can be opened quickly and without each analysis hitting the database with queries. 

Custom/External Authentication in TIBCO Spotfire 7.5/7.6

Many customers want to embed TIBCO Spotfire Web Player into a portal or other web application and secure access by passing authentication information from the portal to TIBCO Spotfire. Customers also have internal web application security standards that require single sign-on to all web applications, including TIBCO Spotfire Web Player. TIBCO Spotfire supports these scenarios via custom and external authentication. With TIBCO Spotfire 7.5, the architecture has changed such that the support for these scenarios has moved from the TIBCO Spotfire Web Player to the TIBCO Spotfire Server.

