TIBCO Spotfire® Data Catalog 5.5.0
Spotfire® Data Catalog 5.5.0 introduces support for the new add-on licenses Spotfire® Data Catalog Language Packs. The language pack licenses extend the rich content analytics capabilities to include support for a wide range of languages other than English. The extended language support is distributed as five separate language pack licenses, each covering languages from a distinct region.
In addition to this and many other content analytics improvements, this new release brings an overhaul of the SAIL search interface, and Spotfire Data Catalog now leverages machine learning to enable higher quality search results. Read more about Spotfire Data Catalog on the product wiki page.
TIBCO Drivers 1.4.0
Version 1.4.0 of TIBCO Drivers is now available! This release brings you updated and improved versions of the included data source drivers for Apache Spark SQL, Salesforce, MongoDB and Apache Cassandra.
TIBCO Spotfire Analytics for Android® 1.0
Now users of Android devices can also benefit from a native app for their device. Similar to the Spotfire app for Apple iOS, the Android app makes it easier to keep track of business facts or monitor business performance from anywhere by using your tablet or phone. Read more about the Spotfire Android app at the Mobile what's new page for Spotfire. Download the app from the Google Play Store.
TIBCO Spotfire® 7.11 LTS
Spotfire 7.11 brings highly requested improvements in data wrangling, cross tables, tables and maps, and it also makes the life of the Spotfire Administrator easier through improvements in scheduled updates and management of multiple Sites. Developers and application builders will enjoy the upgraded IronPython engine that now supports the latest (2.7.7) IronPython version.
In addition to the new features, Spotfire 7.11 has been designated as a Long Term Support (LTS) version. LTS versions are typically supported for up to 36 months from release. For LTS versions, defect corrections will typically be delivered as hotfixes or service packs while for regular releases they will be delivered in subsequent releases.
Calculate the subtotals and grand totals in cross tables based on the aggregated values displayed in the cells
It is now possible to configure the cross table to calculate subtotals and grand totals based on the aggregated values visualized in the table, as an alternative to calculating them using the underlying row-level data. This is useful, for example, when you want to visualize the sum of the absolute values of the categories displayed in the table.
In the screenshot above, you can see the Properties dialog where you can select, for each column, whether to calculate the subtotal and grand total on the underlying row values, or as the sum of the values displayed in the cross table cells.
Conditional color of the text in tables and cross tables
It is now possible to color the text in tables and cross tables through color rules, as an alternative to coloring the cell background. This provides more freedom in the visual expression of the tables.
Search and zoom to a location
You can now search for a geographic location on the map and quickly zoom in to its geographic area. When you start typing a location name, Spotfire suggests locations you can select to zoom to on the map.
Switch data table now keeps the visualization configuration
Visualizations will now keep their configuration when switching to another data table, provided that the new and the old data table include the same columns. This saves time when switching back and forth between identical tables.
Replace data source
As a Spotfire user, you are used to working with multiple data sources mashed together, to provide more answers from your data. With this release of Spotfire, you can easily replace one of those data sources with another data source, without compromising the data wrangling and data mashup you have done.
Example: Going from test to production
The picture below shows the source view in an analysis file. Three data sources are used and mashed together using Insert Columns (joins).
The first data source is a linked data table containing sales sample data, stored in a local Spotfire Binary Data Format file (SalesOrderDetailSample.sbdf).
By working with an alternative and local data source you can develop an analysis file without access to the production data source. This is convenient, for example, when working off-site, or, when you have work in progress that you do not want to introduce in your production environment (for performance reasons or for other reasons).
Once you are ready to switch to the production data source, you can access the new replace data source feature from the data source menu in the source view:
The picture below shows the new Replace Data Source dialog. In this example, we select to switch to the corresponding data table in Microsoft SQL Server.
In the image below, the sample data source has been replaced. The data source type is now a data connection instead of an sbdf file.
Add transformation to existing data source
In addition to the capability to replace data sources, this release of Spotfire also enables you to add data transformations to existing data sources. Previously, data transformations could only be added when creating a new data source or when editing data transformations already part of the data source.
There are certain situations when it's beneficial to attach transformations to data sources. The benefits are based on the fact that Spotfire doesn't save the original data in the analysis file, only the transformed result.
Let's assume you prefer to store a copy of your data in your analysis file so that it is available offline and so that you can decide when a reload is needed. Let's also assume that you are loading 200M rows into Spotfire, and then define a pivot data transformation to reduce the size of the data table. Having the pivot data transformation as part of the data source stores only the pivoted result table and discards the 200M original rows. This dramatically reduces the size of the analysis file. If the transformation was performed as a separate step, the original 200M rows would be stored.
This will also reduce the loading time when opening the analysis, since the pivoted table is already available. If the transformation was performed as a separate step, the pivot operation would have had to be performed as part of loading the analysis file.
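The effect can be illustrated with a small, hypothetical pure-Python sketch (not Spotfire code, and the sample categories and values are invented for illustration): aggregating many detail rows down to one row per category means only the small, pivoted result needs to be stored.

```python
from collections import defaultdict

# Hypothetical detail rows: (category, value) pairs standing in
# for the original, much larger row-level data.
rows = [("A", 1), ("A", 2), ("B", 5), ("B", 7), ("B", 3)]

# Pivot: aggregate the values per category, keeping one row each.
pivoted = defaultdict(int)
for category, value in rows:
    pivoted[category] += value

# Only the aggregated result needs to be stored: 2 rows instead of 5.
print(dict(pivoted))  # {'A': 3, 'B': 15}
```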
Custom data transformations may also benefit from being performed as part of the data source.
The image below shows the new access point to insert a transformation on a data source.
Edit replace value transformations
It is now possible to edit replaced values without creating additional transformation steps within the analysis. This means that you can go back and modify previously added replacement operations, if they are no longer applicable. By editing already created operations, you can avoid having a large number of transformations for replacing the same value over time, and make the analysis cleaner.
The image below shows the entry point for editing two replace specific value transformations. Click the Edit button to open the new edit dialog.
The image below shows the new edit dialog for Replace Specific Value:
Since we have replaced a specific value we have defined both a new value and a primary key column (PermitNumber). You can add more key columns and you can replace the currently used key column and/or value in the dialog.
You can also insert a new replace value transformation (using the new Replace Value and Replace Specific Value dialogs) into an existing transformation group by clicking Insert in the Edit Transformations dialog:
Edit relational data connection data sources from the source view
Previously, Spotfire users had source view access to make quick changes to data connection configurations. This made it possible to add and remove tables and columns, add or modify custom queries, modify prompts, change column names and other settings that are part of data connections.
With this release, it is just as easy to make changes to the data source used by the data connection. The data source holds information regarding source IP, authentication method, time-outs and database, which are all easy to modify now.
For example, it has never been as easy to move from a test database to a production database. With a few clicks from the source view, you can now point the data source to another database, maybe even to a database with another type of authentication method. If different table names are used in the databases, for example, 'dbo.test.transaction' in the test database and 'dbo.prod.transaction' in the production database, Spotfire highlights these differences in the data connection and makes it easy to select the corresponding table in the production database.
The image below shows a data connection data source being displayed in the source view. Click on the settings button (the gear icon) on the data source node to edit the data connection.
The image below shows the Views in Connection dialog reached from the settings button (the gear icon). From here, you can enable full editing of the data connection by clicking the button in the lower left corner of the dialog.
The image below shows the new Edit Data Source Settings button. This is a new feature in 7.11 and provides a shortcut to editing your data source.
The image below shows the Microsoft SQL Server Connection dialog which contains the settings for the connection data source. From here, it is easy to, for example, switch from a test to production server or database. You can also switch authentication method.
Option to query SAP BW directly towards the SAP BAPI API
The SAP BW connector now has the option to query SAP BW using the native SAP BAPI API, without going through the previously used ODBO API. If you choose to enable the BAPI API integration, you can expect a boost in performance and more detailed messages from SAP BW should something go wrong. If you choose not to enable the BAPI API, the SAP BW connector will use the ODBO API as before.
We are convinced that the BAPI API will provide a better user experience and allow us to develop new features over time. We have therefore decided to deprecate support for the ODBO API in a future Spotfire release. However, both APIs will be available for a period of time, to allow you to upgrade your SAP BW client driver installation to the BAPI API at your own pace.
The image below shows the title of the SAP BW Connection dialog, where it is indicated that the SAP BAPI API is being used.
Load more than one million SAP BW data cells
Note: This feature becomes available when you have enabled the SAP BW's BAPI API on Spotfire clients and servers. Please see the "Option to query SAP BW directly towards the SAP BAPI API" feature above for more details.
SAP BW limits the number of non-empty cells that can be retrieved in metadata and in result data sets. This limit is configurable in SAP BW, and common limits are between 500k and 1M non-empty cells. By leveraging the SAP BW BAPI API, Spotfire is no longer bound by this limitation and allows you to analyze more data than the limit permits. This means that you can connect to BEx queries representing more data, thus extending the number of use cases you can implement with Spotfire.
Only Spotfire administrators can enable this capability in the Spotfire platform.
Specify SAP BW operation timeout
It is common for SAP BW BEx queries to represent very large amounts of data. This means that Spotfire data import queries towards BEx queries sometimes need some extra time to complete. You can now increase the default 10 minute timeout as part of the SAP BW data connection. This allows you to import and analyze larger data volumes without queries timing out before your data is available.
The images below show the SAP BW Connection dialog. Click the new Advanced tab to reach the operation timeout setting.
Increased SAP HANA function support
Spotfire's SAP HANA connector now supports the following additional functions:
The image below shows a few of the new functions in Spotfire's Custom Expression user interface. Note that the details for how to use these functions are documented by SAP and are subject to change over time.
Support for new Thrift transport modes in Apache Spark SQL
Spotfire's connector for Apache Spark SQL now supports the Thrift transport modes Binary, SASL, and HTTP. Having the TLS security settings on the first page, and turned on by default for new connections, makes it quicker to configure your data connections in a secure way, for example, to Databricks data sources.
Support for Teradata 16
The Teradata connector and Information Services now support Teradata 16.
The images below show the different tabs available in the updated Teradata Connection dialog.
Spotfire Cloud access to data from TIBCO Spotfire Data Catalog in Spotfire Cloud Business Author and Consumer
Analysis files opened in Spotfire Cloud Business Author and Consumer can now load data directly from publicly available TIBCO Spotfire Data Catalogs.
Analysis files are authored in Spotfire Analyst, saved to the Spotfire Cloud Library and are instantly available for Business Author and Consumer users.
As a Business Author and Consumer user, you will receive fresh data when the analysis is opened. You can manually refresh data from individual data sources in the source view of Spotfire Business Author.
The image below shows the library browser of the Spotfire Cloud web client.
When you open an analysis based on data from TIBCO Spotfire Data Catalog, it is now possible to refresh the data directly from the source view:
LDAP and Spotfire Authentication
Spotfire 7.11 allows users to access Spotfire even though they are not part of the external user directory.
If you configure authentication towards an external user directory such as an LDAP directory, or a Windows NT Domain, you can combine this with adding users manually to the Spotfire database so you do not have to add them to the LDAP directory.
To see more on this feature, go here.
Scheduling & Routing
Spotfire 7.11 provides three new features for scheduling and routing, to help administrators more easily manage routing rules and analysis files that are not cached.
- You can now prevent users from opening analysis files that are not cached by scheduled updates. This is useful because certain analysis files take a significant amount of resources to load initially; you can avoid this load by not allowing users to open an uncached analysis file. To see more on how to do this, read more here.
- You can now recover a rule if it was automatically disabled. When an analysis file is deleted from the library, the routing rule associated with it will fail and the rule will become disabled. Now, if the analysis file is imported back to its previous location, the rule is recovered and can automatically be re-enabled by updating a setting in the server configuration file, enable-recovered-rules-automatically. To see more on this feature, go here.
- You can now copy routing rules and schedules from one site to another. For details on how to use this feature, go here.
Update to the Library Browser Page
The Spotfire library browser now provides a left-hand navigation section that lets you view recently opened files as well as quickly browse for other files of interest.
IronPython support updated to version 2.7.7
TIBCO Spotfire 7.11 supports the latest version of IronPython (2.7.7), enabling more powerful language features and libraries.
IronPython is an implementation of the Python programming language that is tightly integrated with the .NET Framework. Using IronPython scripts with Spotfire, you can utilize not only .NET Framework and Python libraries, but also the full Spotfire C# API. This makes IronPython scripting a powerful tool when creating advanced analytic applications in Spotfire. If there is a need to run certain scripts in the older version of IronPython, this is still supported by selecting the older version in the drop-down list shown in the image below.
For tutorials and examples, see https://community.tibco.com/wiki/ironpython-scripting-tibco-spotfire.
TIBCO Spotfire® 7.10
Spotfire 7.10 provides a new, high-resolution, export to PDF feature, including a modern, user-friendly UI, with a live preview. In addition, there are also very useful improvements in data access (especially for SAP BW users), visual analytics, and, for administrators, we have made the node manager upgrade process simpler.
New and improved Export to PDF
With Spotfire 7.10, the new Export to PDF feature includes the following main improvements over the legacy implementation:
- The exported visualizations use the visual theme in the analysis.
- The exported PDF is of a higher resolution.
- The modern user interface makes it easier to configure the export, to get the result you need.
- The dialog provides a preview that lets you see the result of your settings.
You access the new Export to PDF dialog from the File > Export menu in Spotfire Analyst, or, from the menu in the top right of the web client.
In the left-hand panel, there are controls that let you configure what to export, and what type of content to include (such as page numbers, date, annotations, etc.). You can also find basic settings here, such as the paper size and page orientation. Just to the right of the panel is the preview area, which is dynamically updated as you change the export settings in the left-side panel, so you can see the effect of your selections directly. The preview can also be zoomed using the controls in the upper right corner.
Control the proportions of the exported content
You can now easily control the proportions of exported visualizations. In the Proportions part of the user interface, you can choose to use one of three options:
As it is on your screen
This setting is the default option in Spotfire. Choosing this option ensures that the PDF page displays the content exactly the way it looks on your screen. However, you might not use all the available space on the paper with this choice, as shown below. (In this image, the paper is A4 landscape-oriented, so a portion of the paper in the bottom is not used.)
Fit to PDF page
If you want to utilize your paper dimensions, choose Fit to PDF page. When you select this option, check to make sure no labels are truncated, because the aspect ratio for the dashboard may be changed significantly. (The image below shows how this can happen.)
Notice that, in the picture above, the labels in the lower line chart to the right do not fit the space, the labels in the bar chart look sub-optimal, and only parts of the company names in the table in the center are visible. In this case, using the feature Relative text size is a great option.
Relative text size
Use the Relative text size slider to scale the text to the best size. Below, you can see the result: the text is smaller, but the labels and the text in the table fit and are easily readable. Watch this video showing how Relative text size works.
If the options Fit to PDF page or As it is on your screen cannot give you what you need, you can use the custom proportions to define any desired aspect ratio for the exported content.
Exporting all rows from a table, or all trellis pages from a trellised visualization
In order to export all rows from a table, not only those rows visible on the screen, or to export all trellis pages for a trellis-by-pages visualization, you first select the visualization to export. A new, very convenient, way to do this is from the right-click menu in the visualization:
The What to export – Active visualization option is then automatically selected, and you can select Export entire table, as shown below.
The process is very similar for export of all trellis pages in a trellis-by-pages visualization.
Visualize direction on maps or scatter plots by setting the rotation of markers
It is now possible to rotate markers on maps and scatter plots, based on values in a column or a custom expression. For example, using this new feature, you can configure the direction of markers to indicate wind direction, a ship's heading, or similar direction information.
Set the rotation using a column or a custom expression on the new Rotation axis of the scatter plot or map chart marker layer. The rotation is described in degrees, where 0 is North, 90 is East, 180 is South, and so on.
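As an illustration of this degree convention (a pure-Python sketch, not Spotfire API code), a compass bearing can be translated into an (east, north) direction vector, assuming 0 = North and clockwise rotation:

```python
import math

def bearing_to_vector(bearing_deg):
    """Convert a compass bearing (0 = North, 90 = East, clockwise)
    to an (east, north) unit direction vector."""
    rad = math.radians(bearing_deg)
    return (math.sin(rad), math.cos(rad))

# 0 degrees points due north, 90 degrees due east.
print(bearing_to_vector(0.0))
print(bearing_to_vector(90.0))
```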
Scroll in a cross table hierarchy
When the hierarchy of a cross table is very large, you can now scroll in the hierarchy, as well as in the values. This new feature makes it easier to work with cross tables that have a large hierarchy.
The Appearance tab in the cross table properties now has a new section that is called Horizontal scrolling.
There are three options in the Horizontal scrolling section:
Freeze row headers keeps the row header frozen, so you can scroll only the column values. This was the behavior in version 7.9 of Spotfire, and earlier.
Scroll row headers specifies that the cross table always scrolls both row headers and column values.
Adjust automatically is the new default: The cross table sets the best of the two other options automatically, depending on the width of the row headers relative to the width of the whole cross table.
Quick auto-zoom on maps
You can now enable or disable auto-zoom much more quickly, directly from the right-click menu in the map chart.
Improved performance when filtering data in-db
When you use Spotfire to analyze data in a database with a live connection (data still kept in the database, not brought into memory with Spotfire), filtering performance is much better in Spotfire 7.10 compared to earlier versions.
Spotfire business users can now limit data using SAP BW BEx variables
Spotfire has had native self-service support for SAP BW since version 5.5. The number of Spotfire users on SAP BW has grown rapidly since then. We are happy to announce that the most-requested SAP BW integration feature is available in Spotfire 7.10.
In SAP BW BEx queries, variables are used to limit the data to be loaded. With Spotfire 7.10, it is now possible to connect SAP BW BEx query variables to Spotfire prompts automatically. Now, you can build analysis files that are directly connected to the variables that are already part of your BEx queries and your business processes. SAP BW users will be familiar with the variables, and can, for example, very easily narrow down the data being analyzed to certain time frames, product categories, equipment maintenance areas, oil wells, accounts, employees, and so on.
The following image shows the updated Data Selection in Connection dialog box for defining SAP BW views in Spotfire.
The panel to the right has a new section for prompt settings. To activate prompts for a variable, select the Prompt for values check box.
The prompt type adapts to the prompt type defined in the BEx variable. In the example above, the prompt type for the TYPE variable is locked to Single selection, because that corresponds to the input allowed for that BEx variable.
To make it easier for business users to understand the prompt, you can provide a description of the prompt.
The following image shows that it is possible to control in which order the prompts are displayed to business users. This feature is important because variables are related, and limiting one variable affects the values available for the following variables.
The following image displays an example of the first prompt, for the variable Tax:
The following image displays the second prompt, TYPE. Users can enter variable values manually, or (as in this example), they can load the unique values in a list from SAP BW.
The following image displays the third and last prompt, Region, with a value which has been entered manually.
The following image depicts the data analysis as a bar chart, where the data is limited by the selections previously made using the prompts.
Some BEx variables are mandatory, and values must be defined before a user can open the query. By enabling prompting, you let the end user define the variable value, instead of defining it in the connection configuration.
Note: You can define both a value and prompting for the same BEx variable. The variable value you define in the connection is the default selection in the prompt dialog for the variable when the connection is opened. This can be useful if you save the connection in the library for reuse. However, if you create an analysis with prompts and save it to the library, then the selections you made in the prompts when creating the analysis will be stored in the analysis. In that case, it will be your selections in the prompts, rather than the variable values defined in the connection, that are the default selections in prompts shown to the end users.
Compared to working with relational data sources, BEx queries are more restrictive regarding how you can set up prompting. When a variable is defined in the query, it is designed to accept only certain input. For example, it can be a single value, a multiple value or a range. In Spotfire, the accepted input determines the prompt types you can use for a BEx variable.
Note: By default, unless Load values automatically is selected, the prompts for BEx variables give users the option to enter values manually. When a user enters variable values manually, Spotfire supports entering values as text (captions). Entering values as keys is not supported.
SAP Message Server support for SAP BW
You can now connect Spotfire to your cluster of SAP BW systems using the SAP Message Server load balancer. Previously, you had to connect Spotfire directly to a certain SAP BW instance.
The SAP Message Server allows IT to assign application servers to workgroups or specific applications. Users are automatically logged in to the server that currently has the best performance statistics and/or the fewest users.
The image below shows the updated SAP BW connection dialog, with the new fields for entering SAP Message Server connection details.
Single Sign On to SAP HANA with SAP SSO 3.0
It is common that SAP HANA deployments use the SAP SSO 3.0 Kerberos solution. The Spotfire SAP HANA integration now supports Kerberos authentication in combination with SAP SSO 3.0, in all clients and servers. This change enables Spotfire users analyzing SAP HANA data to access data without entering their SAP HANA credentials manually. It also provides a central location for users and roles administration, for SAP HANA administrators.
Configurable Essbase Measure dimension
If you are connecting to an Oracle Essbase cube that does not have a dimension tagged as the accounts dimension, you can now specify which dimension contains the measures. In previous releases, this was not possible, and some users could not connect to their Essbase cubes.
The following image shows the dialog that is displayed when you create a connection to such a cube.
You can manually specify which dimension to use as the measure (accounts) dimension in your connection.
API access to Spotfire's data wrangling operations
Spotfire's Source View (available by expanding the data panel) has been extremely well received by the Spotfire user community. It provides an overview of your data wrangling steps, and it also has access points for going back and editing data wrangling steps.
With this release, you can now get the same overview and the same editing capabilities using an API.
With API control of how data is wrangled, you can unlock new ways of building analytics applications. For example, an analytic scenario can be adapted on the fly by letting business users change join type (API control of the add columns operation), which instantly changes how data is blended, and thus, is presented in the analysis file.
Using the API, you can extract data wrangling and cleansing steps from Spotfire. For example, all usage of the replace values data transformation (which is also new with this release) on your data can be exported. This means that you can convert all the steps taken to cleanse your data into, for example, SQL or Spark code.
Below is an example of how you can write the join type so it is controlled by a document property.
from Spotfire.Dxp.Data import *
from Spotfire.Dxp.Data.DataOperations import *
from System import *

# 'table', 'joinType' and 'matchOnNull' are script parameters defined in the analysis.
sourceView = table.GenerateSourceView()
# GetAllOperations returns a list of operations; take the first add columns operation.
op = sourceView.GetAllOperations[AddColumnsOperation]()[0]
newJoinType = Enum.Parse(JoinType, joinType)
op.AddColumnsSettings = op.AddColumnsSettings.WithJoinType(newJoinType).WithTreatEmptyValuesAsEqual(matchOnNull)
The following image shows how a text area input field could be used to control the join type through a document property.
Learn more about the API here.
Easier debugging of TERR data functions
There is now a way to see debug information that is generated at runtime when a TERR data function executes, such as parameter values and also your own free-text output. The same mechanism is used whether you run the data function locally using the embedded TERR engine in Spotfire Analyst, or using the TERR engine in TIBCO Spotfire Statistics Services. To enable the debug output, select Tools > Options > Data Functions and select the Enable Data Function debugging check box:
This makes Spotfire show additional debug information from the execution of the data function, such as, input and output parameter values. The debug information is viewed in the notifications window that you access from the lower left notification message (click the yellow triangle):
Here is one example of debug output:
If you want to, it is easy to add your custom debug information in a data function, in the script body:
cat("My debug output: the input value for Multiplier was: ")
cat(Multiplier)
cat("\n")
OutputColumn <- InputColumn * Multiplier
Easier upgrades of Node Managers
Spotfire 7.10 improves the upgrade process:
- You can upgrade the node manager from the administration UI.
- If there is an issue or error with the upgrade, the node manager upgrade is now part of the rollback process.
Quick deployment of package updates
To deploy Spotfire software, the administrator places software packages in a deployment area and assigns the deployment area to particular groups.
If a new deployment is available when a user logs in to a Spotfire client, the software packages are downloaded from the server to the client.
Deployments are used:
- To set up a new Spotfire system.
- To install a product upgrade, extension, or hotfix provided by Spotfire.
- To install a custom tool or extension.
With one click, you can now update, roll back, or delete your deployment packages.
Pagination for Viewing Scheduled Updates
The Scheduling & Routing page now has pagination. By default, you will see 100 scheduled updates and routing rules, but you can switch the view to 50 or 150 items per page.
TIBCO Spotfire Analytics for iOS 2.9
Version 2.9 of the Spotfire iPhone/iPad App adds user notification when new data is available through Scheduled Updates and the ability to synchronize the App settings between multiple iOS devices using iCloud. Read more about this and other Spotfire Mobile releases here.
TIBCO Spotfire® Data Catalog
The Data Catalog makes handling, searching and accessing data from across your organization a natural and fast experience. Even if your data is scattered in disparate data sources – in databases, data warehouses, or elsewhere – you can make all these data sources readily available for self-service access in one unified data catalog. Using Attivio intelligent technology, your data is profiled, organized and semantically enriched so that you can search with natural language across all your data sources, whether the contents are structured or unstructured. Discover relationships between your data with the patented ‘Join Finder’ and bundle just the right, relevant information in self-service data marts. Then start uncovering insights through seamless integration with the Spotfire visualization platform.
TIBCO Spotfire® 7.9
The main highlights in Spotfire® 7.9 are significant new inline data wrangling features.
Inline data wrangling
Edit data transformations
Spotfire 7.6 introduced the Source View which provides an overview of your data transformations, calculations and how your data tables are derived from rows and columns combined from multiple data sources. Spotfire 7.7 made add rows (unions) editable and smart by usage of the Spotfire recommendations engine.
With Spotfire 7.9, one of the most anticipated new features of all time is now available: the ability to change data transformation settings. This saves you a lot of time, for example, when a recently added data transformation needs further editing, or when an existing data transformation needs to be adapted to changes in the data source.
Access points for editing data transformations
The image below shows an example of details in the Source View. There are two access points for editing data transformations: one for editing transformations that are part of a data source, and one for editing transformations inserted as separate steps.
The image below shows the dialog for working with data transformations and how to gain access to the settings dialogs for each data transformation.
Available edit features
The following editing features are available from the Source View:
- Edit a data transformation. (Edit...)
- Delete a complete data transformation group. (The waste basket icon.)
- Delete a data transformation from a group (including deletion of a data transformation in a data source step). (Remove)
- Insert a data transformation into an existing transformation group before or after existing data transformations. (Insert menu).
- Change the order in which data transformations are applied. (Move Up/Move Down)
Certain non-editable use cases
In some cases, editing a data transformation is not possible. In summary, if a data source (column producer) cannot be refreshed, it cannot be edited. There are two cases when this happens:
- The final data table is (top) embedded.
- A data source includes data transformations, but its data is stored (embedded) rather than linked or cached.
A stored data table with a disabled access point for edit data transformation:
A linked data table with an available access point for edit data transformation:
Indications when something goes wrong in data transformations
With Spotfire 7.9, you will be notified when a data preparation step cannot be applied as expected, or when a data transformation is no longer necessary.
The image below shows an example of the three levels of indications, depending on severity.
An Error indication
An Error indication is displayed if a data transformation cannot be applied.
For example, if a column is missing for a calculation (if it has changed or has been removed in the data source, or, if it has been removed when editing a previous data preparation step in Spotfire), you will see an error. With Spotfire 7.9, and the ability to edit data transformations, many errors can be resolved in Spotfire. Once fixed, the error indication will be reevaluated, and hopefully disappear.
A Warning indication
A Warning indication is displayed if, for example, a defined value formatting step can no longer be applied.
This happens, for example, if a column's data type (Real) and formatting (Percentage) have been changed using Spotfire's Data panel.
Now, if the data type changes to Real in the data source, Spotfire will not apply the data type change and thus cannot apply the Percentage formatting. A Warning highlights that you need to redefine the formatting on the column.
An Information indication
An Information indication is displayed, for example, if a data type is changed to the same data type that the column already has, using a data transformation. This can happen if the data type has been wrong before, but now has been corrected in the data source. The data transformation in Spotfire is then no longer necessary, and this is highlighted using the Information indication.
Inline data cleaning
Spotfire now provides an easy way to clean up issues in your data, right when you see them. It is when you visualize data that you spot errors, so why not fix them right there and then? The new Replace value feature lets you change incorrect data values by double-clicking in a table, in the Details-on-Demand, or in the expanded Data panel. The replace value feature comes in two flavors: replacing a single value only, or replacing all occurrences of that value in the column.
Replace all occurrences of the value
For some types of data issues, the natural way to fix it is to replace all occurrences of the incorrect value. This helps you solve issues caused by alternative (mis)spellings like Tomatoes|Tomatos, Color|Colour or even if some rows of data use acronyms such as CA and some rows use the full name California. It can also be used to group categorical values into different "buckets", such as grouping states into arbitrary regions.
Replace a single data value
Replacing a single data value is useful, for example, when you find issues in numerical data. Perhaps the decimal point is in the wrong place, or some other type of error.
Replace specific value from a table details visualization
Replace specific value from the Details-on-Demand
Replacing the single value only requires that there is a defined key that can be used to identify this specific row of data. In the above screenshot, you can see a link to "Select key columns". The link leads to the below dialog, which lets you select one or more columns that uniquely identify each row of the data table.
How does it work?
Underneath the surface, the changes are implemented using two new data transformations, Replace value and Replace specific value. This means that no data is changed in the original data source. Instead, the value is replaced when the data is brought into Spotfire. It also means that when data is reloaded, the same corrections are applied again, and for the Replace value case, new instances of the value in question are also replaced.
The logic in the Replace specific value case is to replace the value only if it is the same value as when the transformation was created. Thus, if the value is changed in the data source after the transformation was defined, the transformation will no longer have any effect.
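Conceptually, the two transformations behave like the following minimal Python sketch. The helper names and the dict-per-row representation are illustrative assumptions, not Spotfire APIs: replace_value rewrites every matching cell each time it runs (so it catches new instances on reload), while replace_specific_value fires only for a keyed row that still holds the original value.

```python
# Illustrative sketch of the two replacement behaviors on plain dict rows.

def replace_value(rows, column, old, new):
    """Replace every occurrence of `old` in `column`; re-running the rule
    after a reload would also catch new instances of the value."""
    return [{**row, column: new} if row[column] == old else row for row in rows]

def replace_specific_value(rows, key_columns, key, column, old, new):
    """Replace the value in the single row identified by `key`; the rule
    only fires while the row still holds the original value."""
    out = []
    for row in rows:
        if tuple(row[k] for k in key_columns) == key and row[column] == old:
            row = {**row, column: new}
        out.append(row)
    return out

rows = [
    {"id": 1, "state": "CA"},
    {"id": 2, "state": "California"},
    {"id": 3, "state": "CA"},
]

# Normalize every occurrence of the acronym:
all_fixed = replace_value(rows, "state", "CA", "California")

# Fix one row only, keyed on the 'id' column:
one_fixed = replace_specific_value(rows, ["id"], (1,), "state", "CA", "California")
```

Note how the key columns make the single-value case well defined: without a key, a replacement could not be tied to one specific row across reloads.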
Review all changes
The visual Data source view lets you inspect and, if needed, remove the Replace value transformations.
Above, you can see how replace value transformations are shown in the source view.
Recommendations for add rows prefix and postfix support
Before Spotfire 7.9, Spotfire's recommendation engine would automatically detect if new data should be added as rows to existing data. With Spotfire 7.9, the recommendation engine for add rows also automatically matches columns with common names but different prefixes and/or postfixes. For example, the new column 'Sales (2016)' will match the existing column 'Sales (2015)'.
Columns that have the same prefix/postfix will have the prefix/postfix removed from the column name. In the example above, the column name will be 'Sales'.
The prefix/postfix will automatically be entered on all rows in the origin column. In the example above, the origin column will contain '2016' and '2015' for the respective data sources.
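The matching idea can be illustrated with a small Python sketch. This is a hypothetical heuristic, not Spotfire's actual algorithm: find the common prefix and postfix of the column names, back them off so they do not cut through a token such as '2016', and use the varying middles as origin-column values.

```python
from os.path import commonprefix  # works character-wise on strings

def match_with_affixes(names):
    """Illustrative affix-matching heuristic for column names that differ
    only in a prefix/postfix, e.g. 'Sales (2015)' and 'Sales (2016)'."""
    prefix = commonprefix(names)
    suffix = commonprefix([n[::-1] for n in names])[::-1]
    # Back the affixes off so they do not split a token like '2016'.
    while prefix and prefix[-1].isalnum():
        prefix = prefix[:-1]
    while suffix and suffix[0].isalnum():
        suffix = suffix[1:]
    # The varying middle of each name becomes the origin-column value.
    origins = [n[len(prefix):len(n) - len(suffix)] if suffix else n[len(prefix):]
               for n in names]
    merged = (prefix + suffix).strip(" ()")  # shared column name
    return merged, origins

merged, origins = match_with_affixes(["Sales (2015)", "Sales (2016)"])
```

With the example from the text, the sketch merges the two columns under the name 'Sales' and yields '2015' and '2016' as the per-source origin values.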
Access Amazon Redshift data from Spotfire Cloud web clients
Amazon Redshift is now supported in Spotfire Cloud Business Author and Consumer. This means that when you open an analysis file with data from Amazon Redshift in Spotfire Cloud Business Author and Consumer, you can now load data directly from your Amazon Redshift instance. Both in-database live queries and in-memory data import are supported.
Analysis files with Amazon Redshift connections are authored in Spotfire Cloud Analyst, saved to the Spotfire Cloud Library and are then available for Spotfire Cloud Business Author and Consumer users.
You can manually refresh data from individual data sources from Business Author's Source View.
Note: You might have to allow the Spotfire Cloud servers to access your Amazon Redshift data by whitelisting the servers' IP addresses. More information is available in the TIBCO Spotfire Cloud help.
Access Azure SQL data from Spotfire Cloud web clients
Azure SQL is now supported in Spotfire Cloud Business Author and Consumer. This means that when you open an analysis file with data from Azure SQL in Spotfire Cloud Business Author and Consumer, you can now load data directly from your Azure SQL instance. Both in-database live queries and in-memory data import are supported.
Analysis files with Azure SQL connections are authored in Spotfire Cloud Analyst, saved to the Spotfire Cloud Library and are then available for Spotfire Cloud Business Author and Consumer users.
You can manually refresh data from individual data sources from Business Author's Source View.
Note: You might have to allow the Spotfire Cloud servers to access your Azure SQL data by whitelisting the servers' IP addresses. More information is available in the TIBCO Spotfire Cloud help.
Access OData provider data from Spotfire Cloud web clients
OData is now supported in Spotfire Cloud Business Author and Consumer. This means that when you open an analysis file with data from OData in Spotfire Cloud Business Author and Consumer, you can now load data directly from your OData instance. The OData connector supports in-memory data import.
Analysis files with OData connections are authored in Spotfire Cloud Analyst, saved to the Spotfire Cloud Library and are then available for Spotfire Cloud Business Author and Consumer users.
You can manually refresh data from individual data sources from Business Author's Source View.
Note: You might have to allow the Spotfire Cloud servers to access your OData providers by whitelisting the servers' IP addresses. More information is available in the TIBCO Spotfire Cloud help.
Connectors and live query data tables
Microsoft Azure HDInsight is now supported
Starting with Spotfire 7.9, the Hortonworks Hive connector now supports Microsoft Azure HDInsight.
For more information about Microsoft Azure HDInsight, see: https://azure.microsoft.com/en-us/services/hdinsight/
Apache KNOX is now supported
Starting with Spotfire 7.9, the Hortonworks Hive connector now supports Apache KNOX, with or without Kerberos.
For more details about Apache KNOX, see: https://knox.apache.org
SAP SSO is now supported with the SAP BW connector
It is common that SAP BW deployments use SAP's SSO solution. Spotfire's SAP BW integration now supports this authentication method in all clients and servers. This enables Spotfire users to analyze SAP BW data without entering their SAP BW credentials manually. It also provides a central location for users and roles administration for SAP BW administrators.
Instructions for how to configure Spotfire for SAP BW SSO are available here: https://community.tibco.com/wiki/single-sign-tibco-spotfire-sap-bw-conne...
Configurable maximum allowed number of rows in live query results
Spotfire 7.9 introduces a new safety setting that allows system administrators to set a limit for how large the data tables loaded using live queries (in-database tables) can be. This is a protection against, for example, ad hoc analysts splitting a bar chart on a fact table's ID column, which could result in a gigabyte-sized data table being loaded into client and Web Player memory.
Google Analytics system web browser authentication
Spotfire's Google Analytics connector now supports Google's new modernized OAuth implementation. The system web browser is now used for user authorization, instead of a built-in Spotfire dialog. This means that if a user is already logged in to Google in the system web browser, the login step will be performed automatically.
For more details about the reason for this change, see: https://developers.googleblog.com/2016/08/modernizing-oauth-interactions...
New data source versions support
Analysis Services 2016 is now supported
Spotfire 7.9 (and later) now supports Analysis Services 2016.
For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#ssas
PostgreSQL 9.5 and 9.6 are now supported
Spotfire 7.9 (and later) now supports PostgreSQL 9.5 and 9.6.
For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#postgresql
MySQL 5.7 is now supported
Spotfire 7.9 (and later) now supports MySQL 5.7.
For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#oraclemysql
SAP BW 7.5 is now supported
Spotfire 7.9 (and later) now supports SAP BW 7.5.
For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#sapnetweaver
Apache Spark SQL 2.0 is now supported
Spotfire's Spark SQL connector now supports Spark 1.6.0 to 2.0.2.
NOTE: The latest TIBCO ODBC Driver for Apache Spark SQL must be used in combination with the connector.
For details, see the system requirements page here: http://support.spotfire.com/sr_spotfire_dataconnectors.asp#apachesparksql
Information Services now supports constrained Kerberos delegation
Spotfire Information Services now supports constrained Kerberos delegation in combination with compatible JDBC drivers.
Nautical Miles unit (new feature)
Nautical miles has been added as a unit of measurement, in addition to the existing imperial and metric units, when using radius and rectangle selection.
Get the coordinates of a location (new feature)
You can now right-click anywhere on a map and get geographic coordinates (latitude and longitude) for a location.
Easier access to map layer (enhancement)
It is much easier to enable access to the map layer when Spotfire cannot access the Internet or is in a restricted environment. Now, only one unique domain needs to be allowed.
- Continued work towards broader R compatibility, to enable more and more potential applications to be run on TERR. As of this release, 99% of packages on CRAN, almost 10,000 community packages, can be loaded in TERR. (Well done, TERR Team!). Full details on compatibility are available on the TERR Documentation site.
- Significant improvements to TERR performance in many areas.
- TERR can now be used in RStudio to create interactive R Markdown notebooks. R Notebooks allow for direct interaction with R while producing a reproducible document with publication-quality output.
For Spotfire Server 7.9, the logging framework has been upgraded from Log4j to Log4j2. The benefits of upgrading to Log4j2 include the following:
- You can manage logging from the UI. For example, you can start debug logging during runtime, without having to manually edit configuration files.
- Log4J2 is garbage-free, which reduces the pressure on the garbage collector.
- Java 8 feature sets are fully supported, including lazy logging.
If you used a custom-modified log4j.properties file in any Spotfire Server version between 7.5 and 7.8, you must manually add these modifications to the new log4j2.xml file.
You can now create multiple Spotfire environments that share the same Spotfire database, including the library and user directory. These environments, which are called sites, can be configured to reduce latency for multi-geographic deployments. Sites also enable the use of a variety of authentication methods, along with different user directories, within the same deployment.
Each site includes one or more Spotfire Servers along with their connected nodes and services. A site's servers, nodes, and services can only communicate within the site, but because the Spotfire database is shared among the sites, all of the sites have access to the users, groups, and library in your Spotfire implementation.
The benefits of using sites include the following:
- You can route user requests from a particular office to the servers and nodes that are physically closest to that office. This reduces the impact of network latency between servers that are located in different geographic regions.
- You can enable different authentication methods for different sets of users who share a Spotfire implementation. For example, internal users can be authenticated with Kerberos authentication while external users, such as customers and partners, can be authenticated with a username and password method.
TIBCO Spotfire® 7.8
Spotfire 7.8 extends the reach of the Spotfire Recommendation engine into the data space, making it easier than ever to add more rows of data to your analysis. For administrators, Spotfire 7.8 adds support for authentication through OpenID Connect (OIDC). And for IronPython and C# developers, there are new APIs that enable you to create easier-to-use and more powerful analytic applications using Spotfire.
Recommendations for Add rows
In Spotfire Business Author, when adding new data, the user can now get a recommendation to add the data as rows to an existing data table, if the Spotfire Recommendation engine determines that this is suitable. Further, Spotfire can automatically match the columns from the original and the new data sets. See how this works in this video, and for more details see this article.
Improvements to the SQL server connector
The Spotfire SQL Server data connector now supports SQL Server 2016, Azure SQL, and Azure SQL Data Warehouse.
Configure the maximum amount of in-database rows in the table visualization
In earlier versions of Spotfire, when you kept the data in-database as opposed to loading it into the Spotfire in-memory engine, Table visualizations were limited to showing at most 10,000 rows. This has now been changed so that an administrator can configure the maximum number of rows to display in a Table visualization when running against in-database data.
The setting, which is called TableVisualizationExternalRowLimit, is reached through the Administration Manager.
WMS 1.3.0 Support
Spotfire map charts now support version 1.3.0 of the WMS standard.
For Developers - new APIs
KPI Chart API
The KPI chart API allows authors and developers to automatically configure KPI charts from IronPython scripts or custom tools. This enables creating more user-friendly and powerful visual analytics applications for end users. See this article for further details and examples.
IronPython and C# developers can now define the layout of visuals on a page in more detail. The new API allows specifying vertical and horizontal proportions to lay out the visuals on a page. This means you can now achieve similar layouts using the API as you can when manually arranging visuals on a page. See this article for further details and examples.
Administration Improvements - Federated Authentication: OpenID Connect (OIDC)
Spotfire Server now supports the use of OpenID Connect, an open standard, decentralized authentication protocol. Using OpenID Connect, a customer can set things up so that their users can log in with an account they already have. For example, a user can log in to Spotfire with Google, Yahoo, or Salesforce. This eliminates the need for administrators to provide their own login systems (such as LDAP or AD).
This enables administrators to reduce the number of usernames and passwords their users need to remember.
To set up OpenID Connect with Spotfire Server, there are two prerequisites:
- You have to configure a public address URL within Spotfire Server.
- You have to register a client at the provider with a return endpoint URL, and receive a client ID and a client secret from the provider.
New Solutions and Extensions
Spotfire Templates, Data Functions, Accelerators, Extensions and Custom Datasources are available for a wide range of industry vertical and horizontal use cases. Most are provided as free downloads. The most popular, recent offerings are shown below. For a complete list, view all analytics components on the TIBCO Exchange.
The Spotfire Plug-in for Alerting allows you to configure alerts directly from any Spotfire analysis file and can be used to alert when thresholds or rules on any chart are violated. It is an extension for TIBCO Spotfire that integrates with Automation Services via an alerting task that can generate e-mail, text or pop-up alerts.
Live Datamart Custom Data Source
This Custom Datasource is a TIBCO Spotfire® Extension that enables users to build interactive Spotfire visualizations using data stored in TIBCO® Live Datamart.
Customer Analytics and Marketing
The Customer Analytics template series is used to analyze customers' purchase behavior. It includes Spotfire analysis templates for segmentation, propensity, and affinity.
A/B Testing data functions provide analysis for a number of marketing use cases where the goal is to compare the effect of different “treatments” on a response, such as click-through rates, orders or sales dollars. These treatments can be different web pages, different email designs, copy, or promotions.
The Gradient Boosting Machine analysis template and data function are used to create a GBM machine learning model to understand the effects of predictor variables on a single response. Examples of business problems that can be addressed include understanding causes of financial fraud, product quality problems, equipment failures, customer behavior, fuel efficiency, missing luggage and many others.
The Clustering with Variable Importance Data Function clusters objects together based on similarities between the objects, and ranks the input variables according to their influence on cluster formation.
The Financial Crime Buster Analysis Template guides the user through the tasks of ad hoc data discovery, supervised model creation, and unsupervised model creation to build a strategy for combating financial crime.
Geoanalytics and Energy
The Contour Plot Data Function generates a contour plot as a feature layer on any map chart.
The Decline Curve Analysis Data Function calculates a Hyperbolic Decline Curve Analysis using production oil and gas data.
TIBCO Spotfire® 7.7
Version 7.7 further extends the capabilities of TIBCO Spotfire. The main areas of improvement are the ability to develop mobile applications, web authoring, data wrangling and management, and administration improvements for scheduled updates, resource pool management, and Automation Services.
Below you can find more information and articles about specific features.
KPI chart and Mobile
In Spotfire 7.7, it is now much easier to create mobile applications with all types of visualizations. The minimum page size option enables vertical scrolling, so users see one or a few visuals at a time on a small screen, while users on a larger screen can see more (or all) visuals at once. In addition, the KPI chart now has sparklines to give more context to the KPI.
Spotfire 7.7 provides a brand new self-service connector to Attivio, expanding to business users the ability to create analysis files based on Attivio data lake data and unstructured content. With Spotfire 7.7, business users can even author analysis files that use the power of Attivio's full-text search engine. Data is brought into Spotfire on demand, based on what end users search for. SAP BW continues to be a very popular source of data, and Spotfire 7.7 delivers some of the most frequently requested features in this area. Both new and enhanced self-service data connectors benefit from the ease of use in Spotfire 7.7. By decreasing the number of steps users need to perform to edit data connections and deploy connectors to the Spotfire ecosystem, valuable time is saved.
Spotfire 7.7 continues to make it easier to prepare your data. Now it is possible to edit settings for add rows and data sources directly from the visual data source view.
Web authoring improvements
The Spotfire Business Author has a number of new capabilities, such as creating and configuring KPI charts, creating multi-layer maps, adding color rules, and, as mentioned above, the capability to add rows to data tables.
The main improvements in administration features are new jobs for TIBCO Spotfire Automation Services to send emails with attachments and to save data to a file, improved management of resource pools for the web player and Automation Services, and improved monitoring of Scheduled Updates.
Custom panel API for Spotfire web clients
With Spotfire 7.7, developers can add custom panels to the Spotfire web clients.
Other API Improvements
Spotfire 7.7 adds APIs for:
- Cross Table sort mode (get/set); Global or Leaf: crossTablePlot.SortRowsMode = SortMode.Global;
- Cross Table empty cell text (get/set): crossTablePlot.EmptyCellText = "-";
- Get and set minimum page dimensions: page.MinimumWidth = 713; page.MinimumHeight = 446;
Spotfire 7.7 improves the use of map charts through improved zoom visibility and improved map chart access when there is no internet connection.
TIBCO Spotfire® 7.6
7.6 is an important release for TIBCO Spotfire, thanks to the modernized client and server architecture. This new foundation makes it easier and faster for us to deliver visualization improvements, and significantly simplifies server administration and manageability. This document summarizes the cool new features in TIBCO Spotfire 7.6.
Below are tutorials and video links to learn more about a selection of the new features, and more.
KPI chart and Mobile:
The new KPI chart is a big fan favorite in TIBCO Spotfire 7.6. It is now easier than ever to configure a Key Performance Indicator dashboard in TIBCO Spotfire and make it available to consumers using the TIBCO Spotfire iOS app for mobile devices or the TIBCO Spotfire web client. Create dashboards that let the user browse their KPIs, tapping a KPI to view more detailed KPIs, or to view more details in regular TIBCO Spotfire visuals.
- Monitor your business's KPIs in your phone using TIBCO Spotfire
- A mobile application for monitoring store employee presence KPIs in TIBCO Spotfire
- Using TIBCO Spotfire for visualizing market development KPIs
Another great new visualization is the Waterfall chart, which works with TIBCO Spotfire Cloud 3.6 and TIBCO Spotfire 7.6. Waterfall charts are useful when you need to show how different component factors contribute to a final result. They are commonly used in financial analysis, but are useful for other use cases as well. So if you're unfamiliar with why you would use a waterfall chart in the first place, start by reading this post on why use a waterfall chart. Then, explore how to create a waterfall chart in TIBCO Spotfire with the following tutorials:
- Creating a waterfall chart in TIBCO Spotfire
- Using waterfall charts for difference analysis with TIBCO Spotfire
Show top N vs the rest
It can be useful to visualize the Top N of something, versus "the rest". This is a great visualization technique to improve chart readability when you have a few large groups and many smaller ones. This article is relevant for 7.6 and older versions as well:
Inline Data Preparation and Data Wrangling
Below are a selection of new, easy to use tools to prepare and wrangle data. This video shows how they can be used:
Visual overview of data table structures
It is sometimes challenging to understand which data sources and what methods have been used to create combined data tables. To solve this problem, data table data sources and operations can now easily be viewed in the Source view of the expanded data panel. It is possible to see detailed information about operations and preview intermediate resulting data tables after individual steps.
Split columns into new columns based on column values
Sometimes, column values contain multiple pieces of information; examples are first and last name, or city and zip code. It's now easy to split columns of this type into separate columns containing the individual values from the original column. The original column can then be hidden from the analysis, so it does not distract or take up valuable space (in, for example, the Data panel).
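Outside Spotfire, the same split operation can be sketched in a few lines of Python. The split_column helper and its column names are illustrative assumptions, not a Spotfire API:

```python
# Minimal sketch: split one column into several on a delimiter.

def split_column(rows, column, new_names, sep=" "):
    """Split each value of `column` into new columns named in `new_names`;
    the original column is kept so it can be hidden afterwards."""
    out = []
    for row in rows:
        parts = row[column].split(sep, len(new_names) - 1)
        new_row = dict(row)
        for name, part in zip(new_names, parts):
            new_row[name] = part
        out.append(new_row)
    return out

rows = [{"name": "Ada Lovelace"}, {"name": "Grace Hopper"}]
result = split_column(rows, "name", ["First", "Last"])
```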
Unpivot from the data panel
Data can be organized in different ways, for example, in a short/wide or tall/skinny format, but still contain the same information. Often, it is easier to visualize data organized in a tall/skinny format, that is, when the values are collected in just a few value columns. Unpivoting is one way to transform data from a short/wide to a tall/skinny format, so the data can be presented the way you want it in the visualizations. The Data panel (both in TIBCO Spotfire Analyst and TIBCO Spotfire Business Author) now has a built-in unpivot tool on the right-click menu.
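The unpivot transform itself can be sketched in plain Python. This illustrates the short/wide to tall/skinny reshaping, not Spotfire's implementation; the column names are made up for the example:

```python
# Minimal sketch: unpivot value columns into (category, value) pairs.

def unpivot(rows, id_columns, value_columns, category="Category", value="Value"):
    """Emit one tall row per (input row, value column) pair."""
    out = []
    for row in rows:
        for col in value_columns:
            tall = {k: row[k] for k in id_columns}
            tall[category] = col
            tall[value] = row[col]
            out.append(tall)
    return out

wide = [{"Region": "West", "2015": 10, "2016": 12}]
tall = unpivot(wide, ["Region"], ["2015", "2016"], category="Year", value="Sales")
```

One wide row with two value columns becomes two tall rows, with the former column names collected in the Year column and the measurements in a single Sales column.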
Using multiple screens when analyzing data
When you want to simultaneously view more visualizations than will fit on a single screen, you can now analyze your data using multiple screens!
New Google Analytics connector in TIBCO Spotfire Business Author and TIBCO Spotfire Analyst
TIBCO Spotfire Business Author and TIBCO Spotfire Analyst now support direct access to, and analysis of, data from Google Analytics.
New Salesforce.com connector in TIBCO Spotfire Business Author
TIBCO Spotfire Business Author now supports direct access to, and analysis of, Salesforce.com data, without using the installed TIBCO Spotfire Analyst client.
Caching Data using Automation Services
Performance can often be improved by periodically loading data from databases and caching it, so that TIBCO Spotfire analyses requiring the data can be opened quickly and without each analysis hitting the database with queries.
Custom/External Authentication in TIBCO Spotfire 7.5/7.6
Many customers want to embed TIBCO Spotfire Web Player into a portal or other web application and secure access by passing authentication information from the portal to TIBCO Spotfire. Customers also have internal Web application security standards that require Single-Sign-On to all web applications which would include TIBCO Spotfire Web Player. TIBCO Spotfire supports these scenarios via custom and external authentication. With TIBCO Spotfire 7.5, the architecture has changed such that the support for these scenarios has moved from the TIBCO Spotfire Web Player to the TIBCO Spotfire Server.