TIBCO Spotfire® JDBC Data Access Connectivity Details

Last updated:
7:06am Jul 15, 2020

TIBCO Spotfire® connects to virtually any JDBC-compliant data source via the Spotfire Server Information Services interface.

Spotfire Information Services requires a Data Source Template to configure the URL Connection string, the JDBC driver class, and other settings.

This page provides general information about Spotfire JDBC connectivity and examples of Information Services data source templates.

How to create a new JDBC data source template

If you cannot find a data source template example for your data source on this page, you can create your own. The available XML settings are documented here.
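As a rough starting point, a minimal skeleton typically looks like the following; the type name, driver class, and connection URL shown are placeholders (not a real driver), and further elements can be added as needed for the specific database:

<jdbc-type-settings>
  <!-- Name shown in the configuration tool and in Information Designer -->
  <type-name>my_jdbc_source</type-name>
  <!-- Fully qualified JDBC driver class, as documented by the driver vendor -->
  <driver>com.example.jdbc.Driver</driver>
  <!-- Connection URL pattern; the &lt;host&gt;, &lt;port&gt; and &lt;database&gt; placeholders are filled in when the data source is created -->
  <connection-url-pattern>jdbc:example://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;</connection-url-pattern>
  <!-- Simple statement used to verify that a connection is still alive -->
  <ping-command>SELECT 1</ping-command>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>false</supports-procedures>
</jdbc-type-settings>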

How to add a new JDBC data source to Spotfire Server

  1. Find your data source template below or elsewhere. If you can't find a template for your specific data source you might be able to modify an existing template to work with your driver and data source.
  2. Launch the Spotfire configuration tool.
  3. Click on the Configuration tab.
  4. Click on DataSource Templates on the left pane.
  5. On the right pane, click "New", type in a name, and paste the data source template into the "Data source template" section.
  6. Click "OK" and make sure that the status at the bottom of the page shows "XML parsed OK."
  7. Confirm that the data source template is enabled by selecting its checkbox.
  8. Save the configuration and restart the Spotfire service for changes to take effect.

JDBC Driver Folder Locations

Starting with Spotfire Server 10.3, drivers should in general be placed in the <installation directory>/tomcat/custom-ext folder. Doing so ensures that the files are handled properly by the Spotfire Server Upgrade Tool.

If files are placed in another directory, such as <installation directory>/tomcat/lib, they will have to be moved manually during an upgrade. For Spotfire Server 10.3 and above, it is recommended to first try the jar files in the custom-ext directory; if issues are encountered, one may need to fall back to one of the other locations.
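For example, on a Linux installation, copying a downloaded driver jar into the recommended folder could look like the following (the paths and jar file name are only placeholders):

# Copy the vendor's JDBC driver jar into the folder used by Spotfire Server 10.3 and later
cp /tmp/vendor-jdbc-driver.jar <installation directory>/tomcat/custom-ext/
# Restart the Spotfire Server service afterwards so that the driver is loaded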

In previous versions, JDBC driver jar files may be placed in the Tomcat lib folder (e.g. tibco/tss/<version>/tomcat/lib) or in the Spotfire web application lib folder (e.g. tibco/tss/tomcat/webapps/spotfire/WEB-INF/lib). There are some driver-specific issues to be aware of:

Apache Drill JDBC Driver Location

The current Apache Drill JDBC file must go in the tomcat/lib folder.  (Starting with Spotfire Server 10.3, drivers should in general be placed in the <installation directory>/tomcat/custom-ext folder.)

Cloudera JDBC Driver Location

The jar files from Cloudera must go in the spotfire/WEB-INF/lib folder. (Starting with Spotfire Server 10.3, drivers should in general be placed in the <installation directory>/tomcat/custom-ext folder.)

Hortonworks, Apache Drill and Phoenix JDBC Driver Location

The jar files from Hortonworks and Phoenix must go in the spotfire/WEB-INF/lib folder if the Apache Drill jar file is in the tomcat/lib folder. (Starting with Spotfire Server 10.3, drivers should in general be placed in the <installation directory>/tomcat/custom-ext folder.)

SAP/Sybase jConnect JDBC Driver Location

In Spotfire 10.10 and later, because of a conflict with Oracle JDBC drivers (when used for Information Services), you cannot place the JDBC driver for SAP/Sybase jConnect (jconn4-jdbc.jar) in the recommended directory for JDBC drivers (<installation directory>/tomcat/custom-ext).

The conflict happens because of a known issue in the jConnect driver. See SAP Note 2827462.

To install the JDBC driver for jConnect (jconn4-jdbc.jar), instead place the driver file in, for example, the following directory:

<installation directory>/tomcat/lib

 

Access DB

By default, Information Services generates queries with double-quoted identifiers, as shown in the example below:

SELECT
   C1."key" AS "KEY",
   C1."odometerdistance" AS "ODOMETERDISTANCE"
FROM
   "Complete_Bend_Strain_IMU_data" C1
WHERE
   <conditions>

Access DB, however, is not compatible with double quotes and requires queries like the following:

SELECT
   C1.key AS KEY,
   C1.odometerdistance AS ODOMETERDISTANCE
FROM
   Complete_Bend_Strain_IMU_data C1
WHERE
   <conditions>

Instead of editing the SQL manually in Information Designer, the data source template can hold this information. Add the xxx-name-pattern elements to the data source template as in the example below. Note that when quoting is not used, column names cannot contain spaces (for example, a two-word column name will not work).

Data Source Template for Access DB

<jdbc-type-settings>
  <type-name>AccessDB</type-name>
  <driver>net.ucanaccess.jdbc.UcanaccessDriver</driver>
  <connection-url-pattern>jdbc:ucanaccess://c:/temp/AccessDB.accdb;showSchema=true</connection-url-pattern>
  <table-types>TABLE, VIEW</table-types>
  <ping-command>SELECT 1</ping-command>
  <supports-catalogs>false</supports-catalogs>
  <supports-schemas>false</supports-schemas>
  <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
  <column-name-pattern>$$name$$</column-name-pattern>
  <table-name-pattern>$$name$$</table-name-pattern>
  <schema-name-pattern>$$name$$</schema-name-pattern>
  <catalog-name-pattern>$$name$$</catalog-name-pattern>
  <procedure-name-pattern>$$name$$</procedure-name-pattern>
  <column-alias-pattern>$$name$$</column-alias-pattern>
</jdbc-type-settings>

 

ActiveSpaces

The ActiveSpaces JDBC driver can be obtained from the ActiveSpaces Support team. A license for ActiveSpaces may be required.

Data Source Template for ActiveSpaces (example)

Example data source code can be found in the attachment data_source_template_activespaces.txt, located at the bottom of this page.

Apache Cassandra

For Apache Cassandra, there are several different implementations one can use. For this testing, I used the DataStax Community Edition on Windows. This section focuses on using JDBC drivers and Information Services. ODBC drivers for Apache Cassandra are available in the Spotfire Drivers zip file, which one can download from the Spotfire Server eDelivery site. There are several JDBC drivers for Cassandra that are built on the DataStax Java Driver. For this testing, I used the one available here: https://github.com/adejanovski/cassandra-jdbc-wrapper. I downloaded the jar file and placed it in the tomcat/lib directory. Starting with Spotfire Server 10.3, the jar file should instead be placed in the <installation directory>/tomcat/custom-ext folder.

Cassandra uses a modified SQL called the Cassandra Query Language (CQL). Due to some differences, e.g. table aliases are not supported, and some other issues I ran into, I used some of the Information Services APIs that Spotfire Server supports. I extended the BasicJDBCMetadataProvider to help get column metadata and the BasicSQLRuntime to handle the table aliases. The compiled jar file is attached to this article as a zip file, SpotfireCassandraJDBCExtensions.zip. To use it, unzip the downloaded file, copy the jar file to the tomcat/webapps/spotfire/WEB-INF/lib directory (Spotfire Server 10.2 and below) or the tomcat/custom-ext directory (Spotfire Server 10.3 and above), and then restart Spotfire Server.

Here is the data source template with the custom metadata provider and SQL runtime classes specified:

<jdbc-type-settings>
  <type-name>Cassandra</type-name>
  <driver>com.github.adejanovski.cassandra.jdbc.CassandraDriver</driver>
  <connection-url-pattern>jdbc:cassandra://&lt;host1&gt;--&lt;host2&gt;:&lt;port&gt;/&lt;keyspace&gt;</connection-url-pattern>
  <metadata-provider>com.tibco.spotfireps.server.is.CassandraJDBCMetadataProvider</metadata-provider>
  <sql-runtime>com.tibco.spotfireps.server.is.CassandraSQLRuntime</sql-runtime>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>false</supports-procedures>
  <table-types>TABLE, VIEW</table-types>
  <table-expression-pattern>[$$schema$$.]$$table$$</table-expression-pattern>
  <ping-command></ping-command>
</jdbc-type-settings>

With this data source template and the corresponding jar file installed, I was able to read simple data from Cassandra. I did not try complex data, and be aware that there are things CQL does not support, such as table joins.

Apache Drill

Data Source Template for Apache Drill JDBC driver (example), based on https://drill.apache.org/docs/configuring-tibco-spotfire-server-with-drill/

<jdbc-type-settings>
  <type-name>drill</type-name>
  <driver>org.apache.drill.jdbc.Driver</driver> 
  <connection-url-pattern>jdbc:drill:zk=&lt;zk-node&gt;:5181/drill/&lt;drill-cluster-name&gt;-drillbits</connection-url-pattern>
  <ping-command>SELECT 1 FROM sys.version</ping-command>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>false</supports-procedures>
  <table-expression-pattern>[$$schema$$.]$$table$$</table-expression-pattern>

  <column-name-pattern>`$$name$$`</column-name-pattern>
  <table-name-pattern>`$$name$$`</table-name-pattern>
  <schema-name-pattern>`$$name$$`</schema-name-pattern>
  <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
  <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
  <column-alias-pattern>`$$name$$`</column-alias-pattern>

  <java-to-sql-type-conversions>
   <type-mapping>
    <from max-length="32672">String</from>
    <to>VARCHAR($$value$$)</to>
   </type-mapping>
   <type-mapping>
    <from>String</from>
    <to>VARCHAR(32672)</to>
   </type-mapping>
   <type-mapping>
    <from>Integer</from>
    <to>INTEGER</to>
   </type-mapping>
  </java-to-sql-type-conversions>
  </jdbc-type-settings>

Apache Hive

Hive does not support the default double quotes (") that Spotfire uses and instead requires backticks (`) around names (on US keyboards, the backtick shares a key with the tilde, next to the numeral 1 key; it is not the single quote on the key with the double quote).
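For example, with the backtick name patterns used in the templates below, a generated Hive query would look roughly like this (compare with the double-quoted Access DB example earlier):

SELECT
   C1.`key` AS `KEY`,
   C1.`odometerdistance` AS `ODOMETERDISTANCE`
FROM
   `complete_bend_strain_imu_data` C1
WHERE
   <conditions>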

One might think that connectivity to Hive would be similar among the Hadoop distributions and that they would use the same JDBC drivers; this is not the case. Some distributions offer the required driver jar files for download from their websites, while others require one to copy the jar files from the VM.

Hive details for different Hadoop distributions are available further down this page.

Data Source Template for Apache Hive (example)

<jdbc-type-settings>
    <type-name>hive</type-name>
    <driver>org.apache.hadoop.hive.jdbc.HiveDriver</driver>
    <connection-url-pattern>jdbc:hive://&lt;host&gt;:&lt;port&gt;/default</connection-url-pattern>
    <table-types>TABLE,EXTERNAL TABLE</table-types>
    <supports-catalogs>true</supports-catalogs>
    <supports-schemas>true</supports-schemas>
    <ping-command>SHOW TABLES</ping-command>
    <fetch-size>10000</fetch-size>
    <batch-size>100</batch-size>
    <java-to-sql-type-conversions>
    </java-to-sql-type-conversions>
    <temp-table-name-pattern>#$$name$$</temp-table-name-pattern>
    <create-temp-table-command>CREATE TABLE $$name$$ $$column_list$$</create-temp-table-command>
    <drop-temp-table-command>DROP TABLE $$name$$</drop-temp-table-command>
    <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
</jdbc-type-settings>

Apache Phoenix

Since the JDBC connectivity to HBase is based on Apache Phoenix, the Data Source Template for Apache HBase is the same across environments.

To use Apache Phoenix, one needs to specify the metadata. How to configure and set up Phoenix is detailed in the Apache Phoenix documentation (http://phoenix.apache.org/) and is not covered here.

Please also see Hortonworks' article about Spotfire JDBC Connection to Phoenix on Kerberized Cluster.

Data Source Template for Apache Phoenix (example)

<jdbc-type-settings>
    <type-name>Phoenix_HBase2</type-name>
    <driver>org.apache.phoenix.jdbc.PhoenixDriver</driver>
    <connection-url-pattern>jdbc:phoenix:&lt;host&gt;:&lt;port2181&gt;:/&lt;path&gt;</connection-url-pattern>
    <supports-catalogs>false</supports-catalogs>
    <supports-schemas>false</supports-schemas>
    <supports-procedures>false</supports-procedures>
    <ping-command></ping-command>
</jdbc-type-settings>

Amazon Athena

The Amazon Athena database requires one to download JDBC drivers and configure a data source template. The Accessing Amazon Athena JDBC page has a link to the latest JDBC drivers for Athena. The latest Amazon Athena JDBC driver is version 2.0.9 and has some changes to the driver class name and connection URL format compared with version 1.x. Please see the Athena JDBC Driver Migration Guide for more details. The Data Source Template below is for version 2.0.x of the Athena JDBC driver.

Data Source Template for Athena (example)

<jdbc-type-settings>
	<type-name>Amazon Athena JDBC</type-name>
	<driver>com.simba.athena.jdbc.Driver</driver>
	<connection-url-pattern>jdbc:awsathena://athena.&lt;region&gt;.amazonaws.com:443;s3_staging_dir=&lt;staging_dir&gt;</connection-url-pattern>
	<table-types>DATA, EXTERNAL_TABLE, TABLE, VIEW</table-types>
  	<supports-catalogs>true</supports-catalogs>
	<supports-schemas>true</supports-schemas>
	<supports-procedures>false</supports-procedures>
	<always-use-prepared-statement>false</always-use-prepared-statement>
</jdbc-type-settings>

There are a few things to note regarding the Athena Data Source Template:

- The Athena JDBC driver, as of July 2017, does not support the prepareStatement call for JDBC connections. Spotfire will always call prepareStatement unless told to do otherwise. One can turn off this behavior by setting the attribute always-use-prepared-statement to false. Without this setting, Information Link queries will not be able to retrieve data.

- For the table-types settings, the EXTERNAL_TABLE property needs to be added.  This is how Athena exposes the tables externally.

When one creates the data source in Spotfire Information Designer, Athena requires that the fetch size be reduced to less than the Spotfire default of 10,000. The fetch size can be set to 0, which uses the JDBC driver's default fetch size. The s3_staging_dir URL parameter is mandatory for Athena; the connection will throw an error if it is missing.
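As an illustration, a filled-in connection URL (with a made-up region and staging bucket) would look something like this:

jdbc:awsathena://athena.us-east-1.amazonaws.com:443;s3_staging_dir=s3://my-athena-query-results/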

The username and password will be the AWS access key and AWS secret key, respectively. One can implement more complex security using an AWSCredentialsProvider; details are on the Amazon Athena JDBC page referenced above. Connecting with SQL Workbench first can help verify that the connection works and the correct access is in place.

Amazon Redshift

Data Source Template for Amazon Redshift JDBC41 (example)

<jdbc-type-settings>
	<type-name>AWS Redshift</type-name>
	<driver>com.amazon.redshift.jdbc41.Driver</driver>
	<connection-url-pattern> jdbc:redshift://endpoint:port/database</connection-url-pattern>
	<ping-command>SELECT 1</ping-command>
	<metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
	<fetch-size>10000</fetch-size>
	<batch-size>100</batch-size>
	<table-types>TABLE, VIEW, EXTERNAL TABLE</table-types>
	<supports-catalogs>true</supports-catalogs>
	<supports-schemas>true</supports-schemas>
	<supports-procedures>true</supports-procedures>
	<max-in-clause-size>1000</max-in-clause-size>
	<condition-list-threshold>10000</condition-list-threshold>
	<expand-in-clause>false</expand-in-clause>
	<table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
	<procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
	<time-format-expression>to_char($$value$$, 'HH24:MI:SS')</time-format-expression>
	<date-time-format-expression>$$value$$</date-time-format-expression>
</jdbc-type-settings>

Data Source Template for Amazon Redshift JDBC4 (example)

<jdbc-type-settings>
  <type-name>AWS Redshift</type-name>
  <driver>com.amazon.redshift.jdbc4.Driver</driver>
  <connection-url-pattern> jdbc:redshift://endpoint:port/database</connection-url-pattern>
  <ping-command>SELECT 1</ping-command>
  <metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
  <fetch-size>10000</fetch-size>
  <batch-size>100</batch-size>
  <table-types>TABLE, VIEW, EXTERNAL TABLE</table-types>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>true</supports-procedures>
  <max-in-clause-size>1000</max-in-clause-size>
  <condition-list-threshold>10000</condition-list-threshold>
  <expand-in-clause>false</expand-in-clause>
  <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
  <procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
  <time-format-expression>to_char($$value$$, 'HH24:MI:SS')</time-format-expression>
  <date-time-format-expression>$$value$$</date-time-format-expression>
</jdbc-type-settings>

Cloudera

Download the Impala or Hive JDBC drivers from the Cloudera web site and unzip them to a temporary directory. Copy the Impala or Hive jar file to the appropriate folder on the Spotfire Server, then create and enable the Data Source Template. The jar files need to be copied to the tomcat/custom-ext folder for Spotfire Server 10.3 and above, or the tomcat/webapps/spotfire/WEB-INF/lib folder for Spotfire Server 10.2 and below.

With the more recent versions of the Cloudera Impala and Hive JDBC distributions, there is only one file that needs to be copied for Impala or Hive:

  • ImpalaJDBC41.jar file for Impala connectivity

OR

  • HiveJDBC41.jar file for Hive connectivity

Data Source Template for Cloudera Apache Hive Connectivity (sample)

<jdbc-type-settings>
    <type-name>cloudera_hive</type-name>
    <driver>com.cloudera.hive.jdbc41.HS2Driver</driver>
    <connection-url-pattern>jdbc:hive2://&lt;host&gt;:&lt;port10000&gt;/&lt;database&gt;</connection-url-pattern>
    <supports-catalogs>false</supports-catalogs>
    <supports-schemas>true</supports-schemas>
    <supports-procedures>false</supports-procedures>
    <ping-command>SHOW TABLES</ping-command>
    <column-name-pattern>`$$name$$`</column-name-pattern>
    <table-name-pattern>`$$name$$`</table-name-pattern>
    <schema-name-pattern>`$$name$$`</schema-name-pattern>
    <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
    <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
    <column-alias-pattern>`$$name$$`</column-alias-pattern>
</jdbc-type-settings>

Data Source Template for Cloudera Impala Connectivity (sample)

<jdbc-type-settings>
    <type-name>cloudera_impala</type-name>
    <driver>com.cloudera.impala.jdbc41.Driver</driver>
    <connection-url-pattern>jdbc:impala://&lt;host&gt;:&lt;port21050&gt;/&lt;database&gt;</connection-url-pattern>
    <supports-catalogs>false</supports-catalogs>
    <supports-schemas>true</supports-schemas>
    <supports-procedures>false</supports-procedures>
    <ping-command>SHOW TABLES</ping-command>
    <column-name-pattern>`$$name$$`</column-name-pattern>
    <table-name-pattern>`$$name$$`</table-name-pattern>
    <schema-name-pattern>`$$name$$`</schema-name-pattern>
    <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
    <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
    <column-alias-pattern>`$$name$$`</column-alias-pattern>
</jdbc-type-settings>

Couchbase

Data Source Template for Cdata's Couchbase JDBC driver (example)

<jdbc-type-settings>
  <type-name>couchbase</type-name>
  <driver>cdata.jdbc.couchbase.CouchbaseDriver</driver>
  <connection-url-pattern>jdbc:couchbase:</connection-url-pattern>
  <ping-command>SELECT * FROM Projects LIMIT 1</ping-command>
  <connection-properties>
    <connection-property>
      <key>User</key>
      <value>myuseraccount</value>
    </connection-property>
    <connection-property>
      <key>Password</key>
      <value>mypassword</value>
    </connection-property>
    <connection-property>
      <key>Server</key>
      <value>http://mycouchbaseserver</value>
    </connection-property>
  </connection-properties>
  <fetch-size>10000</fetch-size>
  <batch-size>100</batch-size>
  <max-column-name-length>32</max-column-name-length>
  <table-types>TABLE, VIEW</table-types>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>false</supports-procedures>
  <supports-distinct>true</supports-distinct>
  <supports-order-by>true</supports-order-by>
  <column-name-pattern>"$$name$$"</column-name-pattern>
  <table-name-pattern>"$$name$$"</table-name-pattern>
  <schema-name-pattern>"$$name$$"</schema-name-pattern>
  <catalog-name-pattern>"$$name$$"</catalog-name-pattern>
  <procedure-name-pattern>"$$name$$"</procedure-name-pattern>
  <column-alias-pattern>"$$name$$"</column-alias-pattern>
  <string-literal-quote>'</string-literal-quote>
  <max-in-clause-size>1000</max-in-clause-size>
  <condition-list-threshold>10000</condition-list-threshold>
  <expand-in-clause>false</expand-in-clause>
  <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
  <procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
  <procedure-table-jdbc-type>0</procedure-table-jdbc-type>
  <procedure-table-type-name></procedure-table-type-name>
  <date-format-expression>$$value$$</date-format-expression>
  <date-literal-format-expression>'$$value$$'</date-literal-format-expression>
  <time-format-expression>$$value$$</time-format-expression>
  <time-literal-format-expression>'$$value$$'</time-literal-format-expression>
  <date-time-format-expression>$$value$$</date-time-format-expression>
  <date-time-literal-format-expression>'$$value$$'</date-time-literal-format-expression>
  <java-to-sql-type-conversions>VARCHAR($$value$$) VARCHAR(255) INTEGER BIGINT REAL DOUBLE PRECISION DATE TIME TIMESTAMP</java-to-sql-type-conversions>
  <temp-table-name-pattern>$$name$$#TEMP</temp-table-name-pattern>
  <create-temp-table-command>CREATE TABLE $$name$$#TEMP $$column_list$$</create-temp-table-command>
  <drop-temp-table-command>DROP TABLE $$name$$#TEMP</drop-temp-table-command>
  <data-source-authentication>false</data-source-authentication>
  <lob-threshold>-1</lob-threshold>
  <use-ansii-style-outer-join>false</use-ansii-style-outer-join>
  <credentials-timeout>86400</credentials-timeout>
</jdbc-type-settings>

Denodo

Data Source Template for Denodo (example, tested with Denodo 8 beta 1)

<jdbc-type-settings>     
      <type-name>Denodo</type-name>
      <driver>com.denodo.vdp.jdbc.Driver</driver>
      <connection-url-pattern>jdbc:vdb://&lt;server&gt;:&lt;port9999&gt;/&lt;data_base&gt;</connection-url-pattern>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>false</supports-schemas>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
      <metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
      <supports-procedures>true</supports-procedures>
</jdbc-type-settings>

Exasol

https://community.tibco.com/wiki/connecting-tibco-spotfirer-server-exaso...

File Maker

For File Maker connectivity, you need to install the appropriate JDBC driver. This link contains instructions for installing and configuring the appropriate .jar file:

http://help.filemaker.com/app/answers/detail/a_id/13603/~/installing-filemaker-jdbc-client-drivers

<jdbc-type-settings>
  <type-name>Filemaker</type-name>
  <driver>com.filemaker.jdbc.Driver</driver>
  <connection-url-pattern>jdbc:filemaker://&lt;host&gt;/&lt;database&gt;</connection-url-pattern>
  <ping-command>SELECT 1</ping-command>
  <metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
  <fetch-size>10000</fetch-size>
  <batch-size>100</batch-size>
  <table-expression-pattern>[$$schema$$.]$$table$$</table-expression-pattern>
  <table-types>TABLE</table-types>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>true</supports-procedures>
</jdbc-type-settings>

Google BigQuery

JDBC driver from Google/Simba

<jdbc-type-settings>
  <type-name>BigQuery</type-name>
  <driver>com.simba.googlebigquery.jdbc41.Driver</driver>
  <connection-url-pattern>jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=&lt;project&gt;;OAuthType=0;OAuthServiceAcctEmail=xxxxxx@developer.gserviceaccount.com;OAuthPvtKeyPath=D:\\path\file</connection-url-pattern>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>false</supports-procedures>
  <column-name-pattern>$$name$$</column-name-pattern>
  <table-name-pattern>$$name$$</table-name-pattern>
  <schema-name-pattern>$$name$$</schema-name-pattern>
  <catalog-name-pattern>$$name$$</catalog-name-pattern>
  <procedure-name-pattern>$$name$$</procedure-name-pattern>
  <column-alias-pattern>$$name$$</column-alias-pattern>
</jdbc-type-settings>

For Google BigQuery, the default xxx-name-patterns in the data source template will not work. Google BigQuery does not support double quotes around schema, table, and column names. Please see this TIBCO Support Knowledge Base article for more information: Executing Information Link based on Google BigQuery fails with error "Marshalling Error: INVALID_CHARACTER_ERR: An invalid or illegal XML character is specified."

JDBC driver from CData

<jdbc-type-settings>
  <type-name>googlebigquery</type-name>
  <driver>cdata.jdbc.googlebigquery.GoogleBigQueryDriver</driver>
  <connection-url-pattern>jdbc:googlebigquery:</connection-url-pattern>
  <ping-command>SELECT * FROM Projects LIMIT 1</ping-command>
  <connection-properties>
    <connection-property>
      <key>DataSetId</key>
      <value>MyDataSetId</value>
    </connection-property>
    <connection-property>
      <key>ProjectId</key>
      <value>MyProjectId</value>
    </connection-property>
    <connection-property>
      <key>InitiateOAuth</key>
      <value>GETANDREFRESH</value>
    </connection-property>
  </connection-properties>
  <fetch-size>10000</fetch-size>
  <batch-size>100</batch-size>
  <max-column-name-length>32</max-column-name-length>
  <table-types>TABLE, VIEW</table-types>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>false</supports-procedures>
  <supports-distinct>true</supports-distinct>
  <supports-order-by>true</supports-order-by>
  <column-name-pattern>`$$name$$`</column-name-pattern>
  <table-name-pattern>`$$name$$`</table-name-pattern>
  <schema-name-pattern>`$$name$$`</schema-name-pattern>
  <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
  <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
  <column-alias-pattern>`$$name$$`</column-alias-pattern>
  <string-literal-quote>'</string-literal-quote>
  <max-in-clause-size>1000</max-in-clause-size>
  <condition-list-threshold>10000</condition-list-threshold>
  <expand-in-clause>false</expand-in-clause>
  <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
  <procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
  <procedure-table-jdbc-type>0</procedure-table-jdbc-type>
  <procedure-table-type-name></procedure-table-type-name>
  <date-format-expression>$$value$$</date-format-expression>
  <date-literal-format-expression>$$value$$</date-literal-format-expression>
  <time-format-expression>$$value$$</time-format-expression> 
  <time-literal-format-expression>$$value$$</time-literal-format-expression>
  <date-time-format-expression>$$value$$</date-time-format-expression>
  <date-time-literal-format-expression>$$value$$</date-time-literal-format-expression>
  <java-to-sql-type-conversions>VARCHAR($$value$$) VARCHAR(255) INTEGER BIGINT REAL DOUBLE PRECISION DATE TIME TIMESTAMP</java-to-sql-type-conversions>
  <temp-table-name-pattern>$$name$$#TEMP</temp-table-name-pattern>
  <create-temp-table-command>CREATE TABLE $$name$$#TEMP $$column_list$$</create-temp-table-command>
  <drop-temp-table-command>DROP TABLE $$name$$#TEMP</drop-temp-table-command>
  <data-source-authentication>false</data-source-authentication>
  <lob-threshold>-1</lob-threshold>
  <use-ansii-style-outer-join>false</use-ansii-style-outer-join>
  <credentials-timeout>86400</credentials-timeout>
</jdbc-type-settings>

Hortonworks

Hive connectivity for Hortonworks requires pulling the appropriate jar files from the sandbox VM environment. This link has information regarding what JDBC settings and jar files are needed for various security configurations:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_data-access/...

For the Hortonworks VM, copy the following jar files from the VM to the tibco/tss/<spotfire release>/tomcat/lib directory. If one is also testing Apache Drill, then the Hive jar files will need to go into the tibco/tss/<spotfire release>/tomcat/webapps/spotfire/WEB-INF/lib directory. Starting with Spotfire Server 10.3, drivers should in general be placed in the <installation directory>/tomcat/custom-ext folder.

Jar file                        Location on VM
hive-exec-<version>.jar         /usr/hdp/<version>/hive2/lib
hive-jdbc-<version>.jar         /usr/hdp/<version>/hive2/lib
hive-metastore-<version>.jar    /usr/hdp/<version>/hive2/lib
hive-service-<version>.jar      /usr/hdp/<version>/hive2/lib
hadoop-common-<version>.jar     /usr/hdp/<version>/hadoop

The above files were tested with Hortonworks 2.5 and Spotfire Server 10.3.2. If one is using Kerberos for authentication, then the hadoop-auth.jar and commons-configuration-<version>.jar files are also required. (See Connecting TIBCO Spotfire to a Kerberized Data Source for more details.) Additional jars may be needed depending on the version of Hortonworks and Spotfire Server and the specific type of security in use. These files were tested in an environment secured via username and password authentication. If one is using ZooKeeper for service discovery, then additional jars are needed from the /usr/hdp/<version>/hadoop/lib directory: curator-client-<version>.jar, curator-framework-<version>.jar and zookeeper-<version>.jar.

One thing to be aware of is the hive-jdbc jar file. There are two different types of hive-jdbc jar files available. One is a standalone jar, hive-jdbc-<version>-standalone.jar, which is meant to be used in standalone environments; the other, hive-jdbc-<version>.jar (without the 'standalone'), is meant for non-standalone environments. In the Hortonworks environment, the hive-jdbc.jar entry is a symbolic link to the hive-jdbc-<version>-standalone.jar file, so one needs to explicitly copy the hive-jdbc-<version>.jar file to get the non-standalone version.

Since Spotfire runs in Tomcat, it is not a standalone environment, i.e. it runs inside another environment that may already have other jar files loaded. The hive-jdbc-<version>-standalone.jar file will not load in Tomcat because it includes Java classes that Tomcat has already loaded, which causes a conflict. In addition, some extra classes are needed when one does not use the standalone hive-jdbc file. One needs to use the hive-jdbc-<version>.jar with Spotfire.
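A quick way to see which hive-jdbc jar is the symbolic link on the VM (the exact path depends on the HDP version) is something like the following:

ls -l /usr/hdp/<version>/hive2/lib/hive-jdbc*.jar
# An entry shown as "hive-jdbc.jar -> hive-jdbc-<version>-standalone.jar" is the link;
# copy the plain hive-jdbc-<version>.jar file instead.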

Data Source Template for Hortonworks Hive Connectivity (sample)

Spotfire needs a data source template for Hive connectivity. The following is the current data source template for HiveServer2 connectivity to Hortonworks.

<jdbc-type-settings>
    <type-name>hive2</type-name>
    <driver>org.apache.hive.jdbc.HiveDriver</driver>
    <connection-url-pattern>jdbc:hive2://&lt;host&gt;:&lt;port10000&gt;/&lt;database&gt;</connection-url-pattern>
    <supports-catalogs>false</supports-catalogs>
    <supports-schemas>true</supports-schemas>
    <supports-procedures>false</supports-procedures>
    <ping-command>SHOW TABLES</ping-command>
    <column-name-pattern>`$$name$$`</column-name-pattern>
    <table-name-pattern>`$$name$$`</table-name-pattern>
    <schema-name-pattern>`$$name$$`</schema-name-pattern>
    <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
    <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
    <column-alias-pattern>`$$name$$`</column-alias-pattern>
</jdbc-type-settings>

Hive access via Apache Knox Gateway in Hortonworks 2.2

Apache Knox Gateway is a REST API Gateway that allows one to interact with a Hadoop cluster at a single access point; thus it can provide security and perform other critical administrative tasks. The Hortonworks JDBC drivers support Knox, and therefore Spotfire can connect to Hive via the Knox interface. Knox requires some extra configuration on the Hortonworks VM and within the Spotfire environment. At a high level, the main task was to obtain the SSL certificate from the Apache Knox Gateway on the VM and import that certificate into the Spotfire Server Tomcat certificate authority file. More details on what was needed to support Apache Knox in Spotfire Server are provided below.

The Apache Knox JDBC connection uses the same Data Source Template and jar files as the Hive connection does. After configuring the SSL certificates as documented in the "Additional Setup" section below, the JDBC connection URL in Spotfire needs to be modified to include transportMode=http and the httpPath.

Here is the connection URL I used locally to connect to the VM: jdbc:hive2://192.168.253.129:8443/;ssl=true;transportMode=http;httpPath=gateway/sandbox/hive

The httpPath may be different in different environments. The first and last parts, gateway and hive respectively, will likely stay the same. The middle part of the path can vary based on the cluster name that has been defined for Knox.

Additional Setup Required for Apache Knox Gateway via JDBC

In order to support Knox via JDBC in Spotfire, a couple of tasks need to be done. Knox may use a self-signed certificate to support the SSL connection. With SSL, the certificate authority certificate needs to be imported into the Spotfire Server Java certificate file. Typically, with a non-self-signed certificate, the certificate authority may already be known. The JDBC connection URL will also need to be different.

Here are the tasks with the command lines that I used locally to accomplish the tasks:

(NOTE: the specific command line options may have changed because this was initially tested in Nov. 2015, but the general steps should be the same.) 

(1) Obtain the certificate from the Knox Java Keystore file.  The default Knox Java Keystore file is /var/lib/knox/data/security/keystores/gateway.jks.  To get the certificate out of it one needs to run the following command:

keytool -exportcert -alias gateway-identity -keystore <path>/gateway.jks -file <filename for certificate>

Here is the command I ran locally after copying the gateway.jks file to my Windows machine:  

c:\tibco\tss\7.0.0\jdk\bin\keytool.exe -exportcert -alias gateway-identity -keystore gateway.jks -file knoxred1.crt

The password for the JKS file on the sandbox VM is: knox

(2) Copy the crt file to the Spotfire Server(s) and import it into tibco\tss\7.0.0\jdk\jre\lib\security\cacerts.

Here is the command line I used when I ran locally with the knoxred1.crt file in the local directory.

c:\tibco\tss\7.0.0\jdk\bin\keytool -import -trustcacerts -file knoxred1.crt -alias gateway-identity -keystore c:\tibco\tss\7.0.0\jdk\jre\lib\security\cacerts

The password of the Spotfire cacerts file is: changeit (based on information from the Spotfire Server Installation Manual section on HTTPS/SSL)

(3) Modify the JDBC connection URL to include the transportMode=http and the httpPath.

Here is the connection URL I used locally to connect to the 2.2 VM:

jdbc:hive2://192.168.253.129:8443/;ssl=true;transportMode=http;httpPath=gateway/sandbox/hive

The httpPath may be different in different environments.  The first and last part, gateway and hive respectively, likely will stay the same.  The middle part of the path can vary based on the cluster name that has been defined for Knox.

This blog post on the Hortonworks site has useful information regarding how to configure Knox and use JDBC to connect: http://hortonworks.com/blog/secure-jdbc-odbc-clients-access-hiveserver2/ (I followed these instructions in the 2.2 environment to get Apache Knox working.)

HBase

To connect to HBase via JDBC, one can use the Apache Phoenix drivers. The specific jar files that work will likely differ between versions of Hadoop and Spotfire Server. A test with Spotfire Server 10.3.2 and the Cloudera 5.13 quickstart VM used just the phoenix-4.14.0-cdh5.13.2-client.jar placed in the Spotfire Server <installation directory>/tomcat/custom-ext directory. The jar file was downloaded from the Apache Phoenix downloads site; the 4.14.0-cdh5.13.2 version was used.

In 10.3.0 and above, I would recommend first trying with the phoenix-<version>-client.jar file before trying to determine which specific jars may be needed.  

The following jars were tested in a Spotfire Server 7.7 environment with Hortonworks 2.5.  The Spotfire Server version shouldn't matter too much but the version of Phoenix may.  The required jar files seem to change with different releases.  Copy the following jar files from the Hadoop environment to the tibco/tss/<spotfire release>/tomcat/lib directory in Spotfire Server 10.2 and below.  Starting in Spotfire Server 10.3, copy the following jar files from the Hadoop environment to the <installation directory>/tomcat/custom-ext directory.

Jar file                                      Location on VM
commons-collections-<version>.jar             /usr/hdp/<version>/hbase/lib
commons-configuration-<version>.jar           /usr/hdp/<version>/hbase/lib
hbase-client-<version>.jar                    /usr/hdp/<version>/hbase/lib
hbase-common-<version>.jar                    /usr/hdp/<version>/hbase/lib
hbase-protocol-<version>.jar                  /usr/hdp/<version>/hbase/lib
hbase-server-<version>.jar                    /usr/hdp/<version>/hbase/lib
htrace-core-<version>-incubating.jar          /usr/hdp/<version>/hbase/lib
netty-all-<version>.Final.jar                 /usr/hdp/<version>/hbase/lib
hadoop-auth-<version>.jar                     /usr/hdp/<version>/hadoop
hadoop-common-<version>.jar                   /usr/hdp/<version>/hadoop
hive-common-<version>.jar                     /usr/hdp/<version>/hive/lib
hive-exec-<version>.jar                       /usr/hdp/<version>/hive/lib
tephra-api-<version>.jar                      /usr/hdp/<version>/hive2/lib
tephra-core-<version>.jar                     /usr/hdp/<version>/hive2/lib
twill-discovery-api-<version>-incubating.jar  /usr/hdp/<version>/hive2/lib
twill-zookeeper-<version>-incubating.jar      /usr/hdp/<version>/hive2/lib
phoenix-core-<version>.jar                    /usr/hdp/<version>/phoenix/lib
zookeeper.jar                                 /usr/hdp/<version>/phoenix/lib

Spotfire Information Designer can then be used to configure a connection to an unsecured Phoenix HBase data source. The next section discusses connecting to a secured HBase.

The data source template for Apache Phoenix is provided above. Another note, which is probably less important outside a sandbox VM test environment, is that the HBase machine name needs to be resolvable from the Spotfire Server machine. During the handshaking between the Phoenix client (Spotfire Server) and the Phoenix server (HBase environment), the Phoenix server answers with the name of the machine, and the client needs to be able to resolve it. Add sandbox.hortonworks.com to your local Windows hosts file (c:\Windows\System32\drivers\etc\hosts – note that some items in the path may be hidden) to resolve this issue.
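For example, the hosts file entry for the sandbox VM used in the Knox example above would look like the following (replace the IP address with whatever your VM uses):

192.168.253.129    sandbox.hortonworks.com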

HBase - Secure

This section discusses connecting to a Kerberos-secured HBase environment using the Phoenix JDBC drivers. In this test, a service account was used to connect; each user's authentication token was not passed down to HBase. This test was done at a client's site and is included here to provide information for those who may need to set up this connection in the future. One may have different experiences in another customer environment; hopefully, the information provided will help get one farther along in the process.

The Phoenix drivers are used in this scenario as well, but complexity is added because HBase is secured with Kerberos. If one has this type of environment, the first thing to get working is the ability to connect from a tool outside Spotfire. One could use another JDBC database tool or the command-line tool provided with Phoenix (sqlline).

The following two websites were useful in determining the final working configuration:

This site is part of the Hortonworks documentation for Phoenix and provides the connection URL information for a secured environment: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_ma...

Phoenix comes with a python script that one can run, but it is easier to test if one can run the Java command outside the script. This is helpful in determining what settings Spotfire may need. This blog post was useful in determining what could be needed on the Phoenix command-line to get connected to a secured HBase cluster: http://bigdatanoob.blogspot.com/2013/09/connect-phoenix-to-secure-hbase-...

Spotfire Server was installed on Windows and Linux. The Hortonworks cluster was on Linux, and the Linux Spotfire Server was installed on an edge node in the cluster with the thought that it would be easier to connect. In this case, this was correct. From the Windows Spotfire Server, we were not able to connect due to Kerberos issues. (I don’t recall the exact error, but something with the password could not be obtained. It was an error with Kerberos.)

The command-line on the Linux server that did work was based on the following:

java -cp {hbase conf directory}:{hadoop conf directory}:{phoenix-client jar} sqlline.SqlLine -d org.apache.phoenix.jdbc.PhoenixDriver -u jdbc:phoenix:{zookeeper_host_names}:{port_number}:{secured_zookeeper_node}:{principal_name}:{hbase_headless_keytab_file} -n none -p none --color=true --fastConnect=false --verbose=true --isolation=TRANSACTION_READ_COMMITTED

The key components are the classpath and the connection URL: including the hbase and hadoop configuration directories on the classpath was critical, and getting the connection URL correct was important. The following command line provides example names and locations and assumes that one is in the same directory as the Phoenix client jar file.

java -cp /etc/hbase/conf:/etc/hadoop/conf:./phoenix-4.4.0.2.3.0.0-2557-client.jar sqlline.SqlLine -d org.apache.phoenix.jdbc.PhoenixDriver -u jdbc:phoenix:zookeeper1.name,zookeeper2.name:2181:/hbase-secure:principalename@DOMAIN:/etc/security/keytabs/hbase.headless.keytab -n none -p none --color=true --fastConnect=false --verbose=true --isolation=TRANSACTION_READ_COMMITTED

Note that multiple ZooKeeper nodes are separated by commas. My understanding is that the HBase headless keytab file path refers to the location of the keytab file on the HBase nodes, not necessarily on the local machine where one is running this command line (or running Spotfire Server).

Using the information from this command-line test, the following changes were made in Spotfire Server:

  1. See the HBase section for details on jars that need to be copied into the tomcat/lib or tomcat/custom-ext directory depending on Spotfire Server version.  One may need to test to make sure all the classes are available.
  2. For Spotfire Server, the classpath had to be set to include the hbase and hadoop configuration directories. On Linux, one can set this by editing the setenv.sh file in the tomcat/bin directory and adding CLASSPATH=/etc/hbase/conf:/etc/hadoop/conf (see the sketch after this list). On Windows, one can modify the CLASSPATH setting in the service.bat file.
  3. The connection URL in Information Designer had to be modified to include the principal name and the hbase headless keytab location.
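A minimal setenv.sh for step 2, assuming the configuration directories are in their default /etc locations, could look like the following:

# <installation directory>/tomcat/bin/setenv.sh (Linux)
# Add the HBase and Hadoop client configuration directories to the classpath
# so that the Phoenix JDBC driver can find hbase-site.xml and core-site.xml.
CLASSPATH=/etc/hbase/conf:/etc/hadoop/conf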

One thing to note is that this configuration does not require Kerberos to be configured with Spotfire Server. This configuration is using a service principal to connect and not the Spotfire user. The information passed to the Phoenix driver is used to do the secure connection.

The same data source template as discussed previously can be used. Here is an example JDBC URL used in Information Designer for the connection:

jdbc:phoenix:{zookeeper_host_names}:{port_number}:{secured_zookeeper_node}:{principal_name}:{hbase_headless_keytab_file}

The following shows a connection URL that contains example names and paths.

jdbc:phoenix:zookeeper1.name,zookeeper2.name:2181:/hbase-secure:principalename@DOMAIN:/etc/security/keytabs/hbase.headless.keytab

HPE Vertica

Data Source Template for HPE Vertica Connectivity (sample)

<jdbc-type-settings>
    <type-name>Vertica</type-name>
    <driver>com.vertica.jdbc.Driver</driver>
    <connection-url-pattern>jdbc:vertica://&lt;host&gt;:&lt;port&gt;/&lt;db&gt;,&lt;dbadmin&gt;,&lt;dbpassword&gt;</connection-url-pattern>
    <ping-command>SELECT 1 FROM DUAL</ping-command>
</jdbc-type-settings>

IBM Informix

Data Source Template for IBM Informix (sample)

<jdbc-type-settings>
  <type-name>Informix</type-name>
  <display-name>Informix</display-name>
  <driver>com.informix.jdbc.IfxDriver</driver>
  <connection-url-pattern>jdbc:informix-sqli://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;:INFORMIXSERVER=servername;user=&lt;username&gt;;password=&lt;password&gt;</connection-url-pattern>
  <ping-command></ping-command>
  <supports-catalogs>false</supports-catalogs>
  <column-name-pattern>$$name$$</column-name-pattern>
  <table-name-pattern>$$name$$</table-name-pattern>
  <schema-name-pattern>$$name$$</schema-name-pattern>
  <catalog-name-pattern>$$name$$</catalog-name-pattern>
  <procedure-name-pattern>$$name$$</procedure-name-pattern>
  <column-alias-pattern>$$name$$</column-alias-pattern>
</jdbc-type-settings>

JBoss Teiid

For connectivity to JBoss Teiid, download the Teiid JDBC jar file from 'http://teiid.jboss.org/downloads/'. Place the file in <SpotfireServerInstallationFolder>/tomcat/lib for Spotfire Server 10.2 and below, or in <SpotfireServerInstallationFolder>/tomcat/custom-ext for Spotfire Server 10.3 and above.

Data Source Template for JBoss Teiid

<jdbc-type-settings>
    <type-name>teiid</type-name>
    <driver>org.teiid.jdbc.TeiidDriver</driver>
    <connection-url-pattern>jdbc:teiid:&lt;vdb-name&gt;@mm:&lt;host&gt;:&lt;port&gt;</connection-url-pattern>
    <ping-command>SELECT 1</ping-command>
    <metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
    <fetch-size>10000</fetch-size>
    <batch-size>100</batch-size>
    <table-expression-pattern>[$$schema$$.]$$table$$</table-expression-pattern>
    <table-types>TABLE, VIEW</table-types>
    <supports-catalogs>true</supports-catalogs>
    <supports-schemas>true</supports-schemas>
    <supports-procedures>true</supports-procedures>
    <condition-list-threshold>2000</condition-list-threshold>
    <temp-table-name-pattern>$$name$$</temp-table-name-pattern>
    <create-temp-table-command>CREATE LOCAL TEMPORARY TABLE $$name$$ $$column_list$$;</create-temp-table-command>
    <sql-filter>com.spotfire.ws.im.ds.sql.sassharenet.SASSQLFilter</sql-filter>
</jdbc-type-settings>

KDB (KX)

Data Source Template for KDB (KX)

Example data source code can be found in the attachment data_source_template_kdb_kx.txt, located at the bottom of this page.

Microsoft SQL Server with Kerberos delegated authentication

This section provides steps to create an Information Services data source with Kerberos delegated authentication to a Microsoft SQL Server database.

Prerequisites

1) TIBCO Spotfire Server configured for Kerberos authentication. Refer to the TIBCO Spotfire Server Installation manual (section "Kerberos Authentication").

2) TIBCO Spotfire Web Player (optional), configured for Single Sign-On (SSO) using Delegation with Kerberos authentication. Refer to the TIBCO Spotfire Web Player installation manual.

3) Constrained or Unconstrained delegation enabled on the domain controller for the Service Account used for Kerberos authentication on TIBCO Spotfire Server. Refer to the TIBCO Spotfire Server installation manual (section "Grant the Spotfire Server Service Account the Right to Delegate Client Credentials").

4) Microsoft SQL Server database to be used in setting up an information link data source that is configured for Kerberos authentication.

Steps to be performed on Spotfire Server

1) Download the JDBC driver for Microsoft SQL Server from the following URL and copy the sqljdbc_auth.dll file into the TIBCO Spotfire Server instance's tomcat\lib folder.

http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=1...

The sqljdbc_auth.dll and jar files are extracted into the following location:  <installation directory>\sqljdbc_<version>\<language>\auth\x64\ 

2) Create a new Database Template for Kerberos delegated Authentication to Microsoft SQL Server Database using the TIBCO Spotfire Server Configuration Tool. Define the JDBC data source template using the XML shown below.

3) Enable the template via the Server configuration tool and Save the configuration into the database.

4) On the Spotfire Server, create a keytab file for the MS SQL Server database secured with Kerberos:

ktpass /princ {dbspnuser}@DOMAIN /ptype krb5_nt_principal /crypto rc4-hmac-nt /out spotfire-database.keytab /kvno 0 /pass ***

Replace:
    dbspnuser - the user used to register SPNs for the database
    DOMAIN - your domain

5) Copy spotfire-database.keytab to <tss_install_dir>\jdk\jre\lib\security folder.

6) Create spotfire-database.login file in <tss_install_dir>\jdk\jre\lib\security\ containing the following text:

DatabaseKerberos {
    com.sun.security.auth.module.Krb5LoginModule required debug=true storeKey=true useKeyTab=true
    keyTab="${java.home}/jre/lib/security/spotfire-database.keytab" principal="{dbspnuser}@DOMAIN";
};

Replace:
    dbspnuser - the user used to register SPNs for the SQL Server database
    DOMAIN - your domain

7) Add an entry for the spotfire-database.login file in the <tss_install_dir>\jdk\jre\lib\security\java.security file:

login.config.url.1=file:${java.home}/jre/lib/security/spotfire-database.login

8) Run the Spotfire Server service with the service account used to register SPNs for the server machine.

9) Restart the TIBCO Spotfire Server service (as a Windows or Linux service), to make these changes take effect.

10) Log in to Spotfire Server from the Spotfire desktop client, then select "Tools > Information Designer" from the main menu and create a new Data Source using the newly created template (sqlserver2005_delegated). Leave the username/password fields blank.

11) If the data source connection fails, purge the Kerberos tickets on the Spotfire client machine using the following command:

    <tss_install_dir>\jdk\bin\klist purge

References

https://msdn.microsoft.com/en-us/library/ms378428(v=sql.110).aspx

https://msdn.microsoft.com/en-us/library/gg558122(v=sql.110).aspx

Data Source Template for Microsoft SQL Server with Kerberos delegated authentication (example)

<jdbc-type-settings>
  <type-name>sqlserver2005_delegated</type-name>
  <driver>com.microsoft.sqlserver.jdbc.SQLServerDriver</driver>
  <connection-url-pattern>jdbc:sqlserver://&lt;host&gt;:&lt;port&gt;;DatabaseName=&lt;database&gt;;integratedSecurity=true;authenticationScheme=JavaKerberos</connection-url-pattern>
  <ping-command>SELECT 1</ping-command>
  <connection-properties>
    <connection-property>
      <key>spotfire.connection.pool.factory.data.source</key>
      <value>kerberos.data.source</value>
    </connection-property>
    <connection-property>
      <key>selectMethod</key>
      <value>cursor</value>
    </connection-property>
  </connection-properties>
  <metadata-provider>com.spotfire.ws.im.ds.sql.sqlserver.SQLServerMetadataProvider</metadata-provider>
  <fetch-size>10000</fetch-size>
  <batch-size>100</batch-size>
  <table-types>TABLE, VIEW</table-types>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>true</supports-procedures>
  <max-in-clause-size>1000</max-in-clause-size>
  <condition-list-threshold>10000</condition-list-threshold>
  <expand-in-clause>false</expand-in-clause>
  <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
  <date-format-expression>$$value$$</date-format-expression>
  <time-format-expression>convert(varchar(8), $$value$$, 108)</time-format-expression>
  <date-time-format-expression>$$value$$</date-time-format-expression>
  <java-to-sql-type-conversions>
    <type-mapping>
      <from max-length="8000">String</from>
      <to>VARCHAR($$value$$)</to>
    </type-mapping>
    <type-mapping>
      <from>String</from>
      <to>VARCHAR(8000)</to>
    </type-mapping>
    <type-mapping>
      <from>Integer</from>
      <to>DECIMAL(10, 0)</to>
    </type-mapping>
    <type-mapping>
      <from>Long</from>
      <to>DECIMAL(38, 0)</to>
    </type-mapping>
    <type-mapping>
      <from>Float</from>
      <to>REAL</to>
    </type-mapping>
    <type-mapping>
      <from>Double</from>
      <to>DOUBLE PRECISION</to>
    </type-mapping>
    <type-mapping>
      <from>Date</from>
      <to>DATETIME</to>
    </type-mapping>
    <type-mapping>
      <from>Time</from>
      <to>DATETIME</to>
    </type-mapping>
    <type-mapping>
      <from>DateTime</from>
      <to>DATETIME</to>
    </type-mapping>
  </java-to-sql-type-conversions>
  <temp-table-name-pattern>#$$name$$</temp-table-name-pattern>
  <create-temp-table-command>CREATE TABLE $$name$$ $$column_list$$</create-temp-table-command>
  <drop-temp-table-command>DROP TABLE $$name$$</drop-temp-table-command>
  <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
</jdbc-type-settings>

Oracle Database with SSL encryption

Note: The template below will only work if SSL is used only for encryption (not for authentication) on the Oracle Database. Different connection properties are required for SSL authentication to Oracle DB.

References

http://www.oracle.com/technetwork/topics/wp-oracle-jdbc-thin-ssl-130128.pdf

http://docs.oracle.com/cd/B19306_01/java.102/b14355/sslthin.htm#BABDJBAE

Data Source Template for Oracle Database with SSL encryption (example)

<jdbc-type-settings>
  <type-name>OracleSSL</type-name>
  <driver>oracle.jdbc.OracleDriver</driver>
  <connection-url-pattern>jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=&lt;host&gt;)(PORT=&lt;port&gt;))(CONNECT_DATA=(SERVICE_NAME=&lt;servicename&gt;)))</connection-url-pattern>
  <connection-properties>
	<connection-property> 
	<key>oracle.net.ssl_cipher_suites</key> 
	<value>SSL_DH_anon_WITH_3DES_EDE_CBC_SHA, SSL_DH_anon_WITH_RC4_128_MD5, SSL_DH_anon_WITH_DES_CBC_SHA</value>
	</connection-property>
  </connection-properties>
  <ping-command>SELECT 1 FROM DUAL</ping-command>
  <metadata-provider>com.spotfire.ws.im.ds.sql.oracle.OracleMetadataProvider</metadata-provider>
  <sql-filter>com.spotfire.ws.im.ds.sql.oracle.OracleSQLFilter</sql-filter>
  <sql-runtime>com.spotfire.ws.im.ds.sql.oracle.OracleSQLRuntime</sql-runtime>
  <fetch-size>10000</fetch-size>
  <batch-size>100</batch-size>
  <table-types>TABLE, VIEW</table-types>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>true</supports-procedures>
  <max-in-clause-size>1000</max-in-clause-size>
  <condition-list-threshold>7000</condition-list-threshold>
  <expand-in-clause>false</expand-in-clause>
  <table-expression-pattern>[$$schema$$.]$$table$$[@$$catalog$$]</table-expression-pattern>
  <procedure-expression-pattern>[$$schema$$.]$$procedure$$[@$$catalog$$]</procedure-expression-pattern>
  <date-format-expression>TO_CHAR($$value$$, 'YYYY-MM-DD')</date-format-expression>
  <time-format-expression>TO_CHAR($$value$$, 'HH24:MI:SS')</time-format-expression>
  <date-time-format-expression>$$value$$</date-time-format-expression>
  <date-time-literal-format-expression>TO_DATE('$$value$$', 'YYYY-MM-DD HH24:MI:SS')</date-time-literal-format-expression>
  <java-to-sql-type-conversions>
    <type-mapping>
      <from max-length="4000">String</from>
      <to>VARCHAR2($$value$$)</to>
    </type-mapping>
    <type-mapping>
      <from>String</from>
      <to>VARCHAR2(4000)</to>
    </type-mapping>
    <type-mapping>
      <from>Integer</from>
      <to>NUMBER(10)</to>
    </type-mapping>
    <type-mapping>
      <from>Long</from>
      <to>NUMBER(38)</to>
    </type-mapping>
    <type-mapping>
      <from>Float</from>
      <to>REAL</to>
    </type-mapping>
    <type-mapping>
      <from>Double</from>
      <to>FLOAT</to>
    </type-mapping>
    <type-mapping>
      <from>Date</from>
      <to>DATE</to>
    </type-mapping>
    <type-mapping>
      <from>Time</from>
      <to>DATE</to>
    </type-mapping>
    <type-mapping>
      <from>DateTime</from>
      <to>DATE</to>
    </type-mapping>
  </java-to-sql-type-conversions>
  <create-temp-table-command>CREATE GLOBAL TEMPORARY TABLE $$name$$ $$column_list$$ ON COMMIT PRESERVE ROWS</create-temp-table-command>
  <drop-temp-table-command>TRUNCATE TABLE $$name$$;DROP TABLE $$name$$</drop-temp-table-command>
  <lob-threshold>4000</lob-threshold>
  <procedure-table-jdbc-type>-10</procedure-table-jdbc-type>
  <procedure-table-type-name>REF CURSOR</procedure-table-type-name>
</jdbc-type-settings>
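
If SSL is also used for client authentication, the template above is not sufficient. Based on the Oracle JDBC thin-driver SSL white paper referenced above, the connection-properties block would instead point to a Java truststore/keystore and enable TCPS authentication, roughly as sketched below. The property keys come from that white paper; the paths and password placeholders are illustrative only and have not been verified against a specific environment.

  <connection-properties>
    <connection-property>
      <key>javax.net.ssl.trustStore</key>
      <value>/path/to/truststore.jks</value>
    </connection-property>
    <connection-property>
      <key>javax.net.ssl.trustStorePassword</key>
      <value>&lt;truststore_password&gt;</value>
    </connection-property>
    <connection-property>
      <key>javax.net.ssl.keyStore</key>
      <value>/path/to/keystore.jks</value>
    </connection-property>
    <connection-property>
      <key>javax.net.ssl.keyStorePassword</key>
      <value>&lt;keystore_password&gt;</value>
    </connection-property>
    <connection-property>
      <key>oracle.net.authentication_services</key>
      <value>(TCPS)</value>
    </connection-property>
  </connection-properties>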

Pivotal Greenplum

Data Source Template for Pivotal Greenplum (example)

<jdbc-type-settings>
  <type-name>Greenplum</type-name>
  <driver>org.postgresql.Driver</driver>
  <connection-url-pattern>
    jdbc:postgresql://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;
  </connection-url-pattern>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
  <metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
  <supports-procedures>true</supports-procedures>
  <sql-runtime>com.spotfire.ws.im.ds.sql.postgresql.PostgresSQLRuntime</sql-runtime>
</jdbc-type-settings>

PostgreSQL

Data Source Template for PostgreSQL (example)

<jdbc-type-settings>
  <type-name>postgreSQL</type-name>
  <display-name>postgreSQL</display-name>
  <driver>org.postgresql.Driver</driver>
  <connection-url-pattern>
    jdbc:postgresql://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;
  </connection-url-pattern>
  <ping-command>select version()</ping-command>
  <supports-procedures>true</supports-procedures>
  <condition-list-threshold>10000</condition-list-threshold>
  <date-literal-format-expression>{d '$$value$$'}</date-literal-format-expression>
  <time-literal-format-expression>{t '$$value$$'}</time-literal-format-expression>
  <date-time-literal-format-expression>{ts '$$value$$'}</date-time-literal-format-expression>
  <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
  <sql-runtime>com.spotfire.ws.im.ds.sql.postgresql.PostgresSQLRuntime</sql-runtime>
</jdbc-type-settings>

Presto

This data source template for Presto is based on the information in the Presto 337 documentation, section 3.2, JDBC Driver. It is a basic template and connection URL only; additional connection URL parameters are needed if you use SSL, Kerberos, etc. (an illustrative sketch follows the template below). The catalog and schema parameters are not required on the connection URL. The template also includes settings for formatting date and time columns; these format expressions have not been completely tested and may need to be adjusted.

<jdbc-type-settings>
	<type-name>Presto</type-name>
	<driver>io.prestosql.jdbc.PrestoDriver</driver>
	<connection-url-pattern>jdbc:presto://&lt;host&gt;:&lt;port&gt;/&lt;catalog&gt;/&lt;schema&gt;</connection-url-pattern>
	<supports-catalogs>true</supports-catalogs>
	<supports-schemas>true</supports-schemas>
	<date-format-expression>$$value$$</date-format-expression>
	<date-literal-format-expression>date '$$value$$'</date-literal-format-expression>
	<time-format-expression>$$value$$</time-format-expression>
	<time-literal-format-expression>time '$$value$$'</time-literal-format-expression>
	<date-time-format-expression>$$value$$</date-time-format-expression>
	<date-time-literal-format-expression>timestamp '$$value$$'</date-time-literal-format-expression>
</jdbc-type-settings>
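
As a rough illustration of the additional connection URL parameters mentioned above, the Presto (prestosql) JDBC driver documentation describes parameters such as SSL, SSLTrustStorePath and SSLTrustStorePassword. With SSL enabled, the connection-url-pattern could be extended along these lines; the parameter list is a sketch only and should be checked against the driver documentation for your Presto version.

	<connection-url-pattern>jdbc:presto://&lt;host&gt;:&lt;port&gt;/&lt;catalog&gt;/&lt;schema&gt;?SSL=true&amp;SSLTrustStorePath=/path/to/truststore.jks&amp;SSLTrustStorePassword=&lt;password&gt;</connection-url-pattern>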


Progress OpenEdge

Data Source Template for Progress OpenEdge (example)

<jdbc-type-settings>
        <type-name>OpenEdge</type-name>
        <display-name>OpenEdge</display-name>
        <driver>com.ddtek.jdbc.openedge.OpenEdgeDriver</driver>
        <connection-url-pattern>
              jdbc:datadirect:openedge://&lt;HOST&gt;:&lt;PORT&gt;;databaseName=&lt;DB_NAME&gt;
        </connection-url-pattern>
        <ping-command></ping-command>
        <supports-catalogs>false</supports-catalogs>
</jdbc-type-settings>

SAP HANA

For SAP HANA connectivity in Spotfire Information Services, the SAP HANA JDBC driver ngdbc.jar (either the 32-bit or 64-bit version) is required. A data source template can then be created.

Data Source Template for SAP HANA (example)

<jdbc-type-settings>
  <type-name>Hana</type-name>
  <driver>com.sap.db.jdbc.Driver</driver>
  <connection-url-pattern>jdbc:sap://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;</connection-url-pattern>
  <ping-command>select 1 from dummy</ping-command>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>true</supports-procedures>
  <table-types>TABLE, CALC VIEW, OLAP VIEW, JOIN VIEW, HIERARCHY VIEW, VIEW</table-types>
  <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
  <condition-list-threshold>2000</condition-list-threshold>
  <temp-table-name-pattern>#$$name$$</temp-table-name-pattern>
  <create-temp-table-command>CREATE LOCAL TEMPORARY TABLE $$name$$ $$column_list$$;</create-temp-table-command>
  <drop-temp-table-command></drop-temp-table-command>
</jdbc-type-settings>

Data Source Template for SAP HANA - Kerberos, not constrained (example)

<jdbc-type-settings>
  <type-name>Hana_krb</type-name>
  <driver>com.sap.db.jdbc.Driver</driver>
  <connection-url-pattern>jdbc:sap://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;</connection-url-pattern>
  <ping-command>select 1 from dummy</ping-command>
  <connection-properties>
    <connection-property>
      <key>spotfire.connection.pool.factory.data.source</key>
      <value>kerberos.data.source</value>
    </connection-property>
    <connection-property>
      <key>nativeAuthentication</key>
      <value>true</value>
    </connection-property>
    <connection-property>
      <key>authentication</key>
      <value>DatabaseMapping</value>
    </connection-property>
    <connection-property>
      <key>spotfire.kerberos.gsscredential.property</key>
      <value>user</value>
    </connection-property>
  </connection-properties>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>true</supports-procedures>
  <table-types>TABLE, CALC VIEW, OLAP VIEW, JOIN VIEW, HIERARCHY VIEW, VIEW</table-types>
  <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
  <condition-list-threshold>2000</condition-list-threshold>
  <temp-table-name-pattern>#$$name$$</temp-table-name-pattern>
  <create-temp-table-command>CREATE LOCAL TEMPORARY TABLE $$name$$ $$column_list$$;</create-temp-table-command>
  <drop-temp-table-command></drop-temp-table-command>
</jdbc-type-settings>

SAS IOM/Workspace

Data Source Template for SAS IOM/Workspace (example)

Please see this Wiki: https://community.tibco.com/questions/connection-sas-iom-sas-workspace

Snowflake

Data Source Template for Snowflake (example)

<jdbc-type-settings>
  <type-name>Snowflake</type-name>
  <driver>net.snowflake.client.jdbc.SnowflakeDriver</driver>
  <connection-url-pattern>jdbc:snowflake://&lt;ACCOUNT&gt;.snowflakecomputing.com</connection-url-pattern>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>false</supports-procedures>
  <fetch-size>10000</fetch-size>
  <batch-size>100</batch-size>
  <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
</jdbc-type-settings>

SQL Parallel Data Warehouse (SQL PDW)

Data Source Template for SQL PDW (example)

Example data source template code can be found in the attachment file data_source_template_sql_pdw.txt, located at the bottom of this page.

SQream DB

Data Source Template for SQream (example)

<jdbc-type-settings>
  <type-name>Sqream</type-name>
  <display-name>Sqream</display-name>
  <driver>sql.SQDriver</driver>
  <connection-url-pattern>jdbc:SQream;IP=192.168.0.90;PORT=5000;URLCONECTION=demo;</connection-url-pattern>
  <supports-catalogs>false</supports-catalogs>
  <supports-schemas>false</supports-schemas>
  <supports-procedures>true</supports-procedures>
  <java-to-sql-type-conversions>
    <type-mapping>
      <from>VARCHAR(2048)</from>
      <to>STRING</to>
    </type-mapping>
    <type-mapping>
      <from>INT</from>
      <to>Integer</to>
    </type-mapping>
    <type-mapping>
      <from>BIGINT</from>
      <to>LongInteger</to>
    </type-mapping>
    <type-mapping>
      <from>Real</from>
      <to>Real</to>
    </type-mapping>
    <type-mapping>
      <from>Date</from>
      <to>DATE</to>
    </type-mapping>
    <type-mapping>
      <from>BOOL</from>
      <to>Boolean</to>
    </type-mapping>
    <type-mapping>
      <from>DateTime</from>
      <to>DATETIME</to>
    </type-mapping>
  </java-to-sql-type-conversions>
</jdbc-type-settings>

Teradata Cloud

The recommended way to connect Spotfire to Teradata and Teradata Cloud is to use the native connector; please see this Wiki for more details.

However, if you are using Information Links, certain settings might increase performance against Teradata Cloud.

The Teradata JDBC driver has quite a few parameters, including message size. They can be found here:

http://developer.teradata.com/doc/connectivity/jdbc/reference/current/jd...

Setting the TCP parameter, TCP=SEND1048576+RECEIVE1048576, expands the socket send and receive buffer sizes from the default of 64 KB to 1 MB, which can result in a significant throughput improvement.
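
With the Teradata JDBC driver, such parameters are appended to the connection URL after a slash, separated by commas. A connection-url-pattern carrying the larger buffers might therefore look like the sketch below; the DATABASE parameter and the overall pattern are illustrative only and should be verified against the driver reference above.

<connection-url-pattern>jdbc:teradata://&lt;host&gt;/DATABASE=&lt;database&gt;,TCP=SEND1048576+RECEIVE1048576</connection-url-pattern>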

TIBCO Data Virtualization (TDV) with Kerberos

The recommended way to connect Spotfire to TIBCO Data Virtualization is to use the native connector.

This section assumes that the steps outlined in KB Article 000028976 have been completed and that TDV Studio is able to log you on via SSO.

Data Source Template for TIBCO Data Virtualization (TDV) with Kerberos With Constrained Delegation

<jdbc-type-settings>
	<type-name>Composite_Kerberos_constrained</type-name>
	<driver>cs.jdbc.driver.CompositeDriver</driver>
	<connection-url-pattern>jdbc:compositesw:dbapi@&lt;server&gt;:&lt;port9401&gt;?domain=&lt;domain&gt;&amp;dataSource=&lt;data_source&gt;&amp;authenticationMethod=&lt;auth_method&gt;&amp;kerberos.spn=&lt;spn&gt;&amp;kerberos.useTicketCache=true</connection-url-pattern>
	<ping-command>SELECT 1 FROM /services/databases/system/all_domains</ping-command>
	<connection-properties>
	<connection-property>
	<key>spotfire.connection.pool.factory.data.source</key>
	<value>kerberos.data.source</value>
	</connection-property>
	<connection-property>
	<key>spotfire.kerberos.gsscredential.property</key>
	<value>userGSSCredential</value>
	</connection-property>
	</connection-properties>
	<supports-catalogs>true</supports-catalogs>
	<supports-schemas>true</supports-schemas>
	<use-ansii-style-outer-join>true</use-ansii-style-outer-join>
	<metadata-provider>com.spotfire.ws.im.ds.sql.composite.CompositeMetadataProvider</metadata-provider>
	<supports-procedures>true</supports-procedures>
</jdbc-type-settings>

The exact parameters needed on the connection URL may vary by environment, depending on how Kerberos is configured. For Kerberos with unconstrained delegation, remove the spotfire.kerberos.gsscredential.property connection property, as shown in the sketch below.
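
As a sketch based directly on the note above, for unconstrained delegation the connection-properties block of the template would reduce to the following (the rest of the template is unchanged):

	<connection-properties>
	<connection-property>
	<key>spotfire.connection.pool.factory.data.source</key>
	<value>kerberos.data.source</value>
	</connection-property>
	</connection-properties>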

TIBCO Spotfire Data Catalog / Attivio Semantic Data Catalog

The JDBC driver is downloaded from the Provision Mart section in the Data Catalog software.

Data Source Template for TIBCO Spotfire Data Catalog / Attivio Semantic Data Catalog

<jdbc-type-settings>
    <type-name>Attivio DSD</type-name>
    <driver>com.attivio.jdbc.AttivioDriver</driver>
    <connection-url-pattern>jdbc:attivio:&lt;mart&gt;@mm://&lt;host&gt;:&lt;port&gt;</connection-url-pattern>
    <supports-catalogs>true</supports-catalogs>
    <use-ansi-join>true</use-ansi-join>
    <supports-schemas>true</supports-schemas>
    <string-literal-quote></string-literal-quote>
    <connection-properties>
        <connection-property>
            <key>use-ansii-style-outer-join</key>
            <value>true</value>
        </connection-property>
    </connection-properties>
</jdbc-type-settings>

Tibero

If you want to use Tibero through Information Designer (JDBC driver), then use the following data source template.

Data Source Template for Tibero Database (example)

<jdbc-type-settings>
  <type-name>tibero</type-name>
  <driver>com.tmax.tibero.jdbc.TbDriver</driver>
  <connection-url-pattern>jdbc:tibero:thin:@&lt;host&gt;:&lt;port1521&gt;:&lt;sid&gt;</connection-url-pattern>
  <ping-command>SELECT 1</ping-command>
  <metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
  <sql-filter>com.spotfire.ws.im.ds.sql.BasicSQLFilter</sql-filter>
  <sql-runtime>com.spotfire.ws.im.ds.sql.BasicSQLRuntime</sql-runtime>
  <fetch-size>10000</fetch-size>
  <batch-size>100</batch-size>
  <table-types>TABLE, VIEW</table-types>
  <supports-catalogs>true</supports-catalogs>
  <supports-schemas>true</supports-schemas>
  <supports-procedures>true</supports-procedures>
  <table-expression-pattern>[$$schema$$.]$$table$$[@$$catalog$$]</table-expression-pattern>
  <procedure-expression-pattern>[$$schema$$.]$$procedure$$[@$$catalog$$]</procedure-expression-pattern>
</jdbc-type-settings>


Feedback (6)

Dave,

Based on the Athena JDBC Driver Migration Guide, you are correct that the driver class has changed from com.amazonaws.athena.jdbc.AthenaDriver to com.simba.athena.jdbc.Driver. I will update this page. Thank you for pointing this out. It looks like there are a few changes that might affect the connection URL as well.

Peter

Peter McKinnis 6:49am Sep. 24, 2019

Anyone use Athena? 

The current AWS Athena JDBC driver, 2.09, does not include the specified driver class.

<driver>com.amazonaws.athena.jdbc.AthenaDriver</driver>

The driver available from the Amazon download appears to be a Simba driver, and the following driver class may be required.

<driver>com.simba.athena.jdbc.Driver</driver>

Appreciate any insights before I log a support call.

dave.williams 4:42pm Sep. 25, 2019

The Spotfire Connectors do not need a datasource template.  Datasource templates are only needed for JDBC connections from the Spotfire Server using Information Services.  The Spotfire Connector for Salesforce connects directly from the client to Salesforce.com.  The Spotfire Database Drivers page on the TIBCO Community lists the drivers needed to connect to Salesforce.com and other Spotfire Connectors.

Peter McKinnis 12:21pm May. 21, 2018

ritu.rani,

I responded to your specific questions on the community regarding connectivity to secure HBase: https://community.tibco.com/questions/do-we-need-kerberized-phoenix-data-source-connect-hbase-secure#node-931021

Peter

Peter McKinnis 9:13am Sep. 14, 2017

Hi,

For connecting to secure HBase, you have mentioned that "this configuration does not require Kerberos to be configured with Spotfire Server. This configuration is using a service principal to connect and not the Spotfire user. The information passed to the Phoenix driver is used to do the secure connection."

When connecting to kerberized HBase, what did you enter in the Username and Password fields of the Phoenix data source template? Also, did you copy the keytab file to the Spotfire Server?


ritu.rani 3:39am Sep. 14, 2017

doctemp09,

If you need an answer, you need to ask the question in the answers section. See the TIBCO Community Wiki page Feedback Policy.


Don Johnson 8:11am Apr. 21, 2017