TIBCO provides data science for everyone: data scientists, data engineers, developers, and more. But you all have different preferences and requirements for how to do data science, including choice of development environment and coding languages. Fortunately, TIBCO Data Science (a single, unified platform for creating and operationalizing
data science) can handle most preferences, technologies, and languages. Here is a list that organizes them, with links to tips and tricks elsewhere on the Community to help you.
Development Scripting Languages and Custom Extensions
R, Python, Scala, Java, C#, C, SQL, MDX, Pig, Hive QL, Spark
TIBCO Data Science tools offer a wide range of native features, so in many cases the user needs no coding at all. Nevertheless, in addition to these no-code options, you can also use scripting languages to implement data science computations. TIBCO Data Science tools typically use nodes/operators (the basic elements from which the analytical process is built) as part of their graphical workflows, and a node/operator can incorporate code from various scripting languages. At runtime, such a node/operator executes the code according to its type (for example, by calling another execution environment) and typically brings the results back so that subsequent nodes/operators can use them in further analysis. An example of such a workflow is below.
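To make the idea concrete, here is a minimal sketch of the kind of code a Python script node might run mid-workflow. The variable names (`input_rows`, the derived column) and the convention of receiving and returning rows are assumptions for illustration; each TIBCO Data Science tool defines its own interface between the node and the workflow data.

```python
# Illustrative sketch only: the exact way a script node exposes its input
# and output data depends on the specific tool. Here we assume the node
# receives a list of row dicts and returns a transformed list.

def run_node(input_rows):
    """Add a derived column, as a script node might do mid-workflow."""
    output_rows = []
    for row in input_rows:
        row = dict(row)  # copy so upstream data stays untouched
        # Hypothetical derived feature: ratio of two upstream columns
        row["util_ratio"] = row["used"] / row["limit"] if row["limit"] else 0.0
        output_rows.append(row)
    return output_rows

# What the node would hand to downstream operators:
rows = [{"used": 250.0, "limit": 1000.0}, {"used": 10.0, "limit": 0.0}]
print(run_node(rows))
```

The point is only that the script's result flows back into the workflow, where subsequent nodes/operators pick it up like any other data.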
Deployment (Code Generation Languages)
Predictive models generated in C, C++, C#, Java, PMML, PFA, SAS, SQL Stored Procedure in C#, SQL User Defined Function in C#, Statistica Visual Basic
Once predictive modeling is done, the tools can generate code for the model, which can then be used for scoring either in the TIBCO Data Science platform itself or in other applications (deployment environments, real-time scoring engines, or even gateways).
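For a sense of what generated scoring code looks like, here is a hand-written Python sketch in the same spirit: a self-contained function with the trained coefficients baked in, so it can run in a deployment environment with no modeling library installed. The model form (logistic regression) and all coefficient values are made up for illustration, not output of any TIBCO tool.

```python
import math

# Hypothetical exported-model sketch: coefficients are embedded as
# constants so the scoring function has no runtime dependencies.
INTERCEPT = -1.5
COEF_AGE = 0.04
COEF_BALANCE = 0.0008

def score(age, balance):
    """Return the predicted probability from the baked-in logistic model."""
    z = INTERCEPT + COEF_AGE * age + COEF_BALANCE * balance
    return 1.0 / (1.0 + math.exp(-z))

print(score(40, 1200.0))
```

Real exports in C, Java, PMML, or SQL follow the same principle: the fitted parameters travel with the scoring logic, decoupled from the training environment.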
Execution Environments
TIBCO Data Science tools can invoke the following environments as part of the computational process:
TERR, R, Python, SAS, MATLAB, most RDBMS, most flavors of Hadoop, Hive, Spark, Flogo
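A common pattern when invoking an RDBMS is to push the computation into the database and bring back only the result, which is what a SQL Execute operator does. The sketch below uses Python's standard-library `sqlite3` as a stand-in for a production RDBMS; the table and query are invented for illustration.

```python
import sqlite3

# "Push computation to the database" sketch: the workflow sends SQL to the
# engine, the engine does the grouping/aggregation, and only the small
# aggregated result travels back. sqlite3 stands in for a real RDBMS here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100.0), ("east", 50.0), ("west", 75.0)],
)

# The heavy lifting happens inside the database engine, not in the workflow
result = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(result)  # [('east', 150.0), ('west', 75.0)]
```

The same shape applies to Pig and Hive QL operators: the script runs where the data lives, and the workflow consumes the result.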
Analytic Marketplaces
Azure ML, AWS, Apervita, Algorithmia, H2O, Microsoft CNTK, TIBCO Community Exchange
External models, methods, and know-how can also be taken from external sources such as marketplaces. Again, you can use a node in your analytical workflow to invoke an external source and utilize what it returns, whether that is a model, a single method, or a whole analytical procedure. All of this is combined inside the single processing environment of TIBCO Data Science.
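Marketplace-hosted models are typically invoked over REST. The sketch below shows how such a call could be packaged from Python; the endpoint URL and the JSON payload schema are hypothetical, since each service (Azure ML, Algorithmia, and so on) defines its own API, and the request is constructed but deliberately not sent.

```python
import json
import urllib.request

# Hypothetical marketplace scoring endpoint (placeholder, not a real URL)
ENDPOINT = "https://example.invalid/models/churn/score"

def build_scoring_request(features):
    """Package feature values as a JSON POST request (built, not sent)."""
    body = json.dumps({"inputs": [features]}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_scoring_request({"age": 40, "balance": 1200})
print(req.get_method(), req.full_url)
```

In a workflow, a marketplace node performs this request/response round trip for you and exposes the returned scores as ordinary workflow data.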
You can find some of the references connected with the topic below:
- (video, documentation) Open source integration in Statistica
- (documentation) Jupyter notebooks in Spotfire Data Science
- (documentation) Custom operators in Spotfire Data Science
- (documentation) SQL Execute operator in Spotfire Data Science
- (documentation) Pig Execute operator in Spotfire Data Science
- (documentation) HQL Execute operator in Spotfire Data Science
- (video) Model deployment
- (documentation) Model export options in Spotfire Data Science
- (video, blog) Edge scoring - Flogo IoT example
- (wiki) Matlab integration
- (video) H2O and other marketplaces connection with Statistica
- (video) Spark integration with Statistica
- (video, wiki) Integration with analytic marketplaces
- (wiki) Deep learning through Microsoft Cognitive Toolkit
- (exchange) TIBCO Community Exchange templates and extensions
Using development scripting languages directly from TIBCO Spotfire:
Back to main Data Science wiki page