Connect to an Apache Spark Python notebook on Azure Databricks

I am trying to use the output of an Apache Spark Python notebook from Azure Databricks.

Ideally, I would like to set document properties from the Spotfire view and use them as input to a Spark job.

This job would be triggered manually from the Spotfire view by a Spotfire Cloud user who has no knowledge of this backend.
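For context, on the Databricks side a notebook job typically reads its inputs through `dbutils.widgets`. A minimal sketch of that notebook side, where the parameter name `region` is purely an illustrative assumption:

```python
def get_param(name, default):
    """Read a job parameter via dbutils.widgets inside Databricks,
    falling back to a default when dbutils is not defined
    (e.g. when testing the notebook code locally)."""
    try:
        return dbutils.widgets.get(name)  # dbutils only exists on Databricks
    except NameError:
        return default

# Hypothetical parameter that a Spotfire document property could feed:
region = get_param("region", "EU")
```

The fallback makes the same code runnable both inside a Databricks job and in local tests.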

I downloaded the Apache Spark SQL ODBC driver from 

https://docs.tibco.com/pub/spotfire/general/drivers/data_sources/connector_apache_spark_sql.htm



I then followed the steps at

https://docs.tibco.com/pub/sfire-analyst/10.6.0/doc/html/en-US/TIB_sfire-analyst_UsersGuide/connectors/apache-spark/apache_spark_details_on_apache_spark_sql_connection.htm

However, I am stuck at this step, since I have no clue how to connect a Spotfire view to a notebook/job on Databricks.



Edit:



I found this link:

https://docs.microsoft.com/en-us/azure/databricks/bi/jdbc-odbc-bi

Note that the username and password are

token:<generated-token>

where <generated-token> is a personal access token that you generate on Databricks.
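If it helps others, those settings can be sketched as an ODBC connection string. The driver name `Simba Spark ODBC Driver` and the key names follow the Databricks ODBC documentation linked above; host, HTTP path, and token are placeholders:

```python
def databricks_odbc_connstr(host, http_path, token):
    """Build an ODBC connection string for a Databricks cluster.
    AuthMech=3 means username/password auth, where the username is
    the literal string 'token' and the password is the personal
    access token, as the linked Microsoft doc describes."""
    return (
        "Driver=Simba Spark ODBC Driver;"
        f"Host={host};Port=443;"
        "SSL=1;ThriftTransport=2;"
        f"HTTPPath={http_path};"
        "AuthMech=3;UID=token;"
        f"PWD={token}"
    )
```

A string like this could then be passed to any ODBC client (e.g. `pyodbc.connect(...)`).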

I am now able to connect to the clusters and see the data that is available on the Databricks platform as well.

However, I still don't understand how I can run a Spark job from this connection and pass input parameters to Spark.
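One route that might work, separate from the ODBC connection itself, is the Databricks Jobs REST API: a `jobs/run-now` request whose `notebook_params` become `dbutils.widgets` values inside the notebook. A sketch, where the job id and the `region` parameter are assumptions for illustration:

```python
def run_now_payload(job_id, params):
    """Request body for POST /api/2.1/jobs/run-now: triggers an
    existing notebook job, passing params as notebook widgets."""
    return {"job_id": job_id, "notebook_params": params}

payload = run_now_payload(42, {"region": "EU"})  # 42 is a hypothetical job id

# The payload would be sent with any HTTP client, authenticated with the
# same personal access token, e.g.:
# requests.post(f"https://{host}/api/2.1/jobs/run-now",
#               headers={"Authorization": f"Bearer {token}"},
#               json=payload)
```

Whether the Spotfire view can issue such an HTTP call (e.g. from a script attached to a button) is the part I am still unsure about.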
