Connect to an Apache Spark Python notebook on Azure Databricks
I am trying to use the output of an Apache Spark Python notebook from Azure Databricks.
Ideally I would like to set document properties from the Spotfire view and use them as input to a Spark job.
This job would be triggered manually from the Spotfire view by a Spotfire Cloud user, who has no knowledge of this backend.
I downloaded the Apache Spark SQL ODBC driver from
and then followed the steps described on
However, I am stuck at this step, since I have no clue how to connect a Spotfire view to a notebook/job on Databricks.
I found this link
which notes that the username is the literal string token and the password is <generated-token>, where <generated-token> is a personal access token you generate on Databricks.
I am now able to connect to the clusters and see the data that is available on the Databricks platform as well.
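For reference, this is roughly how I build the ODBC connection string. This is only a sketch, assuming the Simba Spark ODBC driver with token authentication; the host and HTTP path shown in the commented usage are placeholders to be replaced with the values from the cluster's JDBC/ODBC configuration tab:

```python
def databricks_odbc_conn_str(host: str, http_path: str, token: str) -> str:
    """Build an ODBC connection string for Databricks via the Simba
    Spark ODBC driver, authenticating with a personal access token
    (the username is the literal string 'token')."""
    return (
        "Driver=Simba Spark ODBC Driver;"
        f"Host={host};"
        "Port=443;"
        f"HTTPPath={http_path};"
        "SSL=1;"
        "ThriftTransport=2;"  # HTTP transport
        "AuthMech=3;"         # username/password authentication
        "UID=token;"          # literal username for token auth
        f"PWD={token}"        # the generated personal access token
    )

# With pyodbc installed, the string would be used like this
# (host and HTTPPath below are made-up placeholders):
# import pyodbc
# conn = pyodbc.connect(databricks_odbc_conn_str(
#     "adb-1234567890123456.7.azuredatabricks.net",
#     "sql/protocolv1/o/1234567890123456/0123-456789-abcd123",
#     "<generated-token>"))
```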
However, I still don't understand how I can run a Spark job from this connection, or how to pass input parameters to Spark.
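What I imagine I need is something like the Databricks Jobs REST API run-now call, passing the Spotfire document properties as notebook parameters. Below is a minimal sketch of that idea; the job ID, workspace URL, and parameter names are made up, and I have not verified that this is the right way to trigger it from Spotfire:

```python
import json
import urllib.request

def build_run_now_payload(job_id: int, notebook_params: dict) -> dict:
    """JSON body for the Databricks Jobs API 2.1 'run-now' endpoint,
    passing key/value parameters through to the notebook task."""
    return {"job_id": job_id, "notebook_params": notebook_params}

def trigger_job(workspace_url: str, token: str, job_id: int, params: dict):
    """POST to /api/2.1/jobs/run-now (makes a network call;
    workspace_url and job_id are placeholders here)."""
    body = json.dumps(build_run_now_payload(job_id, params)).encode()
    req = urllib.request.Request(
        f"{workspace_url}/api/2.1/jobs/run-now",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)  # response body contains the run_id

# Example payload the Spotfire side would have to produce from its
# document properties (property names here are hypothetical):
payload = build_run_now_payload(42, {"start_date": "2020-01-01", "region": "EU"})
```

Inside the notebook the parameters would then, as I understand it, be read with `dbutils.widgets.get("start_date")` and so on.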