Spotfire interprets field types from CSV incorrectly & deletes information in fields

Hello. I have one table that I need to update on a regular basis, and I build a dashboard on top of it. The source file happens to be a CSV.

Problem: when the file is read, some field types are not recognized correctly. For example, I have a field that contains real numbers but also happens to contain a lot of integers, and Spotfire somehow sees only the integers and sets Integer as the field type. This would not have been a big issue if not for the following: when the data is imported, everything in the field that does not match the type Spotfire selected is deleted. So I cannot even change the field type after the fact to recover the correct information, since it is wiped on import.

So every time I reload the table, I have to go through the field types manually to make sure they are correct. I have 30-40 fields, so, as you can imagine, this is annoying.

Is there a way to optimize this process?

1 Comment

I have got the same problem. I tried an IronPython script that looked very promising, but it did not work either: it reads all the data transformations in Spotfire, but not the file import parameters, which are determined earlier, when the file is replaced (https://community.tibco.com/wiki/how-replace-file-datasource-data-table-...). A sketch of that replace pattern, with the import parameters set explicitly, follows below this comment. I would really appreciate it if someone found a solution here. The only workarounds I know are the following:

  • Prepare the file so that Spotfire does not generate wrong data types. In your case, always add a row of strings as the second row, so that everything is imported as String (with a macro? see the sketch after this list). You can delete these rows later in Spotfire.
  • You can also replace the file outside Spotfire with a file that has the identical name.
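
A minimal sketch of that first workaround in plain Python, run outside Spotfire before each reload (the file names are placeholders): it copies the CSV and injects one all-string dummy row right after the header, so Spotfire's type detection imports every column as String and nothing gets wiped.

    import csv

    src = "source.csv"             # placeholder: your raw export
    dst = "source_prepared.csv"    # placeholder: the file Spotfire loads

    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader = csv.reader(fin)
        writer = csv.writer(fout)
        header = next(reader)
        writer.writerow(header)
        # One dummy all-string row right after the header forces every
        # column to be imported as String, so no values are deleted.
        writer.writerow(["DUMMY"] * len(header))
        writer.writerows(reader)

After the import, delete the DUMMY row inside Spotfire, as described above.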
SpUser_ckf - Feb 23, 2019 - 3:11pm
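
For reference, the replace-data-source pattern from the wiki article linked in the comment looks roughly like the IronPython sketch below when the import parameters are supplied explicitly rather than re-read from the original import. The table name, file path, separator, and column types are all placeholder assumptions to adjust:

    # IronPython, run as a Spotfire script action.
    from Spotfire.Dxp.Data import DataType
    from Spotfire.Dxp.Data.Import import TextFileDataSource, TextDataReaderSettings

    filePath = r"C:\data\source.csv"   # placeholder path to the CSV

    settings = TextDataReaderSettings()
    settings.Separator = ","
    settings.AddColumnNameRow(0)       # row 0 of the file holds the column names

    # Pin the types you know are right instead of letting Spotfire guess.
    # Indices are zero-based; with 30-40 columns, a loop keeps this short.
    settings.SetDataType(0, DataType.String)
    settings.SetDataType(1, DataType.Real)   # the "mostly integers" real column
    for i in range(2, 40):                   # placeholder range for the rest
        settings.SetDataType(i, DataType.Real)

    # Replace the table's data with the explicitly typed source.
    table = Document.Data.Tables["My Table"]  # placeholder table name
    table.ReplaceData(TextFileDataSource(filePath, settings))

Because the reader settings pin every column type up front, values that merely look like integers survive the reload as Real, and the manual type check after each reload goes away.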