Antonio Fermiano

PowerCustomerSuccess
  • Posts: 13
  • Joined
  • Last visited

Antonio Fermiano's Achievements

Newbie (1/14) · Reputation: 0

  1. In my original question, I said: "Prefetch != 0, therefore using the durable as a message broker." Revisiting the manual, I understand that a prefetch of zero is recommended for a direct connection, but that this value does not decide whether FTL makes a direct connection or uses the message-broker pattern. If that's true, it's not clear to me how to configure a direct connection versus the message-broker pattern... Can you explain it in more detail, please? If both endpoints in an application have compatible transports (e.g. shared memory transports pointing to the same memory segment), is a direct connection between them guaranteed? Thank you.
  2. It looks like my Multicast configuration was messy - I was trying to configure a direct path but I was actually using a lot of conflicting multicast groups. I've repeated the test with a correct configuration and everything worked fine with prefetch = 0.
  3. I'm executing the following scenario: Sender RV (custom) -> "TIBCO RV Adapter for TIBCO FTL" -> FTL cluster (3 nodes, on the same machine for testing purposes) -> Receiver FTL (custom). I'm logging everything I send and receive so that I can verify integrity (one way to do such a check is sketched after the last post below). The sender is sending ~3 MB/s using ~1024-byte packages. I'm using the "Shared Memory" transport for the endpoints and "Auto" for communication among clusters/stores.
     1) If nothing is restarted in this scenario, the integrity check is OK.
     2) If I keep restarting "Receiver FTL" and handle duplicated packages properly, the integrity check is OK. The durable does its job.
     3) If I keep restarting cluster elements to test HA using these steps (kill cluster element 1, wait 4 seconds, start cluster element 1, wait 15 seconds, kill cluster element 2, and so on), I can see little pauses (a backup node takes leadership, clients reconnect) and everything keeps working. However, my integrity check FAILS, because I lose about 13% of the packages in "Receiver FTL" at several points during the transmission.
     4) If instead of "publisher_settings": "store_send_noconfirm" I switch to "publisher_settings": "store_confirm_send", the integrity check is OK; however, with this configuration I experience severe performance issues (I'm unable to keep the receive rate at 3 MB/s on a very powerful physical server).
     So my questions are:
     a) Is this behavior OK? Is 100% delivery of packages only possible with "store_confirm_send"?
     b) Is it expected not to be able to handle 3 MB/s with "store_confirm_send", or am I configuring something wrong?
     I can provide more information about the scenario if necessary. Thank you in advance.
  4. Hello. I'm starting the adapter (rvda64) with the following config file:
     {
       "realm": {
         "applicationName": "rvftlconverterapp",
         "services": [
           {
             "endpoints": ["endpoint1"],
             "port": "29051",
             "fromRV": [
               {
                 "subjectName": "assunto1.assunto2.assunto3",
                 "parseSubject": [{
                   "subject1": 1,
                   "subject2": 2,
                   "subject3": 3
                 }],
                 "formatName": "formato"
               }
             ]
           }
         ],
         "url": "http://10.2.4.48:18080"
       }
     }
     If I use tibrvsend to send a message:
     ./tibrvsend -service 29051 -network '127.0.0.1;235.114.240.1;235.114.240.1' -daemon tcp:8025 assunto1.assunto2.assunto3 "Test message"
     I receive the "DATA" field correctly ("Test message"), but with the following content in the subject fields:
     subject1 = "assunto1"
     subject2 = ""
     subject3 = ""
     I'm using TIBCO FTL 6.4 and TIBCO RV 8.4.5. Is this a bug, or am I configuring it wrong? (A small receiver-side check that prints these fields is sketched after the last post below.) Thank you.
  5. We are evaluating the TIBCO FTL solution for our product; however, I'm having some issues. Scenario:
     1) Application A (publisher) sending packages to application B (subscriber).
     2) Multicast transport.
     3) Windows Server 2012, everything on the same machine, including the FTL server; and Red Hat Enterprise Linux 7.2 (tested on both).
     4) Packet size: 1024 bytes.
     5) Data rate: 3 MB/s (~3000 packets per second).
     6) Managed format.
     7) Using the C library.
     8) Very powerful server for both OSes: 32 * 8 cores, 128 GB RAM.
     After some successful attempts with multicast + prefetch != 0 (durable as message broker), we tried to switch to prefetch = 0 (direct path between publisher and subscriber) in order to reduce latency, due to our product requirements. However, we are encountering terrible latency issues. The packets are not being received at the uniform rate at which they are sent; instead we receive them in bursts, roughly one burst every 1 or 2 seconds. I have attached the configuration JSON I'm using (application: rvftlconverterapp, application A = endpoint1, application B = endpoint2). With the current configuration (store = rvftlconverterstore, prefetch 0), I have the performance issues I mentioned. If I switch to store = rvftlconverterstore2 (prefetch 1024), it works correctly. If I change it back to store = rvftlconverterstore but use shared memory, it works correctly. Am I doing anything wrong? Thank you in advance.
  6. We are evaluating the TIBCO FTL solution for our product; however, I'm having some issues. Scenario:
     1) Application A (publisher) sending packages to application B (subscriber).
     2) Multicast transport.
     3) Prefetch != 0, therefore using the durable as a message broker.
     4) Windows Server 2012, everything on the same machine, including the FTL server.
     5) Packet size: 1024 bytes.
     6) Data rate: 12 MB/s (~12200 packets per second).
     7) Managed format.
     8) Using the C library.
     I'm testing message delivery reliability, so application B:
     1) Closes itself every 2 seconds: the first time it closes gracefully (returns 0 after finishing a callback), the second time with a segfault inside the callback function. This simulates a complete disaster situation.
     2) Handles duplicated packets properly, as far as I can see.
     3) Both application A and application B log the content they send and receive, so I can compare whether the durable delivered everything to application B, even with all the carnage.
     If I use "implicit ACK", in other words if I use the TIBCO API in application B as:
     tibProperties_SetBoolean(ex, props, TIB_SUBSCRIBER_PROPERTY_BOOL_EXPLICIT_ACK, 0);
     (or just omit it), then I receive most of the packages in application B, but I still lose a few: sometimes a single packet, sometimes a whole block.
     If I use "explicit ACK", meaning:
     tibProperties_SetBoolean(ex, props, TIB_SUBSCRIBER_PROPERTY_BOOL_EXPLICIT_ACK, 1);
     and acknowledge each packet individually:
     tibMessage_Acknowledge(ex, msgs);
     then I don't miss anything in application B (a minimal explicit-ACK subscriber sketch appears after the last post below).
     I have attached the configuration JSON I'm using (application: rvftlconverterapp, application A = endpoint1, application B = endpoint2). Does it make sense to miss packets when not using explicit ACK? Am I misusing the API? Am I configuring it wrong? Is it a bug?
  7. It's also broken for me... but I found a workaround that you can use without needing the APIs:
     1) "Download Deploy".
     2) Format the JSON (e.g. jsonlint.com).
     3) Add the format manually.
     4) Go to "Edit Mode".
     5) "Upload Config".
     6) "Deploy".
     It will appear and work correctly.
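
One way to do the kind of integrity check described in post 3, sketched in C against the FTL C API: the publisher stamps each message with a monotonically increasing sequence number and the receiver reports gaps and duplicates. This is a minimal sketch, not the poster's actual logging code, and it assumes the managed format contains a 64-bit integer field named "seq" (a hypothetical field name that would have to be added to the format definition).

    #include <stdio.h>
    #include "tib/ftl.h"

    /* Publisher side: stamp every outgoing message with a monotonically
     * increasing sequence number in the (hypothetical) "seq" field. */
    static void
    sendWithSeq(tibEx ex, tibPublisher pub, tibMessage msg)
    {
        static tibint64_t nextSeq = 0;

        tibMessage_SetLong(ex, msg, "seq", nextSeq++);
        tibPublisher_Send(ex, pub, msg);
    }

    /* Subscriber side: call on every delivered message to detect gaps
     * (lost messages) and duplicates (redeliveries after a restart). */
    static void
    checkSeq(tibEx ex, tibMessage msg)
    {
        static tibint64_t expected = 0;
        tibint64_t        seq      = tibMessage_GetLong(ex, msg, "seq");

        if (seq < expected)
            printf("duplicate (redelivery): %lld\n", (long long) seq);
        else if (seq > expected)
            printf("gap: %lld message(s) missing before %lld\n",
                   (long long) (seq - expected), (long long) seq);

        if (seq >= expected)
            expected = seq + 1;
    }

A gap reported only while cluster nodes are being killed would point at messages the publisher sent without waiting for store confirmation ("store_send_noconfirm"), which is consistent with the check staying clean under "store_confirm_send".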
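
For post 4, a quick way to confirm what the RV adapter actually put into the FTL message is to print the fields on the receiving side. The sketch below uses the field names from the post ("subject1", "subject2", "subject3", "DATA"); the callback signature follows the FTL C sample programs, and the subscriber/event-queue setup around it is omitted.

    #include <stdio.h>
    #include "tib/ftl.h"

    /* Event-queue callback: print the adapter-produced fields of each message. */
    static void
    onAdapterMsg(tibEventQueue queue, tibMessage *msgs, tibint32_t msgNum,
                 void *closure, tibEx ex)
    {
        tibint32_t i;

        for (i = 0; i < msgNum; i++)
        {
            const char *s1   = tibMessage_GetString(ex, msgs[i], "subject1");
            const char *s2   = tibMessage_GetString(ex, msgs[i], "subject2");
            const char *s3   = tibMessage_GetString(ex, msgs[i], "subject3");
            const char *data = tibMessage_GetString(ex, msgs[i], "DATA");

            printf("subject1=\"%s\" subject2=\"%s\" subject3=\"%s\" DATA=\"%s\"\n",
                   s1 ? s1 : "", s2 ? s2 : "", s3 ? s3 : "", data ? data : "");
        }
    }

If only subject1 comes back populated, that narrows the problem to the adapter's parseSubject handling rather than to the receiving application.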
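
For post 6, here is a minimal explicit-ACK subscriber sketch built around the two calls quoted in the post (TIB_SUBSCRIBER_PROPERTY_BOOL_EXPLICIT_ACK and tibMessage_Acknowledge). The realm URL, application name, and endpoint name are placeholders taken from the attached configuration, and the surrounding setup follows the standard FTL C sample structure; treat it as a sketch rather than a drop-in program.

    #include <stdio.h>
    #include "tib/ftl.h"

    #define REALM_URL "http://10.2.4.48:18080"   /* placeholder, from the config */
    #define APP_NAME  "rvftlconverterapp"
    #define ENDPOINT  "endpoint2"                /* application B's endpoint */

    /* With explicit ACK enabled, the durable keeps each message until
     * tibMessage_Acknowledge is called, so a crash before the ACK causes
     * redelivery instead of loss. */
    static void
    onMessages(tibEventQueue queue, tibMessage *msgs, tibint32_t msgNum,
               void *closure, tibEx ex)
    {
        tibint32_t i;

        for (i = 0; i < msgNum; i++)
        {
            /* ... log / deduplicate / process msgs[i] here ... */
            tibMessage_Acknowledge(ex, msgs[i]);
        }
    }

    int
    main(void)
    {
        tibEx         ex    = tibEx_Create();
        tibProperties props = NULL;
        tibRealm      realm = NULL;
        tibSubscriber sub   = NULL;
        tibEventQueue queue = NULL;

        tib_Open(ex, TIB_COMPATIBILITY_VERSION);
        realm = tibRealm_Connect(ex, REALM_URL, APP_NAME, NULL);

        /* Request explicit acknowledgement instead of the default automatic
         * ACK that happens when the callback returns. */
        props = tibProperties_Create(ex);
        tibProperties_SetBoolean(ex, props,
                                 TIB_SUBSCRIBER_PROPERTY_BOOL_EXPLICIT_ACK, tibtrue);

        sub   = tibSubscriber_Create(ex, realm, ENDPOINT, NULL, props);
        queue = tibEventQueue_Create(ex, realm, NULL);
        tibEventQueue_AddSubscriber(ex, queue, sub, onMessages, NULL);

        while (tibEx_GetErrorCode(ex) == TIB_OK)
            tibEventQueue_Dispatch(ex, queue, TIB_TIMEOUT_WAIT_FOREVER);

        tibProperties_Destroy(ex, props);
        tibRealm_Close(ex, realm);
        tib_Close(ex);
        tibEx_Destroy(ex);
        return 0;
    }

Acknowledging inside the callback, after the message content has been logged, is what makes the crash test in the post safe: anything not yet acknowledged is redelivered when the subscriber reconnects.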