Permanent Connection with TCP Sender

  • Permanent Connection with TCP Sender

    Hello,

    Is there a way to keep the connection open between a Mirth Connect TCP Sender and another server?
    The server monitors its own socket and always logs a warning when Mirth closes the socket after all messages have been sent. "Keep connection open" only works while there are still messages to be sent.

    Thanks in advance!

  • #2
    Set "Keep Connection Open" to Yes and "Send Timeout" to 0.
    Best,

    Kirby

    Mirth Certified|Epic Bridges Certified|Cloverleaf Level 2 Certified

    Appliance Version 3.11.4
    Mirth Connect Version 3.8.0
    Java Version 1.6.0_45-b06
    Java (64 bit) Version 1.6.0_45-b06
    Java 7 (64 bit) Version 1.7.0_151-b15
    Java 8 (64 bit) Version 1.8.0_181-b13
    PostgreSQL Version 9.6.8
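
For anyone curious what those two settings amount to at the socket level, here is a minimal, illustrative sketch in plain Python with MLLP-style framing. This is not Mirth's actual implementation; the host/port and message payloads are made up. The point is that the client connects once and reuses the same socket for every message, which is the behavior "Keep Connection Open" with a Send Timeout of 0 is meant to preserve.

```python
import socket
import threading

VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"  # MLLP start/end-block bytes

def mllp_frame(msg: bytes) -> bytes:
    """Wrap a payload in MLLP start/end blocks."""
    return VT + msg + FS + CR

def mllp_unframe(data: bytes) -> bytes:
    """Strip MLLP framing from a received block."""
    return data.strip(VT + FS + CR)

def ack_server(srv: socket.socket, served: list) -> None:
    """Accept ONE connection and ACK every frame on it -- the
    'keep connection open' case: the socket stays open between
    messages instead of being closed after each send."""
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:  # client finally closed the connection
                break
            served.append(mllp_unframe(data))
            conn.sendall(mllp_frame(b"ACK"))

srv = socket.create_server(("127.0.0.1", 0))
served = []
threading.Thread(target=ack_server, args=(srv, served), daemon=True).start()

# Persistent client: connect once, reuse the socket for every send.
client = socket.create_connection(srv.getsockname())
for payload in (b"MSH|^~\\&|A", b"MSH|^~\\&|B"):
    client.sendall(mllp_frame(payload))
    reply = mllp_unframe(client.recv(4096))
    assert reply == b"ACK"
client.close()
print(served)  # both messages arrived over the single connection
```
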



    • #3
      Thanks!



      • #4
        Clarifying question about this behavior

        If the timeout is hit and the connection is closed, what will cause it to reopen? Will new messages prompt a reconnect?

        Thanks!



        • #5
          Originally posted by djantzen:
          If the timeout is hit and the connection is closed, what will cause it to reopen? Will new messages prompt a reconnect?

          Thanks!
          Yes, once the next message comes in the connection will be opened again.
          Step 1: JAVA CACHE...DID YOU CLEAR ...wait, ding dong the witch is dead?

          Nicholas Rupley
          Work: 949-237-6069
          Always include what Mirth Connect version you're working with. Also include (if applicable) the code you're using and full stacktraces for errors (use CODE tags). Posting your entire channel is helpful as well; make sure to scrub any PHI/passwords first.


          - How do I foo?
          - You just bar.
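
The reconnect-on-demand behavior described above can be sketched like this (illustrative Python only, not Mirth's code; `LazySender` and `timeout_close` are made-up names standing in for the TCP Sender and the Send Timeout firing):

```python
import socket
import threading

class LazySender:
    """Reconnects on demand: if the socket was closed (e.g. by an
    idle timeout), the next send() opens a fresh connection."""
    def __init__(self, addr):
        self.addr = addr
        self.sock = None
        self.connects = 0  # how many times we (re)opened the connection

    def send(self, payload: bytes) -> None:
        if self.sock is None:  # connection was closed: reopen it
            self.sock = socket.create_connection(self.addr)
            self.connects += 1
        self.sock.sendall(payload)

    def timeout_close(self) -> None:  # stands in for the Send Timeout firing
        if self.sock is not None:
            self.sock.close()
            self.sock = None

def accept_loop(srv: socket.socket) -> None:
    """Hold every connection the 'downstream server' accepts."""
    conns = []
    while True:
        conns.append(srv.accept()[0])

srv = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=accept_loop, args=(srv,), daemon=True).start()

s = LazySender(srv.getsockname())
s.send(b"first")       # opens connection #1
s.timeout_close()      # idle timeout closes it
s.send(b"second")      # next message triggers a reconnect
print(s.connects)      # 2 -- one connect per reopen
```
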



          • #6
            Thank you for the clarification!

            What we are seeing is that one Mirth instance forwarding MLLP messages to a second Mirth instance gradually slows down and grinds to a halt, enqueuing messages on the first instance. We are on version 3.5.0.8232.

            We have a simple example that demonstrates this in a local development environment. It appears that enabling the 'Keep Connection Open' option will cause the failure to occur after several thousand messages.

            However, when we opt *not* to keep the connection open, it takes tens of thousands of messages before the slowdown occurs and the queue grows. So it's better, but still a problem.

            To drain the queue we have to redeploy the channel to the sender Mirth instance, or restart it entirely.

            Can you provide any suggestions about what might cause such behavior?

            Thanks for your time,
            David Jantzen



            • #7
              An underlying network issue, queue settings, perhaps an issue delivering to the final destination...

              Can you post your channels, or at least a screenshot of the source and destination settings for each?

              Has this issue always happened, or did something change?
              Best,

              Kirby




              • #8
                And I have to retract most of that...

                It appeared at first that there was a difference between keeping the connection open versus reopening each time, but that did not turn out to be consistent. We see a backed up queue sporadically, regardless of the setting or the Send Timeout value.

                It may indeed be an underlying networking issue. The Mirth servers actually run within their own Docker containers, and that additional abstraction layer may be hiding the problem. Something we fixed in this process was a DNS configuration in the containers that was causing 4-second delays in any network operation involving address resolution. That may yet turn out to be the root cause here, but again, it's not yet reproducible.
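
If anyone else suspects the same resolver problem, a quick way to check is to time a lookup directly (a generic sketch; "localhost" is just a placeholder, substitute the hostname your channel actually resolves):

```python
import socket
import time

def resolution_time(host: str) -> float:
    """Time a single name lookup. A misconfigured resolver (as in
    the Docker containers described above) shows up as a large,
    consistent delay here -- e.g. ~4 seconds per call."""
    start = time.monotonic()
    socket.getaddrinfo(host, None)
    return time.monotonic() - start

print(f"lookup took {resolution_time('localhost'):.3f}s")
```
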
