Using an API GET from file store returns out of memory error


  • Using an API GET from file store returns out of memory error

    My goal is to use an API to sync files from a separate site to an internal location.

    These are very large files so when I execute my GET request Mirth returns an out of memory error.

    Is there a functionality in Mirth that can handle such a request? I tried writing the job routine in another language outside of Mirth to see what would happen and got the same results.

    I came across this older thread but the link is broken and doesn't give much information to go off of after that.
    Integration Architect

    HL7 FHIR Fundamentals Certified

  • #2
    Attachments will reduce your overall memory usage, but still require loading the entire file into memory.

    Calling out to a command line utility like curl could be an option.

    Another option could be to do it entirely in a javascript reader and work with Java input and output streams so that you can write as you read in small chunks instead of waiting to read the whole thing.

    Your message then could just be a summary of the results.
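To make the chunked approach concrete, here's a minimal sketch. Mirth channel scripts are JavaScript, but they call these same java.io classes, so it's shown as plain Java; the in-memory source stands in for the HTTP connection's InputStream, and the sink stands in for a FileOutputStream:

```java
import java.io.*;

public class ChunkedCopy {
    // Copy from `in` to `out` in small chunks so the whole payload never
    // has to fit in memory at once.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n); // write each chunk as soon as it is read
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Demo with an in-memory stream; in a channel script the InputStream
        // would come from the HTTP connection and the OutputStream would be
        // a FileOutputStream pointing at the destination path.
        byte[] payload = new byte[100_000];
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(payload), sink);
        System.out.println("copied=" + copied); // copied=100000
    }
}
```

Peak memory here is the 8 KB buffer regardless of payload size, and the channel message can then carry just a summary (bytes copied, destination path) instead of the file contents.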


    • #3
I tried the curl method, though I didn't initiate that call from Mirth. It worked alright for compressing the files, but I still get a timeout when doing large folders or the entire folder structure, which is the end goal.

My current setup is a JavaScript Reader using Unirest to make a GET request to their API endpoint ending in "/Download".

As you mentioned in relation to attachments, it seems to be trying to load the whole thing into memory before deciding what to do with it. I'm still looking into the stream functionality; I tried a stream function in Python, which didn't work.

What I don't understand is how I can write at the same time it's reading, even with I/O streams. To me it looks like it has to establish the connection and get all the files before it can do anything with them. I could force a curl command to wait longer before timing out, but I don't know if that would do anything besides make me wait longer before eventually timing out.
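On the read-while-writing question: an HTTP response body arrives over the socket incrementally, so the first read() can return long before the server has finished sending; you don't need the whole payload in hand before you start writing. A piped-stream pair illustrates the idea in-process (plain Java rather than a Mirth script, producer thread standing in for the remote server):

```java
import java.io.*;

public class ReadWhileWriting {
    public static long run() throws Exception {
        PipedOutputStream src = new PipedOutputStream();
        // 8 KB pipe buffer: the producer blocks once it is full, so the
        // consumer must be reading while the producer is still writing.
        PipedInputStream in = new PipedInputStream(src, 8192);

        // Producer thread: stands in for the server pushing bytes onto the
        // socket. It writes 1 MB in 4 KB chunks.
        Thread producer = new Thread(() -> {
            try (OutputStream out = src) {
                byte[] chunk = new byte[4096];
                for (int i = 0; i < 256; i++) {
                    out.write(chunk);
                }
            } catch (IOException ignored) {
            }
        });
        producer.start();

        // Consumer: reads chunks as they arrive -- long before the producer
        // has finished -- and could write each one straight to disk.
        long total = 0;
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        producer.join();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("consumed=" + run()); // consumed=1048576
    }
}
```

The consumer here sees data while the producer is still mid-transfer; an HTTP connection's InputStream behaves the same way, provided the client library doesn't buffer the whole response first.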

      Here is my current call to their API:

      var response = Unirest.get($g('ExternalVendor_URL') + "/Items(id)/Download")
      .header("Authorization", "Bearer " + accessToken)
      .asString(); // terminal call (presumably .asString() or similar) buffers the full body in memory

      Below that I would execute the write functionality, but it fails at the above call because of timeouts or out-of-memory errors in Mirth.
      Last edited by llong; 07-09-2018, 04:59 AM.

      • #4
        It appears that unirest does indeed download the entire file before you can do anything with it.


        • #5
Gotcha. The good news is I found out I don't need to download the full file, which helps, but it doesn't solve the Mirth issue.

I tried breaking the files down into smaller sections, where a folder may be 1 GB, but still got an out of memory error. I tried increasing the heap size to 2 GB, but Mirth wouldn't start; at 1 GB it started, but I still got an out of memory error. I also manually set the Unirest timeout to (0, 0), but then it hung indefinitely.
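For reference, in a typical Mirth Connect install the heap is set in mcserver.vmoptions (or mcservice.vmoptions for the Windows service) in the install directory; the shipped default is small, e.g.:

```
# mcserver.vmoptions -- raise the maximum heap (default is often -Xmx256m)
-Xmx1024m
```

One caveat, as an assumption rather than a diagnosis: a 32-bit JVM generally can't address a 2 GB heap, which could explain Mirth failing to start at that setting.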

I wrote the program as a Python script instead, which is able to handle it.

          • #6
            End Solution

Wanted to post a reply with what my actual solution ended up being (in case someone else runs into this problem), since Python eventually experienced the same issue with large file sizes.

Instead of downloading the files as a whole to sync between the cloud and our local storage, I created a recursive function in Mirth that makes API calls to each folder in the cloud file storage to find any deltas; when it finds one, it updates that single file in local storage.
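For anyone trying the same approach, here's a rough sketch of the recursive delta walk in plain Java, using a local directory in place of the vendor's folder API (a real version would list folders via API calls and compare against local copies; everything here is a stand-in):

```java
import java.io.IOException;
import java.nio.file.*;

public class DeltaSync {
    // Recursively walk `remote` (standing in for the cloud folder listing)
    // and copy only files that are missing locally or newer than the local
    // copy -- single-file updates instead of one bulk download.
    public static int sync(Path remote, Path local) throws IOException {
        int updated = 0;
        Files.createDirectories(local);
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(remote)) {
            for (Path entry : entries) {
                Path target = local.resolve(entry.getFileName());
                if (Files.isDirectory(entry)) {
                    updated += sync(entry, target); // recurse into subfolders
                } else {
                    boolean delta = !Files.exists(target)
                            || Files.getLastModifiedTime(entry)
                                    .compareTo(Files.getLastModifiedTime(target)) > 0;
                    if (delta) {
                        Files.copy(entry, target,
                                StandardCopyOption.REPLACE_EXISTING,
                                StandardCopyOption.COPY_ATTRIBUTES);
                        updated++; // one small transfer per changed file
                    }
                }
            }
        }
        return updated;
    }
}
```

Because each pass transfers only changed files, memory use stays proportional to a single file rather than the whole folder tree.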