Sending HL7 v2 Messages with Hasura, Hapi and Serverless

Simon Johnson
Towards Data Science
12 min read · May 18, 2022


Image by author

Health Level 7 (HL7) Version 2 was developed in the late 80s and for many years was the most popular data exchange standard in the world. The standard has more recently been superseded by HL7 Fast Healthcare Interoperability Resources (FHIR), but the adoption rate has been slower than expected, so even in 2022 working with HL7 v2 is still surprisingly common.

In this post I’m going to outline a solution for a typical use case: imagine we have a legacy Electronic Medical Record (EMR) system with a supporting relational DB at a clinic that needs to send a message (eg ADT_A04) to another facility whenever a new patient is registered. The steps below can be run on a development machine as a demo — no cloud platform is required. And although this post is focused on HL7 v2, we’re currently looking into a GraphQL & REST FHIR setup for Hasura, so if that’s something you’d be interested in please go and upvote our GitHub discussion.

The most common approach to this problem is to reach for an integration tool called Mirth Connect — a nice Java-based server application that was first developed over 15 years ago and is squarely focused on the healthcare industry. Although the core of Mirth is Open Source, extensions such as FHIR support require a subscription that as of writing costs around $20K/year.

Hasura in contrast has only been around for a few years but it’s being used by organizations across industries from Walmart to Airbus and supports modern microservices architecture and design patterns. Hasura is completely Open Source (with all features freely available), developed in Haskell and can be quickly deployed as a Docker image.

Why Hasura?

A few reasons for choosing Hasura over Mirth Connect include:

  • Instant APIs: If you’re adding HL7 v2 to a legacy EMR via the supporting RDBMS chances are this is just one of many (current or future) integrations, so getting an instant API on top of your existing DB for free is sure to come in handy. Also, in order to support this feature Hasura tracks the DB schema which has the added benefit of automatically generating table column maps for easy assignment to HL7 message template fields without having to manually write SQL queries.
  • GraphQL: If you’re still thinking in terms of REST vs GraphQL then you’re missing the big picture of federation, schema stitching and remote joins and I encourage you to check out the presentation by Uri Goldshtein for Firely DevDays. GraphQL also allows for rapid development of custom screens and dashboards (which are ubiquitous in healthcare) by supporting modern client frameworks (eg Gatsby, Next.js, Gridsome) that can read/write the data they want without requiring development of custom endpoints.
  • Synchronous Events: Behind the scenes Hasura creates and manages triggers in the host DB so that corresponding actions are fired synchronously. Mirth’s approach on the other hand is to poll the database periodically which is not as efficient and runs contrary to event driven architecture.
  • Admin Console: This type of integration can be a fiddly process so anything to aid debugging often becomes indispensable. Mirth Connect does have a Java GUI but when compared to Hasura it’s missing one feature that’s particularly useful for this scenario: a visual DB explorer. The Hasura web-based Admin console allows us to visualize our DB as well as complete CRUD operations on rows, tables and relationships without leaving the web browser.

Demo Overview

Requirements: Git, Docker, Node.js, NPM, Java ≥8

Image by author, logos used with permission

This solution takes a microservices approach, starting with Hasura connecting to the existing EMR DB and monitoring a Patients table for new inserts. When a new patient is registered, Hasura transforms the insert data into a templated HL7 message in JSON format and sends it as an HTTP request to a Java HAPI service running on the Serverless framework (using serverless-offline for this demo). Java HAPI is the gold standard for HL7 processing — the service parses the JSON HL7 message, converts it to the ER7 format (pipe-delimited) and forwards it on as an HTTP request to the HTTP-MLLP Gateway. The Gateway is a simple Node.js app that takes the HTTP request body in ER7 format and sends it to the final destination over MLLP, receives the ACK response and returns it to the Java HAPI service, which in turn returns the response to Hasura. Ideally the Gateway would also be running as a serverless function but unfortunately AWS API Gateway and Application Load Balancer currently only allow for HTTP-based invocation.
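The flow above can be sketched as a chain of small transformations. The function names and message contents below are purely illustrative sketches of each hop, not the real APIs of Hasura, HAPI or the gateway:

```javascript
// 1. Hasura's payload transform maps the inserted row onto an HL7-shaped object.
const transformRow = (row) => ({
  messageType: "ADT_A04",
  patientName: `${row.lname}^${row.fname}`, // HL7 name components use "^"
});

// 2. The HAPI service renders the JSON message as pipe-delimited ER7
//    (grossly simplified here; real messages carry many more segments and fields).
const toEr7 = (msg) =>
  `MSH|^~\\&|EMR|CLINIC|||20220518||ADT^A04|1|P|2.5.1\rPID|1||||${msg.patientName}`;

// 3. The gateway frames the ER7 payload with MLLP control characters for the TCP hop.
const mllpFrame = (er7) => `\x0b${er7}\x1c\x0d`;

// End-to-end: each microservice only needs the previous stage's output.
const send = (row) => mllpFrame(toEr7(transformRow(row)));
console.log(send({ fname: "JANE", lname: "DOE" }));
```

The ACK then flows back along the same chain in reverse, ending up in Hasura's event log.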

Getting Started with Postgres and Hasura

Hasura supports Postgres, MS SQL, Citus, BigQuery and has just put out an early release for MySQL. For this demo we’re going to use Postgres with a slightly refactored patient_data table from OpenEMR.

Follow the commands below to get set up.

$ git clone git@github.com:whitebrick/hl7v2-hasura-hapi-serverless.git
$ cd hl7v2-hasura-hapi-serverless
$ psql
# Create a new user, DB and add the pgcrypto extension
postgres=# CREATE USER myemrusr WITH password 'myemrpwd';
postgres=# CREATE DATABASE myemr WITH OWNER = myemrusr;
postgres=# \c myemr
myemr=# CREATE EXTENSION pgcrypto;
myemr=# \q
# Test new user and DB and load data
$ psql -U myemrusr myemr
myemr=> \i ./sql/openemr_patient_data_table_postgres.sql
myemr=> \i ./sql/openemr_patient_data_rows_postgres.sql

Once we have our patient_data table set up we’re ready to launch Hasura from Docker and configure it to use our new DB as below (Note: the DB host below is for Mac; see the Guide for Linux or Windows). Hasura creates a new schema hdb_catalog inside the DB to persist all of its metadata so it won’t touch our data in the public schema unless we tell it to.

# Edit as required
$ vi ./hasura_docker_run.bash
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://myemrusr:myemrpwd@host.docker.internal:5432/myemr \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
-e HASURA_GRAPHQL_DEV_MODE=true \
-e HASURA_GRAPHQL_ADMIN_SECRET=admin \
hasura/graphql-engine:latest
$ bash ./hasura_docker_run.bash

Now head over to http://localhost:8080 and when prompted for the admin secret enter your value from above (“admin” in this case). Once signed in, the first thing we need to do is track the patient_data table so Hasura can analyze and monitor it. Click on the Data nav tab and click the Track button next to the patient_data table. Once tracked we can now visualize and manage the database from the convenience of the Admin Console as described above.

Image by author

Now that Hasura understands our table, as a quick example of how easy it is to get going with instant APIs, if we click on the API tab we can simply select a few checkboxes on the left and a GraphQL query is written before our eyes — complete with syntax highlighting, code completion and error checking. We can then hit the play button to see the results of our query.
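Outside the console, the same kind of query can be sent to Hasura’s standard /v1/graphql endpoint from any HTTP client. Here is a minimal Node sketch, assuming the local endpoint and admin secret configured above (the column names come from the OpenEMR patient_data table loaded earlier):

```javascript
// GraphQL query against the tracked patient_data table.
const query = `
  query RecentPatients {
    patient_data(limit: 5) {
      fname
      mname
      lname
    }
  }`;

async function fetchPatients() {
  const res = await fetch("http://localhost:8080/v1/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-hasura-admin-secret": "admin", // value of HASURA_GRAPHQL_ADMIN_SECRET above
    },
    body: JSON.stringify({ query }),
  });
  return (await res.json()).data.patient_data;
}

// fetchPatients().then(console.log); // uncomment with Hasura running locally
```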

Image by author

Running HAPI Serverless

We’ll leave Hasura for a minute and get our HAPI microservice running by installing Serverless Framework and then building and testing hapi-serverless with the commands below. The serverless-offline plugin allows us to run the lambda locally as it would run in the cloud by using Docker behind the scenes, which is why we’re pulling the lambci/lambda:java8 image.

NB: Because serverless-offline uses Docker, the initial request may take up to a minute or so to process.

# Build
$ git clone git@github.com:whitebrick/hapi-serverless.git
$ cd hapi-serverless
$ npm install
$ mvn package
$ docker pull lambci/lambda:java8
# Start
$ npm start
...
Server ready: http://localhost:3000
# Test
$ cd test/functional
$ bash ./test_parsing.bash
$ bash ./test_forwarding.bash

Creating HL7 v2 Messages from Hasura

The next step is to get a template message. Although we can build a message from scratch, it’s usually best practice to ask the consumer for a template so that any quirks or deviations from the standard are accounted for. For this example I googled for ADT_A04 samples and the North Dakota Department of Health Messaging Guide was the first to appear. I copied the ER7 formatted example message from the guide and pasted it into the file ./hl7/NDDH_ADT_A04.er7 and then POSTed it to HAPI Serverless to get the JSON representation in ./hl7/NDDH_ADT_A04.json.
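To get a feel for what HAPI is working with, ER7 has a simple layered structure: segments are separated by carriage returns, fields by pipes and components by carets. The toy snippet below only illustrates that structure (the sample values are made up; the real parsing is done by HAPI):

```javascript
// Minimal ER7 field access, for illustration only.
const sample =
  "MSH|^~\\&|SENDING_APP|SENDING_FAC|RECEIVING_APP|RECEIVING_FAC|20220518||ADT^A04|MSG00001|P|2.5.1\r" +
  "PID|1||12345^^^MRN||DOE^JANE^M";

// Index segments by their three-letter name, splitting each into fields.
const segments = Object.fromEntries(
  sample.split("\r").map((seg) => [seg.slice(0, 3), seg.split("|")])
);

// PID-5 is the patient name; its components are separated by "^".
const [family, given] = segments.PID[5].split("^");
console.log(family, given); // DOE JANE
```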

Now returning to Hasura, we’re going to click on the Events tab and create a new event with the name HL7-Send_ADT and attach it to our patient_data table in the public schema. We want to trigger on new Insert operations only and we’ll put in the local HAPI Serverless URL from above but adjusted for Docker networking (http://host.docker.internal:3000/dev — this is for Mac; see the Guide for Linux or Windows).

Image by author

We also want to increase the timeout to 120 seconds to give our HAPI Serverless Docker image plenty of time to startup, set the Request Method to POST and Content-Type header to application/json.

Image by author

Next, we want to click Add Payload Transform and this is where we can map the inserted DB values to our HL7 JSON template. Because Hasura already knows about our table, it can automatically create the expected Insert input. We then paste our template from ./hl7/NDDH_ADT_A04.json and start mapping the values to our columns — for this demo we’ll just map the first, last and middle names. As we can see below, the transform editor also comes with syntax highlighting, code completion and validation, as well as a sample preview (that displays as soon as the code is complete/valid). The {{$body.event.data.new.*}} paths map to the JSON structure of the input using Hasura’s own Kriti Lang, which, if it looks familiar, is inspired by Go’s templating language.
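To make those paths concrete, here is an abridged version of the payload Hasura POSTs for an insert event (the shape follows Hasura’s event trigger documentation; the row values are sample data), together with a tiny resolver showing what a path like event.data.new.fname points at:

```javascript
// Abridged Hasura event trigger payload for an INSERT on patient_data.
const body = {
  event: {
    op: "INSERT",
    data: {
      old: null, // no previous row for an insert
      new: { fname: "JANE", mname: "M", lname: "DOE" },
    },
  },
  table: { schema: "public", name: "patient_data" },
  trigger: { name: "HL7-Send_ADT" },
};

// Resolve a dotted path the way the template engine conceptually does.
const resolve = (obj, path) =>
  path.split(".").reduce((acc, key) => acc?.[key], obj);

console.log(resolve(body, "event.data.new.fname")); // JANE
```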

Image by author

Finally click on the Save button at the very bottom of the page and now let’s test it out. Click back on the Data nav tab, select the patient_data table followed by the Insert Row menu tab and then enter values for the names (see below) and click the Save button.

Image by author

Assuming our hapi-serverless is still running and we have the Docker networking URL correct, we should now see a hit on the terminal log. If we click back to the Events nav tab, select the HL7-Send_ADT event and then the Pending Events sub-tab we should see a new event waiting as our HAPI Serverless takes some time to start up and respond. Eventually the event will be moved to the Processed Events tab and will allow us to view the request and response where we can see our ER7 conversion.

Image by author

HTTP to MLLP

Unfortunately HTTP was not as widespread in the late 80s as it was in the late 90s so we’re stuck with MLLP, but at least it’s still over TCP/IP. In order to forward the ER7 message over MLLP we’ve put together a very basic gateway running on Node using Express and the mllp-node package. The gateway looks for a header, eg Forward-To: mllp://ack.whitebrick.com:2575, and uses mllp-node to send the body with the corresponding control characters and then returns an HTTP response. If you have your own endpoint to test receiving ACK messages you can use that in the Forward-To header, otherwise you’re welcome to use our Apache Camel test endpoint above — details here.

# Build
$ git clone git@github.com:whitebrick/http-mllp-node.git
$ cd http-mllp-node
$ npm install

# Start
$ node gateway.js
...
HTTP server listening on port 3030
# Test
$ cd test/gateway-functional
$ bash ./functional.bash
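Under the hood the gateway’s job is small: wrap the ER7 body in MLLP control characters (a vertical tab before, file separator plus carriage return after) and work out the TCP destination from the Forward-To header. A sketch of both steps (illustrative, not the mllp-node internals):

```javascript
const VT = "\x0b"; // vertical tab: start-of-block
const FS = "\x1c"; // file separator: end-of-block
const CR = "\x0d"; // carriage return: trailer

// Frame/unframe an ER7 payload per MLLP: <VT> message <FS><CR>.
const frame = (er7) => `${VT}${er7}${FS}${CR}`;
const unframe = (data) => data.slice(1, -2); // strip <VT> and trailing <FS><CR>

// Parse a Forward-To header like "mllp://ack.whitebrick.com:2575".
// Node's WHATWG URL parser handles the authority of non-special schemes like mllp:.
const parseForwardTo = (header) => {
  const { hostname, port } = new URL(header);
  return { host: hostname, port: Number(port) };
};

console.log(parseForwardTo("mllp://ack.whitebrick.com:2575"));
```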

Sending HL7 v2 Messages from Hasura

Now that we have an MLLP gateway running, we can use the forwarding feature of HAPI Serverless to not only convert the message from JSON but also to forward the ER7 data through the gateway and receive the response back. In order to do this we simply add the headers below (adjust for your own Docker networking) to our Event configuration.

Image by author

With the additional headers in place, we can return to the Data page and insert another record to test the full send and response. If all goes well we should now see retEr7 and retJson values with the ACK data from the downstream MLLP consumer.

Image by author

And that’s it! We now have a full end-to-end solution so that when the EMR creates a new patient record, Hasura constructs an HL7 v2 JSON message from a template, sends it through an MLLP gateway, captures the ACK response and nicely displays the result in the Admin console.

Accessing Response Data and Raising Error Alarms

After sending some messages, the obvious next question is: what if I want to do something with the response data beyond just displaying it in the Admin Console? Because the Hasura metadata is stored in the same DB (but a separate schema) we can create a function+trigger to copy the data of interest into our own table and then set up additional Events on the new table. For example, let’s say that we wanted to raise an alarm by sending an E-mail or Slack message when there was an error response from the HL7 message.

Let’s first head back over to the Data nav tab, click the SQL menu on the left, paste the code below and hit the Run button to create a new table for our error messages.

CREATE TABLE hl7_error_messages (
id serial PRIMARY KEY NOT NULL,
created_at timestamp without time zone DEFAULT now(),
request json,
response json
);

If we use psql to take a look at the Hasura tables hdb_catalog.event_log and hdb_catalog.event_invocation_logs we’ll see the same data as the Admin Console view. The trigger below calls the copy_hl7_error_messages function whenever Hasura runs any event (eg sending an HL7 message). The function then checks that the event name matches HL7-Send_ADT and that the status is not successful before copying the record over to our newly created hl7_error_messages table. The additional line is to un-escape and parse the response JSON because it’s going into a Postgres json field.

CREATE OR REPLACE FUNCTION copy_hl7_error_messages()
RETURNS trigger AS
$$
DECLARE
event_name text;
response_json json;
BEGIN
SELECT trigger_name INTO event_name FROM hdb_catalog.event_log WHERE id = NEW.event_id;
IF (event_name = 'HL7-Send_ADT') AND (NEW.status != 200 OR NEW.status IS NULL) THEN
-- unescape JSON
SELECT cast('{"data":{"message":' || cast(((NEW.response->'data'->'message') #>> '{}' )::jsonb as text) || '}}' as json) INTO response_json;
INSERT INTO hl7_error_messages (request, response)
VALUES (NEW.request, response_json);
RETURN NEW;
ELSE
RETURN NULL;
END IF;
END;
$$
LANGUAGE plpgsql;
CREATE TRIGGER copy_hl7_error_messages_trigger
AFTER INSERT
ON hdb_catalog.event_invocation_logs
FOR EACH ROW
EXECUTE PROCEDURE copy_hl7_error_messages();
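The un-escaping step deserves a quick illustration: the invocation log stores the webhook’s response body as a string inside a JSON document, so the interesting JSON arrives double-encoded. The same idea in JavaScript (sample values below are made up):

```javascript
// Stored form: "message" is a JSON *string*, not an object.
const logged = { data: { message: "{\"retEr7\":\"MSH|...\"}" } };
console.log(typeof logged.data.message); // string

// One extra parse recovers a queryable object, which is what the SQL
// function reconstructs before inserting into the json column.
const inner = JSON.parse(logged.data.message);
console.log(inner.retEr7); // MSH|...
```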

Next click on the default menu link under Databases and then the Track button next to hl7_error_messages. Now let’s force an error to test the function by killing our Gateway Node process and inserting a new patient record. With a bit of luck we should now have a record in the hl7_error_messages table and because we unescaped the message we can query it directly from our API with JSONPath (see below).

Image by author

Now that the error response lives in a tracked table we can go ahead and create any number of new Events through Hasura, following the same steps as we did earlier, to fire E-mail/Slack notification alarm hooks when new records are inserted into the hl7_error_messages table.

I hope this demo has given you another option to consider next time you’re working with HL7 v2 and if you are taking a microservices approach we’d love to hear how. If you need assistance or want more reading on healthcare interoperability and integration, head over to whitebrick.com and don’t hesitate to reach out.
