Mule ESB JMS Request-Reply Interaction Pattern Comparison

The JMS request-reply interaction pattern is very common for implementing synchronous RPC (Remote Procedure Call) over messaging. There are several ways to implement this pattern:

  • Temporary queue: each temporary queue is unique and owned by the sender of the message, which sets the temporary queue name in the JMSReplyTo header. The JMS Consumer processes the incoming request message and sends the response back on the specified temporary queue. A second JMS Consumer, on the sender's side, picks up the response message from the temporary queue;
  • JMS correlation ID: each JMS Producer sets the message's JMSCorrelationID to a value that is unique per request message. The JMSReplyTo property contains the name of a response queue shared among all Consumers listening for responses. The JMS Consumer processes the incoming request message and sends back a response carrying the same JMSCorrelationID as the request. All response JMS Consumers listen on the same response queue, but each uses a JMS Selector, applied on the JMS Broker (server) side, to filter out the messages addressed to it.
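
In Mule 3 configuration terms, a response Consumer with a broker-side selector can be sketched as follows. This is only an illustration: the queue name and the correlation ID value are hypothetical, and in practice frameworks build the selector string dynamically per request rather than hard-coding it:

```xml
<!-- Sketch only: shared response queue with broker-side filtering.
     'response.queue' and the correlation ID 'abc123' are hypothetical. -->
<jms:inbound-endpoint queue="response.queue" exchange-pattern="one-way">
    <!-- the broker delivers only messages whose JMSCorrelationID matches -->
    <jms:selector expression="JMSCorrelationID = 'abc123'"/>
</jms:inbound-endpoint>
```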

Frameworks usually implement the temporary queue approach using the following sequence of actions:

  • Create JMS Session
  • Create JMS Producer
  • Create Temporary Queue
  • Create JMS Consumer
  • Send JMS request Message
  • Receive JMS response Message
  • Destroy the JMS Producer
  • Destroy the JMS Consumer
  • Destroy the JMS Session

Repeating this create/destroy lifecycle for every request is not an efficient way of dealing with JMS resources and has a direct impact on system latency. In Mule, the request-response exchange pattern is implemented exactly this way, so we should avoid a JMS activity configured with the request-response exchange pattern where possible.
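
In Mule 3 configuration, the pattern from Figure 1 corresponds to a single JMS outbound endpoint using the request-response exchange pattern; a minimal sketch (queue name and timeout value are illustrative):

```xml
<!-- Sketch, assuming a Mule 3 JMS connector named by default.
     Mule creates the temporary reply queue, producer and consumer
     behind the scenes for every request. -->
<flow name="syncRpcFlow">
    <jms:outbound-endpoint queue="request.queue"
                           exchange-pattern="request-response"
                           responseTimeout="10000"/>
</flow>
```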

 


Figure 1. Mule JMS activity with request-response exchange pattern

 

On the other hand, the second, JMSCorrelationID-based approach puts a burden on the JMS server side, which is responsible for the message filtering, and that directly impacts latency as well.

Can we do better and introduce a third approach that is more efficient than the two described above? The answer is yes: Mule implements a very efficient JMS request/reply with its Request-Reply scope. This scope consists of two one-way JMS activities: a Producer (sender) and a Consumer (receiver). The Consumer receives the messages on the reply queue. It is very important to emphasize that no JMS correlation ID selector is used at all, i.e. there is no JMS server-side filtering. Instead, the Consumer extracts the JMS correlation ID from each message it receives; the JMS correlation ID is the same as the Mule correlation ID. Every flow that is “parked aside” waits on some Mule correlation ID, and the JMS Consumer wakes up the flow that is waiting on that particular Mule correlation ID.
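
A minimal sketch of the Request-Reply scope described above, in Mule 3 XML (queue names and timeout are illustrative):

```xml
<!-- Sketch of the Figure 2 setup: two one-way JMS endpoints
     wrapped in a request-reply scope. Names are hypothetical. -->
<flow name="efficientRpcFlow">
    <request-reply timeout="10000">
        <!-- one-way producer: sends the request without blocking on JMS -->
        <jms:outbound-endpoint queue="request.queue" exchange-pattern="one-way"/>
        <!-- one-way consumer: created once, listens on the shared reply
             queue; no JMS selector is involved -->
        <jms:inbound-endpoint queue="response.queue" exchange-pattern="one-way"/>
    </request-reply>
</flow>
```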

 


Figure 2. Mule request-reply scope

 

Now, what happens if we have multiple Mule ESB runtimes (distinct nodes) with multiple JMS Consumers listening on the same JMS response queue?

Without any “magic”, it is quite possible that one of the JMS Consumers will pick up a JMS message that is not related to any flow parked in that Mule ESB runtime. The right JMS Consumer will then never receive the expected response message, and that Mule flow will fail with a timeout.

As stated in the MuleSoft documentation, a problem like this can occur when we have a server group composed of multiple servers that are not configured as part of an HA (High Availability) cluster. We don’t have to worry if the Mule ESB runtime servers are part of an HA cluster.

In the case of an HA cluster, one of the nodes in the cluster will pick up the response message (remember that all of the JMS Consumers listen on the same JMS response queue), but the waiting flow is not necessarily on that node. In that case, the Mule HA runtime dispatches the received JMS response message to the correct node. For that purpose, Mule HA uses Hazelcast as a shared, distributed in-memory data grid that provides its own topic-based communication mechanism. Without going into further detail, the Mule ESB HA runtime will deliver the received JMS message to the correct flow on whichever node in the cluster hosts it.

The next natural question is: what is the best approach for implementing the JMS request-reply pattern if we don’t have an HA cluster, i.e. when we use the community (free) Mule ESB edition? This is the scenario where we place multiple servers behind an NGINX proxy in order to create a fault-tolerant Mule ESB environment without additional financial costs.

Or, in an HA cluster, can we avoid the Hazelcast-based JMS message dispatching altogether?

If we want to avoid dispatching the JMS response message from one node to another, in either an HA or a non-HA environment, we can start each node with an environment variable that is unique per node, for example: instanceId=1, instanceId=2, etc. The JMS response queue name is then constructed dynamically, i.e. ‘queue.response.${instanceId}’. The response queues are similar but not the same, so each JMS response message is received by exactly the Mule ESB runtime node that hosts the waiting flow.
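
Assuming each node is started with its own instanceId property (for example by passing it on the Mule startup command line), the reply endpoint can be parameterized as in the following sketch (queue names are illustrative):

```xml
<!-- Sketch: per-node reply queue via a property placeholder.
     Assumes each node is launched with a unique instanceId property,
     e.g. ./bin/mule -M-DinstanceId=1 -->
<request-reply timeout="10000">
    <jms:outbound-endpoint queue="request.queue" exchange-pattern="one-way"/>
    <!-- resolves to queue.response.1, queue.response.2, ... per node -->
    <jms:inbound-endpoint queue="queue.response.${instanceId}" exchange-pattern="one-way"/>
</request-reply>
```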

Let’s summarize what we have with the final approach:

  1. The JMS Consumer is created once, at the very beginning, when the flow itself is instantiated;
  2. The JMS Consumers inside one Mule ESB runtime node listen on the same queue, which is unique for that runtime (queue.response.${instanceId}). Temporary queues are not used, i.e. the approach is efficient;
  3. The response messages are received without any JMS message selector, i.e. the approach is efficient;
  4. Each JMS response message received correlates with a flow parked at that node, i.e. there is no further JMS message dispatching in the cluster.