One of the major goals when implementing an integration application that brings together data from different sources and technologies is simplifying and automating business processes. Having a reusable set of services, libraries of utility processes, and a common data model is paramount.
To support high availability and load balancing, integration applications usually use a distributed deployment model that spans multiple nodes in a cluster.
There are various ways to share data across nodes and clusters when using Mule as an integration technology, and one of them is Redis.
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. It supports a wide range of data structures, including strings, hashes (maps), lists, sets, and sorted sets. Redis officially supports Linux, but there is unofficial support for Windows as well. Redis has a rich set of capabilities such as message publish/subscribe, read and write operations, and key expiration. Last but not least, Redis supports distributed locks that prevent multiple clients from changing the same data at the same time.
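In Redis itself, such a lock is typically taken with a single atomic command (SET with the NX option), so only one client can hold it at a time. The following is a minimal sketch of those acquire/release semantics, with a ConcurrentHashMap standing in for a real Redis server; the class and key names are illustrative, not part of any Redis or Mule API.

```java
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of Redis-style lock semantics (SET key token NX),
// using a ConcurrentHashMap as a stand-in for a real Redis server.
public class LockSketch {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    // Acquire the lock only if nobody else holds it (like SET ... NX).
    public boolean tryLock(String key, String token) {
        return store.putIfAbsent(key, token) == null;
    }

    // Release only if we still own the lock (compare-and-delete).
    public boolean unlock(String key, String token) {
        return store.remove(key, token);
    }

    public static void main(String[] args) {
        LockSketch locks = new LockSketch();
        System.out.println(locks.tryLock("order:42", "node-A")); // true
        System.out.println(locks.tryLock("order:42", "node-B")); // false, already held
        locks.unlock("order:42", "node-A");
        System.out.println(locks.tryLock("order:42", "node-B")); // true now
    }
}
```

In production, a Redis lock would also carry an expiry (the PX option) so a crashed node cannot hold the lock forever; that detail is omitted here for brevity.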
Mule comes with a ready, out-of-the-box Redis connector that allows developers to integrate seamlessly with this great product.
One use case of Redis in a Mule project is storing mutable configuration data in the form of key-value pairs.
While working on an ESB framework implementation responsible for message logging and for routing messages to the appropriate business services or APIs, there was a need to share and cache common service endpoint configuration data among several applications and engines running on multiple nodes in the cluster. In particular, the framework reads the service endpoint based on information provided in the input message (HTTP path, method, etc.). A separate administrative application can update the shared service endpoint configuration, which is then readily available through Redis to all nodes in the cluster.
Above is a screenshot of a Mule flow with an HTTP listener configured for the HTTP URI path "/Account". Let's say an HTTP request is sent to this path using the POST method. In this case we have two parameters, "Account" and "POST", which are used to perform the service endpoint lookup in Redis. The key used is "postAccount". Once the service name is returned from Redis, it can be used to construct downstream HTTP endpoint URLs, JMS queue names, etc.
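The lookup described above can be sketched as follows: the key combines the lower-cased HTTP method with the entity taken from the URI path, and the result is used to read the service name. A plain HashMap stands in for the Redis store here, and all names (class, service, helper) are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the endpoint lookup described above: the key combines the
// lower-cased HTTP method with the entity taken from the URI path.
// A HashMap stands in for Redis; the names are illustrative only.
public class EndpointLookup {
    static String buildKey(String httpMethod, String uriPath) {
        // "/Account" -> "Account", then "post" + "Account" -> "postAccount"
        String entity = uriPath.startsWith("/") ? uriPath.substring(1) : uriPath;
        return httpMethod.toLowerCase() + entity;
    }

    public static void main(String[] args) {
        Map<String, String> redis = new HashMap<>();   // stand-in for Redis
        redis.put("postAccount", "account-creation-service");

        String key = buildKey("POST", "/Account");
        System.out.println(key);                 // postAccount
        System.out.println(redis.get(key));      // account-creation-service
    }
}
```

In the real flow the Map operations would be replaced by GET/SET operations through the Mule Redis connector.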
The same mechanism can be used for protecting services. In that case, the Redis-stored configuration will contain records that bind a particular authenticated user to the services he/she has access to.
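One natural way to model such bindings is a set of allowed services per user, analogous to Redis sets queried with a membership check (SISMEMBER). A minimal sketch, with Java collections standing in for Redis and made-up user/service names:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of service protection: one set of allowed services per user,
// analogous to Redis sets queried with SISMEMBER. Java collections
// stand in for Redis; all names are illustrative.
public class ServiceAcl {
    private final Map<String, Set<String>> allowed = new HashMap<>();

    void grant(String user, String service) {
        allowed.computeIfAbsent(user, u -> new HashSet<>()).add(service);
    }

    boolean canAccess(String user, String service) {
        return allowed.getOrDefault(user, Set.of()).contains(service);
    }

    public static void main(String[] args) {
        ServiceAcl acl = new ServiceAcl();
        acl.grant("alice", "postAccount");
        System.out.println(acl.canAccess("alice", "postAccount")); // true
        System.out.println(acl.canAccess("bob", "postAccount"));   // false
    }
}
```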
Finally, Redis can be used for storing runtime data, such as the number of requests executed by a particular client against a particular service endpoint. This runtime data can be used to implement rate limiting and throttling, where a client's API calls are rejected once the request quota for a given time period has been exhausted.
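A common shape for this is a fixed-window counter: one counter per client, endpoint, and time window, which in Redis would be an INCR with an EXPIRE equal to the window length. Below is a minimal sketch of that logic with a HashMap standing in for Redis; the class name, key format, and limit are all illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of fixed-window rate limiting: one counter per
// client/endpoint/window, analogous to Redis INCR plus an EXPIRE
// equal to the window length. A HashMap stands in for Redis.
public class RateLimiter {
    private final Map<String, Integer> counters = new HashMap<>();
    private final int limit;

    RateLimiter(int limit) { this.limit = limit; }

    // Returns true while the client is under quota for this window.
    boolean allow(String client, String endpoint, long windowId) {
        String key = client + ":" + endpoint + ":" + windowId;
        int count = counters.merge(key, 1, Integer::sum); // like INCR
        return count <= limit;
    }

    public static void main(String[] args) {
        RateLimiter limiter = new RateLimiter(2);
        long window = 0; // fixed window id; Redis would expire the key instead
        System.out.println(limiter.allow("clientA", "postAccount", window)); // true
        System.out.println(limiter.allow("clientA", "postAccount", window)); // true
        System.out.println(limiter.allow("clientA", "postAccount", window)); // false
    }
}
```

Using Redis INCR keeps the count atomic across all nodes in the cluster, which a local map cannot do; the map here only illustrates the counting logic.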
You can read more about Redis here: http://redis.io/documentation, and learn more about Mule Redis connector at: https://docs.mulesoft.com/mule-user-guide/v/3.8/redis-connector.