Using MQ Light to make an application more responsive

MQ Light can be used to improve the responsiveness that a user experiences when interacting with an application, by offloading work from the application’s main event loop. This tutorial takes you through the steps required to build a simple web application, using MQ Light to offload processing from the code path that serves responses to the user’s web browser.


Introduction

A key factor that determines the responsiveness of a web application is the amount of time that the web server takes to process an HTTP request and send a response. If a web server needs to perform CPU-intensive or high-latency processing before it can respond to a request, then the number of requests that the server can process is reduced. This can lead to a situation where the server can no longer keep up with the rate at which new requests arrive, and the web application becomes unusable.
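In Node.js this matters especially, because a single event loop services every request. The following sketch (the `cpuBound` helper is purely illustrative) shows how CPU-bound work on the event loop delays every other pending callback:

```javascript
// Illustrative only: a busy-wait stands in for CPU-intensive work.
function cpuBound(ms) {
  var start = Date.now();
  while (Date.now() - start < ms) {
    // Burn CPU: nothing else on the event loop can run during this loop.
  }
  return Date.now() - start;
}

var t0 = Date.now();
setTimeout(function () {
  // Scheduled for 10ms, but it cannot fire until the busy-wait finishes.
  console.log('Timer fired after ' + (Date.now() - t0) + 'ms');
}, 10);

cpuBound(200); // Blocks the event loop for roughly 200ms.
```

Run under node, the 10ms timer cannot fire until the busy-wait returns, roughly 200ms later; pending HTTP request handlers would be delayed in exactly the same way.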


The processing that is carried out as part of a request can sometimes be time consuming, but still not affect the response sent to the user. For example, tweeting a status update, sending an email, or resizing then storing an image, are all activities which could be initiated by an HTTP request and have no effect on the data returned in the corresponding response. This kind of processing is a good candidate for offloading to another part of your application, freeing up the code responsible for responding to HTTP requests to service more requests.


An illustration of simple worker offload


MQ Light is designed to help simplify the task of offloading work from the parts of an application that respond to user input. In the context of a web application, this could mean using MQ Light to offload processing from the code responsible for responding to HTTP requests. To illustrate how MQ Light can be used, this tutorial takes you through the building of two applications. The first application performs all its processing inline, responding to user requests and as a result suffers from high latency. The second application comprises two components that make use of MQ Light to offload work from the component that responds to user requests, to another component that can asynchronously process the requests.



Example 1: An application suffering from a latency problem

Firstly, let’s build a simple Node.js application that demonstrates the problems that can occur if an application performs a lot of processing in the same logic that handles the response to user input.


Our application is based on a common pattern of interaction:

  1. Accept events generated by a user. For example, these could be “check-ins” to a location or activity.
  2. Perform some processing on the data generated by the user. For example, this could use data stored in a database to compare the frequency of the user’s check-ins against those of their peers.
  3. Optionally, post a social media update based on this processing. For example, posting a status update on Twitter when a certain number of check-ins is achieved.

The following diagram shows the interaction between the various components that make up our first example application:


Flow of work in our application


This application is a simple web server that simulates accessing a database and posting a social media update as part of processing each HTTP request. The delays used to simulate the database access and the social media update are deliberately exaggerated, by an order of magnitude or more, so that their effect on the responsiveness of the application is apparent without needing to drive any load at the web application.

var http = require('http'); 
var url = require('url'); 

var host = 'localhost'; 
var port = 54321; 
var requestCount = 0; 

var updateDatabase = function(callback) { 
  // Simulate a sequence of database accesses and updates 
  var time = 500 + Math.floor(Math.random() * 1000); 
  setTimeout(function() { 
    console.log(' Database access (' + time + 'ms)'); 
    callback(); 
  }, time); 
}; 

var postToSocialMedia = function(callback) { 
  // Simulate posting a social media update 
  var time = 750 + Math.floor(Math.random() * 1500); 
  setTimeout(function() { 
    console.log(' Social media post (' + time + 'ms)'); 
    callback(); 
  }, time); 
}; 

http.createServer(function(request, response) { 
  if (url.parse(request.url).pathname === '/') { 
    var start = new Date(); 
    console.log('Request received #: ' + (++requestCount)); 
    var sendResponse = function() { 
      response.writeHead(200, { 
        'Content-Type': 'text/html', 
        'Cache-Control': 'no-cache, no-store, must-revalidate', 
        'Pragma': 'no-cache', 
        'Expires': '0' 
      }); 
      var duration = (new Date()).getTime() - start.getTime(); 
      response.write('<!DOCTYPE html><html><head><title>MQ Light Sample' + 
                     '</title></head><body>Request #: ' + requestCount + 
                     '<br>Time taken to serve this page: ' + duration + 
                     'ms<br><a href="/">Load the page again</a></body></html>'); 
      response.end(); 
      console.log('Response sent (total: ' + duration + 'ms)\n'); 
    }; 
    updateDatabase(function() { 
      // 1-in-5 chance of a social media update 
      if (Math.random() < 0.2) { 
        postToSocialMedia(sendResponse); 
      } else { 
        sendResponse(); 
      } 
    }); 
  } else { 
    response.writeHead(404, {'Content-Type': 'text/html'}); 
    response.write('<!DOCTYPE html><html><head><title>404</title></head>' + 
                   '<body><h1>Not Found</h1></body></html>'); 
    response.end(); 
  } 
}).listen(port, host);

This application listens on port 54321 of localhost. If you direct a web browser to this port, you’ll see a very simple page displaying the amount of time it took to serve the page, and inviting you to click a link to load the page again:


Screenshot: the sample page displayed in a web browser


Refresh the browser, or click “Load the page again”. You’ll observe a noticeable delay between requesting the page and the page being displayed. This is a result of performing the (simulated) database and social media processing as part of generating the response to the web browser.


The application also displays information on the terminal or command prompt where it is running. For example, here is the sample output generated by responding to three requests:


     Request received #: 1 
      Database access (983ms) 
      Social media post (1984ms) 
     Response sent (total: 2973ms) 

     Request received #: 2 
      Database access (1253ms) 
      Social media post (1847ms) 
     Response sent (total: 3103ms) 

     Request received #: 3 
      Database access (1189ms) 
     Response sent (total: 1190ms) 
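A quick back-of-the-envelope calculation, using the delay distributions from the listing above, explains why the observed totals average a little over a second:

```javascript
// Expected response time of the inline version, derived from the
// simulated delay distributions in the code above.
var dbMean = 500 + 1000 / 2;      // updateDatabase: 500ms + uniform(0..1000)ms
var socialMean = 750 + 1500 / 2;  // postToSocialMedia: 750ms + uniform(0..1500)ms
var expectedMs = dbMean + socialMean / 5;  // social post happens 1 request in 5
console.log('Expected response time: ' + expectedMs + 'ms'); // 1300ms
```

On average a response takes around 1.3 seconds, and considerably longer on the one-in-five requests that include the simulated social media post, which is consistent with the totals shown in the sample output.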

Example 2: An application that offloads work to appear more responsive

Next, let’s see what the application looks like if we use MQ Light to offload the database and social media work to another component of the application. The first thing to do is split the application into two components, each of which runs as its own separate Node.js process. One component responds to web requests (we’ll refer to this as the ‘front-end’ component) and one performs the database updates and the social media posts (which we’ll call the ‘back-end’ component).


The following diagram shows the interactions between our two application components, and the various other parts of the system that now make up our complete application:


Decomposed worker offload illustration


Take a look at the revised front-end component for processing HTTP requests:

var http = require('http'); 
var url = require('url'); 
var mqlight = require('mqlight'); 

var host = 'localhost'; 
var port = 54321; 
var mqlightService = 'amqp://localhost'; 
var requestCount = 0; 

var client = mqlight.createClient({service: mqlightService}); 
client.on('error', function(err) { 
  console.log('MQ Light: ' + (err.message ? err.message : err)); 
  if (err instanceof mqlight.ReplacedError || 
      err instanceof mqlight.SecurityError) { 
    console.log('Stopping due to error'); 
    process.exit(1); 
  } 
}); 

http.createServer(function(request, response) { 
  if (url.parse(request.url).pathname === '/') { 
    var start = new Date(); 
    console.log('Request received #: ' + (++requestCount)); 
    var messageData = {requestCount: requestCount}; 
    client.send('offload-tutorial', messageData, {qos: 1}, function() { 
      response.writeHead(200, { 
        'Content-Type': 'text/html', 
        'Cache-Control': 'no-cache, no-store, must-revalidate', 
        'Pragma': 'no-cache', 
        'Expires': '0' 
      }); 
      var duration = (new Date()).getTime() - start.getTime(); 
      response.write('<!DOCTYPE html><html><head><title>MQ Light Sample' + 
                     '</title></head><body>Request #: ' + requestCount + 
                     '<br>Time taken to serve this page: ' + duration + 
                     'ms<br><a href="/">Load the page again</a></body></html>'); 
      response.end(); 
      console.log('Response sent (total: ' + duration + 'ms)\n'); 
    }); 
  } else { 
    response.writeHead(404, {'Content-Type': 'text/html'}); 
    response.write('<!DOCTYPE html><html><head><title>404</title></head>' + 
                   '<body><h1>Not Found</h1></body></html>'); 
    response.end(); 
  } 
}).listen(port, host);

The key differences between this and our first application are:

  • Lines 10 to 18 create an instance of the MQ Light client and register a handler for errors. For more information about individual API calls, please refer to the documentation available from npmjs.org: https://www.npmjs.org/package/mqlight
  • Line 25 sends a message to MQ Light. When this message arrives at the back-end application it will be the cue to perform the database and social media processing.
  • The updateDatabase and postToSocialMedia functions are no longer part of the application. These have been relocated to the back-end application.

Now let’s take a look at the back-end application, which is responsible for carrying out the database updates and posting to social media:

var http = require('http'); 
var url = require('url'); 
var mqlight = require('mqlight'); 

var host = 'localhost'; 
var port = 54321; 
var mqlightService = 'amqp://localhost'; 

var client = mqlight.createClient({service: mqlightService}); 
client.on('error', function(err) { 
  console.log('MQ Light: ' + (err.message ? err.message : err)); 
  if (err instanceof mqlight.ReplacedError || 
      err instanceof mqlight.SecurityError) { 
    console.log('Stopping due to error'); 
    process.exit(1); 
  } 
}); 
var subOptions = { 
  qos: 1, 
  credit: 1, 
  autoConfirm: false 
}; 
client.subscribe('offload-tutorial', 'share', subOptions); 

var updateDatabase = function(callback) { 
  // Simulate a sequence of database accesses and updates 
  var time = 500 + Math.floor(Math.random() * 1000); 
  setTimeout(function() { 
    console.log(' Database access (' + time + 'ms)'); 
    callback(); 
  }, time); 
}; 

var postToSocialMedia = function(callback) { 
  // Simulate posting a social media update 
  var time = 750 + Math.floor(Math.random() * 1500); 
  setTimeout(function() { 
    console.log(' Social media post (' + time + 'ms)'); 
    callback(); 
  }, time); 
}; 

client.on('message', function(msg, delivery) { 
  var start = new Date(); 
  console.log('Received message #' + msg.requestCount); 
  var complete = function() { 
    delivery.message.confirmDelivery(); 
    var duration = (new Date()).getTime() - start.getTime(); 
    console.log('Message processed (total: ' + duration + 'ms)'); 
  }; 
  updateDatabase(function() { 
    // 1-in-5 chance of a social media update 
    if (Math.random() < 0.2) { 
      postToSocialMedia(complete); 
    } else { 
      complete(); 
    } 
  }); 
});

Here, the key highlights are:

  • Lines 9 to 17 create an instance of the MQ Light client and register a handler for errors.
  • Lines 18 to 23 subscribe to an MQ Light destination called offload-tutorial. This instructs the client to start receiving messages from the destination. The subscribe call requests that the client’s use of the destination is shared – meaning that if you start multiple instances of the back-end application, they will share out the messaging workload. The options supplied to the subscribe call tell MQ Light to pace the flow of messages so that only one message arrives at a time, and that the client must explicitly confirm delivery of each message before it receives another.
  • Lines 25 to 41 should look familiar: they are the simulated database access and social media post functions, taken without change from our first application.
  • Line 43 registers a listener for the message event. This function will be called each time a message arrives at a destination the client is subscribed to, and will call the functions that simulate database access and posting to social media.
  • Line 47 confirms that the client has successfully processed the message and instructs the MQ Light server that it no longer needs to keep its copy of the message.

From the perspective of MQ Light, the front-end component transfers requests to the back-end component by encoding each request as a message and sending it to a topic called offload-tutorial. The following diagram shows the key MQ Light components used by our second example application:


offload-tutorial message send/receive example


If you run both the front-end and back-end Node.js programs, then you’ll notice that once again the front-end web program is listening on port 54321 of localhost (so if you left the first application running, you’ll need to stop it!). If you direct a web browser to this port, you’ll see a very similar page to that displayed by the first example application. This time however, the page displays much more quickly:


Screenshot: the sample page, now served much more quickly


The back-end part of the application also displays information about the requests it is processing. For example, here is the sample output generated by responding to three requests:


     Received message #1 
      Database access (982ms) 
     Message processed (total: 991ms) 
     Received message #2 
      Database access (673ms) 
     Message processed (total: 677ms) 
     Received message #3 
      Database access (1389ms) 
     Message processed (total: 1405ms)

This confirms that while the front-end part of the application is now much more responsive, the time-consuming processing still takes place, albeit relocated into the separate back-end part of the application.


While developing and testing this kind of application, it’s useful to have the MQ Light User Interface open, to keep track of what the application is doing. Here is what the User Interface looks like after running the sample application. Using the interface, you can see what clients are connected to MQ Light, what the clients are doing, and also what messages are flowing:


Screenshot: the MQ Light User Interface showing connected clients and message flows


For more information on the MQ Light User Interface, see the User Interface topic.



Example 3: An application that scales horizontally

While trying out the offload example application, you may have noticed something interesting: if you click the reload button a number of times in quick succession, it takes a little while for the back-end part of the application to catch up. This is because the MQ Light server is buffering the messages, which happens whenever the producing application sends messages at a rate that exceeds the rate at which the consuming applications can process them. This can be a useful characteristic, as it decouples the back-end part of the application from spikes in the rate at which the front-end component is used. However, buffering only helps if the back-end component can keep up with the average rate at which the front-end component handles requests.


How can we scale our application so that the front-end component can process more user requests, without building up a backlog of work for the back-end component?


A simple solution is to run more instances of the back-end component at the same time and share the workload between them, and MQ Light makes this straightforward. In fact, because the back-end component of our sample application uses a shared destination, it can already be scaled in this way.


Let’s spend a moment reviewing shared destinations. When an application subscribes to MQ Light, it specifies whether the destination used to buffer messages on its behalf is exclusively for its own use – or can be shared amongst a number of applications. When a single application is consuming messages, there is no difference between using a shared destination and using a private destination (we use private destination to describe destinations that belong to a particular instance of the MQ Light client). When two or more applications subscribe using the same topic pattern, the difference becomes apparent. Applications using private destinations each receive a copy of every message sent to the corresponding topic. When applications use a shared destination, each message sent to the corresponding topic is routed to just one of the applications using the shared destination. The following diagram illustrates this point:


MQ Light API: Concepts
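The difference can also be sketched with a toy model (an illustration of the routing behaviour only, not the MQ Light API; note too that MQ Light itself makes no promise of an even round-robin split):

```javascript
// Toy model of message routing; not the MQ Light API.
function route(privateSubs, sharedSubs, messages) {
  var next = 0; // simple round-robin over the shared group, for illustration
  messages.forEach(function (msg) {
    privateSubs.forEach(function (sub) {
      sub.push(msg); // every private destination gets its own copy
    });
    if (sharedSubs.length > 0) {
      sharedSubs[next % sharedSubs.length].push(msg); // exactly one copy
      next++;
    }
  });
}

var p1 = [], p2 = []; // two subscribers, each with a private destination
var s1 = [], s2 = []; // two subscribers sharing one destination
route([p1, p2], [s1, s2], ['m1', 'm2', 'm3', 'm4']);
console.log(p1.length, p2.length); // 4 4 (each private sub sees every message)
console.log(s1.length, s2.length); // 2 2 (shared subs split the messages)
```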


To put this into practice, start an instance of the front-end component and three instances of the back-end component. You’ll see that requests sent in to the application via the web page are still processed as quickly as before; however, the back-end instances now share the work and are better able to keep pace with the greater rate at which requests arrive. MQ Light will continue to distribute messages amongst applications using a shared destination, even as new instances of an application start or existing instances stop. This provides the flexibility to change the number of instances of your application’s components in line with demand, without the need to reconfigure other parts of the application.


It’s important to be aware that MQ Light doesn’t make any assurances about exactly how messages will be distributed amongst applications that are subscribed to a shared destination. For example, you cannot assume that the messages will be equally distributed amongst the consuming applications, or that any one instance will receive messages in the overall order in which they were sent. Depending on your application, this can require adding some complexity to the back-end worker components – as they must be able to process messages regardless of how MQ Light shares them out amongst the available instances of the application.



Controlling how MQ Light buffers messages

So far, this tutorial has focused on how MQ Light can be used to make an application more responsive and scale horizontally. However, we are yet to address some practical considerations. For example:

  • What if the back-end component cannot keep up with the rate at which the front-end component sends messages?
  • What if the back-end component ends unexpectedly?
  • What if the requests sent to the back-end component are only useful for a certain period of time?

The MQ Light concepts of message time-to-live and destination time-to-live play a key part in answering these questions. Firstly, let’s consider how message time-to-live might help.


As the name suggests, specifying a message time-to-live informs MQ Light about how long it needs to store a message for. If a message is stored within the MQ Light server for a period of time that exceeds its time-to-live value, then MQ Light can discard the message without delivering it to any consuming application.


This can be helpful if the back-end component of an application is not keeping up with the rate at which messages arrive at its destination; in that situation, MQ Light buffers the messages to disk, consuming storage on the system where it is running. By setting a time-to-live value when each message is produced, you can bound the number of messages stored, and hence control the storage used by MQ Light.
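As a sketch of the idea, here is a toy buffer (an illustration only, not how the MQ Light server is implemented) that discards any message whose time-to-live has elapsed before delivering it:

```javascript
// Toy message buffer that honours a per-message time-to-live.
function TtlBuffer() {
  this.entries = [];
}
TtlBuffer.prototype.put = function (msg, ttlMs, now) {
  this.entries.push({ msg: msg, expires: now + ttlMs });
};
TtlBuffer.prototype.take = function (now) {
  // Drop anything whose time-to-live has elapsed before delivering.
  this.entries = this.entries.filter(function (e) { return e.expires > now; });
  return this.entries.shift(); // undefined if nothing is deliverable
};

var buf = new TtlBuffer();
buf.put('fresh', 60000, 0); // expires at t=60000
buf.put('stale', 1000, 0);  // expires at t=1000
console.log(buf.take(5000).msg); // 'fresh' - 'stale' was discarded
```

In the real client, a message time-to-live is set per send via the options passed to client.send (for example, {qos: 1, ttl: 60000}); check the mqlight package documentation to confirm the options your version supports.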


While not relevant in this example scenario, message time-to-live can also be useful in situations where the data transported in a message has a finite period of time when it is useful, as it can be used to avoid processing data which is no longer current.


Destination time-to-live governs how long MQ Light will store the messages arriving at a destination when there are no applications consuming from the destination. The default is for destinations to be deleted as soon as they are not being used by any consuming applications. To see how this affects the sample application, try stopping and then restarting all of the instances of the back-end component. Any requests sent from the front-end component while the back-end component is stopped will not be delivered. For some use-cases this can be a desirable behavior, but for the purposes of this tutorial, let’s assume that it isn’t what we want.


Let’s assume that we want our application to be able to tolerate periods of time where there are no instances of the back-end application running, without discarding messages sent from the front-end component. Update the application to specify a destination time-to-live:

var subOptions = { 
  ttl: 60000, 
  qos: 1, 
  credit: 1, 
  autoConfirm: false 
}; 

The change inserts a new line into the code for the back-end component. In this example, the new line is added inside the subOptions object (after line 18 of the back-end listing) and specifies a time-to-live (ttl) of 60,000 milliseconds (60 seconds).


The following steps will test this change:

  1. Start both the front-end component and a single instance of the back-end component of the application.
  2. Stop the back-end component.
  3. Use a web browser to make a number of requests.
  4. Start the back-end component again, and observe that it immediately begins processing the requests that arrived while it was stopped.

If you stop, then restart the back-end component within a minute, then the time-to-live that we have applied to the destination will ensure that any messages arriving while the component was stopped are buffered and delivered to the application when it restarts. If you leave the back-end component stopped for more than one minute and then restart it, any messages arriving at the destination while the back-end component was stopped will have been discarded.
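This behaviour can be sketched with another toy model (again an illustration, not the MQ Light implementation; among other simplifications, it does not discard messages that were already buffered when the destination expired):

```javascript
// Toy model of destination time-to-live; not the MQ Light implementation.
function Destination(ttlMs) {
  this.ttlMs = ttlMs;
  this.hasConsumer = true;
  this.lastConsumerAt = 0; // when the last consumer disconnected
  this.messages = [];
}
Destination.prototype.consumerStops = function (now) {
  this.hasConsumer = false;
  this.lastConsumerAt = now;
};
Destination.prototype.arrive = function (msg, now) {
  // Buffer only if a consumer is attached, or the ttl has not yet expired.
  if (this.hasConsumer || now - this.lastConsumerAt <= this.ttlMs) {
    this.messages.push(msg);
  }
  // Otherwise the destination has expired and the message is discarded.
};

var dest = new Destination(60000); // 60 second destination time-to-live
dest.consumerStops(0);
dest.arrive('within-ttl', 30000);  // buffered: 30s after the consumer stopped
dest.arrive('after-ttl', 120000);  // discarded: the destination has expired
console.log(dest.messages); // only 'within-ttl' was buffered
```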


Destination time-to-live is useful in situations where you want MQ Light to tolerate an application disconnecting and reconnecting (perhaps due to a transient failure condition) without missing any messages sent to the application. Destination time-to-live also enables MQ Light to automatically free up the resources used when an application stops using MQ Light and never returns, without requiring a separate administrative step.


Conclusion

This tutorial has demonstrated how MQ Light can be used to improve the responsiveness of an application by offloading work from a time-critical part of the application – the code path used to respond to user requests – to another process. We have also seen how MQ Light can share work amongst a number of instances of a back-end application component – running several instances of time-consuming processing in parallel. Finally, we took a brief look at the time-to-live options provided by MQ Light, and saw how they can be used to control how much message data is stored by the MQ Light server.


For more information on MQ Light time-to-live options, see the Message lifetime topic.
