Async Messaging with RabbitMQ and Tortoise in Node.js

RabbitMQ is one of the most popular and performant message brokers built on the AMQP protocol. Using it in a microservice architecture decouples your services: publishers don’t have to wait on consumers, work can be spread across instances, and its delivery guarantees give you a real shot at reliability.

In this guide, we’re going to explore the basics of using RabbitMQ with Node.js.

Theory

At its most basic level, you’d ideally have two different services interacting with one another through Rabbit - a publisher and a subscriber.

A publisher pushes messages to Rabbit, and a subscriber listens for those messages and executes code based on them.

Note that a single service can be both at once - publishing messages to Rabbit while also consuming them - which allows really powerful systems to be designed.

Now a publisher typically publishes messages to something called an exchange, tagging each message with a routing key. A consumer listens to a queue that is bound to that exchange with the same routing key.

In architectural terms, your platform would use one Rabbit exchange, and different kinds of jobs/services would have their own routing keys and queues, in order for pub-sub to work effectively.
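To make that concrete, here’s the topology this guide builds, using the exact names you’ll see in the code below:

publisher --(routing key: random-user-key)--> [exchange: random-user-exchange]
[exchange: random-user-exchange] --(bound with: random-user-key)--> [queue: random-user-queue] --> consumer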

Messages can be plain strings; they can also be native objects - the client libraries do the heavy lifting of serializing objects into a wire format and deserializing them on the other end. And yes, that does mean services can be written in different languages, so long as they’re able to understand AMQP.
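Under the hood, a message travels as plain bytes. With tortoise, an object you publish is serialized to JSON, and the consumer parses it back - exactly what we’ll do below. A minimal illustration of that round trip in plain Node.js:

// What the publisher hands to the client library
const payload = { url: 'https://randomuser.me/api' }

// What actually crosses the wire: a JSON string, sent as bytes
const wire = JSON.stringify(payload)

// What the consumer reconstructs on its end
const received = JSON.parse(wire)
console.log(received.url) // https://randomuser.me/api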


Let’s start by installing the two libraries we’ll need - tortoise, an AMQP client for Node.js, and node-cron, a job scheduler:

npm install --save tortoise node-cron

Now your package.json should look a lot like this:
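(The name and version numbers below are just placeholders - the important bit is the two dependencies.)

{
  "name": "rabbit-tortoise-demo",
  "version": "1.0.0",
  "dependencies": {
    "node-cron": "^2.0.0",
    "tortoise": "^1.0.0"
  }
}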


Now we’re all set. Let’s create a publisher first, in a file we’ll call publisher.js:

const Tortoise = require('tortoise')
const cron = require('node-cron')

const tortoise = new Tortoise(`amqp://rudimk:YouKnowWhat@localhost:5672`)

After importing tortoise and node-cron, we’ve gone ahead and initialized a connection to RabbitMQ.
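One quick aside: hard-coding credentials like this is fine for a demo, but in practice you’d read the connection string from the environment. A minimal sketch (RABBIT_URL is just a name chosen for this example):

const tortoise = new Tortoise(process.env.RABBIT_URL || 'amqp://guest:guest@localhost:5672')

Next, let’s write a quick and dirty function that publishes a message to Rabbit: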

function scheduleMessage(){
    // The job payload: the URL our consumer will eventually fetch
    let payload = {url: 'https://randomuser.me/api'}
    // Publish the payload to the exchange, tagged with our routing key
    tortoise
    .exchange('random-user-exchange', 'direct', { durable:false })
    .publish('random-user-key', payload)
}

That’s simple enough. We’ve defined an object containing a URL to the RandomUser.me API, which is then published to the random-user-exchange exchange on RabbitMQ, with the random-user-key routing key. (The { durable: false } option means the exchange won’t survive a broker restart - fine for a demo, but in production you’d usually want durable exchanges and queues.)

As mentioned earlier, the routing key is what determines who gets to consume a message. Now, let’s write a scheduling rule to publish this message every 60 seconds.

cron.schedule('* * * * *', scheduleMessage)

node-cron uses standard cron syntax, so the five-field expression above runs scheduleMessage once every minute; if you need sub-minute granularity, node-cron also accepts an optional seconds field (0-59) prepended at the front.

And our publisher’s ready! It’s no good, though, without a consumer to actually consume these messages. First, we need a library that can call the URL carried in each message. Personally, I use superagent:

npm install --save superagent

Now, in consumer.js:

const Tortoise = require('tortoise')
const superagent = require('superagent')

const tortoise = new Tortoise(`amqp://rudimk:YouKnowWhat@localhost:5672`)

Next, let’s write an async function that calls a URL and returns the response body:

async function getURL(url){
	let response = await superagent.get(url)
	return response.body
}
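You can sanity-check this function on its own before wiring it up to Rabbit:

// Should log a random user object fetched from the API
getURL('https://randomuser.me/api').then((body) => console.log(body))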

Time to write code to actually consume messages:

tortoise
.queue('random-user-queue', { durable: false })
// Add as many bindings as needed
.exchange('random-user-exchange', 'direct', 'random-user-key', { durable: false })
.prefetch(1)
.subscribe(function(msg, ack, nack) {
  // The message arrives as a JSON string; parse it back into an object
  let payload = JSON.parse(msg)
  getURL(payload['url']).then((response) => {
    console.log('Job result: ', response)
    ack() // Acknowledge only once the job has actually finished
  }).catch(() => {
    nack() // Reject the message if the HTTP call fails
  })
})

Here, we’ve told tortoise to listen to the random-user-queue, which is bound to the random-user-exchange with the random-user-key routing key. The prefetch(1) call tells Rabbit to deliver at most one unacknowledged message to this consumer at a time. Once a message is received, the payload is parsed from msg and passed along to the getURL function, which in turn returns a Promise resolving to the desired JSON response from the RandomUser API. Only once that job completes do we ack() the message; if the HTTP call fails, we nack() it instead so Rabbit knows it wasn’t handled.
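And that’s the whole pipeline. With the consumer in consumer.js and the publisher in publisher.js, you can watch it work with two terminals:

node consumer.js
node publisher.js

Once a minute, the publisher drops a message onto the exchange, and the consumer picks it up, calls the RandomUser API, and logs the result.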

Conclusion

It’s hard to beat the simplicity of using RabbitMQ for messaging, and with just a few lines of code you can build out surprisingly sophisticated microservice patterns.

