
Implementing Job Schedulers in Node.js

Have you ever wondered how an application server backs up its files periodically without any manual intervention? This is where cron jobs come in.

A cron job runs a task periodically on a schedule, performing the actions it is configured to do.

There are a few use cases where cron jobs play a vital role:

  • Deleting log files - Applications generate a lot of logs. Clearing old logs saves a lot of space on the server, and a cron job can do it automatically (see the sketch after this list).
  • DB backup - Database backups save the application from disasters. A cron job is a natural fit for this.
  • Application logic - We can use cron jobs to run parts of the application logic on a time basis.
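As an example of the first use case, here is a minimal log-cleanup sketch using the node-cron library that we will install in a moment. The logs directory and the seven-day retention period are assumptions for illustration:

const fs = require("fs")
const path = require("path")
const cron = require("node-cron")

// Hypothetical example: every day at midnight, delete files in ./logs
// that are older than 7 days. Directory and retention are assumptions.
cron.schedule("0 0 * * *", () => {
  const logDir = path.join(__dirname, "logs")
  const maxAgeMs = 7 * 24 * 60 * 60 * 1000

  for (const file of fs.readdirSync(logDir)) {
    const filePath = path.join(logDir, file)
    if (Date.now() - fs.statSync(filePath).mtimeMs > maxAgeMs) {
      fs.unlinkSync(filePath) // remove the stale log file
    }
  }
})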

How cron jobs work

We will write a cron job that archives old records in the database, as you might in a production application.

Firstly, create a project and install the following dependencies,

npm init --yes
npm install express node-cron mongoose faker
  • express - web server library for Node.js
  • node-cron - cron job scheduler library for Node.js
  • mongoose - ODM for MongoDB
  • faker - library for generating fake data (used here to seed test records)

After that, create a file called Model.js and add the following code

const mongoose = require("mongoose")

const weatherSchema = new mongoose.Schema({
  minTemp: {
    type: Number,
  },
  maxTemp: {
    type: Number,
  },
  recordedDate: {
    type: Date,
  },
  isArchived: {
    type: Boolean,
    default: false,
  },
})

class Weather {
  // Fetch records recorded on or before the given date
  static getRec(date) {
    return this.find({
      recordedDate: {
        $lte: new Date(date),
      },
    }).exec()
  }

  // Insert many records at once
  static insertBulkData(data) {
    return this.insertMany(data)
  }

  // Mark all records recorded on or before the given date as archived
  static archiveData(date) {
    return this.updateMany(
      {
        recordedDate: {
          $lte: new Date(date),
        },
      },
      {
        $set: {
          isArchived: true,
        },
      }
    ).exec()
  }

  // Fetch all archived records
  static getArchivedData() {
    return this.find({
      isArchived: true,
    }).exec()
  }
}

weatherSchema.loadClass(Weather)

module.exports = mongoose.model("Weather", weatherSchema)

Model.js defines a mongoose schema for the collection that stores the weather data, along with static helper methods for querying and archiving it.
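As a quick illustration, once mongoose is connected (which we do in the next step), these statics can be called directly on the exported model; the cutoff date below is an arbitrary example:

const model = require("./Model")

// Hypothetical usage of the Model.js statics; call after mongoose connects
async function inspect() {
  const oldRecords = await model.getRec("2021-01-01") // arbitrary cutoff date
  const archived = await model.getArchivedData()
  console.log(oldRecords.length, "old records,", archived.length, "archived")
}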

After that, create a file called scheduler.js and add the code for the job scheduler.

const cron = require("node-cron")

// "* * * * *" (five fields) runs the callback once every minute
cron.schedule("* * * * *", () => {
  console.log("Running every minute")
})

cron.schedule runs the given callback on the schedule described by the cron expression passed as its first argument.


To learn more about the cron expression format, crontab.guru is a great site that explains it in detail.
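For quick reference, a few common patterns are sketched below. The schedules themselves are standard cron expressions; the six-field seconds form and the task handle returned by cron.schedule are features of the node-cron library used in this post:

const cron = require("node-cron")

cron.schedule("*/5 * * * *", () => {})    // every 5 minutes
cron.schedule("0 0 * * *", () => {})      // every day at midnight
cron.schedule("0 0 1 * *", () => {})      // at midnight on the 1st of each month
cron.schedule("*/30 * * * * *", () => {}) // every 30 seconds (six-field form)

// cron.schedule returns a task handle that can be paused and resumed
const task = cron.schedule("* * * * *", () => console.log("tick"))
task.stop()  // pause the schedule
task.start() // resume it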

Next, connect mongoose with Express so we can insert some dummy data into the database.

const cron = require("node-cron")
const express = require("express")
const mongoose = require("mongoose")
const app = express()
const faker = require("faker")
const model = require("./Model")

mongoose
  .connect("mongodb://localhost:27017/nodescheduler")
  .then(res => {
    console.log("mongoose connected successfully")

    // Seed the database with 100 fake weather records
    app.get("/insertdata", async (req, res) => {
      let data = []
      for (let i = 0; i < 100; i++) {
        let record = {
          minTemp: faker.random.number(),
          maxTemp: faker.random.number(),
          recordedDate: faker.date.past(),
        }
        data.push(record)
      }

      await model.insertBulkData(data)

      res.send("Data is inserted")
    })

    app.listen(4000, () => {
      console.log("Server is running on port 4000")
    })
  })
  .catch(err => {
    console.error(err)
  })

To insert some dummy data using faker.js, run the script with the following command and visit http://localhost:4000/insertdata:

node scheduler.js

This creates bulk dummy data to test against. Now it is time to add the job scheduler itself.

// "0 0 1 */3 *" fires at midnight on the 1st of every third month
cron.schedule("0 0 1 */3 *", async () => {
  var d = new Date()
  d.setMonth(d.getMonth() - 2) // two months ago

  await model.archiveData(d)

  console.log("scheduler => archived")
})

The above cron job runs once every three months and marks every record older than two months as archived in the database.
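To verify that the job did its work, the getArchivedData static from Model.js could back a small verification route in scheduler.js; the /archiveddata path is a hypothetical name:

// Hypothetical route to inspect what the scheduler has archived so far
app.get("/archiveddata", async (req, res) => {
  const archived = await model.getArchivedData()
  res.json(archived)
})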

Likewise, we can use cron jobs to schedule any of our application logic.

Summary

Cron jobs play a vital role in many application development use cases, and it is always good to know how they work.
