Building a GraphQL API with Node and MongoDB

Over the past decade, REST has become the standard for designing web APIs, and best practices around it have matured. However, REST can be too inflexible to keep up with complex client-side requirements: clients end up making many round trips behind the scenes to assemble the data they need, and REST performs poorly under that pattern. GraphQL was created to overcome this inflexibility and inefficiency.

In this article, we will build a restaurant app that tracks chefs and dishes, both of which can be queried and updated using GraphQL.

GraphQL is a query language developed internally at Facebook and released publicly in 2015. It was designed for building robust client applications with an intuitive, flexible syntax that fully describes the client's data requirements and its interactions with an existing backend.

A GraphQL query is a string that is sent to a server to be interpreted and fulfilled, and the response returns JSON back to the client.
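For example, a client might send a query like the one below, asking only for the fields it needs (the `dishes` field and its subfields are illustrative here; we define the real schema later in this article):

```graphql
# Request only the fields the client needs
{
  dishes {
    name
    country
  }
}
```

The server replies with JSON mirroring exactly that shape, e.g. `{"data": {"dishes": [{"name": "...", "country": "..."}]}}`.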

With traditional REST API calls, the client has no way to request a customized set of data. In contrast, GraphQL allows clients to define the structure of the data they require, and exactly that structure is returned from the server. This prevents excessively large responses, although it also adds a layer of complexity that may not be worthwhile for simple APIs.

Moreover, maintaining multiple endpoints is difficult in REST architecture. When the application grows, the number of endpoints will increase, resulting in the client needing to ask for data from different endpoints. GraphQL APIs are more organized by providing structured types and fields in the schema while using a single API endpoint to request data.
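The difference is easy to see with a concrete request. Instead of calling two REST endpoints such as /chefs/1 and /chefs/1/dishes, a single GraphQL query can fetch a chef and their dishes together (the field names here are illustrative; we define the real schema later in the article):

```graphql
# One request replaces e.g. GET /chefs/1 and GET /chefs/1/dishes
{
  chef(id: "1") {
    name
    dish {
      name
      country
    }
  }
}
```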

Let’s start developing. First, we will create a new folder and initialize our package.json file. Then we add the following packages with the commands listed below:

yarn init
yarn add express graphql express-graphql mongoose

Now we can move on to creating our main file app.js in our root directory and require graphqlHTTP from the Express GraphQL package.

const express = require('express');
// Note: on express-graphql v0.12+, import with: const { graphqlHTTP } = require('express-graphql');
const graphqlHTTP = require('express-graphql');
const mongo = require('mongoose');

const app = express();

mongo.connect('mongodb://***yourusername***:***yourpassword***@ds053317.mlab.com:53317/gql-demo', {
    useNewUrlParser: true,
    useUnifiedTopology: true
});

mongo.connection.once('open', () => {
    console.log('connected to database');
});

app.use('/graphiql', graphqlHTTP({
    schema: require('./schema.js'),
    graphiql: true
}));

app.listen(8080, () => {
    console.log('Server running successfully...');
});

Here we required express and graphqlHTTP from our installed packages. We also made our connection to a MongoDB database hosted on mLab. By setting graphiql to true, we can send and receive requests from the browser, much as we would with Insomnia or Postman. We can also serve the app locally and visit http://localhost:8080/graphiql to use the console.
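Once the schema is defined in the next steps, we can sanity-check the setup by pasting a query like the following into the GraphiQL console (it uses the chefs list field from the root query we build below):

```graphql
{
  chefs {
    name
    rating
  }
}
```

GraphiQL also offers autocompletion and a Docs panel generated automatically from the schema, which makes exploring the API easier than with a generic HTTP client.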

Our next step is building the data models for storing items in our database. We will make a new folder, mongo-models, and create two files, chef.js and dish.js, as below:


// mongo-models/chef.js
const mongo = require('mongoose');
const Schema = mongo.Schema;

const chefSchema = new Schema({
    name: String,
    rating: Number
});

module.exports = mongo.model('Chef', chefSchema);

// mongo-models/dish.js
const mongo = require('mongoose');
const Schema = mongo.Schema;

const dishSchema = new Schema({
    name: String,
    country: String,
    tasty: Boolean,
    chefsId: String
});

module.exports = mongo.model('Dish', dishSchema);


Now we will create a file named schema.js in our root directory, where we will add our types and define the GraphQL API:

const graphql = require('graphql');

const Dish = require('./mongo-models/dish');
const Chef = require('./mongo-models/chef');

const {
    GraphQLObjectType,
    GraphQLString,
    GraphQLBoolean,
    GraphQLSchema,
    GraphQLID,
    GraphQLFloat,
    GraphQLList,
    GraphQLNonNull
} = graphql;


const DishType = new GraphQLObjectType({
    name: 'Dish',
    fields: () => ({
        id: { type: GraphQLID },
        name: { type: GraphQLString },
        tasty: { type: GraphQLBoolean },
        country: { type: GraphQLString },
        chefs: {
            type: ChefType,
            resolve(parent, args) {
                // look up the chef referenced by this dish
                return Chef.findById(parent.chefsId);
            }
        }
    })
});

const ChefType = new GraphQLObjectType({
    name: 'chefs',
    fields: () => ({
        id: { type: GraphQLID },
        name: { type: GraphQLString },
        rating: { type: GraphQLFloat },
        dish: {
            type: new GraphQLList(DishType),
            resolve(parent, args) {
                // find all dishes that reference this chef
                return Dish.find({ chefsId: parent.id });
            }
        }
    })
});

In the above code, we imported graphql and our Mongo models. We also gave GraphQL type definitions to our data types, wrapped inside a fields function written as a fat arrow so that the two types can reference each other. Each relational field has a resolve function that fires when a query asks for it, as we'll see in the next steps. Inside the resolve function, we fetch the data from the database.

const RootQuery = new GraphQLObjectType({
    name: 'RootQueryType',
    fields: {
        dish: {
            type: DishType,
            args: { id: { type: GraphQLID } },
            resolve(parent, args) {
                return Dish.findById(args.id);
            }
        },
        // note: this field must not share the key 'chefs' with the list
        // field below, or the later key would silently overwrite it
        chef: {
            type: ChefType,
            args: { id: { type: GraphQLID } },
            resolve(parent, args) {
                return Chef.findById(args.id);
            }
        },
        dishes: {
            type: new GraphQLList(DishType),
            resolve(parent, args) {
                return Dish.find({});
            }
        },
        chefs: {
            type: new GraphQLList(ChefType),
            resolve(parent, args) {
                return Chef.find({});
            }
        }
    }
});

const Mutation = new GraphQLObjectType({
    name: 'Mutation',
    fields: {
        addDish: {
            type: DishType,
            args: {
                name: { type: new GraphQLNonNull(GraphQLString) },
                country: { type: new GraphQLNonNull(GraphQLString) },
                tasty: { type: new GraphQLNonNull(GraphQLBoolean) },
                // optional link to the chef who makes this dish
                chefsId: { type: GraphQLID }
            },
            resolve(parent, args) {
                let dish = new Dish({
                    name: args.name,
                    country: args.country,
                    tasty: args.tasty,
                    chefsId: args.chefsId
                });
                return dish.save();
            }
        },
        addChef: {
            type: ChefType,
            args: {
                name: { type: new GraphQLNonNull(GraphQLString) },
                // rating is a number, matching ChefType and the Mongoose model
                rating: { type: new GraphQLNonNull(GraphQLFloat) }
            },
            resolve(parent, args) {
                let chef = new Chef({
                    name: args.name,
                    rating: args.rating
                });
                return chef.save();
            }
        }
    }
});

module.exports = new GraphQLSchema({
query: RootQuery,
mutation: Mutation
});

We have added the above code to the schema.js file. Here the root query lets us fetch a single dish by id and also return the full list of dishes via the GraphQLList import. The same goes for chefs.
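For instance, with the server running, the root query can be exercised from the GraphiQL console like this (DISH_ID is a placeholder for a real ObjectId from your database):

```graphql
{
  dish(id: "DISH_ID") {
    name
    tasty
  }
  dishes {
    name
    country
  }
}
```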

We also added a Mutation object, where we can add a dish to the database; the GraphQLNonNull import marks its fields as compulsory. In the resolve function, we create the new document and save it to the database. The same goes for the chef mutation.
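As a quick illustration, the addDish mutation can be called like this (the argument values are made up); the selection set determines which fields of the newly created dish come back in the response:

```graphql
mutation {
  addDish(name: "Paella", country: "Spain", tasty: true) {
    id
    name
    tasty
  }
}
```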

Finally, we export a GraphQLSchema containing the root query and the mutation.

That’s it! We now have a working API using GraphQL with Node.

I have also linked my GitHub repository so that you can fork and work with the available code — Link here
