
How to Securely Query Postgres in Node.js

When you’re querying Postgres, you need to choose between:

  • Using an ORM. This gives you “native” feeling APIs to query the database.
  • Using raw SQL. This gives you the ultimate flexibility and performance and gives you more transferable skills. It’s always helpful to know how to write SQL.

Postgres ORM

If you want to use an ORM to query Postgres, I recommend using https://typeorm.io. If you’re starting with a fresh project, you can use their typeorm init CLI command:

npx typeorm init --name MyProject --database postgres
cd MyProject && yarn

You’ll then need to edit ormconfig.json to add your database connection options. You’ll need to add a file in src/entity for each table in your database.
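For example, a minimal ormconfig.json for a local Postgres instance might look like this (the host, credentials, and database name are placeholders — substitute your own):

```json
{
  "type": "postgres",
  "host": "localhost",
  "port": 5432,
  "username": "test",
  "password": "test",
  "database": "test",
  "synchronize": true,
  "logging": false,
  "entities": ["src/entity/**/*.ts"]
}
```

Note that `synchronize: true` auto-creates tables from your entities, which is convenient in development but risky in production.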


You can then use a JavaScript API to create records in your database:

import {createConnection} from "typeorm";
import {Photo} from "./entity/Photo";

async function createPhoto() {
  const connection = await createConnection({
    type: 'postgres',
    url: process.env.DATABASE_URL || 'postgres://test:test@localhost/test'
  });

  const photo = new Photo();
  photo.name = "Me and Bears";
  photo.description = "I am near polar bears";
  photo.filename = "photo-with-bears.jpg";
  photo.views = 1;
  photo.isPublished = true;
  await connection.manager.save(photo);

  console.log("Photo has been saved. Photo id is", photo.id);
}

createPhoto();

There are advantages to this approach (the biggest being that it supports strong types), but I personally feel that it makes the code pretty hard to read and follow, and the skills you learn with TypeORM will be of no use if you move to a different ORM.

Raw SQL

I believe that the simplest and easiest way to query Postgres is to directly write the SQL that will be run against your database.

“Using SQL directly means there’s nothing to configure”

First, install @databases/pg:

yarn add @databases/pg

You’ll need to set the DATABASE_URL environment variable to your database connection string.
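For example (the user, password, and database name below are placeholders for your own settings):

```shell
# Format: postgres://USER:PASSWORD@HOST:PORT/DATABASE
export DATABASE_URL='postgres://myuser:mypassword@localhost:5432/mydb'
echo "$DATABASE_URL"
```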

import connect, {sql} from '@databases/pg';

const db = connect();

export async function getAllUsers() {
  return await db.query(sql`SELECT * FROM users;`);
}

export async function getUserById(userId) {
  return (await db.query(sql`
    SELECT * FROM users WHERE user_id=${userId}
  `))[0];
}

export async function createUser(u) {
  return (await db.query(sql`
    INSERT INTO users (name, email)
    VALUES (${u.name}, ${u.email})
    RETURNING user_id;
  `))[0].user_id;
}

export async function deleteUserById(userId) {
  await db.query(sql`DELETE FROM users WHERE user_id=${userId}`);
}

export async function updateUserById(userId, u) {
  await db.query(sql`
    UPDATE users
    SET name=${u.name}, email=${u.email}
    WHERE user_id=${userId}
  `);
}

export async function upsertUser(userId, u) {
  return (await db.query(sql`
    INSERT INTO users (user_id, name, email)
    VALUES (${userId}, ${u.name}, ${u.email})
    ON CONFLICT (user_id)
    DO UPDATE SET name=${u.name}, email=${u.email}
    RETURNING *;
  `))[0];
}

N.B. The [@databases](https://www.atdatabases.org/) library does not just concatenate your user input into a string of SQL. It separates your parameters from the actual query and uses prepared statements to run it, and it throws a clear runtime exception if you forget to tag your query with the sql tag. This makes it virtually impossible to introduce SQL injection vulnerabilities by accident.
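To see why this works, here is a rough sketch of how a tagged template literal separates the query text from the values. This is illustrative only (the `sqlTag` function below is not the library’s actual internals), but the mechanism is the same: the interpolated values never touch the SQL string.

```javascript
// A minimal sketch of a tagged template that builds a parameterised
// query: the SQL text gets $1, $2, ... placeholders, and the raw
// values are kept in a separate array for the driver to bind.
function sqlTag(strings, ...values) {
  const text = strings.reduce(
    (query, part, i) => query + part + (i < values.length ? `$${i + 1}` : ''),
    ''
  );
  return {text, values};
}

const userId = "1; DROP TABLE users;--";
const query = sqlTag`SELECT * FROM users WHERE user_id=${userId}`;

console.log(query.text);   // SELECT * FROM users WHERE user_id=$1
console.log(query.values); // [ '1; DROP TABLE users;--' ]
```

Even with a malicious value, the query text stays fixed — the value is only ever sent as a bound parameter.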

Conclusion

For most projects, I recommend querying your Postgres database directly using @databases/pg. It gives you the ultimate flexibility. If you need TypeScript types, I recommend declaring them alongside the SQL queries that produce them. TypeScript isn’t currently able to check that the types match your database schema, but if they live in the same file, you’re more likely to remember to keep them in sync.
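As a sketch of what that might look like (the User shape here is an assumption based on the columns used earlier in this post, and the query source is stubbed so the example stands alone):

```typescript
// The row shape for the users table, declared next to the code that
// returns it. The cast below is not checked against the real schema,
// which is exactly why keeping the type beside the SQL helps.
interface User {
  user_id: number;
  name: string;
  email: string;
}

// Stand-in for db.query(sql`SELECT * FROM users;`) — stubbed here so
// this sketch runs without a database.
function queryStub(): unknown[] {
  return [{user_id: 1, name: 'Ada', email: 'ada@example.com'}];
}

function getAllUsers(): User[] {
  return queryStub() as User[];
}

const users = getAllUsers();
console.log(users[0].email); // ada@example.com
```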

Thanks for reading! If you liked this post, share it with all of your programming buddies!
