6 Things You Can Easily Do With Nginx

Introduction

Nginx is one of the most popular and most widely deployed web servers in the world - check the web server usage statistics provided by W3Techs. Nginx can also act as a proxy server, load balancer, caching server, etc. - feel free to read the official docs for more information on what the product can offer you.

My first experience with Nginx was using it as an API gateway (more like a single entry point for a system) proxying requests to multiple back-end services. I fell in love with the simplicity of a product with so many capabilities. Since then, I've used it for multiple projects and would still prefer it over alternative web servers.

I decided to write this post to illustrate some of the features that I have found super useful and, in my opinion, incredibly simple to configure. In no particular order, the list is given in the next section.

If you don't understand the http/server/location directives in the configuration examples below or you haven't seen any Nginx configuration before, please refer to this guide for more details.

How-tos

Limit request body size

Very often when developing an API we let users upload arbitrary files - image files, video files, archives, etc. This commonly implies sending a large number of bytes as part of the HTTP request body. Left without validation, this functionality can lead to unexpected storage costs or even memory exhaustion if the back-end server loads the whole file into memory. Nginx can easily protect against this by enforcing a limit on the size of the request body. Here is an example configuration:

http {
    ...
    client_max_body_size 500M;
    ... 
}

Yes, it is that simple. This single line ensures that the body of any HTTP request sent to this host does not exceed 500 megabytes. Requests that exceed the limit will receive a client error response with status code 413 (Request Entity Too Large).

Another example is given below - enforcing the limit for a particular URI pattern rather than for every single request:

http {
    ...
    server {
        ...
        listen 80;

        location /api/pictures {
            ...
            client_max_body_size 500M;
            ...
        }
        ...
    }
    ...
}

The config above defines an HTTP server listening on port 80 and restricts requests sent to URIs starting with /api/pictures to a maximum body size of 500 megabytes. For more details on this, please see the official docs.

N.B. the default limit is 1 megabyte.
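
The limit can also be switched off entirely for a specific location by setting the value to 0, which disables request body size checking - handy for a dedicated upload endpoint. A sketch (the /api/backups path is just an illustration):

http {
    ...
    client_max_body_size 1M;
    ...
    server {
        ...
        location /api/backups {
            ...
            # 0 disables body size checking for this location only;
            # the 1 megabyte limit above still applies elsewhere.
            client_max_body_size 0;
            ...
        }
        ...
    }
    ...
}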

Add HTTP basic authentication

A common use case for website developers is restricting access to certain sub-pages of their website. What I also commonly do during manual testing of a new website/API is restrict access to the entire application to a particular set of test users until the release is official. Nginx's auth_basic module can easily allow us to do just that. For example:

http {
    ...
    auth_basic "Please authenticate";
    auth_basic_user_file /etc/nginx/.htpasswd;
    ... 
}

This config restricts access to the entire Nginx server using HTTP Basic Authentication. Keep in mind the /etc/nginx/.htpasswd file must contain a list of <username>:<password> pairs and is expected to follow a specific format. Please see this tutorial for how to generate the necessary password file.
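
If you'd rather not install extra tooling, a password file entry can also be generated with openssl, which ships with most systems. A sketch, assuming openssl is available (the username alice and the password are placeholders - pick your own):

```shell
# Generate a password file entry for user "alice".
# -apr1 produces the Apache MD5 format that auth_basic understands.
entry="alice:$(openssl passwd -apr1 'S3cretPassw0rd')"

# Append it to a local file, then move it to /etc/nginx/.htpasswd.
echo "$entry" >> .htpasswd
```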

The same configuration can be used to only restrict a particular URI pattern:

http {
    ...
    server {
        ...
        listen 80;

        location /admin {
            ...
            auth_basic "Please authenticate";
            auth_basic_user_file /etc/nginx/.htpasswd;
            ...
        }
        ...
    }
    ...
}

Limit API access to read-only requests

Sometimes when exposing a local back-end service to the public through Nginx (acting as a proxy), I only want to allow the read-only requests (e.g. GET and HEAD) that the front-end needs for retrieving content - for example, I might not want to publicly expose any ops-related DELETE API endpoints. Nginx supports this through the limit_except directive:

http {
    ...
    server {
        ...
        listen 80;

        location /api {
            ...
            limit_except GET {
                deny  all;
            }
            ...
        }
        ...
    }
    ...
}

N.B. the HEAD method is automatically included when allowing the GET method
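
limit_except accepts more than a blanket deny - the usual access-control directives work inside it, so write methods can be permitted for a trusted network while staying read-only for everyone else. A sketch (the 10.0.0.0/8 range is just an example):

http {
    ...
    server {
        ...
        listen 80;

        location /api {
            ...
            # GET (and HEAD) stay open to everyone; every other
            # method is allowed only from the internal range.
            limit_except GET {
                allow 10.0.0.0/8;
                deny  all;
            }
            ...
        }
        ...
    }
    ...
}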

Add SSL support

Who doesn't love the message from our browsers indicating that the connection with a particular website is secure? To enable this for an Nginx web server, we can use the Nginx SSL module:

http {
    ...
    server {
        ...
        listen 443 ssl;

        ssl_certificate <path to certificate file in PEM format>;
        ssl_certificate_key <path to private/secret key file in PEM format>;
        ...
    }
    ...
}

This config assumes you know how to generate an SSL certificate. One of the simplest ways to do this is to use Let's Encrypt - follow this tutorial. Ultimately, you should end up with fullchain.pem and privkey.pem; these are the files you need for the ssl_certificate and ssl_certificate_key directives respectively.
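
While the two directives above are all that is strictly required, it is usually worth restricting the accepted protocol versions too, since older SSL/TLS versions have known weaknesses. A minimal sketch, assuming certificates issued by Let's Encrypt for example.com (substitute your own domain):

http {
    ...
    server {
        ...
        listen 443 ssl;

        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        # Accept only modern TLS versions.
        ssl_protocols TLSv1.2 TLSv1.3;
        ...
    }
    ...
}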

Rate limiting clients

Rate limiting is the process of ensuring that an individual client cannot exceed a given threshold of requests for a given unit of time - e.g. ensuring that each client can send a maximum of 5 requests per second. The module for configuring this is the Nginx HTTP Limit Request Module.

http {
    ...
    limit_req_zone $binary_remote_addr zone=books:10m rate=5r/s;
    ...
    server {
        ...
        listen 80;

        location /api/books {
            ...
            limit_req zone=books;
            ...
        }
        ...
    }
    ...
}

Here we need one directive (limit_req_zone) to define the rate limiting "zone" (a reusable rate limiting configuration) and another directive (limit_req) to enable that zone for a particular context (e.g. a URI location pattern). The limit_req directive can also be used inside a server or http context - in other words, rate limiting can be applied to all virtual servers, to an individual virtual server, or to a particular URI location pattern.
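
Note that requests arriving faster than the configured rate are rejected outright. If you'd rather tolerate short spikes, the burst parameter queues a number of excess requests instead of failing them immediately - a sketch reusing the zone above:

http {
    ...
    limit_req_zone $binary_remote_addr zone=books:10m rate=5r/s;
    ...
    server {
        ...
        location /api/books {
            ...
            # Queue up to 10 requests above the 5r/s rate; nodelay
            # forwards queued requests immediately instead of
            # spacing them out to the configured rate.
            limit_req zone=books burst=10 nodelay;
            ...
        }
        ...
    }
    ...
}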

By default, clients that exceed the rate limiting threshold will receive a response with status code 503 (Service Unavailable). This is also configurable with the limit_req_status directive - for example, 429 (Too Many Requests) describes the situation more accurately:

http {
    ...
    limit_req_zone $binary_remote_addr zone=books:10m rate=5r/s;
    ...
    server {
        ...
        listen 80;

        location /api/books {
            ...
            limit_req zone=books;
            limit_req_status 429;
            ...
        }
        ...
    }
    ...
}

Redirect HTTP traffic to HTTPS

A very common scenario for web servers is to automatically redirect all clients who connect to the non-encrypted version of a website (HTTP traffic - i.e. a server listening on port 80) to the safer encrypted version (HTTPS traffic - i.e. a server listening on port 443). Redirection is another easy thing to configure in Nginx:

http {
    ...
    server {
        listen 80;
        return 301 https://$host$request_uri;
    }
    server {
        ...
        listen 443 ssl;
        ...
    }
    ...
}

What this config means is that Nginx will return a status code 301 (Moved Permanently) to all clients that connect to port 80. The redirection target will be a URL that uses the same host name and request URI but with https rather than http as the URL scheme. Browsers will automatically follow the redirection and move clients to the secure version of our website.
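
To take this one step further, the HTTPS server can tell browsers to skip the insecure version entirely on future visits via the Strict-Transport-Security header (HSTS). A sketch - the one-year max-age below is a common choice, not a requirement:

http {
    ...
    server {
        ...
        listen 443 ssl;

        # Instruct browsers to connect over HTTPS directly
        # for the next year (31536000 seconds).
        add_header Strict-Transport-Security "max-age=31536000" always;
        ...
    }
    ...
}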