How to set up an Nginx reverse proxy and provide a global X.509 certificate for it

Version 2.1 by Alexandru Pentilescu on 2022/06/11 21:18

Nginx is a powerful reverse proxy, capable of intercepting any HTTP request destined for a specific server name and redirecting it to a configured target port.

Using Nginx in such a way helps define subdomains for different services.

For example, suppose you are in control of a domain name called "pentilescu.com". You wish to establish subdomains for different services, such as having a Bitwarden HTTP server accessible at "passwords.pentilescu.com", a Gitea HTTP server accessible at "git.pentilescu.com" and a personal webpage at "alexandru.pentilescu.com".

"pentilescu.com" is the master domain, an umbrella under which all the other services reside. Each of these services can run on a different machine, if you wish for it, or it can run on the same machine.

All you have to do is configure an Nginx server on the machine that "pentilescu.com" points to via DNS. Nginx can then redirect all HTTP requests destined for various pre-configured subdomains to other machines, or to different ports on the same machine, effectively orchestrating all requests according to well-defined matching rules. It can, say, redirect all HTTP requests destined for "passwords.pentilescu.com" to localhost port 187, all HTTPS requests destined for "git.pentilescu.com" to localhost port 200, and all "alexandru.pentilescu.com" requests to the local device 192.168.1.3, port 9030.
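
In Nginx terms, that mapping boils down to one "server" block per subdomain, each with its own "proxy_pass" target. Here is a compact preview, using the example ports and addresses above; the directives involved are explained in detail later in this guide:

server {
    listen 80;
    server_name passwords.pentilescu.com;
    location / { proxy_pass http://localhost:187/; }
}

server {
    listen 80;
    server_name git.pentilescu.com;
    location / { proxy_pass http://localhost:200/; }
}

server {
    listen 80;
    server_name alexandru.pentilescu.com;
    location / { proxy_pass http://192.168.1.3:9030/; }
}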

This gives administrators the flexibility to run each service on whatever machine and port they like and then, at a single gateway endpoint, configure Nginx to redirect incoming requests to each of them accordingly.

This is the power that a reverse proxy provides. And all of this can be attained without having to configure a separate DNS record for every single subdomain, as long as they all resolve to the gateway machine (for example via a single wildcard DNS record).

Let's see how such configurations will look!

Installing Nginx

First and foremost, Nginx must be installed on the machine that the main domain points to via DNS. This machine will serve as the gateway for all incoming requests that must be serviced by a machine within this network.
As Nginx is free software, it is available in most Linux distributions' package repositories. Use whichever installation method is specific to your distribution; usually a simple "sudo apt-get install nginx" is sufficient. This implies, of course, that you have administrative privileges on the gateway machine.
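
For example, on Debian/Ubuntu-based and on Fedora/RHEL-based systems respectively (the exact package manager invocation depends on your distribution):

# Debian, Ubuntu and derivatives
sudo apt-get update && sudo apt-get install nginx

# Fedora, RHEL and derivatives
sudo dnf install nginx
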
Once the installation is done, you have to configure Nginx to start looking for user-defined configuration files in appropriate directories.
To do so, please edit "/etc/nginx/nginx.conf" to look like the following:


user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;


events {
   worker_connections  1024;
}


http {
   include       /etc/nginx/mime.types;
   default_type  application/octet-stream;

   log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';

   access_log  /var/log/nginx/access.log  main;

   sendfile        on;
   #tcp_nopush     on;

   keepalive_timeout  65;

   #gzip  on;

   include /etc/nginx/conf.d/*.conf;
   include /etc/nginx/sites-enabled/*.conf;
}

Please note that the only new line I added, apart from the pre-existing ones that come with this file by default, is "include /etc/nginx/sites-enabled/*.conf;", almost at the end.

What this line does is tell Nginx to take into account any file with the ".conf" extension from within "/etc/nginx/sites-enabled/" and merge its configuration into this master configuration file whenever Nginx loads or reloads its configuration.

Effectively, this means that whenever you wish to add another endpoint subdomain to your domain name, you just have to create a new ".conf" file under "/etc/nginx/sites-enabled" and Nginx will pick it up the next time its configuration is reloaded, provided the syntax in that file is correct.
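
For example, on a systemd-based distribution (the service management command may differ on your system), checking the syntax and reloading looks like this:

# Check the full configuration for syntax errors first
sudo nginx -t

# Then reload Nginx so it picks up the new file without dropping existing connections
sudo systemctl reload nginx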

Ideally, "/etc/nginx/sites-enabled" should only contain symbolic link files to actual physical files stored in "/etc/nginx/sites-available". This is to allow partially defined services being configured offline in their own directory and, only after the configurations are fully complete, can they be activated by creating a symlink to their file.

Moreover, whenever a service needs to be disabled again, its configuration file doesn't have to be deleted in its entirety. All that has to be done is to delete the symbolic link pointing to it from "/etc/nginx/sites-enabled"; the file itself remains on the drive under "/etc/nginx/sites-available", where it can still be edited until the service is reactivated by re-creating a symlink to it.

Handling symbolic links is very easy and straightforward.
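
As an illustration, using a hypothetical "myservice.conf" (substitute your own file name):

# Activate a service by linking its configuration into sites-enabled
sudo ln -s /etc/nginx/sites-available/myservice.conf /etc/nginx/sites-enabled/myservice.conf

# Deactivate it again by removing only the symbolic link;
# the real file stays untouched in sites-available
sudo rm /etc/nginx/sites-enabled/myservice.conf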

Of course, one needs to remember, at all times, the following rules:

  1. When activating or deactivating a service, one must create or delete symbolic links in "/etc/nginx/sites-enabled"
  2. When configuring a new service from scratch, one must first create the actual configuration file under "/etc/nginx/sites-available" and, only after the configuration is complete, then one must create a symbolic link to that file in "/etc/nginx/sites-enabled"

Remembering these might seem difficult at first glance, but it will soon become second nature to most administrators.

If this is too difficult to remember, you can always use "/etc/nginx/sites-enabled" to store your actual configuration files directly, instead of symbolic links. This is not the recommended usage of these directories, but it is an option.

Now, time to write a configuration file from scratch for our Bitwarden service!

Example of a service configuration file

Please take a look at the following configuration file for our Bitwarden endpoint:

server {
    server_name passwords.pentilescu.com;
    root /var/www/;

    listen [::]:443 ssl http2; # managed by Certbot
    listen 443 ssl http2; # managed by Certbot

    index index.html;

    location / {
        proxy_pass http://localhost:5178/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        client_max_body_size 0;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
        add_header Referrer-Policy "same-origin";

        access_log /var/log/nginx/bitwarden.access.log;
        error_log /var/log/nginx/bitwarden.error.log;
    }

    include /etc/nginx/snippets/ssl.conf;
}

I called this file "bitwarden.conf" in both the "sites-enabled" and "sites-available" directories; the former is a symbolic link towards the latter. It's imperative that the symbolic link's name ends in the ".conf" extension, otherwise Nginx will ignore it, per our own configuration in "nginx.conf".
Now, let's break that configuration down into the important parts that you should remember:

  • The "server_name" directive tells Nginx the name of the subdomain whose requests need to be redirected to a different endpoint. In our case, we wanted all HTTP requests destined to "passwords.pentilescu.com" to be redirected to our localhost port 5178
  • The "listen 443 ssl http2" and "listen [::]:443 ssl http2" directives tell Nginx to open port 443 for both IPv4 and IPv6 incoming traffic. Port 443 is usually used by web browsers attempting to connect to our website using the TLS protocol, in order to secure the connection against network eavesdroppers. Keep in mind that, in order to effectively use TLS, we must have a generated X.509 certificate to use, signed by a certificate authority that's, hopefully, trusted by most web browser vendors
  • The "location" directive is a bit interesting. It defines many other subdirectives for the destination endpoint of our service. The "proxy_pass" subdirective, specifically, tells Nginx where the destination of the redirected requests needs to be. In our case, all requests have to be redirected to our own machine, port 5178. The "add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";" directive tells the connection to always be encrypted, at all times, otherwise it should fail in an error. This should include subdomains as well. And the "error_log" and "access_log" subdirectives tell Nginx where and under which names to store the error log and the access log for this specific service, respectively

Now, you may wonder: "Why port 5178, specifically?" There is no particular reason for this specific port. I'm simply running Bitwarden in Docker and I needed an external port to map the service to. This port can be any other number, provided you remember to change it both in your Bitwarden Docker configuration AND here, in Nginx.
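
As a rough sketch of what that port mapping might look like on the Docker side (this assumes a Vaultwarden-style image that listens on port 80 inside the container; the image name, volume path and internal port are assumptions, so adjust them to your actual deployment):

# Publish the container's internal web port only on localhost:5178,
# so that nothing but the local Nginx instance can reach it directly.
# Image name, data path and internal port (80) are assumptions.
docker run -d \
  --name bitwarden \
  -v /srv/bitwarden:/data \
  -p 127.0.0.1:5178:80 \
  vaultwarden/server:latest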

Finally, the "include /etc/nginx/snippets/ssl.conf;" is the most interesting directive, by far. This "snippets" directory, is meant to contain configuration snippets that are meant to be included in other configuration files, all the time. The "ssl.conf", specifically, contains configuration data for pointing towards X.509 certificate files, so that Nginx wil know where the public key file is contained, where the private key file is contained, and where the full chain certificate is contained. All these three files need to be configured, so that Nginx will know how to find them and deliver the public ones to any visitors to our site. If you don't wish to support TLS at all on your server, don't include this directive in your configuration file and change the 443 port references here to port 80, instead, so that only HTTP connections are allowed.

Since the same certificate files will be reused by all of our different endpoints, such as "passwords.pentilescu.com", "git.pentilescu.com", "wiki.pentilescu.com" and so on, it doesn't make sense to copy-paste the same directives into every one of their Nginx ".conf" files. Instead, we configure all of them once in a snippet file, in our case "ssl.conf", and then each of those services' configuration files just adds one line that references "ssl.conf". Functionally, the two approaches are the same.

This is possible because of the existence of wildcard X.509 certificates, which basically say: "this certificate is meant to protect traffic for all subdomains under pentilescu.com, as well as for pentilescu.com itself!". This allows us to have a single X.509 certificate for all of our subdomains, which makes administration easier, as only this one certificate has to be signed by a certificate authority!
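
One common way to obtain such a wildcard certificate is with Let's Encrypt's certbot client. Note that wildcard certificates require a DNS-01 challenge, so you must be able to create TXT records for your domain (a DNS plugin for your provider can automate this). A hedged example:

# Request a certificate covering both the apex domain and all subdomains.
# The manual DNS challenge will ask you to create a TXT record yourself.
sudo certbot certonly --manual --preferred-challenges dns \
  -d pentilescu.com -d '*.pentilescu.com'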

Now, you might look at this and say to yourself: "Well, that's all fine and good but where are you opening port 80 for remote connections? Isn't that the standard HTTP port? In the above configuration, you're only listening on port 443, which is specifically only HTTPS! Wouldn't this cause connecting clients that only use bare HTTP to fail?".
Well, yes, this is an issue. But that's why there's more to show you, in the next section!

Setting up HTTP->HTTPS automatic redirecting and configuring a global X.509 certificate

Time to get into the groove of things!
First, we shall create a new configuration file called "fallback.conf" in "/etc/nginx/sites-available" and then a symbolic link towards it in "/etc/nginx/sites-enabled". Why?
Well, basically, we want any plain HTTP request to our website, whether to "pentilescu.com" or to any of its subdomains, to be redirected to the exact same destination, but using the HTTPS protocol instead.
This is considered good practice, because you're essentially redirecting the browser to a more secure version of the same page.
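
Here is a minimal sketch of what such a "fallback.conf" could look like (treat the exact contents as an assumption): it simply catches all plain HTTP traffic on port 80 and issues a permanent redirect to the HTTPS equivalent of the same URL.

server {
    # Catch-all server block for plain HTTP traffic, IPv4 and IPv6
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    # Redirect every request to the same host and path over HTTPS
    return 301 https://$host$request_uri;
}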