How to set up an Nginx reverse proxy and provide a global X.509 certificate for it
Nginx is a powerful reverse proxy engine, capable of intercepting any HTTP request for a specific server and redirecting it to a targeted port.
Using Nginx in such a way helps define subdomains for different services.
For example, suppose you are in control of a domain name called "pentilescu.com". You wish to establish subdomains for different services: a Bitwarden HTTP server accessible at "passwords.pentilescu.com", a Gitea HTTP server accessible at "git.pentilescu.com" and a personal webpage at "alexandru.pentilescu.com".
"pentilescu.com" is the master domain, an umbrella under which all the other services reside. Each of these services can run on a different machine, if you wish for it, or it can run on the same machine.
All you have to do is configure an Nginx server on the machine that "pentilescu.com" points to via DNS. Then, Nginx can redirect all HTTP requests destined for various pre-configured subdomains to other machines, or to different ports on the same machine, effectively orchestrating all requests according to well-defined matching rules. It can, say, redirect all HTTP requests destined for "passwords.pentilescu.com" to localhost port 187, all HTTPS requests destined for "git.pentilescu.com" to localhost port 200 and all "alexandru.pentilescu.com" requests to local device 192.168.1.3 port 9030.
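At a high level, that routing could be sketched with three hypothetical server blocks like the ones below (certificate directives and other details are omitted for brevity; only the hostnames and ports come from the example above):

```nginx
# Hypothetical sketch of the routing described above.
server {
    listen 80;
    server_name passwords.pentilescu.com;
    location / { proxy_pass http://localhost:187/; }
}
server {
    listen 443 ssl;
    server_name git.pentilescu.com;
    location / { proxy_pass http://localhost:200/; }
}
server {
    listen 80;
    server_name alexandru.pentilescu.com;
    location / { proxy_pass http://192.168.1.3:9030/; }
}
```

The rest of this article builds up real, complete versions of blocks like these.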
This gives administrators flexibility to configure all services on various machines and ports and then, at a single gateway endpoint, configure Nginx to redirect all requests to each of them, accordingly.
This is the power that a reverse proxy provides. And all of this can be attained without having to configure a separate DNS record for every subdomain: a single wildcard record pointing at the gateway is enough.
Let's see how such configurations will look!
Installing Nginx
First and foremost, Nginx must be installed on the machine being pointed to by the main domain via DNS. This machine will serve as the gateway for all the requests coming into the current network, requests that must be serviced by a machine within this network.
As Nginx is free software, it's usually found in most Linux distributions' package repositories. Install it using the method specific to your own Linux distribution! On Debian-based systems, a simple "sudo apt-get install nginx" is sufficient. This implies, of course, that you have administrative privileges on the gateway machine.
Once the installation is done, you have to configure Nginx to start looking for user-defined configuration files in appropriate directories.
To do so, please edit "/etc/nginx/nginx.conf" to look like the following:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*.conf;
}
Please note that the only line I added, apart from the pre-existing ones that come by default with this file, was "include /etc/nginx/sites-enabled/*.conf;", almost at the end.
What this line does is tell Nginx to take into account any file with the ".conf" extension from within "/etc/nginx/sites-enabled/" and merge its configuration into this master configuration file whenever Nginx loads its configuration.
Effectively, this means that whenever you wish to add another endpoint subdomain to your domain name, you just have to create a new ".conf" file under "/etc/nginx/sites-enabled" and reload Nginx, and it will automatically pick it up, provided the syntax in that file is correct.
Ideally, "/etc/nginx/sites-enabled" should only contain symbolic links to actual files stored in "/etc/nginx/sites-available". This allows partially defined services to be configured offline in their own directory and only activated, once their configuration is fully complete, by creating a symlink to their file.
Moreover, whenever a service needs to be disabled again, its configuration file doesn't have to be physically deleted. All that has to be done is to delete the symbolic link pointing to it from "/etc/nginx/sites-enabled"; the file itself remains on the drive under "/etc/nginx/sites-available", where it can be edited until it's reactivated by creating a new symlink to it.
Handling symbolic links is very easy and straightforward.
Of course, one needs to remember, at all times, the following rules:
- When activating or deactivating a service, one must create or delete symbolic links in "/etc/nginx/sites-enabled"
- When configuring a new service from scratch, one must first create the actual configuration file under "/etc/nginx/sites-available" and, only after the configuration is complete, then one must create a symbolic link to that file in "/etc/nginx/sites-enabled"
Remembering these might seem difficult at first glance, but it will soon become second nature to most administrators.
If this is too difficult to remember, you can always use only "/etc/nginx/sites-enabled" to store your actual configuration files, instead of storing symbolic links. This is not the recommended usage for them but is an option.
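The whole workflow can be sketched in a few shell commands. To keep the sketch runnable without touching a real server, it uses a scratch directory; on the actual gateway you would operate on "/etc/nginx/sites-available" and "/etc/nginx/sites-enabled" directly (with sudo):

```shell
# Demo of the enable/disable workflow in a throwaway directory.
NGINX_ROOT="$(mktemp -d)"
mkdir -p "$NGINX_ROOT/sites-available" "$NGINX_ROOT/sites-enabled"

# Configure the service offline in sites-available first...
printf 'server { listen 80; }\n' > "$NGINX_ROOT/sites-available/bitwarden.conf"

# ...then activate it with a symlink (the link name must keep the .conf extension):
ln -s "$NGINX_ROOT/sites-available/bitwarden.conf" \
      "$NGINX_ROOT/sites-enabled/bitwarden.conf"

# Deactivating only removes the symlink; the real file survives untouched:
rm "$NGINX_ROOT/sites-enabled/bitwarden.conf"
test -f "$NGINX_ROOT/sites-available/bitwarden.conf" && echo "config preserved"
```

On the real gateway, follow each such change with "sudo nginx -t && sudo systemctl reload nginx" so that Nginx validates the configuration and picks up the new layout.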
Now, time to write a configuration file from scratch for our bitwarden service!
Example of a service configuration file
Please take a look at the following configuration file for our Bitwarden endpoint:
server {
    server_name passwords.pentilescu.com;
    root /var/www/;

    listen [::]:443 ssl http2; # managed by Certbot
    listen 443 ssl http2; # managed by Certbot

    index index.html;

    location / {
        proxy_pass http://localhost:5178/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 0;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
        add_header Referrer-Policy "same-origin";
        access_log /var/log/nginx/bitwarden.access.log;
        error_log /var/log/nginx/bitwarden.error.log;
    }

    include /etc/nginx/snippets/ssl.conf;
}
I called this file "bitwarden.conf" in both the "sites-enabled" and "sites-available" directories; the former is a symbolic link to the latter. It's imperative that the symbolic link's name end in the ".conf" extension, otherwise Nginx will ignore it, per our own configuration in "nginx.conf".
Now, let's break that configuration down into the important parts that you should remember:
- The "server_name" directive tells Nginx the name of the subdomain whose requests need to be redirected to a different endpoint. In our case, we want all HTTP requests destined for "passwords.pentilescu.com" to be redirected to our localhost port 5178
- The "listen 443 ssl http2" and "listen [::]:443 ssl http2" directives tell Nginx to open port 443 for both IPv4 and IPv6 incoming traffic. Port 443 is usually used by web browsers attempting to connect to our website using the TLS protocol, in order to secure the connection against network eavesdroppers. Keep in mind that, in order to effectively use TLS, we must have a generated X.509 certificate to use, signed by a certificate authority that's, hopefully, trusted by most web browser vendors
- The "location" directive is a bit more interesting. It defines a number of subdirectives for the destination endpoint of our service. The "proxy_pass" subdirective, specifically, tells Nginx where redirected requests should go; in our case, all requests are forwarded to our own machine, port 5178. The "add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";" directive tells the browser that the connection must always be encrypted, and to fail with an error otherwise; this covers subdomains as well. Finally, the "error_log" and "access_log" subdirectives tell Nginx where, and under which names, to store the error log and the access log for this specific service, respectively
Now, you may wonder: "Why port 5178, specifically?" There is no particular reason for this specific port. I'm simply running Bitwarden in Docker and needed an external port for the service. This port can be any other number, provided you remember to change it both in your Bitwarden Docker configuration AND here, in Nginx.
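For reference, the corresponding port mapping in a Docker Compose file might look like the following. This is only a sketch: the "vaultwarden/server" image (a popular unofficial Bitwarden-compatible server) and its internal port 80 are my assumptions here; adapt them to whatever Bitwarden image you actually run:

```yaml
services:
  bitwarden:
    image: vaultwarden/server:latest   # assumed image; substitute your own
    restart: unless-stopped
    ports:
      # host port 5178 must match the proxy_pass port in the Nginx config;
      # binding to 127.0.0.1 keeps it reachable only through the reverse proxy
      - "127.0.0.1:5178:80"
    volumes:
      - ./bw-data:/data
```

Binding the container to 127.0.0.1 is a deliberate choice: only Nginx, running on the same machine, can reach it, so all outside traffic is forced through the proxy and its TLS termination.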
Finally, the "include /etc/nginx/snippets/ssl.conf;" line is the most interesting directive, by far. The "snippets" directory is meant to contain configuration fragments that are intended to be included in other configuration files. "ssl.conf", specifically, contains the directives pointing Nginx at the X.509 certificate files: where the full certificate chain (which contains the public key) is stored, and where the private key is stored. Both need to be configured so that Nginx knows how to find them and can present the public certificate to any visitor of our site. If you don't wish to support TLS at all on your server, don't include this directive in your configuration file and change the 443 port references here to port 80 instead, so that only plain HTTP connections are allowed.
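To make this concrete, a minimal "ssl.conf" might look like the snippet below. The Let's Encrypt paths are assumptions on my part (that's where Certbot puts files by default); substitute wherever your certificate files actually live:

```nginx
# Hypothetical /etc/nginx/snippets/ssl.conf; paths assume a Let's Encrypt
# wildcard certificate issued for pentilescu.com.
ssl_certificate     /etc/letsencrypt/live/pentilescu.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/pentilescu.com/privkey.pem;
ssl_protocols       TLSv1.2 TLSv1.3;
```

"ssl_certificate" takes the full chain (server certificate plus intermediates), while "ssl_certificate_key" takes the private key, which must never leave the server.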
Since the same certificate files will be reused by all of our different endpoints, such as "passwords.pentilescu.com", "git.pentilescu.com", "wiki.pentilescu.com" etc., it doesn't make sense to copy-paste their configuration into every one of their Nginx ".conf" files. Instead, we configure all of it once in a snippet file, in our case "ssl.conf", and then, in each service's configuration file, we just add one line that references "ssl.conf". Functionally, the two approaches are the same.
This is possible because of the existence of wildcard X.509 certificates, where you can basically say "This certificate is meant to protect traffic for all subdomains under pentilescu.com as well as for pentilescu.com itself!". This allows us to have a single X.509 certificate for all of our subdomains, which makes administration easier as only this certificate will have to be signed, afterwards, by a certificate authority!
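To see what actually makes a certificate a wildcard one, you can generate a throwaway self-signed example and inspect its Subject Alternative Name extension. (A real deployment would instead have the certificate signed by a CA, typically via a DNS-01 challenge; this sketch only illustrates the certificate contents and assumes OpenSSL 1.1.1+ for the -addext flag.)

```shell
DIR="$(mktemp -d)"

# Self-signed wildcard certificate covering the apex domain and all subdomains:
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout "$DIR/privkey.pem" -out "$DIR/fullchain.pem" \
  -subj "/CN=pentilescu.com" \
  -addext "subjectAltName=DNS:pentilescu.com,DNS:*.pentilescu.com"

# The SAN extension is what lets one certificate protect every subdomain:
openssl x509 -in "$DIR/fullchain.pem" -noout -ext subjectAltName
```

The "DNS:*.pentilescu.com" entry in the output is the wildcard: browsers will accept this certificate for "passwords.pentilescu.com", "git.pentilescu.com", and any other single-level subdomain, as well as for "pentilescu.com" itself.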
Now, you might look at this and say to yourself: "Well, that's all fine and good but where are you opening port 80 for remote connections? Isn't that the standard HTTP port? In the above configuration, you're only listening on port 443, which is specifically only HTTPS! Wouldn't this cause connecting clients that only use bare HTTP to fail?".
Well, yes, this is an issue. But that's why there's more to show you, in the next section!
Setting up HTTP->HTTPS automatic redirecting
Time to get into the groove of things!
First, we shall create a new configuration file called "fallback.conf" in "/etc/nginx/sites-available" and then a symbolic link towards it in "/etc/nginx/sites-enabled". Why?
Well, basically, we want for any HTTP request to our website, either to "pentilescu.com" or to any subdomain of "pentilescu.com" to redirect that request to the exact same destination but using the HTTPS protocol, instead.
This is considered good practice because you're essentially redirecting the browser to a more secure connection protocol. Not only does this ensure that any login credentials are protected from being stolen, but also that the exact resources being retrieved from our server are hidden, so that our visitors can browse our server privately.
The contents of this "fallback.conf" file are as follows:
server {
    listen [::]:443 ssl http2 default_server; # managed by Certbot
    listen 443 ssl http2 default_server;
    server_name _;
    include /etc/nginx/snippets/ssl.conf;
    return 404;
}
server {
    server_name pentilescu.com;
    return 301 https://alexandru.pentilescu.com$request_uri;
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    include /etc/nginx/snippets/ssl.conf;
}
server {
    server_name _;
    return 301 https://$host$request_uri; # managed by Certbot
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 443 ssl;
    listen [::]:443 ssl;
    include /etc/nginx/snippets/ssl.conf;
}
Each of these 3 server blocks has a well-defined purpose. Let's break them down!
The most important thing to remember is how Nginx does request matching.
To put it simply, you can have as many ".conf" files in the "sites-enabled" directory as you wish to have. And any one ".conf" file can contain multiple "server" directives, each with its own proxy_pass destination and with its own subdomain service name.
So, when a new HTTP/HTTPS request arrives at our Nginx server, how does it decide which "server" directive to match with that specific request, so that it knows where to redirect it to?
The answer is fairly simple: for each request, Nginx looks through all of the ".conf" files under "sites-enabled", considers each file's "server" blocks together with their "server_name" and "listen" directives, and then picks the block that matches the request's parameters most specifically.
To put it as simply as possible, suppose the aforementioned bitwarden.conf file is created and there's also the fallback.conf file as with the previous contents, as well.
When a new request arrives to our server, destined to passwords.pentilescu.com port 443, our server will look at both of these ".conf" files. Let's suppose it starts looking with the fallback.conf file.
It sees 3 "server" blocks in fallback.conf, the first of which matches any incoming connection on port 443 (which matches our request) and matches literally any server name (that's what the "_" value of the "server_name" directive means). Since it matches any server name, this is not a very specific match. Nginx will remember this as the closest match so far and then continue to look at the other "server" blocks as well.
The second "server" block only matches for requests directed at "pentilescu.com", not for any of its subdomains. As the request was made specifically to passwords.pentilescu.com, which is a subdomain of pentilescu.com, this does not match with this block at all. Nginx continues.
The third "server" block matches with the server_name directive (as the server name is the "_" wildcard once again), and it also matches with the ports (both port 80 and 443 this time). Nginx will also remember this one as a viable candidate for resolving the request.
Then, Nginx will also take a look at the bitwarden.conf file.
Here, there is one single "server" block, one which matches exactly with the "passwords.pentilescu.com" server name (as well as the 443 port). As this one has an exact match to both server_name and port, this is the most specific one to match.
As such, Nginx will resolve to use that server block for this request and redirect it to that server's port. As such, the configuration from bitwarden.conf won over the configurations in fallback.conf.
Simple, right?
Now, you might wonder what the purpose of the first server block in fallback.conf is, then. Ideally, fallback.conf should contain catch-all rules for requests that don't match anything more specific.
For example, suppose the aforementioned setup, but with a request for "abcdefg.pentilescu.com" on port 443. Where should such a request go?
Well, it doesn't match the "passwords.pentilescu.com" server_name block in bitwarden.conf. Nor does it match the "pentilescu.com" server block, since it's a subdomain of pentilescu.com. The only two matching candidates are therefore the first and third "server" blocks in fallback.conf, which match any name whatsoever due to their wildcard server_names. In this situation, Nginx picks the first one, because it carries the "default_server" flag for port 443 (i.e. it should handle anything that arrives on port 443 without a more specific match). The action defined for this "server" block is "return 404", which tells Nginx to simply return a 404 status code immediately. The browser then reports this status code to the visitor, letting them know that the service they were attempting to access does not exist on this server, an indication that their request was malformed.
This block effectively handles all malformed requests or any request that does not have a specific resolver for it.
The second "server" block in fallback.conf is specifically for requests coming to "pentilescu.com" itself, not to any of its subdomains. It listens on both ports 80 and 443 and, when it matches, returns HTTP code 301 to "https://alexandru.pentilescu.com$request_uri", preserving the exact URI of the original request but forcefully redirecting it to "alexandru.pentilescu.com". Basically, this is a forced redirection so that anyone accessing the naked domain ends up at a specific subdomain. This is just a quirk of my own website; you may choose not to include this behavior on your own platform.
Finally, the third and last "server" block in fallback.conf matches literally any server name due to its wildcard and is the default server for port 80. Being the default server, this block will handle every request arriving on port 80 that has no more specific match. Its action is to return HTTP code 301 to "https://$host$request_uri", which resolves to the exact same URL as the one originally requested, except with the "https://" prefix in front. In other words, for any port-80 (i.e. plain HTTP) request without a better match, Nginx sends back a 301 redirect pointing the visitor at the same URL over HTTPS, effectively forcing them onto port 443. This moves all their requests from HTTP to HTTPS and automatically secures them with encryption!
And, moreover, as long as we don't add more specific "server" directives for port 80 in any other one of our ".conf" files, this block will always be used for incoming port-80 requests, so we can handle all of them by redirecting them to the more secure protocol.
Pretty cool, right?
Finally, let's see how we can configure an X509 certificate globally!