Changes for page How to setup an Nginx reverse proxy and also provide a global X.509 certificate for it
Last modified by Alexandru Pentilescu on 2023/06/25 18:53
From version 6.1
edited by Alexandru Pentilescu
on 2022/06/11 21:30
Change comment:
There is no comment for this version
To version 13.1
edited by Alexandru Pentilescu
on 2023/06/25 18:53
Change comment:
There is no comment for this version
Summary: Page properties (1 modified, 0 added, 0 removed)

Details: Page properties - Content
@@ -2,11 +2,11 @@

Using Nginx in such a way helps define subdomains for different services.

For example, suppose you are in control of a domain name called "transistor.one". You wish to establish subdomains for different services, such as having a Bitwarden HTTP server accessible at "passwords.transistor.one", a gitea HTTP server accessible at "git.transistor.one" and a personal webpage at "alex.transistor.one".

"transistor.one" is the master domain, an umbrella under which all the other services reside. Each of these services can run on a different machine, if you wish, or they can all run on the same machine.

All you have to do is configure an Nginx server on the machine that "transistor.one" points to via DNS. Nginx can then redirect all HTTP requests destined for the various pre-configured subdomains to other machines or to different ports on the same machine, effectively orchestrating all requests according to well-defined matching rules. It can, say, redirect all HTTP requests destined for "passwords.transistor.one" to localhost port 187, all HTTPS requests destined for "git.transistor.one" to localhost port 200 and all "alex.transistor.one" requests to the local device 192.168.1.3, port 9030.

This gives administrators the flexibility to configure all services on various machines and ports and then, at a single gateway endpoint, configure Nginx to redirect all requests to each of them accordingly.
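To make that concrete, here is a minimal, purely illustrative sketch of such a mapping. The ports 187, 200 and 9030 are just the example values from the paragraph above, and these bare-bones blocks omit the TLS and logging directives covered later in this article:

{{code language="nginx"}}
# Illustrative only: one server block per subdomain, each forwarding traffic elsewhere.
server {
    listen 80;
    server_name passwords.transistor.one;
    location / { proxy_pass http://127.0.0.1:187; }
}

server {
    listen 80;
    server_name git.transistor.one;
    location / { proxy_pass http://127.0.0.1:200; }
}

server {
    listen 80;
    server_name alex.transistor.one;
    location / { proxy_pass http://192.168.1.3:9030; }
}
{{/code}}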
pentilescu.com" to be redirected to our localhost port 5178123 +* The "server_name" directive tells Nginx the name of the subdomain whose requests need to be redirected to a different endpoint. In our case, we wanted all HTTP requests destined to "passwords.transistor.one" to be redirected to our localhost port 5178 124 124 * The "listen 443 ssl http2" and "listen [::]:443 ssl http2" directives tell Nginx to open port 443 for both IPv4 and IPv6 incoming traffic. Port 443 is usually used by web browsers attempting to connect to our website using the TLS protocol, in order to secure the connection against network eavesdroppers. Keep in mind that, in order to effectively use TLS, we must have a generated X.509 certificate to use, signed by a certificate authority that's, hopefully, trusted by most web browser vendors 125 125 * The "location" directive is a bit interesting. It defines many other subdirectives for the destination endpoint of our service. The "proxy_pass" subdirective, specifically, tells Nginx where the destination of the redirected requests needs to be. In our case, all requests have to be redirected to our own machine, port 5178. The "add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";" directive tells the connection to always be encrypted, at all times, otherwise it should fail in an error. This should include subdomains as well. And the "error_log" and "access_log" subdirectives tell Nginx where and under which names to store the error log and the access log for this specific service, respectively 126 126 ... ... @@ -128,19 +128,163 @@ 128 128 129 129 Finally, the "include /etc/nginx/snippets/ssl.conf;" is the most interesting directive, by far. This "snippets" directory, is meant to contain configuration snippets that are meant to be included in other configuration files, all the time. The "ssl.conf", specifically, contains configuration data for pointing towards X.509 certificate files, so that Nginx wil know where the public key file is contained, where the private key file is contained, and where the full chain certificate is contained. All these three files need to be configured, so that Nginx will know how to find them and deliver the public ones to any visitors to our site. If you don't wish to support TLS at all on your server, don't include this directive in your configuration file and change the 443 port references here to port 80, instead, so that only HTTP connections are allowed. 130 130 131 -Since the same certificate files will be reused by all of our different endpoints, such as "passwords. pentilescu.com", "git.pentilescu.com", "wiki.pentilescu.com" etc, it doesn't make sense to just copy-paste their configurations in every one of their Nginx ".conf" files, instead, we configure all of them in one snippet file, in our case, "ssl.conf" and then, in all of those other services' configuration files, we just add 1 line that references the "ssl.conf" file, instead. Functionally, they are the same.131 +Since the same certificate files will be reused by all of our different endpoints, such as "passwords.transistor.one", "git.transistor.one", "wiki.transistor.one" etc, it doesn't make sense to just copy-paste their configurations in every one of their Nginx ".conf" files, instead, we configure all of them in one snippet file, in our case, "ssl.conf" and then, in all of those other services' configuration files, we just add 1 line that references the "ssl.conf" file, instead. Functionally, they are the same. 
This is possible because of the existence of wildcard X.509 certificates, where you can basically say "This certificate is meant to protect traffic for all subdomains under transistor.one, as well as for transistor.one itself!". This allows us to have a single X.509 certificate for all of our subdomains, which makes administration easier, as only this one certificate will have to be signed, afterwards, by a certificate authority!

Now, you might look at this and say to yourself: "Well, that's all fine and good, but where are you opening port 80 for remote connections? Isn't that the standard HTTP port? In the above configuration, you're only listening on port 443, which is specifically only HTTPS! Wouldn't this cause connecting clients that only use bare HTTP to fail?".
Well, yes, this is an issue. But that's why there's more to show you, in the next section!


= Setting up HTTP->HTTPS automatic redirecting =

Time to get into the groove of things!
First, we shall create a new configuration file in "/etc/nginx/sites-available" and then a symbolic link towards it in "/etc/nginx/sites-enabled", called "fallback.conf". Why?
Well, basically, we want any HTTP request to our website, whether to "transistor.one" or to any subdomain of "transistor.one", to be redirected to the exact same destination but using the HTTPS protocol instead.
This is considered good practice, because you're essentially redirecting the browser to use a more secure connection protocol. Not only will this ensure that any login credentials are protected from being stolen, but also that the exact resources being retrieved from our server are obfuscated, so that our visitors can browse our server privately.

The contents of this "fallback.conf" file are as follows:

{{code language="nginx"}}
server {
    listen [::]:443 ssl http2 default_server; # managed by Certbot
    listen 443 default_server;
    server_name _;
    include /etc/nginx/snippets/ssl.conf;
    return 404;
}

server {
    server_name transistor.one;
    return 301 https://alex.transistor.one$request_uri;

    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443;
}

server {
    server_name _;
    return 301 https://$host$request_uri; # managed by Certbot

    listen 80 default_server;
    listen [::]:80 default_server;
}
{{/code}}
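Assuming the same "sites-available"/"sites-enabled" layout used throughout this article, creating the file and its symbolic link could look roughly like this:

{{code language="bash"}}
# Create the configuration file under sites-available, then link it into sites-enabled
# so that Nginx actually picks it up (the .conf extension matters, as discussed earlier).
sudoedit /etc/nginx/sites-available/fallback.conf
sudo ln -s /etc/nginx/sites-available/fallback.conf /etc/nginx/sites-enabled/fallback.conf
{{/code}}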
Each of these 3 server blocks has a well-defined purpose. Let's break them down!

The most important thing to remember is how Nginx does request matching.

To put it simply, you can have as many ".conf" files in the "sites-enabled" directory as you wish. And any one ".conf" file can contain multiple "server" blocks, each with its own proxy_pass destination and its own subdomain service name.

So, when a new HTTP/HTTPS request arrives at our Nginx server, how does it decide which "server" block to match with that specific request, so that it knows where to redirect it?

The answer is fairly easy: for each request, Nginx looks through all of the ".conf" files under "sites-enabled", goes through each ".conf" file's "server" blocks while taking into account their "server_name" and "listen" directives, and then picks the one that is the most specific match for our request's parameters.

To put it as simply as possible, suppose the aforementioned bitwarden.conf file is created and the fallback.conf file, with the previous contents, exists as well.

When a new request arrives at our server, destined for passwords.transistor.one port 443, our server will look at both of these ".conf" files. Let's suppose it starts with the fallback.conf file.
It sees 3 "server" blocks in fallback.conf, the first of which matches any incoming connection on port 443 (which matches our request) and matches literally any server name (that's what the "_" after the "server_name" directive means). Since it matches any server name, this is not a very specific match. Nginx will remember this as the closest match it has for now and then continue to look at other "server" blocks as well.

The second "server" block only matches requests directed at "transistor.one", not at any of its subdomains. As the request was made specifically to passwords.transistor.one, which is a subdomain of transistor.one, it does not match this block at all. Nginx continues.

The third "server" block matches on the server_name directive (as the server name is the "_" wildcard once again). However, it only matches incoming connections on port 80, while our request is to port 443. As such, this match fails and Nginx will continue to look for more "server" blocks.

Then, Nginx will also take a look at the bitwarden.conf file.

Here, there is one single "server" block, one which matches the "passwords.transistor.one" server name exactly (as well as the 443 port). As this one has an exact match on both server_name and port, it is the most specific one.

As such, Nginx will resolve to use that server block for this request and redirect it to that block's proxy_pass destination. The configuration from bitwarden.conf won over the configurations in fallback.conf.

Simple, right?
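When reasoning about matching on a live machine, it can also help to dump the configuration Nginx has actually loaded and scan the directives that drive the matching; a quick sketch:

{{code language="bash"}}
# Print the full merged configuration Nginx would run with, then pick out the
# server_name and listen directives that decide which block handles a request.
sudo nginx -T | grep -E "server_name|listen"
{{/code}}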
Now, you might wonder what the purpose of the first server block in fallback.conf is, then. Well, fallback.conf should, ideally, contain catch-all rules for requests that don't match anything more specific.

For example, suppose the aforementioned setup, but with a request for "abcdefg.transistor.one" on port 443. Where should such a request go?

Well, it doesn't match the "passwords.transistor.one" server_name block in bitwarden.conf. Nor does it match the "transistor.one" server block, as the request is for a subdomain of transistor.one. Finally, it doesn't match the third "server" block in fallback.conf either, as that one only handles connections coming in on port 80, not 443.
The only matching candidate, as such, is the first "server" block in fallback.conf, which resolves to any name whatsoever, due to its wildcard server_name. In this situation, Nginx will just take it, as it's the only candidate. It also has the "default_server" parameter for port 443, which means that, even if the server name hadn't matched, this would still have been the "server" block to handle the request. The action defined for this "server" block is "return 404", which tells Nginx to simply return a 404 status code, immediately. The browser will then report this "404" status code to the visitor, letting them know that the service they were attempting to access does not exist on this server, an indication that their request was malformed.

This block effectively handles all malformed requests, or any request that does not have a specific resolver for it.

The second "server" block in fallback.conf is specifically for all requests coming to "transistor.one", not to any of its subdomains. It listens on both ports 80 and 443 and, when it matches, it will return an HTTP code 301 to "https://alex.transistor.one$request_uri", effectively preserving the exact same URI that was used in the original request but forcefully redirecting it to "alex.transistor.one", so that any request to "transistor.one" will send the browser to "alex.transistor.one". Basically, this is a forced redirection so that, when accessing the naked domain, one is redirected to a specific subdomain. This is just a quirk of my own website. You may choose not to include this behavior on your own platform.

Finally, the third and last "server" block in our fallback.conf matches literally any server name due to its wildcard and is the default server for port 80. Being the default server, this block will handle all requests arriving on port 80 that don't have a better match. The action of this block is to return a 301 HTTP code to "https://$host$request_uri", which resolves to the exact same URL as the originally requested one, except with the "https://" prefix in front of it. This means we're telling Nginx that, for any request arriving on port 80 (i.e. over plain HTTP) that doesn't have a better match, we must send back an HTTP response code of 301 (i.e. a redirect) pointing our visitor to the exact same URL they already used, but with the "https://" prefix, effectively forcing them to use port 443 instead. This will move all their requests from HTTP to HTTPS and automatically secure them with encryption!

And, moreover, if we don't add any more "server" blocks in any other one of our Nginx ".conf" files, this block will always be used for incoming port 80 requests, since nothing more specific resolves for them, and we can handle all of them by redirecting them to use a more secure protocol.

Pretty cool, right?
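If you want to sanity-check this behavior once fallback.conf is live, a few curl requests against the hostnames used above should show the responses just described (a sketch; the exact headers will vary on your setup):

{{code language="bash"}}
# Plain HTTP to a known subdomain: expect a 301 pointing at the https:// version.
curl -I http://git.transistor.one/

# The naked domain: expect a 301 pointing at https://alex.transistor.one/.
curl -I http://transistor.one/

# An undefined subdomain over HTTPS: expect the catch-all 404 block to answer.
curl -kI https://abcdefg.transistor.one/
{{/code}}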
Finally, let's see how we can configure an X.509 certificate globally!


= Configuring a global X.509 certificate =

This is the easiest part of this article. Whenever you wish to encrypt traffic for a specific server block in Nginx, just add the "include /etc/nginx/snippets/ssl.conf;" directive to its server block and you're pretty much done.
Now, what should this ssl.conf snippet file contain? Easy:

{{code language="nginx"}}
ssl_certificate /etc/letsencrypt/live/transistor.one/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/transistor.one/privkey.pem; # managed by Certbot
ssl_trusted_certificate /etc/letsencrypt/live/transistor.one/chain.pem;
{{/code}}

Now, I admit, these file paths are usually generated by the certbot utility. Configuring certbot is outside the scope of this article and I will not cover it.
certbot is also a utility specific to the Let's Encrypt CA, which might differ from your own certificate authority. But, regardless of which CA you choose to use, everything should boil down to 3 ".pem" files at the end: one containing your certificate (with its public key) that will be delivered to the visitor, one containing the full chain, and one containing the private key which Nginx will use to decrypt incoming traffic.

The ssl_certificate_key directive should point to your private key file. DO NOT, UNDER ANY CIRCUMSTANCES, GIVE THIS TO ANYONE. It has to be kept private and only you and Nginx should have access to it.

chain.pem contains the CA's intermediate certificate, the one that signed your certificate.

fullchain.pem contains your own public certificate followed by everything that chain.pem contains. The CA's root certificate itself is not included; it should already be recognized by (and shipped with) any visitor's web browser.

As such, please change these file paths to the 3 files that you will be using from your respective CA. If in doubt, always ask for professional help from a sysadmin!

= Testing our setup and deploying =

We're almost done! For completeness' sake, here's my gitea.conf Nginx configuration file as well, so that you have a base to start out with:

{{code language="nginx"}}
server {
    server_name git.transistor.one;

    listen [::]:443 ssl http2; # managed by Certbot
    listen 443 ssl http2; # managed by Certbot

    include /etc/nginx/snippets/ssl.conf;

    location / {
        proxy_pass http://localhost:3000;
    }
}
{{/code}}

This will redirect all requests meant for "git.transistor.one" to localhost port 3000. It also supports TLS, as usual.

Once you've got everything ready, run the following command to test all your configuration files at once:

{{code language="bash"}}
sudo nginx -t
{{/code}}

If Nginx reports that everything is OK, then proceed to restart the service with:

{{code language="bash"}}
sudo systemctl restart nginx
{{/code}}

Also, I don't remember if the Nginx daemon is set to run by default on system startup. This is pretty important, as you want all of your web services to be available even in the case of a system reboot. You shouldn't have to manually start Nginx after a system reboot! As such, I recommend running the following to make sure it's enabled:

{{code language="bash"}}
sudo systemctl enable nginx
{{/code}}

Also, you might have to open firewall ports 80 and 443 so that clients can reach Nginx. This is specific to your distro, so please do that manually. On my end, I don't remember having to do that; I think just installing Nginx did it automatically. Your mileage may vary.

That's it! Happy coding!