Last modified by Alexandru Pentilescu on 2023/06/25 18:53

Nginx is a powerful reverse proxy, capable of intercepting any HTTP request addressed to a specific server name and forwarding it to a target host and port.

Using Nginx in this way makes it easy to define subdomains for different services.

For example, suppose you control a domain name called "transistor.one" and wish to establish subdomains for different services: a Bitwarden HTTP server accessible at "passwords.transistor.one", a Gitea HTTP server accessible at "git.transistor.one", and a personal webpage at "alex.transistor.one".

"transistor.one" is the master domain, an umbrella under which all the other services reside. Each of these services can run on a different machine, if you wish, or they can all run on the same one.

All you have to do is configure an Nginx server on the machine that "transistor.one" points to via DNS. Nginx can then forward all HTTP requests destined for the various pre-configured subdomains to other machines, or to different ports on the same machine, effectively orchestrating all requests according to well-defined matching rules. It can, say, send all HTTP requests destined for "passwords.transistor.one" to localhost port 187, all HTTPS requests destined for "git.transistor.one" to localhost port 200, and all "alex.transistor.one" requests to the local device 192.168.1.3, port 9030.

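Schematically, the routing just described boils down to one server block per subdomain. The sketch below is a condensed preview using the example ports from this paragraph; real blocks also need listen directives and TLS settings, which are covered later in this article:

{{code language="nginx"}}
# Condensed preview -- not a complete configuration.
server {
    server_name passwords.transistor.one;
    location / { proxy_pass http://localhost:187; }
}
server {
    server_name git.transistor.one;
    location / { proxy_pass http://localhost:200; }
}
server {
    server_name alex.transistor.one;
    location / { proxy_pass http://192.168.1.3:9030; }
}
{{/code}}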
This gives administrators the flexibility to set up services on various machines and ports and then, at a single gateway endpoint, configure Nginx to route every request to the right one.

This is the power a reverse proxy provides. Better yet, no per-subdomain DNS records are needed: a single wildcard record (e.g. "*.transistor.one") pointing at the gateway is enough, since Nginx performs the per-subdomain routing itself.

Let's see what such configurations look like!

= Installing Nginx =

First and foremost, Nginx must be installed on the machine that the main domain points to via DNS. This machine will serve as the gateway for all requests coming into the current network, requests that must then be serviced by some machine within it.

As Nginx is licensed under a free-software license, it's usually found in most Linux distributions' repositories. Install it using the method specific to your own distribution! Usually a simple "sudo apt-get install nginx" is sufficient. This implies, of course, that you have administrative privileges on the gateway machine.

Once the installation is done, you have to configure Nginx to look for user-defined configuration files in the appropriate directories. To do so, please edit "/etc/nginx/nginx.conf" to look like the following:

{{code language="nginx"}}
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*.conf;
}
{{/code}}

Please note that the only line I added here, beyond what ships by default in this file, is "include /etc/nginx/sites-enabled/*.conf;", near the end.

What this line does is tell Nginx to take any file with the ".conf" extension from within "/etc/nginx/sites-enabled/" and merge its configuration into this master configuration file.

Effectively, this means that whenever you wish to add another subdomain endpoint to your domain name, you just create a new ".conf" file under "/etc/nginx/sites-enabled" and Nginx will pick it up automatically, provided the file's syntax is correct.

Ideally, "/etc/nginx/sites-enabled" should contain only symbolic links to the actual files, stored in "/etc/nginx/sites-available". This lets a partially defined service be configured offline in its own directory and activated, only once its configuration is complete, by creating a symlink to its file.

Moreover, whenever a service needs to be disabled again, its configuration file doesn't have to be deleted. All that has to be done is removing the symbolic link from "/etc/nginx/sites-enabled"; the file itself remains on the drive under "/etc/nginx/sites-available", where it can still be edited until it's reactivated by re-creating the symlink.

Handling symbolic links is easy and straightforward. One just needs to remember the following rules at all times:

1. When activating or deactivating a service, create or delete its symbolic link in "/etc/nginx/sites-enabled"
1. When configuring a new service from scratch, first create the actual configuration file under "/etc/nginx/sites-available" and, only once the configuration is complete, create a symbolic link to it in "/etc/nginx/sites-enabled"

Remembering these might seem difficult at first glance, but it soon becomes second nature to most administrators.

If this is too much bother, you can always store your actual configuration files directly in "/etc/nginx/sites-enabled" instead of symbolic links. That's not the recommended usage, but it is an option.

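Since the workflow is easier to show than to describe, here's a sketch of the enable/disable cycle. It runs in a throwaway temporary directory rather than the real /etc/nginx paths, so you can try it safely without root:

{{code language="bash"}}
# A scratch-directory dry run of the enable/disable workflow (the real
# directories would be /etc/nginx/sites-available and .../sites-enabled).
demo=$(mktemp -d)
mkdir -p "$demo/sites-available" "$demo/sites-enabled"
printf 'server { }\n' > "$demo/sites-available/bitwarden.conf"

# Activate the service: symlink the file into sites-enabled
ln -s "$demo/sites-available/bitwarden.conf" "$demo/sites-enabled/bitwarden.conf"

# Deactivate it again: remove only the symlink; the real file survives
rm "$demo/sites-enabled/bitwarden.conf"
test -f "$demo/sites-available/bitwarden.conf" && echo "config preserved"
{{/code}}

On the real system you'd point "ln -s" at "/etc/nginx/sites-available/bitwarden.conf" and reload Nginx afterwards.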
Now, time to write a configuration file from scratch for our Bitwarden service!

= Example of a service configuration file =

Please take a look at the following configuration file for our Bitwarden endpoint:

{{code language="nginx"}}
server {
    server_name passwords.transistor.one;
    root /var/www/;

    listen [::]:443 ssl http2; # managed by Certbot
    listen 443 ssl http2; # managed by Certbot

    index index.html;

    location / {
        proxy_pass http://localhost:5178/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        client_max_body_size 0;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
        add_header Referrer-Policy "same-origin";

        access_log /var/log/nginx/bitwarden.access.log;
        error_log /var/log/nginx/bitwarden.error.log;
    }

    include /etc/nginx/snippets/ssl.conf;
}
{{/code}}
119
120 I called this file "bitwarden.conf" both in the "sites-enabled" and "sites-available" directories, the former is a symbolic link towards the latter. It's imperative that the symbolic link's name end in the ".conf" extension, otherwise Nginx will ignore it, per our own configuration in "nginx.conf".
121 Now, let's break that configuration down into the important parts that you should remember:
122
123 * The "server_name" directive tells Nginx the name of the subdomain whose requests need to be redirected to a different endpoint. In our case, we wanted all HTTP requests destined to "passwords.transistor.one" to be redirected to our localhost port 5178
124 * The "listen 443 ssl http2" and "listen [::]:443 ssl http2" directives tell Nginx to open port 443 for both IPv4 and IPv6 incoming traffic. Port 443 is usually used by web browsers attempting to connect to our website using the TLS protocol, in order to secure the connection against network eavesdroppers. Keep in mind that, in order to effectively use TLS, we must have a generated X.509 certificate to use, signed by a certificate authority that's, hopefully, trusted by most web browser vendors
125 * The "location" directive is a bit interesting. It defines many other subdirectives for the destination endpoint of our service. The "proxy_pass" subdirective, specifically, tells Nginx where the destination of the redirected requests needs to be. In our case, all requests have to be redirected to our own machine, port 5178. The "add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";" directive tells the connection to always be encrypted, at all times, otherwise it should fail in an error. This should include subdomains as well. And the "error_log" and "access_log" subdirectives tell Nginx where and under which names to store the error log and the access log for this specific service, respectively
126
127 Now, you may wonder "Why port 5178", specifically? There is no reason for this specific port. I'm simply running Bitwarden in docker and I needed an external port to be configured for my service. Basically, this port can be any other number, provided you remember to change this configuration both in your Bitwarden docker file AND here, in Nginx.
128
129 Finally, the "include /etc/nginx/snippets/ssl.conf;" is the most interesting directive, by far. This "snippets" directory, is meant to contain configuration snippets that are meant to be included in other configuration files, all the time. The "ssl.conf", specifically, contains configuration data for pointing towards X.509 certificate files, so that Nginx wil know where the public key file is contained, where the private key file is contained, and where the full chain certificate is contained. All these three files need to be configured, so that Nginx will know how to find them and deliver the public ones to any visitors to our site. If you don't wish to support TLS at all on your server, don't include this directive in your configuration file and change the 443 port references here to port 80, instead, so that only HTTP connections are allowed.
130
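For illustration, a plain-HTTP variant of such a service block (a hypothetical sketch; only reasonable for services where eavesdropping is not a concern) would look like this:

{{code language="nginx"}}
# Hypothetical no-TLS variant: listen on port 80, no ssl.conf include.
server {
    server_name passwords.transistor.one;

    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://localhost:5178/;
    }
}
{{/code}}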
Since the same certificate files are reused by all of our endpoints, such as "passwords.transistor.one", "git.transistor.one", "wiki.transistor.one", etc., it doesn't make sense to copy-paste their paths into every one of their Nginx ".conf" files. Instead, we configure them once in a snippet file, in our case "ssl.conf", and every other service's configuration file just adds one line referencing it. Functionally, the two approaches are the same.

This is possible thanks to wildcard X.509 certificates, which let you say: "this certificate protects traffic for all subdomains under transistor.one, as well as for transistor.one itself!" A single certificate covers all of our subdomains, which makes administration easier, as only this one certificate ever has to be signed by a certificate authority!

Now, you might look at this and say to yourself: "Well, that's all fine and good, but where are you opening port 80 for remote connections? Isn't that the standard HTTP port? In the above configuration you're only listening on port 443, which is HTTPS only! Wouldn't connecting clients that use bare HTTP fail?"

Well, yes, that is an issue. But that's why there's more to show you in the next section!


= Setting up HTTP->HTTPS automatic redirecting =

Time to get into the groove of things!

First, we shall create a new configuration file, "fallback.conf", in "/etc/nginx/sites-available", and then a symbolic link towards it in "/etc/nginx/sites-enabled". Why? Well, basically, we want any plain-HTTP request to our website, whether to "transistor.one" or to any subdomain of it, to be redirected to the exact same destination, but over HTTPS instead.

This is considered good practice because you're essentially steering the browser toward a more secure connection protocol. Not only does this protect any login credentials from being stolen, it also obfuscates exactly which resources are being retrieved from our server, so our visitors can browse it privately.

The contents of this "fallback.conf" file are as follows:

{{code language="nginx"}}
server {
    listen [::]:443 ssl http2 default_server; # managed by Certbot
    listen 443 ssl http2 default_server;
    server_name _;
    include /etc/nginx/snippets/ssl.conf;
    return 404;
}

server {
    server_name transistor.one;
    return 301 https://alex.transistor.one$request_uri;

    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
}

server {
    server_name _;
    return 301 https://$host$request_uri; # managed by Certbot

    listen 80 default_server;
    listen [::]:80 default_server;
}
{{/code}}

Each of these 3 server blocks has a well-defined purpose. Let's break them down!

The most important thing to understand is how Nginx does request matching.

To put it simply, you can have as many ".conf" files in the "sites-enabled" directory as you wish, and any one ".conf" file can contain multiple "server" blocks, each with its own proxy_pass destination and its own subdomain service name.

So, when a new HTTP/HTTPS request arrives at our Nginx server, how does it decide which "server" block to match with that specific request, so that it knows where to forward it?

The answer is fairly simple: for each request, Nginx looks through all of the ".conf" files under "sites-enabled", examines each file's "server" blocks while taking their "server_name" and "listen" directives into account, and then picks the one most specific to the request's parameters.

To put it as concretely as possible, suppose the aforementioned bitwarden.conf file exists, and that fallback.conf exists with the contents shown above.

When a new request arrives at our server, destined for passwords.transistor.one, port 443, Nginx will look at both of these ".conf" files. Let's suppose it starts with fallback.conf.

It sees 3 "server" blocks there. The first matches any incoming connection on port 443 (which fits our request) and matches literally any server name (that's what the "_" value of the "server_name" directive means). Since it matches any server name, this is not a very specific match. Nginx remembers it as the closest match so far and continues to look at the other "server" blocks.

The second "server" block only matches requests directed at "transistor.one" itself, not at any of its subdomains. As the request was made specifically to passwords.transistor.one, which is a subdomain of transistor.one, this block does not match at all. Nginx continues.

The third "server" block matches on the server_name directive (the "_" wildcard once again). However, it only matches connections coming in on port 80, and our request is on port 443. As such, this match fails and Nginx continues to look for more "server" blocks.

Then, Nginx also takes a look at the bitwarden.conf file.

Here there is one single "server" block, one that matches the "passwords.transistor.one" server name exactly (as well as the 443 port). As this one matches both server_name and port exactly, it is the most specific candidate.

As such, Nginx resolves to use that server block for this request and forwards it to that block's upstream port. The configuration from bitwarden.conf won over the configurations in fallback.conf.

Simple, right?
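The decision process above can be sketched as a toy shell function. To be clear, this is NOT Nginx's actual algorithm, merely an illustration of the precedence rules from the walkthrough: the listen port filters first, then an exact server_name beats the "_" catch-all:

{{code language="bash"}}
# Toy model of the matching walk-through -- not Nginx's real implementation.
match_block() {
  host=$1; port=$2
  if [ "$port" = 443 ]; then
    case "$host" in
      passwords.transistor.one) echo "bitwarden.conf -> proxy_pass localhost:5178" ;;
      transistor.one)           echo "fallback block 2 -> 301 https://alex.transistor.one" ;;
      *)                        echo "fallback block 1 -> 404" ;;
    esac
  else
    case "$host" in
      transistor.one)           echo "fallback block 2 -> 301 https://alex.transistor.one" ;;
      *)                        echo "fallback block 3 -> 301 to https" ;;
    esac
  fi
}

match_block passwords.transistor.one 443   # exact server_name match wins
match_block abcdefg.transistor.one 443     # only the port-443 catch-all matches
match_block abcdefg.transistor.one 80      # the default port-80 block upgrades to HTTPS
{{/code}}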

Now, you might wonder: what's the purpose of the first server block in fallback.conf, then? Well, fallback.conf should, ideally, resolve the requests that don't match anything more specific.

For example, take the aforementioned setup, but with a request for "abcdefg.transistor.one" on port 443. Where should such a request go?

Well, it doesn't match the "passwords.transistor.one" server_name block in bitwarden.conf. Nor does it match the "transistor.one" server block, as it's a subdomain of transistor.one. Finally, it doesn't match the third "server" block in fallback.conf either, as that one only accepts connections on port 80, not 443. The only remaining candidate is the first "server" block in fallback.conf, which matches any name whatsoever due to its wildcard server_name, so Nginx takes it. It also carries the "default_server" marker for port 443, which means that even if the server name hadn't matched, this would still be the "server" block to handle the request. The action defined for this block is "return 404", which tells Nginx to simply return a 404 status code immediately. The browser then reports this status code to the visitor, letting them know that the service they attempted to access does not exist on this server, an indication that their request was malformed.

This block effectively handles all malformed requests, and any request that doesn't have a more specific resolver.

The second "server" block in fallback.conf is specifically for requests to "transistor.one" itself, not any of its subdomains. It listens on both port 80 and port 443 and, when it matches, returns HTTP code 301 pointing at "https://alex.transistor.one$request_uri", preserving the exact URI of the original request while forcefully redirecting the browser to "alex.transistor.one". Basically, this is a forced redirection so that anyone accessing the naked domain lands on a specific subdomain. This is just a quirk of my own website; you may choose not to include this behavior on your own platform.

Finally, the third and last "server" block in our fallback.conf matches literally any server name, due to its wildcard, and is the default server for port 80. Being the default server, it handles every request arriving on port 80 that has no better match. Its action is to return HTTP code 301 pointing at "https://$host$request_uri", which resolves to the exact same URL as the originally requested one, except with the "https://" prefix instead. In other words: for any port-80 (i.e. plain HTTP) request without a better match, we send back a 301 response redirecting the visitor to the same URL over "https://", effectively forcing them onto port 443. This moves all their requests from HTTP to HTTPS, automatically securing them with encryption!

Moreover, as long as we don't add port-80 "server" blocks to any other of our ".conf" files, this block will always catch incoming port-80 requests, so nothing more specific resolves for them and all of them get redirected to the more secure protocol.

Pretty cool, right?

Finally, let's see how we can configure an X.509 certificate globally!


= Configuring a global X.509 certificate =

This is the easiest part of this article. Whenever you wish to encrypt requests to a specific server block in Nginx, just add the "include /etc/nginx/snippets/ssl.conf;" directive to that server block and you're pretty much done.

Now, what should this ssl.conf snippet file contain? Easy:

{{code language="nginx"}}
ssl_certificate /etc/letsencrypt/live/transistor.one/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/transistor.one/privkey.pem; # managed by Certbot
ssl_trusted_certificate /etc/letsencrypt/live/transistor.one/chain.pem;
{{/code}}

Now, I admit, these file paths are usually generated by the certbot utility. Configuring certbot is outside the scope of this article and I will not cover it.

certbot is also a utility specific to the Let's Encrypt CA, which might differ from your own certificate authority. But, regardless of which CA you choose to use, everything should boil down to a handful of ".pem" files at the end: a certificate chain that will be delivered to visitors, and a private key that Nginx uses to complete the TLS handshake for incoming traffic.

The ssl_certificate_key directive should point to your private key file. DO NOT, UNDER ANY CIRCUMSTANCES, GIVE THIS TO ANYONE. It has to be kept private, and only you and Nginx should have access to it.

With certbot's naming, chain.pem contains the CA's intermediate certificate(s) that signed your certificate, while fullchain.pem contains your own certificate followed by everything in chain.pem; fullchain.pem is what the "ssl_certificate" directive serves to visitors. The CA's root certificate is not included, as every visitor's web browser already ships with it.

As such, please change these file paths to the files you will be using from your respective CA. If in doubt, always ask for professional help from a sysadmin!
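If you want to peek inside these ".pem" files, the openssl command-line tool can print a certificate's subject, issuer, and expiry date. The sketch below assumes OpenSSL 1.1.1 or newer is installed; it generates a throwaway self-signed certificate as a stand-in for the certbot-issued files (note the wildcard subjectAltName, mirroring the wildcard certificate discussed earlier):

{{code language="bash"}}
# Generate a throwaway self-signed certificate as a stand-in for the real
# /etc/letsencrypt/live/transistor.one/ files, then inspect it.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=transistor.one" \
  -addext "subjectAltName=DNS:transistor.one,DNS:*.transistor.one" \
  -keyout "$dir/privkey.pem" -out "$dir/cert.pem" 2>/dev/null

# Who is the certificate for, who signed it, and when does it expire?
openssl x509 -in "$dir/cert.pem" -noout -subject -issuer -enddate
{{/code}}

The same "openssl x509" invocation works on your real fullchain.pem, which is handy for checking how close a certificate is to expiry.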

= Testing our setup and deploying =

We're almost done! For completeness' sake, here's my gitea.conf Nginx configuration file as well, so that you have a base to start out with:

{{code language="nginx"}}
server {
    server_name git.transistor.one;

    listen [::]:443 ssl http2; # managed by Certbot
    listen 443 ssl http2; # managed by Certbot

    include /etc/nginx/snippets/ssl.conf;

    location / {
        proxy_pass http://localhost:3000;
    }
}
{{/code}}

This will forward all requests meant for "git.transistor.one" to localhost port 3000. It also supports TLS, as usual.

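For symmetry with the introduction's example, a hypothetical "alex.conf" for the personal page could look like the following. It follows the same pattern as gitea.conf; the 192.168.1.3:9030 endpoint is the assumed value from the introduction:

{{code language="nginx"}}
server {
    server_name alex.transistor.one;

    listen [::]:443 ssl http2;
    listen 443 ssl http2;

    include /etc/nginx/snippets/ssl.conf;

    location / {
        # The upstream lives on another machine on the LAN, not on localhost
        proxy_pass http://192.168.1.3:9030;
    }
}
{{/code}}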
Once you've got everything ready, run the following command to test all your configuration files at once:

{{code language="bash"}}
sudo nginx -t
{{/code}}

If Nginx reports that everything is OK, then proceed to restart the service with:

{{code language="bash"}}
sudo systemctl restart nginx
{{/code}}

Also, I don't remember whether the Nginx daemon is set to run on system startup by default. This is pretty important, as you want all of your web services to be available even after a system reboot; you shouldn't have to start Nginx manually every time the machine restarts! As such, I recommend running the following to make sure it's enabled:

{{code language="bash"}}
sudo systemctl enable nginx
{{/code}}

Also, you might have to open firewall ports 80 and 443 to allow Nginx to listen on them. This is specific to your distro, so please do that manually. On my end, I don't remember having to; I believe just installing Nginx did that automatically. Your mileage may vary.

That's it! Happy coding!