Configure Nginx for a Production Environment

Harsh Shah
May 5, 2021 · 9 min read

Nginx is a highly versatile web server and, when configured properly, can outperform many alternatives. A web server in a production environment differs from one in a test environment in terms of performance, security, and more.

When we install Nginx, it provides a ready-to-use configuration setting for our application. However, the default configuration is not good enough for a production environment. There are a number of configurations that improve security and enhance the overall performance of our web application. Therefore, we will focus on how to configure it to perform better during heavy traffic spikes and secure it from users who intend to abuse it.

  • Performance- We can use caching, buffering, data compression, etc. to improve server performance.
  • Security- We can make the web server more secure by rate limiting, blocking bots/crawlers, enforcing strict header policies, and so on.

Performance Configuration

1. Compression

Compressing the data transferred over the network can speed up our website. gzip is a popular data compression program. When the browser requests a web page, the server doesn’t have to send it uncompressed; instead, it can send it in a compressed form based on the encodings the browser accepts.


http {
    gzip on;
    gzip_static on;
    gzip_http_version 1.1;
    gzip_comp_level 5;
    gzip_min_length 256;
    gzip_proxied any;
    gzip_vary on;
    gzip_types
        application/atom+xml
        application/javascript
        application/json
        application/rss+xml
        application/vnd.ms-fontobject
        application/x-font-ttf
        application/x-web-app-manifest+json
        application/xhtml+xml
        application/xml
        font/opentype
        image/svg+xml
        image/x-icon
        text/css
        text/plain
        text/x-component;
}
  • gzip - Enables or disables gzipping of responses.
  • gzip_static- Serves pre-compressed .gz files instead of compressing files on the fly (see the sketch after this list).
  • gzip_http_version- Sets the minimum HTTP version of a request required to compress a response. We can use the default value, which is 1.1.
  • gzip_comp_level- Sets the gzip compression level of a response. Acceptable values range from 1 (minimum) to 9 (maximum); to avoid wasting CPU resources, we should not set the level too high.
  • gzip_min_length - Tells Nginx not to compress anything smaller than the defined size. The default is only 20 bytes; very small responses gain little from compression, so we raise it to 256.
  • gzip_proxied- By default, Nginx does not compress responses to proxied requests (requests that arrive with a “Via” header). The directive’s parameters specify which kinds of proxied requests Nginx should compress; with the any parameter used above, all proxied responses are compressed.
  • gzip_vary - If it is on, it tells proxies to cache both gzipped and regular versions of a resource by adding the “Vary: Accept-Encoding” response header.
  • gzip_types- Enables gzipping of responses for the specified MIME types in addition to “text/html”. Responses with the “text/html” type are always compressed.
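Since gzip_static is enabled, Nginx can skip on-the-fly compression entirely for assets that already have a compressed copy on disk. Below is a minimal sketch, assuming static files live under /var/www/static and that the .gz copies are produced at build time; the paths are illustrative, not part of the configuration above.

server {
    # /var/www/static/app.js and /var/www/static/app.js.gz sit side by side;
    # with gzip_static on, Nginx serves app.js.gz to clients that accept gzip
    # instead of re-compressing app.js on every request.
    location /static/ {
        root /var/www;
        gzip_static on;
    }
}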

2. Client-side Caching

If a website serves a lot of static content, we can optimize its performance by enabling client-side caching, where the browser stores copies of static content for quick access. Caching also reduces how often the same data has to be fetched from the server.

We can use a regex to identify the type of static content and use it with a location block to cache it. Nginx also provides a way to cache static content metadata via the open_file_cache directive.

location ~* \.(?:ico|gif|jpe?g|png|htc|xml|otf|ttf|eot|woff|woff2|svg)$ {
    expires 1d;
    access_log off;
    log_not_found off;
    add_header Cache-Control private;
    open_file_cache max=3000 inactive=120s;
    open_file_cache_valid 120s;
    open_file_cache_min_uses 4;
    open_file_cache_errors on;
}
location ~* \.(css|js|html)$ {
    expires 12h;
    access_log on;
    add_header Cache-Control public;
}
  • Cache-Control- An HTTP cache header made up of directives that define whether, how, and for how long a response should be cached.
  • Cache-Control: public- A response containing the public directive is allowed to be cached by any intermediate cache. This is usually not included in responses, as other directives already signify whether the response can be cached (e.g. max-age).
  • Cache-Control: private- Signifies that the response can only be cached by the browser that is accessing the file; intermediate caches are not allowed to store it.
  • open_file_cache- Stores metadata of commonly requested files and directories, not the actual file contents, so the performance gain from this kind of cache may not be dramatic.
  • open_file_cache_valid- Sets how long, usually in seconds, the entries cached by open_file_cache are considered valid; after this period, the information about files and directories is re-validated.
  • open_file_cache_min_uses- Nginx clears entries from open_file_cache after the period of inactivity set by its inactive parameter. This directive sets the minimum number of accesses during that period for an entry to stay in the cache, i.e. it identifies which files and directories are actively accessed.
  • open_file_cache_errors- Allows Nginx to cache file lookup errors such as “permission denied” or “can’t access this file”. Any time a resource is accessed by a user who does not have the right to do so, Nginx serves the same cached “permission denied” error. If we are using Nginx as a load balancer, leave this off.
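When many content types need different lifetimes, the two regex locations above can be collapsed into a single map keyed on the response Content-Type, since the expires directive accepts a variable in Nginx 1.7.9 and later. A minimal sketch; the variable name $static_expiry and the lifetimes are illustrative choices, not part of the configuration above.

map $sent_http_content_type $static_expiry {
    default                  off;   # dynamic responses: no Expires/Cache-Control added
    ~image/                  1d;    # any image/* type
    text/css                 12h;
    application/javascript   12h;
}

server {
    expires $static_expiry;
}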

3. Buffers

A buffer is temporary storage where data is held and processed for a short time. When Nginx receives a request, it writes the request into these buffers. The data in these buffers is available through Nginx variables such as $request_body.

server {
    client_body_buffer_size 16K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 4 8k;
    client_body_in_single_buffer on;
}
  • client_body_buffer_size- Sets the buffer size for the request body. If we plan to run the web server on a 64-bit system, we can set the value to 16k; on a 32-bit system, 8k is enough.
  • client_header_buffer_size- Allocates a buffer for reading the request header.
  • client_max_body_size- Sets the maximum allowed size of the request body; by default, it is 1m. If we intend to handle large file uploads, we need to set this directive to at least 2m or more (oversized uploads are rejected with a 413 error; see the sketch after this list).
  • large_client_header_buffers- Sets the maximum number and size of the buffers used to read large request headers.
  • client_body_in_single_buffer- Normally, part of the request body may be written to a temporary file instead of being kept in a buffer. If we want the entire request body stored in a single buffer, we need to enable this directive.
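When an upload exceeds client_max_body_size, Nginx rejects it with a 413 (Request Entity Too Large) response before the request ever reaches the application. Below is a minimal sketch of serving a friendlier page for that case; the /var/www/errors directory and the 413.html file are illustrative assumptions.

server {
    client_max_body_size 8m;

    # bodies larger than 8m are rejected with 413
    error_page 413 /413.html;

    location = /413.html {
        root /var/www/errors;   # assumed directory containing 413.html
        internal;               # only reachable via error_page, not by direct request
    }
}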

4. Timeout

Configure timeouts using directives such as keepalive_timeout and keepalive_requests to prevent idle connections from wasting resources.

http {
    keepalive_timeout 30s;
    keepalive_requests 30;
    send_timeout 30s;
}
  • keepalive_timeout- Sets how long an idle keep-alive connection stays open on the server side. After the timeout expires, the connection is closed (see the sketch after this list).
  • keepalive_requests- Sets the maximum number of requests that can be served through one keep-alive connection. After the maximum number of requests are made, the connection is closed.
  • send_timeout- Sets a timeout for transmitting a response to the client. The timeout is set only between two successive write operations, not for the transmission of the whole response. If the client does not receive anything within this time, the connection is closed.
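keepalive_timeout also accepts an optional second argument that is advertised to the client in a “Keep-Alive: timeout=...” response header. A minimal sketch; whether to advertise it, and the 25s value, are illustrative choices rather than part of the configuration above.

http {
    # first value: how long an idle keep-alive connection stays open on the server;
    # second value: sent to clients as "Keep-Alive: timeout=25"
    keepalive_timeout 30s 25s;
    keepalive_requests 30;
    send_timeout 30s;
}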

Security Configuration

1. Disable server_tokens

By default, the server_tokens directive makes Nginx display its version number on error pages and in the Server response header. This is not desirable, since sharing that information with the world invites attacks that exploit known vulnerabilities in that specific version.

To disable the server_tokens directive, set it to off inside a server block.

server {
    server_tokens off;
}

2. Limiting the Rate of Requests (Reduce Scraping / Attacks)

We can limit the rate at which Nginx accepts incoming requests to a value typical of real users.

The limit_req_zone directive defines the parameters for rate limiting, while limit_req enables rate limiting within the context where it appears (in the example below, for all requests to /sign-up/).

In the code below, each unique IP address is limited to 10 requests per second, with a burst of 5 additional requests allowed.

limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

server {
    location /sign-up/ {
        limit_req zone=one burst=5;
    }
}
  • The limit_req_zone directive is commonly defined in the http block, making it available for use in multiple contexts.
  • The limit_req_zone directive sets the parameters for rate-limiting and the shared memory zone, but it does not limit the request rate. For that, we need to apply the limit to a specific location or server block by including a limit_req directive there.
  • burst- Defines how many requests a client can make in excess of the rate specified by the zone; by default, these excess requests are delayed so that they still conform to the rate (see the sketch after this list).
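If bursts should be served immediately rather than queued, and a more descriptive status code is preferred over the default 503, burst can be combined with nodelay and limit_req_status. A minimal sketch; the zone name signup and the choice of 429 are illustrative assumptions.

limit_req_zone $binary_remote_addr zone=signup:10m rate=10r/s;

server {
    location /sign-up/ {
        # serve up to 5 burst requests immediately, then reject anything
        # above the rate instead of queuing it
        limit_req zone=signup burst=5 nodelay;

        # respond with 429 Too Many Requests instead of the default 503
        limit_req_status 429;
    }
}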

3. Limiting the Number of Connections (Reduce Scraping / Attacks)

We can limit the number of connections that can be opened by a unique IP address.

limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    location /something/ {
        limit_conn addr 10;
    }
}

The example above allows a maximum of 10 open connections per unique IP address.
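Connection limits pair well with bandwidth limits when the concern is a few clients monopolizing large downloads rather than sending many requests. A minimal sketch; the /downloads/ location and the 5m/500k thresholds are illustrative assumptions, not part of the configuration above.

limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    location /downloads/ {
        limit_conn addr 10;      # at most 10 concurrent connections per client IP
        limit_rate_after 5m;     # first 5 MB of each response at full speed
        limit_rate 500k;         # then throttle each connection to 500 KB/s
    }
}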

4. Terminate slow connections

We can make use of timeout directives such as client_body_timeout and client_header_timeout to control how long Nginx will wait for the client to send the request body and request headers.

Add the following inside the server section.

server {
    client_body_timeout 12s;
    client_header_timeout 12s;
}
  • client_body_timeout and client_header_timeout- They define how long the server will wait for the client to send the request body or the request header after the connection is made. If neither a body nor a header is sent in time, the server responds with a 408 (Request Timeout) error (a related directive is sketched below).
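A related directive, reset_timedout_connection, closes timed-out connections with a TCP reset so the memory associated with the socket is freed immediately. A minimal sketch combining it with the timeouts above; enabling it is an additional choice, not part of the original configuration.

server {
    client_body_timeout 12s;
    client_header_timeout 12s;

    # close timed-out connections with a reset so their socket memory
    # is released right away
    reset_timedout_connection on;
}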

5. Deny Connections From Bots/Attackers

Sometimes poor performance is caused by Internet bots hammering the server, often while probing for security bugs in our application code or in the server software itself.

We can block such clients with the following configuration.

map $http_user_agent $limit_bots {
    default 0;
    ~*(google|bing|yandex|msnbot) 1;
    ~*(AltaVista|Googlebot|Slurp|BlackWidow|Bot|ChinaClaw|Custo|DISCo|Download|Demon|eCatch|EirGrabber|EmailSiphon|EmailWolf|SuperHTTP|Surfbot|WebWhacker) 1;
    ~*(Express|WebPictures|ExtractorPro|EyeNetIE|FlashGet|GetRight|GetWeb!|Go!Zilla|Go-Ahead-Got-It|GrabNet|Grafula|HMView|Go!Zilla|Go-Ahead-Got-It) 1;
    ~*(rafula|HMView|HTTrack|Stripper|Sucker|Indy|InterGET|Ninja|JetCar|Spider|larbin|LeechFTP|Downloader|tool|Navroad|NearSite|NetAnts|tAkeOut|WWWOFFLE) 1;
    ~*(GrabNet|NetSpider|Vampire|NetZIP|Octopus|Offline|PageGrabber|Foto|pavuk|pcBrowser|RealDownload|ReGet|SiteSnagger|SmartDownload|SuperBot|WebSpider) 1;
    ~*(Teleport|VoidEYE|Collector|WebAuto|WebCopier|WebFetch|WebGo|WebLeacher|WebReaper|WebSauger|eXtractor|Quester|WebStripper|WebZIP|Wget|Widow|Zeus) 1;
    ~*(Twengabot|htmlparser|libwww|Python|perl|urllib|scan|Curl|email|PycURL|Pyth|PyQ|WebCollector|WebCopy|webcraw) 1;
}

server {
    location / {
        if ($limit_bots = 1) {
            return 403;
        }
    }
}

The default value of $limit_bots is 0. If a request’s User-Agent matches one of the bots above, the map sets the value to 1 and Nginx returns 403 for that request.
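Requests that send no User-Agent header at all are almost always scripts, and Nginx’s non-standard 444 code can be used to drop them without sending any response. A minimal sketch that complements the map above; whether to respond with 444 instead of 403 is a judgment call, not part of the original configuration.

server {
    location / {
        # an empty User-Agent is a strong signal of an automated client
        if ($http_user_agent = "") {
            return 444;   # nginx-specific: close the connection without a response
        }
    }
}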

6. Secure HTTP header

By default, Nginx does not send several important security headers, even though adding them is straightforward. Vulnerabilities such as clickjacking, cross-site scripting, and code injection attacks can be mitigated by implementing the necessary headers (a note on the always parameter follows the list below).

server {
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header Content-Security-Policy "default-src 'self';";
    add_header X-Permitted-Cross-Domain-Policies master-only;
    add_header Referrer-Policy same-origin;
    add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
}
  • X-Frame-Options- Indicates whether a browser should be allowed to render the page in a frame or iframe, which prevents site content from being embedded into other sites. With the SAMEORIGIN value, the page may only be displayed in a frame on the same origin as the page itself.
  • X-Content-Type-Options- Also called “browser sniffing protection”, this header tells the browser to follow the MIME type indicated in the Content-Type header and prevents it from sniffing a response away from the declared type. The only parameter we need to add is nosniff.
  • Content-Security-Policy- By implementing the Content Security Policy (CSP) header, we can prevent XSS, clickjacking, and code injection attacks. CSP instructs the browser to load only allowed content on the website. default-src 'self' tells the browser to load everything from the same origin; a script-src value would restrict only scripts to the same origin.
  • X-Permitted-Cross-Domain-Policies- Instructs clients how to handle requests across domains. By implementing this header, we restrict other domains from loading our site’s assets and avoid resource abuse.
  • Referrer-Policy- The Referer header identifies the address of the web page that requested the current page, so the destination site can see where the request originated. Referrer-Policy controls how much of that information the browser sends. With the same-origin value, the browser sends the origin, path, and query string for same-origin requests and sends no Referer header for cross-origin requests.
  • Strict-Transport-Security- The HSTS (HTTP Strict Transport Security) header ensures all communication from the browser is sent over HTTPS. It prevents HTTPS click-through prompts and makes the browser turn HTTP requests into HTTPS. Before implementing this header, we must make sure every page of our website is accessible over HTTPS, otherwise it will be blocked. In our example, 'max-age=31536000; includeSubDomains; preload' configures HSTS for one year, with preloading for the domain and its sub-domains.
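Note that add_header only applies to 2xx, 3xx, and a handful of other response codes by default; adding the always parameter (available since Nginx 1.7.5) makes a header appear on error responses as well. A minimal sketch; which headers to mark with always is a choice, not part of the configuration above.

server {
    # "always" makes Nginx send these headers on error responses (4xx/5xx) too,
    # not only on successful ones
    add_header X-Frame-Options SAMEORIGIN always;
    add_header X-Content-Type-Options nosniff always;
    add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload' always;
}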

In this article, I have explained some configurations to improve security and increase the performance of the Nginx web server. Do you already use something like this or have a different opinion altogether? Let me know in the response section.



Harsh Shah

A passionate Software Engineer with experience building web and mobile applications with Django, Laravel, Node.js, and some other cool libraries and frameworks.