Nginx Rate Limiting with HTTP/2 on CloudPanel, and for general Nginx and WordPress

One annoyance of running a publicly accessible WordPress site is the bots that rapidly fire thousands of login attempts at /wp-login.php.

Even though none of the guesses is ever likely to succeed, the site still wastes resources running PHP and SQL queries to confirm that.

More description at the bottom…

This also applies when running behind a reverse proxy.
You will need to adapt the paths and includes to your NGINX structure in CloudPanel. Mine is customized; you can ask in the comments how to achieve this in your specific setup.

A barrier to these drive-by hack attempts can be added using nginx’s limit_req module (ngx_http_limit_req_module), with rate limiting applied only to POST requests for the login page so the rest of the site is unaffected.

  1. In /etc/nginx/conf.d/login-limit.conf we create the zone LOGINLIMIT.
    1m is the size of the shared memory zone used to track client state, and rate=15r/m limits each client to 15 requests per minute (i.e. 1 every 4 seconds).

    map $request_method $posting_id {
      default "";
      POST $binary_remote_addr;
    }
    
    limit_req_zone $posting_id zone=LOGINLIMIT:1m rate=15r/m;
    
  2. Add a reusable configuration snippet to /etc/nginx/snippets/wordpress-login-limit.conf

    location ~ /wp-login\.php$ {
      # Throttle login attempts with the LOGINLIMIT zone; excess POSTs get HTTP 429
      limit_req zone=LOGINLIMIT;
      limit_req_status 429;
      include /etc/nginx/fastcgi.conf;
      fastcgi_pass unix:/run/php/php-fpm.sock;
      fastcgi_param SCRIPT_FILENAME $document_root/wp-login.php;
    }
    
  3. Call it from the virtual host configuration in, e.g., /etc/nginx/sites-available/example.com

    server {
      ...
      include /etc/nginx/snippets/wordpress-login-limit.conf;
      ...
    }
    

We use $binary_remote_addr as the key for lookups, but other keys could be used, such as the X-Forwarded-For header.
Limits don’t even have to be based on IP address; other variables, e.g. $geoip_country_code, could be used creatively if appropriate.
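
For instance, behind a reverse proxy a zone could be keyed on the forwarded client address rather than $binary_remote_addr. A minimal sketch, reusing the POST-only map idea from above; the zone name PROXYLIMIT is a hypothetical placeholder:

map $request_method $proxied_poster {
  default "";
  POST $http_x_forwarded_for;
}

# Only trust X-Forwarded-For when it is set by your own proxy.
limit_req_zone $proxied_poster zone=PROXYLIMIT:1m rate=15r/m;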

(Restricting checks to POST requests was adapted from a Reverb.com blog post).

# These settings enable HTTP/2 in Nginx and apply rate limiting
# Set global rate limiting log level
limit_req_log_level warn;

# Create a shared memory zone for rate limiting
limit_req_zone $binary_remote_addr zone=global:10m rate=10r/s;

# Server block for the main domain
server {
  listen 443 ssl http2 reuseport backlog=4096;
  listen [::]:443 ssl http2 reuseport backlog=4096;
  server_name example.com;

  ssl_certificate /path/to/domain/certificate.crt;
  ssl_certificate_key /path/to/domain/private-key.key;

  # Enable rate limiting for the entire server
  limit_req zone=global;

  # Location block for the login page
  location /login/ {
    # Apply stricter rate limiting for the login page
    limit_req zone=global burst=5 nodelay;

    # ...
  }

  # Location block for static content
  location /static/ {
    # Allow higher rate limits for static content
    limit_req zone=global burst=10 nodelay;

    # ...
  }

  # Default location block
  location / {
    # ...
  }
}

# Server block for a subdomain
# If Nginx reports an error about backlog=4096 or reuseport, remove them here: these
# listen options may only be set once per address:port, so keep them in a single server block

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name subdomain.example.com;

  # Enable rate limiting for the entire subdomain server
  limit_req zone=global;

  # Location block for the subdomain's login page
  location /login/ {
    # Apply stricter rate limiting for the subdomain's login page
    limit_req zone=global burst=5 nodelay;

    # ...
  }

  # Location block for the subdomain's static content
  location /static/ {
    # Allow higher rate limits for subdomain's static content
    limit_req zone=global burst=10 nodelay;

    # ...
  }

  # Default location block
  location / {
    # ...
  }
}

Nginx Rate Limiting: Managing Traffic Flow for Optimal Performance

Nginx, a versatile and high-performance web server, implements rate limiting to control the flow of requests from clients, ensuring optimal server performance and preventing abuse or malicious attacks. This mechanism is particularly crucial for handling high-traffic applications and protecting against denial-of-service (DoS) attacks.

The Leaky Bucket Algorithm: The Foundation of Nginx Rate Limiting

Nginx employs the leaky bucket algorithm, a simple yet effective technique for regulating the rate at which requests are processed. This algorithm simulates the behavior of a leaky bucket, where water (representing requests) is poured into the bucket at a certain rate, and water (processed requests) leaks out at a controlled rate.

The leaky bucket algorithm operates in three main stages:

  1. Bucket Creation: A bucket with a defined capacity is established. This capacity determines the maximum number of requests that can be stored in the queue at any given time.
  2. Request Handling: Requests from clients are added to the bucket. If the bucket is full, new requests are discarded, preventing the queue from overflowing and overwhelming the server.
  3. Request Processing: Requests are processed from the bucket at a specified rate, ensuring that the server doesn’t get overwhelmed by a sudden surge of requests.
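
In nginx terms, the shared memory zone holds the bucket state, the rate parameter is the leak rate, and the burst parameter is the bucket capacity. A minimal sketch with illustrative, assumed values (the zone name bucket_demo and the hostname are placeholders):

# The "bucket": per-IP state in 10 MB of shared memory, leaking at 5 requests per second.
limit_req_zone $binary_remote_addr zone=bucket_demo:10m rate=5r/s;

server {
  listen 80;
  server_name demo.example;

  location / {
    # Bucket capacity: up to 5 excess requests may queue; anything beyond that is rejected.
    limit_req zone=bucket_demo burst=5;
  }
}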

Key Components of Nginx Rate Limiting

Nginx rate limiting configuration involves several key components:

Rate Limit: The rate at which requests are processed, typically defined in requests per second (RPS) or requests per minute (RPM).

Zone: A shared memory area where rate limiting information is stored. This allows for consistent rate limiting across multiple Nginx workers.

Log Level: The severity level of rate limiting logs, ensuring that only important messages are recorded.
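
A fragment tying these three components together (the zone name per_ip and the values are assumptions for illustration):

# Zone: 10 MB of shared memory, visible to all worker processes, keyed on client IP.
# Rate limit: 10 requests per second per client.
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

# Log level: rejected requests are logged at warn level (delayed ones one level lower).
limit_req_log_level warn;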

Benefits of Nginx Rate Limiting

Nginx rate limiting offers several advantages:

Prevents Abuse: Rate limiting thwarts malicious attempts to flood the server with requests, protecting against DoS attacks and resource exhaustion.

Manages Resources: Rate limiting ensures that server resources are allocated efficiently, preventing a single client from monopolizing resources and affecting other users’ experience.

Enhances Security: Rate limiting can hinder brute-force attacks, password guessing attempts, and other unauthorized activities that aim to compromise security.

Improves Performance: By regulating the flow of requests, rate limiting prevents the server from becoming overloaded, ensuring optimal performance and responsiveness for all users.

Implementing Nginx Rate Limiting

Nginx rate limiting is configured with the limit_req_zone and limit_req directives in nginx.conf (or a file it includes). limit_req_zone defines the key, the shared memory zone, and the rate; limit_req applies that zone to a server or location, optionally with burst and nodelay; limit_req_log_level and limit_req_status control logging and the status code returned for rejected requests.
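
A minimal, self-contained sketch of where these directives sit in nginx.conf (the upstream address, hostname, and values are placeholders I have assumed):

events {}

http {
  limit_req_zone $binary_remote_addr zone=api:10m rate=20r/s;
  limit_req_status 429;   # respond with 429 instead of the default 503 when limiting

  server {
    listen 80;
    server_name api.example;

    location / {
      # Permit short spikes of up to 20 extra requests and serve them without delay.
      limit_req zone=api burst=20 nodelay;
      proxy_pass http://127.0.0.1:8080;
    }
  }
}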

Conclusion

Nginx rate limiting is a robust tool for regulating traffic flow, safeguarding against abuse, and optimizing server performance. Implementing rate limiting shields your Nginx servers from malicious attacks, enhances resource usage, and delivers a stable, responsive experience for all users.

The ‘listen’ directives above carry the ‘ssl http2’ parameters, so these server blocks accept TLS connections with HTTP/2 enabled. Note that clients may still negotiate HTTP/1.1 on the same listener via ALPN, and the limit_req settings apply to every request these blocks handle, regardless of protocol version.
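
On nginx 1.25.1 and later the http2 parameter of listen is deprecated in favour of a standalone directive, so the equivalent declaration would be roughly:

server {
  listen 443 ssl;
  listen [::]:443 ssl;
  http2 on;
  server_name example.com;
  # ... certificates and limit_req settings as in the blocks above ...
}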


Nginx rate limiting works a little unexpectedly: you specify the limit in requests per second, but it doesn’t enforce it per whole second. Instead, it converts the rate into a minimum spacing between requests, tracked at fine-grained, sub-second intervals.

A rate limit of 10 requests per second is therefore effectively 1 request per 0.1 seconds. This means that if 2 requests arrive within the same 0.1 seconds, the second one is rejected (unless a burst is allowed).

To counteract this, it’s best to enable a burst (as we do with webapp_waf), permitting, for example, a burst of 10 requests. The first 10 excess requests are then accepted, and only requests beyond that are limited if the client keeps arriving faster than the configured spacing.
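
As a sketch of that advice (the zone name and values below are my own illustrative assumptions, not the actual webapp_waf configuration):

limit_req_zone $binary_remote_addr zone=login_demo:10m rate=10r/s;

server {
  listen 80;
  server_name demo.example;

  location /login/ {
    # rate=10r/s enforces roughly one request per 100 ms; burst=10 lets a short
    # spike of up to 10 extra requests through, and nodelay serves them at once
    # instead of queueing them to the 100 ms spacing.
    limit_req zone=login_demo burst=10 nodelay;
  }
}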
