Nginx


Missing root location

Essentials of Configuring Nginx Root Directory

When configuring the Nginx server, the root directive plays a critical role by defining the base directory from which files are served. Consider the example below:

server {
        root /etc/nginx;

        location /hello.txt {
                try_files $uri $uri/ =404;
                proxy_pass http://127.0.0.1:8080/;
        }
}

In this configuration, /etc/nginx is designated as the root directory. This setup allows access to files within the specified root directory, such as /hello.txt. However, it's crucial to note that only a specific location (/hello.txt) is defined. There's no configuration for the root location (location / {...}). This omission means that the root directive applies globally, enabling requests to the root path / to access files under /etc/nginx.

A critical security consideration arises from this configuration. A simple GET request, like GET /nginx.conf, could expose sensitive information by serving the Nginx configuration file located at /etc/nginx/nginx.conf. Setting the root to a less sensitive directory, like /etc, could mitigate this risk, yet it still may allow unintended access to other critical files, including other configuration files, access logs, and even encrypted credentials used for HTTP basic authentication.
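A less risky sketch (the /var/www/html docroot is a hypothetical example, not taken from the original config) serves files from a dedicated directory and defines an explicit root location:

```nginx
server {
    # Hypothetical hardened layout: serve only from a dedicated docroot,
    # never from /etc or another directory holding configuration files
    root /var/www/html;

    location / {
        try_files $uri $uri/ =404;
    }

    location /hello.txt {
        try_files $uri =404;
    }
}
```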

Alias LFI Misconfiguration

In the configuration files of Nginx, a close inspection is warranted for the "location" directives. A vulnerability known as Local File Inclusion (LFI) can be inadvertently introduced through a configuration that resembles the following:

location /imgs { 
    alias /path/images/;
}

This configuration is prone to LFI attacks due to the server interpreting requests like /imgs../flag.txt as an attempt to access files outside the intended directory, effectively resolving to /path/images/../flag.txt. This flaw allows attackers to retrieve files from the server's filesystem that should not be accessible via the web.
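The path arithmetic behind this traversal can be sketched in Python, using the location and alias values from the snippet above:

```python
import posixpath

# Nginx strips the matched location prefix and appends the remainder of
# the request path to the alias target
location = "/imgs"             # no trailing slash: "/imgs../" still matches
alias = "/path/images/"
requested = "/imgs../flag.txt"

mapped = alias + requested[len(location):]   # "/path/images/../flag.txt"
resolved = posixpath.normpath(mapped)        # collapses the ".." component

print(resolved)  # -> /path/flag.txt
```

Adding the trailing slash to the location (as in the fix below) makes `/imgs../` no longer match the prefix, so the traversal never reaches the alias.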

To mitigate this vulnerability, the configuration should be adjusted to:

location /imgs/ { 
    alias /path/images/;
}

More info: https://www.acunetix.com/vulnerabilities/web/path-traversal-via-misconfigured-nginx-alias/

Acunetix tests:

  • alias../ => HTTP status code 403

  • alias.../ => HTTP status code 404

  • alias../../ => HTTP status code 403

  • alias../../../../../../../../../../../ => HTTP status code 400

Unsafe path restriction

Check the following page to learn how to bypass directives like:

location = /admin {
    deny all;
}

location = /admin/ {
    deny all;
}
Proxy / WAF Protections Bypass

Unsafe variable use / HTTP Request Splitting

The variables $uri and $document_uri are vulnerable because they hold the decoded URI; this can be fixed by replacing them with $request_uri, which keeps the original encoding.

A regex can also be vulnerable like:

  • location ~ /docs/([^/])? { … $1 … } - Vulnerable

  • location ~ /docs/([^/\s])? { … $1 … } - Not vulnerable (checks for spaces)

  • location ~ /docs/(.*)? { … $1 … } - Not vulnerable

A vulnerability in Nginx configuration is demonstrated by the example below:

location / {
  return 302 https://example.com$uri;
}

The characters \r (Carriage Return) and \n (Line Feed) signify new line characters in HTTP requests, and their URL-encoded forms are represented as %0d%0a. Including these characters in a request (e.g., http://localhost/%0d%0aDetectify:%20clrf) to a misconfigured server results in the server issuing a new header named Detectify. This happens because the $uri variable decodes the URL-encoded new line characters, leading to an unexpected header in the response:

HTTP/1.1 302 Moved Temporarily
Server: nginx/1.19.3
Content-Type: text/html
Content-Length: 145
Connection: keep-alive
Location: https://example.com/
Detectify: clrf
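The decoding step can be illustrated in Python (the host and header name mirror the example request above):

```python
from urllib.parse import unquote

# $uri holds the URL-decoded path, so %0d%0a become raw CR/LF bytes
path = "/%0d%0aDetectify:%20clrf"
uri = unquote(path)

# The misconfigured redirect concatenates the decoded path into the header
location_header = "Location: https://example.com" + uri
print(repr(location_header))
```

The raw `\r\n` inside the header value is what splits the response and produces the injected `Detectify: clrf` header.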

Learn more about the risks of CRLF injection and response splitting at https://blog.detectify.com/2019/06/14/http-response-splitting-exploitations-and-mitigations/.
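A sketch of the fixed redirect, applying the $request_uri replacement mentioned earlier:

```nginx
location / {
    # $request_uri is the original, still URL-encoded request URI, so
    # %0d%0a reaches the Location header encoded rather than as raw CR/LF
    return 302 https://example.com$request_uri;
}
```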

This technique is also explained in this talk, with some vulnerable examples and detection mechanisms. For example, in order to detect this misconfiguration from a blackbox perspective, you could use these requests:

  • https://example.com/%20X - Any HTTP code

  • https://example.com/%20H - 400 Bad Request

If vulnerable, the first will get a response, as "X" can be any HTTP method, while the second will return an error because H is not a valid one. The server receives something like GET / H HTTP/1.1, which triggers the error.

Other detection examples would be:

  • http://company.tld/%20HTTP/1.1%0D%0AXXXX:%20x - Any HTTP code

  • http://company.tld/%20HTTP/1.1%0D%0AHost:%20x - 400 Bad Request

Some found vulnerable configurations presented in that talk were:

  • Note how $uri is set as is in the final URL

location ^~ /lite/api/ {
 proxy_pass http://lite-backend$uri$is_args$args;
}
  • Note how again $uri is in the URL (this time inside a parameter)

location ~ ^/dna/payment {
 rewrite ^/dna/([^/]+) /registered/main.pl?cmd=unifiedPayment&context=$1&native_uri=$uri break;
 proxy_pass http://$back;
}
  • Now in AWS S3

location /s3/ {
 proxy_pass https://company-bucket.s3.amazonaws.com$uri;
}

Any variable

It was discovered that user-supplied data might be treated as an Nginx variable under certain circumstances. The cause of this behavior remains somewhat elusive, yet it is neither rare nor straightforward to verify. This anomaly was highlighted in a security report on HackerOne, which can be viewed here. Further investigation into the error message led to the identification of its occurrence within the SSI filter module of Nginx's codebase, pinpointing Server Side Includes (SSI) as the root cause.

To detect this misconfiguration, the following command can be executed, which involves setting a referer header to test for variable printing:

$ curl -H 'Referer: bar' http://localhost/foo$http_referer | grep 'foobar'

Scans for this misconfiguration across systems revealed multiple instances where Nginx variables could be printed by a user. However, a decrease in the number of vulnerable instances suggests that efforts to patch this issue have been somewhat successful.

Raw backend response reading

Nginx offers a feature through proxy_pass that allows for the interception of errors and HTTP headers produced by the backend, aiming to hide internal error messages and headers. This is accomplished by Nginx serving custom error pages in response to backend errors. However, challenges arise when Nginx encounters an invalid HTTP request. Such a request gets forwarded to the backend as received, and the backend's raw response is then directly sent to the client without Nginx's intervention.

Consider an example scenario involving a uWSGI application:

def application(environ, start_response):
    start_response('500 Error', [('Content-Type', 'text/html'), ('Secret-Header', 'secret-info')])
    return [b"Secret info, should not be visible!"]

To manage this, specific directives in the Nginx configuration are used:

http {
    error_page 500 /html/error.html;
    proxy_intercept_errors on;
    proxy_hide_header Secret-Header;
}
  • proxy_intercept_errors: This directive enables Nginx to serve a custom response for backend responses with a status code of 300 or greater. It ensures that, for our example uWSGI application, a 500 Error response is intercepted and handled by Nginx.

  • proxy_hide_header: As the name suggests, this directive hides specified HTTP headers from the client, enhancing privacy and security.

When a valid GET request is made, Nginx processes it normally, returning a standard error response without revealing any secret headers. However, an invalid HTTP request bypasses this mechanism, resulting in the exposure of raw backend responses, including secret headers and error messages.
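One way to probe this, sketched below with a raw socket (the invalid HTTP version token "XTTP/1.1" and the target host are illustrative assumptions, not taken from the original), is to send a request nginx cannot parse so it is relayed to the backend verbatim:

```python
import socket

# A request line with an invalid HTTP version is one example of a request
# nginx rejects as malformed and therefore does not post-process
malformed = (
    b"GET /? XTTP/1.1\r\n"
    b"Host: target.example\r\n"
    b"Connection: close\r\n\r\n"
)

def probe(host: str, port: int) -> bytes:
    """Send the malformed request and return the raw backend response."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(malformed)
        return s.recv(4096)
```

If the backend answers such a request, its raw response (including any hidden headers) comes back without nginx applying error_page or proxy_hide_header.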

merge_slashes set to off

By default, Nginx's merge_slashes directive is set to on, which compresses multiple forward slashes in a URL into a single slash. This feature, while streamlining URL processing, can inadvertently conceal vulnerabilities in applications behind Nginx, particularly those prone to local file inclusion (LFI) attacks. Security experts Danny Robinson and Rotem Bar have highlighted the potential risks associated with this default behavior, especially when Nginx acts as a reverse-proxy.

To mitigate such risks, it is recommended to turn the merge_slashes directive off for applications susceptible to these vulnerabilities. This ensures that Nginx forwards requests to the application without altering the URL structure, thereby not masking any underlying security issues.
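A hypothetical placement of the directive (server name and backend address are made up for illustration):

```nginx
server {
    listen 80;
    server_name app.example;
    merge_slashes off;  # forward multiple slashes to the backend unmodified

    location / {
        proxy_pass http://backend:8080;
    }
}
```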

For more information, check the research by Danny Robinson and Rotem Bar.

Default Value in Map Directive

In the Nginx configuration, the map directive often plays a role in authorization control. A common mistake is not specifying a default value, which could lead to unauthorized access. For instance:

http {
    map $uri $mappocallow {
        /map-poc/private 0;
        /map-poc/secret 0;
        /map-poc/public 1;
    }

    server {
        location /map-poc {
            if ($mappocallow = 0) { return 403; }
            return 200 "Hello. It is private area: $mappocallow";
        }
    }
}

Without a default, a malicious user can bypass security by accessing an undefined URI within /map-poc. The Nginx manual advises setting a default value to avoid such issues.
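A sketch of the same map block with the explicit default the manual recommends, so unknown URIs are denied:

```nginx
map $uri $mappocallow {
    default          0;  # anything not listed below is treated as private
    /map-poc/private 0;
    /map-poc/secret  0;
    /map-poc/public  1;
}
```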

DNS Spoofing Vulnerability

DNS spoofing against Nginx is feasible under certain conditions. If an attacker knows the DNS server used by Nginx and can intercept its DNS queries, they can spoof DNS records. This method, however, is ineffective if Nginx is configured to use localhost (127.0.0.1) for DNS resolution. Nginx allows specifying a DNS server as follows:

resolver 8.8.8.8;
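If an external resolver is not strictly needed, pointing the directive at a local resolver (as noted above) keeps the DNS queries off the network:

```nginx
resolver 127.0.0.1;  # queries never leave the host, so they cannot be intercepted
```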

proxy_pass and internal Directives

The proxy_pass directive is utilized for redirecting requests to other servers, either internally or externally. The internal directive ensures that certain locations are only accessible within Nginx. While these directives are not vulnerabilities by themselves, their configuration requires careful examination to prevent security lapses.

proxy_set_header Upgrade & Connection

If the nginx server is configured to pass the Upgrade and Connection headers an h2c Smuggling attack could be performed to access protected/internal endpoints.

This vulnerability would allow an attacker to establish a direct connection with the proxy_pass endpoint (http://backend:9999 in this case), whose content is not going to be checked by nginx.

Example of vulnerable configuration to steal /flag from here:

server {
    listen       443 ssl;
    server_name  localhost;

    ssl_certificate       /usr/local/nginx/conf/cert.pem;
    ssl_certificate_key   /usr/local/nginx/conf/privkey.pem;

    location / {
     proxy_pass http://backend:9999;
     proxy_http_version 1.1;
     proxy_set_header Upgrade $http_upgrade;
     proxy_set_header Connection $http_connection;
    }

    location /flag {
     deny all;
    }
}

Note that even if the proxy_pass was pointing to a specific path such as http://backend:9999/socket.io, the connection will be established with http://backend:9999, so you can contact any other path inside that internal endpoint. It doesn't matter if a path is specified in the URL of proxy_pass.
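A less exposed sketch, assuming the backend only needs WebSocket upgrades on a single path (the /socket.io/ location is an assumption for illustration), scopes the upgrade forwarding narrowly and hardcodes the Connection value instead of echoing the client's header:

```nginx
location /socket.io/ {
    proxy_pass http://backend:9999/socket.io/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";  # fixed value, not client-controlled
}
```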

Try it yourself

Detectify has created a GitHub repository where you can use Docker to set up your own vulnerable Nginx test server with some of the misconfigurations discussed in this article and try finding them yourself!

https://github.com/detectify/vulnerable-nginx

Static Analyzer tools

Gixy is a tool to analyze Nginx configuration. The main goal of Gixy is to prevent security misconfiguration and automate flaw detection.

Nginxpwner is a simple tool to look for common Nginx misconfigurations and vulnerabilities.
