Nginx
When configuring the Nginx server, the root directive plays a critical role by defining the base directory from which files are served. Consider the example below:
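A minimal sketch of such a setup (the try_files/proxy_pass lines and the proxied port are illustrative, not taken from a real deployment):

```
server {
    root /etc/nginx;

    location /hello.txt {
        try_files $uri $uri/ =404;
        proxy_pass http://127.0.0.1:8080/;
    }
}
```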
In this configuration, /etc/nginx is designated as the root directory. This setup allows access to files within the specified root directory, such as /hello.txt. However, it's crucial to note that only a specific location (/hello.txt) is defined. There's no configuration for the root location (location / {...}). This omission means that the root directive applies globally, enabling requests to the root path / to access files under /etc/nginx.
A critical security consideration arises from this configuration. A simple GET request, like GET /nginx.conf, could expose sensitive information by serving the Nginx configuration file located at /etc/nginx/nginx.conf. Setting the root to a less sensitive directory, like /etc, could mitigate this risk, yet it still may allow unintended access to other critical files, including other configuration files, access logs, and even encrypted credentials used for HTTP basic authentication.
In the configuration files of Nginx, a close inspection is warranted for the "location" directives. A vulnerability known as Local File Inclusion (LFI) can be inadvertently introduced through a configuration that resembles the following:
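A minimal sketch of the classic "off-by-slash" alias misconfiguration (directory names are illustrative): the location prefix /imgs has no trailing slash, while the alias target ends with one.

```
location /imgs {
    alias /path/images/;
}
```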
This configuration is prone to LFI attacks due to the server interpreting requests like /imgs../flag.txt as an attempt to access files outside the intended directory, effectively resolving to /path/images/../flag.txt. This flaw allows attackers to retrieve files from the server's filesystem that should not be accessible via the web.
To mitigate this vulnerability, the configuration should be adjusted to:
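A possible fix (sketch): make the location prefix end with a slash so that paths like /imgs../ no longer match it.

```
location /imgs/ {
    alias /path/images/;
}
```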
More info: https://www.acunetix.com/vulnerabilities/web/path-traversal-via-misconfigured-nginx-alias/
Acunetix tests:
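A blackbox probe for this off-by-slash issue could look roughly like the following (target host, location name and expected status codes are assumptions that depend on the actual setup):

```bash
curl -s -o /dev/null -w "%{http_code}\n" "http://target/imgs/readme.txt"  # baseline: a file inside the aliased dir
curl -s -o /dev/null -w "%{http_code}\n" "http://target/imgs../"          # 403 instead of 404 hints at traversal one level up
curl -s -o /dev/null -w "%{http_code}\n" "http://target/imgs../flag.txt"  # 200 would confirm access to /path/flag.txt
```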
Check the following page to learn how to bypass directives like:
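For instance, restrictions along these lines (the protected path is illustrative) can sometimes be bypassed with path-normalization tricks:

```
location = /admin {
    deny all;
}

location = /admin/ {
    deny all;
}
```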
Vulnerable variables are $uri and $document_uri; this can be fixed by replacing them with $request_uri.
A regex can also be vulnerable, for example:
location ~ /docs/([^/])? { … $1 … } - Vulnerable
location ~ /docs/([^/\s])? { … $1 … } - Not vulnerable (checking spaces)
location ~ /docs/(.*)? { … $1 … } - Not vulnerable
A vulnerability in Nginx configuration is demonstrated by the example below:
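A sketch of such a configuration, where the decoded $uri variable is reflected into a response header (the redirect target is illustrative):

```
location / {
    return 302 https://example.com$uri;
}
```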
The characters \r (Carriage Return) and \n (Line Feed) signify new line characters in HTTP requests, and their URL-encoded forms are represented as %0d%0a. Including these characters in a request (e.g., http://localhost/%0d%0aDetectify:%20clrf) to a misconfigured server results in the server issuing a new header named Detectify. This happens because the $uri variable decodes the URL-encoded new line characters, leading to an unexpected header in the response:
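The response could then look roughly like this (server banner and the other headers are illustrative):

```
HTTP/1.1 302 Moved Temporarily
Server: nginx
Location: https://example.com/
Detectify: clrf
```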
Learn more about the risks of CRLF injection and response splitting at https://blog.detectify.com/2019/06/14/http-response-splitting-exploitations-and-mitigations/.
This technique is also explained in this talk with some vulnerable examples and detection mechanisms. For example, in order to detect this misconfiguration from a blackbox perspective, you could use these requests:
https://example.com/%20X - Any HTTP code
https://example.com/%20H - 400 Bad Request
If vulnerable, the first request will return a response because "X" is treated as any HTTP method, while the second will return an error because "H" is not a valid method. In the second case the server will receive something like GET / H HTTP/1.1, which triggers the error.
Other detection examples would be:
http://company.tld/%20HTTP/1.1%0D%0AXXXX:%20x - Any HTTP code
http://company.tld/%20HTTP/1.1%0D%0AHost:%20x - 400 Bad Request
Some vulnerable configurations presented in that talk were:
Note how $uri is set as is in the final URL:
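A rough approximation of that configuration (location and backend names are assumptions):

```
location ^~ /lite/api/ {
    proxy_pass http://lite-backend$uri$is_args$args;
}
```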
Note how again $uri is in the URL (this time inside a parameter):
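A hypothetical sketch with the same shape (location, rewrite target and backend are illustrative): the decoded $uri ends up inside a query parameter of the proxied request.

```
location ~ /dna/payment {
    rewrite ^ /payment/api?native_uri=$uri break;
    proxy_pass http://backend;
}
```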
Now in AWS S3:
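A sketch (the bucket name is a placeholder):

```
location /s3/ {
    proxy_pass https://company-bucket.s3.amazonaws.com$uri;
}
```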
It was discovered that user-supplied data might be treated as an Nginx variable under certain circumstances. The cause of this behavior remains somewhat elusive, yet it's not rare nor straightforward to verify. This anomaly was highlighted in a security report on HackerOne, which can be viewed here. Further investigation into the error message led to the identification of its occurrence within the SSI filter module of Nginx's codebase, pinpointing Server Side Includes (SSI) as the root cause.
To detect this misconfiguration, the following command can be executed, which involves setting a referer header to test for variable printing:
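A check along these lines could be used (host is a placeholder); if the server expands $http_referer as a variable, "foo" plus the referer value "bar" shows up as "foobar" in the response:

```bash
curl -H 'referer: bar' 'http://localhost/foo$http_referer' | grep 'foobar'
```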
Scans for this misconfiguration across systems revealed multiple instances where Nginx variables could be printed by a user. However, a decrease in the number of vulnerable instances suggests that efforts to patch this issue have been somewhat successful.
Nginx offers a feature through proxy_pass that allows for the interception of errors and HTTP headers produced by the backend, aiming to hide internal error messages and headers. This is accomplished by Nginx serving custom error pages in response to backend errors. However, challenges arise when Nginx encounters an invalid HTTP request. Such a request gets forwarded to the backend as received, and the backend's raw response is then directly sent to the client without Nginx's intervention.
Consider an example scenario involving a uWSGI application:
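A minimal sketch of such a backend (the header name and body text are illustrative): a WSGI callable, e.g. run under uWSGI, that always answers with a 500 and a "secret" header.

```python
# Minimal WSGI app that returns a 500 error carrying a header the proxy is supposed to hide
def application(environ, start_response):
    if environ["REQUEST_METHOD"] == "GET":
        start_response("500 Error", [("Content-Type", "text/html"),
                                     ("Secret-Header", "secret-info")])
        return [b"Secret info, should not be visible!"]
    # any other method also gets a plain 500
    start_response("500 Error", [("Content-Type", "text/html")])
    return [b""]
```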
To manage this, specific directives in the Nginx configuration are used (see the configuration sketch after this list):
proxy_intercept_errors: This directive enables Nginx to serve a custom response for backend responses with a status code greater than 300. It ensures that, for our example uWSGI application, a 500 Error response is intercepted and handled by Nginx.
proxy_hide_header: As the name suggests, this directive hides specified HTTP headers from the client, enhancing privacy and security.
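A configuration sketch combining both directives (the error page path and the hidden header name are assumptions matching the example backend above):

```
http {
    error_page 500 /html/error.html;
    proxy_intercept_errors on;
    proxy_hide_header Secret-Header;
}
```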
When a valid GET request is made, Nginx processes it normally, returning a standard error response without revealing any secret headers. However, an invalid HTTP request bypasses this mechanism, resulting in the exposure of raw backend responses, including secret headers and error messages.
By default, Nginx's merge_slashes directive is set to on, which compresses multiple forward slashes in a URL into a single slash. This feature, while streamlining URL processing, can inadvertently conceal vulnerabilities in applications behind Nginx, particularly those prone to local file inclusion (LFI) attacks. Security experts Danny Robinson and Rotem Bar have highlighted the potential risks associated with this default behavior, especially when Nginx acts as a reverse-proxy.
To mitigate such risks, it is recommended to turn the merge_slashes directive off for applications susceptible to these vulnerabilities. This ensures that Nginx forwards requests to the application without altering the URL structure, thereby not masking any underlying security issues.
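That can be done in the http or server context:

```
merge_slashes off;
```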
For more information check Danny Robinson and Rotem Bar.
As shown in this writeup, there are certain headers that, if present in the response from the web server, will change the behaviour of the Nginx proxy. You can check them in the docs:
X-Accel-Redirect: Indicates Nginx should internally redirect a request to a specified location.
X-Accel-Buffering: Controls whether Nginx should buffer the response or not.
X-Accel-Charset: Sets the character set for the response when using X-Accel-Redirect.
X-Accel-Expires: Sets the expiration time for the response when using X-Accel-Redirect.
X-Accel-Limit-Rate: Limits the rate of transfer for responses when using X-Accel-Redirect.
For example, the header X-Accel-Redirect will cause an internal redirect in Nginx. So having an Nginx configuration with something such as root / and a response from the web server with X-Accel-Redirect: .env will make Nginx serve the content of /.env (Path Traversal).
In the Nginx configuration, the map directive often plays a role in authorization control. A common mistake is not specifying a default value, which could lead to unauthorized access. For instance:
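A sketch of such a setup (names and paths are illustrative): without a default entry, any unmatched URI leaves $mappocallow empty, so the string comparison with 0 never triggers the 403.

```
http {
    map $uri $mappocallow {
        /map-poc/allowed 1;
        # no "default" entry: unmatched URIs leave $mappocallow empty
    }

    server {
        location /map-poc {
            if ($mappocallow = 0) { return 403; }
            return 200 "Hello. It is map POC page";
        }
    }
}
```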
Without a default, a malicious user can bypass security by accessing an undefined URI within /map-poc. The Nginx manual advises setting a default value to avoid such issues.
DNS spoofing against Nginx is feasible under certain conditions. If an attacker knows the DNS server used by Nginx and can intercept its DNS queries, they can spoof DNS records. This method, however, is ineffective if Nginx is configured to use localhost (127.0.0.1) for DNS resolution. Nginx allows specifying a DNS server as follows:
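For example:

```
resolver 8.8.8.8;
```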
proxy_pass and internal Directives
The proxy_pass directive is utilized for redirecting requests to other servers, either internally or externally. The internal directive ensures that certain locations are only accessible within Nginx. While these directives are not vulnerabilities by themselves, their configuration requires careful examination to prevent security lapses.
If the Nginx server is configured to pass the Upgrade and Connection headers, an h2c Smuggling attack could be performed to access protected/internal endpoints.
This vulnerability would allow an attacker to establish a direct connection with the proxy_pass endpoint (http://backend:9999 in this case), whose content is not going to be checked by Nginx.
Example of vulnerable configuration to steal /flag from here:
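A sketch of such a configuration (hostnames, ports and certificate paths are illustrative): the Upgrade and Connection headers are forwarded to the backend, while /flag is supposedly protected by a deny rule.

```
server {
    listen 443 ssl;
    server_name localhost;

    ssl_certificate     /usr/local/nginx/conf/cert.pem;
    ssl_certificate_key /usr/local/nginx/conf/privkey.pem;

    location / {
        proxy_pass http://backend:9999;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
    }

    location /flag {
        deny all;
    }
}
```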
Note that even if proxy_pass was pointing to a specific path such as http://backend:9999/socket.io, the connection will be established with http://backend:9999, so you can contact any other path inside that internal endpoint. It doesn't matter if a path is specified in the URL of proxy_pass.
Detectify has created a GitHub repository where you can use Docker to set up your own vulnerable Nginx test server with some of the misconfigurations discussed in this article and try finding them yourself!
https://github.com/detectify/vulnerable-nginx
Gixy is a tool to analyze Nginx configuration. The main goal of Gixy is to prevent security misconfiguration and automate flaw detection.
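Typical usage (install via pip, then point it at a config file; the config path is an example):

```bash
pip install gixy
gixy /etc/nginx/nginx.conf
```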
Nginxpwner is a simple tool to look for common Nginx misconfigurations and vulnerabilities.