80,443 - Pentesting Web Methodology
Learn & practice AWS Hacking: HackTricks Training AWS Red Team Expert (ARTE) Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
Get a hacker's perspective on your web apps, network, and cloud
Find and report critical, exploitable vulnerabilities with real business impact. Use our 20+ custom tools to map the attack surface, find security issues that let you escalate privileges, and use automated exploits to collect essential evidence, turning your hard work into persuasive reports.
The web service is the most common and extensive service, and many different types of vulnerabilities exist in it.
Default port: 80 (HTTP), 443 (HTTPS)
In this methodology we are going to suppose that you are going to attack a domain (or subdomain) and only that. So, you should apply this methodology to each discovered domain, subdomain or IP with an undetermined web server inside the scope.
Launch general purpose scanners. You never know if they are going to find something or if they are going to find some interesting information.
Start with the initial checks: robots, sitemap, 404 error and SSL/TLS scan (if HTTPS).
Start spidering the web page: It's time to find all the possible files, folders and parameters being used. Also, check for special findings.
Note that anytime a new directory is discovered during brute-forcing or spidering, it should be spidered.
Directory Brute-Forcing: Try to brute force all the discovered folders searching for new files and directories.
Note that anytime a new directory is discovered during brute-forcing or spidering, it should be Brute-Forced.
Backups checking: Test if you can find backups of discovered files appending common backup extensions.
Brute-Force parameters: Try to find hidden parameters.
Once you have identified all the possible endpoints accepting user input, check for all kind of vulnerabilities related to it.
Check if there are known vulnerabilities for the server version that is running. The HTTP headers and cookies of the response could be very useful to identify the technologies and/or version being used. An Nmap scan can identify the server version, but the tools whatweb, webtech or https://builtwith.com/ could also be useful:
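As a minimal sketch of the idea, the identifying headers can be pulled straight out of a response. The sample below uses canned data; against a live target you would feed it the output of curl -sI instead:

```shell
# Sample response headers (in practice: curl -sI http://target > headers.txt)
cat > headers.txt <<'EOF'
HTTP/1.1 200 OK
Server: Apache/2.4.41 (Ubuntu)
X-Powered-By: PHP/7.4.3
Set-Cookie: PHPSESSID=abc123; path=/
EOF

# Headers and cookie names that usually leak the technology and/or version
grep -iE '^(Server|X-Powered-By|X-AspNet-Version|Set-Cookie):' headers.txt
```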
Search for vulnerabilities of the web application version
Some tricks for finding vulnerabilities in different well known technologies being used:
Take into account that the same domain can be using different technologies in different ports, folders and subdomains. If the web application is using any well known tech/platform listed before or any other, don't forget to search on the Internet new tricks (and let me know!).
If the source code of the application is available on GitHub, apart from performing your own White box test of the application, there is some information that could be useful for the current Black-Box testing:
Is there a Change-log or Readme or Version file or anything with version info accessible via web?
How and where are credentials saved? Is there any (accessible?) file with credentials (usernames or passwords)?
Are passwords in plain text or encrypted, and which hashing algorithm is used?
Is it using any master key for encrypting something? Which algorithm is used?
Can you access any of these files exploiting some vulnerability?
Is there any interesting information in the GitHub issues (solved and unsolved)? Or in the commit history (maybe a password introduced in an old commit)?
If a CMS is used don't forget to run a scanner, maybe something juicy is found:
Clusterd: JBoss, ColdFusion, WebLogic, Tomcat, Railo, Axis2, Glassfish
CMSScan: WordPress, Drupal, Joomla, vBulletin websites for security issues. (GUI)
VulnX: Joomla, WordPress, Drupal, PrestaShop, OpenCart
CMSMap: (W)ordpress, (J)oomla, (D)rupal or (M)oodle
droopescan: Drupal, Joomla, Moodle, Silverstripe, WordPress
At this point you should already have some information about the web server being used by the client (if any data was given) and some tricks to keep in mind during the test. If you are lucky, you have even found a CMS and run some scanner.
From this point we are going to start interacting with the web application.
Default pages with interesting info:
/robots.txt
/sitemap.xml
/crossdomain.xml
/clientaccesspolicy.xml
/.well-known/
Check also comments in the main and secondary pages.
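The default paths above can be enumerated with a quick loop. example.com is a placeholder; the curl line is commented out so the sketch stays self-contained, and you would uncomment it to actually probe a target:

```shell
target="http://example.com"   # placeholder target
for p in robots.txt sitemap.xml crossdomain.xml clientaccesspolicy.xml .well-known/; do
  echo "$target/$p"
  # curl -s -o /dev/null -w "%{http_code}  $target/$p\n" "$target/$p"
done
```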
Forcing errors
Web servers may behave unexpectedly when weird data is sent to them. This may open vulnerabilities or disclose sensitive information.
Access fake pages like /whatever_fake.php (.aspx, .html, etc.)
Add "[]", "]]", and "[[" in cookie values and parameter values to create errors
Generate errors by giving input such as /~randomthing/%s at the end of the URL
Try different HTTP verbs like PATCH, DEBUG or wrong ones like FAKE
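The checks above can be scripted as follows. example.com is a placeholder and the curl lines are commented out so the sketch runs locally; uncomment them to probe a real target:

```shell
base="http://example.com"   # placeholder target
# Fake pages and error-provoking paths from the list above
for p in /whatever_fake.php /whatever_fake.aspx /whatever_fake.html '/~randomthing/%s'; do
  echo "$base$p"
  # curl -s -o /dev/null -w "%{http_code}  $base$p\n" "$base$p"
done
# Unusual or invalid HTTP verbs
for verb in PATCH DEBUG FAKE; do
  echo "$verb $base/"
  # curl -s -X "$verb" -o /dev/null -w "%{http_code}  $verb\n" "$base/"
done
```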
If you find that WebDav is enabled but you don't have enough permissions for uploading files in the root folder try to:
Brute Force credentials
Upload files via WebDav to the rest of the found folders inside the web page. You may have permissions to upload files in other folders.
If the application isn't forcing the use of HTTPS in any part, then it's vulnerable to MitM.
If the application is sending sensitive data (passwords) using HTTP, then it's a high severity vulnerability.
Use testssl.sh to check for vulnerabilities (in Bug Bounty programs these kinds of vulnerabilities probably won't be accepted) and use a2sv to recheck the vulnerabilities:
Information about SSL/TLS vulnerabilities:
Launch some kind of spider inside the web. The goal of the spider is to find as many paths as possible from the tested application. Therefore, web crawling and external sources should be used to find as many valid paths as possible.
gospider (go): HTML spider, LinkFinder in JS files and external sources (Archive.org, CommonCrawl.org, VirusTotal.com, AlienVault.com).
hakrawler (go): HTML spider, with LinkFinder for JS files and Archive.org as external source.
dirhunt (python): HTML spider, also indicates "juicy files".
evine (go): Interactive CLI HTML spider. It also searches in Archive.org
meg (go): This tool isn't a spider but it can be useful. You can just indicate a file with hosts and a file with paths and meg will fetch each path on each host and save the response.
urlgrab (go): HTML spider with JS rendering capabilities. However, it looks like it's unmaintained, the precompiled version is old and the current code doesn't compile
gau (go): HTML spider that uses external providers (wayback, otx, commoncrawl)
ParamSpider: This script will find URLs with parameter and will list them.
galer (go): HTML spider with JS rendering capabilities.
LinkFinder (python): HTML spider, with JS beautify capabilities, capable of searching for new paths in JS files. It could also be worth taking a look at JSScanner, which is a wrapper of LinkFinder.
goLinkFinder (go): To extract endpoints in both HTML source and embedded javascript files. Useful for bug hunters, red teamers, infosec ninjas.
JSParser (python2.7): A python 2.7 script using Tornado and JSBeautifier to parse relative URLs from JavaScript files. Useful for easily discovering AJAX requests. Looks unmaintained.
relative-url-extractor (ruby): Given a file (HTML) it will extract URLs from it using nifty regular expressions to find and extract the relative URLs from ugly (minified) files.
JSFScan (bash, several tools): Gather interesting information from JS files using several tools.
subjs (go): Find JS files.
page-fetch (go): Load a page in a headless browser and print out all the URLs loaded to render the page.
Feroxbuster (rust): Content discovery tool mixing several options of the previous tools
Javascript Parsing: A Burp extension to find path and params in JS files.
Sourcemapper: A tool that, given the .js.map URL, will get you the beautified JS code.
xnLinkFinder: This is a tool used to discover endpoints for a given target.
waymore: Discover links from the Wayback Machine (also downloading the responses in the Wayback Machine and looking for more links).
HTTPLoot (go): Crawl (even by filling forms) and also find sensitive info using specific regexes.
SpiderSuite: Spider Suite is an advanced multi-feature GUI web security crawler/spider designed for cyber security professionals.
jsluice (go): It's a Go package and command-line tool for extracting URLs, paths, secrets, and other interesting data from JavaScript source code.
ParaForge: ParaForge is a simple Burp Suite extension to extract the parameters and endpoints from requests to create a custom wordlist for fuzzing and enumeration.
katana (go): Awesome tool for this.
Crawley (go): Print every link it's able to find.
Start brute-forcing from the root folder and be sure to brute-force all the directories found using this method and all the directories discovered by the Spidering (you can do this brute-forcing recursively and appending at the beginning of the used wordlist the names of the found directories). Tools:
Dirb / Dirbuster - Included in Kali, old (and slow) but functional. Allows self-signed certificates and recursive search. Too slow compared with the other options.
Dirsearch (python): It doesn't allow self-signed certificates but allows recursive search.
Gobuster (go): It allows self-signed certificates, but it doesn't have recursive search.
Feroxbuster - Fast, supports recursive search.
wfuzz: wfuzz -w /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt https://domain.com/api/FUZZ
ffuf - Fast: ffuf -c -w /usr/share/wordlists/dirb/big.txt -u http://10.10.10.10/FUZZ
uro (python): This isn't a spider but a tool that, given the list of found URLs, will delete "duplicated" URLs.
Scavenger: Burp Extension to create a list of directories from the burp history of different pages
TrashCompactor: Remove URLs with duplicated functionalities (based on js imports)
Chamaleon: It uses wappalyzer to detect used technologies and select the wordlists to use.
Recommended dictionaries:
https://github.com/danielmiessler/SecLists/tree/master/Discovery/Web-Content
raft-large-directories-lowercase.txt
directory-list-2.3-medium.txt
RobotsDisallowed/top10000.txt
/usr/share/wordlists/dirb/common.txt
/usr/share/wordlists/dirb/big.txt
/usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt
Note that anytime a new directory is discovered during brute-forcing or spidering, it should be Brute-Forced.
Broken link checker: Find broken links inside HTMLs that may be prone to takeovers
File Backups: Once you have found all the files, look for backups of all the executable files (".php", ".aspx"...). Common variations for naming a backup are: file.ext~, #file.ext#, ~file.ext, file.ext.bak, file.ext.tmp, file.ext.old, file.bak, file.tmp and file.old. You can also use the tool bfac or backup-gen.
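The variations listed above can be generated with a small helper that does purely local string manipulation (the function name and sample filenames are illustrative):

```shell
# Print common backup-name variations for a discovered file
backup_candidates() {
  local f="$1" stem="${1%.*}"
  printf '%s\n' "$f~" "#$f#" "~$f" "$f.bak" "$f.tmp" "$f.old" \
                "$stem.bak" "$stem.tmp" "$stem.old"
}
backup_candidates "index.php"
```

Feed each candidate back into your directory brute-forcer or a curl loop to check which ones actually exist on the server.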
Discover new parameters: You can use tools like Arjun, parameth, x8 and Param Miner to discover hidden parameters. If you can, you could try to search hidden parameters on each executable web file.
Arjun all default wordlists: https://github.com/s0md3v/Arjun/tree/master/arjun/db
Param-miner “params” : https://github.com/PortSwigger/param-miner/blob/master/resources/params
Assetnote “parameters_top_1m”: https://wordlists.assetnote.io/
nullenc0de “params.txt”: https://gist.github.com/nullenc0de/9cb36260207924f8e1787279a05eb773
Comments: Check the comments of all the files, you can find credentials or hidden functionality.
If you are playing a CTF, a "common" trick is to hide information inside comments to the right of the page (using hundreds of spaces so you don't see the data if you open the source code with the browser). Another possibility is to use several new lines and hide information in a comment at the bottom of the web page.
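A quick way to surface such padded comments without scrolling through the source (the sample page and flag{hidden} value are made up for the demo):

```shell
# Sample page with a comment pushed far to the right by 200 padding spaces
printf '%s%*s%s\n' '<html><body>hello</body></html>' 200 '' '<!-- flag{hidden} -->' > page.html

# Pull every HTML comment out of the file, wherever it hides
grep -oE '<!--[^>]*-->' page.html
```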
API keys: If you find any API key, there are guides and tools that indicate how to use API keys of different platforms: keyhacks, zile, truffleHog, SecretFinder, RegHex, DumpsterDive, EarlyBird
Google API keys: If you find any API key looking like AIzaSyA-qLheq6xjDiEIRisP_ujUseYLQCHUjik you can use the project gmapapiscanner to check which APIs the key can access.
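Google API keys have a fixed shape (the literal prefix AIza followed by 35 key characters), so any JS or HTML collected during spidering can simply be grepped for them. The file here is a local sample:

```shell
# Local sample; in practice grep the files you downloaded while spidering
cat > sample.js <<'EOF'
var maps_key = "AIzaSyA-qLheq6xjDiEIRisP_ujUseYLQCHUjik";
EOF
grep -oE 'AIza[0-9A-Za-z_-]{35}' sample.js
```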
S3 Buckets: While spidering look if any subdomain or any link is related with some S3 bucket. In that case, check the permissions of the bucket.
While performing the spidering and brute-forcing you could find interesting things that you should take note of.
Interesting files
Look for links to other files inside the CSS files.
If you find a .env file, information such as API keys, DB passwords and other secrets can be found.
If you find API endpoints you should also test them. These aren't files, but will probably "look like" them.
JS files: In the spidering section, several tools that can extract paths from JS files were mentioned. Also, it would be interesting to monitor each JS file found, as on some occasions a change may indicate that a potential vulnerability was introduced in the code. You could use for example JSMon.
Javascript Deobfuscator and Unpacker: https://lelinhtinh.github.io/de4js/, https://www.dcode.fr/javascript-unobfuscator
Javascript Beautifier: http://jsbeautifier.org/, http://jsnice.org/
JsFuck deobfuscation (JavaScript with the chars "[]!+"): https://ooze.ninja/javascript/poisonjs/
TrainFuck: +72.+29.+7..+3.-67.-12.+55.+24.+3.-6.-8.-67.-23.
In several occasions you will need to understand regular expressions used, this will be useful: https://regex101.com/
You could also monitor the files where forms were detected, as a change in a parameter or the appearance of a new form may indicate potential new vulnerable functionality.
403 Forbidden/Basic Authentication/401 Unauthorized (bypass)
403 & 401 Bypasses
502 Proxy Error
If any page responds with that code, it's probably a misconfigured proxy. If you send an HTTP request like: GET https://google.com HTTP/1.1
(with the Host header and other common headers), the proxy will try to access google.com and you will have found an SSRF.
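The probe can be built as a raw request with an absolute URI in the request line, which is what a proxy expects. The block just prints it; the commented nc line (target.example is a placeholder) is how you would send it to the suspected proxy:

```shell
# Raw absolute-URI request to test for a misconfigured proxy
req=$'GET https://google.com/ HTTP/1.1\r\nHost: google.com\r\nConnection: close\r\n\r\n'
printf '%s' "$req"
# Against a live target (placeholder host):
# printf '%s' "$req" | nc target.example 80
```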
NTLM Authentication - Info disclosure
If the server asking for authentication is Windows, or you find a login asking for your credentials (and asking for a domain name), you can provoke an information disclosure.
Send the header: “Authorization: NTLM TlRMTVNTUAABAAAAB4IIAAAAAAAAAAAAAAAAAAAAAAA=”
and due to how NTLM authentication works, the server will respond with internal info (IIS version, Windows version...) inside the "WWW-Authenticate" header.
You can automate this using the nmap plugin "http-ntlm-info.nse".
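The base64 blob in that header is a minimal NTLM type-1 (negotiate) message; decoding it locally shows the NTLMSSP signature it starts with. The commented curl line (target.example is a placeholder) is how you would send it to a live host and read the leaked info back:

```shell
# Decode the NTLM type-1 message: the first 7 bytes are the "NTLMSSP" signature
echo 'TlRMTVNTUAABAAAAB4IIAAAAAAAAAAAAAAAAAAAAAAA=' | base64 -d | head -c 7; echo
# Probe a live host (placeholder) and check the WWW-Authenticate response header:
# curl -sI -H 'Authorization: NTLM TlRMTVNTUAABAAAAB4IIAAAAAAAAAAAAAAAAAAAAAAA=' \
#      http://target.example/ | grep -i 'WWW-Authenticate'
```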
HTTP Redirect (CTF)
It is possible to put content inside a redirection. This content won't be shown to the user (as the browser will execute the redirection), but something could be hidden in there.
Now that a comprehensive enumeration of the web application has been performed it's time to check for a lot of possible vulnerabilities. You can find the checklist here:
Web Vulnerabilities Methodology
Find more info about web vulns in:
You can use tools such as https://github.com/dgtlmoon/changedetection.io to monitor pages for modifications that might insert vulnerabilities.