Hello and welcome to my first tutorial on Information Gathering.
In this tutorial we will gather information about our target website using the following freely available online services.
The last thing we are going to use is the robots.txt file, to view the paths which the web admin wants to hide from bots and keep out of public view. All such information can often give your testing a head start. I will explain each one with an example.
You can also view the video tutorial for a better understanding.
This website gives us detailed information about the web hosting and the server, including what is running on the server along with the IP, WhoIs information, server-side technologies, etc. All this information should be saved in your reports so that you can use it to pick the right tests and define the attack surface, which is the most important part of a pentest.
2. YouGetSignal
Many times the particular domain you are targeting is not so vulnerable, or you are not able to find the right attack surface. In such cases you can do a reverse IP domain lookup and find the other domains hosted on the same server, which may be vulnerable and allow you to enter the server. In this way you can work your way into the website.
3. Archive.org
Archive.org is a website which maintains the history of many websites on the internet. Many times you can find information which is no longer displayed on the website because of some security issue, but something related to it can still be found there.
4. Robots.txt
Robots.txt is a file used by websites to disallow crawlers from crawling some of their sensitive data or admin panels. It can be viewed publicly, so it is useful if we find that data and use it later on.
After all this we can move to our target domain and view the robots.txt file, which web admins or web applications use to hide private content from web bots. Viewing it gives you the paths to all that content, and later we can visit those pages and find hidden content, which may even be left open because of the carelessness of a web admin.
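As a quick sketch of that idea, the snippet below pulls the Disallow entries out of a robots.txt body. This is a simplified, hypothetical parser for illustration only; the real robots.txt grammar also has User-agent groups, Allow lines and wildcards.

```python
def disallowed_paths(robots_txt: str) -> list[str]:
    """Return the paths listed after 'Disallow:' directives."""
    paths = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()       # drop comments
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:                               # an empty Disallow means "allow all"
                paths.append(path)
    return paths

sample = """User-agent: *
Disallow: /admin/
Disallow: /backup/
Disallow:
"""
print(disallowed_paths(sample))  # ['/admin/', '/backup/']
```

Every path it returns is a place the admin explicitly did not want indexed, which is exactly where you look first.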
Welcome to my second tutorial on Information Gathering
In this tutorial we'll be using Google to gather more sensitive information about our target.
So first let us start with some basic usage of advanced Google searching. Then I will show you how to use it to gather information about your target.
InUrl is used to search for any text inside the URL. It is often used by hackers to search for vulnerable scripts and plugins or sensitive information on a website.
InText is used to search for any text in the body or the source code of a website. It is often used by hackers to search for a particular version of an application which is exploitable.
FileType is used to search for any type of file you want to locate on a particular website or on a particular subject, or you can search for any type of file freely. It is used by hackers to search for files containing sensitive information in order to exploit websites.
InTitle is used to search within the titles of webpages. Hackers use it to search for vulnerable pages or open indexes on a website.
Site: using this dork you can narrow the search down to a particular website. Hackers use it to target and search for sensitive information on a website.
Link: checks other websites containing links to a website. Hackers use it to search for any other information related to their target.
Minus (-): many times you want to remove some junk results and get more pointed results; prefixing a term with - excludes it from the search.
Now we'll use all the above dorks together to get some more information about our target.
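Combining operators is mechanical, so here is a small hypothetical helper that composes a dork string from operator/value pairs like the ones above. The operator names are Google's; the helper itself is just an illustration, not part of any tool used in this tutorial.

```python
def build_dork(site: str, **operators: str) -> str:
    """Combine site: with other operators (inurl, intitle, filetype...).
    Values containing spaces are quoted, as Google requires."""
    parts = [f'{op}:"{val}"' if " " in val else f"{op}:{val}"
             for op, val in operators.items()]
    parts.append(f"site:{site}")
    return " ".join(parts)

print(build_dork("yoursitehere.com", intitle="index of /", inurl="admin"))
# intitle:"index of /" inurl:admin site:yoursitehere.com
```

The same pattern produces every dork shown below by swapping the operator values.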
Searching for public Sub-domains for your target domain.
Getting open indexes or insecure information
intitle:"index of /" Parent Directory site:yoursitehere.com
You can search for admin directories
intitle:"Index of /admin" site:yoursitehere.com
You can search for password directories
intitle:"Index of /password" site:yoursitehere.com
You can search for mail directories
intitle:"Index of /mail" site:yoursitehere.com
You can search for files like passwd
intitle:"Index of /" passwd site:yoursitehere.com
You can search for password.txt files
intitle:"Index of /" password.txt site:yoursitehere.com
You can search for the .htaccess file
intitle:"Index of /" .htaccess site:yoursitehere.com
You can also search for different extensions, for example .mdb database files in open FTP indexes.
intitle:"index of ftp" .mdb site:yoursitehere.com
You can also try to look for admin pages or login functionality
intitle:"login" "admin" site:yoursitehere.com
Using InURL we can search for different functionalities within the website.
Search for admin login functionality on the target domain
inurl:admin site:yoursitehere.com
Search for login functionality on the target domain
inurl:login site:yoursitehere.com
Using FileType we can search for different files within the website.
Searching for text files containing passwd in URL on target domain
inurl:passwd filetype:txt site:yoursitehere.com
Searching for db files containing admin in URL on target domain
inurl:admin filetype:db site:yoursitehere.com
Searching for logs on target domain
filetype:log site:yoursitehere.com
Searching for Excel and csv files on target domain
filetype:xls OR filetype:csv site:yoursitehere.com
Search for other sites containing links to your target website
link:yoursitehere.com
You can also use Google Translate as a proxy to access the website.
You can also use shodanhq.com for some more information by just using the hostname:yoursitehere.com dork.
Thanks for reading, see you in the next part on Information Gathering.
Welcome to my third tutorial on Information Gathering
In this tutorial we'll be using Nmap to gather open-port information about our target.
So first let us start with some basics of Nmap.
Using Nmap you can check the open ports and the service versions running on a server, which may help you get direct access by exploiting one of those services or via brute forcing. It also helps you understand the services running on the server, which will be useful later while pentesting.
The two basic scan types used most in Nmap are TCP connect() scanning [-sT] and SYN scanning (also known as half-open, or stealth scanning) [-sS].
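To make the difference concrete, here is a minimal sketch of what a connect() scan does, using only the standard library. A SYN scan instead sends only the first packet of the handshake, which requires raw sockets and root privileges, so it is not shown; this is an illustration of the idea, not a replacement for Nmap.

```python
import socket

def connect_scan(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the ports that accept a full TCP connection (what -sT does)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the handshake completed
                open_ports.append(port)
    return open_ports
```

For example, `connect_scan("127.0.0.1", [22, 80, 443])` returns whichever of those ports is listening locally.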
For a basic port scan:
nmap <IP Address>
For a stealth port scan:
nmap -sS <IP Address>
To scan the service versions running on the open ports, use the -sV switch:
nmap -sV <IP Address>
Sometimes a server blocks your ping and plays dead; in that case you can use the -Pn switch to scan it anyway:
nmap -sV -Pn <IP Address>
For OS detection you can use the -O switch:
nmap -O -Pn <IP Address>
That's all for this tutorial, see you in the next one. Keep rocking, and keep hacking.
Welcome to my Fourth tutorial on Information Gathering
In this tutorial we'll gather DNS information about our target using DIG.
But before starting with basics of DIG we must know different types of DNS records.
The following list describes the common DNS record types and their use:
A (Address)
Maps a hostname to an IP address
SOA (Start of Authority)
Identifies the DNS server responsible for the domain information
CNAME (Canonical Name)
Provides additional names or aliases for the address record
MX (Mail Exchange)
Identifies the mail server for the domain
SRV (Service)
Identifies services such as directory services
PTR (Pointer)
Maps IP addresses to hostnames
NS (Name Server)
Identifies other name servers for the domain
AXFR (Zone Transfer)
Can leak all the Sub-Domain Names registered for the domain
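The two most common lookups, forward (A) and reverse (PTR), can be sketched with nothing but the standard library resolver. dig gives far richer output (TTLs, authority data, all the record types above); for those you would need a DNS library such as dnspython, so treat this as a bare illustration of the A/PTR mapping only.

```python
import socket

def a_record(hostname: str) -> str:
    """Forward lookup: hostname -> IPv4 address."""
    return socket.gethostbyname(hostname)

def ptr_record(ip: str) -> str:
    """Reverse lookup: IP address -> primary hostname."""
    return socket.gethostbyaddr(ip)[0]

print(a_record("localhost"))  # 127.0.0.1
```

Note that a PTR lookup can fail or return a different name than the forward lookup; the two zones are maintained independently.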
To install it on Windows:
1) Go to ftp://ftp.isc.org/isc/bind9/9.5.0-P2/
2) Download BIND9.5.0-P2.zip
3) Open the archive with WinZip
4) Extract dig.exe, libbind9.dll, libdns.dll, libisc.dll, libisccfg.dll, liblwres.dll to c:\windows\system32
Now, to get the information on all of these records at once, we can use the ANY keyword, as shown below:
dig ANY <domain>
If you want to get the NS records for the domain, you can use the following query:
dig NS <domain>
In the same manner you change the record type to get different records:
dig <type> <domain>
This is all for this tutorial see you in the next tutorial of Information Gathering.
Welcome to my Fifth tutorial on Information Gathering
In this tutorial we will use Fierce to gather more DNS information and other sub domains of our target.
Note: you will need Perl to run it; otherwise you can set up a virtual machine and use BackTrack.
Fierce is a very useful Perl script which gathers information for you and saves your ass from performing large tests manually. The basic use of Fierce is to find the nameservers for a domain and then check whether any of them is misconfigured to allow a zone transfer, revealing the sub-domains. Fierce asks each DNS server for a zone transfer; if the server is misconfigured, the entire contents of the zone are revealed to the attacker or pentester. In many cases a zone transfer will be refused, but you may still find one misconfigured DNS server that allows it.
Once you get it, you have all the subdomains and the DNS records under that domain, as well as other details. You may also find private domains, which will help you later to pentest and compromise the site.
After that, if the hosts.txt wordlist is available, Fierce will try to brute force sub-domains using it and show you all the results. That's all for Fierce right now. Let me show you how to use it.
fierce.pl -dns yourwebsitehere.com
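The wordlist step Fierce performs can be sketched in a few lines: prepend each candidate word to the domain and keep the names that resolve. The resolver is passed in as a parameter here purely so the logic can be exercised without the network; in practice you would pass socket.gethostbyname (the default below). This is an illustration of the technique, not Fierce's actual code.

```python
import socket
from typing import Callable

def brute_subdomains(domain: str, words: list[str],
                     resolve: Callable[[str], str] = socket.gethostbyname) -> dict[str, str]:
    """Try word.domain for each word; return the names that resolve."""
    found = {}
    for word in words:
        name = f"{word}.{domain}"
        try:
            found[name] = resolve(name)   # raises on names that don't exist
        except OSError:                   # socket.gaierror is a subclass of OSError
            pass
    return found
```

For example, `brute_subdomains("yourwebsitehere.com", ["www", "mail", "dev"])` returns a mapping of every discovered sub-domain to its IP.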
Welcome to my Seventh tutorial on Information Gathering
In this tutorial we'll use Metagoofil to gather information about our target.
Metagoofil is another revolutionary tool that utilizes the Google search engine to get metadata from documents available in the target domain.
Metagoofil works by:
[#] Searching file types in the target domain using the Google search engine
[#] Downloading all of the documents found and saving them to the local disk.
[#] Extracting the metadata from the downloaded documents
[#] Saving the result in an HTML file
The information which can be found in metadata includes usernames, paths, MAC addresses, software, operating systems, etc. This information can be used later on to help in the penetration testing phase.
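The first step of that workflow, searching file types in the target domain, amounts to issuing one Google query per file type. The helper below only builds those query strings; it is a hypothetical sketch of the idea, while Metagoofil itself goes on to fetch the result pages and download each document.

```python
def metadata_queries(domain: str, filetypes: list[str]) -> list[str]:
    """One site-scoped filetype query per extension, as in Metagoofil's search step."""
    return [f"site:{domain} filetype:{ft}" for ft in filetypes]

print(metadata_queries("targetdomain.com", ["pdf", "doc"]))
# ['site:targetdomain.com filetype:pdf', 'site:targetdomain.com filetype:doc']
```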
To access Metagoofil from backtrack 5, you can use the console to execute the following commands:-
# cd /pentest/enumeration/google/metagoofil
Metagoofil options as listed in the application:
-d: domain to search
-t: filetype to download (pdf,doc,xls,ppt,odp,ods,docx,xlsx,pptx)
-l: limit of results to search (default 200)
-h: work with documents in a directory (use "yes" for local analysis)
-n: limit of files to download
-o: working directory
-f: output file
As an example of Metagoofil usage, we will collect documents from our target domain and save the report to an HTML file named result.html. We limit the search to 100 results and the download to 50 files per file type. The following is the command we give:
# ./metagoofil.py -d targetdomain.com -t doc,pdf,xls,docx -l 100 -n 50 -o targetdomainfiles -f result.html
From the results we get a lot of information out of the documents we have collected, such as usernames and path information. We can use the usernames to brute force passwords, while the path information can be used to guess the operating system used by the target. We got all of this information without ever visiting the domain's website ourselves. Metagoofil is a very useful tool when you want to download documents and information about a domain without going into it.
Preview of Results:
That's all on Information Gathering with Metagoofil, see you in the next tutorial of Information Gathering.