Conduct Search Engine Discovery and Reconnaissance for Information Leakage is part of the information gathering phase of the testing checklist.
This article examines how an application, system, or organisation can unintentionally leak information to the cyber world — information that a third party can misuse to cause loss.
The objective is to understand what sensitive design and configuration information about the application/system/organisation is exposed, either directly (on the organisation's website) or indirectly (on a third-party website).
The concept behind this is to discover what sensitive information about your application, system, or organisation is visible to the world without your knowledge. This test can yield surprising and valuable results.
We use these searches to find:
- Network diagrams and configurations
- Archived posts and emails by administrators and other key staff
- Log on procedures and username formats
- Usernames and passwords
- Error message content
- Development, test, UAT (user acceptance testing: the last phase of testing, in which the intended audience tests the software under real-world conditions), and staging versions of the website
There are direct and indirect elements to search engine discovery and reconnaissance. Direct methods relate to searching the indexes and the associated content from caches. Indirect methods relate to gleaning sensitive design and configuration information by searching forums, newsgroups, and tendering websites.
Once a search engine robot has completed crawling, it commences indexing the web page based on tags and associated attributes, such as <TITLE>, in order to return the relevant search results.
If the robots.txt file is not updated during the lifetime of the web site, and inline HTML meta tags that instruct robots not to index content have not been used, then it is possible for indexes to contain web content that the owners did not intend to be included. Website owners may use the previously mentioned robots.txt, HTML meta tags, authentication, and tools provided by search engines to remove such content.
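As a sketch, a robots.txt file that asks crawlers to skip internal content might look like the following (the paths are hypothetical):

```
# Hypothetical robots.txt asking crawlers not to index internal areas
User-agent: *
Disallow: /staging/
Disallow: /backup/

# Equivalent per-page control via an HTML meta tag:
# <meta name="robots" content="noindex, nofollow">
```

Note that robots.txt is advisory: it does not stop hostile crawlers, and the listed paths remain publicly readable, so for genuinely sensitive content authentication and the search engines' removal tools are the appropriate controls.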
Using the advanced “site:” search operator, it is possible to restrict search results to a specific domain. Do not always limit testing to just one search engine provider as they may generate different results depending on when they crawled content and their own algorithms. Consider using the following search engines:
- Duck Duck Go
- Google
- ixquick/Startpage
- PunkSpider
Duck Duck Go and ixquick/Startpage leak less information about the tester.
Google provides the advanced "cache:" search operator, but this is equivalent to clicking the "Cached" link next to each Google search result. Hence, using the advanced "site:" search operator and then clicking "Cached" is preferred.
The Google SOAP Search API supports the doGetCachedPage and the associated doGetCachedPageResponse SOAP Messages to assist with retrieving cached pages.
PunkSpider is a web application vulnerability search engine. It is of little use to a penetration tester doing manual work, but it can be useful as a demonstration of how easily script kiddies can find vulnerabilities.
Useful queries are:
| Operator | Description | Example |
| --- | --- | --- |
| site: | Limits the results to web resources within a given website | filetype:xls site:nowebsec.com |
| filetype: | Limits the results to web resources matching the desired file type (not always correct) | filetype:xls intext:email intext:password |
| inurl: | Searches within the URLs of the crawled web pages | inurl:wp-content/uploads filetype:sql |
| intext: | Searches within the text of the web pages (the text a regular user sees when browsing) | intext:"Apache Server Status" |
| - | Excludes the term/operator from the results | inurl:citrix inurl:login.asp -site:citrix.com |
| "search-term" | Quotation marks return only results that exactly match the quoted phrase | inurl:"server-status" |
| * | A wildcard for any unknown/arbitrary words. It does not complete a word (like foot*) but marks that any word can appear at that position | a * saved is a * earned |
| + | The term after the + modifier must appear in the results; useful for including an overly common word that Google would otherwise ignore | "Machine gun" +uzi |
| . | A single-character wildcard; any single character can appear in its place | inurl:.ssh intitle:index.of authorized_keys |
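The operators above can also be combined programmatically. As a minimal sketch (the function name and the example.com domain are placeholders, not part of any real tool), a query builder might look like:

```python
def build_dork(site=None, filetype=None, inurl=None, intext=None, exclude_sites=()):
    """Assemble a search query string from common advanced operators."""
    parts = []
    if site:
        parts.append(f"site:{site}")          # restrict to one domain
    if filetype:
        parts.append(f"filetype:{filetype}")  # restrict to a file type
    if inurl:
        parts.append(f"inurl:{inurl}")        # match within the URL
    if intext:
        parts.append(f'intext:"{intext}"')    # exact phrase in page text
    parts.extend(f"-site:{s}" for s in exclude_sites)  # exclusions
    return " ".join(parts)

print(build_dork(site="example.com", filetype="xls", intext="password"))
# site:example.com filetype:xls intext:"password"
```

Such a helper makes it easy to run the same battery of queries against each domain in scope during an engagement.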
To find the web content of owasp.org indexed by a typical search engine, the syntax required is: site:owasp.org
To display the index.html of owasp.org as cached, the syntax is: cache:owasp.org
Google Hacking Database
Using these Google queries, attackers dig out log files and other important documents of a site, which helps them penetrate and compromise it. Such queries are called Google dorks; you can try these Google hacking tips and check out the results. The Google Hacking Database is a list of useful search queries for Google, organised into several categories:
- Files containing usernames
- Sensitive Directories
- Web Server Detection
- Vulnerable Files
- Vulnerable Servers
- Error Messages
- Files containing juicy info
- Files containing passwords
- Sensitive Online Shopping Info
What can we do to protect against information leakage?
Carefully consider the sensitivity of design and configuration information before it is posted online.
Periodically review the sensitivity of existing design and configuration information that is posted online.
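Part of such a periodic review can be automated: check whether known sensitive paths are still crawlable according to the site's robots.txt. The sketch below (the path list and function name are hypothetical) uses Python's standard robotparser:

```python
from urllib import robotparser

# Hypothetical internal paths to audit during the periodic review
SENSITIVE_PATHS = ["/admin/", "/staging/", "/backup/"]

def uncovered_paths(robots_txt: str, paths=SENSITIVE_PATHS):
    """Return the sensitive paths that robots.txt still allows crawlers to fetch."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [p for p in paths if rp.can_fetch("*", p)]

example = """User-agent: *
Disallow: /admin/
"""
print(uncovered_paths(example))  # /staging/ and /backup/ are not disallowed
```

Keep in mind that a Disallow line only deters well-behaved crawlers and also advertises that the path exists, so anything sensitive should sit behind authentication rather than merely being excluded from indexing.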
This article is for educational purposes only.