WEB INFRASTRUCTURE BATTLEFIELD – ARE REVERSE PROXIES ENOUGH?

Are reverse proxies enough for developers and system administrators to defend their applications, or are they silently being exploited in the wild, causing system-level compromises? As I covered the foundational values of DAST scanning and its results in an earlier post, most readers will be aware that scanning alone does not make web applications secure without additional layers of protection involved, such as reverse proxies.

REVERSE PROXIES

To understand what a reverse proxy is and what additional security protections a server administrator generally takes, I compiled research on Defencely's own internal infrastructure and came to agree completely that web applications are dynamic and malicious intent will always find a way. To address this, I needed to explain to the executives what the risks are and how those risks could be managed with proper threat modeling. This post is the result of those discussions, and of asking how and what reverse proxies are in the context of web protection, which has always been a buzzword for web server administrators who, to this day, keep failing to protect their applications from attacks.

A reverse proxy can be used in one of the following roles, or in several of them in parallel:

  1. A load balancer and caching server.
  2. A WAF/IPS proxy server.
  3. An obfuscation proxy.

Load balancers and caching servers help protect against DDoS (Distributed Denial of Service) attacks, whereas a dedicated IPS/WAF-enabled server helps protect against erroneous TCP packets, detecting such anomalous packets and triggering an alarm when an attack is identified. As an obfuscation proxy, an added layer of protection is introduced to the web infrastructure by keeping the software stack used in application development hidden in the headers and the other places an attacker would enumerate first before preparing his attack sequences.
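To illustrate the enumeration an obfuscation proxy is meant to frustrate, here is a minimal sketch of the header check an attacker typically runs first (the hostname is a placeholder):

  # Enumerate stack-revealing response headers; an obfuscation proxy should strip or rewrite these
  curl -sI https://example.com/ | grep -Ei '^(server|x-powered-by|via):'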

Now, some infrastructure implementers consider reverse proxies the ultimate way to protect their web assets as well as their web server, which certainly isn't the case. Reverse proxies only strengthen a deployment when combined with additional security measures.

ATTACK SURFACE MEASUREMENT

To completely measure the attack surface, an attacker or penetration tester has to understand the scope of the security audit and prepare a value-based blueprint of how he would methodically carry out the entire engagement. Compromising a web application guarded by protections such as a WAF, IPS, IDS, HIPS, additional firewalls, firewall rule-sets, honeypots, controls, etc. could very well look complicated; but once an expert who does this professionally for a living comes across the scenario, it doesn't take long to grasp the basic enterprise foundations, analyze the attack surface, and then prepare the attack plan and the goals associated with the security engagement.

To measure the attack surface, three distinct things are taken into particular consideration:

  1. Trusts – interactions between infrastructure assets which both lie within the security scope.
  2. Accesses – any interaction which originates outside the security scope and reaches inside it.
  3. Visibility – informational assets of value which expose the security scope.

Together, these three components of the security audit make up a single relative component known as porosity, which is itself the entire attack surface. Hence:

Porosity = Trusts + Accesses + Visibility
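As a hypothetical illustration (the numbers here are invented for the example): a perimeter with two inter-server trust relationships, four reachable entry points, and one piece of leaked information would give Porosity = 2 + 4 + 1 = 7.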

Other security measures built by infrastructure implementers are controls, whose sole intention is to limit functionality to where it should be and thereby control the workflow of the data, the application logic, and the various expected outputs for expected valid inputs.

The five widely used control variables in overall infrastructure security mechanisms are:

  1. Authentication
  2. Indemnification
  3. Resilience
  4. Subjugation
  5. Continuity

These map to non-repudiation, confidentiality, privacy, integrity, and alarm respectively, as per the points made previously. Now, the definition of a vulnerability itself, for web applications and web infrastructure, would be a violation of accesses and trusts. Altogether, the equation has to be:

Accesses + Trusts (violation of both or either one) = Vulnerability.

This would be the appropriate measure of any vulnerability found in the web infrastructure. To measure weaknesses, the right equation would be:

Authentication + Indemnification + Resilience + Subjugation + Continuity = Weaknesses

A violation of any of the above would be measured as a weakness, not as a vulnerability by any means. A concern arises when non-repudiation, confidentiality, privacy, or integrity has been violated. That equation would be:

Non-Repudiation + Confidentiality + Privacy + Integrity = Concern

Apart from all of the above, any violation of visibility is referred to as an exposure and is of informational value only.

REVERSE PROXY TEST RESULTS

Since reverse proxies are implemented to obstruct incoming malicious traffic and to identify or drop packets which could harm the underlying web applications served via another web server, the reverse proxy is the intermediate server, and auditing it is security infrastructure testing rather than web application vulnerability assessment. Because there is no direct interaction with the web application itself, nor with any of its logical components, a reverse proxy security audit is based solely on infrastructure security testing.

I will break down several tools and testing methodologies through which the certainty and the beliefs of server administrators, who hold the reverse proxy to be the ultimate security relief against attacks, will find their hopeless paths. To methodically run these black-box reverse proxy security audits, I first need to interact with the reverse proxy itself and then escalate my attacks up into the web application, since any malicious payload would first need to infiltrate the intermediary reverse proxy. The billion-dollar question is: are reverse proxies themselves strong enough to prevent attacks, or are they themselves being attacked?

To methodically organize my results into an effective set of information security assurances and to provide grounds for compliance, I have constructed the logic behind breaking these security obstructions in the context of access violations, visibility violations, trust violations, and non-repudiation violations. The whole research isn't public yet, but the measures below are the ones that have been made public so far.

ACCESS VIOLATIONS

Since no interaction is made with the web application itself, the test scope covers only the web infrastructure, including the reverse proxy. I will use the popular Facebook proxy as an example; in security assessments and audits, the same tools and methodological techniques can be used!

Tools used:

  1. Nmap (Network Mapper)
  2. Unicornscan

Nmap is a great tool to look for access entry points, be they via TCP or UDP (UDP accesses are the concern in this example!). The first nmap commands I use are shown below:

[Screenshot: initial nmap UDP scan commands]
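As the original screenshot is not reproduced here, the following is a rough sketch of the kind of UDP access scan described, with <target> standing in for the audited proxy:

  # UDP scan across all ports, skipping host discovery; --reason explains each reported port state
  sudo nmap -sU -Pn -p- --reason <target>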

Nevertheless, this should be the way to run UDP scans on all the existing ports; the results at this point are illustrative rather than the actual results obtained against a reverse proxy during a security audit. I retrieved a bunch of access entries which could be dug into more deeply:

[Screenshot: nmap results listing UDP access entries]

Another way to do this quickly and very efficiently is to use Unicornscan, which is more efficient than Nmap, though only for UDP scans:

[Screenshot: unicornscan UDP scan]
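Again as a sketch, assuming the usual Unicornscan invocation of the time (<target> is a placeholder):

  # UDP mode (-mU) across the full port range, at a raised packets-per-second rate (-r)
  unicornscan -mU -r 10000 <target>:1-65535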

And the results were obtained faster and were reliable. Nmap is still handy and a good fit if Unicornscan is too much.

VISIBILITY VIOLATIONS

This again could be done via Tamper Data (a Firefox add-on) if the reverse proxy interacts with browsers. If not, for test purposes I used openssl:

[Screenshot: openssl session against the reverse proxy]
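As a minimal sketch of that interaction (<proxy-host> is a placeholder for the proxy under audit):

  # Open a raw TLS session to the reverse proxy and speak HTTP to it by hand
  openssl s_client -connect <proxy-host>:443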

As prompted, there was something the reverse proxy expected which I had certainly failed to enumerate at my end. It could be a POST request that the proxy expected. This is only a 'might', a shadowy lead, and it needed to be confirmed. I quickly fired up RESTClient to make sure that was the scenario, and indeed it was!

[Screenshot: RESTClient confirming the expected POST request]

To prove my earlier theory that openssl might just be a handy toolset for the audit, I picked my first target from the publicly available list of proxies Facebook's servers use, one which had been probed before in the access violation tests:

[Screenshot: openssl connection established to a Facebook proxy]

As is apparent, Facebook is connected, and I could now pass on commands as the proxy expected. I will issue a GET request this time, try to pull out some content, and see whether OPTIONS (as a verb/method) has been implemented:

[Screenshot: GET request issued through the openssl session]
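A sketch of the hand-crafted request, again with <proxy-host> as a placeholder:

  # Issue a raw GET request through the TLS session; 'Connection: close' ends it cleanly
  printf 'GET / HTTP/1.1\r\nHost: <proxy-host>\r\nConnection: close\r\n\r\n' | openssl s_client -connect <proxy-host>:443 -quiet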

At this point, I was given a '400 Bad Request', which is a client-side status code; it means the client has, mistakenly or intentionally as foul play (as the case might be!), tried to access a resource on the proxy using a request type (verb/method) the proxy did not expect, which in this case was GET. I could have tested more, but notice that the proxy server replies with 'HTTP 1.0'; this again might be misdirection, or the server really is implemented over HTTP/1.0 and does not use HTTP/1.1 for communication. Trying again:

[Screenshot: repeated request returning the same 400 response]

Again, I received the same client-side status code, which means the client has mistaken its request or the request is malformed. Malformed requests can hence be used in a similar fashion to detect the behavior of the proxy the security auditor is dealing with. This again counts as a visibility violation: the reverse proxy fails to disguise itself as the real web server, and the attacker understands he is talking to a proxy and not to the real web server.
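For instance, a deliberately malformed request line is a simple probe of this kind; the exact status line and error page often betray the proxy software (a sketch, with <proxy-host> as a placeholder):

  # Malformed verb and protocol version; compare the error signature against known proxies
  printf 'BOGUS / HTTP/9.9\r\n\r\n' | openssl s_client -connect <proxy-host>:443 -quiet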

Also, the operational controls implemented across HTTPS cover only confidentiality and certainly not privacy. This means that if there is an intermediary server which a client is able to connect to, and the client originally wants privacy to be implemented, by default HTTPS sets operational controls on the server such as:

  1. Confidentiality
  2. Integrity
  3. Subjugation

These happen to be the default operational controls, universally, for all cipher suites used in HTTPS, so the privacy imagined by the client in the first place is a violation! Such cases arise when the client (user) expects HTTPS to be secure and to readily provide privacy, but technically never knows how HTTPS has been implemented. The privacy violation here results in the server being able to learn where the information came from (the source) and where it is going (the destination). But since confidentiality is the mainstream business reason HTTPS is implemented, the server is not able to look at the data it has received or the data it sends on after receiving it; the server has no right over the data, only the endpoints (the source and the destination) do.

For informational value, I have attached how to test the available cipher suites:

[Screenshot: sslyze cipher suite scan]
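A sketch of the sslyze invocation of that era (newer sslyze releases have since dropped the --regular flag; <target> is a placeholder):

  # Enumerate accepted cipher suites per SSL/TLS version, plus certificate and renegotiation checks
  sslyze --regular <target>:443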

The results would be:

[Screenshot: sslyze scan results]

There are certainly more cases for SSH and other services of interest. A ton of services could be unknown, and these too are worth running visibility tests against, since they might expose certain data or information entities in ways that should not be exposed in the first place.
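A version-detection pass is one sketch of how such unknown services can be inventoried (again, <target> is a placeholder):

  # Identify exposed services and their versions to review what each one gives away
  nmap -sV -Pn <target>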

CONCLUSIVE RESULTS

As discussed in the sections above, the Defencely Red Team has covered most of the research aspects of vulnerability assessments and penetration tests that suit the needs of an enterprise security audit, which should not be limited only to web applications but should also cover the components supporting them. Several areas, such as non-repudiation violations and other violations, require an entire draft of their own. I have been actively involved with the community in order to share the most amazing results publicly once they are prepared, and as part of my active role in Defencely, I am also responsible for the research copies.

From either perspective, enterprise business risk assessment should include reverse proxies as a vulnerability assessment criterion of their own and should include them in conclusive testing, since every test case for each subset of violations described so far could end up compromising an application. And once an attacker is able to direct his or her traffic the way he or she intends, and the reverse proxy fails at the serious moment of impact, a server administrator should never have considered reverse proxies the one ultimate security protection available. Code-level flaws are yet another matter which needs to be broadened and discussed, but that is for another day.
