Testing for the nodes/proxy RCE vulnerability with K3s
We spin up a K3s cluster in a Slicer microVM and run Graham Helton's detection script to check for the nodes/proxy RCE vulnerability.
Security researcher Graham Helton recently disclosed an interesting Kubernetes RBAC behavior: nodes/proxy GET permissions allow command execution in any Pod. The Kubernetes Security Team closed this as "working as intended," but it's worth understanding the implications.
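For context, the permission in question looks innocuous when it shows up in a role definition. Below is a minimal, illustrative ClusterRole (the name is made up for this example) containing the kind of rule to look for when auditing your own roles:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-readonly # hypothetical name, for illustration only
rules:
- apiGroups: [""]
  resources: ["nodes/proxy"]
  verbs: ["get"]

Any role containing a rule like this, however it is labelled, is in scope for Graham's finding.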
Did you know? We are the team behind OpenFaaS - Serverless Functions for Kubernetes. We've already written up a separate blog post exploring the potential impact of this vulnerability for OpenFaaS users.
Excited to disclose my research allowing RCE in Kubernetes. It allows running arbitrary commands in EVERY pod in a cluster using a commonly granted "read only" RBAC permission. This is not logged and allows for trivial Pod breakout. Unfortunately, this will NOT be patched.

— Graham Helton (@GrahamHelton3), January 26, 2026
In this post, we'll spin up a K3s cluster in a Slicer microVM and run Graham's detection script to check for affected service accounts.
There are multiple ways to run Kubernetes within a microVM, depending on your needs and use-case.
- Stable control-plane, with autoscaling nodes
- Slicer across multiple physical hosts
- A simple 3-node HA cluster
The example used in this post is a single-node cluster designed to be disposable, and used only from within the microVM, not from other hosts via kubectl.
Step 1: Create a userdata script
On a Linux machine with KVM available (bare-metal, or nested virtualization), install Slicer.
You can use a Team, Platform, or Individual license.
Create a working directory for the lab.
mkdir -p k3s-rce
cd k3s-rce

Create userdata.sh to bootstrap K3s:
#!/bin/bash
set -ex
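# Userdata runs as root, so point HOME and USER at the default ubuntu account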
export HOME=/home/ubuntu
export USER=ubuntu
cd /home/ubuntu/
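# Install kubectl, k3sup and jq with arkade, and make them usable by the ubuntu user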
(
arkade update
arkade get kubectl k3sup jq --path /usr/local/bin
chown $USER /usr/local/bin/*
mkdir -p .kube
)
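# Install a single-node K3s cluster (Traefik disabled) and move the kubeconfig into place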
(
k3sup install --local --k3s-extra-args '--disable traefik'
mv ./kubeconfig ./.kube/config
chown $USER .kube/config
)
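# Wait for the node to become Ready, then make KUBECONFIG the default for future shells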
export KUBECONFIG=/home/ubuntu/.kube/config
k3sup ready --kubeconfig $KUBECONFIG
echo "export KUBECONFIG=/home/ubuntu/.kube/config" >> $HOME/.bashrc
chown -R $USER $HOME

Step 2: Generate the VM config and start it
slicer new k3s-rce \
--net=isolated \
--allow=0.0.0.0/0 \
--cpu=2 \
--ram=4 \
--userdata-file ./userdata.sh \
--graceful-shutdown=false \
> k3s-rce.yaml

Start the VM:
sudo -E slicer up ./k3s-rce.yaml

Wait until the VM has run the whole userdata script:
sudo -E slicer vm ready --userdata

Step 3: Run Graham's detection script
Shell into the VM:
sudo -E slicer vm shell --uid 1000

Download and run Graham's detection script:
curl -sLO https://gist.githubusercontent.com/grahamhelton/f5c8ce265161990b0847ac05a74e466a/raw/cad5073f2a1c3edc5ea5a1db81a4f860fb60d271/detect-nodes-proxy.sh
chmod +x detect-nodes-proxy.sh
./detect-nodes-proxy.sh

The script will scan your cluster for service accounts with nodes/proxy permissions and report any that could be exploited.
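If you want to spot-check a single service account by hand, you can use the same kubectl auth can-i pattern that the script suggests in its output. For example, the default service account in the default namespace should come back with "no" on a fresh cluster:

kubectl auth can-i get nodes --subresource=proxy --as=system:serviceaccount:default:default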
Now create a new ClusterRole with the nodes/proxy GET permission, along with a ServiceAccount and RoleBinding to use it.
kubectl create clusterrole nodes-proxy-rce \
--verb=get \
--resource=nodes/proxy
kubectl create serviceaccount nodes-proxy-rce
kubectl create rolebinding nodes-proxy-rce \
--clusterrole=nodes-proxy-rce \
--serviceaccount=default:nodes-proxy-rce

Run Graham's detection script again:
./detect-nodes-proxy.sh

You should see the new service account listed as affected.
[+] Checking RoleBindings
[!] Vulnerable Service Account: default/nodes-proxy-rce -> nodes-proxy-rce (binding ns: default)
Verify: kubectl auth can-i get nodes --subresource=proxy --as=system:serviceaccount:default:nodes-proxy-rce
Let's run the verify command:
kubectl auth can-i get nodes --subresource=proxy --as=system:serviceaccount:default:nodes-proxy-rce

You should see the output:
yes

Wrapping up
This is a real quirk in Kubernetes RBAC: the fact that a request is authorized as a GET or a CREATE depending on the transport protocol is surprising. Graham's script makes it easy to audit your clusters for affected service accounts.
Over on the OpenFaaS blog post, we've written up a more detailed explanation of the potential impact of this vulnerability for OpenFaaS users, and how to mitigate it.
See also: