Kubernetes RBAC: Access Control Without the Headache
ServiceAccounts, Roles and RoleBindings in Kubernetes — how it works, where it breaks and practical examples to set up RBAC quickly and securely.
Jean-Pierre Broeders
Freelance DevOps Engineer
By default, everything in Kubernetes runs under the default ServiceAccount. Sounds harmless, but it's like everyone in the office sharing the same key. Sooner or later, someone opens a door that should have stayed locked.
RBAC — Role-Based Access Control — fixes that problem. Not glamorous, but essential. And once it's set up properly, it barely needs attention.
Why RBAC Matters
A classic mistake: a CI/CD pipeline running with `cluster-admin` privileges. Works great until someone accidentally fires off `kubectl delete namespace production`. Or until a compromised pod gets full API access.
RBAC prevents those disasters. The principle is straightforward: give every workload and every user exactly the permissions they need. Nothing more.
The Building Blocks
Four resources make up the core:
| Resource | Scope | What it does |
|---|---|---|
| Role | Namespace | Defines permissions within a single namespace |
| ClusterRole | Cluster-wide | Defines permissions across the entire cluster |
| RoleBinding | Namespace | Binds a Role to a user or ServiceAccount |
| ClusterRoleBinding | Cluster-wide | Binds a ClusterRole to a user or ServiceAccount |
The difference between Role and ClusterRole is purely scope. A Role applies to one namespace, a ClusterRole to everything. Sounds trivial, but forgetting this distinction is responsible for a lot of overly broad permissions in production.
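For contrast, the namespace-scoped variant of a read-only role looks almost identical; only the kind and the `namespace` field change. A minimal sketch (the `staging` namespace is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: staging   # permissions exist only in this namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```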
A Practical Example
Say there's a monitoring agent that needs to read pods but shouldn't modify anything. The setup looks like this:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitoring-agent
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monitoring-pod-reader
subjects:
- kind: ServiceAccount
  name: monitoring-agent
  namespace: monitoring
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
Three manifests, done. The monitoring agent can now read pods across all namespaces, but nothing else. No peeking at secrets, no modifying deployments, no draining nodes.
ServiceAccounts: The Forgotten Piece
Every pod runs under a ServiceAccount. Without explicit assignment, the default account for that namespace gets used. That account has minimal rights by default, but the issue is: all pods share it. One compromised pod means an attacker gets the same permissions as every other pod in that namespace.
The fix is simple: create a dedicated ServiceAccount per application.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payment-service
  namespace: production
automountServiceAccountToken: false
```
That last line — `automountServiceAccountToken: false` — matters. By default, Kubernetes mounts an API token into every pod. If the application doesn't need the Kubernetes API (and most don't), turn this off. Less attack surface.
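Assigning the dedicated account to a workload is a single `serviceAccountName` field in the pod template. A minimal sketch (deployment name, labels and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payment-service
  template:
    metadata:
      labels:
        app: payment-service
    spec:
      serviceAccountName: payment-service  # dedicated account instead of default
      containers:
      - name: app
        image: registry.example.com/payment-service:1.4.2  # illustrative image
```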
Common Mistakes
Overly broad wildcards. Tempting to use `resources: ["*"]` and `verbs: ["*"]`. Always works, but effectively grants cluster-admin rights. Never do this in production.
ClusterRoleBinding when RoleBinding would suffice. If a service only needs to operate within its own namespace, use a RoleBinding. No reason to grant cluster-wide permissions.
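A useful middle ground: a RoleBinding may also reference a ClusterRole, in which case the permissions apply only in the binding's namespace. A sketch reusing the `pod-reader` ClusterRole from earlier:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: monitoring-pod-reader
  namespace: monitoring   # permissions limited to this namespace
subjects:
- kind: ServiceAccount
  name: monitoring-agent
  namespace: monitoring
roleRef:
  kind: ClusterRole       # shared role definition, namespaced effect
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

This keeps one role definition in Git while each team grants it only where needed.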
Forgetting to test RBAC. After creating roles, verify they work correctly:
```shell
kubectl auth can-i list pods \
  --as=system:serviceaccount:monitoring:monitoring-agent
# yes

kubectl auth can-i delete deployments \
  --as=system:serviceaccount:monitoring:monitoring-agent
# no
```
That `kubectl auth can-i` check is worth its weight in gold. Run it after every RBAC change.
Aggregated ClusterRoles
A handy feature that few teams use: aggregation. This automatically merges ClusterRoles based on labels.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-extras
  labels:
    rbac.example.com/aggregate-to-monitoring: "true"
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-aggregate
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.example.com/aggregate-to-monitoring: "true"
rules: [] # automatically populated by the controller
```
Useful when multiple teams need to independently add permissions to the same role. Prevents merge conflicts in Git and keeps RBAC modular.
Pod Security Standards
RBAC controls who can do what via the API. But what a pod itself can do at the OS level — running as root, host networking, privileged containers — that falls under Pod Security Standards (PSS).
Since Kubernetes 1.25, this is built-in via the Pod Security Admission controller. Three levels:
| Level | Description |
|---|---|
| privileged | No restrictions (system-level workloads only) |
| baseline | Blocks the worst offenders (`hostNetwork`, privileged containers) |
| restricted | Best practice — non-root, all capabilities dropped, no privilege escalation |
Assigning to a namespace is done via labels:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```
The warn mode is great for testing first without breaking anything.
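For reference, a pod spec that passes the restricted level sets a handful of securityContext fields. A minimal sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: compliant-app
  namespace: production
spec:
  securityContext:
    runAsNonRoot: true            # restricted: containers must not run as root
    seccompProfile:
      type: RuntimeDefault        # restricted: seccomp profile required
  containers:
  - name: app
    image: nginxinc/nginx-unprivileged:1.25  # illustrative non-root image
    securityContext:
      allowPrivilegeEscalation: false        # restricted: no privilege escalation
      capabilities:
        drop: ["ALL"]                        # restricted: drop all capabilities
```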
Practical Checklist
A quick checklist for production RBAC:
- Every team or service gets its own ServiceAccount
- `automountServiceAccountToken: false` unless actually needed
- Roles instead of ClusterRoles wherever possible
- No wildcards in rules
- `kubectl auth can-i` checks in the CI/CD pipeline
- Pod Security Standards set to `restricted` for production namespaces
- RBAC manifests in version control, never hand-crafted
Nothing spectacular. Just solid baseline hygiene that prevents a small mistake from becoming a major incident. Most security issues in Kubernetes environments don't come from sophisticated attacks — they come from permissions that are too broad and that nobody ever locked down.
