Q-7: RBAC Fix Using Logs (ServiceAccount + Role/RoleBinding Problem)
(Exactly how it happens in the exam. All traps included.)
Question (Exam Style)
A Deployment named inspector is running in the namespace ops. Users report that the application inside this Deployment is failing to perform its Kubernetes API calls.
Inspect the logs of the running Pod and identify the RBAC issue.
The logs clearly show multiple errors in the format:
Error from server (Forbidden): system:serviceaccount:ops:default
cannot list resource "<resource>" in API group "" in the namespace "ops"
You are provided with:
- Three existing ServiceAccounts in the namespace ops: default, gorilla, goreabc
- Two existing Roles already created in the namespace
- Two existing RoleBindings in the namespace (only one of them should be correct for this application)
Your tasks:
- Identify which Role provides the correct permissions based on the Pod's logs.
- Identify which RoleBinding is wrong.
- Update the deployment so that it uses the correct ServiceAccount associated with the correct RoleBinding.
- Fix the problem using the correct RoleBinding or by creating a new one if necessary.
- Restart the Deployment in any valid way so that new Pods run with updated RBAC settings.
- Verify that the logs no longer show Forbidden errors.
You are given a Deployment running in a namespace. When you check its logs, you see that the Pod is repeatedly failing because the ServiceAccount does not have permissions to list certain resources.
Hidden Traps (Critical)
Trap-1: Logs contain VERY old entries
Exam clusters run workloads for days, so logs are full of junk. Always use:
kubectl logs <pod> --since=2m --timestamps
This makes the real error visible instantly.
Trap-2: The log tells you EXACTLY:
- Which resource is being accessed
- Which verb is being attempted
- Which namespace
- Which ServiceAccount is being used
- Whether RoleBinding or ClusterRoleBinding is required
Example log line:
Error from server (Forbidden): system:serviceaccount:frontend:default
cannot list resource "pods" in API group "" in the namespace "frontend"
This single line gives you EVERYTHING:
| Meaning | Value |
|---|---|
| SA being used | default |
| Namespace of SA | frontend |
| Resource | pods |
| Verb | list |
| Level | Namespaced → Role + RoleBinding |
If the error instead ended with "at the cluster scope", you would need a ClusterRole + ClusterRoleBinding.
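For that cluster-scoped case, the shape of the fix looks like the sketch below. This is illustration only: the ClusterRole name node-reader, the resource nodes, and the ServiceAccount frontend:default are assumed examples, not part of this question.
# hypothetical cluster-scoped fix (all names are examples)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-reader-binding
subjects:
- kind: ServiceAccount
  name: default
  namespace: frontend
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io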
Trap-3: They already gave multiple Roles + RoleBindings
You must choose the correct one. Wrong one = logs still fail.
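A quick way to compare the candidates side by side (the namespace and object names here follow the lab later in this section; substitute whatever the exam gives you):
kubectl get roles,rolebindings -n frontend
kubectl describe role role-a role-b -n frontend
kubectl describe rolebinding bind-a bind-b -n frontend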
Trap-4: The Deployment's command reveals more
Inside Deployment YAML:
command: ["sh", "-c", "while true; do kubectl get pods; kubectl get secrets; sleep 60; done"]
This reveals (see the jsonpath sketch right after this list):
- Required verbs → list
- Required resources → pods, secrets
- Required scope → namespaced
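If you prefer not to open the full YAML, a one-liner like this prints just the container command. The Deployment name rbac-demo and namespace frontend come from the lab below; swap in the names the exam gives you.
kubectl get deploy rbac-demo -n frontend \
  -o jsonpath='{.spec.template.spec.containers[*].command}'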
Trap-5: After fixing RBAC, Pods do NOT auto-restart
You must either:
kubectl rollout restart deployment <name>
or:
kubectl delete pod -l app=<label>
FULL LAB SETUP (Everything the exam provides)
You will create:
- Namespace
- Deployment (broken)
- Two ServiceAccounts
- Two Roles
- Two RoleBindings
- Actual logs with the RBAC failure
1. Create Namespace
kubectl create ns frontend
2. Create the Broken Deployment
# broken-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbac-demo
  namespace: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbac-demo
  template:
    metadata:
      labels:
        app: rbac-demo
    spec:
      serviceAccountName: default   # wrong SA (this is the trap)
      containers:
      - name: checker
        image: bitnami/kubectl
        command:
        - /bin/sh
        - -c
        - |
          while true; do
            kubectl get pods;
            kubectl get secrets;
            sleep 60;
          done
Apply:
kubectl apply -f broken-deploy.yaml
3. Create Two ServiceAccounts (Exam gives these)
kubectl create sa panda -n frontend
kubectl create sa gorilla -n frontend
4. Create Two Roles (Exam gives these)
Role-A (wrong)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-a
  namespace: frontend
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]
Role-B (correct)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-b
  namespace: frontend
rules:
- apiGroups: [""]
  resources: ["pods", "secrets"]
  verbs: ["list"]
Apply both.
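For example, assuming the two Role manifests above were saved as role-a.yaml and role-b.yaml (the filenames are your choice):
kubectl apply -f role-a.yaml -f role-b.yaml
kubectl get roles -n frontend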
5. Create Two RoleBindings (Exam gives these)
RB-A (incorrect binding)
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bind-a
  namespace: frontend
subjects:
- kind: ServiceAccount
  name: panda
  namespace: frontend
roleRef:
  kind: Role
  name: role-a
  apiGroup: rbac.authorization.k8s.io
RB-B (correct binding but Deployment not using it)
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bind-b
  namespace: frontend
subjects:
- kind: ServiceAccount
  name: gorilla
  namespace: frontend
roleRef:
  kind: Role
  name: role-b
  apiGroup: rbac.authorization.k8s.io
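Apply both RoleBindings as well (again assuming filenames of your choosing, e.g. bind-a.yaml and bind-b.yaml), then confirm which Role and which subjects each binding carries:
kubectl apply -f bind-a.yaml -f bind-b.yaml
kubectl get rolebindings -n frontend -o wide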
NOW THE REAL PROBLEM
Check logs:
kubectl logs -n frontend deploy/rbac-demo --since=1m
You will see:
Error from server (Forbidden): system:serviceaccount:frontend:default
cannot list resource "pods" in API group "" in the namespace "frontend"
SA in use = default → it must be changed to gorilla.
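You can also confirm which ServiceAccount the running Pod is actually using straight from its spec (the label app=rbac-demo comes from the lab Deployment above):
kubectl get pod -n frontend -l app=rbac-demo \
  -o jsonpath='{.items[0].spec.serviceAccountName}'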
STEP-BY-STEP SOLUTION
Universal Step 0: Extract Required Permissions From the Logs
This is the step you perform as soon as you inspect the logs.
As soon as you check the Pod logs, you will see one or more Forbidden errors:
Error from server (Forbidden): system:serviceaccount:<namespace>:<service-account>
cannot <verb> resource "<resource>" in API group "" in the namespace "<namespace>"
From this single log line, extract:
- Which ServiceAccount is actually used
- Which resources the Pod is trying to access
- Which verbs (list / get / watch / etc.)
- Whether the scope is namespaced or cluster-wide:
  - If the log ends with: in the namespace "X" → a Role (+ RoleBinding) is needed
  - If the log ends with: at the cluster scope, or the command uses -A internally → a ClusterRole (+ ClusterRoleBinding) is needed
Step 0.1: Check Whether the SA Currently Has Those Permissions
Run:
kubectl auth can-i <verb> <resource> --as=system:serviceaccount:<ns>:<sa> -n <ns>
Example:
kubectl auth can-i list pods --as=system:serviceaccount:ops:default -n ops
You will get:
no
This confirms the RBAC issue.
The goal after fixing the RBAC is:
When you run the exact same command again:
kubectl auth can-i <verb> <resource> --as=system:serviceaccount:<ns>:<sa> -n <ns>
The answer must now be:
yes
This means your ServiceAccount + Role/RoleBinding (or ClusterRoleBinding) are correct.
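When the logs mention more than one resource, check each verb/resource pair. A small shell sketch, assuming the required verb is list and the resources are pods and secrets (as in the log example above):
for res in pods secrets; do
  kubectl auth can-i list "$res" \
    --as=system:serviceaccount:ops:default -n ops
done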
Step 1: Inspect logs
kubectl logs deploy/rbac-demo -n frontend --since=2m --timestamps
Identify:
- Resource: pods, secrets
- Verb: list
- Namespace: frontend
- ServiceAccount: default (wrong)
Step 2: Check Roles
kubectl get role -n frontend
kubectl describe role role-b -n frontend
Role-B matches the needed permissions.
Step 3: Check RoleBindings
kubectl describe rolebinding bind-b -n frontend
You see:
subjects:
  name: gorilla
roleRef:
  name: role-b
Perfect match.
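If none of the existing RoleBindings had matched, the question allows creating a new one. One imperative way to do that (the binding name rbac-demo-binding is arbitrary):
kubectl create rolebinding rbac-demo-binding \
  --role=role-b \
  --serviceaccount=frontend:gorilla \
  -n frontend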
Step 4: Update the Deployment to use the correct SA
Method A: kubectl set serviceaccount (recommended)
kubectl set serviceaccount deploy/rbac-demo gorilla -n frontend
Method B: kubectl edit
kubectl edit deploy rbac-demo -n frontend
# change:
# serviceAccountName: gorilla
Step 5: Restart Pods
kubectl delete pod -l app=rbac-demo -n frontend
or
kubectl rollout restart deployment rbac-demo -n frontend
Step 6: Verify
kubectl logs deploy/rbac-demo -n frontend --since=1m
Now you see the real command output → no RBAC errors.
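As a final check, repeat the Step 0.1 test against the new ServiceAccount; both calls should now answer yes:
kubectl auth can-i list pods --as=system:serviceaccount:frontend:gorilla -n frontend
kubectl auth can-i list secrets --as=system:serviceaccount:frontend:gorilla -n frontend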
FINAL NOTES (High-value exam points)
- Always check logs with --since=2m
- The ServiceAccount identity in the logs is 100% reliable
- "in the namespace" → Role + RoleBinding
- "at the cluster scope" → ClusterRole + ClusterRoleBinding
- The Deployment must be restarted manually
- Always check the Role's rules: resources + verbs
- Match the correct SA in the RoleBinding
- The Deployment's serviceAccountName must use the SA from the correct RoleBinding