📌 Q-7 — RBAC Fix Using Logs (ServiceAccount + Role/RoleBinding Problem)

(Exactly how it happens in the exam. All traps included.)

📌 Question (Exam Style)

A Deployment named inspector is running in the namespace ops. Users report that the application inside this Deployment is failing to perform its Kubernetes API calls.

Inspect the logs of the running Pod and identify the RBAC issue.

The logs clearly show multiple errors in the format:

Error from server (Forbidden): <resource> is forbidden: User "system:serviceaccount:ops:default"
cannot list resource "<resource>" in API group "" in the namespace "ops"

You are provided with:

  • Three existing ServiceAccounts in the namespace ops: default, gorilla, goreabc
  • Two existing Roles already created in the namespace
  • Two existing RoleBindings in the namespace (only one of them is correct for this application)

🔧 Your tasks:

  1. Identify which Role provides the correct permissions based on the Pod's logs.
  2. Identify which RoleBinding is wrong.
  3. Update the deployment so that it uses the correct ServiceAccount associated with the correct RoleBinding.
  4. Fix the problem using the correct RoleBinding or by creating a new one if necessary.
  5. Restart the Deployment in any valid way so that new Pods run with updated RBAC settings.
  6. Verify that the logs no longer show Forbidden errors.

You are given a Deployment running in a namespace. When you check its logs, you see that the Pod is repeatedly failing because the ServiceAccount does not have permissions to list certain resources.


⭐ Hidden Traps (Critical)

Trap-1 — Logs contain VERY old entries

Exam clusters run workloads for days → logs are full of junk. Always use:

kubectl logs <pod> --since=2m --timestamps

This makes the real error visible instantly.
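
If the real error is still buried in noise, filtering for the Forbidden keyword narrows it down further (a small convenience sketch; the pod name and namespace are whatever kubectl get pods shows for your task):

kubectl logs <pod> -n <namespace> --since=2m --timestamps | grep -i forbidden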


Trap-2 — The log tells you EXACTLY:

  • Which resource is being accessed
  • Which verb is being attempted
  • Which namespace
  • Which ServiceAccount is being used
  • Whether RoleBinding or ClusterRoleBinding is required

Example log line:

Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:frontend:default"
cannot list resource "pods" in API group "" in the namespace "frontend"

This single line gives you EVERYTHING:

| Meaning         | Value                           |
|-----------------|---------------------------------|
| SA being used   | default                         |
| Namespace of SA | frontend                        |
| Resource        | pods                            |
| Verb            | list                            |
| Scope           | Namespaced → Role + RoleBinding |

If the error instead ended with "at the cluster scope", you would need a ClusterRole + ClusterRoleBinding.
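
For that cluster-scoped case, the fix follows the same pattern with cluster-level objects. A minimal sketch, assuming (purely for illustration) that the app needs to list nodes and runs as the gorilla ServiceAccount in frontend:

# hypothetical names: node-reader / node-reader-binding
kubectl create clusterrole node-reader --verb=list --resource=nodes
kubectl create clusterrolebinding node-reader-binding \
  --clusterrole=node-reader \
  --serviceaccount=frontend:gorilla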


Trap-3 — They already gave multiple Roles + RoleBindings

You must choose the correct one. Wrong one = logs still fail.


Trap-4 — The Deployment's command reveals more

Inside Deployment YAML:

command: ["sh", "-c", "while true; do kubectl get pods; kubectl get secrets; sleep 60; done"]

This reveals:

  • Required verbs → list
  • Required resources → pods, secrets
  • Required scope → namespaced
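
You can pull that command straight out of the live Deployment instead of opening the YAML file (a sketch with placeholder names; adjust the container index if the Pod has several containers):

kubectl get deploy <name> -n <namespace> \
  -o jsonpath='{.spec.template.spec.containers[0].command}'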

Trap-5 — After fixing RBAC, Pods do NOT auto-restart

You must either:

kubectl rollout restart deployment <name>

or:

kubectl delete pod -l app=<label>


🧪 FULL LAB SETUP (Everything exam provides)

You will create:

✔ Namespace
✔ Deployment (broken)
✔ Two ServiceAccounts
✔ Two Roles
✔ Two RoleBindings
✔ Actual logs with RBAC failure


1. Create Namespace

kubectl create ns frontend

2. Create the Broken Deployment

# broken-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbac-demo
  namespace: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbac-demo
  template:
    metadata:
      labels:
        app: rbac-demo
    spec:
      serviceAccountName: default     # ❌ WRONG SA
      containers:
      - name: checker
        image: bitnami/kubectl
        command:
          - /bin/sh
          - -c
          - |
            while true; do
              kubectl get pods;
              kubectl get secrets;
              sleep 60;
            done

Apply:

kubectl apply -f broken-deploy.yaml

3. Create Two ServiceAccounts (Exam gives these)

kubectl create sa panda -n frontend
kubectl create sa gorilla -n frontend
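
A quick sanity check that all three ServiceAccounts now exist (default is created automatically with the namespace):

kubectl get sa -n frontend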

4. Create Two Roles (Exam gives these)

Role-A (wrong)

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-a
  namespace: frontend
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]

Role-B (correct)

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-b
  namespace: frontend
rules:
- apiGroups: [""]
  resources: ["pods", "secrets"]
  verbs: ["list"]

Apply both.
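
For example, if you saved the two manifests as role-a.yaml and role-b.yaml (filenames are your choice, not given by the exam):

kubectl apply -f role-a.yaml -f role-b.yaml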


5. Create Two RoleBindings (Exam gives these)

RB-A (incorrect binding)

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bind-a
  namespace: frontend
subjects:
- kind: ServiceAccount
  name: panda
  namespace: frontend
roleRef:
  kind: Role
  name: role-a
  apiGroup: rbac.authorization.k8s.io

RB-B (correct binding but Deployment not using it)

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bind-b
  namespace: frontend
subjects:
- kind: ServiceAccount
  name: gorilla
  namespace: frontend
roleRef:
  kind: Role
  name: role-b
  apiGroup: rbac.authorization.k8s.io
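
Apply both bindings the same way (again, the filenames bind-a.yaml and bind-b.yaml are just an assumption) and confirm they exist in the namespace:

kubectl apply -f bind-a.yaml -f bind-b.yaml
kubectl get rolebinding -n frontend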

🧩 NOW THE REAL PROBLEM

Check logs:

kubectl logs -n frontend deploy/rbac-demo --since=1m

You will see:

Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:frontend:default" cannot list resource "pods" in API group "" in the namespace "frontend"
Error from server (Forbidden): secrets is forbidden: User "system:serviceaccount:frontend:default" cannot list resource "secrets" in API group "" in the namespace "frontend"

→ SA = default → Must be changed to gorilla
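
You can confirm which ServiceAccount the Pod is actually running with from its spec as well, not only from the logs (the jsonpath below assumes a single Pod matches the label):

kubectl get pod -n frontend -l app=rbac-demo \
  -o jsonpath='{.items[0].spec.serviceAccountName}'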


🎯 STEP-BY-STEP SOLUTION

🔍 Universal Step 0 — Extract Required Permissions From Logs

👉 This is the step you perform as soon as you inspect the logs.

As soon as you check the Pod logs, you will see one or more Forbidden errors:

Error from server (Forbidden): <resource> is forbidden: User "system:serviceaccount:<namespace>:<service-account>"
cannot <verb> resource "<resource>" in API group "" in the namespace "<namespace>"

From this single log line, extract:

  1. Which ServiceAccount is actually used
  2. Which resources the Pod is trying to access
  3. Which verbs (list / get / watch / etc.)
  4. Whether the scope is namespaced or cluster-wide:
     • If the log ends with in the namespace "X" → a Role (+ RoleBinding) is needed
     • If the log says at the cluster scope (for example, the app lists across all namespaces with -A) → a ClusterRole (+ ClusterRoleBinding) is needed

🔎 Step 0.1 — Check Whether the SA Currently Has Those Permissions

Run:

kubectl auth can-i <verb> <resource> --as=system:serviceaccount:<ns>:<sa> -n <ns>

Example:

kubectl auth can-i list pods --as=system:serviceaccount:ops:default -n ops

You will get:

no

This confirms the RBAC issue.
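
For this lab, checking both resources the container actually touches makes the gap explicit (both should print no at this point):

kubectl auth can-i list pods --as=system:serviceaccount:frontend:default -n frontend
kubectl auth can-i list secrets --as=system:serviceaccount:frontend:default -n frontend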

⭐ The goal after fixing the RBAC is:

When you run the exact same command again:

kubectl auth can-i <verb> <resource> --as=system:serviceaccount:<ns>:<sa> -n <ns>

The answer must now be:

yes

This means your ServiceAccount + Role/RoleBinding (or ClusterRoleBinding) are correct.
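
If you want the full picture rather than a single verb/resource check, kubectl can also list everything a ServiceAccount is allowed to do, which helps you spot which existing Role is already bound to it:

kubectl auth can-i --list -n frontend --as=system:serviceaccount:frontend:gorilla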


1️⃣ Inspect logs

kubectl logs deploy/rbac-demo -n frontend --since=2m --timestamps

Identify:

  • Resource: pods, secrets
  • Verb: list
  • Namespace: frontend
  • ServiceAccount: default (wrong)

2️⃣ Check roles

kubectl get role -n frontend
kubectl describe role role-b -n frontend

Role-B matches the needed permissions.
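
The describe output for role-b should look roughly like this (labels/annotations omitted here), which is what confirms the match:

Name:         role-b
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  pods       []                 []              [list]
  secrets    []                 []              [list]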


3️⃣ Check RoleBindings

kubectl describe rolebinding bind-b -n frontend

You see (abridged describe output):

Role:
  Kind:  Role
  Name:  role-b
Subjects:
  Kind            Name     Namespace
  ----            ----     ---------
  ServiceAccount  gorilla  frontend

Perfect match.


4️⃣ Patch Deployment to use correct SA

Method A — kubectl set serviceaccount

kubectl set serviceaccount deploy/rbac-demo gorilla -n frontend

Method B — kubectl edit

kubectl edit deploy rbac-demo -n frontend
# change:
# serviceAccountName: gorilla
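
Method C — kubectl patch (an equivalent one-liner if you prefer not to open an editor):

kubectl patch deploy rbac-demo -n frontend \
  -p '{"spec":{"template":{"spec":{"serviceAccountName":"gorilla"}}}}'

Note that all three methods change the Pod template, which by itself triggers a new rollout; the explicit restart in the next step matters mainly when you only changed Roles or RoleBindings and left the Deployment untouched.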

5️⃣ Restart Pods

kubectl delete pod -l app=rbac-demo -n frontend

or

kubectl rollout restart deployment rbac-demo -n frontend

6️⃣ Verify

kubectl logs deploy/rbac-demo -n frontend --since=1m

Now you see the real command output and no more Forbidden errors.
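
Closing the loop with Step 0.1, the same permission checks now run against the gorilla ServiceAccount and should both answer yes:

kubectl auth can-i list pods --as=system:serviceaccount:frontend:gorilla -n frontend
kubectl auth can-i list secrets --as=system:serviceaccount:frontend:gorilla -n frontend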


🟩 FINAL NOTES (High-value exam points)

  • Always check logs with --since=2m
  • ServiceAccount identity in logs is 100% reliable
  • "in the namespace" → Role + RoleBinding
  • "at the cluster scope" → ClusterRole + ClusterRoleBinding
  • Deployment must be restarted manually
  • Always check the Role's rules → resources + verbs
  • Match the correct SA in the RoleBinding
  • The Deployment's serviceAccountName must use the SA from the correct RoleBinding