Q-11 — Resource Requests Rebalancing

Question (short): A WordPress application (namespace relative-fawn) has 3 replicas. The question shows:

cpu: 1
memory: 2015360Ki

Adjust Pod requests so:

  • Node resources are divided evenly across the 3 Pods.
  • Add 10% overhead for stability (taken from node total).
  • Use the same requests for both containers and all initContainers.
  • Do not change any limits.
  • You may scale the deployment to 0 while editing.
  • After changes, confirm 3 replicas are Running & Ready.

Understand the numbers (critical)

  • The values shown in the question (cpu: 1, memory: 2015360Ki) are the current Pod requests, NOT node capacity.
  • Do NOT use these numbers to compute the new balanced requests. The question explicitly asks to divide node resources — so you must get node capacity and base your math on that.
  • This is an intentional exam trap. Always fetch the node capacity.
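
For example, to confirm the node count and read capacity directly (the node name is a placeholder here; the actual run later in this answer uses controlplane):

  kubectl get nodes
  kubectl get node <node-name> -o jsonpath='{.status.capacity}'

Note that .status.allocatable also exists and is slightly smaller than capacity; this write-up bases the math on capacity.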

Pre-check (always do this first in the exam)

  • If a ResourceQuota or LimitRange exists, read and follow it. (None exist in this scenario, but always check.)
  • If one exists and restricts requests, adapt to those constraints.
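
Both checks are cheap and safe to run first:

  kubectl -n relative-fawn get resourcequota,limitrange
  kubectl -n relative-fawn describe limitrange   # only needed if one exists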

Quick worked example (typical / example numbers)

If node = 4 cores (4000m) and 8Gi:

  • overhead_cpu = 4000 × 0.10 = 400m
  • usable_cpu = 4000 − 400 = 3600m
  • per_pod_cpu = 3600 / 3 = 1200m → set cpu: "1200m"

Memory: 8Gi = 8192Mi = 8388608Ki

  • overhead_mem = 8388608 × 0.10 ≈ 838861Ki
  • usable_mem = 8388608 − 838861 = 7549747Ki (≈ 7.2Gi)
  • per_pod_mem = usable_mem / 3 ≈ 2516582Ki ≈ 2.4Gi → set memory: "2.4Gi" or memory: "2400Mi" (pick tidy rounding acceptable to the grader)

Rule of thumb: choose a clean round number close to the result (e.g., 1200m, 2400Mi). Reasonable rounding is accepted.
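
To avoid arithmetic slips under time pressure, the same math can be done with shell integer arithmetic; a minimal sketch using the example numbers above (multiplying by 90 before dividing keeps everything in integers):

  NODE_CPU_M=4000        # node CPU in millicores
  NODE_MEM_KI=8388608    # node memory in Ki
  echo "$(( NODE_CPU_M * 90 / 100 / 3 ))m"    # 1200m per pod
  echo "$(( NODE_MEM_KI * 90 / 100 / 3 ))Ki"  # 2516582Ki per pod (≈ 2.4Gi)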


Traps & final checks

  • Trap A: Do not use the Pod request numbers shown in the question. Those are current requests, not node capacity. Fetch node capacity.
  • Trap B: Check ResourceQuota and LimitRange in the namespace — if present they may constrain allowable requests.
  • Trap C: Update BOTH initContainers and containers with identical requests. Missing initContainers or mismatch = deduction.
  • Trap D: Do not change limits. Only change requests.
  • Trap E: Round sensibly. Use units the grader accepts (m for CPU, Mi/Gi for memory).
  • Trap F: If the cluster has heterogeneous nodes, base the math on the node(s) the scheduler will actually place the Pods on (exam environments usually have a single node). Use kubectl get nodes -o wide to confirm; a capacity comparison command follows this list.
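
For Trap F, a quick way to compare capacity across all nodes in one shot:

  kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory'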

Quick checklist to work through in the exam (one line per step):

  1. kubectl -n relative-fawn get resourcequota; kubectl -n relative-fawn get limitrange
  2. kubectl get node <node-name> -o jsonpath='{.status.capacity.cpu}'; kubectl get node <node-name> -o jsonpath='{.status.capacity.memory}'
  3. compute: overhead = node * 10%; usable = node - overhead; per_pod = usable / 3 → round → set requests
  4. kubectl -n relative-fawn scale deploy wordpress --replicas=0
  5. kubectl -n relative-fawn edit deploy wordpress → add same resources.requests to initContainers & containers
  6. kubectl -n relative-fawn scale deploy wordpress --replicas=3 + kubectl -n relative-fawn rollout status deploy/wordpress
  7. verify pods Running & Ready
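
As an alternative to kubectl edit in step 5, a single strategic-merge patch can set identical requests on the initContainer and the container in one shot. This is a sketch: the container names are taken from the manifest shown later, and <CPU> / <MEM> are placeholders for the computed values:

  kubectl -n relative-fawn patch deploy wordpress -p '{
    "spec": {"template": {"spec": {
      "initContainers": [{"name": "init-db", "resources": {"requests": {"cpu": "<CPU>", "memory": "<MEM>"}}}],
      "containers": [{"name": "wordpress-app", "resources": {"requests": {"cpu": "<CPU>", "memory": "<MEM>"}}}]
    }}}}'

Because containers merge by name in a strategic-merge patch, this touches only the requests and leaves limits and everything else intact.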

controlplane ~   k get no                                                     # single node
NAME           STATUS   ROLES           AGE    VERSION
controlplane   Ready    control-plane   173m   v1.34.0

controlplane ~   k get po -n relative-fawn -o wide                            # All pods are running on the same node
NAME                         READY   STATUS    RESTARTS   AGE     IP            NODE           NOMINATED NODE   READINESS GATES
wordpress-59857fc4d8-467dh   1/1     Running   0          8m28s   172.17.0.27   controlplane   <none>           <none>
wordpress-59857fc4d8-jzzfh   1/1     Running   0          8m28s   172.17.0.28   controlplane   <none>           <none>
wordpress-59857fc4d8-mcjgq   1/1     Running   0          8m28s   172.17.0.26   controlplane   <none>           <none>

controlplane ~   kubectl get node controlplane -o jsonpath='{.status.capacity.cpu}'
16
controlplane ~   kubectl get node controlplane -o jsonpath='{.status.capacity.memory}'
65838280Ki
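
Plugging these capacities into the formula (16 cores = 16000m; integer shell arithmetic happens to be exact here):

  echo "$(( 16000 * 90 / 100 / 3 ))m"        # 4800m per pod
  echo "$(( 65838280 * 90 / 100 / 3 ))Ki"    # 19751484Ki per pod

These are the values used in deploy.yaml below.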

controlplane ~   k scale deploy -n relative-fawn wordpress --replicas 0       # scale to 0 first, before setting the new resource requests
deployment.apps/wordpress scaled

controlplane ~   k get po -n relative-fawn 
No resources found in relative-fawn namespace.

controlplane ~   k apply -f deploy.yaml 
deployment.apps/wordpress configured

controlplane ~   k get po -n relative-fawn 
NAME                         READY   STATUS    RESTARTS   AGE
wordpress-59857fc4d8-5tvkn   1/1     Running   0          59s
wordpress-59857fc4d8-9xgnn   1/1     Running   0          59s
wordpress-59857fc4d8-xrv8b   1/1     Running   0          59s

controlplane ~   cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: relative-fawn
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      initContainers:
      - name: init-db
        image: busybox:1.28
        command: ["/bin/sh", "-c"]
        args:
        - echo init; sleep 1
        resources:
          requests:
            cpu: "4800m"          # [16000-(16000*0.1)]÷3
            memory: "19751484Ki"  # [65838280-(65838280*0.1)]÷3

      containers:
      - name: wordpress-app
        image: wordpress:5.8
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: "4800m"
            memory: "19751484Ki"
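
Finally, it's worth confirming the new requests actually landed on the live Pods and on the initContainer template, for example:

  kubectl -n relative-fawn get pod -o custom-columns='NAME:.metadata.name,CPU:.spec.containers[0].resources.requests.cpu,MEM:.spec.containers[0].resources.requests.memory'
  kubectl -n relative-fawn get deploy wordpress -o jsonpath='{.spec.template.spec.initContainers[0].resources.requests}'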