Multi-Container Pods
controlplane:~$ k get deployments.apps -n world europe -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: europe
  name: europe
  namespace: world
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: europe
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: europe
    spec:
      containers:
      - image: nginx:1.21.5-alpine
        imagePullPolicy: IfNotPresent
        name: nginx
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: html
        - mountPath: /etc/nginx
          name: nginx-conf
          readOnly: true
      initContainers:
      - command:
        - sh
        - -c
        - echo 'hello, you reached EUROPE' > /html/index.html
        image: busybox:1.28
        imagePullPolicy: IfNotPresent
        name: init-container
        volumeMounts:
        - mountPath: /html
          name: html
      volumes:
      - emptyDir: {}
        name: html
      - configMap:
          defaultMode: 420
          items:
          - key: nginx.conf
            path: nginx.conf
          name: nginx-conf
        name: nginx-conf
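One quick way to confirm the init container populated the shared emptyDir once the Deployment has rolled out (kubectl exec against a Deployment picks one of its Pods):

```bash
# Should print: hello, you reached EUROPE
kubectl -n world exec deploy/europe -c nginx -- cat /usr/share/nginx/html/index.html
```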
Q1
🔹 Requirements Breakdown
- Pod name: alpine-pod-pod
- Image: alpine:latest
- Container name: alpine-container
- Use command: /bin/sh
- Use args: ["-c", "tail -f /config/log.txt"]
- Mount a volume named config-volume from the existing ConfigMap log-configmap
- Mount path: /config
- Restart policy: Never
✅ Final YAML
apiVersion: v1
kind: Pod
metadata:
  name: alpine-pod-pod
spec:
  restartPolicy: Never
  containers:
  - name: alpine-container
    image: alpine:latest
    command:
    - /bin/sh
    - -c
    - tail -f /config/log.txt
    volumeMounts:
    - name: config-volume
      mountPath: /config
  volumes:
  - name: config-volume
    configMap:
      name: log-configmap
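A quick sanity check after applying the manifest; this assumes log-configmap really contains a log.txt key, otherwise tail exits with a missing-file error and the pod fails (restartPolicy is Never):

```bash
kubectl apply -f alpine-pod.yaml    # file name is illustrative
kubectl get pod alpine-pod-pod      # expect Running
kubectl logs alpine-pod-pod         # shows whatever tail -f reads from /config/log.txt
```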
Q2
Volume name should be volume-share of type emptyDir.
After creating the pod, exec into the first container, i.e. volume-container-devops-1, and just for testing create a file blog.txt with any content under that container's mounted path, i.e. /tmp/blog.
The file blog.txt should then also be present under the mounted path /tmp/games on the second container, volume-container-devops-2, since they share the same volume.
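The pasted requirements don't show the container images or keep-alive commands, so here is a minimal sketch of a pod that satisfies the shared-volume part; fedora:latest and the sleep command are assumptions, not from the task text:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-share-devops
spec:
  volumes:
  - name: volume-share
    emptyDir: {}                     # shared, non-persistent volume
  containers:
  - name: volume-container-devops-1
    image: fedora:latest             # assumed image; not specified in the excerpt
    command: ["sleep", "infinity"]   # assumed keep-alive command
    volumeMounts:
    - name: volume-share
      mountPath: /tmp/blog
  - name: volume-container-devops-2
    image: fedora:latest             # assumed image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: volume-share
      mountPath: /tmp/games
```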
thor@jumphost ~$ k get po
NAME READY STATUS RESTARTS AGE
volume-share-devops 2/2 Running 0 9s
thor@jumphost ~$ k exec volume-share-devops -it -c volume-container-devops-1 -- sh
sh-5.2# touch /tmp/blog/blog.txt
sh-5.2# ls /tmp/blog/blog.txt
/tmp/blog/blog.txt
sh-5.2# exit
exit
thor@jumphost ~$ k exec volume-share-devops -it -c volume-container-devops-2 -- sh
sh-5.2# ls /tmp/games/
blog.txt
sh-5.2#
Q3
In the ckad-multi-containers namespace, create a ckad-neighbor-pod pod that matches the following requirements.
Pod has an emptyDir volume named my-vol.
The first container, named main-container, runs the nginx:1.16 image. This container mounts the my-vol volume at the /usr/share/nginx/html path.
The second container is a co-located container named neighbor-container and runs the busybox:1.28 image. This container mounts the volume my-vol at the /var/log path.
Every 5 seconds, this container should write the current date along with the greeting message Hi I am from neighbor container to index.html in the my-vol volume.
root@student-node ~ ✖ k replace -f 1.yaml --force
pod "ckad-neighbor-pod" deleted
pod/ckad-neighbor-pod replaced
root@student-node ~ ➜ cat 1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ckad-neighbor-pod
  namespace: ckad-multi-containers
spec:
  volumes:
  - name: my-vol
    emptyDir: {}
  containers:
  - name: main-container
    image: nginx:1.16
    volumeMounts:
    - name: my-vol
      mountPath: /usr/share/nginx/html
  - name: neighbor-container
    image: busybox:1.28
    command: ["/bin/sh", "-c"]
    # I made the mistake: >> not > and /var/log/index.html not index.html
    args:
    - while true; do
        echo "$(date) - Hi I am from neighbor container" >> /var/log/index.html;
        sleep 5;
      done
    volumeMounts:
    - name: my-vol
      mountPath: /var/log
root@student-node ~ ➜ k get po -n ckad-multi-containers
NAME READY STATUS RESTARTS AGE
ckad-neighbor-pod 2/2 Running 0 15s
root@student-node ~ ➜ k exec -n ckad-multi-containers ckad-neighbor-pod -c neighbor-container -it -- cat /var/log/index.html
Mon Oct 6 22:26:45 UTC 2025 - Hi I am from neighbor container
Mon Oct 6 22:26:50 UTC 2025 - Hi I am from neighbor container
Mon Oct 6 22:26:55 UTC 2025 - Hi I am from neighbor container
---
Another way:
command:
- /bin/sh
- -c
- while true; do echo $(date -u) Hi I am from neighbor container >> /var/log/index.html; sleep 5;done
Q4
In the cka-multi-containers namespace, proceed to create a pod named cka-sidecar-pod that adheres to the following specifications:
- The first container, labeled main-container, is required to run the nginx:1.27 image and write the current date along with the greeting message Hi I am from Sidecar container to /log/app.log.
- The second container, identified as sidecar-container, must use the nginx:1.25 image and serve the app.log file as a webpage located at /usr/share/nginx/html.
Note: Do not rename app.log to index.html. The file name should remain app.log and be available at /app.log via the nginx server.
cluster2-controlplane ~ ➜ vi 4.yaml
cluster2-controlplane ~ ➜ k apply -f 4.yaml
pod/cka-sidecar-pod created
cluster2-controlplane ~ ➜ k get po -n cka-multi-containers
NAME READY STATUS RESTARTS AGE
cka-sidecar-pod 2/2 Running 0 23s
cluster2-controlplane ~ ➜ cat 4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cka-sidecar-pod
  namespace: cka-multi-containers
spec:
  containers:
  - name: main-container
    image: nginx:1.27
    command:
    - sh
    - -c
    - |
      while true; do
        echo "$(date) - Hi I am from Sidecar container" >> /log/app.log
        sleep 5
      done
    volumeMounts:
    - name: shared-log
      mountPath: /log
  - name: sidecar-container
    image: nginx:1.25
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-log
      mountPath: /usr/share/nginx/html/app.log
      # Keep original file name
      subPath: app.log
  volumes:
  - name: shared-log
    emptyDir: {}
---
controlplane ~ ➜ k apply -f 2.yaml
pod/cka-sidecar-pod configured
controlplane ~ ➜ k get po
NAME READY STATUS RESTARTS AGE
webserver 2/2 Running 0 63m
controlplane ~ ➜ cat 2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cka-sidecar-pod
  namespace: cka-multi-containers
spec:
  containers:
  - name: main-container
    image: nginx:1.27
    command: ["/bin/sh", "-c"]
    args: ["while true; do date >> /log/app.log; echo 'Hi I am from Sidecar container' >> /log/app.log; sleep 5; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /log
  - name: sidecar-container
    image: nginx:1.25
    volumeMounts:
    - name: shared-logs
      mountPath: /usr/share/nginx/html
  volumes:
  - name: shared-logs
    emptyDir: {}
Excellent job debugging, and a great observation on the CrashLoopBackOff.
Let’s break this down carefully: your reasoning is correct, but the issue is just a small shell-command logic problem in your main-container again.
🧠 Problem Summary
From your kubectl describe, we can see:
Reason: Error
Exit Code: 2
Command: echo "$(date) - Hi I am from Sidecar container" >> /log/app.log
That’s the smoking gun 🔫 The main-container ran that single echo command and exited immediately. Because it wasn’t wrapped in a loop, the container terminated (here with exit code 2), the default restartPolicy: Always restarted it, and the repeated restarts resulted in CrashLoopBackOff.
⚙️ Why This Happens
- The command executes once, finishes, and exits.
- The nginx image's own entrypoint never runs once you override the command, so nothing keeps the container alive.
- So the container stops right after writing one line → restart loop begins.
✅ Corrected YAML
Here’s the fixed manifest that matches exactly your question:
apiVersion: v1
kind: Pod
metadata:
  name: cka-sidecar-pod
  namespace: cka-multi-containers
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}
  containers:
  - name: main-container
    image: nginx:1.27
    command:
    - sh
    - -c
    - |
      mkdir -p /log
      while true; do
        echo "$(date) - Hi I am from Sidecar container" >> /log/app.log;
        sleep 2;
      done
    volumeMounts:
    - name: shared-logs
      mountPath: /log
  - name: sidecar-container
    image: nginx:1.25
    volumeMounts:
    - name: shared-logs
      mountPath: /usr/share/nginx/html
🔍 Key Fixes Explained
| Issue | Cause | Fix |
|---|---|---|
| CrashLoopBackOff | Command exits instantly | Added infinite loop (while true) |
| Log directory missing | /log may not exist in the image | Added mkdir -p /log |
| Sidecar not serving log | Wrong mount path | Mount the same volume at /usr/share/nginx/html so nginx serves /usr/share/nginx/html/app.log |
| subPath | Unnecessary here because we want the whole directory shared | Removed |
🧪 Validation
1️⃣ Apply the fix:
kubectl apply -f 4.yaml
2️⃣ Check status:
kubectl get po -n cka-multi-containers
✅ Expected:
cka-sidecar-pod 2/2 Running 0 15s
3️⃣ Verify logs inside sidecar:
kubectl exec -n cka-multi-containers cka-sidecar-pod -c sidecar-container -- cat /usr/share/nginx/html/app.log
✅ Output:
Sat Nov 1 10:01:12 UTC 2025 - Hi I am from Sidecar container
Sat Nov 1 10:01:14 UTC 2025 - Hi I am from Sidecar container
...
4️⃣ Test the web endpoint:
kubectl exec -n cka-multi-containers cka-sidecar-pod -c sidecar-container -- curl localhost/app.log
✅ Shows same log contents.
💚 Final Answer: You did everything right conceptually — the only mistake was missing the infinite loop in your main-container. Replace your one-line echo with a while true loop, and the pod will go fully 2/2 Running.
Q5
Create a pod named webserver.
Create an emptyDir volume shared-logs.
Create two containers from the nginx and ubuntu images with the latest tag only (remember to specify the tag, i.e. nginx:latest). The nginx container name should be nginx-container and the ubuntu container name should be sidecar-container on the webserver pod.
Add this command to sidecar-container: "sh","-c","while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done"
Mount the volume shared-logs on both containers at location /var/log/nginx; all containers should be up and running.
Got it 👍 Thanks for pasting the full manifest.
Here’s why the lab marked it wrong:
- In the nginx-container, the volumeMount shows mountPath: /var/log/nginx.
- The checker compares the mountPath string literally, so /var/log/nginx and /var/log/nginx/ count as two different values.
- A strict grader may expect exactly /var/log/nginx/ (with the trailing slash).
That’s why you got ❌ even though functionally your pod works.
✅ Corrected YAML (only the mountPath needs fixing):
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx:latest
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx/   # <- notice the trailing slash
  - name: sidecar-container
    image: ubuntu:latest
    command: ["sh", "-c", "while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx/   # <- same here
Note: I later tried mountPath: /var/log/nginx (no trailing slash) again and it works.
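A quick way to confirm the shared volume is wired up (pod and container names from the task; output details will vary):

```bash
# Both containers see the same directory through the shared emptyDir:
kubectl exec webserver -c sidecar-container -- ls /var/log/nginx
# The sidecar cats access.log/error.log every 30s, so its logs should show nginx log lines:
kubectl logs webserver -c sidecar-container --tail=5
```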
Q6
Do the following in Namespace default:
- Create a Pod named ready-if-service-ready of image nginx:1-alpine
- Configure a LivenessProbe which simply executes the command true
- Configure a ReadinessProbe which checks if the url http://service-am-i-ready:80 is reachable; you can use wget -T2 -O- http://service-am-i-ready:80 for this
- Start the Pod and confirm it isn't ready because of the ReadinessProbe.
Then:
- Create a second Pod named am-i-ready of image nginx:1-alpine with label id: cross-server-ready
- The already existing Service service-am-i-ready should now have that second Pod as an endpoint
- Now the first Pod should be in ready state; check that.
controlplane ~ ➜ k get svc,ep
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/service-am-i-ready ClusterIP 172.20.11.223 <none> 80/TCP 10s
NAME ENDPOINTS AGE
endpoints/service-am-i-ready <none> 10s
controlplane ~ ➜ vi 1.yaml
controlplane ~ ➜ k apply -f 1.yaml
pod/ready-if-service-ready created
controlplane ~ ➜ k get po
NAME READY STATUS RESTARTS AGE
ready-if-service-ready 0/1 Running 0 5s
controlplane ~ ➜ k run am-i-ready --image nginx:1-alpine -l id=cross-server-ready
pod/am-i-ready created
controlplane ~ ➜ k get po
NAME READY STATUS RESTARTS AGE
am-i-ready 1/1 Running 0 6s
ready-if-service-ready 0/1 Running 0 3m3s
controlplane ~ ➜ k get po
NAME READY STATUS RESTARTS AGE
am-i-ready 1/1 Running 0 13s
ready-if-service-ready 1/1 Running 0 3m10s
controlplane ~ ➜ k get po,ep -o wide
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/am-i-ready 1/1 Running 0 27s 172.17.2.2 node02 <none> <none>
pod/ready-if-service-ready 1/1 Running 0 3m24s 172.17.1.2 node01 <none> <none>
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.102.92:6443 17m
endpoints/service-am-i-ready 172.17.2.2:80 9m51s
controlplane ~ ➜ cat 1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ready-if-service-ready
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1-alpine
    livenessProbe:
      exec:
        command:
        - "true"
    readinessProbe:
      exec:
        command:
        - wget
        - "-T2"
        - "-O-"
        - "http://service-am-i-ready:80"
controlplane ~ ➜
```yaml
## These formats are also correct
readinessProbe:
  exec:
    command:
    - wget
    - -T2
    - -O-
    - http://service-am-i-ready:80

readinessProbe:
  exec:
    command:
    - sh
    - -c
    - 'wget -T2 -O- http://service-am-i-ready:80'
```
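For reference, a readinessProbe can also carry timing fields; this is a minimal sketch with illustrative values that the task above does not require:

```yaml
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - 'wget -T2 -O- http://service-am-i-ready:80'
  initialDelaySeconds: 5   # wait before the first check (illustrative value)
  periodSeconds: 10        # how often to probe
  failureThreshold: 3      # consecutive failures before the Pod is marked NotReady
```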
Q7
An existing nginx pod, my-pod-cka, and a Persistent Volume Claim (PVC) named my-pvc-cka are available. Your task is to implement the following modifications:
- Update the pod to include a sidecar container that uses the busybox image. Ensure that this sidecar container remains operational by including an appropriate command ("tail -f /dev/null").
- Share the shared-storage volume between the main application and the sidecar container, mounting it at the path /var/www/shared. Additionally, ensure that the sidecar container has read-only access to this shared volume.
First attempt (duplicate volume name):
volumes:
- name: shared-storage
  hostPath:
    path: "/var/www/shared"
    type: DirectoryOrCreate
- name: shared-storage
  persistentVolumeClaim:
    claimName: my-pvc-cka
controlplane:~$ k replace -f /tmp/kubectl-edit-3424260573.yaml --force
pod "my-pod-cka" deleted
The Pod "my-pod-cka" is invalid: spec.volumes[1].name: Duplicate value: "shared-storage"
Second attempt (two volume sources under one name):
volumes:
- name: shared-storage
  hostPath:
    path: "/var/www/shared"
    type: DirectoryOrCreate
  persistentVolumeClaim:
    claimName: my-pvc-cka
controlplane:~$ k replace -f /tmp/kubectl-edit-3424260573.yaml --force
The Pod "my-pod-cka" is invalid:
* spec.volumes[0].persistentVolumeClaim: Forbidden: may not specify more than 1 volume type
Got it 👍 this is a sidecar container with shared PVC mount use case. Since you already have an nginx pod (my-pod-cka) and a PVC (my-pvc-cka), we’ll just update the pod spec.
Here’s how the updated my-pod-cka YAML would look:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-cka
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-storage
      mountPath: /var/www/shared   # main container gets full access
  - name: busybox-sidecar
    image: busybox
    command: [ "sh", "-c", "tail -f /dev/null" ]
    volumeMounts:
    - name: shared-storage
      mountPath: /var/www/shared   # sidecar mounts the same PVC
      readOnly: true               # sidecar has READ-ONLY access
  volumes:
  - name: shared-storage
    persistentVolumeClaim:
      claimName: my-pvc-cka
Exactly 👍 no need to use a hostPath here.
The question already tells you:
- A PVC (my-pvc-cka) exists.
- That PVC is already bound to some PV.
- Your job is just to mount that same PVC into both containers.
So this part:
volumes:
- name: shared-storage
  persistentVolumeClaim:
    claimName: my-pvc-cka
is all that’s required ✅
🔑 Why not hostPath?
- hostPath ties storage to a specific node’s filesystem (not portable, not recommended for exam scenarios unless explicitly asked).
- Since a PVC is already provided, Kubernetes will ensure the volume is provisioned/attached from the bound PV; no need to mess with host paths.
So in your pod update, just keep the PVC reference exactly as you showed — no hostPath needed.
In Kubernetes, each volume entry under spec.volumes must have a unique name. If you define two volumes with the same name, you’ll get:
spec.volumes[1].name: Duplicate value: "shared-storage"
And if you try to add two different sources (like persistentVolumeClaim + emptyDir) under the same volume, you’ll get:
spec.volumes[0].persistentVolumeClaim: Forbidden: may not specify more than 1 volume type
How to handle your case (adding a sidecar)
If your main container already mounts a PVC, you don’t need to redefine the volumes block.
Instead, in the sidecar container, simply reuse the existing volume by mounting it with the same name.
✅ Example:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-cka
spec:
  volumes:
  - name: shared-storage              # Already defined PVC
    persistentVolumeClaim:
      claimName: my-pvc
  containers:
  - name: main-app
    image: nginx
    volumeMounts:
    - name: shared-storage
      mountPath: /usr/share/nginx/html
  - name: sidecar                     # Sidecar reuses the same volume
    image: busybox
    command: [ "sh", "-c", "echo 'Hello from sidecar' >> /data/sidecar.log; sleep 3600" ]
    volumeMounts:
    - name: shared-storage
      mountPath: /data
👉 Key takeaway:
- Never duplicate the volumes section for a sidecar.
- Just mount the existing volume into the sidecar container.
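To double-check the read-only mount after editing the pod (the manifest file name is illustrative):

```bash
kubectl replace -f my-pod-cka.yaml --force
# The sidecar's volumeMount should report readOnly: true:
kubectl get pod my-pod-cka -o jsonpath='{.spec.containers[?(@.name=="busybox-sidecar")].volumeMounts}'
# Writing through the read-only mount should fail with "Read-only file system":
kubectl exec my-pod-cka -c busybox-sidecar -- touch /var/www/shared/test
```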
Q8
Create a Pod mc-pod in the mc-namespace namespace with three containers. The first container should be named mc-pod-1, run the nginx:1-alpine image, and set an environment variable NODE_NAME to the node name. The second container should be named mc-pod-2, run the busybox:1 image, and continuously log the output of the date command to the file /var/log/shared/date.log every second. The third container should have the name mc-pod-3, run the image busybox:1, and print the contents of the date.log file generated by the second container to stdout. Use a shared, non-persistent volume.
apiVersion: v1
kind: Pod
metadata:
  name: mc-pod
  namespace: mc-namespace
spec:
  containers:
  - name: mc-pod-1
    image: nginx:1-alpine
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
  - name: mc-pod-2
    image: busybox:1
    volumeMounts:
    - name: shared-volume
      mountPath: /var/log/shared
    command:
    - "sh"
    - "-c"
    - "while true; do date >> /var/log/shared/date.log; sleep 1; done"
  - name: mc-pod-3
    image: busybox:1
    command:
    - "sh"
    - "-c"
    - "tail -f /var/log/shared/date.log"
    volumeMounts:
    - name: shared-volume
      mountPath: /var/log/shared
  volumes:
  - name: shared-volume
    emptyDir: {}
Can also be written this way (with the fixes noted in the comments):
command: [ "sh", "-c" ]   # I didn't add this at first, so the pod kept crashing... then added it.
args:
- while true; do echo "$(date)" >> /var/log/shared/date.log; sleep 1; done
# The question was marked wrong for me because sleep 1 was NOT added originally.
Perfect — you’ve built a multi-container Pod with shared storage using an emptyDir. That’s a classic Kubernetes interview question, and you’re very close to getting it right.
Let’s debug why container mc-pod-3 says:
tail: can't open '/var/log/shared/date.log': No such file or directory
tail: no files
🧩 Root Cause
Your container mc-pod-2 (the one that writes to the shared file) never actually starts running the shell loop, because of how the args section is written.
Right now you have:
args:
- while true; do
    echo "$(date)" >> /var/log/shared/date.log;
  done;
Without a command, those args replace the busybox image's default command, so the whole folded string is handed to the runtime as the executable to run rather than as a script for a shell. The loop is never interpreted and date.log never gets created. Hence, when mc-pod-3 starts and tries to tail the file, it's missing.
✅ Correct Fix
Wrap your loop in a proper shell command. You need to tell BusyBox to run a shell (sh -c "...").
Here’s the fixed version of your Pod manifest 👇
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mc-pod
  name: mc-pod
  namespace: mc-namespace
spec:
  volumes:
  - name: abc
    emptyDir: {}
  containers:
  - image: nginx:1-alpine
    name: mc-pod-1
    env:
    - name: NODE_NAME            # the task asks for NODE_NAME, not MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
  - image: busybox:1
    name: mc-pod-2
    volumeMounts:
    - name: abc
      mountPath: /var/log/shared
    command: ["sh", "-c"]
    args: ["while true; do echo $(date) >> /var/log/shared/date.log; sleep 1; done"]   # every second, per the task
  - name: mc-pod-3
    image: busybox:1
    volumeMounts:
    - name: abc
      mountPath: /var/log/shared
    command: ["sh", "-c"]
    args: ["tail -f /var/log/shared/date.log"]
  dnsPolicy: ClusterFirst
  restartPolicy: Always
🧠 Explanation
- command: ["sh", "-c"] tells BusyBox to start a shell.
- args: ["while true; do ...; done"] passes your infinite loop to that shell.
- Now container mc-pod-2 will continuously write timestamps to /var/log/shared/date.log.
- Container mc-pod-3 will successfully tail and display that file, because both share the same emptyDir volume.
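A quick check once the pod shows 3/3 Running (namespace and names from the task):

```bash
# NODE_NAME should resolve to the node name inside mc-pod-1:
kubectl exec -n mc-namespace mc-pod -c mc-pod-1 -- sh -c 'echo $NODE_NAME'
# mc-pod-3 tails date.log to stdout, so its logs should show fresh timestamps:
kubectl logs -n mc-namespace mc-pod -c mc-pod-3 --tail=3
```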