CKA-2025 — Q-15 (NetworkPolicy Communication)

🔶 Question (rewritten professionally)

You are given two Deployments running in separate namespaces:

  • Namespace frontend

      • Deployment: mysql-checker

  • Namespace backend

      • Deployment: mysql

      • Service: mysql-service (port 3306)

A deny-all security posture is already enforced across the two namespaces:

  • deny-all-egress (namespace frontend)
  • deny-all-ingress (namespace backend)

Your job:

  1. Inspect the Deployments to understand how they must communicate.
  2. From the folder ~/netpol/, choose ONE NetworkPolicy that allows the required communication. Your chosen policy must:

       • allow ONLY the necessary communication
       • be as restrictive as possible

  3. You MUST NOT delete or modify any existing deny-all policies.
  4. If a deny-all policy still blocks communication in the other namespace, create an additional NetworkPolicy only where it is needed.

Goal:

✔️ Allow frontend → backend (MySQL) communication
✔️ Everything else must stay blocked
✔️ Least-privilege security
✔️ All Pods must become functional


Step-1 — Understand Required Communication

From the mysql-checker Deployment's environment (abridged from deploy.yaml):

env:
  MYSQL_HOST: mysql-service.backend.svc.cluster.local
  MYSQL_PORT: "3306"

Communication required:

👉 The frontend pod must connect OUT (egress).
👉 The backend pod must accept INGRESS on port 3306.

So two allowances are required:

  • Backend needs an Ingress allow (port 3306)
  • Frontend needs an Egress allow
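
A quick way to confirm this from the live cluster (a sketch using standard kubectl; resource names are the ones from deploy.yaml):

# Where does the checker try to connect?
kubectl get deploy mysql-checker -n frontend \
  -o jsonpath='{.spec.template.spec.containers[0].env}'

# Which port does the Service expose, and which pods back it?
kubectl get svc mysql-service -n backend

# Which pod labels will the policies have to select on?
kubectl get pods -n frontend --show-labels
kubectl get pods -n backend --show-labels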

Step-2 — Check What Is Already Blocked

These deny-all policies are already applied (see deny-all.yaml in the terminal capture below):

In frontend namespace:

  • deny-all-egress ❌

In backend namespace:

  • deny-all-ingress ❌

Meaning:

  • Frontend cannot send requests (egress = blocked)
  • Backend cannot receive requests (ingress = blocked)
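
You can verify exactly what is in force with standard kubectl (nothing here is specific to this lab beyond the policy names):

kubectl get networkpolicy -n frontend
kubectl get networkpolicy -n backend

# An empty podSelector with a policyType listed but no rules means
# "select every pod in the namespace and allow nothing of that type"
kubectl describe networkpolicy deny-all-egress -n frontend
kubectl describe networkpolicy deny-all-ingress -n backend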

Step-3 — Inspect Provided Netpol Files in ~/netpol/

You have:

netpol-1.yaml

Allows ANY pod in the frontend namespace to reach backend pods on port 3306 (Ingress only).

netpol-2.yaml ✔️ (least privilege)

Allows only:

  • namespace = frontend
  • pod label app=frontend
  • TCP port 3306 (also Ingress only)

Here the namespaceSelector and podSelector sit in the same from entry, so BOTH must match; that is what makes it tighter than netpol-1.

netpol-3.yaml

ingress: [] denies everything; not useful here.

⚠️ IMPORTANT

💡 All three policies address only backend Ingress. None of them fixes frontend Egress.

So selecting netpol-2.yaml solves only half of the problem.

That is the hidden exam trap.


✅ Step-4 — Apply the Correct Provided Netpol

Apply netpol-2.yaml (its metadata already places it in the backend namespace):

kubectl apply -f ~/netpol/netpol-2.yaml

This fixes:

✔️ backend ingress → allowed
❌ frontend egress → still blocked
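
You can see the remaining half for yourself with a quick probe (the checker image ships nc, as the terminal capture at the end shows; -w 3 is just a 3-second timeout):

kubectl exec -n frontend deploy/mysql-checker -- \
  nc -vz -w 3 mysql-service.backend.svc.cluster.local 3306

# With deny-all-egress still in force this fails; the name may not even
# resolve, because DNS egress is blocked as well.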


✅ Step-5 — Create a NEW NetworkPolicy to Allow frontend Egress

The frontend namespace still has deny-all-egress in force, and none of the provided files touches it, so you must write this policy yourself. It also has to allow DNS (UDP 53); without that the pod cannot even resolve mysql-service.backend.svc.cluster.local.

frontend-allow-egress.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-backend
  namespace: frontend
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
    - Egress
  egress:
    # Rule 1: MySQL traffic, only to pods labelled app=backend in the backend namespace
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: backend
          podSelector:
            matchLabels:
              app: backend
      ports:
        - protocol: TCP
          port: 3306
    # Rule 2: DNS, so the pod can resolve the service FQDN
    - ports:
        - protocol: UDP
          port: 53

This allows:

✔️ frontend → backend on TCP 3306
✔️ DNS lookups (UDP 53), so the service name resolves
✔️ Everything else stays BLOCKED
✔️ The deny-all policies remain untouched
✔️ Least privilege; the exam requirement is satisfied
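
Apply it and re-run the probe; it should now succeed, as the terminal capture at the end confirms (the file name is simply the one used above):

kubectl apply -f frontend-allow-egress.yaml

kubectl exec -n frontend deploy/mysql-checker -- \
  nc -vz -w 3 mysql-service.backend.svc.cluster.local 3306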


Final Working Set of Policies

1️⃣ The deny-all policies (already applied, left untouched)

2️⃣ backend ingress allow (from netpol-2.yaml)

3️⃣ frontend egress allow (custom YAML above)

This is exactly the behaviour the exam expects: the required path is opened, nothing else is, and the deny-all policies stay in place.
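
As a final sanity check, list every NetworkPolicy across both namespaces and confirm that only the policies above exist (exact output formatting varies by cluster):

kubectl get networkpolicy -A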


controlplane ~   ls
deny-all.yaml  deploy.yaml  netpol-1.yaml  netpol-2.yaml  netpol-3.yaml

controlplane ~   cat deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-checker
  namespace: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: frontend
    spec:
      containers:
        - name: checker
          image:  kubernetesway/mysql-connection-checker
          env:
            - name: MYSQL_HOST
              value: mysql-service.backend.svc.cluster.local
            - name: MYSQL_PORT
              value: '3306'
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: backend
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: mysql
          image: mysql:8
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: rootpassword
            - name: MYSQL_DATABASE
              value: mydb
            - name: MYSQL_USER
              value: myuser
            - name: MYSQL_PASSWORD
              value: mypassword
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          emptyDir: {}  # Replace with a PVC in production

controlplane ~   cat deny-all.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: frontend
spec:
  podSelector: {}  # Selects all pods in the namespace
  policyTypes:
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: backend
spec:
  podSelector: {}  # Selects all pods in the namespace
  policyTypes:
    - Ingress
---
controlplane ~   cat netpol-1.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-from-frontend-to-backend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress

  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: frontend

      ports:
        - protocol: TCP
          port: 3306

controlplane ~   cat netpol-2.yaml               # this is the right one: least privilege
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: frontend
          podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 3306

controlplane ~   cat netpol-3.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nothing
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress: []   

controlplane ~   vi egress-to-backend.yaml 

controlplane ~   cat egress-to-backend.yaml   # newly created: backend now accepts ingress, but frontend's deny-all-egress still blocks outgoing traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-to-backend
  namespace: frontend
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: backend
          podSelector:
            matchLabels:
              app: backend
      ports:
        - protocol: TCP
          port: 3306
    - ports:
        - protocol: UDP
          port: 53

controlplane ~   k create ns frontend
namespace/frontend created

controlplane ~   k create ns backend
namespace/backend created

controlplane ~   k apply -f deny-all.yaml -f netpol-2.yaml -f egress-to-backend.yaml 
networkpolicy.networking.k8s.io/deny-all-egress created
networkpolicy.networking.k8s.io/deny-all-ingress created
networkpolicy.networking.k8s.io/allow-frontend-to-backend created
networkpolicy.networking.k8s.io/egress-to-backend created

controlplane ~   k apply -f deploy.yaml 
deployment.apps/mysql-checker created
service/mysql-service created
deployment.apps/mysql created

controlplane ~   k get po -n frontend 
NAME                             READY   STATUS   RESTARTS      AGE
mysql-checker-8674b5755f-46p2w   0/1     Error    1 (12s ago)   21s

controlplane ~   k get po -n frontend 
NAME                             READY   STATUS   RESTARTS      AGE
mysql-checker-8674b5755f-46p2w   0/1     Error    2 (24s ago)   39s
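
# (note: the Error/restarts above are most likely just the checker probing before the MySQL container has finished initializing; it recovers below)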

controlplane ~   k get po -n backend 
NAME                     READY   STATUS    RESTARTS   AGE
mysql-84f67f9849-f4q85   1/1     Running   0          62s

controlplane ~   k logs -n frontend mysql-checker-8674b5755f-46p2w 
Checking connection to mysql-service.backend.svc.cluster.local:3306...
mysql-service.backend.svc.cluster.local (172.20.68.4:3306) open
✅ Successfully connected to mysql-service.backend.svc.cluster.local:3306

controlplane ~   k get po -n frontend 
NAME                             READY   STATUS    RESTARTS      AGE
mysql-checker-8674b5755f-46p2w   1/1     Running   3 (66s ago)   97s

controlplane ~   k get po -n frontend 
NAME                             READY   STATUS    RESTARTS      AGE
mysql-checker-8674b5755f-46p2w   1/1     Running   3 (10m ago)   10m

controlplane ~   k exec -it -n frontend mysql-checker-8674b5755f-46p2w -- nc -vz mysql-service.backend.svc.cluster.local 3306
mysql-service.backend.svc.cluster.local (172.20.68.4:3306) open