Resolving Unintended Kubernetes Pod Scheduling: A Practical Guide

When managing a Kubernetes cluster, scheduling issues can land pods on nodes you never intended. That happened recently in my homelab: a PostgreSQL pod ended up on an unexpected host, and performance suffered as a result. Here's how I diagnosed and fixed the issue.

Understanding the Cluster Layout

Before diving into fixes, it’s essential to understand the cluster topology. Here’s an example of what the node setup looked like:

NAME               STATUS   ROLES                       AGE    VERSION        LABELS
zephyr-agent      Ready    <none>                      590d   v1.32.3+k3s1   kubernetes.io/arch=arm64,kubernetes.io/hostname=zephyr-agent
valkyrie-master   Ready    control-plane,etcd,master   60d    v1.32.3+k3s1   kubernetes.io/arch=arm64,node-role.kubernetes.io/control-plane=true
echo-node         Ready    <none>                      87d    v1.33.1+k3s1   kubernetes.io/arch=amd64,kubernetes.io/hostname=echo-node
atlas-control     Ready    control-plane,etcd,master   591d   v1.32.3+k3s1   kubernetes.io/arch=amd64,node-role.kubernetes.io/control-plane=true
nova-node         Ready    <none>                      87d    v1.32.3+k3s1   kubernetes.io/arch=amd64,kubernetes.io/hostname=nova-node
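
For reference, a node overview like this comes from the command below; the label lists above are trimmed to the entries relevant to scheduling.

kubectl get nodes --show-labels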

For this environment, I intended PostgreSQL to avoid nova-node, but Kubernetes kept scheduling it there.

Checking Scheduling Constraints

First, I verified where the pod was running:

kubectl get pod/postgresql-0 -o wide -n homelab

Output:

NAME           READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
postgresql-0   1/1     Running   0          14m   10.42.6.190   nova-node   <none>           <none>

Despite expectations, the pod ended up on nova-node, which wasn’t ideal.
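
Before changing anything, it is worth checking what scheduling constraints the StatefulSet actually carries. A quick sketch, assuming the StatefulSet is named postgresql like its pod:

kubectl get statefulset postgresql -n homelab -o jsonpath='{.spec.template.spec.affinity}'
kubectl describe pod postgresql-0 -n homelab

If the first command prints nothing, no affinity rules are defined and the scheduler is free to pick any node with enough resources; that is exactly how the pod ended up on nova-node.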

Applying Node Affinity Rules

After reviewing my StatefulSet configuration, I realized the nodeAffinity section was incomplete. To keep PostgreSQL off nova-node, I added the following affinity rules to the pod template (spec.template.spec in the StatefulSet manifest):

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: NotIn
                values:
                  - nova-node

This explicitly prevented the pod from being scheduled on the unwanted node.
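
As a side note, the same exclusion could be enforced from the node side with a taint instead of pod affinity. The key and value below are arbitrary placeholders, and this is just an alternative, not what I used here (the trailing dash in the second command removes the taint again):

kubectl taint nodes nova-node workload=no-postgres:NoSchedule
kubectl taint nodes nova-node workload=no-postgres:NoSchedule-

A taint keeps every pod without a matching toleration off the node, so affinity is usually the better fit when only a single workload needs to stay away.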

Verifying Persistent Volume Behavior

Since PostgreSQL was using an NFS-backed Persistent Volume, I ensured it wasn’t affecting node selection.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-nfs
spec:
  capacity:
    storage: 18T
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.58
    path: "/mnt/storage/postgresql"
  mountOptions:
    - nfsvers=4.1

Because NFS volumes are not node-specific, this configuration didn’t bind my pod to nova-node. Kubernetes simply needed proper affinity rules to schedule it elsewhere.
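
A quick way to confirm that is to inspect the volume and claim directly; for a network-backed PV like this one, the Node Affinity field in the describe output should read <none>, unlike a local volume that is pinned to a specific node:

kubectl describe pv postgres-nfs
kubectl describe pvc -n homelab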

Final Results

After applying the affinity changes, I restarted the StatefulSet so the scheduler would place the pod again.
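
A sketch of that step, assuming the StatefulSet is named postgresql and the manifest file is postgresql-statefulset.yaml:

kubectl apply -f postgresql-statefulset.yaml -n homelab
kubectl rollout restart statefulset/postgresql -n homelab

Then I checked where the pod landed: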

kubectl get pod/postgresql-0 -o wide -n homelab

Output:

NAME           READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
postgresql-0   1/1     Running   0          75s   10.42.5.21   echo-node    <none>           <none>

Success! PostgreSQL now runs on echo-node, avoiding nova-node as intended.


Key Takeaways

Check where your pod is running → "kubectl get pod -o wide"
Apply Node Affinity rules → restrict scheduling using "operator: NotIn"
Verify Persistent Volume properties → ensure NFS-backed volumes don’t influence node selection ("kubectl describe pvc")
Restart the StatefulSet → ensure Kubernetes respects new constraints ("kubectl rollout restart statefulset")

By carefully applying affinity rules and verifying volume bindings, I successfully redirected my PostgreSQL pod to the right node. If you're facing similar issues, hopefully this guide helps you fine-tune your Kubernetes scheduling! 🚀