Applying YAML changes blindly is risky. Preview exactly what will change first.

Before Applying:

# See what kubectl apply will change
kubectl diff -f deployment.yaml

# Output shows:
# - Lines being removed
# + Lines being added
# ~ Lines being modified

Safe Workflow:

# 1. Check diff
kubectl diff -f deployment.yaml
# 2. […]
Category: Kubernetes
Kubernetes: View Real-Time Pod Logs with Stern Instead of kubectl
Tired of running kubectl logs for each pod separately? Stern streams logs from multiple pods simultaneously.

Install Stern:

# Mac
brew install stern
# Linux
wget https://github.com/stern/stern/releases/download/v1.28.0/stern_linux_amd64

Usage:

# Tail all pods matching pattern
stern my-app

# All pods in namespace
stern . -n production

# Color-coded by pod, auto-follows new pods

Why Better: Kubectl […]
Kubernetes Ingress: Expose Multiple Services Through One Load Balancer
Paying for multiple cloud load balancers? Ingress routes traffic to different services based on hostname or path.

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    # NGINX Ingress Controller
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    # Cert-Manager for SSL
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - api.example.com
    - app.example.com
    - admin.example.com
    secretName: tls-secret
[…]
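The spec above is truncated before the routing rules. A hedged sketch of what a host-based rules section typically looks like; the backend service names (api-service, app-service) are placeholders, not from the original post:

```yaml
# Hypothetical continuation of spec: route each hostname to its own Service
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service   # placeholder name
            port:
              number: 80
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service   # placeholder name
            port:
              number: 80
```

One Ingress object like this can front many Services behind a single cloud load balancer.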
Kubernetes: Force Delete Stuck Pods in Terminating State Instantly
Pod stuck in "Terminating" state for hours? Kubernetes is waiting for a graceful shutdown that will never complete. Force delete it properly.

The Problem:

# Pod stuck forever
kubectl get pods
NAME              STATUS       AGE
stuck-pod-abc123  Terminating  3h

# Normal delete doesn't work
kubectl delete pod stuck-pod-abc123
# Still shows Terminating…

Why Pods Get Stuck: When deleting […]
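The post is cut off before the actual fix. The standard approach is a force delete, sketched here against the example pod name from above; the finalizer patch is a last resort, since it bypasses cleanup logic:

```shell
# Skip the graceful-shutdown wait and remove the pod immediately
kubectl delete pod stuck-pod-abc123 --grace-period=0 --force

# If it still hangs, a finalizer is usually blocking deletion.
# Clearing finalizers skips whatever cleanup they guard -- use with care.
kubectl patch pod stuck-pod-abc123 -p '{"metadata":{"finalizers":null}}'
```

Force deletion only removes the API object; if the node is unreachable, the container may keep running there until the kubelet reconciles.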
Debug Kubernetes Pods That Keep Crashing Before Logs Disappear
Your pod crashes in 2 seconds, and logs vanish before you can read them? Here's how to catch the output before Kubernetes deletes the container.

The Problem: When a pod CrashLoops, the container exits so fast that kubectl logs shows nothing useful or "container not found" errors. By the time you run the command, Kubernetes […]
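The entry is truncated, but the usual trick is to read the previous container instance rather than the current one; a sketch (my-pod is a placeholder name):

```shell
# Logs from the previous (crashed) container instance
kubectl logs my-pod --previous

# If even that is empty, pod events usually explain the exit
kubectl describe pod my-pod

# Keep streaming across restarts so output is captured live
kubectl logs my-pod -f
```

`--previous` works because Kubernetes keeps the last terminated container around until the next restart cycle replaces it.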
Why Your Pod “Looks Healthy” But Still Drops Traffic
This is a classic production trap.

Root cause: readinessProbe ≠ livenessProbe. Most people use only one.

Correct pattern:

livenessProbe:
  httpGet:
    path: /health/live
    port: 80
readinessProbe:
  httpGet:
    path: /health/ready
    port: 80

Why this matters:
Liveness = should I restart?
Readiness = should I receive traffic?

If your app is warming caches or reconnecting DBs, traffic arrives […]
Why Pods Restart Even When CPU & Memory Look Fine
Everything green… pods still restart.

Hidden killers:
- Liveness probes that are too aggressive
- Short timeouts during GC pauses
- Cold starts under load

Fix:
- Separate readiness & liveness
- Increase initialDelaySeconds
- Avoid HTTP probes on heavy endpoints

livenessProbe:
  initialDelaySeconds: 30
  timeoutSeconds: 5
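A fuller sketch of the tuning above; the endpoint paths, port, and numbers are illustrative starting points, not values from the original post:

```yaml
livenessProbe:
  httpGet:
    path: /health/live    # keep this endpoint cheap: no DB or cache calls
    port: 80
  initialDelaySeconds: 30 # survive cold starts before the first check
  timeoutSeconds: 5       # tolerate GC pauses without a false failure
  failureThreshold: 3     # require several consecutive failures to restart
readinessProbe:
  httpGet:
    path: /health/ready
    port: 80
  periodSeconds: 10
```

`failureThreshold` matters as much as the timeout: one slow response should never trigger a restart on its own.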
Why Pods Restart Without Errors (OOMKilled Isn’t Always Logged)
Pods restart, logs look clean.

Hidden reason: Memory limit hit → kernel kills container → no app log.

Check:
kubectl describe pod <pod-name>

Look for:
Reason: OOMKilled

Fix: Increase memory or fix the memory leak; don't just scale blindly.
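Two quick checks that pair with the describe command above; the pod name is a placeholder, and `kubectl top` assumes metrics-server is installed in the cluster:

```shell
# Confirm the OOMKill in the container's last state
kubectl describe pod my-pod | grep -A3 "Last State"

# Compare live usage against the configured limit
kubectl top pod my-pod
```

If actual usage sits close to the limit under normal load, the limit is too tight; if it climbs steadily over hours, suspect a leak instead.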
Use Liveness vs Readiness Correctly
livenessProbe:
  httpGet:
    path: /health
    port: 80

Why it matters: Wrong probes cause infinite restarts or traffic to broken pods.
Use Resource Requests to Prevent Noisy Neighbors
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"

Why it matters: Without requests, Kubernetes can't schedule intelligently → random slowdowns.
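Requests pair naturally with limits; a sketch extending the block above (the limit values are illustrative, not from the original post):

```yaml
resources:
  requests:          # what the scheduler reserves on a node
    cpu: "250m"
    memory: "256Mi"
  limits:            # hard ceiling; exceeding the memory limit means OOMKill
    cpu: "500m"
    memory: "512Mi"
```

Requests drive placement; limits drive throttling and kills. Setting only one of the two leaves half the picture undefined.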
Why Liveness ≠ Readiness Probes
livenessProbe:
  httpGet:
    path: /health
readinessProbe:
  httpGet:
    path: /ready

Why it matters: Liveness restarts pods, readiness controls traffic. Mixing them causes outages.
Readiness vs Liveness — One Mistake Takes Down Clusters
livenessProbe:
  httpGet:
    path: /health/live
readinessProbe:
  httpGet:
    path: /health/ready

Why this matters: Liveness restarts pods. Readiness controls traffic. Mixing them causes cascading failures.
Why Readiness Probes Matter More Than Liveness
Most outages happen because traffic hits half-ready pods.

readinessProbe:
  httpGet:
    path: /health/ready
    port: 80

Key idea:
Liveness = "restart me"
Readiness = "send traffic or not"

If you only use liveness → Kubernetes will happily route traffic to chaos.
Detect CrashLoopBackOff Root Cause in 10 Seconds
Most people stare at logs too long. Kubernetes already tells you why your pod died.

Fastest diagnostic command:
kubectl describe pod <pod-name>

Look for:
Last State: Terminated
Reason: OOMKilled

Why this matters: Logs may be empty if the container never fully started.

Real fix: Increase memory requests, not limits:

resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "1Gi"

Result: Stable […]
Kubernetes Pods Restart with No CPU or Memory Spikes
Metrics look clean, pods restart anyway.

Why it happens: The process exits with code 0 (app logic exit).
Why it matters: Kubernetes assumes failure.
Vital fix: Keep the main process alive or use proper controllers.
Kubernetes Pods Restart Without Errors
No crash logs, still restarts.

Why it happens: The liveness probe fails silently.
Why it matters: Your app is healthy, but Kubernetes thinks it isn't.
Smart fix: Separate readiness and liveness logic.
Kubernetes Deployments Succeed But Traffic Fails
Pods are "Running", app unreachable.

Why it happens: The Service selector does not match the pod labels.
Why it matters: Looks healthy, acts broken.
Smart fix: Verify selectors, not just pod status.
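Verifying a selector mismatch takes three commands; a sketch (my-service is a placeholder name):

```shell
# What the Service is selecting on
kubectl get svc my-service -o jsonpath='{.spec.selector}'

# What labels the pods actually carry
kubectl get pods --show-labels

# The giveaway: an empty endpoints list means the selector matches nothing
kubectl get endpoints my-service
```

If the endpoints list is empty while pods are Running, the selector and labels disagree, and no amount of pod restarting will help.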
Kubernetes Pods Restart Without Errors
No crash logs, still restarting.

Why: Resource limits exceeded silently.
Tip: Check memory limits and OOMKills.
Kubernetes Services Work Internally but Fail Externally
Pods communicate, users cannot.

Why: Service type mismatch (ClusterIP vs LoadBalancer).
Tip: Explicitly define the service exposure strategy.
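Making the exposure strategy explicit means writing the `type` field instead of relying on the default; a minimal sketch (name, labels, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer   # the default is ClusterIP, which is internal-only
  selector:
    app: my-app
  ports:
  - port: 80           # port the load balancer exposes
    targetPort: 8080   # port the container listens on
```

With `ClusterIP` (the default when `type` is omitted), pods reach each other fine while external users get nothing, which matches the symptom above exactly.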
Kubernetes Pods Behave Differently After Restarts
Same image, different runtime behavior.

Why: Environment variables and ConfigMaps may change order or values.
Tip: Version configuration separately from images.
Kubernetes Nodes Slowly Waste Resources
No alerts, rising costs.

Why: Requests stay high while actual usage drops.
Maintenance tip: Recalculate requests after load stabilizes.
Kubernetes Clusters Age Without Showing Errors
Everything green, costs go up.

Why: Unused resources are rarely reclaimed automatically.
Tip: Audit requests vs real usage periodically.
Kubernetes Services Work But Traffic Is Uneven
Some pods are overloaded, others idle.

Why: Client-side connection reuse breaks load balancing.
Tip: Tune keep-alive and connection pools.
Kubernetes Pods Restart Without Errors
Everything looks "normal".

Why: OOMKills don't always surface clearly.
Tip: Check container memory limits vs actual usage patterns.
Kubernetes Deployments Appear “Healthy” but Perform Poorly
Pods are green, users complain.

Why: Resource limits hide throttling issues.
Fix: Monitor CPU throttling, not just pod status. Healthy pods can still be starved.
Kubernetes Pods Get Killed Without Errors
No logs. Just gone.

Why: OOMKill triggered before the logging flush.
Fix: Increase memory requests, not limits.
Kubernetes Pods Restart Without Scaling Events
Traffic increases, no scale-out.

Why: Requests never hit resource thresholds.
Fix: Tune resource requests, not limits.
Kubernetes ConfigMap Changes Don’t Apply
You updated the config, the app ignores it.

Why: ConfigMaps are not auto-reloaded.
Fix: Restart pods or mount with a checksum-based rollout.
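One common checksum-based rollout pattern (popularized by Helm charts) hashes the ConfigMap into a pod-template annotation, so any config change alters the pod template and triggers a rolling restart; a hedged sketch, with the annotation key and hash as placeholders:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Any change to the ConfigMap changes this hash, which changes
        # the pod template, which forces a rolling restart
        checksum/config: "<sha256 of the rendered ConfigMap>"
```

The manual equivalent is `kubectl rollout restart deployment/<name>` after each config change, which bumps the template without touching the image.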
Kubernetes HPA Doesn’t Scale When Traffic Spikes
Metrics look fine, pods stay the same.

Why: CPU-based autoscaling ignores I/O-bound workloads.
Fix: Scale using custom or request-based metrics.
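A sketch of a request-based HPA using the `autoscaling/v2` API; the metric name and target value are illustrative and assume a custom-metrics adapter (e.g. a Prometheus adapter) is installed:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app          # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # assumed custom metric
      target:
        type: AverageValue
        averageValue: "100"              # scale out above 100 req/s per pod
```

For an I/O-bound service, requests-per-second tracks real load far better than CPU, which can sit near idle while the app waits on the network or disk.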
Kubernetes Pods Restart Without Logs
No errors, no stack traces.

Why: The container is killed before logs flush (OOMKill).
Fix: Inspect pod events, not container logs.
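Inspecting events rather than logs looks like this in practice (my-pod is a placeholder name):

```shell
# Events scoped to one pod: scheduling, kills, probe failures
kubectl get events --field-selector involvedObject.name=my-pod

# Or via describe: OOMKills appear under Last State
kubectl describe pod my-pod | grep -B2 OOMKilled
```

Events come from the kubelet and controllers, so they survive even when the container died too fast to write anything.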




