Your pod crashes in 2 seconds, and logs vanish before you can read them? Here’s how to catch the output before Kubernetes deletes the container.
The Problem: When a pod enters CrashLoopBackOff, the container exits so fast that ‘kubectl logs’ shows nothing useful, or returns “container not found” errors. By the time you run the command, Kubernetes has already torn down the crashed container.
Instant Solution – Previous Container Logs:
kubectl logs POD_NAME --previous
The ‘--previous’ flag shows logs from the LAST crashed instance, even after the container is gone. This captures the fatal error that caused the crash.
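A few related variants are worth knowing (POD_NAME and CONTAINER_NAME are placeholders for your own resources):

```shell
# How many times has the container restarted? (RESTARTS column)
kubectl get pod POD_NAME

# What exit code / reason did Kubernetes record for the last crash?
kubectl describe pod POD_NAME | grep -A 5 "Last State"

# Previous logs from one specific container in a multi-container pod
kubectl logs POD_NAME -c CONTAINER_NAME --previous
```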
Better: Keep Failed Container for Investigation:
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod
spec:
  restartPolicy: Never  # Critical: don't auto-restart
  containers:
  - name: app
    image: your-image
    command: ["sh", "-c"]
    args:
    - |
      echo "Starting debug session..."
      /your/actual/command || sleep 3600  # Keep alive on failure
Why This Works:
‘restartPolicy: Never’ tells Kubernetes “don’t restart this pod even if it fails.” The ‘|| sleep 3600’ keeps the container running for 1 hour after the actual command fails, giving you time to exec in and inspect.
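You can see the ‘||’ fallback behavior locally before putting it in a pod spec. Here ‘false’ stands in for a crashing app command and ‘true’ for one that succeeds:

```shell
# The right-hand side of '||' runs only when the left-hand side fails.
sh -c 'false || echo "app failed; fallback runs"'
sh -c 'true || echo "never printed"'   # prints nothing: the app succeeded
# In the pod spec the fallback is 'sleep 3600' instead of echo,
# so a failed container stays alive for an hour of inspection.
```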
Real-Time Debugging – Exec Before Crash:
For pods that crash mid-execution:
kubectl debug POD_NAME -it --image=busybox --target=CONTAINER_NAME
This attaches an ephemeral debugger container to the pod. It shares the pod's network namespace, and ‘--target’ additionally shares the PID namespace of your crashing container; the target's filesystem is then reachable under /proc/PID/root. You can inspect files, check network connectivity, or examine processes while the actual pod is still running.
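Once the debugger shell is attached, a few busybox commands cover most checks. These are illustrative; the process ID and the health endpoint are assumptions you'd replace with your own:

```shell
# Inside the ephemeral busybox container:
ps aux                        # target's processes visible via the shared PID namespace
ls /proc/1/root/etc           # peek at the target container's filesystem (PID 1 is illustrative)
wget -qO- http://localhost:8080/healthz   # hypothetical health endpoint; localhost works
                                          # because the network namespace is shared
```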
Catch Initialization Failures:
If your init containers crash:
kubectl logs POD_NAME -c INIT_CONTAINER_NAME --previous
Init containers run sequentially before main containers. If one fails, the entire pod fails to start. The ‘--previous’ flag works here too.
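A minimal sketch of a pod whose init container can fail (the pod name, init step, and ‘db-service’ address are hypothetical placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-debug-pod          # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: wait-for-db           # hypothetical init step
    image: busybox
    # Fails (non-zero exit) if the database is unreachable,
    # which blocks the main container from ever starting.
    command: ["sh", "-c", "nc -z db-service 5432"]
  containers:
  - name: app
    image: your-image
```

If the init step crashes, ‘kubectl logs init-debug-pod -c wait-for-db --previous’ retrieves its output.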
