One of my pods won't start: it keeps restarting and is stuck in a CrashLoopBackOff state.
NAME                                                        READY   STATUS             RESTARTS   AGE
quasar-api-staging-14c385ccaff2519688add0c2cb0144b2-3r7v4   0/1     CrashLoopBackOff   72         5h
Describing the pod gives me the following (events only):
FirstSeen LastSeen Count From SubobjectPath Reason Message
57m 57m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 7515ced7f49c
57m 57m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 7515ced7f49c
52m 52m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 2efe8885ad49
52m 52m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 2efe8885ad49
46m 46m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id a4361ebc3c06
46m 46m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id a4361ebc3c06
41m 41m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 99bc3a8b01ad
41m 41m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 99bc3a8b01ad
36m 36m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 3e873c664cde
36m 36m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 3e873c664cde
31m 31m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 97680dac2e12
31m 31m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 97680dac2e12
26m 26m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 42ef4b0eea73
26m 26m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 42ef4b0eea73
21m 21m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 7dbd65668733
21m 21m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 7dbd65668733
15m 15m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id d372cb279fff
15m 15m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id d372cb279fff
10m 10m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id bc7f5a0fe5d4
10m 10m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id bc7f5a0fe5d4
5m 5m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id b545a71af1d2
5m 5m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id b545a71af1d2
3h 25s 43 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Pulled Container image "us.gcr.io/skywatch-app/quasar-api-staging:15.0" already present on machine
25s 25s 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 3e4087281881
25s 25s 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 3e4087281881
3h 5s 1143 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Backoff Back-off restarting failed docker container
The pod's logs don't show much either:
Pod "quasar-api-staging-14c385ccaff2519688add0c2cb0144b2-3r7v4" in namespace "default": container "quasar-api-staging" is in waiting state.
I'm able to run the container locally, and it seems to work fine. I'm not sure what else to check or try. Any help or troubleshooting steps would be greatly appreciated!
Try running

kubectl logs <podid> --previous

to see the logs from the previous instance of the container.
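A minimal triage sequence for a CrashLoopBackOff pod might look like the following (the pod name is taken from the question; all flags are standard kubectl):

```shell
# Pod name copied from the question above
POD=quasar-api-staging-14c385ccaff2519688add0c2cb0144b2-3r7v4

# Logs from the previous (crashed) container instance -- usually the most telling
kubectl logs "$POD" --previous

# Full events plus last-state details (exit code, OOMKilled, etc.)
kubectl describe pod "$POD"

# Just the exit code of the last terminated container, via jsonpath
kubectl get pod "$POD" \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
```

A non-zero exit code here usually points to an application crash on startup, while code 137 typically means the container was killed (often OOM).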
This is a late answer, but it's the one I wish I had found 30 minutes ago. In my case, the cause was that the node had a faulty version of the Docker image on disk, tagged latest. My fix was to delete the faulty image from the node instance:

docker rmi faulty:latest
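Applied to this question's setup, that check could look like the sketch below. The node name and image come from the events in the question; the `gcloud compute ssh` step is one assumed way to reach a GKE node, and your access method may differ:

```shell
# SSH to the affected node (name taken from the events above; assumes GKE gcloud access)
gcloud compute ssh gke-skywatch-cf86c224-node-21bm

# List the locally cached copies of the image and their IMAGE IDs
docker images us.gcr.io/skywatch-app/quasar-api-staging

# Remove the stale tag so the kubelet pulls a fresh copy on the next restart
docker rmi us.gcr.io/skywatch-app/quasar-api-staging:15.0
```

Note that the "already present on machine" line in the events is the clue: the kubelet was reusing the cached image instead of pulling a new one. Setting `imagePullPolicy: Always` in the pod spec, or tagging images with unique versions instead of reusing a tag, avoids this class of problem.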