Pod status phases
A pod can be in one of the following five phases (note that a phase is distinct from the status conditions described further below):
```go
PodPending   PodPhase = "Pending"
PodRunning   PodPhase = "Running"
PodSucceeded PodPhase = "Succeeded"
PodFailed    PodPhase = "Failed"
PodUnknown   PodPhase = "Unknown"
```
- Pending: the request to create the pod has been accepted by Kubernetes, but the containers have not started yet. The pod may be in any of these four stages: writing data to etcd, scheduling, pulling the image, or starting the container. Pending is usually accompanied by ADDED and MODIFIED watch events.
- Running: the pod has been bound to a node and all of its containers have been created; at least one container is running, or is starting or restarting.
- Succeeded: all containers in the pod have terminated successfully on their own, and Kubernetes will never restart them; this typically occurs when running a Job.
- Failed: all containers in the pod have terminated, and at least one container terminated in failure (exited with a non-zero exit code or was killed by the system).
- Unknown: the state of the pod could not be obtained for some reason, usually because of a communication error with the pod's host node.
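If you only want the phase, you can read it straight out of the pod's status. A minimal sketch (the pod name nginx is just an example):

```bash
# Print only the phase field of the pod's status
kubectl get pod nginx -o jsonpath='{.status.phase}'
# Example output: Running
```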
Pod status conditions
Conditions are essentially a record of the pod's most recent status transitions. Each condition contains a status, a type (each condition corresponds to one type: PodScheduled, Ready, Initialized, or Unschedulable), and possibly a reason.
```yaml
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-08-07T10:11:56Z
    status: "True"
    type: Initialized          # the type of the condition recorded at each probe
  - lastProbeTime: null
    lastTransitionTime: 2018-08-07T10:24:18Z
    message: 'containers with unready status: [nginx2]'
    reason: ContainersNotReady
    status: "False"            # "False" means this probe found a problem; the cause is given in message and reason
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-08-07T10:11:56Z
    status: "True"
    type: PodScheduled
------
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-08-05T15:36:44Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-08-07T10:36:25Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-08-05T15:36:44Z
    status: "True"
    type: PodScheduled
  containerStatuses:
```
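To see the conditions at a glance without dumping the whole status block, a jsonpath range expression works. A small sketch (again assuming a pod named nginx):

```bash
# Print each condition as "type=status", one per line
kubectl get pod nginx -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
# Example output:
#   Initialized=True
#   Ready=False
#   PodScheduled=True
```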
Here is a complete example:
```
[root@k8s-master cka]# kubectl get pods nginx -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2018-08-07T10:47:43Z
  labels:
    app: nginx
  name: nginx
  namespace: default
  resourceVersion: "89685"
  selfLink: /api/v1/namespaces/default/pods/nginx
  uid: 500f5920-9a2f-11e8-8140-000c29850765
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-gt88m
      readOnly: true
  - image: nginx
    imagePullPolicy: Always
    name: nginx2
    ports:
    - containerPort: 80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-gt88m
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: k8s-node1
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: default-token-gt88m
    secret:
      defaultMode: 420
      secretName: default-token-gt88m
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-08-07T10:47:43Z
    status: "True"        <<<<<<<<<<<<<<<<<<<<
    type: Initialized     <<<<<<<<<<<<<<<<<<<<
  - lastProbeTime: null
    lastTransitionTime: 2018-08-07T10:49:10Z
    message: 'containers with unready status: [nginx2]'
    reason: ContainersNotReady
    status: "False"       <<<<<<<<<<<<<<<<<<<<
    type: Ready           <<<<<<<<<<<<<<<<<<<<
  - lastProbeTime: null
    lastTransitionTime: 2018-08-07T10:47:43Z
    status: "True"        <<<<<<<<<<<<<<<<<<<<
    type: PodScheduled    <<<<<<<<<<<<<<<<<<<<
  containerStatuses:
  - containerID: docker://2dc78277c7664db83b06076ff48e71acde1feed51dbf81a272e8730a4ffbbf13
    image: nginx:latest
    imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
    lastState: {}
    name: nginx
    ready: true
    restartCount: 0
    state:
      running:            <<<<<<<<<<<<<<<<<<<<
        startedAt: 2018-08-07T10:47:50Z
  - containerID: docker://7da0ca7b1ea086df804228493453e88ac1c14078882e52a9bf30589e922b824f
    image: nginx:latest
    imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
    lastState:
      terminated:
        containerID: docker://7da0ca7b1ea086df804228493453e88ac1c14078882e52a9bf30589e922b824f
        exitCode: 1
        finishedAt: 2018-08-07T10:49:10Z
        reason: Error
        startedAt: 2018-08-07T10:49:07Z
    name: nginx2
    ready: false
    restartCount: 3
    state:
      waiting:            <<<<<<<<<<<<<<<<<<<<
        message: Back-off 40s restarting failed container=nginx2 pod=nginx_default(500f5920-9a2f-11e8-8140-000c29850765)
        reason: CrashLoopBackOff    #### the container's state reason is what appears in the STATUS column of kubectl get pods
  hostIP: 172.16.199.101
  phase: Running
  podIP: 10.244.1.11
  qosClass: BestEffort
  startTime: 2018-08-07T10:47:43Z
[root@k8s-master cka]# kubectl get pods nginx
NAME      READY     STATUS             RESTARTS   AGE
nginx     1/2       CrashLoopBackOff   3          2m
```
So the STATUS that kubectl get pods reports is actually the most recent state reason of the containers in the pod, not the pod's phase.
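You can verify this by pulling the containers' state reasons out of the same status block. A sketch against the pod above (for the healthy nginx container .state.waiting.reason is empty, so only nginx2 prints a reason):

```bash
# Print each container's name, ready flag, and waiting reason, if any;
# for nginx2 this prints CrashLoopBackOff, matching the STATUS column above
kubectl get pod nginx -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.ready}{"\t"}{.state.waiting.reason}{"\n"}{end}'
```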