Updated: 4/14/2015
This document covers the lifecycle of a pod. It is not exhaustive, but an introduction to the topic.
Consistent with the overall API conventions, `phase` is a simple, high-level summary of where a pod is in its lifecycle. It is not intended to be a comprehensive rollup of observations of container-level or even pod-level conditions or other state, nor is it intended to be a comprehensive state machine.
The number and meanings of `PodPhase` values are tightly guarded. Other than what is documented here, nothing should be assumed about pods with a given `PodPhase`.
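The small, tightly guarded set of phases can be modeled as a plain enumeration. This is an illustrative sketch of the v1 API's `PodPhase` strings, not the authoritative type:

```python
from enum import Enum

class PodPhase(Enum):
    """High-level summary of where a pod is in its lifecycle.

    Mirrors the API's PodPhase strings; the set is deliberately
    small, and code should not infer container-level detail from it.
    """
    PENDING = "Pending"      # accepted by the system, containers not all running
    RUNNING = "Running"      # bound to a node, containers created
    SUCCEEDED = "Succeeded"  # all containers terminated successfully
    FAILED = "Failed"        # containers terminated, at least one in failure
    UNKNOWN = "Unknown"      # pod state could not be obtained

print(PodPhase("Succeeded").name)  # SUCCEEDED
```

Because the enum round-trips the API strings, unknown values raise immediately rather than being silently carried along.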
A pod containing containers that specify readiness probes will also report the `Ready` condition. Condition status values may be `True`, `False`, or `Unknown`.
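Note that condition status is a tri-state string, not a boolean, so `Unknown` must be treated as not-ready rather than as an error. A minimal sketch (the `PodCondition` record here is illustrative, not the API type):

```python
from dataclasses import dataclass

@dataclass
class PodCondition:
    """Illustrative stand-in for a pod condition entry."""
    type: str    # e.g. "Ready"
    status: str  # "True", "False", or "Unknown" -- API strings, not bools

def is_ready(conditions):
    """A pod counts as ready only if a Ready condition exists with status "True"."""
    return any(c.type == "Ready" and c.status == "True" for c in conditions)

print(is_ready([PodCondition("Ready", "Unknown")]))  # False
```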
More detailed information about the current (and previous) container statuses can be found in `containerStatuses`. The information reported depends on the current `ContainerState`, which may be `Waiting`, `Running`, or `Termination` (sic).
The `RestartPolicy` may be `Always`, `OnFailure`, or `Never`, and applies to all containers in the pod. `RestartPolicy` only refers to restarts of the containers by the Kubelet on the same node. As discussed in the pods document, once bound to a node, a pod may never be rebound to another node. This means that some kind of controller is necessary in order for a pod to survive node failure, even if just a single pod at a time is desired.
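The per-container restart decision implied by these three policies can be sketched as follows (a hypothetical helper; the real Kubelet also applies restart back-off, which is omitted here):

```python
def should_restart(restart_policy: str, exit_code: int) -> bool:
    """Sketch of the Kubelet's per-container restart decision.

    Always:    restart regardless of how the container exited.
    OnFailure: restart only if the container exited non-zero.
    Never:     never restart.
    """
    if restart_policy == "Always":
        return True
    if restart_policy == "OnFailure":
        return exit_code != 0
    if restart_policy == "Never":
        return False
    raise ValueError(f"unknown RestartPolicy: {restart_policy}")

print(should_restart("OnFailure", 1))  # True
print(should_restart("OnFailure", 0))  # False
```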
The only controller we have today is `ReplicationController`. `ReplicationController` is only appropriate for pods with `RestartPolicy = Always`; it should refuse to instantiate any pod that has a different restart policy.
There is a legitimate need for a controller which keeps pods with other policies alive. Both of the other policies (`OnFailure` and `Never`) eventually terminate, at which point the controller should stop recreating them. Because of this fundamental distinction, let's hypothesize a new controller, called `JobController` for the sake of this document, which can implement this policy.
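The fundamental distinction between the two controllers reduces to a recreate decision. A sketch, assuming the hypothetical `JobController` semantics described above (all names here are illustrative):

```python
def should_recreate(restart_policy: str, pod_phase: str) -> bool:
    """Sketch of a controller's decision to replace a pod.

    ReplicationController (RestartPolicy=Always) recreates
    unconditionally. The hypothetical JobController stops once the
    pod reaches a terminal phase it should not retry: Succeeded
    always ends the job; Failed is retried only under OnFailure.
    """
    if restart_policy == "Always":
        return True                       # ReplicationController semantics
    if pod_phase == "Succeeded":
        return False                      # work is done; stop recreating
    if pod_phase == "Failed":
        return restart_policy == "OnFailure"
    return False                          # pod still Pending/Running: nothing to do
```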
In general, pods which are created do not disappear until someone destroys them. This might be a human or a `ReplicationController`. The only exception to this rule is that pods with a `PodPhase` of `Succeeded` or `Failed` for more than some duration (determined by the master) will expire and be automatically reaped.
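The reaping rule only ever applies to pods that have reached a terminal phase. A sketch; the TTL value and helper name are illustrative, since the actual duration is determined by the master:

```python
import time

TERMINATED_POD_TTL = 24 * 60 * 60  # seconds; illustrative, set by master policy

def expired(pod_phase: str, finished_at: float, now: float = None) -> bool:
    """Only pods in a terminal phase (Succeeded or Failed) ever expire;
    pods in any other phase persist until someone destroys them."""
    if pod_phase not in ("Succeeded", "Failed"):
        return False
    if now is None:
        now = time.time()
    return now - finished_at > TERMINATED_POD_TTL
```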
If a node dies or is disconnected from the rest of the cluster, some entity within the system (call it the `NodeController` for now) is responsible for applying policy (e.g. a timeout) and marking any pods on the lost node as `Failed`.
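The timeout-then-mark policy can be sketched as below. All names and the pod representation are illustrative; the point is that nothing happens until the timeout elapses, after which every pod bound to the lost node is marked `Failed`:

```python
def mark_lost_node_pods(pods, lost_node, timeout_s, down_for_s):
    """Sketch of the NodeController policy: once a node has been
    unreachable longer than the timeout, all pods bound to it are
    marked Failed. Pods on other nodes are untouched."""
    if down_for_s <= timeout_s:
        return pods  # within the grace period: change nothing
    return [
        {**p, "phase": "Failed"} if p["node"] == lost_node else p
        for p in pods
    ]
```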
Some example scenarios, and the resulting pod phase under each restart policy:

| Scenario | `RestartPolicy=Always` | `OnFailure` | `Never` |
| --- | --- | --- | --- |
| Pod is `Running`, 1 container, container exits success | `Running` | `Succeeded` | `Succeeded` |
| Pod is `Running`, 1 container, container exits failure | `Running` | `Running` | `Failed` |
| Pod is `Running`, 2 containers, container 1 exits failure | `Running` | `Running` | `Running` |
| ...and then container 2 also exits | `Running` | `Running` | `Failed` |
| Pod is `Running`, container becomes OOM | `Running` | `Running` | `Failed` |
| Pod is `Running`, a disk dies | `Failed` | `Failed` | `Failed` |
| Pod is `Running`, its node is segmented out | `Failed` | `Failed` | `Failed` |
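The container-exit scenarios above follow one rule, which can be sketched as a single function (a hypothetical helper covering only the container-exit rows, not the disk or network-partition cases):

```python
def phase_after_exit(restart_policy: str, exit_success: bool,
                     other_containers_running: bool) -> str:
    """Resulting phase of a Running pod after one container exits.

    A restarted container keeps the pod Running; otherwise the pod
    stays Running while other containers run, and reaches a terminal
    phase only when the last container has exited.
    """
    if restart_policy == "Always":
        return "Running"                      # container is restarted
    if restart_policy == "OnFailure" and not exit_success:
        return "Running"                      # restarted on failure
    # Never, or OnFailure after a successful exit: no restart.
    if other_containers_running:
        return "Running"
    return "Succeeded" if exit_success else "Failed"
```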