Pod Lifecycle


Have you ever wondered how Kubernetes manages your containers behind the scenes? πŸ€”

Well, wonder no more! In this blog post, we're going to take a fun and friendly approach to exploring the lifecycle of a pod - the basic unit of deployment in Kubernetes. πŸš€ So grab a cup of coffee β˜•οΈ, get comfy 😎, and let's dive in! πŸ’»

What is a Pod?

A Kubernetes pod is a group of one or more containers, and is the smallest deployable unit of a Kubernetes application. Containers are grouped into pods so that resources can be shared intelligently: containers in the same pod share the same compute resources and network identity. These compute resources are pooled together across nodes to form clusters, which provide a more powerful and intelligently distributed system for running applications.

Why does Kubernetes use pods?

  • A pod can have a shared network namespace and shared storage volumes, which makes it easier for the containers within the pod to communicate and share data.

  • Additionally, pods can be scheduled on nodes in a Kubernetes cluster, which allows Kubernetes to manage resources and ensure that the containers have the necessary CPU and memory to run.

You might be wondering: why doesn't Kubernetes run containers directly, instead of running pods?

If Kubernetes were to run containers directly, it would be more difficult to manage them as a cohesive unit. Each container would need to be managed individually, making it harder to coordinate networking, storage, and scheduling across the containers. Pods provide a simpler and more powerful way to manage containerized applications in Kubernetes.
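To make this concrete, here is a minimal sketch of a Pod manifest (the names and image are purely illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # hypothetical pod name
spec:
  containers:
    - name: web           # hypothetical container name
      image: nginx:1.25   # the container image to run
      ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f pod.yaml` kicks off the lifecycle described below.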

The lifecycle of a pod goes through several phases, which are as follows:

  1. Pending: When a pod is created, it enters the pending phase. During this phase, Kubernetes is trying to schedule the pod onto a node in the cluster. If there are not enough resources available on any of the nodes to run the pod, it will remain in the pending phase until resources become available.

  2. Running: The Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.

  3. Succeeded: If the process running inside the container(s) completes successfully, the pod enters the Succeeded phase. The container(s) are terminated, and the pod is no longer running.

  4. Failed: All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.

  5. Unknown: If the state of the pod cannot be determined, the pod enters the Unknown phase. This can happen if the pod is unable to communicate with the Kubernetes API server.

  • In addition to these phases, a pod can also be deleted. When a pod is deleted, it is removed from the Kubernetes cluster.

  • When a Pod is being deleted, it is shown as Terminating by some kubectl commands. This Terminating status is not one of the Pod phases. A Pod is granted a grace period to terminate gracefully, which defaults to 30 seconds. You can use the --force flag to terminate a pod immediately.
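The current phase lives in the Pod's status. You can read it with `kubectl get pod <name> -o jsonpath='{.status.phase}'`; in the Pod object itself, an illustrative status fragment might look like this (the timestamp is hypothetical):

```yaml
status:
  phase: Running    # one of Pending, Running, Succeeded, Failed, Unknown
  startTime: "2023-01-01T00:00:00Z"   # when the Pod was bound to a node
```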

Understanding Kubernetes Architecture

The lifecycle, step by step:

Now let's see how the components of the cluster work together to create a pod:

  1. The user creates a YAML or JSON manifest that specifies the desired characteristics of the Pod, including the container image, resource limits, networking, and other properties.

  2. The manifest is submitted to the Kubernetes API Server by using kubectl commands. The API Server stores the manifest in etcd, which is a distributed key-value store that serves as the primary source of truth for the cluster's configuration data. This enables users to track changes to the configuration over time and revert to previous versions if necessary. When a user submits a request to create or modify a resource in the cluster, the API server stores the desired state of that resource in etcd.

    Note: "primary source of truth" refers to the source of the most up-to-date and accurate information about the desired state of the cluster.

  3. etcd then acknowledges the successful write back to the API Server, confirming that the desired state has been stored.

  4. Now the Scheduler comes in. The Scheduler is responsible for selecting a suitable Node in the cluster on which to run the Pod. It continuously checks with the API Server for workloads that have not yet been assigned to a worker node. When it finds one, the Scheduler examines the available resources on each Node, as well as any scheduling constraints or preferences specified in the Pod manifest, and selects the best candidate Node.

  5. Once a Node has been selected, the Scheduler informs the API Server of its choice, and the API Server updates the Pod's configuration in etcd with the Node assignment.

  6. The API Server then coordinates the creation of the pod on that worker node: it notifies the kubelet on the selected node that a pod needs to be spun up there.

  7. The kubelet works with the container runtime engine to create the desired pod, with the appropriate container(s) running inside it.

  8. And with that, we have a pod running on the Kubernetes cluster! 🎉
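The scheduling constraints mentioned in step 4 are declared right in the Pod spec. Here's a hedged sketch (labels, names, and numbers are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constrained-pod      # hypothetical name
spec:
  nodeSelector:
    disktype: ssd            # only schedule on nodes labeled disktype=ssd
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:            # the Scheduler uses requests to find a node with capacity
          cpu: "250m"
          memory: "128Mi"
        limits:              # the runtime enforces these upper bounds
          cpu: "500m"
          memory: "256Mi"
```

If no node can satisfy the `nodeSelector` and resource requests, the Pod stays in the Pending phase, exactly as described earlier.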

Let's say a pod goes down for some reason. How will the system know that a new pod needs to be created in its place?

  1. The kubelet running on the node where the failed pod was running detects that the pod has stopped running and sends an update to the Kubernetes API server to reflect the pod's status as "not running".

  2. This is where the Controller Manager comes into the picture. The Controller Manager continuously monitors the state of all pods and their associated replication controllers, Deployment objects, or other controller objects. When it detects that a pod is not running, it triggers the creation of a new pod to replace it.

  3. The same steps described above are then repeated, from the API Server through to the creation of the new pod.

The Controller Manager keeps checking whether actual state == desired state, and if not, it triggers the creation of a new pod.

This entire process is automated and transparent to the user or application, ensuring that failed pods are quickly replaced with new instances to maintain the desired level of service.
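In practice you rarely create bare pods for this reason; instead you declare the desired state through a controller object such as a Deployment, and the control loop keeps reality in sync with it. A minimal sketch (names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
spec:
  replicas: 3             # desired state: keep 3 pods running at all times
  selector:
    matchLabels:
      app: my-app
  template:               # pod template used to create replacement pods
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

If one of the 3 pods dies, the Controller Manager sees actual state (2) != desired state (3) and spins up a replacement from the template.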

Health of Pod:

  • Maintaining the health of a Pod is critical to ensure that the applications or services running in the Pod are available and reliable.

  • Probes are used to monitor the health of the containers running in a Pod. A probe is a diagnostic test performed by Kubernetes to determine the health of a container. The results of the probe can be used by Kubernetes to make decisions about the state of the container, such as whether to restart the container or mark it as unhealthy.

There are three types of probes in Kubernetes:

  1. Liveness Probe: A liveness probe determines whether the container is alive or not. If the liveness probe fails, Kubernetes will restart the container. The liveness probe is useful for detecting and recovering from container crashes.

  2. Readiness Probe: A readiness probe determines whether the container is ready to receive traffic. If the readiness probe fails, the container will be removed from the service endpoint. This is useful for ensuring that containers are fully initialized before they receive traffic.

  3. Startup Probe: A startup probe determines whether the application inside the container has finished starting up. While the startup probe is running, liveness and readiness checks are held off, which makes it useful for slow-starting containers that would otherwise be killed by the liveness probe before they finish initializing. Startup probes were introduced in Kubernetes 1.16.
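Here's how the three probes might be configured on a container (the endpoint paths and timings are illustrative assumptions):

```yaml
spec:
  containers:
    - name: web
      image: nginx:1.25
      startupProbe:             # runs first; liveness/readiness wait until it succeeds
        httpGet:
          path: /healthz        # hypothetical health endpoint
          port: 80
        failureThreshold: 30    # allow up to 30 * 5s = 150s for slow startup
        periodSeconds: 5
      livenessProbe:            # restart the container if this keeps failing
        httpGet:
          path: /healthz
          port: 80
        periodSeconds: 10
      readinessProbe:           # remove the pod from Service endpoints if this fails
        httpGet:
          path: /ready          # hypothetical readiness endpoint
          port: 80
        periodSeconds: 5
```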

Overall:

API Server: For central management

etcd: For storing and managing configuration data and metadata

Scheduler: For choosing the best node to place the pod on

Controller Manager: For matching actual state with desired state

So there you have it - the lifecycle of a pod in Kubernetes, explained in simple terms. πŸŽ‰ We hope this blog post has given you a better understanding of how pods work behind the scenes and why they're such an important building block for container orchestration. πŸ’»πŸš€

As always, if you have any questions or comments, we'd love to hear from you. πŸ’¬ Thanks for reading, and happy podding! πŸ³πŸŽ‰
