Kubernetes Architecture in Simple Words

Sarvar
6 min read · Apr 25, 2023


Hey,

My name is Sarvar, and I am a highly skilled Senior Developer at Luxoft India. With years of experience working on cutting-edge technologies, I have honed my expertise in Cloud Operations (Azure and AWS), Data Operations, Data Analytics, and DevOps. Throughout my career, I’ve worked with clients from all around the world, delivering excellent results and going above and beyond expectations. I am passionate about learning the latest and trending technologies.

In this article, I’ll do my best to explain Kubernetes in simple terms by walking through its architecture. We will look at each Kubernetes component, what it does, and where it runs, and we will use an architecture diagram to make the pieces easier to picture. Let’s begin.

What is Kubernetes -

Kubernetes (commonly abbreviated as “K8s”) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. At its core, Kubernetes provides a way to control and coordinate many containers running across a cluster of servers. It offers a uniform API and toolset for deploying, scaling, and upgrading applications while abstracting away much of the underlying complexity of managing containers at scale.

Kubernetes is highly scalable, fault-tolerant, and flexible. It supports a number of container runtimes, including Docker, along with a wide choice of networking and storage options. Because it can be deployed on-premises, in the cloud, or in hybrid environments, Kubernetes has become a popular choice for modern, cloud-native applications.

Kubernetes Architecture -

The Kubernetes architecture is made up of a number of components that work together to provide a powerful platform for container orchestration. These components include:

(Figure: Kubernetes architecture diagram)

Kubernetes Master Node -

The master components are the control plane components that manage the cluster. They include:

1. API Server:

The API server acts as the front end for the Kubernetes API and is a crucial part of the control plane. It provides a RESTful interface through which the other Kubernetes components and external clients, such as developers and administrators, communicate with the cluster.

The API server accepts, validates, and processes requests from clients and updates the state of the Kubernetes cluster accordingly. It stores and manages Kubernetes objects such as Pods, Services, Deployments, and ConfigMaps. Running on the master node, the API server is connected to etcd, the distributed key-value store Kubernetes uses to hold configuration data and state information. It also interacts with the other control plane components, such as the controller manager and scheduler, and with the kubelet running on each worker node.
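As a quick illustration, here is a minimal sketch of talking to the API server with the official Kubernetes Python client (an assumption on my part; the article does not prescribe a client). It requires `pip install kubernetes` and a valid kubeconfig:

```python
# A minimal sketch using the official Kubernetes Python client (an assumption;
# any client that speaks the REST API would do). Requires a kubeconfig, e.g. ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()   # authenticate against the API server
v1 = client.CoreV1Api()     # client for the core/v1 API group

# Each call below is an HTTPS request to the API server's RESTful interface.
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```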

2. ETCD:

etcd is a distributed key-value store that holds the configuration and state data for the entire cluster. It is a highly available, fault-tolerant database that serves as the reliable source of truth for Kubernetes objects such as Pods, Services, Deployments, and more.

etcd runs on the master node alongside the API server, scheduler, and controller manager and is an essential part of the control plane. Because it provides a consistent and dependable way to store and retrieve data across the whole cluster, Kubernetes can maintain a uniform view of the cluster’s state.

etcd’s distributed architecture enables high availability and fault tolerance: it replicates its data across multiple nodes so the data remains accessible even if one or more nodes fail. In addition to storing Kubernetes objects, etcd offers a watch API that Kubernetes components and external clients can use to track changes to the cluster’s state in real time. This allows Kubernetes to react quickly to changes in the cluster and keep the desired state maintained.
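Clients normally consume these etcd-backed watches through the API server rather than querying etcd directly. A minimal sketch with the Python client (same assumptions as above):

```python
# Watching for pod changes via the API server, whose watch endpoints are backed
# by etcd's watch mechanism. Assumes the official `kubernetes` Python client.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

w = watch.Watch()
# Streams ADDED / MODIFIED / DELETED events as the cluster state changes.
for event in w.stream(v1.list_namespaced_pod, namespace="default", timeout_seconds=30):
    pod = event["object"]
    print(event["type"], pod.metadata.name, pod.status.phase)
```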

3. Controller Manager:

The Controller Manager in Kubernetes is the component that runs the numerous controllers that watch the cluster’s state and work to keep it at the desired state. Each controller is responsible for a particular kind of resource, for example replication, nodes, or service endpoints, and the Controller Manager runs all of these controllers as independent control loops bundled into a single process.

Each controller continuously compares the cluster’s current state with the desired state recorded in the API server and takes corrective action to reconcile any difference, while the Controller Manager makes sure these control loops stay running and healthy.
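The reconciliation pattern these controllers follow can be sketched roughly like this. This is a simplified, hypothetical illustration of the control-loop idea, not the actual kube-controller-manager code:

```python
import time

def reconcile_replicas(observed_pods: int, desired_replicas: int) -> int:
    """Return how many pods to add (positive) or remove (negative).
    Hypothetical stand-in for what a replication-style controller computes."""
    return desired_replicas - observed_pods

def control_loop(get_desired, get_observed, apply_change, interval_seconds=5):
    # Endlessly drive the observed state toward the desired state.
    while True:
        diff = reconcile_replicas(get_observed(), get_desired())
        if diff != 0:
            apply_change(diff)   # e.g. create or delete pods via the API server
        time.sleep(interval_seconds)
```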

4. Scheduler:

The Scheduler in Kubernetes assigns newly created pods to cluster nodes based on the resources that are available and any scheduling requirements the pod specifies. When a new pod is created, the Scheduler decides which node it should run on, taking into account factors such as the pod’s CPU and memory requests, node availability, and any pod affinity or anti-affinity rules. It then records the node assignment for the pod through the Kubernetes API server.

The Scheduler does this by continuously watching for unscheduled pods and assessing the state of the cluster, which helps keep every node well utilized and spreads the workload across the available nodes.
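The constraints the Scheduler evaluates are declared in the pod spec itself. Here is a minimal sketch using the Python client (same assumptions as above; the pod name, label, and resource values are illustrative):

```python
# A pod whose CPU/memory requests and node selector the Scheduler takes into
# account when choosing a node. Names and label values are illustrative only.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1PodSpec(
        node_selector={"disktype": "ssd"},   # only nodes carrying this label qualify
        containers=[client.V1Container(
            name="web",
            image="nginx:1.25",
            resources=client.V1ResourceRequirements(
                requests={"cpu": "250m", "memory": "128Mi"},
            ),
        )],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```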

Worker Node Components -

The node components run on every worker node and manage the pods and workloads on that node. They include:

1. Kubelet:

The kubelet is the Kubernetes agent that runs on every node in the cluster and is responsible for managing the pods scheduled onto that node.

The kubelet talks to the Kubernetes API server to learn which pods it should run and monitors the health of the containers inside those pods. It also tracks the node’s condition, including CPU and memory usage, and reports this information back to the API server.

The kubelet is responsible for a number of duties, including the following (a probe example follows the list):

  1. Starting and stopping containers as directed by the API server.
  2. Monitoring the health of containers and restarting them if they crash or stop responding.
  3. Mounting and unmounting volumes according to the pod configuration.
  4. Managing each container’s network interfaces.
  5. Tracking the status of each pod on the node and reporting it to the API server.
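For instance, the restart behaviour in item 2 is driven by probes that the kubelet runs against each container. A minimal sketch with the Python client (same assumptions as above; names and values are illustrative):

```python
# A pod with an HTTP liveness probe. The kubelet on the assigned node performs
# the check and restarts the container if it keeps failing.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

container = client.V1Container(
    name="web",
    image="nginx:1.25",
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/", port=80),
        initial_delay_seconds=5,   # wait before the first check
        period_seconds=10,         # check every 10 seconds
        failure_threshold=3,       # restart after 3 consecutive failures
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-liveness"),
    spec=client.V1PodSpec(containers=[container]),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```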

2. Kube-proxy:

kube-proxy is a network proxy that runs on every node in the Kubernetes cluster and manages network communication between the many pods and Services that make up the cluster.

kube-proxy maintains the network rules on each node so that traffic addressed to a Service is forwarded to the right pods. It watches the Kubernetes API server for changes to the network configuration and updates those rules accordingly.
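The rules kube-proxy programs come from Service objects such as the one sketched below (Python client assumptions as above; the service name, selector, and ports are illustrative):

```python
# A ClusterIP Service. Once it exists, kube-proxy on every node installs the
# rules that forward traffic on port 80 to the pods matching the selector.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-web-svc"),
    spec=client.V1ServiceSpec(
        selector={"app": "demo-web"},   # pods this Service targets
        ports=[client.V1ServicePort(port=80, target_port=80)],
        type="ClusterIP",
    ),
)
v1.create_namespaced_service(namespace="default", body=service)
```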

Other Key Components and Tools -

In addition to the control plane and node components, you will also work with the following objects and tools when using Kubernetes:

1. Pod:

A pod is the smallest deployable unit that can be created, scheduled, and managed in Kubernetes. A pod represents a single instance of a running process in the cluster and may contain one or more containers. All containers in a pod share the same network namespace, so they can communicate with one another over localhost, and they can mount the same storage volumes, which makes sharing data between them straightforward.
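Here is a minimal sketch of a two-container pod that shares a volume and the network namespace (Python client assumptions as above; container and volume names are illustrative):

```python
# Two containers in one pod: they share localhost and an emptyDir volume.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

shared = client.V1Volume(name="shared-data", empty_dir=client.V1EmptyDirVolumeSource())
mount = client.V1VolumeMount(name="shared-data", mount_path="/data")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-sidecar"),
    spec=client.V1PodSpec(
        volumes=[shared],
        containers=[
            client.V1Container(name="writer", image="busybox:1.36",
                               command=["sh", "-c", "while true; do date >> /data/log; sleep 5; done"],
                               volume_mounts=[mount]),
            client.V1Container(name="reader", image="busybox:1.36",
                               command=["sh", "-c", "touch /data/log; tail -f /data/log"],
                               volume_mounts=[mount]),
        ],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```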

2. User Interface (Dashboard):

The Kubernetes User Interface (UI) is a web-based graphical interface that provides a user-friendly way to interact with and manage Kubernetes clusters.

The Kubernetes UI, also referred to as the Dashboard, is a web application that runs inside the Kubernetes cluster itself and offers a visual view of the cluster’s current state. Through the Dashboard, users can view and manage cluster resources such as pods, services, and deployments.

3. Kubectl:

kubectl is the command-line tool for interacting with Kubernetes clusters. With kubectl you can deploy, inspect, and manage applications and services running inside a cluster.

You can carry out a variety of actions with kubectl, including the following (a programmatic equivalent is sketched after the list):

  1. Creating and managing pods, deployments, and services.
  2. Scaling and upgrading the applications running in the cluster.
  3. Inspecting and troubleshooting applications and services.
  4. Configuring security settings and access controls.
  5. Monitoring the status and resource usage of the cluster.
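kubectl performs these operations from the command line; roughly the same operations can also be driven through the API server programmatically, for example with the Python client (same assumptions as above; the deployment and pod names are illustrative):

```python
# Rough programmatic equivalents of a few common kubectl actions.
# Assumes the official `kubernetes` Python client and a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
apps = client.AppsV1Api()

# ~ kubectl get pods -n default
for pod in core.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)

# ~ kubectl scale deployment demo-web --replicas=3   (deployment name is illustrative)
apps.patch_namespaced_deployment(
    name="demo-web", namespace="default",
    body={"spec": {"replicas": 3}},
)

# ~ kubectl logs demo-web-pod   (pod name is illustrative)
print(core.read_namespaced_pod_log(name="demo-web-pod", namespace="default"))
```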

In conclusion, the Kubernetes architecture consists of a master node that runs the control plane and worker nodes that run the containers. The master node includes components such as the API server, etcd, the controller manager, and the scheduler, while the worker nodes run the kubelet and kube-proxy. Optional add-on components such as DNS, the Dashboard, and an Ingress controller can be added to the cluster based on the application’s requirements.

— — — — — — — —

Here is the End!

Thank you for taking the time to read my article. I hope you found this article informative and helpful. As I continue to explore the latest developments in technology, I look forward to sharing my insights with you. Stay tuned for more articles like this one that break down complex concepts and make them easier to understand.

Remember, learning is a lifelong journey, and it’s important to keep up with the latest trends and developments to stay ahead of the curve. Thank you again for reading, and I hope to see you in the next article!

Happy Learning!
