Cloud / May 4, 2021

Kubernetes Architecture Breakdown

Containerization allows applications from different environments to share an operating system kernel while each keeps the dependencies and configuration it needs to work as intended. Containers also enable a microservices approach: a “monolithic app” can be disaggregated into smaller services that run anywhere, across multiple containers. Microservices in turn allow DevOps teams to work on portions of the overall app in parallel, for faster development, debugging, and deployment to production.

Containers have further advantages. They are far more lightweight than virtual machines, allowing higher server densities, and containerized apps are more portable and, with no hypervisor involved, faster. Containers can also be scaled, isolated, and removed easily, without adversely impacting other containers or the environment. To deploy, manage, and scale container-based applications, IT professionals rely on container orchestration, and Kubernetes is arguably the most effective, and certainly the most widely used, container orchestration tool available today.

What Is Kubernetes?

Kubernetes (often abbreviated K8s) automates the deployment, scaling, and management of containerized applications across a cluster of machines, eliminating much of the manual administration those tasks would otherwise require. Originally designed by Google and now maintained by the Cloud Native Computing Foundation (part of the Linux Foundation), Kubernetes is an open-source solution, making it extremely adaptable to the needs of individual organizations.

Simply put, Kubernetes is a single interface that allows organizations to deploy containers to clouds, virtual machines, and physical hardware of all kinds. To understand how Kubernetes works, let’s take a high-level look at the Kubernetes architecture.
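
As a minimal sketch of that interface, the manifest below describes a single-container pod; the names and image are placeholders chosen for this example, not part of any real deployment. Applied with kubectl apply -f pod.yaml, the same file works against any conformant cluster, whether it runs in a public cloud, on virtual machines, or on bare metal:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-web            # hypothetical pod name
      labels:
        app: hello-web           # labels let other objects select this pod later
    spec:
      containers:
      - name: web
        image: nginx:1.21        # any container image will do; nginx is just an example
        ports:
        - containerPort: 80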

Kubernetes Architecture

The basic unit of the Kubernetes architecture is the cluster. A Kubernetes cluster consists of nodes for running containerized applications. Lightweight and flexible, and fulfilling much the same role as a fleet of virtual machines but more efficiently, Kubernetes clusters make it easier for organizations to build, move, and manage applications across environments. Being platform agnostic, Kubernetes clusters are not restricted to individual operating systems; they can run anywhere.

Nodes and Components

In Kubernetes, containers are placed into pods, which encapsulate individual containers and the microservices they implement and can be easily added or removed. Pods run on nodes. In terms of structure, each Kubernetes cluster is made up of a “master” node and multiple “worker” nodes. The master node fills the role of “server,” while the worker nodes act as “clients” that host pods and connect to, and are managed by, the master node.

The master node is the brain of the Kubernetes system. It is where users define the desired state of the cluster: the pods, deployments, configuration, and other resources for Kubernetes to maintain.

The master node consists of the following Kubernetes components:

  • Kube-apiserver
    The cluster’s front end, the kube-apiserver exposes the Kubernetes API and receives requests from users, management tools, and command-line clients such as kubectl.
  • Etcd
    Accessible only through the API server, etcd is the consistent, highly available key-value store that acts as the backing store for all cluster data.
  • Kube-scheduler
    Kube-scheduler watches for newly created pods that have no node assigned and selects a node for each one to run on, weighing factors such as the pod’s resource requests and the resource usage of the worker nodes (see the sketch after this list).
  • Kube-controller-manager
    Kube-controller-manager runs controller processes, with each controller as a separate process performing a single, specific function.
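
As a rough illustration of the inputs kube-scheduler works from, the pod spec below declares CPU and memory requests and a node-label constraint; the label, quantities, and names are invented for this sketch:

    apiVersion: v1
    kind: Pod
    metadata:
      name: scheduled-example    # hypothetical pod name
    spec:
      nodeSelector:
        disktype: ssd            # only consider nodes labeled disktype=ssd (example label)
      containers:
      - name: app
        image: nginx:1.21
        resources:
          requests:
            cpu: "250m"          # the scheduler places the pod only where this much CPU is free
            memory: "128Mi"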

Every Kubernetes cluster contains at least one worker node, and production clusters typically run several so that applications remain accessible even when a node fails. Every worker node in the cluster contains the following:

  • Kubelet
    The kubelet is the lowest-level component in the Kubernetes architecture. It is the agent responsible for ensuring that the containers assigned to its node have started and continue to run (a probe-based sketch of this follows the list).
  • Container runtime
    The container runtime is the software that actually runs the containers on a node: it pulls container images (ready-to-use packages containing the application code plus its runtime, libraries, and default settings) and starts and stops the containers built from them. Kubernetes works with third-party runtimes such as containerd, CRI-O, and, historically, Docker.
  • Kube-proxy
    The kube-proxy maintains network rules on the nodes, making network communication to pods from sessions inside or outside the cluster possible. It ensures that traffic reaches the right pod IP addresses, typically using the node’s local iptables for routing and traffic load balancing.
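
The fragment below is the probe-based sketch promised above: a liveness probe tells the kubelet how to check a container, and if the check fails repeatedly, the kubelet restarts it. The path and timings here are illustrative, not prescriptive:

    apiVersion: v1
    kind: Pod
    metadata:
      name: probed-app           # hypothetical pod name
    spec:
      containers:
      - name: app
        image: nginx:1.21
        livenessProbe:
          httpGet:
            path: /              # endpoint the kubelet polls (example)
            port: 80
          initialDelaySeconds: 5 # wait before the first check
          periodSeconds: 10      # check every 10 seconds; repeated failures trigger a restart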

Essentially, single containers running microservices code are grouped into pods. The master node schedules each pod to a worker node, coordinating with the container runtime to launch it. If a pod fails, it is discarded and a replica pod is created; the replica is identical to the original except that it receives a different IP address. While this allows for significant flexibility, it also creates potential problems, including processing issues and IP churn, that could undermine the reliability of the containerized system. Kubernetes services provide the solution.
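
Before turning to services, here is a minimal sketch of the replication machinery just described. A Deployment (backed by a ReplicaSet) declares how many identical pods should exist, and Kubernetes recreates any that fail; the names and image are placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                  # hypothetical deployment name
    spec:
      replicas: 3                # desired state: three identical pods at all times
      selector:
        matchLabels:
          app: web               # manage every pod carrying this label
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.21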

Kubernetes Services

Much as a living body replaces cells as old ones die off, Kubernetes is designed to discover failed pods and create new, functionally identical pods to take their place, with services ensuring that traffic keeps flowing to whichever pods are currently healthy. To do this, Kubernetes uses key-value pairs known as labels and selectors: a service automatically identifies, and routes to, every pod whose labels match its selector. By comparing the observed state of the cluster against the desired state declared in manifest files, Kubernetes is able to maintain or scale the current system. It runs constant checks, or control loops, making changes where necessary to bring everything in line with the user-defined state. Services, meanwhile, provide stable IP addresses, DNS names, and ports, so that even while containers are being added or removed, the network information clients rely on remains unaffected.
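
A minimal Service manifest makes the label-and-selector pairing concrete. Assuming pods labeled app: web, as in the earlier Deployment sketch, the Service below gives them one stable virtual IP, DNS name, and port, however often the individual pods behind it come and go:

    apiVersion: v1
    kind: Service
    metadata:
      name: web                  # clients reach the pods through this stable DNS name
    spec:
      selector:
        app: web                 # route to any pod whose labels match this selector
      ports:
      - port: 80                 # stable port exposed by the service
        targetPort: 80           # container port the traffic is forwarded to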

Kubernetes in the Cloud

As previously stated, Kubernetes is designed to allow for effective containerization regardless of environment. However, third-party cloud platforms can make it difficult for Kubernetes to retrieve the information it needs about the nodes in a cluster. Because each cloud provider implements these operations differently, Kubernetes needs a general-purpose way to work with all of them.

The Kubernetes cloud controller manager (CCM) is a control-plane component that embeds cloud-specific control logic, making it possible to link a cluster to a cloud provider’s API. Through its plugin mechanism, the CCM allows the full range of cloud providers to integrate with Kubernetes.
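
One place this integration surfaces in everyday use is a Service of type LoadBalancer: on a cluster wired to a cloud provider, the CCM’s service controller calls the provider’s API to provision an external load balancer, while the manifest itself, sketched below with placeholder names, stays provider agnostic:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-public           # hypothetical service name
    spec:
      type: LoadBalancer         # the CCM translates this into a provider-specific load balancer
      selector:
        app: web                 # same example label as before
      ports:
      - port: 80
        targetPort: 80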

Gigamon for Kubernetes Visibility  

Containerization with microservices is a solution to the problem of application portability across various hardware and virtual machines, and Kubernetes is a solution to the problem of effective container orchestration. In this same vein, Gigamon GigaVUE® Cloud Suite for Kubernetes provides essential network visibility and security analytics to the Kubernetes architecture.

Containerized applications are vulnerable to attack. Given that each application within a container can present its own potential attack vectors, and that legacy threat-detection mechanisms are ill suited to dynamic containerized environments, complete container visibility is absolutely essential. Gigamon GigaVUE Cloud Suite employs three key components designed to ensure it:

  • G-vTAP containers
    G-vTAP containers are deployed in each worker node, receiving copied packets from every other container on the node. This allows GigaVUE to collect relevant traffic with minimal impact to the node.
  • GigaVUE V Series and HC nodes
    GigaVUE V Series virtual appliances and GigaVUE-HC Series hardware appliances aggregate the copied traffic, extracting the appropriate portions for distribution to the relevant tools. These components ensure that tools receive the right traffic without becoming overburdened, while generating reliable network intelligence.
  • GigaVUE-FM
    GigaVUE-FM works with the master node to identify where new worker nodes have been established. It then creates new G-vTAP containers for those nodes, configuring their policies and scaling them as needed.

Kubernetes is a complex solution, incorporating a range of different components. But at its heart, Kubernetes architecture follows a relatively simple model: The user defines how the system should function; Kubernetes aligns the cluster to match that desired state. By understanding the components and the roles that each plays, as well as the roles played by GigaVUE Cloud Suite in optimizing container visibility, users can more confidently enjoy the full benefits of containerization.

Learn more about GigaVUE Cloud Suite for Kubernetes.
