Cloud / May 4, 2021

Kubernetes Architecture Breakdown

Containerization makes applications portable and eases the adoption of microservices, enabling faster development and deployment. Containers are lightweight, portable, and easily scalable, with Kubernetes emerging as the leading orchestration tool for managing container-based applications.

What Is Kubernetes Architecture?

Kubernetes (often abbreviated as K8s) automates containerized application deployment across a cluster of machines, greatly reducing the need for manual administration. Originally designed by Google and currently maintained by the Cloud Native Computing Foundation (which is under the umbrella of the Linux Foundation), Kubernetes is an open-source solution, making it extremely adaptable to the needs of individual organizations.

Simply put, Kubernetes is a single interface that allows organizations to deploy containers to clouds, virtual machines, and physical hardware of all kinds. To understand how Kubernetes works, let’s take a high-level look at the Kubernetes architecture.

The Kubernetes architecture can be broken down into clusters. A Kubernetes cluster consists of nodes for running containerized applications. Because containers are lightweight and flexible, fulfilling much the same role as virtual machines but more efficiently, Kubernetes clusters make it easier for organizations to build, move, and manage applications across environments. Being platform agnostic, Kubernetes clusters are not tied to any single operating system or environment; they can run on physical hardware, virtual machines, and public or private clouds.

Nodes and Components

Kubernetes-based containers are placed into pods, which encapsulate one or more containers and their underlying microservices and can be easily added or removed. Pods run on nodes. In terms of structure, each Kubernetes cluster is made up of a “master” node and multiple “worker” nodes. The master node fulfills the role of “server,” while the worker nodes act as “clients” that host pods and connect to, and are managed by, the master node.

The master node is the brain of the Kubernetes system. It allows users to define the desired state of the cluster, such as pods, configurations, and deployments, for Kubernetes to maintain.

The master node, called the control plane, consists of the following Kubernetes components:

  • Kube-apiserver: The cluster’s front end, the kube-apiserver receives requests from users, management devices, and command-line interfaces (a short client sketch follows this list).
  • Etcd: Accessible only through the API server, etcd is the consistent key-value store that serves as the backing store for all cluster data.
  • Kube-scheduler: As new pods are created, the kube-scheduler identifies them and assigns them to a node. The kube-scheduler also tracks worker-node resource usage to inform its placement decisions.
  • Kube-controller-manager: The kube-controller-manager runs controller processes, with each controller performing a single, specific function.

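All of these components are reached through the kube-apiserver. As a minimal illustration of that interaction, the following sketch uses the official Kubernetes Python client (the kubernetes package) to list a cluster’s nodes and pods; it assumes a reachable cluster and a standard kubeconfig file, and is illustrative rather than production-ready.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g., ~/.kube/config).
# Every request below goes through the kube-apiserver, the cluster's front end.
config.load_kube_config()

v1 = client.CoreV1Api()

# List every node registered with the control plane.
for node in v1.list_node().items:
    print(f"node: {node.metadata.name}")

# List all pods across all namespaces, as tracked in etcd via the API server.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"pod: {pod.metadata.namespace}/{pod.metadata.name} -> {pod.spec.node_name}")
```
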
Kubernetes orchestration is driven by a suite of controller functions responsible for managing various aspects of the cluster’s operations. These functions ensure the availability, scalability, and reliability of workloads running within the Kubernetes ecosystem. Let’s explore the key controller functions:

  • Replication controller: Ensures the correct number of pods is maintained in the cluster by managing pod replication and scaling (see the Deployment sketch after this list).
  • Node controller: Monitors the health of each node within the cluster, detecting when nodes come online or become unresponsive.
  • Endpoints controller: Connects pods and services by populating the endpoints object, facilitating communication between them.
  • Service account and token controllers: Allocate API access tokens and default accounts to new namespaces in the cluster, ensuring secure authentication and authorization.
  • Cloud-controller-manager: Links the Kubernetes cluster to the cloud provider’s API, managing cloud-specific controls and integrations. This component is essential for clusters hosted partly or entirely in the cloud, optimizing performance and ensuring fault tolerance.

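To make the reconciliation behavior concrete, here is a minimal sketch, again using the official Python client, that declares a Deployment with three replicas; the controller manager’s control loops then work to keep three matching pods running. The names and image are illustrative assumptions, not values from this article.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Desired state: three replicas of an illustrative nginx pod.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),  # hypothetical name
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.21")]
            ),
        ),
    ),
)

# The controller manager now reconciles the actual pod count toward 3:
# if a pod dies, a replacement is scheduled automatically.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```
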
Every Kubernetes cluster contains at least one worker node, and running multiple worker nodes keeps applications accessible even when an individual node fails. Every worker node in the cluster contains the following:

  • Kubelet: The kubelet is the lowest-level component in the Kubernetes architecture. It is responsible for ensuring that the containers described in its node’s pod specs have started and continue running, and it reports their status back to the control plane (a status-inspection sketch follows this list).
  • Container runtime: The container runtime is the software responsible for pulling container images and running containers on the node. Kubernetes supports third-party runtimes such as containerd and CRI-O (and, historically, Docker) through its Container Runtime Interface.
  • Kube-proxy: The kube-proxy maintains network rules on each node, making network communication to pods possible from sessions inside or outside the cluster. It commonly uses local iptables rules for routing and traffic load balancing.

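As a small illustration of the kubelet’s reporting role, the sketch below reads back the per-container statuses that the kubelet publishes to the control plane; the pod name and namespace are hypothetical placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Read a pod's status; the per-container state shown here is what the
# kubelet on the hosting worker node reports back to the control plane.
pod = v1.read_namespaced_pod(name="web-0", namespace="default")  # hypothetical pod
for cs in pod.status.container_statuses or []:
    state = "running" if cs.state.running else "waiting/terminated"
    print(f"{cs.name}: ready={cs.ready}, restarts={cs.restart_count}, state={state}")
```
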
Essentially, containers holding microservice code are grouped into pods. The master node schedules each pod to a worker node, coordinating with the container runtime to launch it. If a pod fails, it is discarded and a replica pod is created. The replica is functionally identical to the original but receives a new IP address. While this allows for significant flexibility, it also creates potential problems, including processing issues and IP churn, which can undermine reliable communication between containers. Kubernetes services provide the solution.
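
This IP churn can be observed directly: listing the pods behind a given label before and after a failure will show replacement pods with new addresses. A minimal sketch, assuming an illustrative app=web label:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Each run may show different pod names and IPs as replicas are replaced;
# the "app=web" label and "default" namespace are illustrative assumptions.
for pod in v1.list_namespaced_pod(namespace="default", label_selector="app=web").items:
    print(f"{pod.metadata.name}: {pod.status.pod_ip}")
```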

Kubernetes Services

Much like how a living body replicates cells as old ones die off, Kubernetes detects failed pods and replaces them with new, functionally identical ones. Services keep track of these changing pods using key-value pairs known as labels and selectors: a service automatically identifies and routes to any pod whose labels match its selector. By comparing the observed state against the desired state in the manifest file, Kubernetes can maintain or scale the current system. Kubernetes runs constant checks, or control loops, making changes where necessary to bring everything in line with the user-defined specification. Kubernetes services provide stable IP addresses, DNS names, and ports, so that even while pods are being added or removed, the network information clients depend on remains unaffected.
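
As a hedged sketch of such a service, the snippet below uses the Python client to create a Service whose selector matches pods labeled app=web (the label, names, and ports are illustrative); the resulting ClusterIP stays stable no matter how often the matching pods are replaced.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# A Service with a label selector: traffic to its stable ClusterIP is
# routed to whichever pods currently carry the app=web label.
service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="web-svc"),  # hypothetical name
    spec=client.V1ServiceSpec(
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)
v1.create_namespaced_service(namespace="default", body=service)
```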

Kubernetes Architecture Best Practices

Kubernetes architecture embodies core principles of availability, scalability, portability, and security, essential for modern cloud-native applications. In this section, we dive into best practices that optimize Kubernetes deployments to ensure robustness, flexibility, and resilience. From achieving high availability through meticulous workload distribution to enhancing security with stringent access controls, these practices empower organizations to harness the full potential of Kubernetes in their infrastructure.

  • High availability: Achieving high availability in Kubernetes involves ensuring both application and infrastructure availability. Components such as replication controllers, replica sets, and stateful sets are employed to maintain application availability; users can configure stateful workloads for high availability using stateful sets (formerly known as pet sets). Furthermore, Kubernetes offers a wide range of storage backends and supports various configurations for infrastructure availability.
  • Scalability: Scalability in Kubernetes is facilitated by its microservices architecture, where applications are composed of multiple containers grouped into pods. Kubernetes supports cluster auto-scaling, dynamically adding nodes to the cluster so that pods can scale out across them when necessary. Additionally, on certain cloud platforms Kubernetes supports auto-scaling that coordinates seamlessly with the underlying infrastructure (an autoscaler sketch follows this list).
  • Portability: Kubernetes is designed to be highly portable, allowing deployment across different cloud platforms, container runtimes, operating systems, and environments. Organizations can deploy Kubernetes locally, on bare metal, or in virtualization environments. Hybrid cloud capabilities can also be realized by deploying clusters on-premises and across multiple cloud providers.
  • Security: Security is a paramount consideration in Kubernetes architecture. Role-based access control (RBAC) is enforced across the cluster, ensuring granular access management. Integrating image scanning into CI/CD pipelines enhances security by identifying vulnerabilities during the build and run phases. Container security is further strengthened by using non-root users, read-only file systems, and avoiding insecure default settings to minimize potential vulnerabilities.

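To make the scalability point above concrete, the following sketch attaches a HorizontalPodAutoscaler to the illustrative Deployment from earlier, letting Kubernetes scale its replicas between 3 and 10 based on average CPU utilization; all names and thresholds are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="web-hpa"),  # hypothetical name
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=3,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out when average CPU exceeds 70%
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```
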
Kubernetes in the Cloud

As previously stated, Kubernetes is designed to allow for effective containerization regardless of the environment. However, third-party cloud platforms can make it difficult for Kubernetes to retrieve the information it needs about nodes within the cluster. With each cloud provider approaching operational implementation differently, Kubernetes requires an all-purpose solution.

The Kubernetes cloud controller manager (CCM) is a background process that embeds cloud-specific control logic, making it possible for users to link their clusters to a cloud provider’s API. Through a plugin mechanism, the CCM allows a full range of cloud providers to integrate with Kubernetes.

Enhancing Kubernetes Architecture Security

Securing Kubernetes clusters, nodes, and containers is paramount to safeguarding your digital assets. To fortify your defenses, adhere to these principles:

  • Stay up to date: Keep your Kubernetes installation current with the latest version. Regular updates ensure you benefit from the latest security patches and enhancements. Only the most recent three versions are supported with security updates.
  • Lock down API access: Configure the Kubernetes API server with utmost security. Disable anonymous and unauthenticated access. Utilize Transport Layer Security (TLS) encryption for all communications between the API server and kubelets, ensuring data integrity and confidentiality.
  • Fortify etcd: Because etcd stores all cluster state, securing its connections is imperative. Exclusively allow client connections over TLS to maintain the confidentiality and integrity of data exchanged with etcd.
  • Secure the kubelet: Harden the kubelet to prevent unauthorized access. Activate stringent access controls by setting the kubelet to reject anonymous requests. Employ the NodeRestriction admission controller to restrict the kubelet’s privileges, mitigating potential security risks.
  • Leverage native controls: Mitigate operational risks by harnessing Kubernetes-native security mechanisms. Embrace native Kubernetes controls to enforce robust security policies seamlessly. Aligning with native controls minimizes conflicts between your custom security measures and the orchestrator, promoting smoother operations and enhanced security posture.

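Granular access control is the common thread running through these recommendations. As a minimal, illustrative sketch of RBAC using the official Python client, the Role below grants read-only access to pods in a single namespace; the names and rules are assumptions, not a complete policy.

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# A namespaced Role granting read-only access to pods; binding it to
# users or service accounts is done separately via a RoleBinding.
role = client.V1Role(
    api_version="rbac.authorization.k8s.io/v1",
    kind="Role",
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="default"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],          # "" denotes the core API group
            resources=["pods"],
            verbs=["get", "list", "watch"],
        )
    ],
)
rbac.create_namespaced_role(namespace="default", body=role)
```
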
Securing Kubernetes architecture demands a proactive approach to mitigate evolving threats and vulnerabilities. By implementing best practices such as these, organizations can bolster their defenses against potential breaches. However, safeguarding Kubernetes environments requires more than just adherence to best practices — it necessitates comprehensive visibility and analytics capabilities.

Gigamon for Kubernetes Visibility  

Containerization with microservices is a solution to the problem of application portability across various hardware and virtual machines, and Kubernetes is a solution to the problem of effective container orchestration. In this same vein, Gigamon GigaVUE® Cloud Suite™ for Kubernetes provides essential network visibility and security analytics to the Kubernetes architecture.

Containerized applications are vulnerable to attack. Given that every application within a container may represent a different potential attack vector, and that legacy threat-detection mechanisms are ill-suited to dynamic containerized environments, complete container visibility is absolutely essential. Gigamon GigaVUE Cloud Suite employs three key components, designed to ensure total container visibility:

  • Universal Cloud Tap (UCT) controller: UCT controllers are deployed in each worker node, receiving copied packets from every other controller on the node. This allows GigaVUE to collect relevant traffic with minimal impact on the node.
  • GigaVUE V Series and HC nodes: GigaVUE V Series and HC hardware appliances aggregate traffic, extracting appropriate traffic for distribution to relevant tools. These components ensure that tools are receiving the right traffic without becoming overburdened while generating reliable network intelligence.
  • GigaVUE-FM: GigaVUE-FM works with the master node to identify where new worker nodes have been established. It then creates new UCT controllers for those nodes, configuring their policies and scaling them as needed.

Kubernetes is a complex solution that incorporates a wide range of components. But at its heart, Kubernetes architecture follows a relatively simple model: The user defines how the system should function, and Kubernetes aligns the cluster to match that desired state. By understanding these components and the role each plays, as well as the role played by GigaVUE Cloud Suite in optimizing container visibility, users can more confidently enjoy the full benefits of containerization.

Click to learn more about GigaVUE Cloud Suite for Kubernetes.
