Kubernetes
Kubernetes is a distributed system designed to manage containerized applications across multiple physical or virtual machines called nodes. Its architecture follows a client-server model with two primary groups of components:
Control Plane: This includes key components such as:
kube-apiserver: Manages cluster interactions.
etcd: Stores cluster data.
kube-scheduler: Assigns pods to nodes.
kube-controller-manager: Handles cluster operations.
Worker Nodes: These are where application workloads run and include:
kubelet: Manages pod execution.
kube-proxy: Handles network communications.
Container Runtime: Executes containers.
Add-ons: Extend functionality with tools for networking, monitoring, and storage.
Pods: The smallest deployable units, each a logical group of one or more containers.
Services: Provide stable network access to pods (see the example below).
Kubernetes is highly scalable and fault-tolerant, making it suitable for large-scale deployments.
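To make the Pod and Service abstractions concrete, here is a minimal sketch that creates a single-container Pod and exposes it through a Service. The names, the nginx image, and the port are illustrative placeholders, not part of the architecture description above.

```bash
# Sketch: a single-container Pod labeled "app: web" and a Service that
# selects it by label, giving the Pod a stable cluster-internal address.
# Names, image, and ports are placeholder choices.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
EOF
```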
Kubernetes offers several tools for installing and managing clusters, each with unique features and use cases. Here's a comparison of kubeadm with other popular tools:
kubeadm
Purpose: Initializes a Kubernetes control plane node and joins worker nodes to form a cluster.
Platforms: Supports most Linux distributions, including Ubuntu, CentOS, and Fedora.
Features: Easy to use, supports both bare-metal and cloud environments. It provides a simple way to create and manage clusters.
Limitations: Does not handle infrastructure provisioning; requires manual setup of nodes.
kops
Purpose: Automates the provisioning and management of Kubernetes clusters on cloud platforms.
Platforms: Primarily supports AWS, with beta support for GCE and alpha for VMware vSphere.
Features: Handles infrastructure provisioning and cluster management, making it ideal for cloud-native environments.
Limitations: Limited flexibility in terms of deployment platforms compared to other tools.
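For reference, a typical kops workflow on AWS looks roughly like the sketch below. The cluster name, availability zone, node count, and S3 state bucket are placeholder values; consult the kops documentation for the flags current in your release.

```bash
# kops keeps cluster state in an S3 bucket (placeholder name below).
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Generate the cluster configuration (does not create cloud resources yet).
kops create cluster \
    --name=demo.k8s.example.com \
    --zones=us-east-1a \
    --node-count=2

# Provision the AWS infrastructure and bring the cluster up.
kops update cluster --name=demo.k8s.example.com --yes

# Wait for nodes to register and report healthy.
kops validate cluster --wait 10m
```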
Kubespray
Purpose: Uses Ansible for provisioning and orchestrating Kubernetes clusters across multiple platforms.
Platforms: Supports bare metal and various cloud providers (AWS, GCE, Azure, OpenStack).
Features: Offers high flexibility and customization options due to its use of Ansible. Supports a wide range of Linux distributions.
Limitations: Requires Ansible knowledge and can be more complex to set up compared to kubeadm.
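The usual Kubespray flow is sketched below, assuming the upstream repository layout; inventory paths and file names can differ between releases, so treat this as an outline rather than exact commands.

```bash
# Fetch Kubespray and its Ansible dependencies.
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt

# Copy the sample inventory and point it at your own hosts.
cp -rfp inventory/sample inventory/mycluster
# Edit/create inventory/mycluster/hosts.yaml to list control-plane and worker nodes.

# Run the main playbook against the inventory with privilege escalation (-b).
ansible-playbook -i inventory/mycluster/hosts.yaml -b cluster.yml
```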
Cluster API
Purpose: Provides a declarative API for managing Kubernetes cluster lifecycle, including provisioning and upgrading.
Platforms: Supports multiple infrastructure providers (e.g., AWS, Azure, vSphere).
Features: Focuses on infrastructure as code (IaC) practices, making it suitable for large-scale and multi-cluster environments.
Limitations: Requires more expertise in managing infrastructure as code.
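A minimal Cluster API flow with clusterctl is sketched below, assuming an existing management cluster and provider credentials already configured; the AWS provider, Kubernetes version, and machine counts are illustrative choices.

```bash
# Install the Cluster API core components plus an infrastructure provider
# (AWS shown as one option) into an existing management cluster.
clusterctl init --infrastructure aws

# Render a declarative workload-cluster manifest (values are examples).
clusterctl generate cluster demo-cluster \
    --kubernetes-version v1.29.0 \
    --control-plane-machine-count=1 \
    --worker-machine-count=2 > demo-cluster.yaml

# Apply it like any other Kubernetes resource; controllers handle provisioning.
kubectl apply -f demo-cluster.yaml
```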
| Tool | Primary Use Case | Platforms | Complexity |
|------|------------------|-----------|------------|
| kubeadm | Simple cluster setup | Most Linux distributions | Low |
| kops | Cloud-native cluster management | Primarily AWS, GCE (beta), vSphere (alpha) | Medium |
| Kubespray | Flexible, multi-platform cluster deployment | Bare metal, multiple clouds | High |
| Cluster API | Declarative cluster lifecycle management | Multiple infrastructure providers | High |
Use kubeadm for quick, straightforward cluster setup on existing infrastructure.
Choose kops for cloud-native environments, especially AWS.
Select Kubespray for complex, multi-platform deployments requiring high customization.
Opt for Cluster API when managing large-scale, multi-cluster environments with infrastructure as code.
Each tool has its strengths and is suited to different scenarios, making it important to evaluate your specific needs before choosing a Kubernetes installation tool.
Here’s a comprehensive step-by-step guide to set up a Kubernetes cluster using kubeadm on Ubuntu/Debian-based systems, combining best practices from multiple sources:
Minimum 2 nodes (1 master, 1+ worker) running Ubuntu 22.04+
SSH access with sudo privileges
At least 2GB RAM and 2 CPUs per node
Unique hostnames for each node (e.g., master-node, worker-1)
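On every node, the container runtime and the Kubernetes packages must be installed before the cluster can be initialized. A typical sequence is sketched below, assuming containerd as the runtime, the pkgs.k8s.io apt repository, and Flannel as the pod network; the v1.30 version pin, CIDR, and Flannel manifest URL are illustrative choices and should be checked against the release you intend to run.

```bash
# --- Run on every node (master and workers) ---

# Disable swap; the kubelet will not start with swap enabled by default.
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Kernel modules and sysctl settings required for pod networking.
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

# Install containerd and enable the systemd cgroup driver.
sudo apt-get update
sudo apt-get install -y containerd apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/containerd
containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd

# Add the Kubernetes apt repository (version path is an example pin) and
# install kubeadm, kubelet, and kubectl.
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | \
    sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | \
    sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# --- Run on the master node only ---

# Initialize the control plane; the CIDR here matches Flannel's default.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Give your user a kubeconfig so kubectl can reach the cluster.
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# Install a pod network add-on (Flannel shown as one option).
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```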
To add worker nodes, use the kubeadm join command generated during master initialization:
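The exact command is printed at the end of kubeadm init; its general shape is shown below, where every angle-bracketed value is a placeholder taken from that output.

```bash
# Run on each worker node; the token and CA hash come from the master's
# "kubeadm init" output.
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# If the original output was lost, regenerate a join command on the master:
sudo kubeadm token create --print-join-command
```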
For development/testing, a single-node cluster is often sufficient; one common adjustment is sketched below.
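One common shortcut for a single-node test cluster (an assumption here, not necessarily what the original guide specified) is to remove the control-plane taint so ordinary workloads can schedule on the master node. The taint key below applies to current Kubernetes versions; older releases used a master-named taint.

```bash
# kubeadm taints the control-plane node by default so workloads avoid it.
# Removing the taint lets one node run both the control plane and pods.
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```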
If nodes show NotReady, verify that the network plugin is installed
Check journalctl -u kubelet for service errors
Ensure port 6443 is open between nodes
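A few commands that help when working through the checks above; the node address and time window are examples.

```bash
# Overall node and system-pod health.
kubectl get nodes -o wide
kubectl get pods -n kube-system

# Kubelet logs on the affected node.
sudo journalctl -u kubelet --no-pager --since "10 minutes ago"

# Verify the API server port is reachable from a worker node
# (replace <master-ip> with your control-plane address).
nc -zv <master-ip> 6443
```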
This guide combines methodologies from phoenixNAP, Kubernetes docs, LinuxConfig, and DevOpsCube. For production, consider using managed Kubernetes services like EKS or GKE.