Introduction
Welcome back to my ongoing series, Building Kubernetes Cluster. In my previous posts, we've created the blueprint and established the essential infrastructure for our Kubernetes cluster (you can revisit those steps here for the blueprint article and here for the infrastructure setup).
Today, we embark on an exciting phase of our Kubernetes journey: selecting and configuring the core elements to bring our cluster to life. These decisions are crucial, as they will dictate our Kubernetes environment's efficiency, scalability, and reliability.
This post will delve into two critical choices for our Kubernetes cluster: the container runtime and the network plugin. I'll explore the options and select the best fit for our unique needs. Next, with these decisions in hand, we will proceed to configure the first, perhaps most vital, node in our cluster—the Control Plane node.
So, without further ado, join me as we navigate the details and build a robust and dynamic Kubernetes ecosystem from the ground up!
Selecting a Container Runtime
As we dive into the technicalities of setting up our Kubernetes cluster, a fundamental decision awaits us: choosing the container runtime. Kubernetes version 1.28, which we'll utilize for our cluster, requires a runtime that implements the Container Runtime Interface (CRI), the API through which Kubernetes (via the kubelet) talks to the container runtime. Another relevant standard is the Open Container Initiative (OCI), a set of specifications for containers describing the image format, runtime, and distribution.
It's important to note that Kubernetes releases before version 1.24 included direct integration with Docker Engine through a component known as dockershim. That integration was deprecated in Kubernetes v1.20 and removed in v1.24, so it is no longer a part of Kubernetes.
In the landscape of container runtimes, several options have gained prominence for their compatibility and performance with Kubernetes. We will focus on the following commonly used container runtimes:
- containerd
- CRI-O
- Docker Engine
For a deeper understanding of these options and their implications for your Kubernetes setup, refer to the official Kubernetes documentation on container runtimes.
containerd
containerd is a high-level, industry-standard container runtime and part of the Cloud Native Computing Foundation (CNCF). Originally developed within Docker, it now functions as a standalone tool. It's used by major managed Kubernetes services such as Amazon EKS and Azure AKS, highlighting its reliability and widespread adoption.
CRI-O
CRI-O is a streamlined container runtime specifically designed for Kubernetes, implementing the Kubernetes Container Runtime Interface (CRI) with support for OCI (Open Container Initiative) compatible runtimes. It is a lightweight and efficient alternative to Docker, focused on running containers within a Kubernetes environment.
Docker Engine
Docker Engine is a widely recognized container runtime known for its robust features and ease of use. However, since Kubernetes no longer integrates with Docker Engine directly due to the removal of dockershim, using Docker Engine with Kubernetes now requires an adapter such as cri-dockerd. In brief, this adapter provides a shim for Docker Engine that supports control of Docker Engine via the Kubernetes Container Runtime Interface.
Decision outcome
After thoroughly evaluating the available container runtimes, I've decided to use CRI-O for our Kubernetes cluster. This decision is based on several key factors that align with our goals and the specific needs of our Kubernetes environment:
Ease of use and flexibility
- CRI-O is designed explicitly for Kubernetes, ensuring seamless integration. This specialized focus gives us fine-grained control over our containerized applications and simplifies container management within a Kubernetes context.
- Its Kubernetes-centric design eliminates unnecessary complexities, offering a more straightforward and efficient container runtime experience.
Community support
- While CRI-O may have a smaller community than Docker, that community is expanding rapidly as more organizations adopt it for their Kubernetes clusters.
- Its growing community, focused on Kubernetes-specific development and support, promises continuous improvements and optimizations.
Performance
- CRI-O offers a lean runtime environment, which is both efficient and secure. Also, its optimized performance is particularly beneficial for Kubernetes clusters, handling container orchestration effectively.
- The tight integration with Kubernetes means that CRI-O is particularly well-suited for environments that heavily rely on Kubernetes, providing an optimized and cohesive operational experience.
Learning and exploration
- While Docker Engine and containerd are well-suited for Kubernetes clusters, CRI-O presents a new learning opportunity. In fact, this exploration aligns with the spirit of this blog series — to not only build a Kubernetes cluster but to deepen our understanding of its various components.
- By choosing CRI-O, I embark on a journey to expand my knowledge and share my learning experiences, offering a fresh perspective on Kubernetes container runtime options.
Selecting a Network Plugin
The second important decision that we need to make is to select a network plugin for our Kubernetes cluster. Kubernetes 1.28, the version we're using for our cluster, requires a Container Network Interface (CNI) plugin for cluster networking.
Kubernetes network model
The Kubernetes network model facilitates seamless communication within the cluster, ensuring that applications can communicate inside and outside the cluster. Here's a concise overview of the most critical aspects of the Kubernetes network model:
- Pod-to-Pod communication: Pods can communicate with each other across any node in the cluster without the need for Network Address Translation (NAT).
- Node-to-Pod communication: System agents on a node (like system daemons and the kubelet) can communicate directly with all pods on that node.
- Pod networking: Containers within a pod use loopback networking to communicate with each other.
- Cluster networking: Ensures that pods can communicate with each other across different nodes within the cluster.
- Service API: Allows pod applications to be exposed and reachable from outside the Kubernetes cluster. In addition, services can be used to publish applications entirely for consumption within the cluster, enhancing internal communication and service discovery (a minimal example follows this list).
- Ingress and Gateway API: Ingress provides functionality specifically for exposing HTTP applications, websites, and APIs, while the Gateway API, an add-on, offers more extensive APIs for advanced service networking.
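To make the Service API point more tangible, here's a minimal, hypothetical example (the deployment name and the nginx image are placeholders, not part of our cluster setup) showing how an application could be published inside the cluster and then exposed externally:

```bash
# Run a sample workload (nginx is used purely as a placeholder image)
kubectl create deployment hello-web --image=nginx

# Publish it inside the cluster only (ClusterIP is the default Service type)
kubectl expose deployment hello-web --port=80 --target-port=80

# Or expose it outside the cluster via a NodePort Service
kubectl expose deployment hello-web --name=hello-web-external --type=NodePort --port=80
```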
You can visit the official Kubernetes documentation for a deeper dive into the Kubernetes network model, its components, and its functionalities.
Available options
A diverse range of network plugins are available, each suited to different needs and infrastructure setups. Here are a few examples, mainly focusing on plugins designed for cloud environments:
- Amazon ECS CNI Plugins: Built for the AWS cloud environment. The Amazon ECS Agent uses them to configure the network namespaces of containers with Elastic Network Interfaces (ENIs).
- AWS VPC CNI: Designed for Kubernetes pod networking in AWS, using Elastic Network Interfaces. It ensures native compatibility and performance in the AWS environment.
- Azure CNI: Available as part of Azure Kubernetes Service (AKS) and Azure IaaS VMs. It natively extends Azure Virtual Networks to containers and is automatically deployed and configured in Kubernetes clusters created by aks-engine, supporting both Linux and Windows nodes.
These examples represent just a fraction of the network plugins tailored for specific cloud environments. For a broader perspective and more options, a comprehensive list of recommended plugins is available in the README.md file of the CNI – Container Network Interface GitHub repository.
Decision outcome
I've decided to use Calico as our Kubernetes cluster's network plugin. This decision is driven by the desire to explore new tools and learn about Kubernetes networking in practice. All of these network plugins are new to me, so I chose the first suitable option from the recommendations list.
What is Calico?
Calico is an L3/L4 networking solution that supports containers, Kubernetes clusters, virtual machines, and host-based workloads. It can be deployed in self-managed on-premise Kubernetes clusters and major cloud environments like AWS, Azure, GKE, and IBM, providing flexibility regardless of the underlying infrastructure. For Linux-based hosts, it utilizes eBPF (Extended Berkeley Packet Filter) and standard Linux iptables, offering efficient and powerful network packet filtering and manipulation. It also offers BGP (Border Gateway Protocol) and overlay networking.
For more information on Calico, including detailed documentation and setup guides, visit the Calico GitHub repository.
Configure Control Plane
We'll use a bash script to automate the installation process for our Kubernetes cluster, so the first step is to create a file and write the first lines to set up error handling and script execution options:
```bash
#!/bin/bash
set -euxo pipefail
```
These options make the script exit on any error, treat unset variables as errors, print each command as it runs for easier debugging, and fail a pipeline when any command within it fails, providing a strong foundation for the whole script.
Installing tools
Continuing with the setup, the next step involves installing additional tools necessary for node configuration:
```bash
sudo apt-get update -y
sudo apt-get install -y jq ufw apt-transport-https ca-certificates
```
- jq: a lightweight command-line JSON processor, used to extract the node IP address.
- ufw: for setting up firewall rules.
- apt-transport-https: allows the use of repositories accessed via the HTTPS protocol.
- ca-certificates: enables the system to check the validity of SSL/TLS certificates, which is essential for secure communication.
In the next step, let's declare a couple of variables:
```bash
KUBERNETES_VERSION="1.28"
CRIO_OS="xUbuntu_22.04"
CRIO_VERSION="1.28"
NET_INTERFACE="eth1"
NODE_IP="$(ip --json addr show $NET_INTERFACE | jq -r '.[0].addr_info[] | select(.family == "inet") | .local')"
NODE_NAME=$(hostname -s)
POD_CIDR="192.168.0.0/16"
```
Our variables store the versions of the Kubernetes and CRI-O packages we want to install, alongside the OS version required by the CRI-O setup. Additionally, we need the network interface and the IP address of the current node. We defined in the blueprint that the master node will use the eth1 network interface with the static IP address 172.16.0.100. We'll use the network address 192.168.0.0/16 for the container network. You can use any CIDR here, but remember that the selected network shouldn't overlap with the cluster network.
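As an optional sanity check (not strictly required for the script), you can echo the derived values to confirm that the interface name and IP detection work on your node before moving on:

```bash
# Optional: confirm the values detected for this node
echo "Node IP:   $NODE_IP"
echo "Node name: $NODE_NAME"
echo "Pod CIDR:  $POD_CIDR"
```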
Configure prerequisites
Disable swap
As part of the node setup for our Kubernetes cluster, it's important to disable swap memory. Although Kubernetes 1.28 has introduced beta support for swap on Linux nodes, I've chosen to proceed without it, given its beta status and the need for a more stable environment.
Here's how we disable swap in our bash script:
```bash
sudo swapoff -a
sudo sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
```
The first command turns off swap for the current session, while the second modifies the /etc/fstab file, which defines the filesystems to be mounted at startup. The sed command comments out any swap entries, preventing swap from being turned on automatically when the node reboots.
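If you'd like to verify the result, the following optional commands should show that no swap device remains active:

```bash
# swapon prints nothing when no swap device is active
swapon --show

# The Swap line reported by free should show 0B
free -h | grep -i swap
```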
Configure firewall
Next, we'll set up the necessary firewall rules using ufw in our bash script:
```bash
sudo ufw allow 22/tcp          # SSH (optional)
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd server client API
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10259/tcp       # kube-scheduler
sudo ufw allow 10257/tcp       # kube-controller-manager
```
After setting up the firewall rules, the next step is to enable UFW. This will activate the firewall rules we've just defined, ensuring the security of this node:
sudo sed -i "s/ENABLED=no/ENABLED=yes/g" /etc/ufw/ufw.conf sudo ufw enable
The first command modifies the UFW configuration file to enable the firewall on startup, and the second one activates UFW immediately. If you are connected via SSH, you might receive a prompt warning that the SSH session may be interrupted. However, since we've allowed port 22 through UFW (sudo ufw allow 22/tcp), your SSH connection should remain stable, and you can safely proceed with the script.
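Afterwards, an optional check confirms that the firewall is active and the expected ports are allowed:

```bash
# Confirm UFW is active and lists the allowed ports
sudo ufw status verbose
```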
Loading necessary kernel modules
Next, the following segment of the bash script is dedicated to loading necessary kernel modules for Kubernetes. This step ensures that the underlying system supports essential networking and overlay functionalities Kubernetes requires:
```bash
cat << EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
```
The first command block creates a file named k8s.conf under /etc/modules-load.d/. This file specifies kernel modules to be loaded at boot. The listed modules, overlay and br_netfilter, are required by the container runtime and Kubernetes networking:

- overlay: provides the OverlayFS filesystem that the container runtime uses for layered container images.
- br_netfilter: required for bridged traffic to pass through iptables rules, which is crucial for network policies and kube-proxy in Kubernetes.
The sudo modprobe commands immediately load these modules into the running kernel, enabling their functionality without needing a reboot.
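Optionally, you can confirm that both modules are loaded:

```bash
# Both modules should appear in the output
lsmod | grep -E 'overlay|br_netfilter'
```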
Configure networking
The next part of the setup script configures system parameters to ensure proper network traffic handling by Kubernetes:
```bash
cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system
```
- net.bridge.bridge-nf-call-iptables = 1: ensures that iptables sees bridged traffic, which is necessary for the iptables-based kube-proxy to work correctly.
- net.ipv4.ip_forward = 1: enables IP forwarding, allowing the Linux kernel to pass traffic from one network interface to another.
- net.bridge.bridge-nf-call-ip6tables = 1: the same as the first setting, but for IPv6, ensuring that bridged IPv6 traffic is handled correctly.
The sudo sysctl --system command applies these settings by reloading all system-wide sysctl settings from the files in /etc/sysctl.d/, /etc/sysctl.conf, and the other standard configuration locations.
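An optional verification confirms that the values took effect:

```bash
# Each parameter should be reported with a value of 1
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables
```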
Installing CRI-O
Configure GPG keys
The next step is installing CRI-O, a lightweight container runtime I chose earlier. The first stage of this process involves adding the necessary repository keys:
```bash
curl -fsSL https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$CRIO_OS/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/libcontainers-archive-keyring.gpg
curl -fsSL https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$CRIO_OS/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/libcontainers-crio-archive-keyring.gpg
```
These commands download the GPG keys for the CRI-O repositories and store them under /usr/share/keyrings/ in a format that the system package manager can recognize.
Configure package repositories
Following the addition of the GPG keys, the next step in installing CRI-O is to add the appropriate software repositories to our system. This is done using the following commands in our bash script:
echo "deb [signed-by=/usr/share/keyrings/libcontainers-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$CRIO_OS/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list echo "deb [signed-by=/usr/share/keyrings/libcontainers-crio-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$CRIO_OS/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list
These commands add the official CRI-O repositories to the system's package sources. Both lines use the signed-by option to specify the GPG keys we previously added. This ensures that all packages downloaded from these repositories are authenticated and secure.
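If you want to double-check the repository configuration before installing (an optional step outside the main script), refresh the package index and inspect the candidate package:

```bash
# Confirm the cri-o package is now available from the new repository
sudo apt-get update -y
apt-cache policy cri-o
```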
Install CRI-O
With the repository keys added and the repositories configured, we can install CRI-O on our Kubernetes node. This is accomplished using the following commands in our bash script:
```bash
sudo apt-get update -y
sudo apt-get install -y cri-o cri-o-runc
```
After successfully installing CRI-O, the final step is to ensure that the CRI-O service is properly started and enabled to run on boot. This is achieved with the following commands:
```bash
sudo systemctl daemon-reload
sudo systemctl enable crio --now
```
The first command reloads the systemd manager configuration. It's necessary to do this after installing new services or making changes to a service's configuration so that systemd is aware of these modifications. The second command enables the CRI-O service to start automatically at boot and also starts it immediately (the --now flag).
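An optional check confirms that the runtime is up before we continue:

```bash
# The service should be reported as enabled and active (running)
sudo systemctl is-enabled crio
sudo systemctl status crio --no-pager
```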
Installing Kubernetes components
Configure GPG keys
The next phase in setting up our Kubernetes node is to install the necessary Kubernetes tools, starting with importing the official Kubernetes GPG key. Similar to the CRI-O GPG key import, we achieve this in our script with the following command:
```bash
curl -fsSL https://pkgs.k8s.io/core:/stable:/v$KUBERNETES_VERSION/deb/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/kubernetes-apt-keyring.gpg
```
This line downloads the key and saves it under /usr/share/keyrings/ in a format recognized by the system's package manager.
Configure package repositories
Similar to the CRI-O setup, the next step is to add the official Kubernetes package repository to our system:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v$KUBERNETES_VERSION/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Install Kubernetes tools
With everything in place, we're ready to install the Kubernetes tools on our node using the following commands in our bash script:
```bash
sudo apt-get update -y
sudo apt-get install -y kubelet kubectl kubeadm
```
We'll install kubelet, the component that runs on every node in the Kubernetes cluster and manages things like starting pods and containers; kubectl, the command-line tool for interacting with the Kubernetes cluster; and, last but not least, kubeadm, the tool for bootstrapping our Kubernetes cluster.
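Optionally, and following common practice from the kubeadm installation guide, you can pin these packages so that unattended upgrades don't move the cluster to a new version unexpectedly, and verify the installed versions:

```bash
# Prevent the Kubernetes packages from being upgraded unintentionally
sudo apt-mark hold kubelet kubeadm kubectl

# Quick sanity check of the installed versions
kubeadm version -o short
kubectl version --client
```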
Configure node IP for kubelet
Before bootstrapping the control plane node in our Kubernetes cluster, it's important to configure the Kubelet with the specific IP address of the node. This configuration ensures that the Kubelet communicates using the correct network interface. The following command in our bash script accomplishes this:
```bash
cat << EOF | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=$NODE_IP
EOF
```
This line sets an extra argument for the Kubelet service to specify the node's IP address.
Fetch images for Kubernetes control plane's components
Now that we've configured the necessary settings, we're ready to pull the container images required to bootstrap the control plane of our Kubernetes cluster. Interestingly, in Kubernetes, the control plane components themselves run as containers managed by Kubernetes. Let's add this to our script:
```bash
sudo kubeadm config images pull
```
This command uses kubeadm, a tool provided by Kubernetes, to pull all the images necessary for setting up the control plane. According to the blueprint, we need to download images for:

- kube-apiserver
- kube-controller-manager
- kube-scheduler
- kube-proxy
- etcd
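If you're curious which images and tags kubeadm will fetch before actually downloading them, you can list them first (the exact tags depend on the patch release available at the time):

```bash
# Print the images kubeadm would pull for this Kubernetes version
sudo kubeadm config images list
```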
Bootstrap the cluster
With all the necessary preparations complete, we can now bootstrap the first node of our Kubernetes cluster, the control plane node. This is achieved using the kubeadm init command:
```bash
sudo kubeadm init \
  --apiserver-advertise-address="$NODE_IP" \
  --apiserver-cert-extra-sans="$NODE_IP" \
  --pod-network-cidr="$POD_CIDR" \
  --node-name "$NODE_NAME"
```
Quick Breakdown:
- --apiserver-advertise-address: sets the IP address the API server advertises; in this case, the 172.16.0.100 IP address will be used.
- --apiserver-cert-extra-sans: adds the node's IP to the API server's SSL certificate.
- --pod-network-cidr: specifies the network range for the pods.
- --node-name: sets the name of the control plane node.
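When kubeadm init completes, it prints a kubeadm join command containing a bootstrap token and certificate hash that we'll need later for the worker nodes. If that output gets lost, it can be regenerated at any time on the control plane node:

```bash
# Re-print a join command with a fresh bootstrap token
sudo kubeadm token create --print-join-command
```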
After initializing the control plane, the next step is to set up kubectl, so let's configure kubectl to use the administrator credentials generated by kubeadm init:
mkdir -p "$HOME"/.kube sudo cp -i /etc/kubernetes/admin.conf "$HOME"/.kube/config sudo chown "$(id -u)":"$(id -g)" "$HOME"/.kube/config
These instructions create a .kube directory in the user's home for storing the kubectl configuration, copy the Kubernetes admin credentials (admin.conf) into this directory, and adjust the file ownership to the current user for access without sudo.
Now, we should be able to check if our cluster works correctly by typing kubectl get nodes in the command line. This command should return the list of nodes in the cluster, which for now contains a single node.
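Keep in mind that the node will typically report a NotReady status until a network plugin is installed, which we'll do next:

```bash
# The control plane node usually shows NotReady until the CNI plugin is installed
kubectl get nodes -o wide
```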
Installing Calico
To set up Calico as the network plugin for our Kubernetes cluster, we can deploy it directly within the cluster using kubectl. This involves applying configuration manifests from the Calico project:
```bash
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
```
The first command deploys the Tigera operator, a Kubernetes-native application that manages Calico installations and upgrades, and the second command applies the Calico-specific custom resources that configure the installation, including networking settings.
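To confirm the installation, we can watch the Calico pods come up; once they are all running, the control plane node should transition to the Ready state:

```bash
# Watch the Calico components start (press Ctrl+C once all pods are Running)
watch kubectl get pods -n calico-system

# The control plane node should now report Ready
kubectl get nodes
```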
Summary
We achieved a lot in this blog post of my Building Kubernetes Cluster series! We successfully navigated the intricate process of setting up the control plane node for our Kubernetes cluster. Here's a brief recap of what we've achieved:
- We began by writing a bash script to automate the installation process, ensuring a smooth setup.
- Next, we disabled the swap to align with Kubernetes' best practices and to ensure stable performance.
- Then, we set up firewall rules using UFW to secure the node and facilitate proper communication within the cluster.
- Also, we added kernel modules like overlay and br_netfilter to support Kubernetes networking.
- Next, we configured system parameters to align with Kubernetes' networking model, ensuring correct traffic handling.
- With everything set up, we installed the CRI-O container runtime that prepared the stage for running Kubernetes pods.
- Then, we installed the necessary Kubernetes tools: kubelet, kubectl, and kubeadm.
- Next, we achieved a significant milestone! We successfully bootstrapped the control plane node using kubeadm, based on the blueprint from the first article of this series.
- Finally, we installed Calico as our network plugin.
With these steps, our Kubernetes control plane is now operational, laying a solid foundation for our cluster!
Looking Ahead
In my next post, we'll take the next giant leap by configuring and joining worker nodes to the cluster. This will enable us to deploy applications and truly harness the power of Kubernetes. Stay tuned for more insights and hands-on guidance in my Kubernetes journey!