Introduction
We've reached a pivotal moment in our ongoing journey through the Build Kubernetes cluster series: adding worker nodes to our cluster. Building on the previous post, where we successfully established the control plane, we're now ready to expand the cluster's capacity by automating the setup of worker nodes.
In this post, I'll focus on constructing a bash script to streamline the setup of worker nodes. The process will be relatively straightforward, as many configurations and steps are similar to those we applied to the master node. We'll efficiently replicate and adapt these settings to bring our worker nodes online.
Join me as we take the next step in our Kubernetes adventure, enhancing our cluster's capacity and readiness for deploying robust applications!
Configure worker node
As we progress in our series, this article focuses on adding worker nodes to our cluster. This part explains the adaptation of the setup script for configuring worker nodes, highlighting the key differences from the master node setup and maintaining consistency in foundational configurations. By focusing on these adjustments, we equip our worker nodes with the necessary setup, ensuring they integrate seamlessly into the Kubernetes cluster, ready to handle workloads efficiently.
Configure prerequisites
Building upon the groundwork laid in the previous post, we'll script the prerequisites for worker nodes. Therefore, I'll emphasize only the differences in configuration compared to the master node. The aim is to automate and streamline the setup process.
Here's the first part of the script for worker node configuration:
#!/bin/bash
set -euxo pipefail

sudo apt-get update -y
sudo apt-get install -y jq ufw apt-transport-https ca-certificates

KUBERNETES_VERSION="1.28"
CRIO_OS="xUbuntu_22.04"
CRIO_VERSION="1.28"
NET_INTERFACE="eth1"
NODE_IP="$(ip --json addr show $NET_INTERFACE | jq -r '.[0].addr_info[] | select(.family == "inet") | .local')"

# Disable swap now and keep it disabled after reboots
sudo swapoff -a
sudo sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab

# Firewall rules required on a worker node
sudo ufw allow 22/tcp           # ssh (optional)
sudo ufw allow 10250/tcp        # kubelet API
sudo ufw allow 30000:32767/tcp  # services node ports pool
sudo sed -i "s/ENABLED=no/ENABLED=yes/g" /etc/ufw/ufw.conf
sudo ufw enable

# Kernel modules required by the container runtime and cluster networking
cat << EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl parameters required by Kubernetes networking
cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system
For worker nodes, we have a simplified firewall setup. I excluded the ports only necessary for the control plane and included the range 30000:32767/tcp, designated for NodePort services. This range is crucial for exposing services running on the cluster.
The rest of the script mirrors our master node setup, including disabling SWAP, updating system packages, and configuring network parameters. This consistency ensures that all nodes in our cluster operate cohesively.
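If you want to confirm these prerequisites took effect before moving on, a few optional checks (not part of the script itself) can be run on the worker node:

# Should print nothing once swap is disabled
swapon --show

# The required kernel modules should be loaded
lsmod | grep -E 'overlay|br_netfilter'

# Bridged traffic handling and IP forwarding should both report 1
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward

# The firewall should list 22, 10250 and the NodePort range
sudo ufw status verbose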
Installing CRI-O
Continuing with our Kubernetes cluster's setup, installing the CRI-O container runtime on the worker nodes follows the same procedure as we did for the master node. This step is vital as CRI-O on worker nodes will run the applications deployed in the cluster.
The installation commands for CRI-O remain the same, ensuring consistency across all nodes in the cluster:
curl -fsSL https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$CRIO_OS/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/libcontainers-archive-keyring.gpg
curl -fsSL https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$CRIO_OS/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/libcontainers-crio-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/libcontainers-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$CRIO_OS/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb [signed-by=/usr/share/keyrings/libcontainers-crio-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$CRIO_OS/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list

sudo apt-get update -y
sudo apt-get install -y cri-o cri-o-runc

sudo systemctl daemon-reload
sudo systemctl enable crio --now
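Before continuing, it's worth confirming that the runtime actually came up. These optional checks are not part of the script, but they catch problems early:

# The service should report active (running)
sudo systemctl status crio --no-pager

# The CRI socket that kubeadm and the kubelet will talk to
ls -l /var/run/crio/crio.sock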
In a typical Kubernetes setup, the worker nodes are the primary hosts for running applications. While the master node orchestrates and manages the cluster, the worker nodes do the heavy lifting by running the deployed applications. The instance of CRI-O on each worker node will manage the lifecycle of the containers that constitute these applications, making it a critical component. Though some deployments can run on the master node, and single-node clusters are possible, we focus on creating a more production-like environment. This approach allows us to delve deeper into the practical aspects of Kubernetes management.
Installing Kubernetes components
Installing the necessary Kubernetes tools is crucial as we progress in setting up our worker nodes for the Kubernetes cluster. For the worker nodes, we focus on installing kubelet and kubeadm, while skipping kubectl, since cluster management is centralized at the master node.
Here's the script segment for installing these tools:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v$KUBERNETES_VERSION/deb/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v$KUBERNETES_VERSION/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm

cat << EOF | sudo tee /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=$NODE_IP
EOF
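Optionally, you can pin the installed packages so an unattended upgrade doesn't move the node to a different Kubernetes version, and verify what was installed. Neither step is part of the script above:

# Prevent accidental upgrades of the Kubernetes packages
sudo apt-mark hold kubelet kubeadm

# Confirm the installed versions and the kubelet node IP override
kubeadm version -o short
kubelet --version
cat /etc/default/kubelet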
Join worker node to the cluster
With the script now covering all the necessary configuration and tooling for the worker nodes, the next step is to run it on each worker node so that every node is properly prepared. This section focuses on the commands required to actually join the prepared nodes to the cluster.
Generate join token
We first need to generate a join command from the control plane node to integrate our worker nodes into the Kubernetes cluster. This command will provide the necessary information for worker nodes to connect securely to the cluster.
Run the following command on the control plane (master) node:
sudo kubeadm token create --print-join-command
This command generates a new token and prints out the complete kubeadm join command, including the token and the discovery-token-ca-cert-hash required for a worker node to join the cluster. This join command should be executed on each worker node after configuring them with the necessary Kubernetes tools and settings. The example output may look like this:
kubeadm join 172.16.0.100:6443 --token n0q...zn --discovery-token-ca-cert-hash sha256:e365e...3661e2
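Tokens created this way are short-lived (24 hours by default), so if some worker nodes will be provisioned later, you can either list the tokens that are still valid or create one with an explicit TTL. For example:

# Show tokens that are still valid
sudo kubeadm token list

# Create a join command with a token valid for 2 hours
sudo kubeadm token create --ttl 2h --print-join-command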
Join worker
After generating the join command from the control plane, the next step is to execute this command on each of our worker nodes. This process integrates the nodes into the Kubernetes cluster:
sudo kubeadm join 172.16.0.100:6443 --token n0q...zn --discovery-token-ca-cert-hash sha256:e365e...3661e2
The command will perform several pre-flight checks and configurations. Once successfully executed, it will confirm that the node has joined the cluster, indicating the process is complete:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
To verify that the worker node has joined the cluster, we can run kubectl get nodes on the control-plane node. The newly added worker node should be listed with its status.
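Assuming the node names used in this series (node names, ages, and patch versions below are illustrative), the output will look roughly like this; the worker shows no role yet because we haven't labeled it:

kubectl get nodes

NAME           STATUS   ROLES           AGE   VERSION
k8s-master-1   Ready    control-plane   2d    v1.28.x
k8s-worker-1   Ready    <none>          2m    v1.28.x

If the new node briefly reports NotReady, give the CNI plugin a moment to start its pod on the freshly joined node.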
Label node
The final step in integrating our worker nodes into the Kubernetes cluster is to assign them appropriate labels. Labeling nodes is an important practice in Kubernetes, as it allows for efficient organization and management of nodes, especially when scheduling pods and deploying applications.
To label each newly added worker node, execute the following command from the control plane for each worker node:
kubectl label node k8s-worker-1 node-role.kubernetes.io/worker=worker
Replace k8s-worker-1 with the actual name of the worker node you want to label. This name can be found by running kubectl get nodes on the control plane, or in the post about the infrastructure.
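Once the label is applied, kubectl get nodes reflects it in the ROLES column, since that column is derived from the node-role.kubernetes.io/* labels. The output below is illustrative:

kubectl get nodes k8s-worker-1

NAME           STATUS   ROLES    AGE   VERSION
k8s-worker-1   Ready    worker   5m    v1.28.x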
Labels help categorize nodes into roles, making targeting specific nodes for certain deployments or tasks easier. In cluster operations, labels can be used in node selectors, affinities, and anti-affinities to control where pods should or shouldn't be scheduled.
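As a minimal sketch of that idea, a nodeSelector can pin a workload to nodes carrying the label we just applied. The nginx pod below is purely hypothetical and is applied from the control plane, where kubectl is available:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-worker
spec:
  # Only schedule onto nodes labeled node-role.kubernetes.io/worker=worker
  nodeSelector:
    node-role.kubernetes.io/worker: worker
  containers:
    - name: nginx
      image: nginx:1.25
EOF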
Summary
In this insightful article of our series, we've successfully navigated the process of adding worker nodes to our Kubernetes cluster. Let's recap the key steps and milestones achieved:
Configuring prerequisites
We began by setting up the essential tools and configurations on the worker nodes, leveraging the script automation and knowledge from setting up the control plane.
Installing CRI-O
The CRI-O container runtime was installed on the worker nodes, a crucial component for managing containers in our cluster.
Kubernetes tools installation
Essential tools like kubelet and kubeadm were installed. These tools are vital for integrating the worker nodes into the cluster.
Joining the cluster
A unique join command was generated from the control plane and executed on each worker node, securely adding them to the cluster.
Verifying node integration
Post-joining, we verified the successful addition of the worker nodes to the cluster using the kubectl get nodes command on the control plane.
Labeling worker nodes
The final step involved labeling each worker node, enhancing the organization and management of our cluster for future deployments.
With these steps, our Kubernetes cluster now boasts additional worker nodes, enhancing its capacity and capability for running diverse applications. This expansion brings us closer to a robust, production-like environment, offering valuable insights into the practical aspects of Kubernetes management.