Basic setup - Kubernetes with Cilium, 1 master, 1 worker



//Add the necessary repositories

//kubernetes repo

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/

enabled=1

gpgcheck=1

gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key

EOF


//docker repo

dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo



[root@master boobalan]# dnf repolist

repo id                                                                        repo name

appstream                                                                      Rocky Linux 9 - AppStream

baseos                                                                         Rocky Linux 9 - BaseOS

docker-ce-stable                                                               Docker CE Stable - x86_64

extras                                                                         Rocky Linux 9 - Extras

kubernetes                                                                     Kubernetes


//install docker and containerd on both master and worker


#dnf install docker-ce docker-ce-cli containerd.io -y


//installing Docker automatically creates a group called 'docker'; we can check with #cat /etc/group

//now add the user to the docker group


#usermod -aG docker $USER && newgrp docker


//'newgrp docker' applies the group change immediately in the current terminal session; otherwise a logout and login would be required.


//start and enable docker and containerd


#systemctl start docker && systemctl enable docker

#systemctl start containerd && systemctl enable containerd
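//quick sanity check that both runtimes are up (a minimal sketch; the version numbers will differ on your machine)

#docker --version

#systemctl is-active docker containerd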


//install kubelet - node agent service, kubeadm - cluster initialization, kubectl - CLI commands, kubernetes-cni - container network interface plugins

Component - Purpose

kubelet - Runs on each node to manage containers

kubeadm - Bootstraps the Kubernetes cluster

kubectl - CLI tool to interact with the cluster

kubernetes-cni - Provides networking support (but needs a CNI plugin)


#dnf install -y kubelet kubeadm kubectl kubernetes-cni


//start and enable the service


[root@master boobalan]# systemctl start kubelet.service

[root@master boobalan]# systemctl enable kubelet.service


//the kubelet service failed to start with the error below; the fix follows

Error : rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService

//This happens when containerd isn't properly configured as a CRI runtime for Kubernetes.


[root@master boobalan]# containerd config default > /etc/containerd/config.toml

[root@master boobalan]# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

[root@master boobalan]# systemctl restart containerd

[root@master boobalan]# crictl info | grep runtimeType

bash: crictl: command not found

#yum install -y cri-tools    -----> the CRI (Container Runtime Interface) CLI tool, crictl

[root@master boobalan]# crictl info | grep runtimeType

WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.

ERRO[0000] validate service connection: validate CRI v1 runtime API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory"

        "runtimeType": "",

        "runtimeType": "",

          "runtimeType": "io.containerd.runc.v2",


[root@master boobalan]# systemctl restart kubelet

[root@master boobalan]# systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent

     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; preset: disabled)

     Active: active (running) since Sat 2025-02-22 23:54:58 CET; 16s ago

       Docs: https://kubernetes.io/docs/

   Main PID: 5706 (kubelet)

      Tasks: 10 (limit: 10881)

     Memory: 22.2M

        CPU: 1.304s

     CGroup: /system.slice/kubelet.service

             └─5706 /usr/bin/kubelet



[root@master boobalan]# kubectl version --client

Client Version: v1.29.14

Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3


//install CNI plugin - cilium

//kube-proxy is not strictly required when Cilium is the CNI, since Cilium can take over routing service traffic to the correct pod (kube-proxy replacement mode); in this setup kubeadm still installs kube-proxy, as seen further below

[root@master boobalan]# CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)

[root@master boobalan]# CLI_ARCH=amd64

[root@master boobalan]# if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi

[root@master boobalan]# curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0

100 53.4M  100 53.4M    0     0  3006k      0  0:00:18  0:00:18 --:--:-- 3779k

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0

100    92  100    92    0     0    141      0 --:--:-- --:--:-- --:--:--   141

[root@master boobalan]# sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum

cilium-linux-amd64.tar.gz: OK

[root@master boobalan]# sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin

cilium

[root@master boobalan]# rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

rm: remove regular file 'cilium-linux-amd64.tar.gz'? yes

rm: remove regular file 'cilium-linux-amd64.tar.gz.sha256sum'? yes

[root@master boobalan]# cilium install --version 1.14.2

ℹ️  Using Cilium version 1.14.2

⏭️ Skipping auto kube-proxy detection


Error: Unable to install Cilium: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp [::1]:8080: connect: connection refused

[root@master boobalan]# cilium version

cilium-cli: v0.16.24 compiled with go1.23.4 on linux/amd64

cilium image (default): v1.16.6

cilium image (stable): v1.17.1

cilium image (running): unknown. Unable to obtain cilium version. Reason: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp [::1]:8080: connect: connection refused


[root@master boobalan]# cilium status

    /¯¯\

 /¯¯\__/¯¯\    Cilium:             1 errors

 \__/¯¯\__/    Operator:           1 errors

 /¯¯\__/¯¯\    Envoy DaemonSet:    1 errors

 \__/¯¯\__/    Hubble Relay:       1 warnings

    \__/       ClusterMesh:        1 warnings


Cluster Pods:          0/0 managed by Cilium

Helm chart version:

Errors:                cilium                   cilium                   Get "http://localhost:8080/apis/apps/v1/namespaces/kube-system/daemonsets/cilium": dial tcp [::1]:8080: connect: connection refused

                       cilium-envoy             cilium-envoy             Get "http://localhost:8080/apis/apps/v1/namespaces/kube-system/daemonsets/cilium-envoy": dial tcp [::1]:8080: connect: connection refused

                       cilium-operator          cilium-operator          Get "http://localhost:8080/apis/apps/v1/namespaces/kube-system/deployments/cilium-operator": dial tcp [::1]:8080: connect: connection refused

Warnings:              clustermesh-apiserver    clustermesh-apiserver    clustermesh is not deployed

                       hubble-relay             hubble-relay             hubble relay is not deployed

                       hubble-ui                hubble-ui                hubble ui is not deployed

status check failed: [Get "http://localhost:8080/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dcilium": dial tcp [::1]:8080: connect: connection refused, Get "http://localhost:8080/apis/apps/v1/namespaces/kube-system/daemonsets/cilium-envoy": dial tcp [::1]:8080: connect: connection refused, Get "http://localhost:8080/api/v1/namespaces/kube-system/pods?labelSelector=name%3Dcilium-envoy": dial tcp [::1]:8080: connect: connection refused, Get "http://localhost:8080/apis/apps/v1/namespaces/kube-system/daemonsets/cilium": dial tcp [::1]:8080: connect: connection refused, Get "http://localhost:8080/apis/apps/v1/namespaces/kube-system/deployments/cilium-operator": dial tcp [::1]:8080: connect: connection refused, Get "http://localhost:8080/api/v1/pods": dial tcp [::1]:8080: connect: connection refused, unable to retrieve ConfigMap "cilium-config": Get "http://localhost:8080/api/v1/namespaces/kube-system/configmaps/cilium-config": dial tcp [::1]:8080: connect: connection refused]


----------------------------------X---------------------------------------


The errors above are expected: the cluster had not been initialised yet, so the Cilium CLI had no API server to talk to. Now initialise the cluster. Because we are using Cilium we don't specify a '--pod-network-cidr=10.244.0.0/16' range and instead just run the command below.


#sudo kubeadm init --pod-network-cidr=10.244.0.0/16  --> I am not using this

The --pod-network-cidr=10.244.0.0/16 range is for internal pod communication within the Kubernetes cluster. The 10.x.x.x range (or whichever CIDR you choose) is used solely for the pods, meaning:


Pods are assigned IPs from this range, and they can communicate with each other within the cluster using these internal IPs.

These IPs are not directly accessible from outside the cluster by default.

External Access to Pods

To allow external access to your pods (outside the cluster), you need to use Kubernetes Services (NodePort or LoadBalancer) or an Ingress:


NodePort: Exposes a Service on a specific port on every node in the cluster; the pod's internal IP (like 10.x.x.x) remains inaccessible directly from outside the cluster.

LoadBalancer: Provisions a load balancer in cloud environments to expose the service externally.

Ingress: Used to route external HTTP(S) traffic to services within the cluster based on URL paths or hostnames.

So, the internal pod IP range is just for communication between pods. External access requires additional configurations like Services or Ingress rules. 🚀
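//for illustration only (not part of this setup), a minimal NodePort Service sketch; the name 'web-nodeport', the selector 'app: web', and the ports are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport          # hypothetical service name
spec:
  type: NodePort
  selector:
    app: web                  # matches pods carrying this label
  ports:
    - port: 80                # ClusterIP port inside the cluster
      targetPort: 8080        # container port on the pod
      nodePort: 30080         # exposed on every node's IP (range 30000-32767)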



//I am going with this


#kubeadm init


//kubeadm init failed with a preflight error -   [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...

To see the stack trace of this error execute with --v=5 or higher


//fix

#modprobe br_netfilter    //Kubernetes needs this module so iptables can see bridged network packets

#echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

//make the module load permanent across reboots

#cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

//and make the required sysctl settings permanent as well

#cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

#sysctl --system
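//verify the module and sysctl values actually took effect (all three values should print 1):

#lsmod | grep br_netfilter

#sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward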


[root@master boobalan]# kubeadm init

I0223 00:09:10.232417    6170 version.go:256] remote version is much newer: v1.32.2; falling back to: stable-1.29

[init] Using Kubernetes version: v1.29.14

[preflight] Running pre-flight checks

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

W0223 00:10:09.219917    6170 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.

[certs] Using certificateDir folder "/etc/kubernetes/pki"

[certs] Generating "ca" certificate and key

[certs] Generating "apiserver" certificate and key

[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master.boobi.com] and IPs [10.96.0.1 192.168.198.140]

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] Generating "etcd/ca" certificate and key

[certs] Generating "etcd/server" certificate and key

[certs] etcd/server serving cert is signed for DNS names [localhost master.boobi.com] and IPs [192.168.198.140 127.0.0.1 ::1]

[certs] Generating "etcd/peer" certificate and key

[certs] etcd/peer serving cert is signed for DNS names [localhost master.boobi.com] and IPs [192.168.198.140 127.0.0.1 ::1]

[certs] Generating "etcd/healthcheck-client" certificate and key

[certs] Generating "apiserver-etcd-client" certificate and key

[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Writing "admin.conf" kubeconfig file

[kubeconfig] Writing "super-admin.conf" kubeconfig file

[kubeconfig] Writing "kubelet.conf" kubeconfig file

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

[control-plane] Creating static Pod manifest for "kube-scheduler"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[apiclient] All control plane components are healthy after 29.074381 seconds

[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster

[upload-certs] Skipping phase. Please see --upload-certs

[mark-control-plane] Marking the node master.boobi.com as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]

[mark-control-plane] Marking the node master.boobi.com as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

[bootstrap-token] Using token: jhe9ix.n01gi1nmhtaxv8ba

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy


Your Kubernetes control-plane has initialized successfully!


To start using your cluster, you need to run the following as a regular user:


  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config


Alternatively, if you are the root user, you can run:


  export KUBECONFIG=/etc/kubernetes/admin.conf


You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/


Then you can join any number of worker nodes by running the following on each as root:


kubeadm join 192.168.198.140:6443 --token jhe9ix.n01gi1nmhtaxv8ba \

        --discovery-token-ca-cert-hash sha256:2c475a5bab34d0c7c4151f3f473a73e7886110e4ed5ea2d5ded686d112b7f7cf


////done

//// now follow the steps shown above

[root@master boobalan]# mkdir -p $HOME/.kube

[root@master boobalan]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@master boobalan]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@master boobalan]# export KUBECONFIG=/etc/kubernetes/admin.conf
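//the export only lasts for the current shell; a small sketch to make it persistent for root (assumes bash and the default admin.conf path):

#echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc

#source ~/.bashrc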

[root@master boobalan]# kubectl get nodes

NAME               STATUS     ROLES           AGE   VERSION

master.boobi.com   NotReady   control-plane   10m   v1.29.14



///the node is NotReady because networking is not set up yet; install Cilium here using the steps shown earlier


///right after the install the master is still not Ready

[root@master boobalan]# cilium status

    /¯¯\

 /¯¯\__/¯¯\    Cilium:             1 errors, 1 warnings

 \__/¯¯\__/    Operator:           1 errors, 1 warnings

 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)

 \__/¯¯\__/    Hubble Relay:       disabled

    \__/       ClusterMesh:        disabled


DaemonSet              cilium             Desired: 1, Unavailable: 1/1

Deployment             cilium-operator    Desired: 1, Unavailable: 1/1

Containers:            cilium             Pending: 1

                       cilium-operator    Pending: 1

Cluster Pods:          0/2 managed by Cilium

Helm chart version:    1.14.2

Image versions         cilium             quay.io/cilium/cilium:v1.14.2@sha256:6263f3a3d5d63b267b538298dbeb5ae87da3efacf09a2c620446c873ba807d35: 1

                       cilium-operator    quay.io/cilium/operator-generic:v1.14.2@sha256:52f70250dea22e506959439a7c4ea31b10fe8375db62f5c27ab746e3a2af866d: 1

Errors:                cilium             cilium                              1 pods of DaemonSet cilium are not ready

                       cilium-operator    cilium-operator                     1 pods of Deployment cilium-operator are not ready

Warnings:              cilium             cilium-jxbg5                        pod is pending

                       cilium-operator    cilium-operator-5db6b54b45-kxqjs    pod is pending

[root@master boobalan]# kubectl get nodes

NAME               STATUS     ROLES           AGE   VERSION

master.boobi.com   NotReady   control-plane   19m   v1.29.14


////error 

[root@master boobalan]# journalctl -u kubelet -f

Feb 23 00:33:45 master.boobi.com kubelet[6933]: E0223 00:33:45.527608    6933 kubelet.go:2911] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"


[root@master boobalan]# kubectl get pods -n kube-system -l k8s-app=cilium

NAME           READY   STATUS    RESTARTS   AGE

cilium-jxbg5   1/1     Running   0          6m57s


[root@master boobalan]# kubectl get namespaces

NAME              STATUS   AGE

default           Active   43m

kube-node-lease   Active   43m

kube-public       Active   43m

kube-system       Active   43m


[root@master boobalan]# kubectl get pods -n kube-system

NAME                                       READY   STATUS    RESTARTS        AGE

cilium-jxbg5                               1/1     Running   0               10m

cilium-operator-5db6b54b45-kxqjs           1/1     Running   1 (3m7s ago)    10m

coredns-76f75df574-lkx6f                   1/1     Running   0               29m

coredns-76f75df574-rc2fg                   1/1     Running   0               28m

etcd-master.boobi.com                      1/1     Running   0               29m

kube-apiserver-master.boobi.com            1/1     Running   0               29m

kube-controller-manager-master.boobi.com   1/1     Running   2 (4m11s ago)   29m

kube-proxy-bg2sq                           1/1     Running   0               28m

kube-scheduler-master.boobi.com            1/1     Running   3               29m


[root@master boobalan]# kubectl get nodes

NAME               STATUS   ROLES           AGE   VERSION

master.boobi.com   Ready    control-plane   26m   v1.29.14


//the node was NotReady only while the cilium pods in kube-system were still coming up; once they were ready, the master turned Ready.


[root@master boobalan]# cilium status

    /¯¯\

 /¯¯\__/¯¯\    Cilium:             OK

 \__/¯¯\__/    Operator:           OK

 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)

 \__/¯¯\__/    Hubble Relay:       disabled

    \__/       ClusterMesh:        disabled


DaemonSet              cilium             Desired: 1, Ready: 1/1, Available: 1/1

Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1

Containers:            cilium             Running: 1

                       cilium-operator    Running: 1

Cluster Pods:          2/2 managed by Cilium

Helm chart version:    1.14.2

Image versions         cilium             quay.io/cilium/cilium:v1.14.2@sha256:6263f3a3d5d63b267b538298dbeb5ae87da3efacf09a2c620446c873ba807d35: 1

                       cilium-operator    quay.io/cilium/operator-generic:v1.14.2@sha256:52f70250dea22e506959439a7c4ea31b10fe8375db62f5c27ab746e3a2af866d: 1
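//optionally, the cilium CLI can run its built-in end-to-end check; it deploys test pods into a temporary cilium-test namespace and takes a few minutes:

#cilium connectivity test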




Before you join the worker node to the cluster, here are the major configuration files you should verify:


kubeadm configuration:


The kubeadm configuration itself lives in the cluster, in the "kubeadm-config" ConfigMap in the kube-system namespace (see the "[upload-config]" line in the init output above). Check it for settings like the API server endpoint and other options.

kubelet config files:


/var/lib/kubelet/config.yaml: This is the kubelet configuration file. Make sure that it has the correct settings for your environment. For example, it should point to the right kube-apiserver URL.

/etc/systemd/system/kubelet.service.d/10-kubeadm.conf: This file contains the kubelet service configuration, including flags passed to kubelet during startup. You can adjust --kubeconfig or other options here.

Kubeconfig files:


/etc/kubernetes/admin.conf: This is the kubeconfig file used by the admin user. It contains the credentials and cluster configuration.

/etc/kubernetes/kubelet.conf: Used by the kubelet to communicate with the API server.

/etc/kubernetes/controller-manager.conf, /etc/kubernetes/scheduler.conf: These are used by the controller-manager and scheduler components respectively.

Pod Network Configurations:


Ensure that your pod network configuration (e.g., Calico, Flannel, Cilium) is applied correctly. This is critical for pod communication across nodes.
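//a quick way to double-check the cluster-side settings from the master before joining a worker (assumes KUBECONFIG is already exported):

#kubectl -n kube-system get cm kubeadm-config -o yaml

#kubectl -n kube-system get cm kubelet-config -o yaml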



//cluster name 

[root@master boobalan]# kubectl config get-clusters

NAME

kubernetes


[root@master boobalan]# cat ~/.kube/config

apiVersion: v1

clusters:

- cluster:

     server: https://192.168.198.140:6443    <----- API server/master server address

  name: kubernetes

contexts:

- context:

    cluster: kubernetes

    user: kubernetes-admin

  name: kubernetes-admin@kubernetes

current-context: kubernetes-admin@kubernetes

kind: Config

preferences: {}

users:

- name: kubernetes-admin

  user:

.

//to change the cluster name, edit 'name: kubernetes' (and the matching 'cluster:' field under contexts) to something new; likewise the user name 'kubernetes-admin' can be changed

//then switch to the renamed context by its context name

#kubectl config use-context <context-name>

#kubectl config get-clusters
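//a sketch of the same kind of change done with kubectl instead of hand-editing the file; this renames the context entry (the names below are the defaults kubeadm created in this setup, the new name is arbitrary):

#kubectl config rename-context kubernetes-admin@kubernetes admin@my-cluster

#kubectl config use-context admin@my-cluster

#kubectl config get-contexts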

//run this command on the master to print the join command (a fresh token plus the CA cert hash) for the worker

[root@master boobalan]# kubeadm token create --print-join-command

kubeadm join 192.168.198.140:6443 --token bfgupt.k74h54fwjjw48d7y --discovery-token-ca-cert-hash sha256:2c475a5bab34d0c7c4151f3f473a73e7886110e4ed5ea2d5ded686d112b7f7cf



////the worker node hit a preflight error on join

[root@worker1 boobalan]# kubeadm join 192.168.198.140:6443 --token bfgupt.k74h54fwjjw48d7y --discovery-token-ca-cert-hash sha256:2c475a5bab34d0c7c4151f3f473a73e7886110e4ed5ea2d5ded686d112b7f7cf

[preflight] Running pre-flight checks

error execution phase preflight: [preflight] Some fatal errors occurred:

        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

To see the stack trace of this error execute with --v=5 or higher


///fix

#modprobe br_netfilter    //Kubernetes needs this module so iptables can see bridged network packets

#echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

//make the module load permanent across reboots

#cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

//and make the required sysctl settings permanent as well

#cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

#sysctl --system


[root@worker1 boobalan]# kubeadm join 192.168.198.140:6443 --token bfgupt.k74h54fwjjw48d7y --discovery-token-ca-cert-hash sha256:2c475a5bab34d0c7c4151f3f473a73e7886110e4ed5ea2d5ded686d112b7f7cf

[preflight] Running pre-flight checks

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Starting the kubelet

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...


This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.


Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


[root@master boobalan]# kubectl get nodes

NAME                STATUS   ROLES           AGE   VERSION

master.boobi.com    Ready    control-plane   10h   v1.29.14

worker1.boobi.com   Ready    <none>          8h    v1.29.14


[root@master boobalan]# kubectl get nodes -o wide

NAME                STATUS   ROLES           AGE   VERSION    INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                     CONTAINER-RUNTIME

master.boobi.com    Ready    control-plane   10h   v1.29.14   192.168.198.140   <none>        Rocky Linux 9.5 (Blue Onyx)   5.14.0-503.26.1.el9_5.x86_64       containerd://1.7.25

worker1.boobi.com   Ready    <none>          8h    v1.29.14   192.168.198.141   <none>        Rocky Linux 9.3 (Blue Onyx)   5.14.0-362.24.1.el9_3.0.1.x86_64   containerd://1.7.25


//if we want to change labels or other details on a node

#kubectl get node worker1.boobi.com -o yaml


#kubectl edit node worker1.boobi.com

//here we can edit the labels (including role labels) under the metadata field; once saved, the change is applied automatically with no service restart needed, but don't touch the hostname or other important details

labels:

  beta.kubernetes.io/arch: amd64

  beta.kubernetes.io/os: linux

  kubernetes.io/arch: amd64

  kubernetes.io/hostname: worker1.boobi.com

  kubernetes.io/os: linux

  environment: production  # <-- This line will be added

name: worker1.boobi.com

resourceVersion: "10050"

uid: 63b6d68c-69f6-4a80-80ef-72cf65b39718

///or the other way, do it through the API with kubectl label

#kubectl label node worker1.boobi.com node-role.kubernetes.io/worker=
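//verify the labels and the new role (the 'environment=production' label from the edit above would show up here too):

#kubectl get nodes --show-labels

#kubectl get nodes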


///likewise we can edit a pod as well

[root@master boobalan]# kubectl -n kube-system get pods -o wide

[root@master boobalan]# kubectl -n kube-system get pod cilium-4xz47 -o yaml

# kubectl edit pod cilium-4xz47 -n kube-system


//we cannot edit a Pod's UID, resourceVersion, or name (e.g. cilium-4xz47)


//on a running Pod only a few fields are actually mutable - mainly labels, annotations, and the container image; most other spec fields (environment variables, command arguments, volume mounts, restart policy, resources) are immutable and have to be changed on the owning Deployment/DaemonSet instead, which recreates the pod. Once the edit is saved it is applied automatically.
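//a sketch of the same kind of change without opening the editor; the pod name is from this cluster, the label key/value are just examples:

#kubectl -n kube-system label pod cilium-4xz47 example=demo --overwrite

#kubectl -n kube-system get pod cilium-4xz47 --show-labels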


[root@master boobalan]# ls /etc/kubernetes

admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf  super-admin.conf


///admin.conf --> the kubeconfig used to connect through Lens or kubectl; same content as ~/.kube/config


[root@master boobalan]# ls /var/lib/kubelet/

config.yaml  cpu_manager_state  device-plugins  kubeadm-flags.env  memory_manager_state  pki  plugins  plugins_registry  pod-resources  pods


//////////done



/////remove the node from master

Drain and delete the node from Kubernetes:

kubectl drain worker1.boobi.com --ignore-daemonsets --delete-emptydir-data

kubectl delete node worker1.boobi.com
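//optionally, clean up the worker itself so it can be joined again later (run on the worker; kubeadm reset wipes its local cluster state but, as its own output notes, leaves CNI config and iptables rules to be removed manually):

#kubeadm reset -f

#rm -rf /etc/cni/net.d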



Troubleshoot

///to check which services are running on Linux

systemctl list-units --type=service

systemctl list-units --type=service --all

ps aux


///to check the logs when a service is not running

#journalctl -u kubelet -f
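//a few more standard commands that help when a node or pod misbehaves (run on the affected node or on the master as appropriate):

#systemctl status kubelet containerd

#crictl ps -a

#kubectl describe node worker1.boobi.com

#kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp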

