Kubernetes Cluster on CentOS


    Deepesh Tiwari


    Step 1: Install Docker

    1. Update the package list with the command:

     sudo yum update

    2. Next, install Docker with the command:

     sudo yum install docker

    3. Repeat the process on each server that will act as a node.

    4. Check the installation (and version) by entering the following:

     docker --version

    Step 2: Start and Enable Docker

    1. Set Docker to launch at boot by entering the following:

     sudo systemctl enable docker

    2. Verify Docker is running:

     sudo systemctl status docker

    To start Docker if it's not running:

     sudo systemctl start docker

    3. Repeat on all the other nodes.

    Step 3: Configure Kubernetes Repository

    Kubernetes packages are not available from the official CentOS 7 repositories. This step needs to be performed on the master node and on each worker node you plan to use for your container setup. Enter the following command to add the Kubernetes repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
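    To confirm the repository was added, you can list the enabled repositories; the kubernetes repo should appear in the output:

     sudo yum repolist | grep -i kubernetes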

    Step 4: Kubernetes Installation Tools

    1. Three packages are required to use Kubernetes: kubelet, kubeadm, and kubectl. Install them on each node, then enable and start the kubelet service:

     sudo yum install -y kubelet kubeadm kubectl

     sudo systemctl enable kubelet

     sudo systemctl start kubelet

    2. Verify the installation with:

     kubeadm version

    3. Repeat for each server node.

    Step 5: Begin Kubernetes Deployment

    Kubernetes requires swap to be disabled. Start by turning off swap memory on each server:

     sudo swapoff -a

    Step 6: Assign Unique Hostname for Each Server Node 

    Decide which server to set as the master node. Then enter the command:

     sudo hostnamectl set-hostname master-node

    Then edit the hosts file with sudo vi /etc/hosts and add the entry:

         127.0.0.1       master-node

    Next, set a worker node hostname by entering the following on the worker server:

     sudo hostnamectl set-hostname worker1

    Then edit the hosts file with sudo vi /etc/hosts and add the entry:

         127.0.0.1       worker1

     Repeat the same hostname and hosts-file steps for the worker2 node. A fuller /etc/hosts sketch is shown below.
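     For name resolution across the cluster, each node's /etc/hosts can also list every node by its LAN IP. A minimal sketch, assuming the master is 10.114.95.77 (the address used later in this article) and hypothetical worker IPs:

          10.114.95.77    master-node
          10.114.95.78    worker1      (hypothetical IP)
          10.114.95.79    worker2      (hypothetical IP)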

     Note that swapoff -a does not persist across reboots, so you may need to rerun:

     sudo swapoff -a
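     To make the change survive reboots, the swap entry in /etc/fstab can be commented out. A minimal sketch:

      sudo sed -i '/ swap / s/^/#/' /etc/fstab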

    Step 7: Configure Firewall

    The nodes, containers, and pods need to be able to communicate across the cluster to perform their functions. firewalld is enabled by default on CentOS, so open the following ports by entering the listed commands.

    On the Master Node enter:

     sudo firewall-cmd --permanent --add-port=6443/tcp
     sudo firewall-cmd --permanent --add-port=2379-2380/tcp
     sudo firewall-cmd --permanent --add-port=10250/tcp
     sudo firewall-cmd --permanent --add-port=10251/tcp
     sudo firewall-cmd --permanent --add-port=10252/tcp
     sudo firewall-cmd --permanent --add-port=10255/tcp

     sudo firewall-cmd --permanent --add-port=53/tcp   (this is for DNS)

     sudo firewall-cmd --reload

    Each time a port is added the system confirms with a "success" message.

    Enter the following commands on each worker node:

     sudo firewall-cmd --permanent --add-port=10251/tcp
     sudo firewall-cmd --permanent --add-port=10255/tcp

     sudo firewall-cmd --permanent --add-port=53/tcp   (this is for DNS)

     sudo firewall-cmd --reload
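    You can verify the open ports on any node with:

     sudo firewall-cmd --list-ports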

    Step 8: Update Iptables Settings

    Set net.bridge.bridge-nf-call-iptables to 1 in your sysctl config file. This ensures that packets traversing the bridge are properly processed by iptables during filtering and port forwarding.

cat <<EOF > /etc/sysctl.d/master_node_name
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
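    After reloading, you can check that the setting took effect:

     sysctl net.bridge.bridge-nf-call-iptables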

    Step 9: Disable SELinux

    The containers need to access the host filesystem. SELinux needs to be set to permissive mode, which effectively disables its security functions.

    Use the following commands to disable SELinux:

     sudo setenforce 0
     sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
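    You can confirm the new mode with:

     getenforce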

    Step 10: Initialize Kubernetes on Master Node

    Switch to the master server node, and enter the following:

     sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    Once this command finishes, it will display a kubeadm join message at the end. Make a note of the whole entry. This will be used to join the worker nodes to the cluster.

    Next, enter the following to create the cluster config directory for your user:

     mkdir -p $HOME/.kube

     sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

     sudo chown $(id -u):$(id -g) $HOME/.kube/config
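    At this point kubectl should be able to reach the cluster; the master will show as NotReady until the pod network is deployed in Step 11:

     kubectl get nodes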

    Step 11: Deploy Pod Network to Cluster

    Install the Flannel pod network from the master node. Run this as the same user that owns ~/.kube/config, so sudo is not needed:

     kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    Verify that everything is running and communicating:

     kubectl get pods --all-namespaces

    Step 12: Join Worker Node to Cluster

    Switch to the worker1 system and enter the join command you noted from the kubeadm init output in Step 10:

    kubeadm join 10.114.95.77:6443 --token mgnqdd.uw3929bzbft7n87h     --discovery-token-ca-cert-hash sha256:2508d1d3312488effc9bf73ea13c0863efc49a7e77de64864c9d884a2e0e9e1a

    *** Node joining (if you no longer have the original join command):

    1. On the master node, run:

      kubeadm token create --print-join-command

    2. On the worker1 node, run the command returned from step 1. Example:

      kubeadm join 10.114.95.77:6443 --token mgnqdd.uw3929bzbft7n87h     --discovery-token-ca-cert-hash sha256:2508d1d3312488effc9bf73ea13c0863efc49a7e77de64864c9d884a2e0e9e1a
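    Back on the master node, each joined worker should now appear in the node list:

     kubectl get nodes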

    Step 13: Create a Docker Registry (or use the TIBCO Docker registry at https://reldocker.tibco.com/repositories/bc)

    1. Start the Docker registry:

       docker run -d -p 5000:5000 -e REGISTRY_STORAGE_DELETE_ENABLED=true --restart=always --name registry registry:2

    2. On each node, allow the insecure registry by creating or updating /etc/docker/daemon.json with the following content, then restart the Docker service (sudo systemctl restart docker):

       {

        "insecure-registries" : ["10.114.95.77:5000"]

       }

    3. Use the registry APIs to get image information:

      http://10.114.95.77:5000/v2/_catalog

      http://10.114.95.77:5000/v2/bcce-cms/tags/list
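    To publish an image to this registry, tag it with the registry address and push it. A minimal sketch, assuming a hypothetical local image named my-app:

     docker tag my-app:latest 10.114.95.77:5000/my-app:latest
     docker push 10.114.95.77:5000/my-app:latest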

    On the client machine, put the cluster config file into $HOME/.kube, then deploy and inspect the services:

    kubectl cluster-info

    kubectl apply -f bcce-cms.yaml

    kubectl apply -f tas-ws.yaml

    kubectl get deployments

    kubectl get pods

    kubectl apply -f tas-ws-svc.yaml

    kubectl get svc

    Restart Pods

    kubectl scale deployment bcce-cms --replicas=0

    kubectl scale deployment bcce-cms --replicas=3
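    After scaling back up, you can watch the rollout until the pods are ready again:

     kubectl rollout status deployment bcce-cms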

    *** If you are going to deploy the services from your local machine, copy the file ~/.kube/config from the master node into your local ~/.kube/ directory. In Docker Desktop, under Preferences > Daemon, add "10.114.95.77:5000" to insecure-registries.

    Troubleshooting:

    1. kubelet:       error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

       

      This is because the cgroup drivers used by kubelet and Docker are different; kubelet requires them to be the same. To change the cgroup driver for kubelet, modify the file /etc/default/kubelet (/etc/sysconfig/kubelet for CentOS, RHEL, Fedora) with your cgroup-driver value, like so:

       KUBELET_EXTRA_ARGS=--cgroup-driver=systemd

      refer: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node
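      After editing the file, reload systemd and restart kubelet so the new flag takes effect:

       sudo systemctl daemon-reload

       sudo systemctl restart kubelet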

    2.  worker-node1 Failed to update Node Allocatable Limits ["kubepods"]: failed to set supported cgroup subsystems for cgroup [kubepods]: failed to find subsystem mount for required subsystem: pids

       

      This is a kernel issue. Modify the file /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf, add --feature-gates SupportPodPidsLimit=false --feature-gates SupportNodePidsLimit=false to the kubelet start arguments, and reboot the server.

         

    Red Hat Related Issues

         1. Containers cannot access the internet outside the host.

             Error messages in /var/log/messages:

                    May 6 06:14:17 BCCED kernel: docker0: port 2(veth11a945b) entered disabled state

     

                    May 6 06:14:17 BCCED NetworkManager[734]: <warn> (veth8d6d250): failed to find device 8 'veth8d6d250' with udev

         Resolution: the Docker network bridge is in a bad state and needs to be reset. Run the commands below.

                            pkill docker

                            iptables -t nat -F

                            ifconfig docker0 down

                            brctl delbr docker0
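         After deleting the bridge, restart the Docker service so it recreates docker0:

          sudo systemctl restart docker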

    Please contact TIBCO support or presales for the scripts.

