Kubernetes Cluster on Ubuntu


    Deepesh Tiwari


    Step 1: Install Docker

    1. Update the package list with the command:

     sudo apt-get update

    2. Next, install Docker with the command:

     sudo apt-get install docker.io

    3. Repeat the process on each server that will act as a node.

    4. Check the installation (and version) by entering the following:

     docker --version

    Step 2: Start and Enable Docker

    1. Set Docker to launch at boot by entering the following:

     sudo systemctl enable docker

    2. Verify Docker is running:

     sudo systemctl status docker

    To start Docker if it's not running:

     sudo systemctl start docker

    3. Repeat on all the other nodes.

    Step 3: Add Kubernetes Signing Key

    1. Enter the following to add a signing key:

     curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

    If you get an error that curl is not installed, install it with:

     sudo apt-get install curl

    2. Then rerun the curl command above to add the signing key. Repeat for each server node.

    Step 4: Add Software Repositories

    Kubernetes is not included in the default Ubuntu repositories. To add the Kubernetes repository, enter the following:

     sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

    Repeat on each server node.
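After adding the repository, refresh the package index so apt-get can see the new source (a standard apt step, not shown in the original steps):

```shell
# Refresh the package list so the newly added Kubernetes
# repository is picked up before installing the tools in Step 5:
sudo apt-get update
```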

    Step 5: Kubernetes Installation Tools

    1. Install the Kubernetes tools and hold them at the installed version with the commands:

     sudo apt-get install kubeadm kubelet kubectl

     sudo apt-mark hold kubeadm kubelet kubectl

    Allow the process to complete.

    2. Verify the installation with:

     kubeadm version

    3. Repeat for each server node.

    Step 6: Begin Kubernetes Deployment

    Start by disabling the swap memory on each server:

     sudo swapoff -a
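Note that swapoff -a only disables swap until the next reboot. To keep swap off permanently, you can also comment out the swap entry in /etc/fstab; the pattern below is a common sketch that assumes a conventional fstab layout, so review your own file before running it:

```shell
# Back up /etc/fstab and comment out any line containing " swap "
# so swap stays disabled across reboots:
sudo sed -i.bak '/ swap / s/^/#/' /etc/fstab
```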

    Step 7: Assign Unique Hostname for Each Server Node 

    Decide which server to set as the master node. Then enter the command:

     sudo hostnamectl set-hostname master-node

    Then edit the hosts file (for example, with sudo vi /etc/hosts) and add the entry:

         127.0.0.1       master-node

    Next, set a worker node hostname by entering the following on the worker server:

     sudo hostnamectl set-hostname worker1

    Then edit the hosts file (for example, with sudo vi /etc/hosts) and add the entry:

         127.0.0.1       worker1

     Repeat the same steps for the worker2 node.

     You may need to rerun:

     sudo swapoff -a

    Step 8: Initialize Kubernetes on Master Node

    Switch to the master server node, and enter the following:

     sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    Once this command finishes, it will display a kubeadm join message at the end. Make a note of the whole entry. This will be used to join the worker nodes to the cluster.

    Next, enter the following to create a directory for the cluster:

     mkdir -p $HOME/.kube

     sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

     sudo chown $(id -u):$(id -g) $HOME/.kube/config
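With the kubeconfig in place, a quick sanity check can confirm kubectl can reach the cluster (the node name shown depends on the hostname set in Step 7):

```shell
# List cluster nodes; the master typically shows NotReady
# until the pod network is deployed in Step 9:
kubectl get nodes
```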

    Step 9: Deploy Pod Network to Cluster

    Apply the flannel pod network manifest:

     sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/ku...

    Verify that everything is running and communicating: 

     kubectl get pods --all-namespaces

    Step 10: Join Worker Node to Cluster

    Switch to the worker1 system and enter the kubeadm join command you noted from Step 8:

    kubeadm join 10.114.95.77:6443 --token mgnqdd.uw3929bzbft7n87h     --discovery-token-ca-cert-hash sha256:2508d1d3312488effc9bf73ea13c0863efc49a7e77de64864c9d884a2e0e9e1a

    If you no longer have the original join command, you can regenerate it:

    1. Go to master-node, run 

      kubeadm token create --print-join-command

    2. Go to the worker1 node and run the command returned in step 1. Example:

      kubeadm join 10.114.95.77:6443 --token mgnqdd.uw3929bzbft7n87h     --discovery-token-ca-cert-hash sha256:2508d1d3312488effc9bf73ea13c0863efc49a7e77de64864c9d884a2e0e9e1a

    Step 11: Create Docker Registry

    1. Start the Docker registry on the master node:

       docker run -d -p 5000:5000 -e REGISTRY_STORAGE_DELETE_ENABLED=true --restart=always --name registry registry:2

    2. On each node machine, mark the registry as insecure. Create or update /etc/docker/daemon.json with:

       {"insecure-registries" : ["10.114.95.77:5000"]}

       Then restart the Docker service on that node: systemctl restart docker (or service docker restart).
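To use the registry, tag images with the registry host and push them; a sketch assuming the 10.114.95.77:5000 registry above and a hypothetical image name myapp:

```shell
# Tag a local image for the private registry and push it:
docker pull alpine:3
docker tag alpine:3 10.114.95.77:5000/myapp:latest
docker push 10.114.95.77:5000/myapp:latest

# Any node configured with the insecure-registries setting can then pull it:
docker pull 10.114.95.77:5000/myapp:latest
```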

    Troubleshoot:

    1. If the kubeadm and kubectl versions are inconsistent between the master and worker nodes, you will see an error like:

      [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace

      error execution phase kubelet-start: configmaps "kubelet-config-1.15" is forbidden: User "system:bootstrap:1qn44v" cannot get resource "configmaps" in API group "" in the namespace "kube-system"

    2. Change the Docker image storage location to avoid filling up the /var/lib disk. See https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/
      • Open the /etc/default/docker file, uncomment the DOCKER_OPTS line, and add the new path, for example -g /yourfolder/docker_storage
      • Or add the new path to the /etc/docker/daemon.json file like the following.

         

        {
          //...
          "data-root": "/yourfolder/docker_storage",
          //...
        }

    3. After a machine reboot, turn the swap off again and restart the kubelet:

       swapoff -a

       systemctl start kubelet

    4. Warning  FailedCreatePodSandBox  1s               kubelet, worker-node-154  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3a0c7cc1ed01d73fe779af669d3ac1c4d5fb7758a23350bf43089ad49c5f8d9e" network for pod "bcce-cms-5b5d6dd69c-bh8b8": networkPlugin cni failed to set up pod "bcce-cms-5b5d6dd69c-bh8b8_default" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.3.1/24

       

      On the worker node, delete the cni0 interface and rejoin the cluster:

       sudo ip link delete cni0
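Rejoining usually means resetting the node's kubeadm state first; a hedged sketch, with placeholders rather than values from this guide:

```shell
# On the affected worker node: wipe the local kubeadm state, remove
# the stale bridge, then rejoin with a freshly generated join command
# (see "kubeadm token create --print-join-command" on the master):
sudo kubeadm reset
sudo ip link delete cni0
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```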

    Please contact TIBCO support or presales for the scripts.

