Use kubeadm to deploy a Kubernetes cluster on CentOS 7.2

  • 2020-09-28 09:17:11
  • OfStack

This article follows the Installing Kubernetes on Linux with kubeadm guide from kubernetes.io to deploy a Kubernetes cluster with kubeadm on CentOS 7.2, and resolves some of the problems encountered when deploying according to that document.

Operating system version


# cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core)

Kernel version


# uname -r
3.10.0-327.el7.x86_64

Cluster nodes


192.168.120.122 kube-master
192.168.120.123 kube-agent1
192.168.120.124 kube-agent2
192.168.120.125 kube-agent3

That is, the cluster contains one control node and three worker nodes.
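
For the host names above to resolve on every machine, the same entries can be added to /etc/hosts on each node; a minimal sketch, assuming no internal DNS serves these names:


# cat <<EOF >> /etc/hosts
192.168.120.122 kube-master
192.168.120.123 kube-agent1
192.168.120.124 kube-agent2
192.168.120.125 kube-agent3
EOF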

Preparation before deployment

Ensure access to the Google package repositories

The packages used in this deployment come from Google-hosted yum repositories, so every cluster node must be able to reach the Internet; configure this access yourself.
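
A quick way to confirm that a node can reach the Google repositories is to fetch the signing key referenced in the repository configuration below (curl ships with a default CentOS 7.2 install):


# curl -I https://packages.cloud.google.com/yum/doc/yum-key.gpg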

Turn off the firewall


# systemctl stop firewalld.service && systemctl disable firewalld.service

Disable SELinux


# setenforce 0
# sed -i.bak 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
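
SELinux is now in permissive mode for the running system and will stay that way after a reboot; this can be verified with getenforce, which should print Permissive:


# getenforce
Permissive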

Configure the yum repository


# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
    https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install kubelet and kubeadm

Install the following packages on all nodes:


# yum install -y docker kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker && systemctl start docker
# systemctl enable kubelet && systemctl start kubelet

Then set the kernel parameters:


# sysctl net.bridge.bridge-nf-call-iptables=1
# sysctl net.bridge.bridge-nf-call-ip6tables=1
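
These sysctl settings do not survive a reboot; to make them persistent they can also be written to a drop-in file under /etc/sysctl.d (a minimal sketch, the file name is arbitrary):


# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system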

Initialize the control node


# kubeadm init --pod-network-cidr=10.244.0.0/16

The --pod-network-cidr parameter must be specified because flannel will be used to build the pod network in this cluster.

Note: Initialization takes a while because the process pulls a number of docker images.

The output of this command is as follows:


Initializing your master...
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.120.122]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 1377.560339 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 6.039626 seconds
[token] Using token: 60bc68.e94800f3c5c4c2d5
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

 sudo cp /etc/kubernetes/admin.conf $HOME/
 sudo chown $(id -u):$(id -g) $HOME/admin.conf
 export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

 kubeadm join --token <token> 192.168.120.122:6443

Observe the docker images pulled onto the control node:


# docker images

The listing should show the gcr.io/google_containers images (API server, controller manager, scheduler, proxy, etcd, pause and kube-dns) that were pulled during initialization.

Follow the prompt from the initialization command and configure kubectl access (running as root here, so sudo is dropped):


# cp /etc/kubernetes/admin.conf $HOME/
# chown $(id -u):$(id -g) $HOME/admin.conf
# export KUBECONFIG=$HOME/admin.conf

Control node isolation

By default the control node is tainted so that no workload pods are scheduled on it. To also allow workloads on the control node, remove the master taint:


# kubectl taint nodes --all node-role.kubernetes.io/master-

Install the pod network


# kubectl apply -f flannel/Documentation/kube-flannel-rbac.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created

# kubectl apply -f flannel/Documentation/kube-flannel.yml
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

The flannel/ directory referenced above was obtained by cloning the flannel repository:


# git clone https://github.com/coreos/flannel.git

Add the worker nodes

On each worker node, run the join command printed by kubeadm init (as root):


# kubeadm join --token <token> 192.168.120.122:6443

If the join succeeds, kubeadm confirms that the node has registered with the control plane and suggests running kubectl get nodes on the control node to see it appear in the cluster.

Observe the cluster status on the control node


# kubectl get nodes

Once the flannel pods are running on every node, all four nodes should report a STATUS of Ready.

This completes the deployment of the Kubernetes cluster.
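
As an optional smoke test, a small deployment can confirm that pods are scheduled onto the worker nodes and that images can be pulled (a sketch using the public nginx image, not part of the original walkthrough):


# kubectl run nginx --image=nginx --replicas=2
# kubectl get pods -o wide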

