The cluster I had previously installed entirely by hand was version 1.10.6. Recently I needed to test an nginx CRD, but that CRD requires at least v1.11. In the spirit of keeping the test environment as low-maintenance as possible, I decided to use kubeadm for a quick, clean install of a new cluster.
Environment preparation
1 master, 2 nodes
172.16.10.210 k8s-master-v1-16.lab.f5se.io
172.16.10.211 k8s-node1-v1-16.lab.f5se.io
172.16.10.212 k8s-node2-v1-16.lab.f5se.io
Preparation on all nodes
- Replace the yum repos with Aliyun mirrors, and point the Kubernetes repo at Aliyun as well, so installation goes faster
cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
cp /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.bak
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

yum clean all
yum makecache
yum -y update
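Optionally, confirm that the mirrors are active before continuing (just a sanity check, not part of the original steps):

yum repolist enabled | grep -E 'base|epel|kubernetes'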
- Disable firewalld and SELinux, and turn off swap
systemctl disable firewalld
systemctl stop firewalld
setenforce 0
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
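A quick verification that the changes took effect (optional; standard checks, not from the original write-up):

systemctl is-active firewalld   # should print "inactive"
getenforce                      # should print "Permissive" (or "Disabled" after a reboot)
free -m | grep -i swap          # the Swap line should show 0 total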
- Adjust kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
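On a fresh CentOS 7 install these bridge sysctls only exist once the br_netfilter module is loaded; if sysctl --system complains about missing keys, loading the module first usually fixes it (an extra step assumed here, not in the original):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load it on boot as well
sysctl --system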
- Add hosts entries
172.16.10.210 k8s-master-v1-16.lab.f5se.io
172.16.10.211 k8s-node1-v1-16.lab.f5se.io
172.16.10.212 k8s-node2-v1-16.lab.f5se.io
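One way to append these entries on every machine (a sketch; editing /etc/hosts by hand works just as well):

cat <<EOF >> /etc/hosts
172.16.10.210 k8s-master-v1-16.lab.f5se.io
172.16.10.211 k8s-node1-v1-16.lab.f5se.io
172.16.10.212 k8s-node2-v1-16.lab.f5se.io
EOF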
- Install Docker
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-18.09.9-3.el7
systemctl enable docker
systemctl start docker
- Configure Docker's daemon.json
Note: during installation kubeadm checks whether the system uses systemd as the cgroup driver, so daemon.json needs to be adjusted accordingly. Also, because my system's storage driver is devicemapper rather than overlay2, the official guide at https://kubernetes.io/docs/setup/production-environment/container-runtimes/ cannot be followed verbatim. After making the changes, restart Docker: systemctl restart docker
{ "exec-opts": ["native.cgroupdriver=systemd"], "log-driver": "json-file", "log-opts": { "max-size": "100m" } } |
- Install kubeadm and kubelet
yum install -y kubeadm-1.16.2 kubelet-1.16.2
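The official kubeadm setup also enables kubelet so it starts on boot; kubeadm init/join will start it anyway, but enabling it keeps the node healthy across reboots (an extra step, not in the original):

systemctl enable kubelet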
- Pre-pull the images that are blocked in China; otherwise the installation cannot succeed
Run on the master node:
#!/bin/bash
## Pull the images from a domestic mirror and re-tag them with the Google (k8s.gcr.io) names
set -e

KUBE_VERSION=v1.16.2
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.15-0
CORE_DNS_VERSION=1.6.2

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi  $ALIYUN_URL/$imageName
done
#!/bin/bash
set -e

FLANNEL_VERSION=v0.11.0

# switch the image source here
QUAY_URL=quay.io/coreos
QINIU_URL=quay-mirror.qiniu.com/coreos

images=(flannel:${FLANNEL_VERSION}-amd64
flannel:${FLANNEL_VERSION}-arm64
flannel:${FLANNEL_VERSION}-arm
flannel:${FLANNEL_VERSION}-ppc64le
flannel:${FLANNEL_VERSION}-s390x)

for imageName in ${images[@]} ; do
  docker pull $QINIU_URL/$imageName
  docker tag  $QINIU_URL/$imageName $QUAY_URL/$imageName
  docker rmi  $QINIU_URL/$imageName
done
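After both scripts finish, the re-tagged images can be listed locally as a sanity check (optional, not part of the original steps):

docker images | grep -E 'k8s.gcr.io|quay.io/coreos'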
Run on the node machines:
#!/bin/bash
set -e

KUBE_VERSION=v1.16.2
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.15-0
CORE_DNS_VERSION=1.6.2

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi  $ALIYUN_URL/$imageName
done
#!/bin/bash
set -e

FLANNEL_VERSION=v0.11.0

QUAY_URL=quay.io/coreos
QINIU_URL=quay-mirror.qiniu.com/coreos

images=(flannel:${FLANNEL_VERSION}-amd64
flannel:${FLANNEL_VERSION}-arm64
flannel:${FLANNEL_VERSION}-arm
flannel:${FLANNEL_VERSION}-ppc64le
flannel:${FLANNEL_VERSION}-s390x)

for imageName in ${images[@]} ; do
  docker pull $QINIU_URL/$imageName
  docker tag  $QINIU_URL/$imageName $QUAY_URL/$imageName
  docker rmi  $QINIU_URL/$imageName
done
Master node installation
If the command below errors out partway through, stop it with Ctrl-C, run kubeadm reset to clean up, and then run the init again (a cleanup sketch follows the command).
kubeadm init --apiserver-advertise-address 172.16.10.210 --kubernetes-version=v1.16.2 --pod-network-cidr=10.244.0.0/16
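If a retry is needed as mentioned above, a minimal cleanup sketch; kubeadm reset is the documented step, while the CNI/iptables lines are optional extras that the reset output itself reminds you about:

kubeadm reset
# optional extra cleanup before re-running init
rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X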
If everything goes well, the installation finishes with output similar to the following:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.10.210:6443 --token dov90j.fzvwtb2o353tsy2p \
    --discovery-token-ca-cert-hash sha256:2faa5272facb2655af0f115e45fddc5bc7e097997bf0b0e8b4c73bf2d5680e7b
Follow the prompts above to set up the kubeconfig file so that kubectl commands work, then install a network add-on; flannel is used in this test.
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml

# confirm the pods are running
kubectl get pod --all-namespaces
Node installation
On each node, run a command like the one below; copy the exact command from the output printed at the end of the master installation.
kubeadm join 172.16.10.210:6443 --token dov90j.fzvwtb2o353tsy2p \
    --discovery-token-ca-cert-hash sha256:2faa5272facb2655af0f115e45fddc5bc7e097997bf0b0e8b4c73bf2d5680e7b
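If the token has expired (it is valid for 24 hours by default) or the original output was lost, a fresh join command can be generated on the master; this is standard kubeadm functionality, not part of the original steps:

kubeadm token create --print-join-command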
Confirm that all nodes and pods are running properly
[root@k8s-master-v1-16 ~]# kubectl get node
NAME                           STATUS   ROLES    AGE    VERSION
k8s-master-v1-16.lab.f5se.io   Ready    master   171m   v1.16.2
k8s-node1-v1-16.lab.f5se.io    Ready    <none>   127m   v1.16.2
k8s-node2-v1-16.lab.f5se.io    Ready    <none>   45m    v1.16.2

[root@k8s-master-v1-16 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-5644d7b6d9-fgtdl                                1/1     Running   0          171m
kube-system   coredns-5644d7b6d9-xj5d4                                1/1     Running   0          171m
kube-system   etcd-k8s-master-v1-16.lab.f5se.io                       1/1     Running   0          170m
kube-system   kube-apiserver-k8s-master-v1-16.lab.f5se.io             1/1     Running   0          170m
kube-system   kube-controller-manager-k8s-master-v1-16.lab.f5se.io    1/1     Running   0          170m
kube-system   kube-flannel-ds-amd64-fbcw5                             1/1     Running   0          139m
kube-system   kube-flannel-ds-amd64-mzpmb                             1/1     Running   0          45m
kube-system   kube-flannel-ds-amd64-tdmk6                             1/1     Running   0          127m
kube-system   kube-proxy-7svd7                                        1/1     Running   0          127m
kube-system   kube-proxy-cs2fb                                        1/1     Running   0          45m
kube-system   kube-proxy-jtnrn                                        1/1     Running   0          171m
kube-system   kube-scheduler-k8s-master-v1-16.lab.f5se.io             1/1     Running   0          170m