A beginner's guide to Kubernetes: a complete walkthrough of deploying Kubernetes 1.21.2 with Kubeadm. If you follow the article step by step the installation should go through without problems, though differences in your environment may still cause issues. In the spirit of installing first and learning through practice, I wrote this article; some of the troubleshooting steps are collected from the internet, and everything is based on installing Kubernetes with Kubeadm. If you spot a mistake, please point it out, and feel free to leave a comment or message me with any questions about the installation. Thanks.


Environment Overview for the Kubeadm-Based Kubernetes Install

  • OS: CentOS 7.5

  • Master: 172.19.19.106

  • Node-1: 172.19.19.109

  • Node-2: 172.19.19.110

  • Kubernetes version: 1.21.2

System Preparation for the Kubeadm-Based Kubernetes Install

  • Disable the built-in firewalld and SELinux

  • Disable swap

  • Create host-name mappings in /etc/hosts

  • Plan the node addresses (see the table below)

Hostname | OS              | IP            | Specs          | Planned components
-------- | --------------- | ------------- | -------------- | ------------------------------------
master   | CentOS-7-x86_64 | 172.19.19.106 | 4 cores, 16 GB | docker-ce, kubelet, kubeadm, kubectl
node-1   | CentOS-7-x86_64 | 172.19.19.109 | 4 cores, 8 GB  | docker, kubelet, kubeadm
node-2   | CentOS-7-x86_64 | 172.19.19.110 | 4 cores, 8 GB  | docker, kubelet, kubeadm

Node plan for the Kubeadm-based distributed Kubernetes cluster

Installing Kubernetes with Kubeadm

I. Installing Kubeadm

1. Prepare the Kubernetes deployment environment and add a domestic (Aliyun) package source

## Rename the three hosts (run the matching command on each machine)
[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# hostnamectl set-hostname node-1
[root@localhost ~]# hostnamectl set-hostname node-2

## Disable the built-in firewalld
[root@master ~]# sudo systemctl stop firewalld
[root@master ~]# sudo systemctl disable firewalld
[root@node-1 ~]# sudo systemctl stop firewalld
[root@node-1 ~]# sudo systemctl disable firewalld
[root@node-2 ~]# sudo systemctl stop firewalld
[root@node-2 ~]# sudo systemctl disable firewalld

## Disable SELinux: set SELINUX=disabled in the config file (repeat on node-1 and node-2)
[root@master ~]# vim /etc/selinux/config
[root@master ~]# setenforce 0

## Disable swap (repeat on node-2 as well)
[root@master ~]# swapoff -a
[root@node-1 ~]# swapoff -a
[root@master ~]# sudo vim /etc/fstab             // delete or comment out the swap line to disable it permanently
[root@node-1 ~]# sudo vim /etc/fstab             // delete or comment out the swap line to disable it permanently

## Add host-name mappings to /etc/hosts
[root@master ~]# vim /etc/hosts
[root@node-1 ~]# vim /etc/hosts
[root@node-2 ~]# vim /etc/hosts

172.19.19.106   master
172.19.19.109   node-1
172.19.19.110   node-2

Add the three mapping records above to /etc/hosts on every node.
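
If you prefer not to edit the file by hand, a minimal sketch that appends the same entries non-interactively (run on each node, assuming the addresses above):

## Append the host mappings on master, node-1 and node-2
cat <<EOF >> /etc/hosts
172.19.19.106   master
172.19.19.109   node-1
172.19.19.110   node-2
EOF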

## Create the Kubernetes yum repository (run on all three nodes)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
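
Optionally, confirm the new repository is visible before installing anything:

## Quick sanity check: the kubernetes repo should appear in the repo list
yum clean all
yum repolist | grep -i kubernetes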

2. Install kubeadm

[root@localhost ~]# yum -y install kubelet kubeadm kubectl docker

YUM install of kubelet, kubeadm, kubectl and docker (dependency)
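
Note that the plain yum install above takes whatever package versions the mirror currently serves. If you want the binaries to match the 1.21.2 images used in the rest of this article, a version-pinned install is safer; the sketch below assumes the 1.21.2-0 package builds are present in the Aliyun mirror:

## Hypothetical version-pinned install; adjust the release suffix to what the mirror actually provides
yum -y install kubelet-1.21.2-0 kubeadm-1.21.2-0 kubectl-1.21.2-0 docker
## Enable kubelet so it starts automatically after kubeadm init / kubeadm join
systemctl enable kubelet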

3. Check the required image versions

[root@master ~]# kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.21.2
k8s.gcr.io/kube-controller-manager:v1.21.2
k8s.gcr.io/kube-scheduler:v1.21.2
k8s.gcr.io/kube-proxy:v1.21.2
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0

4. Write the image-pull script

[root@master ~]# vi k8s.sh
#!/bin/bash
# Pull the images kubeadm needs from the Aliyun mirror, re-tag them with the
# k8s.gcr.io names kubeadm expects, then drop the mirror-named tags.
# The Docker daemon must be running before this script is executed.
images=(
    kube-apiserver:v1.21.2
    kube-controller-manager:v1.21.2
    kube-scheduler:v1.21.2
    kube-proxy:v1.21.2
    pause:3.4.1
    etcd:3.4.13-0
    coredns:1.8.0
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
# kubeadm expects CoreDNS as k8s.gcr.io/coredns/coredns:v1.8.0, so re-tag it once more
docker tag k8s.gcr.io/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
docker rmi k8s.gcr.io/coredns:1.8.0

5. Start the Docker service, then run the script

[root@master ~]# systemctl start docker && systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master ~]# sh k8s.sh

6. Initialize the cluster with kubeadm

[root@master ~]# sudo kubeadm init  --kubernetes-version=v1.21.2  --service-cidr=172.18.0.0/12 --pod-network-cidr=172.16.0.0/16  --ignore-preflight-errors=Swap

kubeadm init pulls its images from k8s.gcr.io by default, but k8s.gcr.io is not reachable from mainland China, so the command hangs on the line below:
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

Workaround: pull the image files from the Aliyun mirror, then re-tag them into the k8s.gcr.io/<name>:<version> form.

First, determine the exact image versions kubeadm needs with kubeadm config images list (see step 3).

Image versions required by kubeadm

Then pull every image listed by kubeadm config images list:

[root@master ~]# docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.2
[root@master ~]# docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.2

[root@master ~]# docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.2
[root@master ~]# docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.21.2
[root@master ~]# docker pull registry.aliyuncs.com/google_containers/pause:3.4.1
[root@master ~]# docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[root@master ~]# docker pull registry.aliyuncs.com/google_containers/coredns:1.8.0


At this point all of the local images carry the registry.aliyuncs.com/google_containers/ prefix, which does not match the names required by kubeadm config images list. Re-tag each image with docker tag <old-name>:<tag> <new-name>:<tag>, one by one:

[root@master ~]# docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.2 k8s.gcr.io/kube-apiserver:v1.21.2

[root@master ~]# docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.2 k8s.gcr.io/kube-controller-manager:v1.21.2
[root@master ~]# docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.2 k8s.gcr.io/kube-scheduler:v1.21.2
[root@master ~]# docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.21.2 k8s.gcr.io/kube-proxy:v1.21.2
[root@master ~]# docker tag registry.aliyuncs.com/google_containers/pause:3.4.1 k8s.gcr.io/pause:3.4.1
[root@master ~]# docker tag registry.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
[root@master ~]# docker tag registry.aliyuncs.com/google_containers/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0

Kubernetes image files (downloaded from Aliyun and re-tagged with the official names)
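
Before re-running the initialization, it is worth checking that every name printed by kubeadm config images list now resolves to a local image:

## All seven k8s.gcr.io image names from step 3 should appear here
[root@master ~]# docker images | grep 'k8s.gcr.io'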

Finally, run the Kubernetes initialization again.

When the output ends with "Your Kubernetes control-plane has initialized successfully!", the initialization succeeded. The detailed output:

[root@master ~]# sudo kubeadm init  --kubernetes-version=v1.21.2  --service-cidr=172.18.0.0/12 --pod-network-cidr=172.16.0.0/16  --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [172.16.0.1 172.19.19.106]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [172.19.19.106 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [172.19.19.106 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.509065 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 96hjb8.g6a9ado8biq9knjx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.19.19.106:6443 --token 2thwe5.rykcnzyhuzl7jniy \
        --discovery-token-ca-cert-hash sha256:0775863db8ed259d72b864204ef075b2bdfb6a4e2b98d18a8f37b79a9c207157

7. Configure the pod network

[root@aws ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2021-07-06 07:10:55--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4813 (4.7K) [text/plain]
Saving to: ‘kube-flannel.yml.1’

100%[================================================================================================================>] 4,813       --.-K/s   in 0s

2021-07-06 07:10:55 (50.8 MB/s) - ‘kube-flannel.yml.1’ saved [4813/4813]

## Edit the Network field and change it to the custom pod CIDR "172.16.0.0/16"
  net-conf.json: |
    {
      "Network": "172.16.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

Replace the image quay.io/coreos/flannel:v0.14.0 in kube-flannel.yml with lizhenliang/flannel:v0.14.0. The fully modified manifest is reproduced below and can be pasted as-is.
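
Alternatively, if you would rather patch the downloaded kube-flannel.yml in place than paste the whole manifest, a sed sketch along these lines should work (it assumes the stock file still uses the default 10.244.0.0/16 network and the quay.io/coreos/flannel:v0.14.0 image):

## Swap in the custom pod CIDR and the domestic image mirror
sed -i 's#10.244.0.0/16#172.16.0.0/16#' kube-flannel.yml
sed -i 's#quay.io/coreos/flannel:v0.14.0#lizhenliang/flannel:v0.14.0#g' kube-flannel.yml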

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "172.16.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: lizhenliang/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
[root@master ~]# kubectl apply -f kube-flannel.yml

## On success the output looks like this
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

8. Verification

[root@master ~]# kubectl get pods --all-namespaces

All pods should be in the Running state.
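
If some pods are still starting, watching the kube-system namespace until everything settles is a simple way to confirm the control plane is healthy:

## Watch until all system pods reach Running (Ctrl-C to stop)
[root@master ~]# kubectl get pods -n kube-system -w
## The master should report Ready once the flannel network is up
[root@master ~]# kubectl get nodes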

II. Configuring the Worker Nodes

1. Install kubelet, kubeadm and docker on the worker nodes

[root@node-2 yum.repos.d]# yum -y install kubelet kubeadm docker

YUM install of kubelet, kubeadm and docker on the worker nodes (run on node-1 and node-2)

2. Start Docker

## Run the following on every worker node
[root@node-1 ~]# systemctl start docker
[root@node-1 ~]# systemctl enable docker kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

[root@node-2 ~]# systemctl start docker
[root@node-2 ~]# systemctl enable docker kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

## Make sure Docker started correctly on each worker node
[root@node-1 ~]# systemctl status docker
[root@node-2 ~]# systemctl status docker

On each worker node, run the kubeadm join command printed at the end of kubeadm init (copy the command with its token and execute it):

kubeadm join 172.19.19.106:6443 --token 2thwe5.rykcnzyhuzl7jniy \
        --discovery-token-ca-cert-hash sha256:0775863db8ed259d72b864204ef075b2bdfb6a4e2b98d18a8f37b79a9c207157
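
If the original token has expired or the join command was lost, a fresh one can be generated on the master at any time:

## Prints a new, ready-to-run join command (token plus CA cert hash)
[root@master ~]# kubeadm token create --print-join-command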

Back on the master, verify that the nodes have joined the cluster:

[root@master ~]# kubectl get nodes

List the namespaces; all system-level pods live in the kube-system namespace:

[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   12m
kube-node-lease   Active   12m
kube-public       Active   12m
kube-system       Active   12m
## If every pod is Running, the Kubernetes cluster has been set up successfully
kubectl get pods -n kube-system

Kubernetes quickly deployed with Kubeadm

This completes the full Kubeadm-based Kubernetes deployment.

III. Common Errors When Installing Kubernetes with Kubeadm

1. "The connection to the server localhost:8080 was refused - did you specify the right host or port?"

Fix:

[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
[root@master ~]# source /etc/profile
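
For a regular (non-root) user, the alternative printed by kubeadm init works just as well:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config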

2. Worker node stuck in NotReady

Use kubectl describe pod -n kube-system <pod-name> to inspect a problematic pod; if the pod has an issue, the [Events] section at the bottom of the output shows what went wrong. For example:

[root@master ~]# kubectl -n kube-system get pods
NAME                             READY   STATUS              RESTARTS   AGE
coredns-558bd4d5db-f96zx         0/1     Running             0          9m45s
coredns-558bd4d5db-t2tp4         0/1     Running             0          9m45s
etcd-master                      1/1     Running             0          9m59s
kube-apiserver-master            1/1     Running             0          9m59s
kube-controller-manager-master   1/1     Running             0          9m59s
kube-flannel-ds-tgzzm            1/1     Running             2          43s
kube-proxy-x7nbs                 1/1     Running             0          9m46s
kube-scheduler-master            1/1     Running             0          9m59s

[root@master ~]# kubectl describe pod -n kube-system coredns-558bd4d5db-f96zx

Fix: pull the missing image manually. A few minutes after the image is in place, Kubernetes retries automatically; flannel recovers, the other pods also move to Running, and the node then shows Ready.

[root@node-1 ~]# docker pull quay.io/coreos/flannel:v0.14.0
[root@node-2 ~]# docker pull quay.io/coreos/flannel:v0.14.0

Trying to pull repository quay.io/coreos/flannel ...
v0.14.0: Pulling from quay.io/coreos/flannel
801bfaa63ef2: Pull complete
e4264a7179f6: Pull complete
bc75ea45ad2e: Pull complete
78648579d12a: Pull complete
3393447261e4: Pull complete
071b96dd834b: Pull complete
4de2f0468a91: Pull complete
Digest: sha256:4a330b2f2e74046e493b2edc30d61fdebbdddaaedcb32d62736f25be8d3c64d5
Status: Downloaded newer image for quay.io/coreos/flannel:v0.14.0

3. flannel pods report CrashLoopBackOff or Init:ErrImagePull

CrashLoopBackOff / Init:ErrImagePull error

Cause: kube-flannel.yml references image: quay.io/coreos/flannel:v0.14.0, and quay.io is blocked and unreachable from mainland China.

Fix 1: replace the image with the domestic mirror lizhenliang/flannel:v0.14.0

## Remove the previous deployment
[root@master ~]# kubectl delete -f kube-flannel.yml
## Apply the manifest again
[root@master ~]# kubectl apply -f kube-flannel.yml
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                    READY   STATUS   RESTARTS   AGE   IP              NODE      NOMINATED
kube-flannel-ds-h2bd4   1/1     Running  1          10s   172.19.19.106   master   <none>     

Fix 2: obtain the flannel:v0.14.0 image from another node or another channel

In my case node-1 already had the image, so I simply exported it to an archive:

[root@node-1 ~]# docker save -o flannel.tar.gz quay.io/coreos/flannel:v0.14.0
[root@node-1 ~]# ls -lh
-rw-------  1 root root  63M Feb  7 17:06 flannel.tar.gz
[root@node-1 ~]# scp ./flannel.tar.gz 172.19.19.106:/root/
[root@node-1 ~]# scp ./flannel.tar.gz 172.19.19.110:/root/


Load the image on the other nodes:

[root@master ~]# docker load -i flannel.tar.gz
[root@node-2 ~]# docker load -i flannel.tar.gz
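
Confirm the image is now present locally on each node; the kubelet will pick it up on its next retry:

[root@master ~]# docker images | grep flannel
[root@node-2 ~]# docker images | grep flannel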

4. nodePort out of range when running kubectl create -f mysql-svc.yaml (MySQL 8.0.* deployment on the Kubeadm cluster)

Error: The Service "mysql-svc" is invalid: spec.ports[0].nodePort: Invalid value: 33306: provided port is not in the valid range. The range of valid ports is 30000-32767

Cause: the requested nodePort is outside the default range. Widen the range on the apiserver as follows:

[root@master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml

    - --service-node-port-range=1-50000

## kube-apiserver runs as a static pod under kubeadm, so there is no kube-apiserver systemd unit;
## once the manifest is saved, the kubelet recreates the pod with the new flag automatically.
## Restarting the kubelet forces an immediate reload if needed.
[root@master ~]# systemctl restart kubelet
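
Once the apiserver pod has come back up, the wider range takes effect and the Service can be created again:

## The apiserver static pod should be Running again with the new flag
[root@master ~]# kubectl get pods -n kube-system | grep kube-apiserver
[root@master ~]# kubectl create -f mysql-svc.yaml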

5. This Kubeadm installation guide is continuously updated; feedback is welcome.