Installing a Recent Kubernetes Release and the Dashboard

Preparation

What is etcd

https://etcd.io/

This guide uses etcd 3.5.1

What is Kubernetes

https://kubernetes.io/

This guide uses Kubernetes 1.23.5

flannel

https://github.com/flannel-io/flannel

This guide uses flannel image version 3.6

Docker

https://www.docker.com/

This guide uses Docker 20.10.14

Helm

https://helm.sh/

This guide uses Helm 3.6.3

Server preparation

Three servers, named Master, Node1, and Node2
All running CentOS 7.9

Server   IP             Hostname
Master   172.16.10.10   master.k8s
Node1    172.16.10.11   node1.k8s
Node2    172.16.10.12   node2.k8s
# disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# disable swap: edit /etc/fstab and comment out the swap mount, or run
swapoff -a
# confirm swap is off with free -m
free -m
# disable SELinux: edit /etc/selinux/config and set SELINUX=disabled, or run
setenforce 0
# enable IPv4 forwarding
# echo 1 > /proc/sys/net/ipv4/ip_forward
sysctl net.ipv4.ip_forward=1
# enable IPv6 forwarding (if IPv6 is enabled); note the key is net.ipv6.conf.all.forwarding, not net.ipv6.ip_forward
# echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
sysctl net.ipv6.conf.all.forwarding=1
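
These sysctl settings take effect immediately but are lost on reboot. A minimal sketch to persist them, assuming the standard /etc/sysctl.d mechanism on CentOS 7 (the file name k8s.conf is arbitrary):

# persist the forwarding settings across reboots
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
EOF
# reload all sysctl configuration files
sysctl --system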

If you need a PPTP connection to reach k8s.gcr.io, you can connect as follows; alternatively, use --image-repository to pull from a mirror (see below).

yum install epel-release -y
yum install pptp-setup -y
pptpsetup --create CONNECTION_NAME --server CONNECTION_IP --username USER_NAME --password USER_PASS --encrypt --start
route add default dev ppp0

Installation

Configure the Kubernetes and Docker Yum repositories


Docker

cat > /etc/yum.repos.d/docker-ce.repo <<'EOF'
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF

Or install the repo file directly:

sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

If yum-config-manager is missing, install it with yum install -y yum-utils, or alternatively:

yum install -y wget && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

Kubernetes

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF

List available versions

yum list docker-ce --showduplicates
yum list kubeadm --showduplicates

This guide uses Kubernetes 1.23.5 and Docker 20.10.14; to install the latest versions, no version suffix is needed:

sudo yum install docker-ce kubeadm

To install a specific older version, append -VERSION; very old versions may also require --setopt=obsoletes=0:

sudo yum install docker-ce-20.10.14 kubeadm-1.23.5
yum install -y --setopt=obsoletes=0 \
   docker-ce-17.03.2.ce-1.el7.centos.x86_64 \
   docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch
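
To keep later yum update runs from silently upgrading these pinned packages, one option (an assumption on my part, using the yum-versionlock plugin, not something this guide requires) is:

# lock the installed versions of the container and Kubernetes packages
yum install -y yum-plugin-versionlock
yum versionlock add docker-ce kubeadm kubelet kubectl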

kubeadm pulls in kubelet, kubectl, kubernetes-cni, and other components as dependencies.
Enable and start Docker and the kubelet:

systemctl enable docker kubelet && systemctl start docker kubelet

Make sure Docker and the kubelet use the same cgroup driver (the official recommendation is systemd); otherwise you may hit errors like the following:

Initial timeout of 40s passed.

The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
# check Docker's cgroup driver
docker info | grep Cgroup
# check the kubelet's cgroup driver
systemctl show --property=Environment kubelet | grep cgroup-driver

If they differ, make the following changes:
1. Update the Docker configuration by creating or editing /etc/docker/daemon.json:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
  2. Edit /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf and add or change --cgroup-driver=systemd
  3. Edit /var/lib/kubelet/kubeadm-flags.env and add or change --cgroup-driver=systemd (this step may only be needed if initialization fails)
  4. After the changes, reload and restart:
systemctl daemon-reload
systemctl restart docker kubelet

After installation:

# enable bridge-nf-call-iptables
# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-iptables=1
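
Note that this key only exists once the br_netfilter kernel module is loaded, and the value is lost on reboot. A sketch to load the module at boot and persist the setting, appending to the /etc/sysctl.d/k8s.conf file sketched earlier:

# load the bridge netfilter module now and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf
# persist the bridge setting and reload
echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.d/k8s.conf
sysctl --system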

Master server configuration

Initialize the master

Run the following command (if you do not need a mirror, drop the --image-repository registry.aliyuncs.com/google_containers \ line):

kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--apiserver-advertise-address=172.16.10.10  \
--kubernetes-version v1.23.5 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.2.0.0/16 \
--ignore-preflight-errors=all

On success, you should see output roughly like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.10.10:6443 --token kcv6rr.vxa64fbgadwfnbcn \
        --discovery-token-ca-cert-hash sha256:aa840e50a28fa37bc3428e8bdaf9c247e225eb9c298146e078ffaa97f349c3b9 

Remaining configuration

To use the cluster as a regular user, run the following as that user:

rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you are root, simply run:

export KUBECONFIG=/etc/kubernetes/admin.conf

Check node status:

# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
master.k8s   NotReady   control-plane,master   6m19s   v1.23.5

Configure the network plugin

cd ~ && mkdir flannel && cd flannel
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
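
Note: the stock kube-flannel.yml defines the pod network in its net-conf.json ConfigMap, which defaults to 10.244.0.0/16. Because this cluster was initialized with --pod-network-cidr=10.2.0.0/16, edit that block so the two match before applying; the relevant fragment should read:

  net-conf.json: |
    {
      "Network": "10.2.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }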

Deploy flannel:

kubectl apply -f ~/flannel/kube-flannel.yml

Check the services:

# kubectl get pods --namespace kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-bvf2p             1/1     Running   0          3m32s
coredns-6d8c4cb4d-ngtfj             1/1     Running   0          3m32s
etcd-master.k8s                     1/1     Running   1          3m46s
kube-apiserver-master.k8s           1/1     Running   1          3m46s
kube-controller-manager-master.k8s  1/1     Running   2          3m46s
kube-flannel-ds-25ldp               1/1     Running   0          68s
kube-proxy-6pd5k                    1/1     Running   0          3m32s
kube-scheduler-master.k8s           1/1     Running   2          3m46s
# kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP   4m1s
# kubectl get svc --namespace kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.1.0.10    <none>        53/UDP,53/TCP,9153/TCP   4m21s

Checking node status again, the master is now Ready:

# kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
master.k8s   Ready    control-plane,master   8m1s   v1.23.5

Node server configuration

kubeadm join 172.16.10.10:6443 --token kcv6rr.vxa64fbgadwfnbcn \
        --discovery-token-ca-cert-hash sha256:aa840e50a28fa37bc3428e8bdaf9c247e225eb9c298146e078ffaa97f349c3b9

On success you will see:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

kubeadm init and kubeadm join errors

You can run kubeadm reset to reset the configuration, fix the underlying problem, and then retry.
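
If the join fails because the bootstrap token from kubeadm init has expired (tokens are valid for 24 hours by default), generate a fresh join command on the master:

# print a new kubeadm join command with a fresh token and CA hash
kubeadm token create --print-join-command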

Cluster verification

On the master, run:

# kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
master.k8s   Ready    control-plane,master   31m    v1.23.5
node1.k8s    Ready    <none>                 6m3s   v1.23.5
node2.k8s    Ready    <none>                 5m8s   v1.23.5
# kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-bvf2p              1/1     Running   0          32m
coredns-6d8c4cb4d-ngtfj              1/1     Running   0          32m
etcd-master.k8s                      1/1     Running   1          32m
kube-apiserver-master.k8s            1/1     Running   1          32m
kube-controller-manager-master.k8s   1/1     Running   2          32m
kube-flannel-ds-25ldp                1/1     Running   0          29m
kube-flannel-ds-dr97t                1/1     Running   0          6m10s
kube-proxy-2qbc5                     1/1     Running   1          6m10s
kube-proxy-6pd5k                     1/1     Running   0          32m
kube-scheduler-master.k8s            1/1     Running   2          32m

If a pod has restarted more than twice and still fails to start after a long time, you can delete it and wait for the cluster to create a replacement pod:

# kubectl delete pod kube-scheduler-master.k8s kube-controller-manager-master.k8s -n kube-system
pod "kube-scheduler-master.k8s" deleted
pod "kube-controller-manager-master.k8s" deleted

Check node status:

# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
master.k8s   Ready    control-plane,master   40m   v1.23.5
node1.k8s    Ready    <none>                 15m   v1.23.5
node2.k8s    Ready    <none>                 14m   v1.23.5

Install the dashboard

  1. Download the Kubernetes dashboard manifest:
cd ~ && mkdir kubernetes-dashboard && cd kubernetes-dashboard
curl -o kubernetes-dashboard-v2.5.1.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml
  2. Edit the kubernetes-dashboard-v2.5.1.yaml file

Make the changes marked with the "# MODIFIED" comments below:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
# MODIFIED: add the following line
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
# MODIFIED: add the following line; the nodePort must be within 30000-32767
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.1
          # MODIFIED: if the official image cannot be pulled, try this mirror instead
          # image: registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.5.1

          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
            # MODIFIED: if the API server cannot be auto-discovered, specify the master manually by adding the following line
            - --apiserver-host=http://172.16.10.10:8080
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
  3. Create the kubernetes-dashboard resources and service

Run kubectl apply -f kubernetes-dashboard-v2.5.1.yaml
If something goes wrong, you can run kubectl delete -f kubernetes-dashboard-v2.5.1.yaml to remove the configuration.

# kubectl apply -f kubernetes-dashboard-v2.5.1.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
  4. Inspect and troubleshoot

Check the dashboard pods and services:

# kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-544665c6c4-6tcp4   1/1     Running   0          4s
pod/kubernetes-dashboard-746c8fd9f8-mppqf        1/1     Running   0          4s

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.1.181.100   <none>        8000/TCP        4s
service/kubernetes-dashboard        NodePort    10.1.122.118   <none>        443:30000/TCP   4s

If a pod fails to start, use the following commands to investigate the cause; it is also worth checking whether a switch or firewall is interfering:

# show detailed pod information
kubectl describe po -n kubernetes-dashboard POD_NAME
# follow the pod logs
kubectl logs -f -n kubernetes-dashboard POD_NAME

For example:

kubectl describe po -n kubernetes-dashboard kubernetes-dashboard-fb8648fd9-bhm5c
kubectl logs -f -n kubernetes-dashboard kubernetes-dashboard-746c8fd9f8-mppqf
  5. Create the admin account and role binding file kubernetes-dashboard-admin.yaml:
# Create Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---

# Create Cluster Role Binding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
  6. Apply the configuration:
# kubectl apply -f kubernetes-dashboard-admin.yaml
serviceaccount/admin-user unchanged
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
  7. View the admin account and bindings:
# kubectl -n kubernetes-dashboard get sa
NAME                   SECRETS   AGE
admin-user             1         5m59s
default                1         32m
kubernetes-dashboard   1         32m
# kubectl -n kubernetes-dashboard get clusterrolebinding
NAME                      ROLE                                     AGE
kubernetes-dashboard      ClusterRole/kubernetes-dashboard         5m
admin-user                ClusterRole/cluster-admin                5m1s
  8. To delete the admin account or its binding:
kubectl -n kubernetes-dashboard delete serviceaccount admin-user
kubectl delete clusterrolebinding admin-user
  9. Get the login token:
# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiI******3kXWQ
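
On Kubernetes 1.24 and later, service account token Secrets are no longer created automatically, so the command above would return nothing; there you would request a token explicitly instead (not needed on the 1.23 cluster built here):

kubectl -n kubernetes-dashboard create token admin-user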
  10. Open the login page
    https://172.16.10.10:30000
    Enter the token obtained in the previous step.

  11. Done, and miscellaneous notes

Kubernetes maps NodePort services to ports 30000-32767 by default. To use other ports, change the API server's port range flag to whatever you need (on binary installs this is set in a file such as /opt/kubernetes/cfg/kube-apiserver; on a kubeadm cluster like this one, it lives in /etc/kubernetes/manifests/kube-apiserver.yaml):

--service-node-port-range=30000-32767
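
On this kubeadm cluster, a sketch of where the flag goes in the API server's static pod manifest (the 20000-32767 range is just an example); the kubelet restarts the API server automatically after the file is saved:

# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=20000-32767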

If the service stays unhealthy, or the page cannot be opened, try deploying the dashboard onto the Master:

a. Label the Master:

kubectl label node master.k8s name=master

Edit kubernetes-dashboard-v2.5.1.yaml
Add name: master to the nodeSelector: values, so they read:

      nodeSelector:
        name: master
        "kubernetes.io/os": linux

b. Run kubectl delete -f kubernetes-dashboard-v2.5.1.yaml
c. Then run kubectl apply -f kubernetes-dashboard-v2.5.1.yaml again
d. Go back to step 4

Create a PV (PersistentVolume) and a PVC (PersistentVolumeClaim); note that the PV must be at least as large as the PVC.
Run kubectl apply -f pv-10g.yaml to create a 10 Gi PV; pv-10g.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
 name: pv-10g
spec:
 capacity:
  storage: 10Gi
 volumeMode: Filesystem
 accessModes:
 - ReadWriteOnce
 persistentVolumeReclaimPolicy: Retain
 storageClassName: local-storage
 local:
  path: /home/vagrant/storage
 nodeAffinity:
  required:
   nodeSelectorTerms:
   - matchExpressions:
     - key: kubernetes.io/hostname
       operator: In
       values:
       - node1.k8s
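
Note that a local PV only references an existing directory; Kubernetes does not create it. Make sure the path exists on the node named in nodeAffinity first:

# run on node1.k8s beforehand
mkdir -p /home/vagrant/storage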

Run kubectl apply -f pvc-3g.yaml to create a 3 Gi PVC; pvc-3g.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: pvc-3g
spec:
 volumeName: pv-10g
 storageClassName: local-storage
 volumeMode: Filesystem
 accessModes:
 - ReadWriteOnce
 resources:
  requests:
   storage: 3Gi
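
A minimal sketch of a pod that consumes the claim (the pod name pvc-demo and the /data mount path are illustrative, not from this guide):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /data
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: pvc-3g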

To let the master node schedule workloads:

kubectl taint nodes --all node-role.kubernetes.io/master-

To keep the master node from scheduling workloads:

kubectl taint nodes master.k8s node-role.kubernetes.io/master=true:NoSchedule
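
You can confirm the taint state afterwards with:

# list taints on the master node
kubectl describe node master.k8s | grep -i Taint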

Good luck.

Reference:
https://blog.csdn.net/weixin_39832965/article/details/110801764

Copyright notice:
Author: Kiyo
Link: https://www.wkiyo.cn/html/2022-03/i1179.html
Source: Kiyo's space
The copyright belongs to the author. Please do not reproduce without permission.
