Installing a highly available Kubernetes 1.26 cluster with kubeadm

K8s network planning:

  • podSubnet (pod network): 10.244.0.0/16
  • serviceSubnet (service network): 10.96.0.0/12

Lab environment:

  • OS: CentOS 7.9
  • Specs: 4 GiB RAM / 4 vCPU / 60 GB disk
  • Network: NAT mode

K8s cluster roles

IP            Hostname   Components
10.168.1.61   master01   apiserver, controller-manager, scheduler, kubelet, etcd, kube-proxy, container runtime, calico, keepalived, nginx
10.168.1.62   master02   apiserver, controller-manager, scheduler, kubelet, etcd, kube-proxy, container runtime, calico, keepalived, nginx
10.168.1.63   master03   apiserver, controller-manager, scheduler, kubelet, etcd, kube-proxy, container runtime, calico, keepalived, nginx
10.168.1.64   node01     kube-proxy, calico, coredns, container runtime, kubelet
10.168.1.60   master     VIP (keepalived virtual IP for the apiserver)

1. Initialize the lab environment for the k8s cluster (run the following on all machines)

1.1.1 Disable SELinux and the firewall

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
systemctl stop firewalld && systemctl disable firewalld

1.1.2 Configure the hosts file

cat <<EOF>>/etc/hosts
10.168.1.61 master01
10.168.1.62 master02
10.168.1.63 master03
10.168.1.64 node01
EOF

1.1.3 Disable swap

swapoff -a
sed -i 's/.*swap/#&/' /etc/fstab
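A quick way to confirm that swap is fully disabled (both outputs should show no active swap):

swapon -s
free -m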

1.1.4 Adjust kernel parameters

modprobe br_netfilter
echo "modprobe br_netfilter" >>/etc/profile
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
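To confirm the module is loaded and the parameters took effect (both values should be 1):

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward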

1.1.5 Configure the yum repositories

sudo sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://mirror.centos.org/centos|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos|g' \
    -i.bak \
    /etc/yum.repos.d/CentOS-*.repo
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=kubernetes
baseurl=https://mirrors.tuna.tsinghua.edu.cn/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
sudo yum makecache fast

1.1.6 Configure time synchronization

yum install ntpdate -y
ntpdate cn.pool.ntp.org
echo "*/5 * * * * ntpdate cn.pool.ntp.org" | crontab -

1.1.7 Install base packages

yum install -y vim net-tools nfs-utils ipvsadm

1.2 Install the containerd service

1.2.1 Install containerd

yum install containerd.io -y
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

1.2.2 Modify the configuration file (enable the systemd cgroup driver and use a reachable pause image):

sed -i '/SystemdCgroup/s/false/true/' /etc/containerd/config.toml
sed -i 's#registry.k8s.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.7#' /etc/containerd/config.toml
systemctl enable containerd --now

1.2.3 Configure the /etc/crictl.yaml file

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
systemctl restart containerd

1.2.4 Configure a containerd registry mirror

sed -i '/config_path/s#""#"/etc/containerd/certs.d"#' /etc/containerd/config.toml
grep config_path /etc/containerd/config.toml
mkdir -p /etc/containerd/certs.d/docker.io/
cat >/etc/containerd/certs.d/docker.io/hosts.toml <<EOF
[host."https://vh3bm52y.mirror.aliyuncs.com"]
  capabilities = ["pull"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull"]
EOF
systemctl restart containerd

1.2.5 Check that containerd runs without errors and set the container runtime endpoint

systemctl status containerd
crictl config runtime-endpoint /run/containerd/containerd.sock
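As a quick sanity check of containerd and the mirror configuration, you can try pulling a small public image. The busybox image is only an example; if the mirror addresses above are unreachable, the pull may still go directly to Docker Hub:

crictl pull docker.io/library/busybox:latest
crictl images | grep busybox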

1.3 Install the packages needed to initialize k8s

yum install kubelet-1.26.3 kubeadm-1.26.3 kubectl-1.26.3 -y
systemctl enable kubelet
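You can confirm that the expected 1.26.3 versions were installed (kubelet will stay in a restart loop until the cluster is initialized, which is normal at this point):

kubeadm version -o short
kubelet --version
kubectl version --client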

1.4 Configure the IPVS kernel modules

cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
    /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        /sbin/modprobe ${kernel_module}
    fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

2. Configure high availability (run the following only on the master nodes)

2.1 Make the k8s apiserver highly available with keepalived + nginx

master01:

hostnamectl set-hostname master01 && bash
ssh-keygen
ssh-copy-id master01
ssh-copy-id master02
ssh-copy-id master03
ssh-copy-id node01
yum install epel-release -y
yum install nginx keepalived nginx-mod-stream -y

Back up the original configuration and write the following into /etc/nginx/nginx.conf:

mv /etc/nginx/nginx.conf /etc/nginx/nginx.bak
vim /etc/nginx/nginx.conf

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the apiserver on the three master nodes
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 10.168.1.61:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 10.168.1.62:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 10.168.1.63:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 16443;   # nginx shares the master nodes with the apiserver, so it must not listen on 6443 or the ports would conflict
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
}
Back up the original configuration and write the following into /etc/keepalived/keepalived.conf:

mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.bak
vim /etc/keepalived/keepalived.conf

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0              # change to the actual NIC name
    virtual_router_id 51        # VRRP router ID; unique per VRRP instance
    priority 100                # priority; the backup servers use lower values (90 on master02, 80 on master03)
    advert_int 1                # VRRP advertisement (heartbeat) interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        10.168.1.60/24
    }
    track_script {
        check_nginx
    }
}
Create the nginx health-check script /etc/keepalived/check_nginx.sh:

vim /etc/keepalived/check_nginx.sh

#!/bin/bash
# check whether the nginx process is running
if pgrep "nginx" > /dev/null
then
    echo "nginx is running"
else
    echo "nginx is not running"
    systemctl stop keepalived
fi

Start the nginx and keepalived services:

chmod a+x /etc/keepalived/check_nginx.sh
systemctl start nginx && systemctl enable nginx
systemctl start keepalived && systemctl enable keepalived
systemctl status nginx
systemctl status keepalived
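On master01 (the MASTER instance with priority 100) the virtual IP should now be bound and nginx should be listening on 16443. A quick check, assuming the NIC is eth0 as in the keepalived configuration above:

ip addr show eth0 | grep 10.168.1.60
ss -lntp | grep 16443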

master02

hostnamectl set-hostname master02 && bash
ssh-keygen
ssh-copy-id master01
ssh-copy-id master02
ssh-copy-id master03
ssh-copy-id node01
yum install epel-release -y
yum install nginx keepalived nginx-mod-stream -y

Back up the original configuration and write /etc/nginx/nginx.conf with exactly the same content as on master01:

mv /etc/nginx/nginx.conf /etc/nginx/nginx.bak
vim /etc/nginx/nginx.conf

Back up and write /etc/keepalived/keepalived.conf. The file is identical to the one on master01 except that master02 is a backup node: use "state BACKUP" and "priority 90":

mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.bak
vim /etc/keepalived/keepalived.conf

Create the same /etc/keepalived/check_nginx.sh health-check script as on master01:

vim /etc/keepalived/check_nginx.sh

Start the nginx and keepalived services:

chmod a+x /etc/keepalived/check_nginx.sh
systemctl start nginx && systemctl enable nginx
systemctl start keepalived && systemctl enable keepalived
systemctl status nginx
systemctl status keepalived

master03

hostnamectl set-hostname master03 && bash
ssh-keygen
ssh-copy-id master01
ssh-copy-id master02
ssh-copy-id master03
ssh-copy-id node01
yum install epel-release -y
yum install nginx keepalived nginx-mod-stream -y

Back up the original configuration and write /etc/nginx/nginx.conf with exactly the same content as on master01:

mv /etc/nginx/nginx.conf /etc/nginx/nginx.bak
vim /etc/nginx/nginx.conf

Back up and write /etc/keepalived/keepalived.conf. The file is identical to the one on master01 except that master03 is a backup node: use "state BACKUP" and "priority 80":

mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.bak
vim /etc/keepalived/keepalived.conf

Create the same /etc/keepalived/check_nginx.sh health-check script as on master01:

vim /etc/keepalived/check_nginx.sh

Start the nginx and keepalived services:

chmod a+x /etc/keepalived/check_nginx.sh
systemctl start nginx && systemctl enable nginx
systemctl start keepalived && systemctl enable keepalived
systemctl status nginx
systemctl status keepalived
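Optionally, a rough failover check at this point: stopping nginx on master01 makes check_nginx.sh stop keepalived there, so the VIP should move to master02 (the next highest priority). This assumes the eth0 interface name used in the configuration above:

# on master01
systemctl stop nginx
# on master02, after a few seconds the VIP should appear
ip addr show eth0 | grep 10.168.1.60
# on master01, restore the services afterwards
systemctl start nginx && systemctl start keepalived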

node01

hostnamectl set-hostname node01 && bash

3. Initialize the k8s cluster

master01

kubeadm config print init-defaults > kubeadm.yaml

Adjust the generated configuration to our environment: change imageRepository to a reachable mirror, set controlPlaneEndpoint to the VIP (10.168.1.60:16443), and set the kube-proxy mode to ipvs. Note that because containerd is used as the container runtime, cgroupDriver must be set to systemd when initializing the node.

The complete kubeadm.yaml looks like this:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.168.1.61
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.26.3
controlPlaneEndpoint: 10.168.1.60:16443
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
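The first control-plane node should now be up. It will report NotReady until the calico network plugin is installed in section 4, and coredns stays Pending for the same reason, but the static control-plane pods (apiserver, controller-manager, scheduler, etcd) should be Running:

kubectl get nodes
kubectl get pods -n kube-system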

Join master02 to the cluster

Create the certificate directories on master02:

mkdir -p /etc/kubernetes/pki/
mkdir -p /etc/kubernetes/pki/etcd/

Copy the certificates from master01:

scp /etc/kubernetes/pki/ca.crt master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master02:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master02:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master02:/etc/kubernetes/pki/etcd/

On master01, print the join command:

kubeadm token create --print-join-command

Run on master02:

kubeadm join 10.168.1.60:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7e7f1e14d31f6b395b5301a41e84ef01c47685897d7ede57eef2fd827b681f9b \
    --control-plane
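After the join finishes, kubectl can optionally be set up on master02 as well, using the same steps as on master01:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config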

Join master03 to the cluster

Create the certificate directories on master03:

mkdir -p /etc/kubernetes/pki/
mkdir -p /etc/kubernetes/pki/etcd/

Copy the certificates from master01:

scp /etc/kubernetes/pki/ca.crt master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master03:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master03:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master03:/etc/kubernetes/pki/etcd/

On master01, print the join command:

kubeadm token create --print-join-command

Run on master03:

kubeadm join 10.168.1.60:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7e7f1e14d31f6b395b5301a41e84ef01c47685897d7ede57eef2fd827b681f9b \
    --control-plane

Check the cluster status from any control-plane node.
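For example (all three control-plane nodes should be listed; they stay NotReady until calico is installed):

kubectl get nodes -o wide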

Scale out the k8s cluster: add the first worker node

Run on node01:

kubeadm join 10.168.1.60:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7e7f1e14d31f6b395b5301a41e84ef01c47685897d7ede57eef2fd827b681f9b

Check the status of the whole cluster.


Label the worker node with the work role:

kubectl label nodes node01 node-role.kubernetes.io/work=work

Check the nodes again.
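The new role should now appear in the ROLES column:

kubectl get nodes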

4. Install calico (only on master01)

yum install wget -y
wget https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz
tar -zxvf helm-v3.10.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/

wget https://github.com/projectcalico/calico/releases/download/v3.24.5/tigera-operator-v3.24.5.tgz
helm show values tigera-operator-v3.24.5.tgz > values.yaml

Change enabled under apiServer from true to false:

apiServer:
  enabled: false

The complete values.yaml looks like this:

imagePullSecrets: {}
installation:
  enabled: true
  kubernetesProvider: ""
apiServer:
  enabled: false
certs:
  node:
    key:
    cert:
    commonName:
  typha:
    key:
    cert:
    commonName:
    caBundle:
# Resource requests and limits for the tigera/operator pod.
resources: {}
# Tolerations for the tigera/operator pod.
tolerations:
- effect: NoExecute
  operator: Exists
- effect: NoSchedule
  operator: Exists
# NodeSelector for the tigera/operator pod.
nodeSelector:
  kubernetes.io/os: linux
# Custom annotations for the tigera/operator pod.
podAnnotations: {}
# Custom labels for the tigera/operator pod.
podLabels: {}
# Image and registry configuration for the tigera/operator pod.
tigeraOperator:
  image: tigera/operator
  version: v1.28.5
  registry: quay.io
calicoctl:
  image: docker.io/calico/ctl
  tag: v3.24.5

Install calico with helm:

helm install calico tigera-operator-v3.24.5.tgz -n kube-system --create-namespace -f values.yaml

Check the pod status; if nothing is in an error state, the installation succeeded.
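A sketch of the check; the tigera operator creates the calico workloads in their own namespaces, so listing across all namespaces is the simplest:

kubectl get pods -A | grep -E 'tigera|calico'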

Check again that the cluster nodes are Ready.
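Once the calico pods are running, every node should report Ready:

kubectl get nodes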
