Kubernetes 1.27.2 Binary High-Availability Cluster Deployment


Environment

Note: this lab uses 5 hosts. The 3 master nodes also act as workers; os128, os129, and os130 use containerd as the container runtime, while worker131 and worker132 use Docker.

Hostname IP Components OS
os128 192.168.177.128 etcd、kube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxy、containerd CentOS7.9
os129 192.168.177.129 etcd、kube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxy、containerd CentOS7.9
os130 192.168.177.130 etcd、kube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxy、containerd CentOS7.9
worker131 192.168.177.131 haproxy、keepalived、kubelet、kube-proxy、docker、cri-dockerd CentOS7.9
worker132 192.168.177.132 haproxy、keepalived、kubelet、kube-proxy、docker、cri-dockerd CentOS7.9
VIP 192.168.177.127

Software Versions

Software version details

Software Version Download URL Notes
CentOS 7.9.2009 https://mirrors.aliyun.com/centos/7.9.2009/isos/x86_64/CentOS-7-x86_64-Minimal-2009.iso
kernel 3.10.0-1160.105.1.el7.x86_64 (system default)
kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy v1.27.2 https://dl.k8s.io/v1.27.2/kubernetes-server-linux-amd64.tar.gz
etcd v3.5.5 https://github.com/etcd-io/etcd/releases/download/v3.5.5/etcd-v3.5.5-linux-amd64.tar.gz
cfssl v1.6.1 https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
cfssljson v1.6.1 https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
cfssl-certinfo v1.6.1 https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
containerd v1.6.6 https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz
runc v1.1.11 https://github.com/opencontainers/runc/releases/download/v1.1.11/runc.amd64 the runc bundled with containerd is broken and must be replaced
docker 20.10.24 https://download.docker.com/linux/static/stable/x86_64/docker-20.10.24.tgz
cri-dockerd 0.3.6 https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.6/cri-dockerd-0.3.6.amd64.tgz
crictl v1.29.0 https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.29.0/crictl-v1.29.0-linux-amd64.tar.gz needed as a standalone install when Docker is the runtime; the containerd bundle already ships this tool
haproxy 1.5 default system yum repo
keepalived 1.3.5 default system yum repo
calico v3.25.0 https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
coredns v1.11.1 https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base
dashboard v2.7 https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
metrics-server 0.6.1 https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml

Server OS Initialization

# Install dependency packages
yum -y install epel-release.noarch
yum -y update
yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl bash-completion lrzsz sysstat openssh-clients
# Disable firewalld and SELinux, tune sshd
 systemctl stop firewalld
 systemctl disable firewalld
 yum install iptables* -y
 setenforce 0
 sed -i 's/^SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
 sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
 sed -i '/^#UseDNS/s/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config
 sed -i 's/#PermitEmptyPasswords no/PermitEmptyPasswords no/g' /etc/ssh/sshd_config 
 sed -i 's/^GSSAPIAuthentication yes/GSSAPIAuthentication no/g' /etc/ssh/sshd_config
 systemctl restart sshd
# Disable swap
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
 
# Raise file-handle and process limits
ulimit -SHn 655350
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
cat >> /etc/security/limits.d/20-nproc.conf << EOF
*  soft    nproc     unlimited
*  hard    nproc     unlimited
EOF

# Install ipvs admin tools and load the kernel modules
yum -y install ipvsadm ipset sysstat conntrack libseccomp
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
# Make executable, run, and check that the modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
# Kernel tuning for Kubernetes (k8s.conf)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# Apply the settings
sysctl --system
# Load br_netfilter
modprobe br_netfilter
# Check that it is loaded
lsmod | grep br_netfilter

etcd Certificate Setup

  • Prepare the certificate tools cfssl, cfssljson, and cfssl-certinfo (one host is enough; all certificate work in this guide is done on os128)
    wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
	wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
	wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
	
	mv cfssl_1.6.1_linux_amd64  /usr/bin/cfssl
	mv cfssljson_1.6.1_linux_amd64 /usr/bin/cfssljson
	mv cfssl-certinfo_1.6.1_linux_amd64 /usr/bin/cfssl-certinfo
	chmod +x /usr/bin/cfssl*
  • Self-signed CA for etcd
mkdir -p ~/TLS/{etcd,k8s}

cd ~/TLS/etcd
# Self-signed CA:
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
    "CA": {"expiry": "87600h"},
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

# Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

This produces ca.pem and ca-key.pem.
  • Issue the etcd HTTPS certificate with the self-signed CA

# Create the certificate signing request:
cd ~/TLS/etcd
cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.177.128",
    "192.168.177.129",
    "192.168.177.130"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

# Note: the hosts field must list the internal IPs of ALL etcd nodes; none may be missing. Add a few spare IPs if you want room to scale later.
# Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

# This produces server.pem and server-key.pem.

etcd Cluster Deployment

  • About etcd:
    etcd is the distributed key-value store that Kubernetes uses for all cluster data, so it must be set up first. To avoid a single point of failure, deploy it as a cluster: 3 members tolerate 1 machine failure, while 5 members tolerate 2.
  • The following is done on os128; to simplify things, all files generated on os128 are later copied to os129 and os130.
# Download the etcd release
wget  https://github.com/etcd-io/etcd/releases/download/v3.5.5/etcd-v3.5.5-linux-amd64.tar.gz 

mkdir -pv /opt/etcd/{bin,cfg,ssl}
tar zxvf etcd-v3.5.5-linux-amd64.tar.gz
mv etcd-v3.5.5-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
  • Prepare the etcd configuration files
# etcd configuration on host os128
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.177.128:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.177.128:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.177.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.177.128:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.177.128:2380,etcd-2=https://192.168.177.129:2380,etcd-3=https://192.168.177.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
---
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: join state; new for a new cluster, existing to join an existing one
---
# Manage etcd with systemd
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • Install the etcd cluster
# Copy the certificates generated earlier
# into the paths referenced by the configuration:
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

# Sync to the other etcd hosts
scp -r /opt/etcd/ root@192.168.177.129:/opt/
scp -r /opt/etcd/ root@192.168.177.130:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.177.129:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.177.130:/usr/lib/systemd/system/
# etcd configuration on host os129
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.177.129:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.177.129:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.177.129:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.177.129:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.177.128:2380,etcd-2=https://192.168.177.129:2380,etcd-3=https://192.168.177.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
# etcd configuration on host os130
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.177.130:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.177.130:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.177.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.177.130:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.177.128:2380,etcd-2=https://192.168.177.129:2380,etcd-3=https://192.168.177.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
  • Start etcd and enable it on boot
Start etcd:
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
  • Verify the etcd cluster with etcdctl
 ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379" endpoint health --write-out=table

[Screenshot: etcdctl endpoint health output]
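Beyond endpoint health, member list and endpoint status (the latter shows which member is the leader) are useful checks; a sketch using the same certificates:
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379" member list --write-out=table
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379" endpoint status --write-out=table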

Load Balancer Installation

Run the following on worker131 and worker132.

  • Install haproxy and keepalived
 yum install haproxy keepalived -y
  • haproxy configuration
cat > /etc/haproxy/haproxy.cfg <<EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     6000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
listen stats
    bind 0.0.0.0:9100
    mode  http
    option httplog
    stats uri /status
    stats refresh 30s
    stats realm "Haproxy Manager"
    stats auth admin:password
    stats hide-version
    stats admin if TRUE
#---------------------------------------------------------------------
frontend  k8s-master-default-nodepool-apiserver
    bind *:6443
    mode tcp
    default_backend             k8s-master-default-nodepool
#---------------------------------------------------------------------
backend k8s-master-default-nodepool
    balance     roundrobin
    mode tcp
    server  k8s-apiserver-1 192.168.177.128:6443 check weight 1 maxconn 2000 check inter 2000 rise 2 fall 3
    server  k8s-apiserver-2 192.168.177.129:6443 check weight 1 maxconn 2000 check inter 2000 rise 2 fall 3
    server  k8s-apiserver-3 192.168.177.130:6443 check weight 1 maxconn 2000 check inter 2000 rise 2 fall 3
EOF
  • keepalived configuration
    • worker131 configuration

      cat > /etc/keepalived/keepalived.conf  << EOF
      ! Configuration File for keepalived
      global_defs {
         router_id LVS_DEVEL
         script_user root
         enable_script_security
      }
      vrrp_script check_haproxy {
         script "/etc/keepalived/check_haproxy.sh"
         interval 5
         weight -5
         fall 2
         rise 1
      }
      vrrp_instance VI_1 {
         state BACKUP
         interface ens33
         # non-preemptive VIP mode
         nopreempt
         # unicast
         unicast_src_ip 192.168.177.131
         unicast_peer {
          192.168.177.132
          }
         virtual_router_id 51
         # priority 100 is higher than the backup's 99
         priority 100
         advert_int 2
         authentication {
             auth_type PASS
             auth_pass K8SHA_KA_AUTH
         }
         virtual_ipaddress {
             # the planned virtual IP
             192.168.177.127
         }
         # track script that monitors haproxy on worker131
         track_script {
            # name of the script defined in vrrp_script check_haproxy above
            check_haproxy
         }
      }
      EOF
      
    • worker132 configuration

      cat  > /etc/keepalived/keepalived.conf << EOF
      ! Configuration File for keepalived
      global_defs {
         router_id LVS_DEVEL
         script_user root
         enable_script_security
      }
      vrrp_script check_haproxy {
         script "/etc/keepalived/check_haproxy.sh"
         interval 5
         weight -5
         fall 2
         rise 1
      }
      vrrp_instance VI_1 {
         state BACKUP
         interface ens33
         nopreempt
         unicast_src_ip 192.168.177.132
         unicast_peer {
          192.168.177.131
          }
         virtual_router_id 51
         priority 99
         advert_int 2
         authentication {
             auth_type PASS
             auth_pass K8SHA_KA_AUTH
         }
         virtual_ipaddress {
             192.168.177.127
         }
         # track script that monitors haproxy on worker132
         track_script {
            # name of the script defined in vrrp_script check_haproxy above
            check_haproxy
         }
      }
      EOF
      
  • Health-check script
cat > /etc/keepalived/check_haproxy.sh <<'EOF'
#!/bin/bash
err=0
for k in $(seq 1 3)
do
   check_code=$(pgrep haproxy)
   if [[ $check_code == "" ]]; then
       err=$(expr $err + 1)
       sleep 1
       continue
   else
       err=0
       break
   fi
done

if [[ $err != "0" ]]; then
   echo "systemctl stop keepalived"
   /usr/bin/systemctl stop keepalived
   exit 1
else
   exit 0
fi
EOF
chmod +x /etc/keepalived/check_haproxy.sh
  • Enable on boot and verify the HA VIP
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
# Check service status
systemctl status keepalived haproxy
# Check whether the virtual IP is present
ip address show

HAProxy stats page:
[Screenshot: HAProxy stats page]
Check the VIP:
[Screenshot: ip address output showing the VIP on worker131]
Now manually stop the haproxy service on worker131 to simulate a failure. The keepalived health-check script detects this and stops keepalived on worker131, so the VIP automatically fails over to worker132 with almost no packet loss, only a brief network blip. When the keepalived service on worker131 recovers, it does not take the VIP back, because non-preemptive mode is configured.
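
A quick failover drill (a sketch, to be run by hand):
# on worker131: stop haproxy so the health-check script fails
systemctl stop haproxy
# from any other host: the VIP should keep responding while it moves to worker132
ping 192.168.177.127
# on worker132: confirm the VIP is now bound to ens33
ip address show ens33 | grep 192.168.177.127
# recover worker131; with nopreempt the VIP stays on worker132
systemctl start haproxy && systemctl start keepalived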

Kubernetes Certificate Setup

  • Self-signed CA

# Create the CA used for the Kubernetes components (kube-apiserver, etc.)
cd ~/TLS/k8s

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
    "CA": {"expiry": "87600h"},
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

# Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

This produces ca.pem and ca-key.pem.
  • kube-apiserver certificate

# Create the certificate signing request:
cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.177.127",
      "192.168.177.128",
      "192.168.177.129",
      "192.168.177.130",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

# Note: the hosts field must contain every master, load-balancer, and VIP address; none may be missing. Add a few spare IPs if you want room to scale later.

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

# This produces server.pem and server-key.pem.
  • kube-controller-manager certificate

# Create the certificate signing request
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing", 
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
  • kube-scheduler certificate

# Create the certificate signing request
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
  • kube-proxy certificate

# Create the certificate signing request
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
  • admin certificate

# Generate the certificate kubectl uses to connect to the cluster:
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

At this point ~/TLS/k8s contains the following files:
[Screenshot: certificate files under /root/TLS/k8s]

Control Plane Component Deployment

  • Preparation (on node os128)
# Deploy Kubernetes 1.27.2
# Download the server binaries
wget  https://dl.k8s.io/v1.27.2/kubernetes-server-linux-amd64.tar.gz

# Unpack the binaries
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager  kubelet   kube-proxy /opt/kubernetes/bin
cp kubectl /usr/bin/
cp kubectl /usr/local/bin/
# Copy the certificates
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
  • Deploy kube-apiserver

  • Create the kube-apiserver configuration file
	# Create the kube-apiserver configuration file
	cat > /opt/kubernetes/cfg/kube-apiserver.conf <<EOF
	KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
	--v=2 \\
	--etcd-servers=https://192.168.177.128:2379,https://192.168.177.129:2379,https://192.168.177.130:2379 \\
	--bind-address=192.168.177.128 \\
	--secure-port=6443 \\
	--advertise-address=192.168.177.128 \\
	--allow-privileged=true \\
	--service-cluster-ip-range=10.0.0.0/24 \\
	--authorization-mode=RBAC,Node \\
	--enable-bootstrap-token-auth=true \\
	--token-auth-file=/opt/kubernetes/cfg/token.csv \\
	--service-node-port-range=30000-32767 \\
	--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
	--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
	--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
	--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
	--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
	--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
	--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
	--etcd-cafile=/opt/etcd/ssl/ca.pem \\
	--etcd-certfile=/opt/etcd/ssl/server.pem \\
	--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
	--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
	--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
	--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
	--requestheader-allowed-names=kubernetes \\
	--requestheader-extra-headers-prefix=X-Remote-Extra- \\
	--requestheader-group-headers=X-Remote-Group \\
	--requestheader-username-headers=X-Remote-User \\
	--enable-aggregator-routing=true \\
	--audit-log-maxage=30 \\
	--audit-log-maxbackup=3 \\
	--audit-log-maxsize=100 \\
	--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
	--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
	--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
	EOF
  • Enable the TLS Bootstrapping mechanism
TLS Bootstrapping: once the apiserver has TLS authentication enabled, the kubelet and kube-proxy on every node must present a valid certificate signed by the cluster CA in order to talk to kube-apiserver. With many nodes, issuing those client certificates by hand is a lot of work and makes scaling the cluster harder. To simplify this, Kubernetes provides TLS bootstrapping to issue client certificates automatically: the kubelet registers with the apiserver as a low-privilege bootstrap user, and the apiserver signs its certificate dynamically. This approach is strongly recommended for nodes; it is currently used mainly for the kubelet, while kube-proxy still uses a certificate we issue ourselves.
  • Create the token file
	cat > /opt/kubernetes/cfg/token.csv << EOF
	c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
	EOF
	Format: token,user,UID,group
	The token can be regenerated and replaced with:
	head -c 16 /dev/urandom | od -An -t x | tr -d ' '
  • Manage kube-apiserver with systemd
# Manage the apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Distribute the following paths to the corresponding locations on the other master hosts
 /opt/kubernetes/bin 
 /opt/kubernetes/ssl 
 /opt/kubernetes/cfg 
 /usr/lib/systemd/system/kube-apiserver.service  

In each host's /opt/kubernetes/cfg/kube-apiserver.conf, change the IPs to that host's own addresses, as sketched below.
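A minimal sketch of that per-host adjustment (the os129 address is the example here):
# on os129, after copying the files from os128
sed -i 's/--bind-address=192.168.177.128/--bind-address=192.168.177.129/' /opt/kubernetes/cfg/kube-apiserver.conf
sed -i 's/--advertise-address=192.168.177.128/--advertise-address=192.168.177.129/' /opt/kubernetes/cfg/kube-apiserver.conf
# sanity check: --etcd-servers must still list all three etcd members
grep -E 'bind-address|advertise-address|etcd-servers' /opt/kubernetes/cfg/kube-apiserver.conf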

  • Start and enable on boot
systemctl daemon-reload
systemctl start kube-apiserver 
systemctl enable kube-apiserver
  • Deploy kube-controller-manager

  • Create the configuration file
# Create the configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS=" \\
--v=2 \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF

•--kubeconfig: kubeconfig used to connect to the apiserver
•--leader-elect: enable leader election when multiple instances run (HA)
•--cluster-signing-cert-file/--cluster-signing-key-file: CA used to sign kubelet certificates automatically; must match the apiserver's CA

Note: --bind-address must be 127.0.0.1.

  • Generate the kube-controller-manager.kubeconfig file
KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"
cd  ~/TLS/k8s
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Manage controller-manager with systemd
# Manage controller-manager with systemd
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Distribute the following files to the other master hosts
/opt/kubernetes/bin/kube-controller-manager
/usr/lib/systemd/system/kube-controller-manager.service 
/opt/kubernetes/cfg/kube-controller-manager.conf  
/opt/kubernetes/cfg/kube-controller-manager.kubeconfig 
  • Start and enable on boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
  • Deploy kube-scheduler

  • Create the configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS=" \\
--v=2 \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF
 --kubeconfig: kubeconfig used to connect to the apiserver
 --leader-elect: enable leader election when multiple instances run (HA)

Note: --bind-address must be 127.0.0.1.

  • Generate kube-scheduler.kubeconfig
cd ~/TLS/k8s
KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Manage kube-scheduler with systemd
# Manage kube-scheduler with systemd
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Distribute the following files to the corresponding locations on the other master hosts
/opt/kubernetes/bin/kube-scheduler
/usr/lib/systemd/system/kube-scheduler.service 
/opt/kubernetes/cfg/kube-scheduler.conf  
/opt/kubernetes/cfg/kube-scheduler.kubeconfig 
  • Start and enable on boot
# Start and enable on boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
  • Check the cluster status

  • Generate the kubeconfig used to administer the cluster

# Generate the kubeconfig used to administer the cluster:
cd ~/TLS/k8s
mkdir /root/.kube
KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://192.168.177.127:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Check the cluster status with kubectl
# Show cluster info
kubectl cluster-info
# Show component status
kubectl get cs

[Screenshot: kubectl get cs output] The coredns entry in the screenshot can be ignored; CoreDNS is deployed later.

  • Authorize the kubelet-bootstrap user to request certificates
# Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

Worker Node Component Deployment

  • Container runtime installation

    • Install Docker (hosts worker131 and worker132)
# Static binaries: https://download.docker.com/linux/static/stable/x86_64/
wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.24.tgz
# Unpack
tar xvf docker-20.10.24.tgz
# Copy the binaries
cp docker/* /usr/bin/
# Create the containerd service unit and start it
cat >/etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
systemctl enable --now containerd.service
# Create the docker service unit
cat > /etc/systemd/system/docker.service <<'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service
[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
EOF
# Create the docker socket unit
cat > /etc/systemd/system/docker.socket <<EOF
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
# Create the docker group
groupadd docker
# Start docker
systemctl enable --now docker.socket  && systemctl enable --now docker.service
# Verify, then configure the daemon
docker info
cat >/etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com"
  ],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
    },
  "data-root": "/var/lib/docker"
}
EOF
systemctl restart docker
  • Install cri-dockerd (hosts worker131 and worker132)
 Kubernetes 1.24 and later no longer ship dockershim, so cri-dockerd is required to keep using Docker as the runtime.
# Download cri-dockerd
wget  https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.6/cri-dockerd-0.3.6.amd64.tgz  
 
# Unpack cri-dockerd
tar -zxvf cri-dockerd-0.3.6.amd64.tgz  
cp cri-dockerd/cri-dockerd  /usr/bin/
chmod +x /usr/bin/cri-dockerd
# Write the service unit
cat >  /usr/lib/systemd/system/cri-docker.service <<'EOF'
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
 
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
 
StartLimitBurst=3
 
StartLimitInterval=60s
 
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
 
TasksMax=infinity
Delegate=yes
KillMode=process
 
[Install]
WantedBy=multi-user.target
EOF
 
# Write the socket unit
cat > /usr/lib/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
 
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
 
[Install]
WantedBy=sockets.target
EOF
 
# Start cri-dockerd
systemctl daemon-reload ; systemctl enable cri-docker --now
  • Install containerd (hosts os128, os129, os130)
wget  https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz
tar  xvf cri-containerd-cni-1.6.6-linux-amd64.tar.gz  -C /
# Kernel modules required by containerd
cat > /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF
# Load the modules
systemctl restart systemd-modules-load.service

mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
sed  -i  's/\(sandbox_image\) =.*/\1 = "registry.aliyuncs.com\/google_containers\/pause:3.9"/g'  /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable --now containerd
systemctl status containerd
# Check that the containerd-related modules are loaded:
lsmod | egrep 'br_netfilter|overlay'
  • Install runc (hosts os128, os129, os130)
    The bundled runc fails with "runc: symbol lookup error: runc: undefined symbol", so replace it:
wget  https://github.com/opencontainers/runc/releases/download/v1.1.11/runc.amd64
mv   runc.amd64  /usr/local/bin/runc 
  • Deploy kubelet

  • Preparation

# Create the working directories on every worker node:
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs,manifests} 
  • Create the configuration file
cat  > /opt/kubernetes/cfg/kubelet.conf <<EOF
KUBELET_OPTS=" \\
--v=2 \\
--hostname-override=$(hostname) \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--runtime-request-timeout=15m  \\
--container-runtime-endpoint=unix:///run/cri-dockerd.sock \\
--cgroup-driver=systemd \\
--node-labels=node.kubernetes.io/node='Linux'"
EOF

--container-runtime-endpoint depends on the runtime in use:
   docker (cri-dockerd): unix:///run/cri-dockerd.sock
   containerd: unix:///run/containerd/containerd.sock
  • Generate the kubelet-config.yml parameter file
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /opt/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
  • Generate the bootstrap.kubeconfig used for the kubelet's initial join
# Generate the kubeconfig used when the kubelet first joins the cluster
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443" 
# Must match the token in token.csv
TOKEN="c47ffb939f5ca36231d9e3121a252940"

# Generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Manage kubelet with systemd
# Manage kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
#Leave this as-is when the CRI is docker (cri-dockerd); change it to containerd.service when the CRI is containerd
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • Start and enable on boot
# Start and enable on boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
  • Approve the kubelet certificate request and join the cluster

# List kubelet certificate requests
[root@os128 system]# kubectl get csr 
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-wgtllX256bvfMUN-ym0_JW4X0kigCvfDDUTysVAmlrQ   14s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending

# Approve the request
kubectl certificate approve node-csr-wgtllX256bvfMUN-ym0_JW4X0kigCvfDDUTysVAmlrQ

# List nodes
kubectl get node
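
When several nodes bootstrap at the same time, approving each CSR by name gets tedious; a sketch that approves every pending request at once:
kubectl get csr -o name | xargs kubectl certificate approve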
  • Install kubelet on the other worker nodes
# Sync the following files from a master node, adjust them for the target host, then start kubelet:
/opt/kubernetes/cfg/kubelet.conf # check hostname-override and container-runtime-endpoint: hostname-override must be unique in the cluster, and container-runtime-endpoint depends on the runtime in use
/usr/lib/systemd/system/kubelet.service # the After= value depends on the runtime used on the host
/opt/kubernetes/cfg/kubelet-config.yml # no changes needed
/opt/kubernetes/cfg/kubelet.kubeconfig # no changes needed
/opt/kubernetes/cfg/bootstrap.kubeconfig # no changes needed
/opt/kubernetes/ssl/ca.pem # no changes needed
/opt/kubernetes/bin/kubelet # no changes needed
Start kubelet and enable it on boot, join the cluster, and approve the certificate request as described above; a sync sketch follows below.
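A minimal sync sketch from os128 to a new worker (192.168.177.131 is the example; it relies on TLS bootstrapping to obtain the node's own kubelet.kubeconfig instead of copying it):
NODE=192.168.177.131
ssh root@${NODE} "mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs,manifests}"
scp /opt/kubernetes/bin/kubelet root@${NODE}:/opt/kubernetes/bin/
scp /opt/kubernetes/cfg/{kubelet.conf,kubelet-config.yml,bootstrap.kubeconfig} root@${NODE}:/opt/kubernetes/cfg/
scp /opt/kubernetes/ssl/ca.pem root@${NODE}:/opt/kubernetes/ssl/
scp /usr/lib/systemd/system/kubelet.service root@${NODE}:/usr/lib/systemd/system/
# on the worker: edit hostname-override and container-runtime-endpoint in kubelet.conf and After= in kubelet.service, then:
ssh root@${NODE} "systemctl daemon-reload && systemctl enable --now kubelet"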
  • Verify that all nodes have joined
    kubectl get node
    [Screenshot: kubectl get node listing all five nodes]
  • Deploy kube-proxy

  • Generate the configuration parameter file
cat > /opt/kubernetes/cfg/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.244.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: $(hostname)
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF
  • Generate the kube-proxy.kubeconfig file
cd  ~/TLS/k8s
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.177.127:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
  • Manage kube-proxy with systemd

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-proxy --config=/opt/kubernetes/cfg/kube-proxy.yaml
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • Start and enable on boot
# Start and enable on boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
  • Install kube-proxy on the other worker nodes
# Sync the following files from a master node
/opt/kubernetes/bin/kube-proxy
/usr/lib/systemd/system/kube-proxy.service 
/opt/kubernetes/cfg/kube-proxy.kubeconfig
/opt/kubernetes/cfg/kube-proxy.yaml # confirm hostnameOverride matches the local host
Then start kube-proxy and enable it on boot.

Calico Network Plugin Deployment

  • Download calico
wget https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
  • Change the default pod CIDR
# Change the pod CIDR in calico.yaml to the network given to --cluster-cidr (10.244.0.0/16).
# Open the file with vim, search for 192, and edit as marked below:
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.1.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"
  
Remove the two leading # characters (and the space after them) and change 192.168.1.0/16 to 10.244.0.0/16; a sed one-liner sketch follows after the example.
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"
  • Deploy calico
    kubectl apply -f calico.yaml
  • Verify calico
    kubectl get pods -n kube-system
    [Screenshot: calico pods Running in kube-system]
  • Authorize the apiserver to access the kubelet
# Needed, for example, by kubectl logs
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml

CoreDNS Deployment

  • Prepare coredns.yml, based on https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base
cat >  coredns.yml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
 name: coredns
 namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
 labels:
   kubernetes.io/bootstrapping: rbac-defaults
 name: system:coredns
rules:
 - apiGroups:
   - ""
   resources:
   - endpoints
   - services
   - pods
   - namespaces
   verbs:
   - list
   - watch
 - apiGroups:
   - discovery.k8s.io
   resources:
   - endpointslices
   verbs:
   - list
   - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
 annotations:
   rbac.authorization.kubernetes.io/autoupdate: "true"
 labels:
   kubernetes.io/bootstrapping: rbac-defaults
 name: system:coredns
roleRef:
 apiGroup: rbac.authorization.k8s.io
 kind: ClusterRole
 name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
 name: coredns
 namespace: kube-system
data:
 Corefile: |
   .:53 {
       errors
       health {
         lameduck 5s
       }
       ready
       kubernetes cluster.local  in-addr.arpa ip6.arpa {
         fallthrough in-addr.arpa ip6.arpa
       }
       prometheus :9153
       forward . /etc/resolv.conf {
         max_concurrent 1000
       }
       cache 30
       loop
       reload
       loadbalance
   }
---
apiVersion: apps/v1
kind: Deployment
metadata:
 name: coredns
 namespace: kube-system
 labels:
   k8s-app: kube-dns
   kubernetes.io/name: "CoreDNS"
spec:
 # replicas: not specified here:
 # 1. Default is 1.
 # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
 strategy:
   type: RollingUpdate
   rollingUpdate:
     maxUnavailable: 1
 selector:
   matchLabels:
     k8s-app: kube-dns
 template:
   metadata:
     labels:
       k8s-app: kube-dns
   spec:
     priorityClassName: system-cluster-critical
     serviceAccountName: coredns
     tolerations:
       - key: "CriticalAddonsOnly"
         operator: "Exists"
     nodeSelector:
       kubernetes.io/os: linux
     affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
     containers:
     - name: coredns
       image: coredns/coredns:1.10.1
       imagePullPolicy: IfNotPresent
       resources:
         limits:
           memory: 170Mi
         requests:
           cpu: 100m
           memory: 70Mi
       args: [ "-conf", "/etc/coredns/Corefile" ]
       volumeMounts:
       - name: config-volume
         mountPath: /etc/coredns
         readOnly: true
       ports:
       - containerPort: 53
         name: dns
         protocol: UDP
       - containerPort: 53
         name: dns-tcp
         protocol: TCP
       - containerPort: 9153
         name: metrics
         protocol: TCP
       securityContext:
         allowPrivilegeEscalation: false
         capabilities:
           add:
           - NET_BIND_SERVICE
           drop:
           - all
         readOnlyRootFilesystem: true
       livenessProbe:
         httpGet:
           path: /health
           port: 8080
           scheme: HTTP
         initialDelaySeconds: 60
         timeoutSeconds: 5
         successThreshold: 1
         failureThreshold: 5
       readinessProbe:
         httpGet:
           path: /ready
           port: 8181
           scheme: HTTP
     dnsPolicy: Default
     volumes:
       - name: config-volume
         configMap:
           name: coredns
           items:
           - key: Corefile
             path: Corefile
---
apiVersion: v1
kind: Service
metadata:
 name: kube-dns
 namespace: kube-system
 annotations:
   prometheus.io/port: "9153"
   prometheus.io/scrape: "true"
 labels:
   k8s-app: kube-dns
   kubernetes.io/cluster-service: "true"
   kubernetes.io/name: "CoreDNS"
spec:
 selector:
   k8s-app: kube-dns
 clusterIP: 10.0.0.2  # must fall inside --service-cluster-ip-range and match the kubelet clusterDNS
 ports:
 - name: dns
   port: 53
   protocol: UDP
 - name: dns-tcp
   port: 53
   protocol: TCP
 - name: metrics
   port: 9153
   protocol: TCP
EOF

  • Deploy coredns
    kubectl apply -f coredns.yml
  • Check the coredns deployment
    kubectl get pod -n kube-system | grep coredns
    In production, tune the CoreDNS resource requests/limits and add an HPA; a sketch follows below.
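A minimal HPA sketch for the coredns Deployment (it assumes metrics-server, deployed later in this guide, is running; the thresholds are only illustrative):
cat > coredns-hpa.yaml << EOF
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: coredns
  namespace: kube-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: coredns
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
EOF
kubectl apply -f coredns-hpa.yaml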

Dashboard Deployment

  • Deploy the dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# Change the Service to NodePort
vim recommended.yaml
----
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
----
kubectl apply -f recommended.yaml
# Check the dashboard services
kubectl get pods -n kubernetes-dashboard
kubectl get pods,svc -n kubernetes-dashboard
  • Create a service account and bind the built-in cluster-admin role
# Create a service account and bind the built-in cluster-admin cluster role:

cat  > dashadmin.yaml  << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
kubectl apply -f dashadmin.yaml
# Create a login token; use the generated token to sign in to the dashboard
kubectl -n kubernetes-dashboard create token admin-user
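Tokens from kubectl create token are short-lived (one hour by default); a sketch of a long-lived token Secret bound to the admin-user account above, whose token can then be read back from the Secret:
cat > dashadmin-token.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: admin-user-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
EOF
kubectl apply -f dashadmin-token.yaml
kubectl -n kubernetes-dashboard get secret admin-user-token -o jsonpath='{.data.token}' | base64 -d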
  • Verify the dashboard login at https://192.168.177.128:30001, using the token generated above or a kubeconfig file
    [Screenshot: Kubernetes dashboard after login]

Managing the Cluster with Rancher

The Kubernetes dashboard can also be replaced by Rancher, whose UI offers friendlier per-project account and permission management and more features.

  • A simple Docker-based Rancher deployment
    For production, deploy Rancher inside a Kubernetes cluster and expose it through an Ingress.
    docker run -d --restart=always --privileged=true -p 443:443 -v /data/rancher:/var/lib/rancher/ --name rancher-server -e CATTLE_SYSTEM_CATALOG=bundled rancher/rancher:stable

  • Import the binary-deployed cluster above by following the step-by-step wizard in the Rancher web UI.

  • After a successful login it looks like this:
    [Screenshots: Rancher cluster import and dashboard pages]

metrics-server Deployment

  • Deploy metrics-server
# Download
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml 
# Skip TLS verification of the kubelet serving certificates
 sed   -i  '/--cert-dir/i  \        - --kubelet-insecure-tls' components.yaml
# Replace the image with a mirror
sed -i  's/\(image:\).*/\1 registry.aliyuncs.com\/google_containers\/metrics-server:v0.6.1/g' components.yaml 
# Deploy
kubectl apply -f components.yaml 
# Verify; if kubectl top returns data, metrics-server is working
kubectl top node
kubectl top pod  -A 

Ingress Controller Deployment

  • Deploy ingress-nginx
# Download
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.0/deploy/static/provider/baremetal/deploy.yaml -O ingress-nginx-deploy.yaml
# Check the image references
 grep "image:" ingress-nginx-deploy.yaml 
#image: registry.k8s.io/ingress-nginx/controller:v1.8.0@sha256:744ae2afd433a395eeb13dc03d3313facba92e96ad71d9feaafc85925493fee3
 #image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
 #image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
# Replace the images with mirrors
sed  -i   '/controller/s/\(image:\).*/\1 registry.cn-hangzhou.aliyuncs.com\/google_containers\/nginx-ingress-controller:v1.8.0/'  ingress-nginx-deploy.yaml 
sed  -i   '/kube-webhook-certgen/s/\(image:\).*/\1 registry.cn-hangzhou.aliyuncs.com\/google_containers\/kube-webhook-certgen:v20230407/'  ingress-nginx-deploy.yaml 
# Deploy ingress-nginx
kubectl apply  -f ingress-nginx-deploy.yaml 

# Check the ingress-nginx resources
 kubectl get all -n ingress-nginx

[Screenshot: kubectl get all -n ingress-nginx output]
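
A quick end-to-end check (a sketch; the web Deployment/Service and the host test.example.com are made up for this test):
kubectl create deployment web --image=nginx:latest
kubectl expose deployment web --port=80
cat > test-ingress.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
  - host: test.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
EOF
kubectl apply -f test-ingress.yaml
# the baremetal manifest exposes the controller via NodePort; find the HTTP port and curl through any node
kubectl get svc -n ingress-nginx ingress-nginx-controller
curl -H "Host: test.example.com" http://192.168.177.131:<nodePort>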

helm, kubens, crictl, and ctr

  • helm
    Helm is a package manager for Kubernetes applications. It lets you define, install, and upgrade applications as predefined packages called "charts"; each chart contains a set of files describing Kubernetes resources such as Deployments, Services, and ConfigMaps.
 # Download
  wget https://get.helm.sh/helm-v3.14.0-linux-amd64.tar.gz
  tar xvf  helm-v3.14.0-linux-amd64.tar.gz
  mv linux-amd64/helm  /usr/local/bin
  chmod +x /usr/local/bin/helm
  # kubectl and helm command completion
  yum install -y bash-completion
  source /usr/share/bash-completion/bash_completion
  source <(kubectl completion bash)
  source <(helm completion bash)
  • kubens
    kubens is a command-line tool for quickly switching Kubernetes namespaces. It is part of the kubectx toolkit, which manages Kubernetes contexts and namespaces.
# Download
wget https://github.com/ahmetb/kubectx/releases/download/v0.9.5/kubens_v0.9.5_linux_x86_64.tar.gz
# Unpack
tar xvf kubens_v0.9.5_linux_x86_64.tar.gz 
mv kubens  /usr/local/bin
chmod +x   /usr/local/bin/kubens
# kubens usage
kubens: list all namespaces in the current context.
kubens <namespace>: switch to the given namespace.
kubens -c: show the current namespace.
kubens -: switch back to the previous namespace.
  • crictl
    crictl is a command-line tool for talking to CRI-compatible container runtimes. Its default configuration file is /etc/crictl.yaml.
# Download
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.29.0/crictl-v1.29.0-linux-amd64.tar.gz
tar xvf crictl-v1.29.0-linux-amd64.tar.gz
mv crictl /usr/local/bin
chmod +x   /usr/local/bin/crictl
# crictl usage
crictl version: show version information
crictl pods: list the pods on this host
crictl images: list images known to the runtime
crictl ps: list running containers
crictl create: create a container
crictl start: start a created container
crictl stop: stop a running container
crictl rm: remove a container
crictl logs: show a container's logs
crictl inspect: show details of a container or image
crictl pull: pull an image from a registry
crictl rmi: remove an image
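Because crictl reads /etc/crictl.yaml, here is a minimal sketch pointing it at the runtime endpoints used in this guide (pick the block matching the host's runtime):
# containerd hosts (os128/os129/os130)
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
# docker + cri-dockerd hosts (worker131/worker132)
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/cri-dockerd.sock
image-endpoint: unix:///run/cri-dockerd.sock
timeout: 10
debug: false
EOF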
  • ctr
    ctr is the command-line client that ships with containerd for managing containers, images, and other resources. Every container in containerd belongs to a namespace; the default namespace is "default".
 # List namespaces (default namespace: default)
  ctr namespaces
 # List the containers/tasks/images under the k8s.io namespace
  ctr -n k8s.io containers list 
  ctr -n k8s.io tasks list 
  ctr -n k8s.io images list

NFS StorageClass Dynamic PV Provisioning

  • Deploy and configure the NFS server
# NFS is served from os128; every other Kubernetes host must have nfs-utils installed
yum install rpcbind nfs-utils -y
systemctl enable rpcbind
systemctl enable nfs
# Configure NFS
mkdir -p  /data/nfs
cat > /etc/exports <<EOF
/data/nfs   192.168.177.0/24(rw,sync,no_subtree_check,no_root_squash)
EOF
systemctl start rpcbind
systemctl start nfs
exportfs -v 
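A quick export check from any other node before wiring NFS into Kubernetes (a sketch):
showmount -e 192.168.177.128
# optional mount test
mount -t nfs 192.168.177.128:/data/nfs /mnt && touch /mnt/testfile && umount /mnt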
  • Deploy the nfs-subdir-external-provisioner
  # Add the nfs provisioner helm repo
 helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
 helm repo update
 # Install online from the repo
 helm install  -n kube-system nfs-subdir-external-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner  --version 4.0.18  \
    --set image.repository=k8s.dockerproxy.com/sig-storage/nfs-subdir-external-provisioner  \
    --set storageClass.defaultClass=true \
    --set replicaCount=2 \
    --set nfs.server=192.168.177.128 \
    --set nfs.path=/data/nfs 
    
 # If the online install times out, download https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/releases/download/nfs-subdir-external-provisioner-4.0.18/nfs-subdir-external-provisioner-4.0.18.tgz from GitHub and install the chart offline
  • Verify the dynamic NFS storage
    • Look up the StorageClass name
      kubectl get storageclasses.storage.k8s.io
    • Prepare a test Pod and PVC manifest
    cat > test-nginx.yaml <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
          - name: www
            mountPath: /usr/share/nginx/html
      volumes:
        - name: www
          persistentVolumeClaim:
            claimName: nginx
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nginx
    spec:
      storageClassName: "nfs-client"  #上面查到的storageclasses 的name
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
    EOF
    kubectl create -f test-nginx.yaml -n default
    
    • Once created successfully it looks like this:
      [Screenshot: Pod Running and PVC Bound]

Loki Log Collection

  • Add the Loki helm chart repo
# Add a mirrored chart repository
helm repo add grafana "https://helm-charts.itboon.top/grafana" --force-update
helm repo update grafana

  • Deploy loki-stack
# Storage uses the nfs-client StorageClass; promtail keeps the chart defaults and collects the stdout logs of the cluster's pods
helm install  loki-stack  -n loki  \
--set loki.persistence.enabled=true  \
--set loki.persistence.storageClassName=nfs-client  \
--set  grafana.enabled=true  \
--set  grafana.persistence.enabled=true  \
--set  grafana.persistence.storageClassName=nfs-client \
--set  grafana.service.type=NodePort \
grafana/loki-stack --version 2.10.1  
  • Create a pod that emits some log output, to verify log collection
cat > test-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-log-default
  labels:
    logging: "true" 
spec:
  containers:
  - name: log-default
    image: busybox
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: defaut ns  $(date)"; i=$((i+1)); sleep 1; done']
EOF
kubectl create  -f test-pod.yaml  -n default
  • Log in to Grafana and view the logs
# Find the NodePort of the grafana service
[root@os128 ~]# kubectl get svc loki-stack-grafana -n loki
loki-stack-grafana      NodePort    10.106.98.86    <none>        80:31182/TCP   45m
# Get the Grafana login credentials
# username
kubectl -n loki get secrets loki-stack-grafana -o jsonpath='{.data.admin-user}' | base64 -d
# password
kubectl -n loki get secrets loki-stack-grafana -o jsonpath='{.data.admin-password}' | base64 -d

Visit http://192.168.177.128:31182 and log in with the Grafana credentials obtained above; as shown below, the log output of the test-log-default pod is visible.
[Screenshot: Grafana Explore showing logs from the test-log-default pod]
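In Grafana Explore with the Loki data source selected, a couple of LogQL queries for the test pod (the label names follow the loki-stack promtail defaults):
{namespace="default", pod="test-log-default"}
{namespace="default", pod="test-log-default"} |= "defaut ns"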

FAQ

  • kubelet fails to start with:
    validate CRI v1 runtime API for endpoint "unix:///run/cri-dockerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService
    Cause and fix: cri-dockerd v0.2.6 has this problem; upgrading to cri-dockerd v0.3.6 resolves it.

  • calico-node on one node fails to start with:
    ERROR][1] cni-installer/<nil> <nil>: Unable to create token for CNI kubeconfig error=Post "https://10.0.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/calico-node/token": dial tcp 10.0.0.1:443: connect: connection refused
    Cause and fix: kube-proxy on that node had not been started; starting the kube-proxy service resolves it.

  • A node's ROLES column shows <none>
    Fix: set a value for the kubernetes.io/role label on the node:

      kubectl label node os128 os129 os130 kubernetes.io/role=all
      kubectl label no worker131 worker132 kubernetes.io/role=worker
