Installing Kubernetes 1.31 on Rocky Linux 9.4

Published 2024-09-12


System information:

Role    Hostname    CPU      Memory  Disk    IP
master  k8s-master  2 cores  4 GB    100 GB  192.168.2.45
worker  k8s-node1   2 cores  4 GB    100 GB  192.168.2.44
worker  k8s-node2   2 cores  4 GB    100 GB  192.168.2.43
  1. Forward IPv4 and let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system

Confirm that the br_netfilter and overlay modules are loaded by running:

lsmod | grep br_netfilter
lsmod | grep overlay

Confirm that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward sysctl variables are set to 1 in your sysctl configuration by running:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
  2. Disable the firewall and swap
systemctl stop firewalld
systemctl disable firewalld

swapoff -a

vi /etc/fstab
# Comment out the line containing swap
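
Alternatively, the swap entry can be commented out non-interactively; a minimal sketch using sed (it comments any uncommented line whose filesystem type is swap, so review /etc/fstab afterwards):

sudo sed -ri '/\sswap\s/ s/^([^#])/#\1/' /etc/fstab
grep swap /etc/fstab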
  3. Edit the hosts file
vi /etc/hosts

192.168.2.45 k8s-master
192.168.2.44 k8s-node1
192.168.2.43 k8s-node2
  4. Install Docker
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

sudo systemctl start docker
sudo systemctl enable docker
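
A quick sanity check (a sketch) confirms the daemon is running and shows which cgroup driver it reports; kubeadm configures the kubelet to expect the systemd driver by default, which recent Docker releases on cgroup v2 hosts such as Rocky 9 typically already use:

sudo docker info --format 'version: {{.ServerVersion}}, cgroup driver: {{.CgroupDriver}}'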
  5. Install cri-dockerd

See https://moomail.cn/2024/06/24/openeuler%e5%ae%89%e8%a3%85cri-docker-0-3-14/ for details.
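
In case that post is unavailable, the rough outline is as follows (a sketch only, assuming the cri-dockerd 0.3.14 amd64 release tarball from the Mirantis GitHub releases page; verify asset names and unit files against the actual release):

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.14/cri-dockerd-0.3.14.amd64.tgz
tar xzf cri-dockerd-0.3.14.amd64.tgz
sudo install -m 0755 cri-dockerd/cri-dockerd /usr/bin/cri-dockerd

# The systemd units ship in the source repository under packaging/systemd/
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
sudo install -m 0644 cri-docker.service cri-docker.socket /etc/systemd/system/

sudo systemctl daemon-reload
sudo systemctl enable --now cri-docker.service cri-docker.socket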

  6. Install kubeadm, kubelet, and kubectl
# Set SELinux to permissive mode
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Setting SELinux to permissive mode effectively disables it. This is required to allow containers to access the host filesystem, which is needed, for example, by some container network plugins.

# Add the Kubernetes yum repository.
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

# The exclude parameter in the repo definition ensures that Kubernetes-related packages are not
# upgraded when running yum update, because upgrading Kubernetes must follow a specific procedure.

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
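
Before moving on, a quick check (a small sketch) confirms that the expected 1.31.x tools are installed and the kubelet unit is enabled; the kubelet will keep restarting until kubeadm init or kubeadm join supplies its configuration, which is expected at this point:

kubeadm version -o short
kubectl version --client
systemctl is-enabled kubelet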
  7. Initialize the control plane node (perform the following steps on the master node only)
kubeadm config print init-defaults > kubeadm-config.yaml

Edit kubeadm-config.yaml:

localAPIEndpoint:
  advertiseAddress: 192.168.2.45  ## change to the master node's address
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock   ## change to the socket of the container runtime in use
  imagePullPolicy: IfNotPresent
  name: k8s-master    ## change to the master node's hostname
  taints: null
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/16   ## added: specify the pod network CIDR
  serviceSubnet: 10.96.0.0/12
scheduler: {}
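
Optionally, the control-plane images can be pre-pulled so that initialization itself runs faster (a sketch; kubeadm reads the criSocket from the config file):

kubeadm config images pull --config=kubeadm-config.yaml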

Initialize:

kubeadm init --config=kubeadm-config.yaml

When initialization succeeds, the end of the output explains how to set up kubectl access:

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf
  8. Join the worker nodes to the cluster
kubeadm join 192.168.2.45:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:72036f0c8275bbd49deab1d4c969f69d9e95ea572c5875d2acbbc8b4369e4460 \
    --cri-socket unix:///var/run/cri-dockerd.sock
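
The token and CA cert hash above come from the output of kubeadm init. If they have expired or were lost, a fresh join command can be printed on the master (append the --cri-socket flag when running it on the workers):

kubeadm token create --print-join-command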
  9. Install the Calico network plugin

Install the Tigera Calico operator and custom resource definitions:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/custom-resources.yaml

# Edit custom-resources.yaml

vi custom-resources.yaml

spec:
  # Configures Calico networking.
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 172.16.0.0/16   # change to the podSubnet set at initialization
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

kubectl create -f custom-resources.yaml

Confirm that all pods are running with the following command:

watch kubectl get pods -n calico-system
  10. Check node status
kubectl get nodes

NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   35m   v1.31.1
k8s-node1    Ready    <none>          30m   v1.31.1
k8s-node2    Ready    <none>          30m   v1.31.1
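
The <none> under ROLES for the worker nodes is purely cosmetic. If desired, a role label can be added using the conventional node-role label key (a sketch):

kubectl label node k8s-node1 node-role.kubernetes.io/worker=
kubectl label node k8s-node2 node-role.kubernetes.io/worker=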
  11. Change the kube-proxy mode to ipvs
kubectl edit configmap kube-proxy -n kube-system
  mode: "ipvs"  # change mode to ipvs

Delete the existing kube-proxy pods so that they are recreated and the new configuration takes effect:

kubectl delete pod -n kube-system -l k8s-app=kube-proxy
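
Note that ipvs mode relies on the IPVS kernel modules and the ipvsadm/ipset userspace tools being present on every node; a rough verification sketch (the package and module names below are the usual ones, adjust as needed):

sudo dnf install -y ipvsadm ipset
cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF
sudo modprobe ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh

# After the kube-proxy pods are recreated, the IPVS virtual server table should list the Service VIPs
sudo ipvsadm -Ln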