
[K8s Tutorial Part 2] Deploying a K8s Cluster on CentOS 7

daimafengzi · June 26, 2023

[blockquote2 name='洛维花']Disclaimer: this tutorial is not originally mine. My memory is unreliable and good notes beat a good memory, and I also ran into some extra problems while following it that needed concrete fixes, so I reposted it with minor changes. A link to the original author is at the end of the article.[/tip]

The original author's recorded video walkthrough (embedded in the original post).

Environment

Server IP used in this article: 192.168.56.101

  • OS version: CentOS 7
  • CPU Architecture: x86_64/amd64
  • K8s version: v1.23.17
  • Docker version: 20.10.23

Install dependencies

yum install -y \
    curl \
    wget \
    systemd \
    bash-completion \
    lrzsz

Pre-installation setup

  1. Synchronize the server time
timedatectl set-timezone Asia/Shanghai && timedatectl set-local-rtc 0
systemctl restart rsyslog
systemctl restart crond
  2. Set the hostnames

This makes each server reachable by its hostname.

# 主节点
hostnamectl set-hostname k8s-master
# 从节点
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2

Edit /etc/hosts

cat >/etc/hosts <<EOF
192.168.56.101    k8s-master
192.168.56.102    k8s-node1
192.168.56.103    k8s-node2
EOF
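The heredoc above hard-codes the three lines. As a sketch of an alternative, the same block can be generated from a single name-to-IP map, which keeps the node list in one place if the cluster grows (the map below simply repeats the three addresses used in this article):

```shell
# Sketch: generate the /etc/hosts lines from one name -> IP map
# (bash associative array, available on CentOS 7's bash 4).
declare -A nodes=(
  [k8s-master]=192.168.56.101
  [k8s-node1]=192.168.56.102
  [k8s-node2]=192.168.56.103
)
hosts_block=$(
  for name in "${!nodes[@]}"; do
    printf '%s    %s\n' "${nodes[$name]}" "$name"
  done | sort -t . -k 4 -n   # order by the last octet
)
echo "$hosts_block"
# To apply on a node:  echo "$hosts_block" >> /etc/hosts
```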
  3. Open the required ports

Since this is a test environment, we simply disable the firewall:

systemctl disable firewalld.service && systemctl stop firewalld.service
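If you would rather keep firewalld running (for example on a machine that really is internet-facing), the Kubernetes documentation lists the control-plane ports to open instead. A sketch, guarded so it only calls firewall-cmd where the tool exists:

```shell
# Control-plane ports from the Kubernetes "Ports and Protocols" docs:
# 6443 kube-apiserver, 2379-2380 etcd, 10250 kubelet,
# 10257 kube-controller-manager, 10259 kube-scheduler.
# Worker nodes instead need 10250/tcp plus 30000-32767/tcp (NodePort range).
ports="6443/tcp 2379-2380/tcp 10250/tcp 10257/tcp 10259/tcp"
for p in $ports; do
  echo "opening $p"
  if command -v firewall-cmd >/dev/null 2>&1; then
    firewall-cmd --permanent --add-port="$p" || echo "firewall-cmd failed for $p"
  fi
done
if command -v firewall-cmd >/dev/null 2>&1; then
  firewall-cmd --reload || true
fi
```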

Container runtime

  1. Enable IPv4 forwarding and let iptables see bridged traffic
cat >/etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

cat >/etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF
sysctl --system

# Confirm that the br_netfilter and overlay modules are loaded
lsmod | egrep 'overlay|br_netfilter'
# Confirm that net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables and net.ipv4.ip_forward are set to 1 in your sysctl config
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
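The three reads above can be folded into a single pass/fail check, which is handy when preparing several nodes. A sketch; it prints a warning instead of aborting on machines where the bridge module is not yet loaded:

```shell
# Check that all three kernel parameters are 1; print OK or a warning.
sysctl_ok=1
for key in net.bridge.bridge-nf-call-iptables \
           net.bridge.bridge-nf-call-ip6tables \
           net.ipv4.ip_forward; do
  val=$(sysctl -n "$key" 2>/dev/null || echo "missing")
  echo "$key = $val"
  if [ "$val" != "1" ]; then sysctl_ok=0; fi
done
if [ "$sysctl_ok" = "1" ]; then
  echo "sysctl check: OK"
else
  echo "sysctl check: fix the values above before continuing"
fi
```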
  2. Install a container runtime
    [blockquote2 name='洛维花']Note: Kubernetes v1.24 and later no longer support Docker Engine (dockershim was removed).[/tip]

Install Docker

[blockquote2 name='洛维花']Official documentation: https://docs.docker.com/engine/install/centos/[/tip]

yum install -y yum-utils
# Use the Alibaba Cloud yum mirror
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
mkdir -p /etc/docker
# Configure registry mirrors, logging and the cgroup driver
cat >/etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
  "max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
  "overlay2.override_kernel_check=true"
],
"registry-mirrors":["https://hub-mirror.c.163.com","https://docker.mirrors.ustc.edu.cn","https://registry.docker-cn.com"]
}
EOF
yum makecache fast
yum install -y docker-ce-20.10.23 docker-ce-cli-20.10.23 containerd.io
systemctl daemon-reload
systemctl enable docker && systemctl restart docker
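A typo in daemon.json makes the docker restart above fail with an unhelpful systemd unit error, so it is worth linting the file first. A sketch that uses the stock Python interpreter purely as a JSON validator (CentOS 7 ships python 2.7; the snippet picks python3 where that is what is installed):

```shell
# Validate /etc/docker/daemon.json before (re)starting Docker.
f=/etc/docker/daemon.json
if [ -f "$f" ]; then
  # Pick whichever interpreter exists on this machine.
  py=$(command -v python3 || command -v python)
  "$py" -c 'import json,sys; json.load(open(sys.argv[1])); print("valid JSON: " + sys.argv[1])' "$f"
else
  echo "no $f on this machine"
fi
```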

Install containerd (this is an alternative container runtime; since we already installed Docker above, you do not need to install it as well.)

yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum install -y containerd.io
mkdir -p /etc/containerd
# Generate the default config file
containerd config default > /etc/containerd/config.toml
# Edit the config: use the systemd cgroup driver, set the pause image, and add registry mirrors
sed -i "s#SystemdCgroup = false#SystemdCgroup = true#g" /etc/containerd/config.toml
sed -i "s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
sed -i "/\[plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors\]/a\        [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"docker.io\"]" /etc/containerd/config.toml
sed -i "/\[plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"docker.io\"\]/a\          endpoint = [\"https://hub-mirror.c.163.com\",\"https://docker.mirrors.ustc.edu.cn\",\"https://registry.docker-cn.com\"]" /etc/containerd/config.toml
sed -i "/endpoint = \[\"https:\/\/hub-mirror.c.163.com\",\"https:\/\/docker.mirrors.ustc.edu.cn\",\"https:\/\/registry.docker-cn.com\"]/a\        [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"registry.k8s.io\"]" /etc/containerd/config.toml
sed -i "/\[plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"registry.k8s.io\"\]/a\          endpoint = [\"registry.cn-hangzhou.aliyuncs.com/google_containers\"]" /etc/containerd/config.toml
sed -i "/endpoint = \[\"registry.cn-hangzhou.aliyuncs.com\/google_containers\"]/a\        [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"k8s.gcr.io\"]" /etc/containerd/config.toml
sed -i "/\[plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"k8s.gcr.io\"\]/a\          endpoint = [\"registry.cn-hangzhou.aliyuncs.com/google_containers\"]" /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable containerd && systemctl restart containerd

Install Kubernetes

[blockquote2 name='洛维花']Official documentation:
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/

https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kubelet/
[/tip]

  1. Turn off the swap partition or disable the swap file
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
  2. Set SELinux to permissive mode
setenforce 0 && sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

  3. Install the Kubernetes packages
# Use the Alibaba Cloud Kubernetes yum repository
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubelet, kubeadm and kubectl
yum install -y kubelet-1.23.17 kubeadm-1.23.17 kubectl-1.23.17 --disableexcludes=kubernetes
# Set the cgroup driver to systemd
cat >/etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF
# Point the CRI tools at containerd (only needed when the container runtime is containerd; skip this when using Docker)
crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
crictl config image-endpoint unix:///var/run/containerd/containerd.sock
sed -i '/KUBELET_KUBEADM_ARGS/s/"$/ --container-runtime=remote --container-runtime-endpoint=unix:\/\/\/run\/containerd\/containerd.sock"/' /var/lib/kubelet/kubeadm-flags.env

# Enable kubelet at boot (until "kubeadm init" runs, kubelet restarts in a loop; that is expected)
systemctl enable --now kubelet
# Check kubelet status
systemctl status kubelet
# If it reports errors, inspect the journal
journalctl -xe

Initialize the cluster

mkdir -p /k8sdata/log/
kubeadm init \
  --apiserver-advertise-address=192.168.56.101 \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version=v1.23.17 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 | tee /k8sdata/log/kubeadm-init.log

mkdir -p "$HOME"/.kube
cp -i /etc/kubernetes/admin.conf "$HOME"/.kube/config
chown "$(id -u)":"$(id -g)" "$HOME"/.kube/config
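The last lines kubeadm init prints are the join command for the worker nodes, and because the output above is tee'd into /k8sdata/log/kubeadm-init.log it can be recovered later. A sketch against a made-up log excerpt (the token and hash below are placeholders, not real values):

```shell
# Fake excerpt of a kubeadm-init log, for illustration only;
# the token and sha256 hash are placeholders.
log=/tmp/kubeadm-init-sample.log
cat > "$log" <<'EOF'
Your Kubernetes control-plane has initialized successfully!

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.101:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:0000000000000000000000000000000000000000000000000000000000000000
EOF
# Pull out the (line-continued) join command; on the real master,
# point this at /k8sdata/log/kubeadm-init.log instead.
join_cmd=$(grep -A1 'kubeadm join' "$log")
echo "$join_cmd"
```

If the token has expired (they last 24 hours by default), a fresh join command can be printed with `kubeadm token create --print-join-command`, as shown in the common commands section below.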

Tips:

[blockquote2 name='洛维花']
1. A control-plane node needs at least 2 CPU cores and 2 GB of RAM. If your machine falls short but you still want to proceed, add --ignore-preflight-errors=NumCPU to the kubeadm init command line to skip the check (the CPU preflight check is named NumCPU; the memory check is named Mem).
2. If initialization fails, run kubeadm reset and start over.
[/tip]
Install a network add-on

Install one of the two add-ons below (flannel or calico), not both.

flannel

mkdir -p /k8sdata/network/
wget --no-check-certificate -O /k8sdata/network/kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f /k8sdata/network/kube-flannel.yml
calico

mkdir -p /k8sdata/network/
wget --no-check-certificate -O /k8sdata/network/calico.yml https://docs.projectcalico.org/manifests/calico.yaml
kubectl create -f /k8sdata/network/calico.yml
Shell command completion

! grep -q bash_completion "$HOME/.bashrc" && echo "source /usr/share/bash-completion/bash_completion" >>"$HOME/.bashrc"
! grep -q 'kubectl completion' "$HOME/.bashrc" && echo "source <(kubectl completion bash)" >>"$HOME/.bashrc"
! grep -q 'kubeadm completion' "$HOME/.bashrc" && echo "source <(kubeadm completion bash)" >>"$HOME/.bashrc"
! grep -q 'crictl completion' "$HOME/.bashrc" && echo "source <(crictl completion bash)" >>"$HOME/.bashrc"
source "$HOME/.bashrc"
Common Kubernetes commands

# List nodes
kubectl get nodes -o wide
# Watch node status in real time
watch kubectl get nodes -o wide
# List pods in all namespaces
kubectl get pods --all-namespaces -o wide
# List the images kubeadm needs
kubeadm config images list
# Print the command for joining a node to the cluster
kubeadm token create --print-join-command
# Describe a node
kubectl describe node k8s-master
# Describe a pod
kubectl describe pod kube-flannel-ds-hs8bq --namespace=kube-flannel
Summary

Following this tutorial yields a working Kubernetes cluster. A few things could still be improved; problems encountered during deployment or day-to-day use will be added to this article as they come up.

[blockquote2 name='洛维花']
Original article: https://jonssonyan.com/2022/07/18/CentOS7%E9%83%A8%E7%BD%B2K8s%E9%9B%86%E7%BE%A4/
[/tip]
