Installing a multi-master, highly available Kubernetes cluster with kubeadm
<h1>Environment planning</h1>
<h2>k8s network planning:</h2>
<p>podSubnet (pod network): 10.244.0.0/16
serviceSubnet (service network): 10.96.0.0/16</p>
<h2>Lab environment planning:</h2>
<p>Operating system: CentOS 7.5
Specs: 4 GiB RAM / 4 vCPU / 60 GB disk per node
Network: NAT
Enable virtualization for the VMs: <img src="https://www.showdoc.com.cn/server/api/attachment/visitFile?sign=755b3ae1801eea5ad334dfb9a10406d7&amp;file=file.png" alt="" /></p>
<table>
<thead>
<tr>
<th>Cluster role</th>
<th>IP</th>
<th>Hostname</th>
<th>Installed components</th>
</tr>
</thead>
<tbody>
<tr>
<td>Control node</td>
<td>10.5.146.51</td>
<td>master1</td>
<td>apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico</td>
</tr>
<tr>
<td>Control node</td>
<td>10.5.146.52</td>
<td>master2</td>
<td>apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico</td>
</tr>
<tr>
<td>Worker node</td>
<td>10.5.146.53</td>
<td>node1</td>
<td>kubelet, docker, kube-proxy, calico, coredns</td>
</tr>
<tr>
<td>VIP</td>
<td>10.5.146.54</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
<h1>Environment configuration (all nodes)</h1>
<h2>Disable SELinux</h2>
<p>sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
getenforce #verify the change; once the node has been rebooted it should report Disabled</p>
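<p>The config file change only takes effect at boot; to avoid an immediate reboot you can also switch SELinux to Permissive for the running session (a small convenience step not in the original notes):</p>
<pre><code class="language-shell">setenforce 0     # takes effect immediately, lasts until reboot
getenforce       # now reports Permissive; after a reboot it will show Disabled</code></pre>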
<h2>Set hostnames</h2>
<p>hostnamectl set-hostname [hostname] && bash
Edit /etc/hosts on every node and add the following entries (or append them as sketched below):
10.5.146.51 master1
10.5.146.52 master2
10.5.146.53 node1</p>
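<p>The hosts entries can be appended in one go (a sketch; run on every node):</p>
<pre><code class="language-shell">cat &gt;&gt; /etc/hosts &lt;&lt;EOF
10.5.146.51 master1
10.5.146.52 master2
10.5.146.53 node1
EOF</code></pre>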
<h2>Configure passwordless SSH between the hosts</h2>
<p>ssh-keygen #press Enter through all prompts, leave the passphrase empty
ssh-copy-id [hostname of each of the 3 nodes] #copy the locally generated public key to the remote host</p>
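<p>For example, run this on every node (a sketch; assumes the /etc/hosts entries above are in place and you type the root password when prompted):</p>
<pre><code class="language-shell"># generate a key pair once per node (no passphrase), then push the public key to all three hosts
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in master1 master2 node1; do
  ssh-copy-id root@$host
done</code></pre>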
<h2>Disable swap (for performance)</h2>
<p>Temporarily:
swapoff -a
Permanently: edit /etc/fstab and comment out the swap mount line (see the sketch below)</p>
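<p>Both steps can be scripted, for example (a sketch; double-check /etc/fstab afterwards):</p>
<pre><code class="language-shell">swapoff -a                                          # turn swap off for the running system
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab    # comment out any swap mount line
free -m                                             # the Swap line should now show 0</code></pre>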
<h2>Adjust kernel parameters</h2>
<pre><code class="language-shell">modprobe br_netfilter
echo &quot;modprobe br_netfilter&quot; &gt;&gt; /etc/profile
cat &gt; /etc/sysctl.d/k8s.conf &lt;&lt;EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf</code></pre>
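<p>A quick way to confirm the module is loaded and the settings are active (optional check):</p>
<pre><code class="language-shell">lsmod | grep br_netfilter                                        # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward    # both values should be 1</code></pre>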
<h2>Disable the firewall</h2>
<p>systemctl stop firewalld ; systemctl disable firewalld</p>
<h2>Configure the Alibaba Cloud repo sources</h2>
<p>Install the rz/sz commands
yum install lrzsz -y
Install scp
yum install openssh-clients
Back up the stock repo files
mkdir /root/repo.bak
cd /etc/yum.repos.d/
mv * /root/repo.bak/
Download the Alibaba Cloud repo files: upload CentOS-Base.repo and epel.repo to /etc/yum.repos.d/ on each host
Configure the Alibaba Cloud docker repo
yum install yum-utils -y
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Configure the Alibaba Cloud repo needed for the k8s components
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0</p>
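<p>Before continuing, it is worth confirming the new repos are reachable (optional check; the repo ids assume the files above):</p>
<pre><code class="language-shell">yum clean all
yum makecache fast
yum repolist | egrep 'docker-ce-stable|kubernetes'   # both repos should be listed</code></pre>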
<h2>Configure time synchronization</h2>
<p>Option 1: sync the clock with ntp
yum install ntpdate -y
ntpdate cn.pool.ntp.org
Turn the sync into a cron job:</p>
<pre><code class="language-shell">crontab -e
# run the sync once an hour
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org</code></pre>
<p>Restart the crond service: systemctl restart crond
Option 2: set the time directly, e.g. date -s "2023-11-14 21:30"</p>
<h2>Install base packages</h2>
<p>yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet</p>
<h2>Install the docker service</h2>
<p>yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io -y
systemctl start docker && systemctl enable docker && systemctl status docker</p>
<h2>Configure docker registry mirrors and the cgroup driver</h2>
<p>vim /etc/docker/daemon.json
{
"registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com","https://rncxm540.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
This changes docker's cgroup driver to systemd (the default is cgroupfs); kubelet uses systemd by default, and the two must match.
systemctl daemon-reload && systemctl restart docker
systemctl status docker</p>
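<p>To confirm docker picked up the new driver (optional check):</p>
<pre><code class="language-shell">docker info | grep -i &quot;cgroup driver&quot;   # should print: Cgroup Driver: systemd</code></pre>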
<h1>Install the packages needed to initialize k8s (all nodes)</h1>
<p>yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
systemctl enable kubelet</p>
<h1>Highly available k8s apiserver with keepalived + nginx (both master nodes)</h1>
<h2>Install nginx and keepalived (primary and backup)</h2>
<p>yum install epel-release -y
yum install nginx keepalived -y</p>
<h2>Edit the nginx configuration file</h2>
<pre><code class="language-shell">vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
#四层负载均衡,为两台Master apiserver组件提供负载均衡
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 10.5.146.51:6443 weight=5 max_fails=3 fail_timeout=30s;
server 10.5.146.52:6443 weight=5 max_fails=3 fail_timeout=30s;
}
server {
listen 16443; # 由于nginx与master节点复用,这个监听端口不能是6443,否则会冲突
proxy_pass k8s-apiserver;
}
}
http {
log_format main '$remote_addr - $remote_user [$time_local] &quot;$request&quot; '
'$status $body_bytes_sent &quot;$http_referer&quot; '
'&quot;$http_user_agent&quot; &quot;$http_x_forwarded_for&quot;';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
server {
listen 80 default_server;
server_name _;
location / {
}
}
}</code></pre>
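<p>The original notes install keepalived on both masters but do not show its configuration, so the following is a minimal sketch for master1. The interface name ens33, the router_id, the VRRP password and the nginx health check are assumptions to adapt to your hosts; on master2 use state BACKUP and a lower priority, everything else identical. The VIP 10.5.146.54 comes from the planning table.</p>
<pre><code class="language-shell">vim /etc/keepalived/keepalived.conf
global_defs {
    router_id NGINX_MASTER            # any name unique per node (assumed value)
}
vrrp_script check_nginx {
    script &quot;/usr/sbin/pidof nginx&quot;    # succeeds only while an nginx process exists
    interval 2
    weight -20                        # drop priority when nginx is down so the VIP fails over
}
vrrp_instance VI_1 {
    state MASTER                      # BACKUP on master2
    interface ens33                   # adapt to the actual NIC name
    virtual_router_id 51
    priority 100                      # use e.g. 90 on master2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.5.146.54/24                # the VIP from the planning table
    }
    track_script {
        check_nginx
    }
}</code></pre>
<p>After writing the file on both masters, run systemctl enable nginx keepalived && systemctl restart nginx keepalived on each of them; the VIP 10.5.146.54 should then show up in ip addr on whichever node currently holds the MASTER role.</p>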
<h1>Initialize the k8s cluster with kubeadm</h1>
<pre><code class="language-shell">vim /root/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
controlPlaneEndpoint: 10.5.146.54:16443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 10.5.146.51
  - 10.5.146.52
  - 10.5.146.53
  - 10.5.146.54
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs</code></pre>
<p>Initialize the k8s cluster with kubeadm.
Upload the offline image bundle required for initialization to all 3 nodes and load it manually:
docker load -i k8simage-1-20-6.tar.gz
[root@master1]# kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=SystemVerification</p>
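<p>If the init completes successfully, kubeadm prints follow-up instructions; the usual next step on master1 is to point kubectl at the new cluster (these are the standard commands kubeadm itself suggests):</p>
<pre><code class="language-shell">mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes   # master1 appears, NotReady until the calico network plugin is installed</code></pre>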