miaoyun + Rancher + K8s: Learning and Practice


Rancher Installation

k8s external ip  # high-availability IP for the worker cluster

Latest deployment guide (English). Air-gap installation link: https://rancher.com/docs/rancher/v2.x/en/installation/other-installation-methods/air-gap/

Rancher 2.4.3 - HA deployment of a highly available k8s cluster  # main reference for this document

https://www.cnblogs.com/xiao987334176/p/12981735.html
References for this article:
https://blog.51cto.com/bilibili/2440304
https://blog.51cto.com/liuzhengwei521/2398244
https://www.cnblogs.com/xzkzzz/p/9995956.html
https://www.cnblogs.com/kelsen/p/10836332.html

Rancher 2.4.4 - HA cluster deployment, offline installation (2020-06-14): https://blog.csdn.net/weixin_42331537/article/details/106745662 (files: rancher-images.txt, rancher-save-images.sh, rancher-load-images.sh)

Rancher Releases Mirror http://mirror.cnrancher.com/

Offline high-availability deployment of rancher-server: https://blog.csdn.net/qq_39919755/article/details/94858022

Before installing, download the images: follow the "Synchronize images to a private registry" part of the official Rancher air-gap installation docs, download the three files, and pull the images in advance on a machine that has Docker.
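A minimal sketch of how those three files are typically used (based on the Rancher air-gap docs; the registry address 172.16.7.199:5000 matches the private registry used later in this document):

# On an internet-connected Docker host: pull every image listed in
# rancher-images.txt and pack them into rancher-images.tar.gz
./rancher-save-images.sh --image-list ./rancher-images.txt

# Copy the tarball and the scripts to a host that can reach the private
# registry, then push the images into it
./rancher-load-images.sh --image-list ./rancher-images.txt --registry 172.16.7.199:5000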

Alibaba Cloud registry mirror configuration

Before downloading, configure the Alibaba Cloud registry mirror first. The content is as follows:

{ "registry-mirrors" : ["https://dekn3ozn.mirror.aliyuncs.com"] }
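A minimal sketch of applying this setting, assuming the standard /etc/docker/daemon.json location:

cat > /etc/docker/daemon.json <<'EOF'
{ "registry-mirrors": ["https://dekn3ozn.mirror.aliyuncs.com"] }
EOF
systemctl daemon-reload
systemctl restart docker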

Login errors against the private registry, and the fix

Error message:

[root@worker-01 rancher]# docker login 172.16.7.199:5000
Username: docker
Password: docker
Error response from daemon: Get https://172.16.7.199:5000/v2/: http: server gave HTTP response to HTTPS client

Fix: the client needs the following configuration:

[root@worker-01 rancher]# cat /etc/docker/daemon.json

{ "insecure-registries":["172.16.7.199:5000"] }

Note: this is not the same thing as the Alibaba Cloud mirror configuration above; you cannot simply change the URL and reuse it.
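A minimal sketch combining both settings in a single daemon.json and verifying the login, assuming the private registry listens on 172.16.7.199:5000:

cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://dekn3ozn.mirror.aliyuncs.com"],
  "insecure-registries": ["172.16.7.199:5000"]
}
EOF
systemctl daemon-reload
systemctl restart docker
docker login 172.16.7.199:5000   # should now succeed over plain HTTP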

Preparing offline images: https://www.bookstack.cn/read/rancher-v2.x/102ad603fddd1ea1.md

----------- The following is organized from the "Rancher 2.4.3 - HA deployment of a highly available k8s cluster" article referenced above -----------

How to use Rancher gracefully inside China: https://mp.weixin.qq.com/s/XMh0-SscBPDYFfPdpmqdkw. If you only want to pull this single image, you can use the Alibaba Cloud mirror: registry.cn-hangzhou.aliyuncs.com/rancher/rke-tools:v0.1.65

I. Overview

For production environments, install Rancher in a high-availability configuration so that users can always reach the Rancher server. When installed on a Kubernetes cluster, Rancher integrates with the cluster's etcd and relies on Kubernetes scheduling to achieve high availability. To ensure this, the Kubernetes cluster deployed in this document is dedicated to running Rancher; once Rancher is up, you can create or import additional clusters to run the actual workloads.

Recommended architecture

  • Rancher's DNS record should resolve to a layer-4 (TCP) load balancer.
  • The load balancer should forward ports TCP/80 and TCP/443 to all 3 nodes of the Kubernetes cluster.
  • The ingress controller redirects HTTP to HTTPS and terminates SSL/TLS on port TCP/443 (this is where the SSL certificate is installed).
  • The ingress controller forwards traffic to port TCP/80 of the pod. The official documentation includes an architecture diagram that illustrates this more intuitively; a minimal load-balancer config sketch follows below.
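A minimal sketch of such a layer-4 load balancer, assuming NGINX with the stream module on the rancher-slb host (172.16.7.200) described in the next section; this config is illustrative and is not part of the original article:

cat > /etc/nginx/nginx.conf <<'EOF'
worker_processes 4;
events { worker_connections 8192; }
stream {
    upstream rancher_http  { server 172.16.7.201:80;  server 172.16.7.202:80;  server 172.16.7.203:80; }
    upstream rancher_https { server 172.16.7.201:443; server 172.16.7.202:443; server 172.16.7.203:443; }
    server { listen 80;  proxy_pass rancher_http; }
    server { listen 443; proxy_pass rancher_https; }
}
EOF
nginx -t && systemctl restart nginx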

II. Preparation

1. Server preparation

  • 1 Linux server with modest specs, used as the layer-4 load balancer
  • 3 Linux servers as Rancher-server-node nodes
  • n Linux servers as Rancher-agent-node nodes (n <= 50). The CPU and memory requirements for an RKE high-availability installation are listed in the table below; choose the node hardware according to your actual situation. Performance was improved in Rancher v2.4.0; for the requirements of Rancher releases before v2.4.0, see the corresponding section of the official docs.

Scale      Clusters         Nodes             CPU   RAM
Small      up to 150        up to 1,500       2     8 GB
Medium     up to 300        up to 3,000       4     16 GB
Large      up to 500        up to 5,000       8     32 GB
X-Large    up to 1,000      up to 10,000      16    64 GB
XX-Large   up to 2,000      up to 20,000      32    128 GB

Contact Rancher if you need to manage more than 2,000 clusters and/or 20,000 nodes.

2. Environment

The servers run on a local ESXi host; the VMs are configured as follows:

Hostname      OS version   Internal IP     Spec
rancher-01    CentOS 7.8   172.16.7.201    2 vCPU / 3 GB
rancher-02    CentOS 7.8   172.16.7.202    2 vCPU / 3 GB
rancher-03    CentOS 7.8   172.16.7.203    2 vCPU / 3 GB
rancher-slb   CentOS 7.8   172.16.7.200    1 vCPU / 1 GB

3. Permanently changing the hostname on CentOS 7

hostnamectl set-hostname xxx

III. Installing Docker

For Docker installation, see: https://www.showdoc.com.cn/557523996513244?page_id=3839601638128520

IV. Installing RKE

Rancher Kubernetes Engine (RKE) is a lightweight Kubernetes installer that supports installing Kubernetes on bare-metal and virtualized servers. RKE solves a common pain point in the Kubernetes community: installation complexity. RKE runs on multiple platforms, including macOS, Linux, and Windows. Here we install RKE on rancher-01:

1. Download the binary

RKE binary download

https://github.com/rancher/rke/releases
Find the latest stable release; at the time of writing this is v1.1.12.
File to download:
Kubernetes Versions
v1.18.12-rancher1-1
rke_linux-amd64      (click this entry to download, or right-click to copy the link; it resolves to the URL below)
https://github.com/rancher/rke/releases/download/v1.1.12/rke_linux-amd64

XX: the build downloaded last time was v1.2.2 <<---

2. Install RKE

chmod +x rke_linux-amd64
mv rke_linux-amd64 /usr/bin/
rke_linux-amd64 --version

Note: this binary installs Kubernetes v1.18.12. XX: the one downloaded last time installed v1.19.3.

V. Installing kubectl

kubectl is the CLI tool for running commands against Kubernetes clusters; much of the maintenance and administration in Rancher 2.x requires it. Here we install kubectl on rancher-01:

1. Download kubectl

https://github.com/kubernetes
https://kubernetes.io/zh/docs/setup/release/notes/
https://dl.k8s.io/v1.18.0/kubernetes-client-linux-amd64.tar.gz
Changelog:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md    # latest stable version

Note: this link can only be downloaded from a machine that can reach Google.

2. Extract the archive and put the binary on the PATH

tar zxvf kubernetes-client-linux-amd64.tar.gz -C /usr/src/
cp /usr/src/kubernetes/client/bin/kubectl /usr/bin/kubectl
chmod +x /usr/bin/kubectl
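A quick sanity check (the reported version depends on which client tarball was actually downloaded):

kubectl version --client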

3. Configure shell completion for kubectl

On CentOS Linux you may need to install the bash-completion package, which is not installed by default (it is available from the standard CentOS yum repositories).

yum install -y bash-completion

You can add kubectl auto-completion to the current shell; to have it loaded automatically, run:

echo 'source <(kubectl completion bash)' >> /root/.bashrc

4. Verify completion

Log out and back in, then test. Example:

# kubectl desc<TAB> no<TAB> node<TAB>

VI. Installing Kubernetes with RKE

Next we use RKE (Rancher Kubernetes Engine) to install a highly available Kubernetes cluster.

1. Establish SSH trust between the node servers

We currently have three servers for the local cluster; first make sure each host can SSH into the other two and perform the required operations.

1.1. Create the rancher user

Note: when installing Kubernetes with RKE you cannot run it as root; it must be a regular user! Run the following on rancher-01, rancher-02, and rancher-03:

useradd rancher
passwd rancher

2. Grant the rancher user Docker permissions

2.1. Run the following on rancher-01, rancher-02, and rancher-03:

Log in with the root account.

# add the rancher user to the docker group
gpasswd -a rancher docker
# refresh group membership
newgrp docker  # or: usermod -aG docker rancher

2.2. Switch to the rancher user and test

su rancher
docker ps

If the output is normal, the change took effect.

2.3. SSH trust

Run the following on rancher-01, rancher-02, and rancher-03:

# su rancher
$ ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys

2.4. Copy the public key

Run on rancher-01. XXX: I think this step should actually be executed on all three hosts; to be verified. Note: run as the rancher user.

$ ssh-copy-id 172.16.7.201
$ ssh-copy-id 172.16.7.202
$ ssh-copy-id 172.16.7.203

2.5. Test passwordless SSH

Run on rancher-01. Note: run as the rancher user.

$ ssh 172.16.7.201
$ ssh 172.16.7.202
$ ssh 172.16.7.203

3. Write the rancher-cluster.yml file

Note that this file does not explicitly set the SSH key file name; by default RKE uses $HOME/.ssh/id_rsa to establish the connections. Run on rancher-01 as the rancher user.

$ vim rancher-cluster.yml

The contents are as follows:

nodes:
  - address: 172.16.7.201
    internal_address: 172.16.7.201
    user: rancher
    role: [controlplane,worker,etcd]
    hostname_override: rancher-01.techzsun.com
  - address: 172.16.7.202
    internal_address: 172.16.7.202
    user: rancher
    role: [controlplane,worker,etcd]
    hostname_override: rancher-02.techzsun.com
  - address: 172.16.7.203
    internal_address: 172.16.7.203
    user: rancher
    role: [controlplane,worker,etcd]
    hostname_override: rancher-03.techzsun.com

services:
  etcd:
    backup_config:
        enabled: true
        interval_hours: 6
        retention: 60

Notes:
  • address: public domain name or IP address
  • user: a user that can run docker commands
  • role: the list of Kubernetes roles assigned to the node
  • internal_address: private domain name or IP address for internal cluster traffic
  • etcd backups are enabled: a snapshot is taken every 6 hours, with a retention setting of 60 (optional extra fields are sketched below)
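A minimal sketch of optional fields that could be added to the same file (ssh_key_path and kubernetes_version are standard RKE cluster.yml options; the values here are illustrative, not from the original):

# Optional additions to rancher-cluster.yml:
ssh_key_path: ~/.ssh/id_rsa                 # set the SSH key explicitly instead of relying on the default
kubernetes_version: v1.18.12-rancher1-1     # pin the Kubernetes version shipped with this RKE release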

4. Run RKE to build the Kubernetes cluster

Run on rancher-01. Note: run as the rancher user.

$ rke_linux-amd64 up --config ./rancher-cluster.yml

The output is omitted here.

On success, two additional files are generated in the current directory: rancher-cluster.rkestate and kube_config_rancher-cluster.yml. File descriptions:

rancher-cluster.yml: the RKE cluster configuration file.
kube_config_rancher-cluster.yml: the kubeconfig file for the cluster; it contains credentials with full access to the cluster.
rancher-cluster.rkestate: the Kubernetes cluster state file; it also contains credentials with full access to the cluster.

5. Test the cluster and check its status

5.1 Set environment variables

Run on rancher-01 as the rancher user.

mkdir ~/.kube
cp kube_config_rancher-cluster.yml ~/.kube/config
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
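Optionally (an addition, not in the original), make the variable persistent for the rancher user so new shells pick it up:

echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc
source ~/.bashrc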

View the nodes:

$ kubectl get nodes
NAME                      STATUS     ROLES                      AGE    VERSION
rancher-01.techzsun.com   Ready      controlplane,etcd,worker   163m   v1.19.3
rancher-02.techzsun.com   NotReady   controlplane,etcd,worker   163m   v1.19.3
rancher-03.techzsun.com   NotReady   controlplane,etcd,worker   163m   v1.19.3

If you need to run kubectl as the root user, switch to root and run the following:

mkdir ~/.kube
cp /home/rancher/kube_config_rancher-cluster.yml ~/.kube/config
export KUBECONFIG=~/.kube/config

5.2 Test the cluster and check its status

[rancher@rancher-01 ~]$ kubectl get nodes
NAME                      STATUS   ROLES                      AGE   VERSION
rancher-01.techzsun.com   Ready    controlplane,etcd,worker   4d    v1.19.3
rancher-02.techzsun.com   Ready    controlplane,etcd,worker   4d    v1.19.3
rancher-03.techzsun.com   Ready    controlplane,etcd,worker   4d    v1.19.3
[rancher@rancher-01 ~]$ kubectl get pods
No resources found in default namespace.
[rancher@rancher-01 ~]$ kubectl get pods -A
NAMESPACE       NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-65dd5949d9-qsc7j      1/1     Running     1          4d
ingress-nginx   nginx-ingress-controller-4r7g7             1/1     Running     1          4d
ingress-nginx   nginx-ingress-controller-8fh9g             1/1     Running     1          4d
ingress-nginx   nginx-ingress-controller-pv7h8             1/1     Running     1          4d
kube-system     calico-kube-controllers-649b7b795b-p5cb2   1/1     Running     1          4d
kube-system     canal-2kl6r                                2/2     Running     2          4d
kube-system     canal-bcwjw                                2/2     Running     2          4d
kube-system     canal-qzp9s                                2/2     Running     2          4d
kube-system     coredns-6f85d5fb88-f4t5b                   1/1     Running     1          4d
kube-system     coredns-6f85d5fb88-j2l5n                   1/1     Running     1          4d
kube-system     coredns-autoscaler-79599b9dc6-7pkxp        1/1     Running     1          4d
kube-system     metrics-server-8449844bf-l5r2s             1/1     Running     1          4d
kube-system     rke-coredns-addon-deploy-job-kwc5l         0/1     Completed   0          4d
kube-system     rke-ingress-controller-deploy-job-nqbjr    0/1     Completed   0          4d
kube-system     rke-metrics-addon-deploy-job-jqd9t         0/1     Completed   0          4d
kube-system     rke-network-plugin-deploy-job-hghvw        0/1     Completed   0          4d
[rancher@rancher-01 ~]$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-65dd5949d9-qsc7j      1/1     Running     1          4d
ingress-nginx   nginx-ingress-controller-4r7g7             1/1     Running     1          4d
ingress-nginx   nginx-ingress-controller-8fh9g             1/1     Running     1          4d
ingress-nginx   nginx-ingress-controller-pv7h8             1/1     Running     1          4d
kube-system     calico-kube-controllers-649b7b795b-p5cb2   1/1     Running     1          4d
kube-system     canal-2kl6r                                2/2     Running     2          4d
kube-system     canal-bcwjw                                2/2     Running     2          4d
kube-system     canal-qzp9s                                2/2     Running     2          4d
kube-system     coredns-6f85d5fb88-f4t5b                   1/1     Running     1          4d
kube-system     coredns-6f85d5fb88-j2l5n                   1/1     Running     1          4d
kube-system     coredns-autoscaler-79599b9dc6-7pkxp        1/1     Running     1          4d
kube-system     metrics-server-8449844bf-l5r2s             1/1     Running     1          4d
kube-system     rke-coredns-addon-deploy-job-kwc5l         0/1     Completed   0          4d
kube-system     rke-ingress-controller-deploy-job-nqbjr    0/1     Completed   0          4d
kube-system     rke-metrics-addon-deploy-job-jqd9t         0/1     Completed   0          4d
kube-system     rke-network-plugin-deploy-job-hghvw        0/1     Completed   0          4d

VII. Installing Rancher

1. Add the Helm chart repository

This step only needs to be run on a host with internet access, in order to obtain the chart tgz files (rancher-2.5.2.tgz here, and cert-manager-v0.12.0.tgz in the next step).

1.1 Add the Helm chart repository

helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

1.2. Fetch the latest Rancher chart; the tgz file is downloaded to the local directory.

helm fetch rancher-stable/rancher

Copy the rancher-2.5.2.tgz file to the rancher user's home directory on the internal host rancher-01.

2. Fetch the latest cert-manager chart on an internet-connected host (when using Rancher's default self-signed certificate)

2.1. On a system that can reach the internet, add the cert-manager repository.

helm repo add jetstack https://charts.jetstack.io
helm repo update

2.2. Fetch the latest cert-manager chart from the Helm chart repository.

helm fetch jetstack/cert-manager --version v0.12.0

2.3. Copy the tgz file to the rancher user's home directory on the internal host rancher-01

Copy the generated cert-manager-v0.12.0.tgz file to the internal host rancher-01:

[rancher@rancher1 ~]$ scp root@10.0.0.20:/root/install/cert-manager-v0.12.0.tgz .

3. Render the chart template with the desired parameters

[rancher@rancher1 ~]$ helm template cert-manager ./cert-manager-v0.12.0.tgz --output-dir . \
  --namespace cert-manager \
  --set image.repository=172.16.7.199:5000/quay.io/jetstack/cert-manager-controller \
  --set webhook.image.repository=172.16.7.199:5000/quay.io/jetstack/cert-manager-webhook \
  --set cainjector.image.repository=172.16.7.199:5000/quay.io/jetstack/cert-manager-cainjector

When the command finishes, you get a cert-manager directory containing the related YAML files:

[rancher@rancher-01 ~]$ tree -L 3 cert-manager
cert-manager
└── templates
    ├── cainjector-deployment.yaml
    ├── cainjector-rbac.yaml
    ├── cainjector-serviceaccount.yaml
    ├── deployment.yaml
    ├── rbac.yaml
    ├── serviceaccount.yaml
    ├── service.yaml
    ├── webhook-deployment.yaml
    ├── webhook-mutating-webhook.yaml
    ├── webhook-rbac.yaml
    ├── webhook-serviceaccount.yaml
    ├── webhook-service.yaml
    └── webhook-validating-webhook.yaml

4. Download the CRD file required by cert-manager.

curl -L -o cert-manager/cert-manager-crd.yaml https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
# the download may fail from inside China; fetch the URL below via a proxy or from a host that can reach GitHub
https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
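If the cluster node cannot reach GitHub at all, one workaround (an assumption mirroring the tgz copy step above; the 10.0.0.20 host and path are examples) is to download the file on the internet-connected machine and copy it over:

# On the internet-connected host:
curl -L -o 00-crds.yaml https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
# On rancher-01, as the rancher user:
scp root@10.0.0.20:/root/install/00-crds.yaml cert-manager/cert-manager-crd.yaml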

5. Render the Rancher template

[rancher@rancher1 ~]$ helm template rancher ./rancher-2.5.2.tgz --output-dir . \
  --namespace cattle-system \
  --set hostname=rancher.com \
  --set certmanager.version=v0.12.0 \
  --set rancherImage=172.16.7.199:5000/rancher/rancher \
  --set systemDefaultRegistry=172.16.7.199:5000 \
  --set useBundledSystemChart=true

The output is as follows:

wrote ./rancher/templates/serviceAccount.yaml
wrote ./rancher/templates/clusterRoleBinding.yaml
wrote ./rancher/templates/service.yaml
wrote ./rancher/templates/deployment.yaml
wrote ./rancher/templates/ingress.yaml
wrote ./rancher/templates/issuer-rancher.yaml

6. Install cert-manager

(Only required when using Rancher's default self-signed certificate.)

6.1. Create a namespace for cert-manager.

[rancher@rancher1 ~]$ kubectl create namespace cert-manager
namespace/cert-manager created

6.2. Create the cert-manager CRDs

kubectl apply -f cert-manager/cert-manager-crd.yaml

Output:

Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created

6.3. Start cert-manager

[rancher@rancher-01 ~]$ kubectl apply -f cert-manager/cert-manager-crd.yaml

Output:

Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
[rancher@rancher-01 ~]$ kubectl apply -R -f ./cert-manager
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io unchanged
deployment.apps/cert-manager-cainjector created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
serviceaccount/cert-manager-cainjector created
deployment.apps/cert-manager created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
service/cert-manager created
serviceaccount/cert-manager created
deployment.apps/cert-manager-webhook created
Warning: admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:webhook-requester created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:auth-delegator created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:webhook-authentication-reader created
service/cert-manager-webhook created
serviceaccount/cert-manager-webhook created
Warning: admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

7. Install Rancher

[rancher@rancher-01 ~]$ kubectl create namespace cattle-system
[rancher@rancher-01 ~]$ kubectl -n cattle-system apply -R -f ./rancher

The output is as follows:

clusterrolebinding.rbac.authorization.k8s.io/rancher created
deployment.apps/rancher created
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/rancher created
service/rancher created
serviceaccount/rancher created
Error from server (InternalError): error when creating "rancher/templates/issuer-rancher.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=30s": context deadline exceeded

Unexpectedly there is an error. Analysis:

[rancher@rancher-01 ~]$ kubectl get pod --all-namespaces
NAMESPACE       NAME                                       READY   STATUS             RESTARTS   AGE
cert-manager    cert-manager-cainjector-78f8678b4-lhk2b    0/1     ImagePullBackOff   0          6m50s

Analysis: inspect the pod details

[rancher@rancher-01 ~]$ kubectl describe -n cert-manager  pod  cert-manager-cainjector-78f8678b4-lhk2b
Look at this part of the output:
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  16m                   default-scheduler  Successfully assigned cert-manager/cert-manager-cainjector-78f8678b4-lhk2b to rancher-01.techzsun.com
  Normal   Pulling    15m (x4 over 16m)     kubelet            Pulling image "172.16.7.199:50000/quay.io/jetstack/cert-manager-cainjector:v0.12.0"
  Warning  Failed     15m (x4 over 16m)     kubelet            Failed to pull image "172.16.7.199:50000/quay.io/jetstack/cert-manager-cainjector:v0.12.0": rpc error: code = Unknown desc = Error response from daemon: Get https://172.16.7.199:50000/v2/: dial tcp 172.16.7.199:50000: connect: no route to host
  Warning  Failed     15m (x4 over 16m)     kubelet            Error: ErrImagePull
  Warning  Failed     6m36s (x44 over 16m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    103s (x66 over 16m)   kubelet            Back-off pulling image "172.16.7.199:50000/quay.io/jetstack/cert-manager-cainjector:v0.12.0"

Analysis: the image below cannot be pulled. Also note that the failing reference points at port 50000, whereas the private registry used everywhere else in this document listens on 5000, so the registry address passed to helm template most likely contained a typo:

172.16.7.199:50000/quay.io/jetstack/cert-manager-cainjector:v0.12.0
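A minimal sketch of one way to recover, assuming the private registry really listens on 172.16.7.199:5000 and that a Docker host with internet access can reach both quay.io and the registry (otherwise move the images with docker save / docker load); the image names follow the helm template parameters used above:

# Pull, retag and push the three cert-manager v0.12.0 images into the private registry
for img in cert-manager-controller cert-manager-webhook cert-manager-cainjector; do
    docker pull quay.io/jetstack/${img}:v0.12.0
    docker tag  quay.io/jetstack/${img}:v0.12.0 172.16.7.199:5000/quay.io/jetstack/${img}:v0.12.0
    docker push 172.16.7.199:5000/quay.io/jetstack/${img}:v0.12.0
done
# Then re-render the cert-manager chart with the correct registry port (5000, not 50000)
# and re-apply it: kubectl apply -R -f ./cert-manager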

Self-signed certificates

https://my.oschina.net/u/4257408/blog/3662544

Installing and configuring Helm

Helm is the package manager of choice for Kubernetes. Helm charts provide a templating syntax on top of Kubernetes YAML manifests; with Helm you can create configurable deployments instead of relying on static files. Helm 2 consisted of two parts, the Helm client (helm) and the server-side component (Tiller); Helm 3, which is downloaded below, no longer needs Tiller.

Helm client download

https://github.com/helm/helm/releases
https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz      # latest stable version at the time of writing
On the release page, download from the section "Download Helm v3.4.1. The common platform binaries are here:".
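A minimal sketch of installing the client from that tarball (standard procedure, not spelled out in the original):

tar -zxvf helm-v3.4.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
chmod +x /usr/local/bin/helm
helm version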

Common Helm commands: https://blog.csdn.net/kjh2007abc/article/details/99618455

Deploying and configuring a highly available k8s cluster with kubeadm: https://www.cnblogs.com/zhaoya2019/p/13032218.html
