Rancher Installation
<p>k8s external ip # high-availability IP for the worker cluster.</p>
<p>Latest English deployment guide; this is the air-gap (offline) installation link:
<a href="https://rancher.com/docs/rancher/v2.x/en/installation/other-installation-methods/air-gap/">https://rancher.com/docs/rancher/v2.x/en/installation/other-installation-methods/air-gap/</a></p>
<p>Rancher 2.4.3 - HA deployment of a highly available k8s cluster # this article is the main reference</p>
<pre><code class="language-bash">https://www.cnblogs.com/xiao987334176/p/12981735.html
Reference links for this article:
https://blog.51cto.com/bilibili/2440304
https://blog.51cto.com/liuzhengwei521/2398244
https://www.cnblogs.com/xzkzzz/p/9995956.html
https://www.cnblogs.com/kelsen/p/10836332.html</code></pre>
<p>Rancher 2.4.4 - HA cluster deployment - offline installation, 2020-06-14
<a href="https://blog.csdn.net/weixin_42331537/article/details/106745662">https://blog.csdn.net/weixin_42331537/article/details/106745662</a>
rancher-images.txt
rancher-save-images.sh
rancher-load-images.sh</p>
<p>Rancher Releases Mirror
<a href="http://mirror.cnrancher.com/">http://mirror.cnrancher.com/</a></p>
<p>Offline high-availability deployment of rancher-server
<a href="https://blog.csdn.net/qq_39919755/article/details/94858022">https://blog.csdn.net/qq_39919755/article/details/94858022</a></p>
<p>Before installing, download the images: see "Sync images to a private registry" in the official Rancher air-gap installation docs, download the three files listed above, and pre-pull the images on a machine that has Docker.</p>
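<p>A sketch of that sync workflow, using the three files named above (flags as in the Rancher air-gap docs; 172.16.7.199:5000 is this document's private registry):</p>
<pre><code class="language-bash"># on the Internet-connected Docker host: pull every image in the list and pack them
./rancher-save-images.sh --image-list ./rancher-images.txt
# copy the resulting rancher-images.tar.gz to the air-gapped side, then load and push:
./rancher-load-images.sh --image-list ./rancher-images.txt --registry 172.16.7.199:5000</code></pre>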
<h3>Alibaba Cloud Registry Mirror (Accelerator) Configuration</h3>
<p>Before downloading, configure the Alibaba Cloud registry mirror (accelerator) as follows:</p>
<pre><code class="language-bash">{ "registry-mirrors" : ["https://dekn3ozn.mirror.aliyuncs.com"] }</code></pre>
<h3>Private Registry Login Error and Fix</h3>
<p>Error message:</p>
<pre><code class="language-bash">[root@worker-01 rancher]# docker login 172.16.7.199:5000
Username: docker
Password: docker
Error response from daemon: Get https://172.16.7.199:5000/v2/: http: server gave HTTP response to HTTPS client</code></pre>
<p>Fix:
Configure the following on the client:
[root@worker-01 rancher]# cat /etc/docker/daemon.json</p>
<pre><code class="language-bash">{ "insecure-registries":["172.16.7.199:5000"] }</code></pre>
<p>Note: this content is different from the Alibaba accelerator configuration above; you cannot simply swap the URL in one for the other.</p>
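<p>If the same host needs both the accelerator and the insecure private registry, the two keys coexist in a single daemon.json. A sketch:</p>
<pre><code class="language-bash"># both settings in one /etc/docker/daemon.json, then restart Docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://dekn3ozn.mirror.aliyuncs.com"],
  "insecure-registries": ["172.16.7.199:5000"]
}
EOF
sudo systemctl restart docker</code></pre>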
<p>Preparing the offline images:
<a href="https://www.bookstack.cn/read/rancher-v2.x/102ad603fddd1ea1.md">https://www.bookstack.cn/read/rancher-v2.x/102ad603fddd1ea1.md</a></p>
<p>----------- The following is organized from the "Rancher 2.4.3 - HA deployment" article above -----------</p>
<p>How to use Rancher gracefully inside China:
<a href="https://mp.weixin.qq.com/s/XMh0-SscBPDYFfPdpmqdkw">https://mp.weixin.qq.com/s/XMh0-SscBPDYFfPdpmqdkw</a>
If you want to pull just this image, you can use Alibaba Cloud's registry.cn-hangzhou.aliyuncs.com/rancher/rke-tools:v0.1.65</p>
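<p>For example, pulling via the Alibaba Cloud mirror and retagging it back to the upstream name (a sketch):</p>
<pre><code class="language-bash"># pull from the China-side mirror, then retag to the name the charts expect
docker pull registry.cn-hangzhou.aliyuncs.com/rancher/rke-tools:v0.1.65
docker tag registry.cn-hangzhou.aliyuncs.com/rancher/rke-tools:v0.1.65 rancher/rke-tools:v0.1.65</code></pre>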
<h1>1. Overview</h1>
<p>For production, install Rancher in a high-availability configuration so that users can always reach Rancher Server. When installed on a Kubernetes cluster, Rancher integrates with the cluster's etcd and relies on Kubernetes scheduling for high availability.
To ensure high availability, the Kubernetes cluster deployed in this article is dedicated to running Rancher; once Rancher is up, you can create or import additional clusters to run your actual workloads.</p>
<h2>Recommended Architecture</h2>
<ul>
<li>Rancher's DNS should resolve to a layer-4 (TCP) load balancer.</li>
<li>The load balancer should forward ports TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster.</li>
<li>The ingress controller redirects HTTP to HTTPS and terminates SSL/TLS on port TCP/443 (this is where the SSL certificate is deployed).</li>
<li>The ingress controller forwards traffic to port TCP/80 of the pods.</li>
</ul>
<p>Below is a diagram taken from the official docs that shows this more intuitively.
<img src="https://www.showdoc.com.cn/server/api/attachment/visitfile/sign/dd5c8306b41a26d5fe93f0c78bdaecbf?showdoc=.jpg" alt="" /></p>
<h1>2. Preparation</h1>
<h2>1. Servers</h2>
<ul>
<li>1 Linux server, modest specs, used as the layer-4 load balancer</li>
<li>3 Linux servers, for the Rancher-server-node role</li>
<li>n Linux servers, for the Rancher-agent-node role (n<=50)</li>
</ul>
<p>CPU and memory requirements for an RKE high-availability installation: pick node hardware from the table below according to your actual needs.
Performance improved in Rancher v2.4.0; for the requirements of Rancher versions before v2.4.0, refer to that section of the docs.</p>
<table>
<thead>
<tr>
<th>Size</th>
<th>Clusters</th>
<th>Nodes</th>
<th>CPU</th>
<th>Memory</th>
</tr>
</thead>
<tbody>
<tr>
<td>Small</td>
<td>up to 150</td>
<td>up to 1500</td>
<td>2</td>
<td>8 GB</td>
</tr>
<tr>
<td>Medium</td>
<td>up to 300</td>
<td>up to 3000</td>
<td>4</td>
<td>16 GB</td>
</tr>
<tr>
<td>Large</td>
<td>up to 500</td>
<td>up to 5000</td>
<td>8</td>
<td>32 GB</td>
</tr>
<tr>
<td>X-Large</td>
<td>up to 1000</td>
<td>up to 10000</td>
<td>16</td>
<td>64 GB</td>
</tr>
<tr>
<td>XX-Large</td>
<td>up to 2000</td>
<td>up to 20000</td>
<td>32</td>
<td>128 GB</td>
</tr>
</tbody>
</table>
<p>Contact Rancher if you plan to manage 2000+ clusters and/or 20000+ nodes.</p>
<h2>2. Environment</h2>
<p>The servers are VMs on a local ESXi host, configured as follows:</p>
<table>
<thead>
<tr>
<th>Hostname</th>
<th>OS Version</th>
<th>Internal IP</th>
<th>Specs</th>
</tr>
</thead>
<tbody>
<tr>
<td>rancher-01</td>
<td>CentOS 7.8</td>
<td>172.16.7.201</td>
<td>2 cores / 3 GB</td>
</tr>
<tr>
<td>rancher-02</td>
<td>CentOS 7.8</td>
<td>172.16.7.202</td>
<td>2 cores / 3 GB</td>
</tr>
<tr>
<td>rancher-03</td>
<td>CentOS 7.8</td>
<td>172.16.7.203</td>
<td>2 cores / 3 GB</td>
</tr>
<tr>
<td>rancher-slb</td>
<td>CentOS 7.8</td>
<td>172.16.7.200</td>
<td>1 core / 1 GB</td>
</tr>
</tbody>
</table>
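<p>The rancher-slb host is the layer-4 load balancer from the architecture section. A minimal sketch using the nginx stream module (nginx is an assumption here; any TCP load balancer works), run on rancher-slb:</p>
<pre><code class="language-bash"># /etc/nginx/nginx.conf on rancher-slb (172.16.7.200):
# forwards TCP/80 and TCP/443 to all 3 Rancher server nodes
cat > /etc/nginx/nginx.conf <<'EOF'
worker_processes 4;
events { worker_connections 8192; }
stream {
    upstream rancher_http {
        server 172.16.7.201:80;
        server 172.16.7.202:80;
        server 172.16.7.203:80;
    }
    server { listen 80; proxy_pass rancher_http; }
    upstream rancher_https {
        server 172.16.7.201:443;
        server 172.16.7.202:443;
        server 172.16.7.203:443;
    }
    server { listen 443; proxy_pass rancher_https; }
}
EOF
nginx -s reload</code></pre>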
<h2>3. Permanently Set the Hostname on CentOS 7</h2>
<pre><code class="language-bash">hostnamectl set-hostname xxx</code></pre>
<h1>3. Installing Docker</h1>
<p>For Docker installation, see:
<a href="https://www.showdoc.com.cn/557523996513244?page_id=3839601638128520">https://www.showdoc.com.cn/557523996513244?page_id=3839601638128520</a></p>
<h1>4. Installing RKE</h1>
<p>Rancher Kubernetes Engine (RKE) is a lightweight Kubernetes installer that supports installing Kubernetes on bare-metal and virtualized servers. RKE addresses a common pain point in the Kubernetes community: installation complexity. RKE runs on multiple platforms, including macOS, Linux, and Windows.
Here we install rke on <strong>rancher-01</strong>:</p>
<h2>1. Download the Binary</h2>
<p>RKE binary download:</p>
<pre><code class="language-bash">https://github.com/rancher/rke/releases
Find the latest stable release; the current stable version is v1.1.12
Files to download:
Kubernetes Versions
v1.18.12-rancher1-1
rke_linux-amd64  # click this entry to download, or right-click to copy the link:
https://github.com/rancher/rke/releases/download/v1.1.12/rke_linux-amd64</code></pre>
<p>XX: last time, v1.2.2 was downloaded <<---</p>
<h2>2. Install rke</h2>
<pre><code class="language-bash">chmod +x rke_linux-amd64
mv rke_linux-amd64 /usr/bin/
rke_linux-amd64 --version</code></pre>
<p>Note: this rke release installs Kubernetes v1.18.12.
XX: last time, v1.19.3 was installed.</p>
<h1>5. Installing kubectl</h1>
<p>kubectl is a CLI tool for running commands against Kubernetes clusters. Much of the maintenance and administration in Rancher 2.x requires it.
Here we install kubectl on <strong> rancher-01 </strong>:</p>
<h2>1. Download kubectl</h2>
<pre><code class="language-bash">https://github.com/kubernetes
https://kubernetes.io/zh/docs/setup/release/notes/
https://dl.k8s.io/v1.18.0/kubernetes-client-linux-amd64.tar.gz
Changelog:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md # latest stable version</code></pre>
<p>Note: this link can only be downloaded from a machine that can reach Google.</p>
<h2>2. Extract and Place It on the PATH</h2>
<pre><code class="language-bash">tar zxvf kubernetes-client-linux-amd64.tar.gz -C /usr/src/
cp /usr/src/kubernetes/client/bin/kubectl /usr/bin/kubectl
chmod +x /usr/bin/kubectl</code></pre>
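<p>A quick check that the binary is on the PATH and executable:</p>
<pre><code class="language-bash"># prints the client version only; no cluster connection needed yet
kubectl version --client</code></pre>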
<h2>3. Configure kubectl Shell Completion</h2>
<p>On CentOS Linux you may need to install the bash-completion package, which is not installed by default.
The bash-completion package ships in the stock CentOS yum repositories:</p>
<pre><code class="language-bash">yum install -y bash-completion</code></pre>
<p>To add kubectl completion to the current shell and have it loaded automatically in every new shell, run:</p>
<pre><code class="language-bash">echo 'source <(kubectl completion bash)' >> /root/.bashrc</code></pre>
<h2>4. Verify Completion</h2>
<p>Log out and back in, then try completing a command. Example:</p>
<pre><code class="language-bash"># kubectl desc<TAB> no<TAB> node<TAB></code></pre>
<h1>6. Installing Kubernetes with RKE</h1>
<p>Below we use RKE (Rancher Kubernetes Engine) to install a highly available Kubernetes cluster.</p>
<h2>1. Establish SSH Trust Between the Node Servers</h2>
<p>We currently have three servers for the local cluster; first make sure our host can reach the other two hosts over SSH and run commands there.</p>
<h3>1.1. Create the rancher User</h3>
<p><strong>Note: when installing Kubernetes with rke, you cannot run as root. It must be a regular user!</strong>
Run the following on <strong> rancher-01, rancher-02, rancher-03 </strong>:</p>
<pre><code class="language-bash">useradd rancher
passwd rancher</code></pre>
<h2>2. Grant the rancher User Docker Permissions</h2>
<h3>2.1. Run the following on <strong> rancher-01, rancher-02, rancher-03 </strong>:</h3>
<p>Log in as root.</p>
<pre><code class="language-bash">#将登陆用户develop加入到docker用户组中
gpasswd -a rancher docker
#更新用户组
newgrp docker #或 usermod -aG docker rancher</code></pre>
<h3>2.2. Switch to the rancher User and Test</h3>
<pre><code class="language-bash">su rancher
docker ps</code></pre>
<p>If this produces normal output, it worked.</p>
<h3>2.3. SSH Trust</h3>
<p>Run the following on <strong> rancher-01, rancher-02, rancher-03 </strong>:</p>
<pre><code class="language-bash"># su rancher
$ ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys</code></pre>
<h3>2.4. Copy the Public Key</h3>
<p><strong>Run on rancher-01.</strong> XXX: I think this step should be run on all three hosts; to be verified.
<strong>Note: run as the rancher user.</strong></p>
<pre><code class="language-bash">$ ssh-copy-id 172.16.7.201
$ ssh-copy-id 172.16.7.202
$ ssh-copy-id 172.16.7.203</code></pre>
<h3>2.5. Test Passwordless SSH</h3>
<p>Run on rancher-01.
Note: run as the rancher user.</p>
<pre><code class="language-bash">$ ssh 172.16.7.201
$ ssh 172.16.7.202
$ ssh 172.16.7.203</code></pre>
<h2>3. Write the rancher-cluster.yml File</h2>
<p>Note that this file does not explicitly configure an RSA key file name; by default $HOME/.ssh/id_rsa is used to establish the connections. The contents are shown below.
<strong>Run on rancher-01.</strong>
<strong>Note: run as the rancher user.</strong></p>
<pre><code class="language-bash">$ vim rancher-cluster.yml</code></pre>
<p>Contents:</p>
<pre><code class="language-bash">nodes:
- address: 172.16.7.201
internal_address: 172.16.7.201
user: rancher
role: [controlplane,worker,etcd]
hostname_override: rancher-01.techzsun.com
- address: 172.16.7.202
internal_address: 172.16.7.202
user: rancher
role: [controlplane,worker,etcd]
hostname_override: rancher-02.techzsun.com
- address: 172.16.7.203
internal_address: 172.16.7.203
user: rancher
role: [controlplane,worker,etcd]
hostname_override: rancher-03.techzsun.com
services:
etcd:
backup_config:
enabled: true
interval_hours: 6
retention: 60</code></pre>
<p>Notes:
address – public DNS name or IP address
user – a user that can run docker commands
role – list of Kubernetes roles assigned to the node
internal_address – private DNS name or IP address for intra-cluster communication
etcd backups are enabled: a snapshot every 6 hours, with 60 snapshots retained</p>
<h2>4. Run RKE to Build the Kubernetes Cluster</h2>
<p><strong>Run on rancher-01.
Note: run as the rancher user.</strong></p>
<pre><code class="language-bash">$ rke_linux-amd64 up --config ./rancher-cluster.yml</code></pre>
<p>On success, the log ends with a line like "Finished building Kubernetes cluster successfully".</p>
<p>A successful run generates two files in the current directory: rancher-cluster.rkestate and kube_config_rancher-cluster.yml.
File descriptions:</p>
<pre><code class="language-bash">rancher-cluster.yml:RKE集群配置文件。
kube_config_rancher-cluster.yml:群集的Kubeconfig文件,此文件包含完全访问群集的凭据。
rancher-cluster.rkestate:Kubernetes群集状态文件,此文件包含完全访问群集的凭据。</code></pre>
<h2>5. Test the Cluster and Check Its Status</h2>
<h3>5.1 Set Environment Variables</h3>
<p>Run on <strong> rancher-01 </strong>.
Note: run as the rancher user.</p>
<pre><code class="language-bash">mkdir ~/.kube
cp kube_config_rancher-cluster.yml ~/.kube/config
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml</code></pre>
<p>Check the nodes (nodes may briefly report NotReady while the network plugin pods start, as seen below):</p>
<pre><code class="language-bash">$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
rancher-01.techzsun.com Ready controlplane,etcd,worker 163m v1.19.3
rancher-02.techzsun.com NotReady controlplane,etcd,worker 163m v1.19.3
rancher-03.techzsun.com NotReady controlplane,etcd,worker 163m v1.19.3</code></pre>
<p>If you need to run kubectl as root, switch to the root user and run:</p>
<pre><code class="language-bash">mkdir ~/.kube
cp /home/rancher/kube_config_rancher-cluster.yml ~/.kube/config
export KUBECONFIG=~/.kube/config</code></pre>
<h3>5.2 Test the Cluster and Check Its Status</h3>
<pre><code class="language-bash">[rancher@rancher-01 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
rancher-01.techzsun.com Ready controlplane,etcd,worker 4d v1.19.3
rancher-02.techzsun.com Ready controlplane,etcd,worker 4d v1.19.3
rancher-03.techzsun.com Ready controlplane,etcd,worker 4d v1.19.3</code></pre>
<pre><code class="language-bash">[rancher@rancher-01 ~]$ kubectl get pods
No resources found in default namespace.</code></pre>
<pre><code class="language-bash">[rancher@rancher-01 ~]$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx default-http-backend-65dd5949d9-qsc7j 1/1 Running 1 4d
ingress-nginx nginx-ingress-controller-4r7g7 1/1 Running 1 4d
ingress-nginx nginx-ingress-controller-8fh9g 1/1 Running 1 4d
ingress-nginx nginx-ingress-controller-pv7h8 1/1 Running 1 4d
kube-system calico-kube-controllers-649b7b795b-p5cb2 1/1 Running 1 4d
kube-system canal-2kl6r 2/2 Running 2 4d
kube-system canal-bcwjw 2/2 Running 2 4d
kube-system canal-qzp9s 2/2 Running 2 4d
kube-system coredns-6f85d5fb88-f4t5b 1/1 Running 1 4d
kube-system coredns-6f85d5fb88-j2l5n 1/1 Running 1 4d
kube-system coredns-autoscaler-79599b9dc6-7pkxp 1/1 Running 1 4d
kube-system metrics-server-8449844bf-l5r2s 1/1 Running 1 4d
kube-system rke-coredns-addon-deploy-job-kwc5l 0/1 Completed 0 4d
kube-system rke-ingress-controller-deploy-job-nqbjr 0/1 Completed 0 4d
kube-system rke-metrics-addon-deploy-job-jqd9t 0/1 Completed 0 4d
kube-system rke-network-plugin-deploy-job-hghvw 0/1 Completed 0 4d</code></pre>
<pre><code class="language-bash">[rancher@rancher-01 ~]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx default-http-backend-65dd5949d9-qsc7j 1/1 Running 1 4d
ingress-nginx nginx-ingress-controller-4r7g7 1/1 Running 1 4d
ingress-nginx nginx-ingress-controller-8fh9g 1/1 Running 1 4d
ingress-nginx nginx-ingress-controller-pv7h8 1/1 Running 1 4d
kube-system calico-kube-controllers-649b7b795b-p5cb2 1/1 Running 1 4d
kube-system canal-2kl6r 2/2 Running 2 4d
kube-system canal-bcwjw 2/2 Running 2 4d
kube-system canal-qzp9s 2/2 Running 2 4d
kube-system coredns-6f85d5fb88-f4t5b 1/1 Running 1 4d
kube-system coredns-6f85d5fb88-j2l5n 1/1 Running 1 4d
kube-system coredns-autoscaler-79599b9dc6-7pkxp 1/1 Running 1 4d
kube-system metrics-server-8449844bf-l5r2s 1/1 Running 1 4d
kube-system rke-coredns-addon-deploy-job-kwc5l 0/1 Completed 0 4d
kube-system rke-ingress-controller-deploy-job-nqbjr 0/1 Completed 0 4d
kube-system rke-metrics-addon-deploy-job-jqd9t 0/1 Completed 0 4d
kube-system rke-network-plugin-deploy-job-hghvw 0/1 Completed 0 4d</code></pre>
<h1>7. Installing Rancher</h1>
<h2>1. Add the Helm Chart Repository</h2>
<p>Run this step on a host with Internet access; the goal is to obtain the chart tgz file (rancher-2.5.2.tgz).</p>
<h3>1.1 Add the Helm Chart repository</h3>
<pre><code class="language-bash">helm repo add rancher-stable https://releases.rancher.com/server-charts/stable</code></pre>
<h3>1.2. Fetch the latest Rancher chart; the tgz file is downloaded locally.</h3>
<pre><code class="language-bash">helm fetch rancher-stable/rancher</code></pre>
<p>Copy the rancher-2.5.2.tgz file to the rancher user's home directory on the air-gapped host rancher-01.</p>
<h2>2. Fetch the Latest cert-manager Chart from an Internet-Connected Host (for Rancher's Default Self-Signed Certificate)</h2>
<h3>2.1. On a system with Internet access, add the cert-manager repository.</h3>
<pre><code class="language-bash">helm repo add jetstack https://charts.jetstack.io
helm repo update</code></pre>
<h3>2.2. Fetch the latest cert-manager chart from the <a href="https://hub.helm.sh/charts/jetstack/cert-manager">Helm chart repository</a>.</h3>
<pre><code class="language-bash">helm fetch jetstack/cert-manager --version v0.12.0</code></pre>
<h3>2.3. Copy the tgz File to the rancher User's Home Directory on the Air-Gapped Host</h3>
<p>Copy the generated cert-manager-v0.12.0.tgz file to the air-gapped host rancher-01:</p>
<pre><code class="language-bash">[rancher@rancher1 ~]$ scp root@10.0.0.20:/root/install/cert-manager-v0.12.0.tgz .</code></pre>
<h2>3. Render the Chart Template with the Desired Parameters</h2>
<pre><code class="language-bash">[rancher@rancher1 ~]$ helm template cert-manager ./cert-manager-v0.12.0.tgz --output-dir . \
--namespace cert-manager \
--set image.repository=172.16.7.199:5000/quay.io/jetstack/cert-manager-controller \
--set webhook.image.repository=172.16.7.199:5000/quay.io/jetstack/cert-manager-webhook \
--set cainjector.image.repository=172.16.7.199:5000/quay.io/jetstack/cert-manager-cainjector</code></pre>
<p>When this finishes you get a cert-manager directory containing the rendered YAML files:</p>
<pre><code class="language-bash">[rancher@rancher-01 ~]$ tree -L 3 cert-manager
cert-manager
└── templates
├── cainjector-deployment.yaml
├── cainjector-rbac.yaml
├── cainjector-serviceaccount.yaml
├── deployment.yaml
├── rbac.yaml
├── serviceaccount.yaml
├── service.yaml
├── webhook-deployment.yaml
├── webhook-mutating-webhook.yaml
├── webhook-rbac.yaml
├── webhook-serviceaccount.yaml
├── webhook-service.yaml
└── webhook-validating-webhook.yaml</code></pre>
<h2>4. Download the CRD File Required by cert-manager</h2>
<pre><code class="language-bsh">curl -L -o cert-manager/cert-manager-crd.yaml https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
# 可能会下载失败,FQ下载
https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml</code></pre>
<h2>5. Render the Rancher Template</h2>
<pre><code class="language-bash">[rancher@rancher1 ~]$ helm template rancher ./rancher-2.5.2.tgz --output-dir . \
--namespace cattle-system \
--set hostname=rancher.com \
--set certmanager.version=v0.12.0 \
--set rancherImage=172.16.7.199:5000/rancher/rancher \
--set systemDefaultRegistry=172.16.7.199:5000 \
--set useBundledSystemChart=true</code></pre>
<p>The output is:</p>
<pre><code class="language-bash">wrote ./rancher/templates/serviceAccount.yaml
wrote ./rancher/templates/clusterRoleBinding.yaml
wrote ./rancher/templates/service.yaml
wrote ./rancher/templates/deployment.yaml
wrote ./rancher/templates/ingress.yaml
wrote ./rancher/templates/issuer-rancher.yaml</code></pre>
<h2>6. Install cert-manager</h2>
<p>(Only required when using Rancher's default self-signed certificate.)</p>
<h3>6.1. Create the namespace for cert-manager.</h3>
<pre><code class="language-bash">[rancher@rancher1 ~]$ kubectl create namespace cert-manager
namespace/cert-manager created</code></pre>
<h3>6.2. Create the cert-manager CRDs</h3>
<pre><code class="language-bash">kubectl apply -f cert-manager/cert-manager-crd.yaml</code></pre>
<p>Output:</p>
<pre><code class="language-bash">Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created</code></pre>
<h3>6.3. Launch cert-manager</h3>
<pre><code class="language-bash">[rancher@rancher-01 ~]$ kubectl apply -f cert-manager/cert-manager-crd.yaml</code></pre>
<p>Output:</p>
<pre><code class="language-bash">Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
[rancher@rancher-01 ~]$ kubectl apply -R -f ./cert-manager
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io unchanged
deployment.apps/cert-manager-cainjector created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
serviceaccount/cert-manager-cainjector created
deployment.apps/cert-manager created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
service/cert-manager created
serviceaccount/cert-manager created
deployment.apps/cert-manager-webhook created
Warning: admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:webhook-requester created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:auth-delegator created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:webhook-authentication-reader created
service/cert-manager-webhook created
serviceaccount/cert-manager-webhook created
Warning: admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created</code></pre>
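<p>Before moving on, it is worth confirming that the cert-manager pods actually come up; the error hit in step 7 below traces back to a pod stuck in ImagePullBackOff. A quick check (the deployment name cert-manager matches the rendered templates above):</p>
<pre><code class="language-bash"># all cert-manager pods should eventually show READY 1/1
kubectl get pods -n cert-manager
kubectl rollout status -n cert-manager deploy/cert-manager</code></pre>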
<h2>7. Install Rancher</h2>
<pre><code class="language-bash">[rancher@rancher-01 ~]$ kubectl create namespace cattle-system
[rancher@rancher-01 ~]$ kubectl -n cattle-system apply -R -f ./rancher</code></pre>
<p>Output:</p>
<pre><code class="language-bash">clusterrolebinding.rbac.authorization.k8s.io/rancher created
deployment.apps/rancher created
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/rancher created
service/rancher created
serviceaccount/rancher created
Error from server (InternalError): error when creating "rancher/templates/issuer-rancher.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=30s": context deadline exceeded</code></pre>
<p>Unexpectedly there is an error. Analysis:</p>
<pre><code class="language-bash">[rancher@rancher-01 ~]$ kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
cert-manager cert-manager-cainjector-78f8678b4-lhk2b 0/1 ImagePullBackOff 0 6m50s</code></pre>
<p>Analysis: inspect the pod details</p>
<pre><code class="language-bash">[rancher@rancher-01 ~]$ kubectl describe -n cert-manager pod cert-manager-cainjector-78f8678b4-lhk2b
Look at this part:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16m default-scheduler Successfully assigned cert-manager/cert-manager-cainjector-78f8678b4-lhk2b to rancher-01.techzsun.com
Normal Pulling 15m (x4 over 16m) kubelet Pulling image "172.16.7.199:50000/quay.io/jetstack/cert-manager-cainjector:v0.12.0"
Warning Failed 15m (x4 over 16m) kubelet Failed to pull image "172.16.7.199:50000/quay.io/jetstack/cert-manager-cainjector:v0.12.0": rpc error: code = Unknown desc = Error response from daemon: Get https://172.16.7.199:50000/v2/: dial tcp 172.16.7.199:50000: connect: no route to host
Warning Failed 15m (x4 over 16m) kubelet Error: ErrImagePull
Warning Failed 6m36s (x44 over 16m) kubelet Error: ImagePullBackOff
Normal BackOff 103s (x66 over 16m) kubelet Back-off pulling image "172.16.7.199:50000/quay.io/jetstack/cert-manager-cainjector:v0.12.0"</code></pre>
<p>From the events above, the pull is failing: the pod references registry port 50000 while the private registry listens on 5000, and this image must also actually exist in the registry:</p>
<pre><code class="language-bash">172.16.7.199:50000/quay.io/jetstack/cert-manager-cainjector:v0.12.0</code></pre>
<h1>Self-Signed Certificates</h1>
<p><a href="https://my.oschina.net/u/4257408/blog/3662544">https://my.oschina.net/u/4257408/blog/3662544</a></p>
<h1>Installing and Configuring Helm</h1>
<p>Helm is the preferred package manager for Kubernetes. Helm charts provide a templating syntax for Kubernetes YAML manifest documents; with Helm you can create configurable deployments rather than just static files. Helm 2 had two parts, the Helm client (helm) and the Helm server (Tiller); Helm 3, which is used here, no longer has Tiller.</p>
<p>Helm client download:</p>
<pre><code class="language-bash">https://github.com/helm/helm/releases
https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz #最新稳定版
Download Helm v3.4.1. The common platform binaries are here: 在这个位置下载。</code></pre>
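<p>Installing the client is just unpacking the tarball and putting the binary on the PATH (a sketch for linux-amd64):</p>
<pre><code class="language-bash">tar zxvf helm-v3.4.1-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/helm
helm version   # verify the install</code></pre>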
<p>Common helm commands:
<a href="https://blog.csdn.net/kjh2007abc/article/details/99618455">https://blog.csdn.net/kjh2007abc/article/details/99618455</a></p>
<p>Deploying and configuring an HA k8s cluster with kubeadm:
<a href="https://www.cnblogs.com/zhaoya2019/p/13032218.html">https://www.cnblogs.com/zhaoya2019/p/13032218.html</a></p>