Installing with kubeadm
<p>The following is based on 阿良's kubeadm快速部署Kubernetes(注释加精版).pdf (a quick Kubernetes deployment with kubeadm, annotated edition).</p>
<ol>
<li>Install Docker/kubeadm/kubelet on all nodes.
The default CRI (container runtime) for Kubernetes is Docker, so install Docker first.</li>
</ol>
<p>4.1 Install Docker
[root@k8s-master ~]# wget <a href="https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo">https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo</a> -O /etc/yum.repos.d/docker-ce.repo
[root@k8s-master ~]# yum -y install docker-ce-18.06.1.ce-3.el7
[root@k8s-master ~]# systemctl enable docker && systemctl start docker
[root@k8s-master ~]# docker --version </p>
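<p>Note: the kubeadm init output further below warns that Docker's "cgroupfs" cgroup driver differs from the recommended "systemd". A minimal sketch of switching Docker to the systemd driver (assuming a fresh /etc/docker/daemon.json; merge the key in if the file already has settings):</p>
<pre><code># Point Docker at the systemd cgroup driver
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
# Verify the active driver
docker info | grep -i cgroup</code></pre>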
<p>4.2 Add the Aliyun YUM repository
[root@k8s-master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=<a href="https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64">https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64</a>
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=<a href="https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg">https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg</a> <a href="https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg">https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg</a>
EOF</p>
<p>4.3 Install kubeadm, kubelet, and kubectl
Note: install these on every host. Because releases update frequently, pin the versions here:
[root@k8s-master ~]# yum install -y kubelet-1.15.2-0 kubeadm-1.15.2-0 kubectl-1.15.2-0
[root@k8s-master ~]# systemctl enable kubelet</p>
<p>On the node hosts, kubelet cannot fetch its initial configuration before joining the cluster, so the service keeps restarting even after confirming that swap is off, SELinux is disabled, firewalld is disabled, and the cgroup driver matches Docker's (a sketch of those checks follows below).
tail -f /var/log/messages or journalctl -xefu kubelet shows:
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Ignore this error and simply enable the service at boot: the configuration has not been initialized yet, so kubelet cannot start until kubeadm succeeds; check it again afterwards.</p>
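<p>The prerequisite checks mentioned in the note above are not shown in the source PDF; a sketch of how they are commonly done on CentOS 7, run on every host:</p>
<pre><code># Turn off swap now and keep it off after reboot
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Put SELinux into permissive mode now and after reboot
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Stop and disable firewalld
systemctl stop firewalld && systemctl disable firewalld</code></pre>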
<ol>
<li>Deploy the Kubernetes Master
Note: run this only on the master host.
[root@k8s-master ~]# kubeadm init \
--apiserver-advertise-address=172.16.3.32 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.15.2 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16</li>
</ol>
<p>Note: the default image registry k8s.gcr.io is not reachable from mainland China, so the Aliyun mirror registry is specified here. </p>
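<p>The init output below also mentions 'kubeadm config images pull'; a sketch of pre-pulling the images with the same mirror settings so init itself goes faster:</p>
<pre><code># Pull the control-plane images ahead of time from the Aliyun mirror
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.2</code></pre>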
<p>After a few minutes the command produces a large block of output, ending with token-related content:
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at <a href="https://kubernetes.io/docs/setup/cri/">https://kubernetes.io/docs/setup/cri/</a>
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 172.16.3.32]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.16.3.32 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.16.3.32 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 36.501473 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: eeb8ss.5r27b92splcn8mdq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy</p>
<p>Your Kubernetes control-plane has initialized successfully!</p>
<p>To start using your cluster, you need to run the following as a regular user:</p>
<p>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config</p>
<p>You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
<a href="https://kubernetes.io/docs/concepts/cluster-administration/addons/">https://kubernetes.io/docs/concepts/cluster-administration/addons/</a></p>
<p>Then you can join any number of worker nodes by running the following on each as root:</p>
<p>kubeadm join 172.16.3.32:6443 --token ulg6mf.7kasnh4c7o4ye7kb \
--discovery-token-ca-cert-hash sha256:17ba7255f4064491c5e7f5e270e7581281180213451bed433dfb2953ae38aadf</p>
<p>Note: record the kubeadm join command above for adding nodes to the cluster, and run the three kubeconfig commands shown above.</p>
<p>[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config</p>
<p>[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 3m13s v1.15.2</p>
<ol>
<li>
<p>Install the Pod network add-on (CNI)
Note: perform this step only on the master.
[root@k8s-master ~]# kubectl apply -f <a href="https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml">https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml</a>
Make sure the quay.io registry is reachable. </p>
</li>
<li>Join the Kubernetes nodes
Note: run this on every node (the IP below is the master's). To add a node to the cluster, execute the kubeadm join command printed by kubeadm init, substituting the token generated for each run (see the sketch after this list):
[root@k8s-master ~]# kubeadm join 172.16.3.32:6443 --token ulg6mf.7kasnh4c7o4ye7kb --discovery-token-ca-cert-hash sha256:17ba7255f4064491c5e7f5e270e7581281180213451bed433dfb2953ae38aadf</li>
</ol>
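<p>Note: bootstrap tokens expire after 24 hours by default. If the original token has expired by the time a node joins, here is a sketch of regenerating a complete join command on the master:</p>
<pre><code># Create a fresh token and print the matching kubeadm join command
[root@k8s-master ~]# kubeadm token create --print-join-command
# List existing tokens and their expiry times
[root@k8s-master ~]# kubeadm token list</code></pre>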
<p>Note: after joining, wait until every node reaches Ready.
[root@k8s-master ~]# kubectl get pods -n kube-system
[root@k8s-master ~]# kubectl get nodes</p>
<p>Note: one node had to run the join more than once, taking 39 minutes in total; on success the output looks like this:
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-bccdc95cf-fdk5t 1/1 Running 0 66m
coredns-bccdc95cf-hcvj6 1/1 Running 0 66m
etcd-k8s-master 1/1 Running 0 65m
kube-apiserver-k8s-master 1/1 Running 0 65m
kube-controller-manager-k8s-master 1/1 Running 0 65m
kube-flannel-ds-amd64-9ds4v 1/1 Running 0 61m
kube-flannel-ds-amd64-b4cwn 1/1 Running 0 33m
kube-flannel-ds-amd64-jt7z9 1/1 Running 2 39m
kube-proxy-fwkbr 1/1 Running 0 33m
kube-proxy-nc4kf 1/1 Running 0 39m
kube-proxy-pnbw4 1/1 Running 0 66m
kube-scheduler-k8s-master 1/1 Running 0 65m</p>
<p>[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 67m v1.15.2
k8s-node1 Ready <none> 40m v1.15.2
k8s-node2 Ready <none> 33m v1.15.2</p>
<ol>
<li>Test the Kubernetes cluster
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
[root@k8s-master ~]# kubectl get pod,svc</li>
</ol>
<p>[root@k8s-master manifests]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-554b9c67f9-nccbj 1/1 Running 0 5h27m</p>
<p>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 6h36m
service/nginx NodePort 10.1.144.203 <none> 80:32266/TCP 5h27m</p>
<p>Access URL: <a href="http://NodeIP:Port">http://NodeIP:Port</a> </p>
<p>Entering any of the following URLs in a browser works (allow a short wait); the port is the NodePort 32266 shown by kubectl get svc above:
<a href="http://172.16.3.32:32266">http://172.16.3.32:32266</a>
<a href="http://172.16.3.33:32266">http://172.16.3.33:32266</a>
<a href="http://172.16.3.34:32266">http://172.16.3.34:32266</a></p>
<ol>
<li>Deploy the Dashboard
Download or copy kubernetes-dashboard.yaml into /etc/kubernetes/manifests, then:
cd /etc/kubernetes/manifests</li>
</ol>
<p>Then run the following command to deploy the dashboard service:
kubectl create -f kubernetes-dashboard.yaml
When the Pod status shows Running, the dashboard has been deployed successfully:
[root@k8s-master manifests]# kubectl get pod --namespace=kube-system -o wide | grep dashboard
kubernetes-dashboard-86844cc55f-cq5zb 1/1 Running 0 7m50s 10.244.2.3 k8s-node2 <none> <none></p>
<p>The Dashboard creates its own Deployment and Service in the kube-system namespace:
[root@k8s-master manifests]# kubectl get deployment kubernetes-dashboard --namespace=kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
kubernetes-dashboard 1/1 1 1 9m32s</p>
<p>[root@k8s-master manifests]# kubectl get service kubernetes-dashboard --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.1.192.181 <none> 443:31620/TCP 11m</p>
<p>Accessing the dashboard
There are several ways to access the dashboard:</p>
<pre><code>NodePort: change the Service type to NodePort
LoadBalancer: change the Service type to LoadBalancer
Ingress
API server
kubectl proxy</code></pre>
<p>NodePort method</p>
<p>For convenient local access, edit the yaml file and change the Service to type NodePort (in my copy this was already done earlier).</p>
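<p>If your copy of the yaml has not been pre-edited, a sketch of making the same change with kubectl patch (assuming the Service name and namespace used by this manifest):</p>
<pre><code># Switch the dashboard Service to NodePort without editing the yaml
kubectl -n kube-system patch service kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort"}}'</code></pre>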
<p>Re-apply the yaml file:
[root@k8s-master manifests]# kubectl apply -f kubernetes-dashboard.yaml</p>
<p>Check the Service: its TYPE has changed to NodePort, with port 31620:
[root@k8s-master manifests]# kubectl get service -n kube-system | grep dashboard
kubernetes-dashboard NodePort 10.1.192.181 <none> 443:31620/TCP 14m</p>
<p>Access via a browser:
<a href="https://172.16.3.32:31620">https://172.16.3.32:31620</a>
<a href="https://172.16.3.33:31620">https://172.16.3.33:31620</a>
<a href="https://172.16.3.34:31620">https://172.16.3.34:31620</a>
The login page looks like this:</p>
<p>The Dashboard supports two authentication methods, Kubeconfig and Token; here we choose Token to log in.</p>
<p>Create a login user
Create or upload dashboard-adminuser.yaml into /etc/kubernetes/manifests (a sketch of its typical contents follows).</p>
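<p>The contents of dashboard-adminuser.yaml are not reproduced in the source PDF; a minimal sketch matching the description below (a ServiceAccount named admin-user in kube-system, bound to cluster-admin):</p>
<pre><code># Hypothetical dashboard-adminuser.yaml, written via a heredoc
cat > dashboard-adminuser.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF</code></pre>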
<p>Apply the yaml file:
kubectl create -f dashboard-adminuser.yaml
Note: this creates a ServiceAccount named admin-user in the kube-system namespace and binds the cluster-admin ClusterRole to it,
giving admin-user administrator privileges. The cluster-admin role already exists in a kubeadm-created cluster, so we only need to bind to it.</p>
<p>View the admin-user account's token:
[root@k8s-master manifests]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-cfpc5
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: c46c31f5-93b1-47dc-898c-e35dfdb108ea</p>
<p>Type: kubernetes.io/service-account-token</p>
<p>Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWNmcGM1Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjNDZjMzFmNS05M2IxLTQ3ZGMtODk4Yy1lMzVkZmRiMTA4ZWEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.fWRnvYu-6-KhRuAvz3N3-wOTxdq_oXRSCHK-xh1FKPkk3TUsQOvBLqPUzdkacekcH0sQAu8P--uBPyTDWNVqBrPza58RM3HdIVAHpMyuSwpDbUGB31KlVIRzp5ZGXP-iunCR86laineDNZstsF_OUQtZVk4KVOKifphGNGiBt1SYIUDJAadl6_YSsEcJABJ271PIjjhpeYFmlNgroarfYWVm_foE7MT7lom3Tlm2vDGcz5VcY3IGif9US7n8fph0NehVAzBZprnCMiNyJ8XEY44JwiVcl7QnviBGZVjkP7S3BBlI0V0D3qwx3nMDFb-TQj1iB1oaTvpXtXM1OCJeHQ</p>
<p>Copy the obtained Token into the Token field on the login page:</p>
<p>Deployed environment:
1. Kubernetes:
<a href="https://172.16.3.32:31620">https://172.16.3.32:31620</a>
Log in by pasting the admin-user Token obtained above into the Token field on the login page.</p>
<p>2. Image registry:
<a href="http://172.16.3.32:3161">http://172.16.3.32:3161</a>
admin/310012</p>
<p>3. Miaoyun (秒云):
<a href="https://172.16.3.211:18443">https://172.16.3.211:18443</a>
admin/admin</p>
<p>Login successful.</p>
<p>Ways to access the Dashboard:
LoadBalancer
nginx-ingress
hostPort
NodePort
API Server
kubectl proxy</p>
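<p>As an example of the proxy method, a sketch using kubectl proxy (assuming the dashboard is deployed in kube-system as above):</p>
<pre><code># Start a local proxy to the API server (listens on 127.0.0.1:8001 by default)
kubectl proxy
# Then browse, on the same machine, to:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</code></pre>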
<p>Misc note: alias kubectl="hyperkube kubectl" (useful when kubectl is shipped inside the hyperkube bundle binary).</p>
<p>References:
Install Kubernetes v1.13 with Docker 18.06 on Centos 7
<a href="https://randomopenstackthoughts.wordpress.com/2019/01/26/install-kubernetes-v1-13-with-docker-18-06-on-centos-7/">https://randomopenstackthoughts.wordpress.com/2019/01/26/install-kubernetes-v1-13-with-docker-18-06-on-centos-7/</a></p>
<p>Deploying the dashboard visualization add-on for Kubernetes
<a href="https://blog.csdn.net/networken/article/details/85607593">https://blog.csdn.net/networken/article/details/85607593</a></p>
<p>Installing the Kubernetes web UI kubernetes-dashboard
<a href="https://www.cnblogs.com/harlanzhang/p/10045975.html">https://www.cnblogs.com/harlanzhang/p/10045975.html</a></p>
<p>See this one: <a href="https://www.cnblogs.com/RainingNight/p/deploying-k8s-dashboard-ui.html">https://www.cnblogs.com/RainingNight/p/deploying-k8s-dashboard-ui.html</a>
kubectl apply -f <a href="http://mirror.faasx.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml">http://mirror.faasx.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml</a></p>
<p>Installing Kubernetes 1.12.0 with kubeadm
<a href="https://www.cnblogs.com/hongdada/p/9761336.html">https://www.cnblogs.com/hongdada/p/9761336.html</a></p>
<p>Installing kubernetes-dashboard on a single-node CentOS 7 host
<a href="https://www.58jb.com/html/152.html">https://www.58jb.com/html/152.html</a></p>
<p>Deploying kubernetes-dashboard (1.8.3) and the pitfalls
<a href="https://www.cnblogs.com/RainingNight/p/deploying-k8s-dashboard-ui.html">https://www.cnblogs.com/RainingNight/p/deploying-k8s-dashboard-ui.html</a></p>
<p>Complete illustrated guide to installing Kubernetes 1.11.2 on CentOS 7
<a href="http://www.525.life/article?id=1510739742331">http://www.525.life/article?id=1510739742331</a>
Must-read series: k8s-dashboard 1.10.1 installation manual
<a href="http://www.jdccie.com/?p=4067">http://www.jdccie.com/?p=4067</a></p>
<p>Collected problems:
1. If sudo kubectl get pods --all-namespaces shows errors like these:
kube-system kube-flannel-ds-7nt8p 0/1 Init:ErrImagePull 0 49m
kube-system kube-flannel-ds-l5ndv 0/1 Init:ImagePullBackOff 0 49m
then the flannel image failed to pull; search Docker Hub for a substitute image:
[Fix 1]
sudo docker pull jmgao1983/flannel:v0.10.0-amd64
sudo docker tag jmgao1983/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
Source: 基于 Kubeadm 的 K8s 安装 (Kubeadm-based K8s installation)
<a href="https://huanqiang.wang/2018/03/29/%E5%9F%BA%E4%BA%8E-Kubeadm-%E7%9A%84-K8s-%E5%AE%89%E8%A3%85/">https://huanqiang.wang/2018/03/29/%E5%9F%BA%E4%BA%8E-Kubeadm-%E7%9A%84-K8s-%E5%AE%89%E8%A3%85/</a></p>
<p>[Fix 2]
docker save the image on a node that already has it, then docker load -i xxxx.tar on the nodes that have not downloaded it yet.
node1 already has the image, so just save it there for node2 to import:
docker save -o quay.io.coreos.flannel:v0.11.0-amd64.tar quay.io/coreos/flannel:v0.11.0-amd64
On node2:
docker load -i quay.io.coreos.flannel:v0.11.0-amd64.tar</p>
<p>After a short while the pod restarts using the substitute image you loaded.</p>
<p>2. The difference between docker save and docker export
<a href="https://juejin.im/entry/59a22eff5188252444425b5a">https://juejin.im/entry/59a22eff5188252444425b5a</a></p>