# rancher-K8s

**Repository Path**: sky66/K8s

## Basic Information

- **Project Name**: rancher-K8s
- **Description**: Rancher-managed k8s cluster -- one-click k8s cluster from binary packages
- **Primary Language**: Shell
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 319
- **Created**: 2019-09-21
- **Last Updated**: 2020-12-19

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

Highlights: fully offline, no internet dependency (install the OS, configure an IP, and boot -- nothing else needs to be installed)
====

1. One-click k8s cluster setup on a genuinely fresh CentOS 7.3-7.6 Minimal install (all that is required is a unified root password across the cluster)
1. One-click install for a single machine or a cluster of any size (currently each node also runs an etcd member; separating and customizing this is planned)
1. One-click batch addition and removal of node servers (newly added servers must have a clean OS environment and the unified password)
1. IPVS load balancing; shared intranet yum repository served on port 42344
1. Graphical menu-driven installer; dashboard web management page on port 42345
1. Heketi + GlusterFS (distributed storage cluster) + Helm, fully offline one-click deployment
1. Default version v1.14.4; the software packages can be swapped for other versions. Cluster installs have been tested with 1-30 servers
1. With 4 or more cluster nodes, the k8s data-persistence scheme is enabled by default: GlusterFS + Heketi, with a minimum of 3 storage nodes
   (installation is fully automatic; when enabling the cluster persistence scheme, make sure every storage node has one empty, unpartitioned disk, e.g. /dev/sdb -- by default 40% is used for k8s persistence and 60% is mounted at the local /data directory)

If Heketi + GlusterFS is enabled, a PVC is created by default to verify dynamic provisioning.

One-click install
===

#### Pick either channel to install

The one-click install requires a freshly installed CentOS 7 system with no software environment and with internet access.
Cloning with git is not recommended (the repository is roughly 1.5 GB); download the offline package instead.

## Install channel 01 (private high-speed server)

``` shell
rm -f K8s_1.0.tar*; wget http://www.linuxtools.cn:42344/K8s_1.0.tar && tar -xzvf K8s_1.0.tar && cd K8s/ && sh install.sh
```

## Install channel 02 (Gitee server)

``` shell
yum install wget unzip -y; rm -fv master.zip*
while [ true ]; do wget https://gitee.com/q7104475/K8s/repository/archive/master.zip || sleep 3 && break 1; done && unzip master.zip && cd K8s/ && sh install.sh
```

```
============== master node health check: kube-apiserver kube-controller-manager kube-scheduler etcd kubelet kube-proxy docker ==================
192.168.123.51 | CHANGED | rc=0 >>
active active active active active active active
=============================================== node health check: etcd kubelet kube-proxy docker ===============================================
192.168.123.55 | CHANGED | rc=0 >>
active active active active
192.168.123.53 | CHANGED | rc=0 >>
active active active active
192.168.123.52 | CHANGED | rc=0
>>
active active active active
192.168.123.56 | CHANGED | rc=0 >>
active active active active
192.168.123.54 | CHANGED | rc=0 >>
active active active active
192.168.123.57 | CHANGED | rc=0 >>
active active active active
192.168.123.60 | CHANGED | rc=0 >>
active active active active
192.168.123.59 | CHANGED | rc=0 >>
active active active active
192.168.123.58 | CHANGED | rc=0 >>
active active active active
=============================================== checking csr, cs, pvc, pv, storageclasses ===============================================
NAME                                                                                                 AGE   REQUESTOR           CONDITION
certificatesigningrequest.certificates.k8s.io/node-csr-8Dumqf_K9A_fQONoJOWpa_KgyZP3wzAe6Z5iGJIuKmk   16m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-AdU7hmZ-km7TX4VrWsV7iWpvIzhgO4ZPZaYRKgE8f1c   15m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-EWZEaK-iQem_08frMwTvJ7QdB8PTLFZh4GGECeKhrxc   17m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-G3AoLefbIeK6Al-sWW331YfjnIKpivLizekc8dd27N8   17m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-TXTxvkenqC9t5BytKOpu__8JoopEA4nijZQdMeoYj8c   17m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-dY4r6C5MzxNSMyumlSL0pJkMS8374onjL-O8rP7QbPw   17m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-drpTmdveqOdl7y2x5DWTOo8gcqhO1dewC5RAFqhnHmA   16m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-k1Wp5XvX3oOO0UeJO4gtZ1dJkK3BunceoCr-A4sRyfk   17m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-mE5hluSfa_ieJiskGS8iOFBy3TMUymDV8kW4bVTKwd4   16m   kubelet-bootstrap   Approved,Issued
certificatesigningrequest.certificates.k8s.io/node-csr-mW-vtE_JrIC8DWVptPyadVZdH48PY_bXH4N0GknksMg   16m   kubelet-bootstrap   Approved,Issued
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager
Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-5               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-9               Healthy   {"health":"true"}
componentstatus/etcd-3               Healthy   {"health":"true"}
componentstatus/etcd-8               Healthy   {"health":"true"}
componentstatus/etcd-6               Healthy   {"health":"true"}
componentstatus/etcd-7               Healthy   {"health":"true"}
componentstatus/etcd-4               Healthy   {"health":"true"}
NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/gluster1-test   Bound    pvc-c668a6fd-d612-11e9-983b-000c29c7746f   1Gi        RWX            gluster-heketi   6m13s
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS     REASON   AGE
persistentvolume/pvc-c668a6fd-d612-11e9-983b-000c29c7746f   1Gi        RWX            Delete           Bound    default/gluster1-test   gluster-heketi            5m59s
NAME                                         PROVISIONER               AGE
storageclass.storage.k8s.io/gluster-heketi   kubernetes.io/glusterfs   6m13s
=============================================== checking node labels ===============================================
NAME             STATUS   ROLES    AGE   VERSION   LABELS
192.168.123.51   Ready    master   15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,dashboard=master,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.51,kubernetes.io/os=linux,node-role.kubernetes.io/master=master
192.168.123.52   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.52,kubernetes.io/os=linux,node-role.kubernetes.io/node=node,storagenode=glusterfs
192.168.123.53   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.53,kubernetes.io/os=linux,node-role.kubernetes.io/node=node,storagenode=glusterfs
192.168.123.54   Ready    node     15m   v1.14.4
beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.54,kubernetes.io/os=linux,node-role.kubernetes.io/node=node,storagenode=glusterfs
192.168.123.55   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.55,kubernetes.io/os=linux,node-role.kubernetes.io/node=node
192.168.123.56   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.56,kubernetes.io/os=linux,node-role.kubernetes.io/node=node
192.168.123.57   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.57,kubernetes.io/os=linux,node-role.kubernetes.io/node=node
192.168.123.58   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.58,kubernetes.io/os=linux,node-role.kubernetes.io/node=node
192.168.123.59   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.59,kubernetes.io/os=linux,node-role.kubernetes.io/node=node
192.168.123.60   Ready    node     15m   v1.14.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.123.60,kubernetes.io/os=linux,node-role.kubernetes.io/node=node
=============================================== checking that coredns works ===============================================
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
pod "dns-test" deleted
=============================================== checking pod status ===============================================
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
default       glusterfs-r67db                       1/1     Running
0          14m     192.168.123.52   192.168.123.52   <none>           <none>
default       glusterfs-smmrk                       1/1     Running   0          14m     192.168.123.54   192.168.123.54   <none>           <none>
default       glusterfs-zswmm                       1/1     Running   0          14m     192.168.123.53   192.168.123.53   <none>           <none>
default       heketi-74cc7bb45c-sq87r               1/1     Running   0          6m34s   172.17.21.4      192.168.123.51   <none>           <none>
kube-system   coredns-57656b67bb-m7sl2              1/1     Running   0          15m     172.17.38.2      192.168.123.54   <none>           <none>
kube-system   kubernetes-dashboard-5b5697d4-wtn2w   1/1     Running   0          14m     172.17.21.2      192.168.123.51   <none>           <none>
kube-system   tiller-deploy-7f4d76c4b6-78x55        1/1     Running   0          15m     172.17.21.3      192.168.123.51   <none>           <none>
=============================================== checking node status ===============================================
NAME             STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
192.168.123.51   Ready    master   15m   v1.14.4   192.168.123.51   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.123.52   Ready    node     15m   v1.14.4   192.168.123.52   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.123.53   Ready    node     15m   v1.14.4   192.168.123.53   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.123.54   Ready    node     15m   v1.14.4   192.168.123.54   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.123.55   Ready    node     15m   v1.14.4   192.168.123.55   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.123.56   Ready    node     15m   v1.14.4   192.168.123.56   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.123.57   Ready    node     15m   v1.14.4   192.168.123.57   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.123.58   Ready    node     15m   v1.14.4   192.168.123.58   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.123.59   Ready    node     15m   v1.14.4   192.168.123.59   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.123.60   Ready    node     15m   v1.14.4   192.168.123.60   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
================================================ checking helm version ================================================
Client:
&version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
[root@51 ~]#
```

* PS: currently single master; multi-master high availability is planned
* PS: commits have been frequent lately, so occasional bugs may slip in -- feel free to raise them in the QQ group at any time

====

### [Warning] Each system must have exactly one fixed IP address on one NIC -- never multiple IPs or multiple NICs
### [Warning] Only CentOS 7.3-7.6 is supported for now; CentOS 7.2 and below are not supported
### [Warning] Host IPs must not use the 10.0.0.0 network; avoid the 172.17.x.x and 10.0.0.x ranges (otherwise the install will run into problems)

# Upgrading/replacing K8s with v1.14.0 or v1.15.0

If you do not need v1.14.0 or v1.15.0, just run the default one-click install; the master branch defaults to v1.14.4.

## The default version is v1.14.4. Upgrade packages for v1.14 / v1.15 are provided -- download one and place it in the K8s/Software_package directory (be sure to delete the original first)

Link: https://pan.baidu.com/s/1Sb8WH_z-dUI8z2vLEYWa_w  Extraction code: 0eyz

![Screenshot](https://images.gitee.com/uploads/images/2019/0629/223656_83724e63_525507.png "屏幕截图.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0629/223707_c0937c7b_525507.png "屏幕截图.png")

Before placing the new package, be sure to run:

``` shell
rm -fv K8s/Software_package/kubernetes-server-linux-amd64.tar.a*
```

``` shell
# Optional: replace the yum repositories with third-party mirrors
rm -fv /etc/yum.repos.d/*
while [ true ]; do curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo && break 1; done
while [ true ]; do curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo && break 1; done
```

Video demo
===

https://www.bilibili.com/video/av57242055?from=search&seid=4003077921686184728

#### Test environment

* VMware 15 virtualization platform; every server node has 2 cores and 2 GB RAM
* Installs tested successfully with 2-20 nodes
* A fresh CentOS 7.6 install with a clean environment is recommended (nothing needs to be pre-installed, not even Docker). Cluster mode needs at least 2 server nodes.

Network | OS | Kernel version | IP assignment | Docker version | Kubernetes version | Cluster install method |
---- | ----- | ------ | ---- | ---- | ---- | ---- |
Bridged mode | Fresh CentOS 7.6.1810 (Core) | 3.10.0-957.el7.x86_64 | Static IP set manually (DHCP must not be used on any node) | 18.06.1-ce | v1.14.4 | Binary packages |

#### Install tutorial

```
yum install wget unzip -y
wget https://gitee.com/q7104475/K8s/repository/archive/master.zip
unzip master.zip
cd K8s/ && sh install.sh
```

#### Usage

1. xxxx
2. xxxx
3. xxxx

#### Contributing

#### Screenshots

![Screenshot](https://images.gitee.com/uploads/images/2019/0823/234839_5cb17b5a_525507.png "2.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0305/151653_e76832a6_525507.png "QQ图片20190305151528.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0908/234713_0071bc0d_525507.png "2.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0305/151703_5da78708_525507.png "QQ图片20190305151533.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0305/151710_92e5f5ba_525507.png "QQ图片20190305151537.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0305/151718_c3218e5c_525507.png "QQ图片20190305151541.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0305/151726_dcc498bc_525507.png "QQ图片20190305151544.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0305/151734_c2361acc_525507.png "QQ图片20190305151519.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0305/151746_8b15d028_525507.png "QQ图片20190305151556.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0305/151753_8597d7c3_525507.png "QQ图片20190305151600.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0305/151759_3cc9716d_525507.png "QQ图片20190305151548.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0305/151804_c09e620b_525507.png "QQ图片20190305151553.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0701/123204_d60fa35d_525507.png "屏幕截图.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0701/123306_2c843b85_525507.png "屏幕截图.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0710/014059_eee6f302_525507.png "屏幕截图.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0710/014143_41a3ef29_525507.png "屏幕截图.png")
![Screenshot](https://images.gitee.com/uploads/images/2019/0710/092824_e556fbba_525507.png "屏幕截图.png")

====

![Screenshot](https://images.gitee.com/uploads/images/2019/0629/175427_0e439feb_525507.png "屏幕截图.png")

* QQ group name: K8s自动化部署交流 (K8s automated deployment discussion)
* QQ group number: 893480182

Changelog
===

### ----------------- 2019-9-16

1. Cluster edition: added a Prometheus + Grafana cluster-monitoring environment (the monitored endpoints scale automatically with the cluster size). Default port 30000; default account/password admin / admin
1. Only enabled when the data-persistence feature is turned on
1. Grafana ships with built-in dashboards for k8s pod monitoring and basic cluster metrics, plus the pie-chart plugin -- usable out of the box

``` shell
[root@51 ~]# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                                           READY   STATUS    RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
default       glusterfs-5w7zn                                                1/1     Running   0          17m     192.168.123.53   192.168.123.53   <none>           <none>
default       glusterfs-6h87t                                                1/1     Running   0          17m     192.168.123.55   192.168.123.55   <none>           <none>
default       glusterfs-rpxt5                                                1/1     Running   0          17m     192.168.123.56   192.168.123.56   <none>           <none>
default       glusterfs-tf2c2                                                1/1     Running   0          17m     192.168.123.54   192.168.123.54   <none>           <none>
default       heketi-74cc7bb45c-xxdq7                                        1/1     Running   0          6m57s   172.17.54.5      192.168.123.51   <none>           <none>
default       my-grafana-766fb5978b-md5jr                                    1/1     Running   0          6m30s   172.17.82.2      192.168.123.52   <none>           <none>
default       my-prometheus-prometheus-alertmanager-79dfbddd64-vtfvk         2/2     Running   0          6m37s   172.17.54.8      192.168.123.51   <none>           <none>
default       my-prometheus-prometheus-kube-state-metrics-64dcd5d669-28prp   1/1     Running   0          6m37s   172.17.54.6      192.168.123.51   <none>           <none>
default       my-prometheus-prometheus-node-exporter-9sqjt                   1/1     Running   0          6m37s   192.168.123.52   192.168.123.52   <none>           <none>
default       my-prometheus-prometheus-node-exporter-b5tjt                   1/1     Running   0          6m37s   192.168.123.53   192.168.123.53   <none>           <none>
default       my-prometheus-prometheus-node-exporter-cdnh8                   1/1     Running   0          6m37s   192.168.123.56   192.168.123.56   <none>           <none>
default       my-prometheus-prometheus-node-exporter-gllzk                   1/1     Running   0          6m37s   192.168.123.54   192.168.123.54   <none>           <none>
default       my-prometheus-prometheus-node-exporter-pl7nd                   1/1     Running   0          6m37s   192.168.123.55   192.168.123.55   <none>           <none>
default       my-prometheus-prometheus-node-exporter-sf7bp                   1/1     Running   0          6m37s   192.168.123.51   192.168.123.51   <none>           <none>
default       my-prometheus-prometheus-pushgateway-76d96d955d-tmf2p          1/1     Running   0          6m37s   172.17.54.7      192.168.123.51   <none>           <none>
default       my-prometheus-prometheus-server-558dc894b5-7bnvv               2/2     Running   0          6m37s   172.17.54.9      192.168.123.51   <none>           <none>
kube-system   coredns-57656b67bb-n8xk4                                       1/1     Running   0          17m     172.17.54.2      192.168.123.51   <none>           <none>
kube-system   kubernetes-dashboard-5b5697d4-jjnsx                            1/1     Running   0          17m     172.17.54.4      192.168.123.51   <none>           <none>
kube-system   tiller-deploy-7f4d76c4b6-smz65                                 1/1     Running   0          18m     172.17.54.3      192.168.123.51   <none>           <none>
[root@51 ~]#
```

### ----------------- 2019-9-13

1. Cluster edition: added CoreDNS -- thanks to dockercore in the QQ group for the guidance
1. Optimized the cluster deployment scripts; added a script that monitors key cluster functions
1. Added a built-in busybox image for testing DNS

### ----------------- 2019-8-26

1. Added batch addition/removal of node servers
2. Added GlusterFS distributed replicated volumes for persistent storage (deployed automatically in cluster mode with 4 or more servers)

### ----------------- 2019-7-11

Fixed etcd installation failures caused by imprecise IP detection in some environments.

### ----------------- 2019-7-10

1. Cluster edition: added the dashboard web console
2. Updated docker-ce to version 18.09.7

After the cluster install completes, the dashboard web console is available at http://IP:42345

### ----------------- 2019-7-1

Added the dashboard web console for the single-machine edition.

After the single-machine install completes, the dashboard web console is available at http://IP:42345
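The dynamic-provisioning check shown in the verification output earlier (the `gluster1-test` claim: 1Gi, RWX, `gluster-heketi` StorageClass) corresponds to a manifest along these lines. This is a reconstruction from that output, not the installer's actual file:

```yaml
# PersistentVolumeClaim reconstructed from the gluster1-test verification output.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1-test
  namespace: default
spec:
  accessModes:
    - ReadWriteMany              # shown as RWX in the output
  resources:
    requests:
      storage: 1Gi
  storageClassName: gluster-heketi   # StorageClass created by the installer
```

Once applied on an installed cluster, `kubectl get pvc gluster1-test` should report STATUS `Bound` with a dynamically provisioned `pvc-...` volume, matching the check the installer performs.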
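The [Warning] notes earlier (one fixed IP on one NIC, CentOS 7.3-7.6 only, host IPs outside the 10.0.0.x and 172.17.x.x ranges) can be checked before running `install.sh`. The sketch below is illustrative only: the function names, messages, and exact patterns are assumptions, not part of the installer.

``` shell
#!/bin/sh
# Pre-flight sketch for the warnings above. Everything here is a
# hypothetical helper, not shipped with install.sh.

# True (exit 0) when an IP sits in a range the README says to avoid
# (10.0.0.x is used by the cluster service network, 172.17.x.x by docker0).
forbidden_ip() {
    case "$1" in
        10.0.0.*|172.17.*) return 0 ;;
        *) return 1 ;;
    esac
}

# True when the CentOS release string falls inside the supported 7.3-7.6 window.
supported_release() {
    case "$1" in
        7.[3456]*) return 0 ;;
        *) return 1 ;;
    esac
}

# Run all checks on the local host; call this on each candidate node.
preflight() {
    ip=$(hostname -I | awk '{print $1}')
    [ "$(hostname -I | wc -w)" -eq 1 ] || { echo "FAIL: more than one IP configured"; return 1; }
    if forbidden_ip "$ip"; then echo "FAIL: $ip overlaps cluster networks"; return 1; fi
    rel=$(sed 's/[^0-9.]//g' /etc/redhat-release)
    supported_release "$rel" || { echo "FAIL: CentOS $rel is outside 7.3-7.6"; return 1; }
    echo "pre-flight OK"
}
```

Run `preflight` on every candidate node; any FAIL line means the node needs fixing before the one-click install.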
my-prometheus-prometheus-node-exporter-pl7nd 1/1 Running 0 6m37s 192.168.123.55 192.168.123.55 default my-prometheus-prometheus-node-exporter-sf7bp 1/1 Running 0 6m37s 192.168.123.51 192.168.123.51 default my-prometheus-prometheus-pushgateway-76d96d955d-tmf2p 1/1 Running 0 6m37s 172.17.54.7 192.168.123.51 default my-prometheus-prometheus-server-558dc894b5-7bnvv 2/2 Running 0 6m37s 172.17.54.9 192.168.123.51 kube-system coredns-57656b67bb-n8xk4 1/1 Running 0 17m 172.17.54.2 192.168.123.51 kube-system kubernetes-dashboard-5b5697d4-jjnsx 1/1 Running 0 17m 172.17.54.4 192.168.123.51 kube-system tiller-deploy-7f4d76c4b6-smz65 1/1 Running 0 18m 172.17.54.3 192.168.123.51 [root@51 ~]# ``` ### ----------------- ### ----------------- 2019-9-13 1. 集群版新增coredns 感谢群内dockercore大佬的指导 1. 优化集群版部署脚本,新增集群重要功能监测脚本 1. 新增内置busybox镜像测试dns功能 ### ----------------- 2019-8-26 1 新增node节点批量增删 2 新增glusterfs分布式复制卷---持久化存储(集群版4台及以上自动内置部署) ### ----------------- 2019-7-11 修复部分环境IP取值不精确导致etcd安装失败的问题 ### ----------------- 2019-7-10 1. 新增集群版 web图形化控制台dashboard 2. 更新docker-ce版本为 Version: 18.09.7 K8s集群版安装完毕,web控制界面dashboard地址为 http://IP:42345 ### ----------------- 2019-7-1 新增单机版 web图形化控制台dashboard K8s单机版安装完毕,web控制界面dashboard地址为 http://IP:42345