# kubernetes

**Repository Path**: ak161/kubernetes

## Basic Information
- **Project Name**: kubernetes
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-07-14
- **Last Updated**: 2020-12-19

## Categories & Tags
**Categories**: Uncategorized
**Tags**: None

## README

### kubernetes

## I. Theory
#### [1. Understanding Kubernetes authentication and authorization][1]
#### [2. Choosing a network plugin (CNI)][2]

## II. Highly Available Cluster Deployment
#### [1. Deploying a highly available Kubernetes cluster with kubeadm][4]
#### [2. Deploying a highly available Kubernetes cluster from binaries][3]

[1]:https://gitee.com/pa/kubernetes/blob/master/docs/auth.md
[2]:https://gitee.com/pa/kubernetes/blob/master/docs/cni.md
[3]:https://gitee.com/pa/kubernetes-ha-binary
[4]:https://gitee.com/pa/kubernetes-ha-kubeadm

### Installation guide

Disable the firewall and SELinux:

```
~]# systemctl disable --now firewalld
~]# setenforce 0
~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
```

Turn off swap:

```
~]# swapoff -a
~]# sed -i.bak 's/^.*centos-swap/#&/g' /etc/fstab
```

Set the hostname on each machine:

```
~]# hostnamectl set-hostname master
~]# hostnamectl set-hostname node1
~]# hostnamectl set-hostname node2
```

Add the cluster host entries:

```
~]# cat >> /etc/hosts << EOF
# add the IP-to-hostname entries for master, node1 and node2 here
EOF
```

Enable the required kernel networking parameters:

```
~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
```

Switch the yum repository to the Aliyun mirror:

```
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache
```

On hosts that are not Aliyun ECS instances:

`sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo`

### Install Docker

Ubuntu:

```
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get -y update
sudo apt-get -y install docker-ce

# To install a specific Docker CE version, list the available versions first:
apt-cache madison docker-ce
#   docker-ce | 17.03.1~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
#   docker-ce | 17.03.0~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
# then install the chosen version (VERSION is e.g. 17.03.1~ce-0~ubuntu-xenial above):
sudo apt-get -y install docker-ce=[VERSION]
```

CentOS 7:

```
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# update the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce

# start the Docker service
sudo service docker start
```

Note: the official repository enables only the latest stable packages by default; other package channels can be obtained by editing the repo file. For example, the test channel is not enabled by default and can be switched on as follows (other test channels can be enabled the same way):

```
vim /etc/yum.repos.d/docker-ce.repo
# under [docker-ce-test], change enabled=0 to enabled=1
```

To install a specific Docker CE version:

```
# Step 1: list the available versions
yum list docker-ce.x86_64 --showduplicates | sort -r
#   Loaded plugins: branch, fastestmirror, langpacks
#   Loading mirror speeds from cached hostfile
#   docker-ce.x86_64   17.03.1.ce-1.el7.centos   docker-ce-stable
#   docker-ce.x86_64   17.03.1.ce-1.el7.centos   @docker-ce-stable
#   docker-ce.x86_64   17.03.0.ce-1.el7.centos   docker-ce-stable

# Step 2: install the chosen version (VERSION is e.g. 17.03.0.ce-1.el7.centos above)
sudo yum -y install docker-ce-[VERSION]
```

Configure a Docker registry mirror:

```
~]# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://f1bhsuge.mirror.aliyuncs.com"]
}
EOF
```
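The registry mirror above and the systemd cgroup driver (needed by the kubelet, see the common problems section) can go into the same daemon.json. A minimal sketch; a scratch directory stands in for /etc/docker so it can run without root, and the mirror URL is the one used throughout this guide:

```shell
# Sketch: write registry mirror and cgroup driver into one daemon.json.
# On a real node the target is /etc/docker/daemon.json.
DOCKER_ETC="$(mktemp -d)"

cat > "${DOCKER_ETC}/daemon.json" << 'EOF'
{
  "registry-mirrors": ["https://f1bhsuge.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# sanity-check the file; on a real node then run:
#   systemctl daemon-reload && systemctl restart docker
grep -q '"registry-mirrors"' "${DOCKER_ETC}/daemon.json" && echo "daemon.json written"
```

Putting both keys in one file avoids overwriting the mirror setting later when the cgroup driver has to be fixed.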
### Install kubernetes

Debian / Ubuntu:

```
apt-get update && apt-get install -y apt-transport-https curl
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat << EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
```

### CentOS / RHEL / Fedora

```
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
```

### Generate the initial configuration

```
kubeadm config print init-defaults > kubeadm-init.yaml
vim kubeadm-init.yaml
```

```
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # change to the master node IP
  advertiseAddress: 192.168.10.78
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  # add the pod IP range
  podSubnet: 10.244.0.0/16
scheduler: {}
```

Check which images are needed:

```
# list the images kubeadm needs
kubeadm config images list

# pull from the Aliyun mirror registry registry.cn-hangzhou.aliyuncs.com/ak161/
# and retag to the expected name:
#   docker tag <image-id> <required-image-name>

# or pre-pull the images directly
kubeadm config images pull --config kubeadm-init.yaml
```

### Initialize the master node

`kubeadm init --config kubeadm-init.yaml`

### Install the flannel plugin

`~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml`

The file is hosted abroad, so a copy has been placed in this repository with the image addresses already updated; login is required.
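The pull-from-mirror-and-retag step described in the image section above can be scripted. A sketch under these assumptions: the image list is inlined as a sample of what `kubeadm config images list` prints (the real list may differ per version), the mirror repository is the one named in this guide, and the `docker` commands are only echoed rather than executed:

```shell
# Sketch: map each k8s.gcr.io image name to the Aliyun mirror repository,
# then print the docker pull/tag commands that would fetch and retag it.
MIRROR="registry.cn-hangzhou.aliyuncs.com/ak161"

# inlined sample of `kubeadm config images list` output
IMAGES="k8s.gcr.io/kube-apiserver:v1.19.0
k8s.gcr.io/kube-controller-manager:v1.19.0
k8s.gcr.io/kube-scheduler:v1.19.0
k8s.gcr.io/pause:3.2"

echo "$IMAGES" | while read -r img; do
  mirrored="${MIRROR}/${img#k8s.gcr.io/}"   # strip the registry prefix
  echo "docker pull ${mirrored}"            # drop the echo to actually run
  echo "docker tag ${mirrored} ${img}"
done
```

Retagging back to the `k8s.gcr.io` name lets kubeadm find the images without changing `imageRepository` in kubeadm-init.yaml.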
### Install the dashboard

A modified `recommended.yaml` has been placed in this repository.

`[root@master ~]# kubectl apply -f recommended.yaml`

```
# check the status
[root@master ~]# kubectl get pod,svc -n kubernetes-dashboard -o wide

# create a directory for the certificates
[root@master ~]# mkdir key && cd key

# check whether the kubernetes-dashboard namespace exists
[root@master ~]# kubectl get namespaces

# if it does not exist, create it
[root@master ~]# kubectl create namespace kubernetes-dashboard

# generate the key
[root@master ~]# openssl genrsa -out dashboard.key 2048

# generate the certificate signing request
[root@master ~]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.100.10'

# generate the self-signed certificate
[root@master ~]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt

# directory contents
[root@k8smaster key]# ll
total 12
-rw-r--r-- 1 root root 1001 Oct 23 22:21 dashboard.crt
-rw-r--r-- 1 root root  903 Oct 23 22:20 dashboard.csr
-rw-r--r-- 1 root root 1679 Oct 23 22:20 dashboard.key

# create the secret from the self-signed certificate
[root@master ~]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard

# label the node selected by the nodeSelector as master
[root@master ~]# kubectl label node k8smaster type=master

# pull the image
[root@master ~]# docker pull kubernetesui/dashboard:v2.0.0-beta4

# start the dashboard
[root@master ~]# kubectl apply -f recommended.yaml

# check that the pod and service are running
[root@master ~]# kubectl get pod,svc -n kubernetes-dashboard -o wide
NAME                                             READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
pod/dashboard-metrics-scraper-566cddb686-7csmx   1/1     Running   0          2m16s   10.244.2.15   k8snode1
pod/kubernetes-dashboard-75d8b49cf6-fcn6v        1/1     Running   0          2m17s   10.244.0.19   k8smaster

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE     SELECTOR
service/dashboard-metrics-scraper   ClusterIP   10.110.92.64     <none>        8000/TCP        2m16s   k8s-app=dashboard-metrics-scraper
service/kubernetes-dashboard        NodePort    10.101.171.115   <none>        443:30001/TCP   2m17s   k8s-app=kubernetes-dashboard

[root@master ~]# kubectl describe pod kubernetes-dashboard-75d8b49cf6-fcn6v -n kubernetes-dashboard
Name:         kubernetes-dashboard-75d8b49cf6-fcn6v
Namespace:    kubernetes-dashboard
Priority:     0
Node:         k8smaster/192.168.100.10
Start Time:   Wed, 23 Oct 2019 22:31:49 -0400
Labels:       k8s-app=kubernetes-dashboard
              pod-template-hash=75d8b49cf6
... ... (output omitted) ... ...
Events:
  Type    Reason     Age    From                Message
  ----    ------     ----   ----                -------
  Normal  Scheduled         default-scheduler   Successfully assigned kubernetes-dashboard/kubernetes-dashboard-75d8b49cf6-fcn6v to k8smaster
  Normal  Pulled     3m1s   kubelet, k8smaster  Container image "kubernetesui/dashboard:v2.0.0-beta4" already present on machine
  Normal  Created    3m1s   kubelet, k8smaster  Created container kubernetes-dashboard
  Normal  Started    3m     kubelet, k8smaster  Started container kubernetes-dashboard
```
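Before loading the certificate into the secret, it is worth confirming that the CN and validity window came out as intended. A sketch that repeats the generation from the steps above in a scratch directory (on a real master, run only the final check against the files under key/):

```shell
# Sketch: generate the dashboard key/csr/crt as above, then inspect the
# certificate's subject and validity dates before creating the secret.
WORKDIR="$(mktemp -d)"

openssl genrsa -out "$WORKDIR/dashboard.key" 2048 2>/dev/null
openssl req -days 36000 -new -out "$WORKDIR/dashboard.csr" \
  -key "$WORKDIR/dashboard.key" -subj '/CN=192.168.100.10'
openssl x509 -req -in "$WORKDIR/dashboard.csr" \
  -signkey "$WORKDIR/dashboard.key" -out "$WORKDIR/dashboard.crt" 2>/dev/null

# the subject line should show CN=192.168.100.10
openssl x509 -noout -subject -dates -in "$WORKDIR/dashboard.crt"
```

If the CN is wrong here, the browser warning later will name the wrong host, so it is cheaper to catch it at this point.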
```
# Grant admin rights
# create the service account
[root@master ~]# kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard

# bind it to the cluster-admin role
[root@master ~]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin

# list the service accounts and secrets
[root@master ~]# kubectl get sa,secrets -n kubernetes-dashboard
NAME                                  SECRETS   AGE
serviceaccount/dashboard-admin       1         32s
serviceaccount/default               1         33m
serviceaccount/kubernetes-dashboard  1         25m

NAME                                      TYPE                                  DATA   AGE
secret/dashboard-admin-token-rjk49        kubernetes.io/service-account-token   3      32s
secret/default-token-65rm4                kubernetes.io/service-account-token   3      33m
secret/kubernetes-dashboard-certs         Opaque                                2      33m
secret/kubernetes-dashboard-csrf          Opaque                                1      25m
secret/kubernetes-dashboard-key-holder    Opaque                                2      25m
secret/kubernetes-dashboard-token-696vq   kubernetes.io/service-account-token   3      25m

# show the token
[root@master ~]# kubectl describe secrets dashboard-admin-token-rjk49 -n kubernetes-dashboard

# or fetch the token directly with:
kubectl describe secrets $(kubectl get secrets -n kubernetes-dashboard | awk '/dashboard-admin-token/{print $1}') -n kubernetes-dashboard | sed -n '/token:.*/p'
```

Open the dashboard in a browser at `<node-ip>:30001`.
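The token one-liner above chains an awk filter (pick out the secret name) and a sed filter (keep only the token line). Those text-processing stages can be exercised against captured command output, so the pipeline can be checked without a live cluster; the sample lines below mimic the outputs shown earlier, and the token value is a made-up placeholder:

```shell
# Sketch: the two text-processing stages of the token one-liner, run on
# captured output instead of live kubectl calls.
SECRETS_LIST='NAME                          TYPE                                  DATA   AGE
dashboard-admin-token-rjk49   kubernetes.io/service-account-token   3      32s
default-token-65rm4           kubernetes.io/service-account-token   3      33m'

# stage 1: awk picks the dashboard-admin token secret's name (column 1)
SECRET_NAME="$(echo "$SECRETS_LIST" | awk '/dashboard-admin-token/{print $1}')"
echo "$SECRET_NAME"

DESCRIBE_OUT='Name:         dashboard-admin-token-rjk49
Type:         kubernetes.io/service-account-token
token:        not-a-real-token-value'

# stage 2: sed prints only the token: line of the describe output
echo "$DESCRIBE_OUT" | sed -n '/token:.*/p'
```

The same two filters are what the one-liner feeds from real `kubectl get secrets` and `kubectl describe secrets` output.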
### Common problems

1. Docker cgroup driver error. During installation you will often hit the following:

`failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"`

The cause is that Docker's cgroup driver does not match the kubelet's. Change Docker's cgroup driver by editing /etc/docker/daemon.json:

```
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

Then restart Docker:

```
systemctl daemon-reload
systemctl restart docker
```

2. `kubectl get pod` on a node fails with a localhost:8080 connection error:

```
[root@node1 ~]# kubectl get pod
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

This happens because kubectl needs the kubernetes-admin credentials to run. Fix: copy /etc/kubernetes/admin.conf from the master to the node's /etc/kubernetes directory, then set the environment variable on the node:

```
[root@node1 images]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@node1 images]# source ~/.bash_profile
```

Run `kubectl get pod` on the node again:

```
[root@node1 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-f89759699-z4fc2   1/1     Running   0          20m
```

3. Identity-verification error when a node joins the cluster:

```
[root@node1 ~]# kubeadm join 192.168.50.128:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:05b84c41152f72ca33afe39a7ef7fa359eec3d3ed654c2692b665e2c4810af3e
W0801 11:06:05.871557    2864 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: cluster CA found in cluster-info ConfigMap is invalid: none of the public keys "sha256:a74a8f5a2690aa46bd2cd08af22276c08a0ed9489b100c0feb0409e1f61dc6d0" are pinned
To see the stack trace of this error execute with --v=5 or higher
```
The discovery hash was copied incorrectly. Copy the join command printed at the end of the master's `kubeadm init` output again and rerun it.

4. Swap was not disabled when initializing the master node:

`[ERROR Swap]: running with swap on is not supported. Please disable swap`

Disable the swap partition:

```
swapoff -a
sed -i.bak 's/^.*centos-swap/#&/g' /etc/fstab
```

5. `kubectl get cs` shows components as unhealthy:

```
[root@master ~]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
```

Edit the configuration files of the scheduler and controller-manager components and remove the `--port=0` flag from each. The files live under /etc/kubernetes/manifests/: kube-controller-manager.yaml and kube-scheduler.yaml. Just save the files; no manual restart is needed. After about half a minute the cluster recovers on its own, and running `kubectl get cs` again shows the components as healthy.

6. Dashboard error:

`Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout`

This is still a cluster-network problem, but if the nodes and the flannel pods all look normal you will not find the cause that way. The quickest workaround is to schedule the dashboard onto the master node.

Edit the dashboard manifest and comment out the lines below (around lines 232-234), i.e. the last three lines of this block:

```
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      # tolerations:
      #   - key: node-role.kubernetes.io/master
      #     effect: NoSchedule
```

Then pin the pod to the chosen node by adding the line `nodeName: master` (around line 190):

```
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      nodeName: master
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta8
          imagePullPolicy: Always
          ports:
```

Save the file and re-run `kubectl apply` to apply it to the cluster. If you want to dig further on your own, check whether the problem lies in how the flannel subnet is defined.
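As a footnote to problem 5 above, the manual removal of `--port=0` can also be done with sed. A sketch against an inlined, trimmed fragment of kube-scheduler.yaml so it runs anywhere; on a real master, point the sed at the files under /etc/kubernetes/manifests/ instead (the kubelet restarts the static pods by itself):

```shell
# Sketch: strip the --port=0 flag from a scheduler/controller-manager
# static pod manifest.
MANIFEST="$(mktemp)"
cat > "$MANIFEST" << 'EOF'
spec:
  containers:
  - command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --port=0
EOF

# delete the line carrying --port=0; the other flags are left untouched
sed -i '/- --port=0/d' "$MANIFEST"

cat "$MANIFEST"
```

The same sed line works for kube-controller-manager.yaml, since both manifests list their flags one per line under `command:`.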