containerd version: 1.6.6
k8s version: 1.24.1
Linux OS: CentOS 7
ip | hostname | role |
---|---|---|
192.168.46.134 | master134 | master |
192.168.46.135 | master135 | master |
192.168.46.136 | master136 | master |
192.168.46.139 | node139 | node |
At any point during the installation below, if something fails and you need to start over, run these commands to reset:
kubeadm reset
ifconfig cni0 down && ip link delete cni0
ifconfig flannel.1 down && ip link delete flannel.1
rm -rf /var/lib/cni/
Note: these commands may report errors if kubeadm or flannel is not installed yet (you would be using tools that do not exist yet); such errors can simply be ignored.
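If you have also enabled the ipvs proxy mode configured later in this guide, flushing the ipvs rules as part of the reset does no harm (an optional sketch; ipvsadm is installed in a later step):
# clear any leftover ipvs virtual-server rules
ipvsadm --clear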
Check the hardware and software requirements for k8s (see the official docs for details).
The commands below cover the checks listed above.
Note: SELinux is a component that hardens system security, but it is very error-prone and hard to troubleshoot, and k8s has increasingly solid security mechanisms of its own, so SELinux is not needed here.
# Turn it off immediately with a command
setenforce 0
# Make it permanent by editing /etc/selinux/config (setenforce only lasts until reboot)
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Check the result (SELINUX=disabled means it is disabled)
cat /etc/selinux/config
Note: back when k8s still supported Docker, swap caused Docker to behave abnormally and degraded performance; it was treated as a bug, and turning swap off fixed it, so disabling swap became the standard practice. With enough memory OOM is unlikely, and containers can have memory limits anyway, so this is not a problem in practice.
# Turn off swap immediately
swapoff -a
# Edit /etc/fstab and comment out the swap mount so it stays off after reboot
vim /etc/fstab
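If you prefer not to edit the file by hand, a one-liner along these lines comments out every swap entry (a sketch; double-check /etc/fstab afterwards):
# comment out any line in /etc/fstab that mentions swap
sed -ri 's/.*swap.*/#&/' /etc/fstab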
# Confirm swap is off (all swap-related values should be 0)
free -m
# Edit the k8s sysctl config and add vm.swappiness=0 on the last line
vim /etc/sysctl.d/k8s.conf
Append to /etc/sysctl.d/k8s.conf:
vm.swappiness=0
# Apply the change
sysctl -p /etc/sysctl.d/k8s.conf
You also need to add an entry for the virtual IP (VIP) that kube-vip will use for load balancing.
vim /etc/hosts
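For reference, the added entries look roughly like this (the VIP 192.168.46.10 and the name api.k8s.local are the values used later in this guide; adjust to your own network):
192.168.46.10  api.k8s.local
192.168.46.134 master134
192.168.46.135 master135
192.168.46.136 master136
192.168.46.139 node139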
Then set each machine's hostname to its designated name:
hostnamectl set-hostname {hostname}
# e.g.
# hostnamectl set-hostname master134
# hostnamectl set-hostname master135
# hostnamectl set-hostname master136
# hostnamectl set-hostname node139
# Enabling kernel IPv4 forwarding for bridged traffic relies on br_netfilter, so load that module first
modprobe br_netfilter
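modprobe only loads the module for the current boot; if you want it loaded automatically after a reboot, one option is to register it with systemd-modules-load (a sketch):
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF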
# Edit the file
vim /etc/sysctl.d/k8s.conf
Append the following to /etc/sysctl.d/k8s.conf:
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
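If you would rather not open an editor, the same settings can be appended non-interactively (a sketch doing exactly what the manual edit above does):
cat >> /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
EOF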
Explanation: the bridge-nf-call settings make bridged traffic visible to iptables, and net.ipv4.ip_forward turns on IPv4 forwarding.
# Check the file
cat /etc/sysctl.d/k8s.conf
# Apply the changes
sysctl -p /etc/sysctl.d/k8s.conf
Install ipvs:
# Create the file /etc/sysconfig/modules/ipvs.modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Make sure the required modules are loaded automatically after a node reboot
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
# Check that the required kernel modules have been loaded correctly
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Make sure the ipset package is installed:
yum install ipset
To make it easy to inspect the ipvs proxy rules, it is also worth installing the management tool ipvsadm:
yum install ipvsadm
Keep the clocks of all nodes in sync; chrony takes care of that:
yum install chrony -y
systemctl enable chronyd
systemctl start chronyd
chronyc sources
k8s port requirements (see the official docs for more detail).
The master nodes need the following ports open:
Protocol | Direction | Port (range) | Purpose | Used by |
---|---|---|---|---|
TCP | Inbound | 6443 | Kubernetes API server | All |
TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
TCP | Inbound | 10259 | kube-scheduler | Self |
TCP | Inbound | 10257 | kube-controller-manager | Self |
TCP | Inbound | 443 | Common port (HTTPS) | kubernetes-dashboard etc. |
TCP | Inbound | 80 | Common port (HTTP) | |
The worker nodes need the following ports open:
Protocol | Direction | Port (range) | Purpose | Used by |
---|---|---|---|---|
TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
TCP | Inbound | 30000-32767 | NodePort Services† | All |
TCP | Inbound | 443 | Internal communication | kubernetes-dashboard etc. |
TCP | Inbound | 80 | Common port (HTTP) | |
Open the ports on the masters
# Open the ports
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=10259/tcp --permanent
firewall-cmd --zone=public --add-port=10257/tcp --permanent
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=80/tcp
# Reload to take effect
firewall-cmd --reload
# List the opened ports
firewall-cmd --zone=public --list-ports
Open the ports on the workers
# Open the ports
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=80/tcp
# Reload to take effect
firewall-cmd --reload
# List the opened ports
firewall-cmd --zone=public --list-ports
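Note: if you keep firewalld running and use Flannel with the VXLAN backend (as set up later in this guide), the VXLAN traffic itself also has to get through; opening UDP 8472 on every node is usually needed as well (an extra step not listed in the tables above):
firewall-cmd --zone=public --add-port=8472/udp --permanent
firewall-cmd --reload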
Tip: alternatively, you can simply turn the firewall off entirely; for security reasons, only do this on a trusted network:
systemctl stop firewalld
systemctl disable firewalld
Starting with v1.24, Kubernetes no longer ships the dockershim, so Docker is no longer supported as a runtime out of the box; containerd is used directly as the runtime instead.
Note: Docker itself is a layer of wrapping around containerd. As k8s matured, that extra layer became increasingly redundant, introduced quite a few bugs, and made new k8s features harder to implement, so Docker is dropped and containerd is used directly.
For installing containerd on CentOS 7, see: CentOS7安装容器运行时containerd (installing the containerd runtime on CentOS 7).
There are two ways to install a k8s cluster:
- install (or upgrade) the cluster from (pre-compiled) binaries
- install (or upgrade) the cluster with the kubeadm tool provided by k8s
This article installs the cluster with kubeadm.
Write the file /etc/yum.repos.d/kubernetes.repo with the following content (this configures the Aliyun mirror):
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
For reference, here is the official (overseas) repo configuration:
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
Tip: install the kubelet/kubeadm/kubectl versions that match the k8s version you want, because a given kubeadm version can only install that k8s version (plus one or two adjacent versions).
# List the available versions of kubelet, kubeadm and kubectl
# yum list --showduplicates kubelet --disableexcludes=kubernetes
# yum list --showduplicates kubeadm --disableexcludes=kubernetes
# yum list --showduplicates kubectl --disableexcludes=kubernetes
# Cache package metadata locally ahead of time to speed up searching and installing
yum makecache fast
# Install kubelet, kubeadm and kubectl; --disableexcludes disables every repo other than kubernetes
yum install -y kubelet-1.24.1-0 kubeadm-1.24.1-0 kubectl-1.24.1-0 --disableexcludes=kubernetes
Tip: kubeadm uses kubelet to deploy and start the core k8s services, so enable and start kubelet first (it will restart in a loop until kubeadm init/join runs; that is expected).
systemctl enable --now kubelet
# Use a dedicated directory for k8s-related files
mkdir -p /usr/local/k8s
cd /usr/local/k8s
# Dump the default init configuration and keep a backup copy
kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm.yaml
cp kubeadm.yaml kubeadm.yaml.default
vim kubeadm.yaml
Example of the modified kubeadm.yaml:
See the official docs for a description of every field.
The following fields must be adjusted to your own environment:
- localAPIEndpoint.advertiseAddress
- nodeRegistration.name
- apiServer.certSANs
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.46.134 # internal IP of this master node
bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock # socket of the container runtime; containerd is used here
  imagePullPolicy: IfNotPresent
  name: master134 # node name (the hostname configured earlier)
taints:
  # taint the master so that application pods are not scheduled onto it
- effect: "NoSchedule"
key: "node-role.kubernetes.io/master"
---
# KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs # kube-proxy proxy mode
---
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
local:
dataDir: /var/lib/etcd
#imageRepository: k8s.gcr.io # default image registry
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # use a registry mirror inside China
kind: ClusterConfiguration
kubernetesVersion: 1.24.1 # k8s version to install
controlPlaneEndpoint: api.k8s.local:6443 # control-plane endpoint; api.k8s.local is the hostname mapped to the VIP earlier
apiServer:
extraArgs:
authorization-mode: Node,RBAC
timeoutForControlPlane: 4m0s
  certSANs: # add the other master nodes (names and IPs) here
- api.k8s.local
- master134
- master135
- master136
- 192.168.46.134
- 192.168.46.135
- 192.168.46.136
networking:
dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # pod network CIDR
  serviceSubnet: 10.96.0.0/12 # service network CIDR
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 0s
cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
flushFrequency: 0
options:
json:
infoBufferSize: "0"
verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
kubeadm config images list --config=kubeadm.yaml
Pulling images is a capability provided by the container runtime (containerd); if all you need is to pull images, you can also do that directly with containerd.
kubeadm config images pull --config=kubeadm.yaml
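As mentioned above, individual images can also be pre-pulled with containerd's own CLI; for example (a sketch — the image name/tag comes from the list printed by the previous command, and k8s.io is the containerd namespace the kubelet uses):
ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.1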
# --upload-certs: upload the certificates shared between all master instances to the cluster
kubeadm init --upload-certs --config kubeadm.yaml
The output looks like this:
Note the warning that firewalld was not turned off; our chosen approach in the preparation steps was to keep the firewall on and open the required ports instead, so the warning can be ignored.
[root@localhost k8s]#
[root@master134 k8s]# kubeadm init --upload-certs --config kubeadm.yaml
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.k8s.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master134 master135 master136] and IPs [10.96.0.1 192.168.46.134 192.168.46.135 192.168.46.136]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master134] and IPs [192.168.46.134 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master134] and IPs [192.168.46.134 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.522536 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
c59c58c739ffeb9c18940b57af4221caea1b7f6e162032874e53831940092e5a
[mark-control-plane] Marking the node master134 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master134 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join api.k8s.local:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:61f554bb9ff2d2e17d5fc33bec864a743ac3588489e3c5ef6c220f7b6e400076 \
--control-plane --certificate-key c59c58c739ffeb9c18940b57af4221caea1b7f6e162032874e53831940092e5a
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join api.k8s.local:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:61f554bb9ff2d2e17d5fc33bec864a743ac3588489e3c5ef6c220f7b6e400076
[root@master134 k8s]#
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Tip: in my own setup this step was also performed only on the first master, but I actually ran it after installing containerd and before installing kubelet; it is recorded here in these notes, and in theory running it at this point works just as well.
mkdir -p /etc/kubernetes/manifests/
# Configure the VIP (it must match the VIP added to /etc/hosts earlier)
export VIP=192.168.46.10
# Network interface name
export INTERFACE=ens33
# Pull the kube-vip image
ctr image pull docker.io/plndr/kube-vip:v0.3.8
# Generate the static Pod manifest
ctr run --rm --net-host docker.io/plndr/kube-vip:v0.3.8 vip \
/kube-vip manifest pod \
--interface $INTERFACE \
--vip $VIP \
--controlplane \
--services \
--arp \
--leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml
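Once kubelet starts the static Pod, the VIP should be bound to the chosen interface; a quick way to confirm (a sketch, assuming the interface and VIP set above):
ip addr show ens33 | grep 192.168.46.10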
Run the join command printed at the end of the kubeadm init output in the previous step:
kubeadm join api.k8s.local:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:61f554bb9ff2d2e17d5fc33bec864a743ac3588489e3c5ef6c220f7b6e400076 \
--control-plane --certificate-key c59c58c739ffeb9c18940b57af4221caea1b7f6e162032874e53831940092e5a
The output looks like this:
[root@node135 tmp]#
[root@node135 tmp]# kubeadm join api.k8s.local:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:61f554bb9ff2d2e17d5fc33bec864a743ac3588489e3c5ef6c220f7b6e400076 \
> --control-plane --certificate-key c59c58c739ffeb9c18940b57af4221caea1b7f6e162032874e53831940092e5a
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [api.k8s.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master134 master135 master136] and IPs [10.96.0.1 192.168.46.135 192.168.46.134 192.168.46.136]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master135] and IPs [192.168.46.135 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master135] and IPs [192.168.46.135 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master135 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master135 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
[root@node135 tmp]#
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
mkdir -p /etc/kubernetes/manifests/
# Configure the VIP (it must match the VIP added to /etc/hosts earlier)
export VIP=192.168.46.10
# Network interface name
export INTERFACE=ens33
# Pull the kube-vip image
ctr image pull docker.io/plndr/kube-vip:v0.3.8
# Generate the static Pod manifest
ctr run --rm --net-host docker.io/plndr/kube-vip:v0.3.8 vip \
/kube-vip manifest pod \
--interface $INTERFACE \
--vip $VIP \
--controlplane \
--services \
--arp \
--leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml
Check the kube-vip members
kubectl get pods -A | grep vip
kubectl get nodes
Copy the config file $HOME/.kube/config from the first master to the same location ($HOME/.kube/) on the node.
Note: if the node does not have that directory yet, create it with mkdir -p $HOME/.kube/
mkdir -p $HOME/.kube/
cd $HOME/.kube/
# then copy the file into $HOME/.kube/
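For example, from the node you could pull the file over with scp (a sketch, assuming root SSH access to the first master):
scp root@192.168.46.134:/root/.kube/config $HOME/.kube/config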
Get the join command
kubeadm token create --print-join-command
Sample output
[root@localhost .kube]# kubeadm token create --print-join-command
kubeadm join 192.168.46.134:6443 --token rywtzu.jezeup0ya5swy9id --discovery-token-ca-cert-hash sha256:feab3ae0749eb98c63e0e7e1da8ac6e5c389ca4acaf371736244f719d37ae434
[root@localhost .kube]#
Run the join command you obtained
kubeadm join 192.168.46.134:6443 --token rywtzu.jezeup0ya5swy9id --discovery-token-ca-cert-hash sha256:feab3ae0749eb98c63e0e7e1da8ac6e5c389ca4acaf371736244f719d37ae434
Note: flannel only needs to be installed once; nodes that join later get it automatically. However, the CNI plugin handling in step 6 must be done on every node.
Note: to avoid nodes getting stuck in NotReady, install the flannel network plugin first.
Tip: containers on a bridge network cannot communicate across hosts; cross-host communication needs an additional CNI plugin such as Flannel.
Option 1: download it with wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Note: this site is blocked in some networks, so the download may well fail.
Option 2: create the kube-flannel.yml file directly
vim kube-flannel.yml
The kube-flannel manifest for v0.18.1 is as follows (just paste it in):
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
defaultAddCapabilities: []
requiredDropCapabilities: []
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
seLinux:
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni-plugin
image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
command:
- cp
args:
- -f
- /flannel
- /opt/cni/bin/flannel
volumeMounts:
- name: cni-plugin
mountPath: /opt/cni/bin
- name: install-cni
image: rancher/mirrored-flannelcni-flannel:v0.18.1
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: rancher/mirrored-flannelcni-flannel:v0.18.1
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: EVENT_QUEUE_DEPTH
value: "5000"
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: xtables-lock
mountPath: /run/xtables.lock
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni-plugin
hostPath:
path: /opt/cni/bin
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
If you downloaded kube-flannel.yml directly, the image registry in it is quay.io; because of the firewall, pulling from it may take forever or never finish, so you can switch the images to the rancher registry, i.e. the registry addresses used in the kube-flannel.yml given above.
vim kube-flannel.yml
......
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.14.0
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
        - --iface=eth0 # if the host has multiple NICs, specify the name of the internal NIC
......
kubectl apply -f kube-flannel.yml
Tip: the rollout may take a while; you can check progress with:
kubectl describe pod -n {namespace} {podName}
kubectl get pod -o wide -A
Note: if coredns is still stuck in a creating state at this point, you can ignore it for now; it usually recovers after the "fix pod IP addresses" step below.
Note: flannel runs one pod per node, so however many nodes the cluster has, that many flannel pods will be installed.
Looking at the pods, we can see that some of them have addresses starting with 10.88:
But when we configured kubeadm.yaml earlier, the pod network was clearly set to podSubnet: 10.244.0.0/16.
First, check the CNI configuration files on every node (masters and workers):
ls -la /etc/cni/net.d/
You can see three configs in there: one is 10-containerd-net.conflist, another is the config generated by the Flannel network plugin we created above, and the third is the config of the nerdctl bridge plugin. What we actually want is the Flannel one.
When the directory /etc/cni/net.d/ contains multiple CNI config files (files ending in .conflist), kubelet uses the first one in lexical filename order, so to make kubelet pick Flannel's config we need to remove the file that sorts before it (or rename it so it no longer ends in .conflist).
Run the following on **every node (masters and workers)** in the cluster:
# Previously 10-containerd-net.conflist was the config in effect; the subnet in it starts with 10.88.,
# which is why pods got 10.88.x.x addresses. "Remove" it so it is no longer used as a config file:
mv /etc/cni/net.d/10-containerd-net.conflist /etc/cni/net.d/10-containerd-net.conflist.bak
# If this command errors out, the error can be ignored
ifconfig cni0 down && ip link delete cni0
systemctl daemon-reload
systemctl restart containerd kubelet
Recreate the pods that have the wrong IP address.
Tip: as soon as a pod is deleted, k8s immediately recreates it, which effectively rebuilds it.
kubectl delete pod -n {namespace} {podName}
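One way to find the affected pods is to filter on the old 10.88 address range (a sketch):
kubectl get pod -o wide -A | grep ' 10\.88\.'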
Check again
kubectl get pod -o wide -A
The IP addresses now start with 10.244, as expected.
kubectl get nodes
Note: if a worker node is still in NotReady state at this point, you may need to install the network plugin on it separately; see here.
# kubectl get pods -n {namespace}
kubectl get pods -n kube-system
Note: sometimes pods take a little while to start; wait a bit and the status will change to Running.
Error scenario (example):
To see the detailed error information:
Use describe to inspect the pod's error details
# kubectl describe pods -n {namespace} {podName}
kubectl describe pods -n kube-system coredns-7f74c56694-stmch
The output is as follows (we mainly care about the Events at the bottom):
[root@localhost net.d]# kubectl describe pods -n kube-system coredns-7f74c56694-stmch
Name:                 coredns-7f74c56694-stmch
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 master134/192.168.46.134
Start Time:           Fri, 29 Jul 2022 02:07:09 -0700
Labels:               k8s-app=kube-dns
                      pod-template-hash=7f74c56694
Annotations:          <none>
Status:               Pending
IP:
IPs:                  <none>
Controlled By:        ReplicaSet/coredns-7f74c56694
Containers:
  coredns:
    Container ID:
    Image:         registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
    Image ID:
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:     100m
      memory:  70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k6mrr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-k6mrr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                    From     Message
  ----     ------                  ----                   ----     -------
  Warning  FailedCreatePodSandBox  2m54s (x385 over 86m)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e23b5842b1d9acca7f10428319bf58d1d4c69b63e4bb57bc5adfe224f676b74b": plugin type="bridge" failed (add): incompatible CNI versions; config is "1.0.0", plugin supports ["0.1.0" "0.2.0" "0.3.0" "0.3.1" "0.4.0"]
Here the Events section tells us:
Warning FailedCreatePodSandBox 2m54s (x385 over 86m) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e23b5842b1d9acca7f10428319bf58d1d4c69b63e4bb57bc5adfe224f676b74b": plugin type="bridge" failed (add): incompatible CNI versions; config is "1.0.0", plugin supports ["0.1.0" "0.2.0" "0.3.0" "0.3.1" "0.4.0"]
That is, the CNI plugin in use does not support the configured CNI version. Ways to fix it:
Option 1: the affected node skipped the flannel CNI config selection, i.e. step 6 of the "install the flannel plugin" section was not done on it; performing step 6 of that section on the node fixes it.
Option 2 (use this when you are not using flannel but containerd's own network plugin instead): the cause is that the matching containernetworking/plugins binaries were not installed along with containerd (or were overwritten later); following the steps in "CentOS7安装容器运行时containerd", reinstall the containernetworking/plugins binaries on all nodes and things return to normal.
With all masters and worker nodes installed, we can now test high availability.
# List all pods whose name contains "vip"
kubectl get pod -o wide -A | grep vip
# Tail a pod's logs
kubectl logs -f -n {namespace} {podname}
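A simple failover check (a sketch; the VIP and endpoint are the ones configured earlier): find which master currently holds the VIP, power that master off, then confirm the API is still reachable through the VIP:
# on each master: see who currently holds the VIP
ip addr | grep 192.168.46.10
# after shutting that master down, run this from another node
kubectl --server=https://api.k8s.local:6443 get nodes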
If the API stays reachable through the VIP, high availability is working.
Dashboard is a web UI for managing k8s visually.
Note: the install can be run from any node in the cluster; k8s decides which node it actually lands on. I ran the install on a master, but k8s scheduled it onto another (worker) node.
Since the k8s installed here is version 1.24.1, dashboard v2.6.0 is chosen; see here for the mapping between dashboard and k8s versions.
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
Note: if you really cannot download it, you can fetch the corresponding source code from the official site; the aio/deploy/ directory of the source package also contains recommended.yaml.
For convenience, the content of recommended.yaml is provided here directly (no download needed):
mkdir -p /usr/local/tmp/
cd /usr/local/tmp/
vim recommended.yaml
Paste in the following content:
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
      nodePort: 30000 # pin a fixed NodePort
selector:
k8s-app: kubernetes-dashboard
  type: NodePort # make this a NodePort type Service
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.6.0
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
      nodePort: 30001 # pin a fixed NodePort
selector:
k8s-app: dashboard-metrics-scraper
  type: NodePort # make this a NodePort type Service
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.8
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
vim recommended.yaml
In the Service objects (kind: Service), add type: NodePort to turn them into NodePort services, and specify the nodePort (make sure that port is open on the corresponding nodes).
kubectl apply -f recommended.yaml
Tip: the rollout may take a while; you can check progress with:
kubectl describe pod -n {namespace} {podName}
kubectl get pod -o wide -A
Note: if something goes wrong, you can look into the error from these two angles:
# Inspect the pod details
kubectl describe pod -n {namespace} {podName}
# Check the pod logs
kubectl logs -f -n {namespace} {podName}
kubectl get svc -n kubernetes-dashboard
Access the page
URL: https://{nodeIP}:{port}
The dashboard pod ended up on node139, whose IP is 192.168.46.139, so here we open https://192.168.46.139:30000
Generate a token (this must be done on a master)
Note: any one of the masters will do
Since Kubernetes v1.24.0, creating a ServiceAccount no longer automatically generates a Secret, so the Secret has to be created manually.
Create the ServiceAccount
cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: dashboard-admin
namespace: kube-system
EOF
Create a Secret (token) resource for the account
cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: dashboard-admin
namespace: kube-system
annotations:
kubernetes.io/service-account.name: "dashboard-admin"
EOF
Bind the account to the cluster-admin role
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
Inspect the Secret to retrieve the token
kubectl describe secrets dashboard-admin -n kube-system
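On v1.24+ you can also mint a short-lived token directly from the ServiceAccount instead of reading the Secret (an alternative to the approach above):
kubectl -n kube-system create token dashboard-admin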
Log in to the dashboard with the token you obtained.