# Cloud Testbench
**Repository Path**: nics-robot/cloud-testbench
## Basic Information
- **Project Name**: Cloud Testbench
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-01-16
- **Last Updated**: 2024-08-24
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
Cloud Testbench
Control and develop robots remotely.
## Installation
### Deploying Third-Party Dependencies
#### I. Harbor Deployment
1. Deploy Harbor by following the official documentation: https://goharbor.io/docs/2.10.0/install-config/
2. Create a project named `cloud-testbench` in Harbor
#### II. Kubernetes Deployment
1. Configure the base environment on the k8s nodes
Perform the following setup on every node in the cluster:
1. Assign the node a static IP
2. Configure apt to use a domestic (China) mirror
3. Disable the firewall
``` bash
sudo ufw status # check the firewall status
sudo ufw disable # disable the firewall
```
4. Disable swap
Edit `/etc/fstab`:
``` bash
vim /etc/fstab
```
Comment out the `/swapfile` line at the end of the file:
``` bash
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
# / was on /dev/sda5 during installation
UUID=1e6c2502-6a78-4a93-9ad5-ed1bd59fef22 / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/sda1 during installation
UUID=E754-628B /boot/efi vfat umask=0077 0 1
# /swapfile none swap sw 0 0
```
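Editing `/etc/fstab` only takes effect on the next boot; to turn swap off immediately in the running session, you can also run:
``` bash
sudo swapoff -a # disable all active swap devices right away
```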
5. Install and configure docker
Install docker:
``` bash
sudo apt-get update
sudo apt install -y docker.io
```
Switch the docker registry mirror to Aliyun:
Log in to Aliyun's container registry console at https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors to obtain a free mirror accelerator address, then follow the console's instructions to modify `/etc/docker/daemon.json` and restart the docker service.
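A minimal sketch of that change, assuming your personal accelerator address is `https://<your-id>.mirror.aliyuncs.com` (a placeholder; use the address shown in the console). Later steps add more keys to this same file, so merge rather than overwrite if it already has content:
``` bash
# write the mirror address into docker's daemon config (placeholder URL)
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker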
6. Install kubeadm, kubelet, and kubectl
Install the packages:
``` bash
sudo apt update && sudo apt install -y apt-transport-https
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main"
sudo apt-get update
sudo apt-cache madison kubelet kubectl kubeadm | grep '1.22.7-00'
sudo apt install -y kubelet=1.22.7-00 kubectl=1.22.7-00 kubeadm=1.22.7-00
```
Disable swap for kubelet as well:
Add `KUBELET_EXTRA_ARGS="--fail-swap-on=false"` to `/etc/default/kubelet`, then run `sudo systemctl daemon-reload && sudo systemctl restart kubelet` to restart the kubelet service.
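A minimal sketch of that change:
``` bash
# let kubelet tolerate enabled swap, then restart it
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' | sudo tee -a /etc/default/kubelet
sudo systemctl daemon-reload && sudo systemctl restart kubelet
```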
7. Change the cgroup driver
Add the following to `/etc/docker/daemon.json`:
``` json
{
  "exec-opts": [
    "native.cgroupdriver=systemd"
  ]
}
```
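After saving the file, restart docker and verify that the driver changed:
``` bash
sudo systemctl daemon-reload && sudo systemctl restart docker
sudo docker info | grep -i 'cgroup driver' # should print: Cgroup Driver: systemd
```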
2. Configure a node as the k8s Master node
On any one node in the cluster (this node will become the Master), run:
``` bash
kubeadm init \
--kubernetes-version=v1.22.7 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.24.0.0/16 \
--ignore-preflight-errors=Swap
```
On success, the command ends with output like the following:
``` bash
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.153.130:6443 --token r9u5j1.22jvtt8v9yojz9qx \
--discovery-token-ca-cert-hash sha256:8d89868d63aa863bae751ee2c848d8417ca7e464ee905afa99a15fb15a680191
```
The final part of that output:
``` bash
kubeadm join 192.168.153.130:6443 --token r9u5j1.22jvtt8v9yojz9qx \
--discovery-token-ca-cert-hash sha256:8d89868d63aa863bae751ee2c848d8417ca7e464ee905afa99a15fb15a680191
```
should be saved: running it on a node that has completed the k8s base setup joins that node to the cluster as a worker node. The output above also contains this passage:
``` bash
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
```
Following those instructions, run the commands below to set up `~/.kube`:
``` bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Finally, install calico as the k8s network plugin:
``` bash
curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O
kubectl apply -f calico.yaml
```
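You can watch calico come up and the node turn Ready with standard kubectl queries:
``` bash
kubectl get pods -n kube-system # wait until the calico pods are Running
kubectl get nodes # the master should eventually report STATUS Ready
```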
3. Configure a node as a k8s Worker node
On a node that has completed the k8s environment setup but has not yet joined the cluster, run:
``` bash
kubeadm join 192.168.153.130:6443 --token r9u5j1.22jvtt8v9yojz9qx \
--discovery-token-ca-cert-hash sha256:8d89868d63aa863bae751ee2c848d8417ca7e464ee905afa99a15fb15a680191
```
The node is now part of the cluster. On any node in the cluster, run:
``` bash
kubectl get nodes
```
to list the nodes currently in the cluster.
4. Configure the dashboard
The k8s dashboard is a web-based management UI for the cluster. It is not mandatory, but once set up it lets you inspect cluster state at a glance. Run the following command on any node in the cluster (preferably the Master):
``` bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```
Run the following command to configure the dashboard's access port:
``` bash
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
```
In the configuration below, change `type: ClusterIP` to `type: NodePort`:
``` yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
creationTimestamp: "2023-11-28T04:45:12Z"
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
resourceVersion: "1575"
uid: 51c78274-07d1-4464-8234-024143c47979
spec:
clusterIP: 10.102.183.195
clusterIPs:
- 10.102.183.195
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 32297
port: 443
protocol: TCP
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
sessionAffinity: None
  type: ClusterIP # change this line
status:
loadBalancer: {}
```
After saving, run `kubectl get svc -A | grep kubernetes-dashboard` to check the dashboard's access port:
``` bash
setsuna@setsuna-virtual-machine:~$ kubectl get svc -A |grep kubernetes-dashboard
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.101.214.16 8000/TCP 54d
kubernetes-dashboard kubernetes-dashboard NodePort 10.102.183.195 443:32297/TCP 54d
```
Here you can see that the dashboard's service port is `32297`, meaning the k8s dashboard can be reached on port `32297` of a cluster node. Logging in, however, prompts for a token; since no user has been created yet, there is no token to use, so the next step is to create a dashboard user. Create `dash.yaml`:
``` yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
```
Run the following command on any node in the cluster (preferably the Master):
``` bash
kubectl apply -f dash.yaml
```
Once the user is created, run:
``` bash
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
```
to retrieve the user's access token. The output looks like this:
``` bash
setsuna@setsuna-virtual-machine:~$ kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6IjRKQ2FnRXdHSE1yUWhZYm9zYzhIVlBwUG80dlJPQzFKSUtNVGVtalk1SnMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWNqbHFmIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiYTczMGI5ZC02NGRiLTQ1NmMtOTg0Yy04Y2VkODBlYjQ1Y2QiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.BcRQJN-wQUmHBbHKX-TOOmpwFoMlYn02xUvosrJxtpxq7SB7tGrqXeelSNE2UbvRFd7q2U4SoMbDK_A5SPoIsec5KWEauURa3QiG_1T0tomKF_0j3lTNdBl8uBOQVlmXgcnSmLfUMcesPC2lrdfKIISM-Y-5kjIuE-LjJPmaQxfSwtep6BclowtdyHJQEIXaJurk26VQkeuKxiVVFdzuIev65Qn6sx7JdtNt38Rgdw45JX9AaFronxxmVc77oVJYxfWUKnfnIvg5u8JbUYX9FrkoLowWoGy59Qt5A_z48s0G5C___4z0tBRWjYIGpmGVc7ySgjs5eDgT3IRZ6Sh5BQ
setsuna@setsuna-virtual-machine:~$
```
Save this token; it is what you enter when logging in to the k8s dashboard.
5. Configure the Macvlan network
To give containers in the k8s cluster IPs on the hosts' subnet, a Macvlan network must be configured. Perform the following steps on any node in the cluster, preferably the Master.
1. Unify the NIC names
Suppose you want every container in the cluster to receive an IP in the `192.168.1` subnet. First, make sure every node in the cluster has a NIC attached to that subnet; then rename that NIC to the same name, e.g. `ens38`, on every node, as sketched below.
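A non-persistent way to do the rename, assuming the NIC on that subnet is currently named `eth1` (a placeholder); for a rename that survives reboots, use a systemd `.link` file or netplan's `set-name` instead:
``` bash
sudo ip link set eth1 down # take the interface offline
sudo ip link set eth1 name ens38 # rename it to the cluster-wide name
sudo ip link set ens38 up # bring it back up
```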
2. Install Multus
Clone Multus:
``` bash
git clone https://github.com/k8snetworkplumbingwg/multus-cni.git
```
Deploy Multus with kubectl:
``` bash
cd multus-cni
cat ./deployments/multus-daemonset-thick.yml | kubectl apply -f -
```
3. Add an additional interface
Create the yaml file `multus-macvlan.yaml` describing the additional interface; `master`, `subnet`, `rangeStart`, `rangeEnd`, and `gateway` must be adjusted to match your environment.
``` yaml
# multus-macvlan.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "ens38",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "192.168.1.0/24",
"rangeStart": "192.168.1.50",
"rangeEnd": "192.168.1.216",
"routes": [],
"gateway": "192.168.1.1"
}
}'
```
Create the additional interface from the yaml file:
``` bash
kubectl create -f multus-macvlan.yaml
```
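Once the NetworkAttachmentDefinition exists, a pod opts into the Macvlan network through an annotation, following standard Multus usage. A minimal test pod (the pod name and image are illustrative):
``` yaml
apiVersion: v1
kind: Pod
metadata:
  name: macvlan-test # hypothetical pod name
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf # attach the interface defined above
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
```
After applying this manifest, `kubectl exec macvlan-test -- ip a` should show a second interface with an address from the `rangeStart`/`rangeEnd` range configured above.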
6. Configure the Harbor registry
1. Disable certificate verification (run on every node of the k8s cluster)
Create or edit the Docker daemon config file:
``` bash
sudo vi /etc/docker/daemon.json
```
Add the following to the file, replacing `192.168.124.143` with the address of your Harbor registry:
``` json
{
  "insecure-registries" : ["192.168.124.143"]
}
```
Restart the docker service:
``` bash
sudo systemctl restart docker
```
2. Configure Harbor registry credentials (run on every node of the k8s cluster)
Create or edit the Docker auth config file:
``` bash
sudo vi /etc/docker/config.json
```
Add the following configuration to the file:
``` json
{
"auths": {
"your-harbor-host": {
"auth": "base64_encoded_username_password"
}
}
}
```
Here, `your-harbor-host` is the Harbor host address, and `base64_encoded_username_password` is the base64-encoded `username:password` pair generated with `echo -n 'your_username:your_password' | base64`.
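For example, with Harbor's default administrator account (illustrative values only; substitute your own credentials):
``` bash
echo -n 'admin:Harbor12345' | base64
# YWRtaW46SGFyYm9yMTIzNDU=
```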
Restart the Docker service on each node so the configuration takes effect:
``` bash
sudo systemctl restart docker
```
3. Add the Harbor credentials to k8s
Run the following on any node in the cluster, preferably the Master:
``` bash
kubectl create secret docker-registry harbor-secret \
--docker-server=your-harbor-host \
--docker-username=your-username \
--docker-password=your-password \
--docker-email=your-email
```
Replace `your-harbor-host`, `your-username`, `your-password`, and `your-email` with the actual details of the Harbor registry.
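Workloads then reference this secret via `imagePullSecrets` in their pod spec so kubelet can pull from Harbor. A minimal sketch (the pod name and image path are placeholders):
``` yaml
apiVersion: v1
kind: Pod
metadata:
  name: harbor-pull-test # hypothetical pod name
spec:
  imagePullSecrets:
  - name: harbor-secret # the secret created above
  containers:
  - name: demo
    image: your-harbor-host/cloud-testbench/ct_bot_ctl_panel:v3
```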
#### III. Digital Twin Deployment
Refer to the README.md of the Gitee repository: https://gitee.com/nics-robot/ct-digital-twin
#### IV. Seafile Deployment (Optional)
Refer to the official documentation: https://cloud.seafile.com/published/seafile-manual-cn/home.md
#### V. Foxglove Deployment (Optional)
Refer to the official documentation: https://docs.foxglove.dev/docs/introduction
### Cloud Testbench Deployment
#### I. Configure the Base Environment
1. Install the base dependencies
``` bash
sudo apt update
sudo apt install python3 python3-pip mysql-server
```
2. Create the database `cloud_testbench` in MySQL, as sketched below
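A one-line sketch, assuming you connect as the MySQL root user:
``` bash
# create the backend's database (enter the MySQL root password when prompted)
sudo mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS cloud_testbench;"
```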
3. Install nodejs and npm via nvm
Run the following to install nvm:
``` bash
export NVM_DIR="$HOME/.nvm" && (
git clone https://github.com/nvm-sh/nvm.git "$NVM_DIR"
cd "$NVM_DIR"
git checkout `git describe --abbrev=0 --tags --match "v[0-9]*" $(git rev-list --tags --max-count=1)`
) && \. "$NVM_DIR/nvm.sh"
```
Add the following to `~/.bashrc` so nvm is loaded into the environment:
``` bash
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
```
Install and activate nodejs:
```bash
nvm install 18
nvm use 18
```
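You can confirm the toolchain is active:
```bash
node -v # should print a v18.x version
npm -v
```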
#### II. Push the Dependency Images to the Harbor Registry
On a machine that can reach the harbor registry (for example, the k8s master node), run:
``` bash
# Build and push the robot-control demo image
git clone https://gitee.com/nics-robot/ct-robot-controller.git
cd ct-robot-controller/docker
sudo sh build.sh
sudo docker tag ct_bot_ctl_panel:v3 192.168.124.162/cloud-testbench/ct_bot_ctl_panel:v3
sudo docker push 192.168.124.162/cloud-testbench/ct_bot_ctl_panel:v3
# Pull and push the code-server image
sudo docker pull lscr.io/linuxserver/code-server:latest
sudo docker tag lscr.io/linuxserver/code-server:latest 192.168.124.162/cloud-testbench/lscr.io/linuxserver/code-server:latest
sudo docker push 192.168.124.162/cloud-testbench/lscr.io/linuxserver/code-server:latest
```
Here, `ct_bot_ctl_panel:v3` should be replaced with the robot-control demo image version actually in use, and `192.168.124.162` must be replaced with the actual IP of the harbor registry.
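If the pushes are rejected with an authentication error, log in to the registry first with the Harbor account that owns the `cloud-testbench` project:
``` bash
sudo docker login 192.168.124.162 # prompts for the Harbor username and password
```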
#### III. Install the Project
1. Clone the code
``` bash
git clone https://gitee.com/nics-robot/cloud-testbench.git
```
2. Install the backend dependencies
``` bash
cd cloud-testbench/backend
python3 -m pip install -r requirements.txt
```
3. Install the frontend dependencies
``` bash
cd cloud-testbench/frontend
npm install
```
## Running
### Start the Backend
1. Edit the backend configuration file
Edit `cloud-testbench/backend/config/base_settings.conf`:
``` conf
[basic]
code_server_image = lscr.io/linuxserver/code-server:latest # image used by vscode web
vnc_image = ubuntu-novnc:20.04 # image used to start the vnc-client when a real robot is requested
[mysql]
username = "root" # mysql database username
password = "root" # mysql database password
[redis]
host = localhost # Redis host
port = 6379 # Redis service port
db = 0 # Redis database number
[robot]
subnet = 192.168.124 # LAN subnet where the real robots live
controller_image = ct_bot_ctl_panel:v3 # image version of the real-robot remote-control demo container
geofence_topic = /geofence/lock # topic subscribed to by the real robots' geofence node
[gzweb]
url = http://222.128.65.50:8080 # digital-twin frontend URL
[harbor]
host = 192.168.124.143 # Harbor registry IP
project = cloud-testbench # use images under this project in the Harbor registry
[image_manager]
port = 8000 # port of the image-management service on k8s worker nodes
router_commit_and_push = /image/commit_and_save # path of the image-management service on k8s worker nodes
```
2. Load the k8s config file
Copy the `config` file from the `~/.kube` directory on the k8s Master node, rename it to `kube_config`, and place it in the `cloud-testbench/backend/config` directory. For example:
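A sketch of the copy, assuming the backend host can reach the Master over SSH (`user@master-ip` is a placeholder):
``` bash
scp user@master-ip:~/.kube/config cloud-testbench/backend/config/kube_config
```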
3. Run
``` bash
cd cloud-testbench/backend
python3 main.py
```
### Start the Frontend
1. Edit the frontend configuration file
Edit `cloud-testbench/frontend/src/config/config.js`:
``` javascript
export const BACKEND_HOST = '192.168.124.142' // backend IP
export const K8S_HOST = '192.168.124.162' // k8s Master node IP
export const BASE_URL = 'http://' + BACKEND_HOST + ':8000' // backend API URL
export const GZWEB_URL = 'http://' + BACKEND_HOST + ':9190' // digital-twin web URL
export const MONITOR_URL = 'http://' + BACKEND_HOST + ':80' // monitoring webcam URL
export const RVIZWEB_URL = 'http://' + BACKEND_HOST + ':8081' // Rviz Web page URL
export const SEAFILE_URL = 'http://' + BACKEND_HOST + ':9088' // Seafile file-storage URL
export const DIGITAL_TWINS_WS_URL = 'ws://' + BACKEND_HOST + ':9090' // foxglove bridge URL for Rviz Web
```
2. Run
```bash
cd cloud-testbench/frontend
npm run dev
```