# ewem-new

**Repository Path**: jsj-luojie/ewem-new

## Basic Information

- **Project Name**: ewem-new
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-11-24
- **Last Updated**: 2025-11-24

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# One Item, One Code System (一物一码系统)

The project is built on the ruoyi-vue-pro development framework, using a Spring Boot multi-module architecture.

## Updating the source code

```bash
```

## Building the source

```bash
mvn clean package -Dmaven.test.skip=true
```

## Building the Docker image

[//]: # (```bash)
[//]: # (docker buildx build --platform=linux/amd64 -f ./xidyun-server/Dockerfile -t registry.cn-chengdu.aliyuncs.com/scxed_repo/xidyun-server:4.65 .)
[//]: # (```)

```bash
docker build -f ./xidyun-server/Dockerfile -t registry.cn-chengdu.aliyuncs.com/scxed_repo/ewem-sso:0.0.48 .
```

## Pushing the image

```bash
docker push registry.cn-chengdu.aliyuncs.com/scxed_repo/ewem-sso:0.0.48
```

```bash
# Note: with --network host the -p mapping is redundant; the container uses the host's ports directly
docker run -d --restart=always --network host -p 48080:48080/tcp --name xidyun-iot registry.cn-chengdu.aliyuncs.com/scxed_repo/xidyun-iot:1.0.0
docker pull registry.cn-chengdu.aliyuncs.com/scxed_repo/xidyun-server:latest
```

## K3s deployment YAML

Run a single TDengine node with persistent data/log volumes and its service ports exposed:

```bash
docker run -d -v ~/data/taos/dnode/data:/var/lib/taos -v ~/data/taos/dnode/log:/var/log/taos -p 6030:6030 -p 6041:6041 -p 6043-6060:6043-6060 -p 6043-6060:6043-6060/udp registry.cn-chengdu.aliyuncs.com/tdengine:1.0
```

Configure the image registry credentials (pull Secret):

```yaml
---
apiVersion: v1
data:
  .dockerconfigjson: >-
    eyJhdXRocyI6eyJodHRwczovL3JlZ2lzdHJ5LmNuLWNoZW5nZHUuYWxpeXVuY3MuY29tIjp7InVzZXJuYW1lIjoid2FuZ3RAc2Mtc2hpZWxkLmNvbSIsInBhc3N3b3JkIjoiWGVkMjAyNDAzMTEiLCJhdXRoIjoiZDJGdVozUkFjMk10YzJocFpXeGtMbU52YlRwWVpXUXlNREkwTURNeE1RPT0ifX19
immutable: false
kind: Secret
metadata:
  name: xed
  namespace: default
type: kubernetes.io/dockerconfigjson
```

## Workload (Deployment) YAML

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    k8s.kuboard.cn/displayName: 一物一码
    k8s.kuboard.cn/workload: xidyun-server
  labels:
    k8s.kuboard.cn/layer: svc
    k8s.kuboard.cn/name: xidyun-server
  name: xidyun-server
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.kuboard.cn/layer: svc
      k8s.kuboard.cn/name: xidyun-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: '2024-05-11T15:02:46+08:00'
      creationTimestamp: null
      labels:
        k8s.kuboard.cn/layer: svc
        k8s.kuboard.cn/name: xidyun-server
    spec:
      containers:
        - image: 'registry.cn-chengdu.aliyuncs.com/scxed_repo/xidyun-server:2.3'
          imagePullPolicy: Always
          name: xidyun-server
          ports:
            - containerPort: 48080
              hostPort: 48080
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: xed
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
    - lastTransitionTime: '2024-05-09T03:16:03Z'
      lastUpdateTime: '2024-05-09T03:16:03Z'
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: 'True'
      type: Available
    - lastTransitionTime: '2024-04-30T07:36:57Z'
      lastUpdateTime: '2024-05-11T07:03:20Z'
      message: ReplicaSet "xidyun-server-5496ff4659" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: 'True'
      type: Progressing
  observedGeneration: 33
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
```
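A minimal sketch of applying the two manifests above with kubectl (the file names `xed-secret.yaml` and `xidyun-server-deployment.yaml` are illustrative, not files in this repo):

```bash
# Apply the image-pull Secret and the Deployment (file names are illustrative)
kubectl apply -f xed-secret.yaml
kubectl apply -f xidyun-server-deployment.yaml

# Wait for the rollout to complete and check the resulting pod
kubectl rollout status deployment/xidyun-server -n default
kubectl get pods -n default -l k8s.kuboard.cn/name=xidyun-server -o wide
```

The `status:` section in the Deployment above is cluster-reported state from an existing rollout; it is ignored when the manifest is applied.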
# Kuboard

Kuboard is deployed on a server separate from the K3s cluster and is configured to access that cluster. This separates the management UI from the actual workloads, improving flexibility and security.

Create a Service Account in the K3s cluster for Kuboard to use, and generate a kubeconfig file for it:

```bash
# Create the Service Account
kubectl create serviceaccount kuboard-sa -n kube-system

# Bind the cluster-admin role
kubectl create clusterrolebinding kuboard-sa-admin --clusterrole=cluster-admin --serviceaccount=kube-system:kuboard-sa

# Get the Service Account's token
TOKEN=$(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-sa | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode)

# Get the cluster's API server address
SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
```

# Create the kubeconfig file

```bash
cat <<EOF > kuboard-kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: $SERVER
    insecure-skip-tls-verify: true
  name: k3s-cluster
contexts:
- context:
    cluster: k3s-cluster
    user: kuboard-sa
  name: kuboard-context
current-context: kuboard-context
users:
- name: kuboard-sa
  user:
    token: $TOKEN
EOF
```

Deploy Kuboard on the target server. On the target server you can run Kuboard with Docker:

```bash
docker run -d --name kuboard -p 30080:80 \
  -e KUBOARD_ENDPOINT="http://localhost:30080" \
  -e KUBECONFIG=/etc/kubeconfig/kuboard-kubeconfig.yaml \
  -v /path/to/kuboard-kubeconfig.yaml:/etc/kubeconfig/kuboard-kubeconfig.yaml \
  eipwork/kuboard:v3
```
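A quick sanity check, assuming the `kuboard-kubeconfig.yaml` generated above and the `kuboard` container name from the command above:

```bash
# Verify the generated kubeconfig can reach the K3s API server
kubectl --kubeconfig ./kuboard-kubeconfig.yaml get nodes

# Confirm the Kuboard container started and its UI answers on port 30080
docker logs kuboard --tail 20
curl -I http://localhost:30080
```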