# YunJI

**Repository Path**: White-lby/YunJI

## Basic Information

- **Project Name**: YunJI
- **Description**: Front-end code for YunJI
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 1
- **Forks**: 0
- **Created**: 2022-11-29
- **Last Updated**: 2022-12-09

## Categories & Tags

**Categories**: Uncategorized

**Tags**: distributed, vue3, element-plus, vite

## README

# The Enterprise-Project YunJi Tutorial Officially Begins

**YunJi is a real-time big-data dashboard for epidemic prevention and control, built on venue-code data analysis. It collects, tracks, and analyzes data from venue codes, shop codes, and street codes, providing reliable, timely data for epidemic control and a data foundation for predicting future trends. The system includes modules for venue-code generation, personal-information collection, and big-data analysis. It follows a real development workflow on an enterprise-grade agile platform with a distributed microservice architecture, covering the full cycle from requirements analysis and sprint planning through coding, stress testing, code merging, and automated deployment.**

### Vue3 H5 Tutorial + Mobile Development for This Project

https://gitee.com/White-lby/vue3-h5-development.git

# 1. Scaffold Base Project

```
https://gitee.com/beiyou123/project-base-template.git
```

## Companion back end: https://gitee.com/White-lby/yunji-root.git

```
npm install   # install dependencies
npm run dev   # start the dev server
```

```
https://cn.vitejs.dev/
```

![screenshot](READMEimages/image1.png)

# 2. Libraries Used

## 2.1 element-plus

```
https://element-plus.gitee.io/zh-CN/
```

Element Plus is a Vue 3 component library open-sourced by the Ele.me front-end team for developers, designers, and product managers. It ships with matching design resources to help your site take shape quickly.

## 2.2 Axios

```
https://www.axios-http.cn/
```

Axios is a promise-based HTTP client for the browser and Node.js.

## 2.3 Vue 3

```
https://cn.vuejs.org/
```

The progressive JavaScript framework: approachable, performant, and versatile for building web front ends.

## 2.4 Vue Router

```
https://router.vuejs.org/zh/
```

Vue Router is the official router for Vue.js. It integrates deeply with the Vue core and makes building single-page applications a breeze.

## 2.5 Pinia

```
https://pinia.vuejs.org/zh/
```

Pinia is the official state-management library for Vue; it lets you share state across components and pages.

# 3. Project Structure
# 4. Calling the Venue Mock API

## 4.1 Create a test endpoint on the mock platform

![screenshot](READMEimages/image3.png)

![screenshot](READMEimages/image4.png)

## 4.2 Create girlApi.ts in the api folder

```
import http from "@/http/index"

export default {
  select: {
    name: "query",
    url: "/yunji-api/mygirlfriend",
    call: async function (params: any = {}) {
      return await http.get(this.url, params);
    }
  },
}
```

## 4.3 Add the following to index.ts in the api folder:

```
import girlApi from "./girlApi";

export { girlApi }
```

## 4.4 Add the following to Home.vue to test the endpoint

```
```

# 5. Baidu Map Component for Vue 3

## 5.1. Plugin site

https://map.heifahaizei.com/doc/index.html

## 5.2. Installation

```
npm install vue-baidu-map-3x --save
```

## 5.3. Import in main.ts

```
import BaiduMap from 'vue-baidu-map-3x'
import { BmlMarkerClusterer } from 'vue-baidu-map-3x'

const app = createApp(App)

app.use(BaiduMap, {
  // ak is the key you apply for in the Baidu Maps developer console;
  // see http://lbsyun.baidu.com/apiconsole/key
  ak: 'FMYihQ2aXcKidOkniSS9hv68QcH7gskK',
  // v: '2.0',     // defaults to 3.0
  // type: 'WebGL' // or 'API' (the default; in WebGL mode, BMap = BMapGL)
});

app.component('bml-marker-cluster', BmlMarkerClusterer)
app.mount('#app')
```

**Note**: if `app.mount('#app')` already appears earlier in the file, delete the old call; mounting twice causes serious problems.

## 5.4. Usage

![screenshot](READMEimages/image5.png)

```
```

## 5.5. Advanced: drag the marker to get the full address

```
```

# 6. QR-Code Generator vue-qrcode

## 6.1 Installation

```
npm install @chenfengyuan/vue-qrcode
```

## 6.2 Register the component in main.ts

```
import VueQrcode from '@chenfengyuan/vue-qrcode';

const app = createApp(App)
app.component(VueQrcode.name, VueQrcode);
```

## 6.3 Usage

```
```

## 6.4 Result

![screenshot](READMEimages/image6.png)

# 7. Face Recognition

## 7.1. Download the SDK

![screenshot](READMEimages/image7.png)

![screenshot](READMEimages/image8.png)

![screenshot](READMEimages/image9.png)

![screenshot](READMEimages/image10.png)

## 7.2. Unzip the SDK

### 7.2.1 Rename the downloaded file `ArcSoft_ArcFace_Java_Windows_x64_V3.0.mp4` to a `.zip` extension, then extract it.

### 7.2.2 The extracted directory looks like this:

![screenshot](READMEimages/image11.png)
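Section 7.4 below strips the `data:image/...;base64,` prefix from uploaded images on the Java side with `StrUtil.splitToArray(img, "base64,")`. The same extraction can be done on the front end before uploading; a minimal TypeScript sketch (the helper name is an assumption):

```typescript
// Strip the "data:image/...;base64," prefix from a data URL,
// leaving only the raw base64 payload. Helper name is an assumption.
function extractBase64Payload(dataUrl: string): string {
  const marker = "base64,";
  const index = dataUrl.indexOf(marker);
  if (index === -1) {
    throw new Error("not a base64 data URL");
  }
  return dataUrl.substring(index + marker.length);
}

// e.g. extractBase64Payload("data:image/png;base64,iVBORw0KGgo=")
//   → "iVBORw0KGgo="
```

Doing the split client-side keeps the request body smaller and mirrors what the server expects.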
## 7.3. Writing the Engine Wrapper Class

### 7.3.1 Import the jars you just extracted into the Java project:

![screenshot](READMEimages/image12.png)

![screenshot](READMEimages/image13.png)

### 7.3.2 Write the engine wrapper class

![screenshot](READMEimages/image14.png)

```
import com.arcsoft.face.*;
import com.arcsoft.face.enums.DetectMode;
import com.arcsoft.face.enums.DetectOrient;
import com.arcsoft.face.enums.ErrorInfo;
import com.arcsoft.face.toolkit.ImageInfo;
import org.springframework.stereotype.Service;

import java.util.ArrayList;
import java.util.List;

import static com.arcsoft.face.toolkit.ImageFactory.getRGBData;

@Service
public class MyFaceEngine {

    private static FaceEngine faceEngine;

    static {
        // obtained from the ArcSoft developer site
        String appId = "7FvDuNLSQXD63tfzWR3v1mbmL7VmRFSEjrLCvX1Zhrum";
        String sdkKey = "FeFfCbKqwYuMCGnzEHoT5a7rxhRazpsTY7amiYjYKc1a";

        // path to the dll files from the extracted SDK
        faceEngine = new FaceEngine("D:\\ArcSoft_ArcFace_Java_Windows_x64_V3.0\\libs\\WIN64");

        // activate the engine
        int errorCode = faceEngine.activeOnline(appId, sdkKey);
        if (errorCode != ErrorInfo.MOK.getValue()
                && errorCode != ErrorInfo.MERR_ASF_ALREADY_ACTIVATED.getValue()) {
            System.out.println("engine activation failed");
        }

        ActiveFileInfo activeFileInfo = new ActiveFileInfo();
        errorCode = faceEngine.getActiveFileInfo(activeFileInfo);
        if (errorCode != ErrorInfo.MOK.getValue()
                && errorCode != ErrorInfo.MERR_ASF_ALREADY_ACTIVATED.getValue()) {
            System.out.println("failed to read activation file info");
        }

        // engine configuration
        EngineConfiguration engineConfiguration = new EngineConfiguration();
        engineConfiguration.setDetectMode(DetectMode.ASF_DETECT_MODE_IMAGE);
        engineConfiguration.setDetectFaceOrientPriority(DetectOrient.ASF_OP_ALL_OUT);
        engineConfiguration.setDetectFaceMaxNum(10);
        engineConfiguration.setDetectFaceScaleVal(16);

        // feature configuration
        FunctionConfiguration functionConfiguration = new FunctionConfiguration();
        functionConfiguration.setSupportAge(true);
        functionConfiguration.setSupportFace3dAngle(true);
        functionConfiguration.setSupportFaceDetect(true);
        functionConfiguration.setSupportFaceRecognition(true);
        functionConfiguration.setSupportGender(true);
        functionConfiguration.setSupportLiveness(true);
        functionConfiguration.setSupportIRLiveness(true);
        engineConfiguration.setFunctionConfiguration(functionConfiguration);

        // initialize the engine
        errorCode = faceEngine.init(engineConfiguration);
        if (errorCode != ErrorInfo.MOK.getValue()) {
            System.out.println("engine initialization failed");
        }
    }

    public byte[] getFeature(byte[] bytes) {
        // face detection
        // e.g. new File("D:\\hm\\ArcSoft_ArcFace_Java_Windows_x64_V3.0\\a.jpg")
        ImageInfo imageInfo = getRGBData(bytes);
        List<FaceInfo> faceInfoList = new ArrayList<>();
        int errorCode = faceEngine.detectFaces(imageInfo.getImageData(), imageInfo.getWidth(),
                imageInfo.getHeight(), imageInfo.getImageFormat(), faceInfoList);
        System.out.println(faceInfoList);

        // feature extraction
        FaceFeature faceFeature = new FaceFeature();
        errorCode = faceEngine.extractFaceFeature(imageInfo.getImageData(), imageInfo.getWidth(),
                imageInfo.getHeight(), imageInfo.getImageFormat(), faceInfoList.get(0), faceFeature);
        System.out.println("feature size: " + faceFeature.getFeatureData().length);
        return faceFeature.getFeatureData();
    }

    public float compare(byte[] data1, byte[] data2) {
        // feature comparison
        FaceFeature targetFaceFeature = new FaceFeature();
        targetFaceFeature.setFeatureData(data1);
        FaceFeature sourceFaceFeature = new FaceFeature();
        sourceFaceFeature.setFeatureData(data2);

        FaceSimilar faceSimilar = new FaceSimilar();
        int errorCode = faceEngine.compareFaceFeature(targetFaceFeature, sourceFaceFeature, faceSimilar);
        System.out.println("similarity: " + faceSimilar.getScore());
        return faceSimilar.getScore();
    }
}
```

## 7.4. Business-Layer Calls

### 7.4.1 Extract a face feature

```
// strip the "data:image/...;base64," prefix from the base64 image string
String[] dataArray = StrUtil.splitToArray("<image base64 string>", "base64,");
byte[] bytes = Base64.decode(dataArray[1]);
byte[] feature = myFaceEngine.getFeature(bytes);
```

### 7.4.2 Face comparison

```
String img = userImg.getFaceImg();
String[] dataArray = StrUtil.splitToArray(img, "base64,");
byte[] bytes = Base64.decode(dataArray[1]);
byte[] feature = myFaceEngine.getFeature(bytes);

Map<String, byte[]> entries = hashOperations.entries("venue." + userImg.getVenueId());
for (Map.Entry<String, byte[]> entry : entries.entrySet()) {
    // compare similarity against each stored face
    float b = myFaceEngine.compare(feature, entry.getValue());
}
```
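The loop in 7.4.2 computes a similarity score per stored face but does not yet pick a winner. A common follow-up is threshold-based best-match selection; a TypeScript sketch of that logic (the 0.8 cutoff is an assumption, not a value from the project):

```typescript
interface MatchResult {
  userId: string | null;
  score: number;
}

// Pick the highest-scoring candidate and accept it only above a threshold.
// ArcSoft similarity scores fall in [0, 1]; the 0.8 cutoff is an assumption
// and should be tuned against real data.
function bestMatch(scores: Map<string, number>, threshold = 0.8): MatchResult {
  let best: MatchResult = { userId: null, score: 0 };
  for (const [userId, score] of scores) {
    if (score > best.score) {
      best = { userId, score };
    }
  }
  return best.score >= threshold ? best : { userId: null, score: best.score };
}
```

Returning `null` below the threshold distinguishes "no confident match" from a genuine hit, which matters when deciding whether to let a person through.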
## 7.5. Storing Features in Redis

### 7.5.1 Configuration

```
# Redis configuration
spring.redis.host=123.57.206.19
spring.redis.port=6380
```

Add the dependency:

```
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```

Create a RedisConfig class:

```
@Configuration
public class RedisConfig {

    @Bean(name = "bytesHashOperations")
    public HashOperations<String, String, byte[]> bytesRedisTemplate(RedisConnectionFactory redisConnectionFactory) {
        RedisTemplate<String, byte[]> redisTemplate = new RedisTemplate<>();
        redisTemplate.setConnectionFactory(redisConnectionFactory);
        redisTemplate.setKeySerializer(RedisSerializer.string());
        redisTemplate.setHashKeySerializer(RedisSerializer.string());
        // serialize hash values as raw byte arrays (the face features)
        redisTemplate.setHashValueSerializer(RedisSerializer.byteArray());
        redisTemplate.afterPropertiesSet();
        return redisTemplate.opsForHash();
    }
}
```

### 7.5.2 Extension

```
// inject the template; if @Autowired fails, use @Resource
@Autowired
private StringRedisTemplate stringRedisTemplate;
```

Then add the following to the face-recognition method. Note that `scan` on the template requires Spring Boot 2.7.5, so the whole project must be upgraded: set the version in the root pom.xml to 2.7.5.

```
ScanOptions scanOptions = ScanOptions.scanOptions().match("venue.*").build();
Cursor<String> cursor = stringRedisTemplate.scan(scanOptions);
while (cursor.hasNext()) {
    System.out.println(cursor.next());
}
cursor.close();
```

# 8. Camera-Based Face Capture

## 8.1. trackingjs website
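Redis `SCAN`, used in 7.5.2 above, is cursor-based: each call returns one page of keys plus a cursor for the next call, and iteration ends when the cursor comes back as 0. A dependency-free TypeScript sketch of that iteration pattern, with the key source simulated rather than a real Redis connection:

```typescript
// One SCAN page: the next cursor (0 = iteration finished) and a batch of keys.
type ScanPage = { cursor: number; keys: string[] };

// Simulate a SCAN-style paged key source over an in-memory key list.
function makeScanner(allKeys: string[], pageSize: number) {
  return (cursor: number): ScanPage => {
    const next = cursor + pageSize;
    return {
      cursor: next >= allKeys.length ? 0 : next,
      keys: allKeys.slice(cursor, next),
    };
  };
}

// Drain the scanner the same way the Java Cursor loop in 7.5.2 does.
function scanAll(scan: (c: number) => ScanPage): string[] {
  const result: string[] = [];
  let cursor = 0;
  do {
    const page = scan(cursor);
    result.push(...page.keys);
    cursor = page.cursor;
  } while (cursor !== 0);
  return result;
}
```

Unlike `KEYS venue.*`, this paged iteration never blocks the Redis server on a large keyspace, which is why the document uses `scan` rather than `keys`.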
## 8.2. Installation

```
npm install tracking
```

## 8.3 Create asset/trackingX.js with compatibility fixes

```
function getUserMedia(constrains, success, error) {
  if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    // latest standard API (promise-based)
    navigator.mediaDevices.getUserMedia(constrains).then(success).catch(error);
  } else if (navigator.webkitGetUserMedia) {
    // WebKit-based browsers (legacy callback API)
    navigator.webkitGetUserMedia(constrains, success, error);
  } else if (navigator.mozGetUserMedia) {
    // Firefox (legacy callback API)
    navigator.mozGetUserMedia(constrains, success, error);
  } else if (navigator.getUserMedia) {
    // old standard API (legacy callback API)
    navigator.getUserMedia(constrains, success, error);
  } else {
    console.log("this browser does not support getUserMedia");
  }
}

// Override initUserMedia_, because Chrome's underlying API has changed.
window.tracking.initUserMedia_ = function (element, opt_options) {
  const options = {
    video: true,
    audio: !!(opt_options && opt_options.audio)
  };
  getUserMedia(options, function (stream) {
    try {
      element.srcObject = stream;
    } catch (err) {
      element.src = window.URL.createObjectURL(stream);
    }
  }, function (e) {
    console.log(e.message);
  });
};

// Override the video-capture loop to fix the bug where stop() has no effect.
window.tracking.trackVideo_ = function (element, tracker) {
  var canvas = document.createElement('canvas');
  var context = canvas.getContext('2d');
  var width;
  var height;

  var resizeCanvas_ = function () {
    width = element.offsetWidth;
    height = element.offsetHeight;
    canvas.width = width;
    canvas.height = height;
  };
  resizeCanvas_();
  element.addEventListener('resize', resizeCanvas_);

  var requestId;
  var stopped = false;
  var requestAnimationFrame_ = function () {
    requestId = window.requestAnimationFrame(function () {
      if (element.readyState === element.HAVE_ENOUGH_DATA) {
        try {
          // Firefox ~v30.0 gets confused with the video readyState firing an
          // erroneous HAVE_ENOUGH_DATA just before HAVE_CURRENT_DATA state,
          // hence keep trying to read it until resolved.
          context.drawImage(element, 0, 0, width, height);
        } catch (err) {}
        tracking.trackCanvasInternal_(canvas, tracker);
      }
      if (stopped !== true) {
        requestAnimationFrame_();
      }
    });
  };

  var task = new tracking.TrackerTask(tracker);
  task.on('stop', function () {
    stopped = true;
    window.cancelAnimationFrame(requestId);
  });
  task.on('run', function () {
    stopped = false;
    requestAnimationFrame_();
  });
  return task.run();
};
```

## 8.4. Import the tracking libraries in main.ts

```
import "tracking";
import "tracking/build/data/face";
import "@/asset/trackingX.js"
```

## 8.5. Usage

```
```
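Usage in 8.5 would typically follow the standard tracking.js face-tracking pattern. A TypeScript sketch of that setup (the element id, tuning values, and function name are assumptions; `tracking` is the global installed by the imports in 8.4):

```typescript
// `tracking` is the global provided by importing the `tracking` package (see 8.4).
declare const tracking: any;

// Start face tracking on a <video> element. This is a sketch of the standard
// tracking.js pattern, not the project's actual code; the selector and tuning
// values are assumptions.
function startFaceTracking(videoSelector = "#video") {
  const tracker = new tracking.ObjectTracker("face");
  tracker.setInitialScale(4);
  tracker.setStepSize(2);
  tracker.setEdgesDensity(0.1);

  // camera: true asks tracking.js to open the webcam via getUserMedia,
  // which goes through the patched initUserMedia_ from 8.3.
  const task = tracking.track(videoSelector, tracker, { camera: true });

  tracker.on("track", (event: any) => {
    for (const rect of event.data) {
      console.log(`face at (${rect.x}, ${rect.y}), ${rect.width}x${rect.height}`);
    }
  });

  // task.stop() ends capture; the trackVideo_ patch in 8.3 makes stop() effective.
  return task;
}
```

The detected rectangles can then be drawn onto a canvas overlay, or a frame can be snapshotted to a data URL and sent to the face-recognition endpoint from section 7.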