假诗人 / PowerJob
Consistently reproducible: org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: "PUBLIC.pkey_INDEX_A ON PUBLIC.task_info(instance_id, task_id) VALUES 2"

Status: To do · #ID5JWN · 常照虎 · Created 2025-11-07 16:06
=================== Code =====================

```java
@Slf4j
@Component
@RequiredArgsConstructor
public class MapReduceProcessorDemo implements MapReduceProcessor {

    @Resource(name = "taskThreadPoolTaskExecutor")
    private ThreadPoolTaskExecutor taskThreadPoolTaskExecutor;

    private final ProvinceAndIndustryMapper provinceAndIndustryMapper;

    @Override
    public ProcessResult process(TaskContext context) throws Exception {
        // PowerJob's logging API supports multiple log modes selectable from the console
        // (online view / local print). Best practice: log everything through OmsLogger;
        // use online logs during development for convenience, then switch to local logs
        // in production, which behaves the same as plain SLF4J.
        OmsLogger omsLogger = context.getOmsLogger();
        // Whether this is the root task; the root task usually dispatches the sub-tasks
        boolean isRootTask = isRootTask();
        if (isRootTask) {
            log.info("Entered root task, task parameters: {}", context);
            omsLogger.info("Entered root task, task parameters: {}", context);
            int batchNum = (int) provinceAndIndustryMapper.selectCountByQuery(QueryWrapper.create().isNotNull("id"));
            // How many file IDs each sub-task carries. Larger values make each sub-task
            // "heavier" and retries on failure more expensive; smaller values make
            // sub-tasks lighter but raise the shard count and PowerJob's bookkeeping
            // overhead. Tune according to business needs.
            int batchSize = 10000;
            int batchCount = BatchUtils.calcBatchCount(batchNum, batchSize);
            log.info("Total: {}, batch size: {}, computed batch count: {}", batchNum, batchSize, batchCount);
            omsLogger.info("Total: {}, batch size: {}, computed batch count: {}", batchNum, batchSize, batchCount);
            // Simulate reading num IDs from a file; each sub-task carries batchSize IDs as one shard
            for (int i = 1; i <= batchCount; i++) {
                // Build the sub-task
                try {
                    SubTask subTask = new SubTask(i, batchSize);
                    map(Lists.newArrayList(subTask), "L1_TEST_PROCESS");
                    log.info("Sub-task submitted successfully: {}", subTask);
                    omsLogger.info("Sub-task submitted successfully: {}", subTask);
                } catch (Exception e) {
                    // Note: the MAP operation may throw; catch and handle as needed
                    omsLogger.error("[MapReduceDemo] map task failed!", e);
                    throw e;
                }
            }
        }
        Future<Boolean> flag = taskThreadPoolTaskExecutor.submit(() -> {
            log.info("Entered sub-task, task parameters: {}", context);
            omsLogger.info("Entered sub-task, task parameters: {}", context);
            if (context.getTaskName().equals("L1_TEST_PROCESS")) {
                SubTask subTask = (SubTask) context.getSubTask();
                log.info("Entered sub-task, sub-task payload: {}", subTask);
                omsLogger.info("Entered sub-task, sub-task payload: {}", subTask);
                return true;
            } else {
                return false;
            }
        });
        return flag.get() ? new ProcessResult(true, "success") : new ProcessResult(false, "failed");
    }

    @Override
    public ProcessResult reduce(TaskContext context, List<TaskResult> taskResults) {
        OmsLogger omsLogger = context.getOmsLogger();
        // Sub-task results can be large; reporting them to the online log causes IO
        // pressure, so print them to the local log instead
        log.info("All tasks finished, results: {}", JSONObject.toJSONString(taskResults));
        omsLogger.info("All tasks finished, results: {}", JSONObject.toJSONString(taskResults));
        log.info("================ all tasks finished, aggregating results ================");
        omsLogger.info("================ all tasks finished, aggregating results ================");
        // reduce runs after all tasks complete; taskResults holds every sub-task's
        // result. (Because reduce keeps all results in memory, very large jobs can
        // incur huge memory overhead; avoid it for massive computations, or use
        // streaming reduce, which is under development.)
        // Example usage: tally execution results
        AtomicLong successCnt = new AtomicLong(0);
        AtomicLong failedCnt = new AtomicLong(0);
        taskResults.forEach(tr -> {
            if (tr.isSuccess()) {
                successCnt.incrementAndGet();
            } else {
                failedCnt.incrementAndGet();
            }
        });
        double successRate = 1.0 * successCnt.get() / (successCnt.get() + failedCnt.get());
        String resultMsg = String.format("succeeded: %d, failed: %d, success rate: %f",
                successCnt.get(), failedCnt.get(), successRate);
        log.info("Aggregation finished: {}", resultMsg);
        omsLogger.info("Aggregation finished: {}", resultMsg);
        // The reduce result becomes the real result of the whole job
        if (successRate > 0.8) {
            return new ProcessResult(true, resultMsg);
        } else {
            return new ProcessResult(false, resultMsg);
        }
    }

    @Data
    @AllArgsConstructor
    public static class SubTask implements Serializable {
        /**
         * Reminder: a no-arg constructor is mandatory
         */
        public SubTask() {
        }

        private Integer pageNumber;
        private Integer pageSize;
    }
}
```

=========================== Dependency ================================

```xml
<dependency>
    <groupId>tech.powerjob</groupId>
    <artifactId>powerjob-worker-spring-boot-starter</artifactId>
    <version>5.1.2</version>
</dependency>
```

========================== Attempted fixes ===========================

Deleted all files under C:\Users\zhaoh\powerjob and restarted the project; the conflict is still reported!
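The `BatchUtils.calcBatchCount` helper used above is not shown in the report. A minimal sketch, assuming it is plain ceiling division (consistent with the logs, where 10 000-row shards produce pageNumber values 17, 18, …):

```java
public class BatchUtils {

    /**
     * Number of batches needed to cover `total` items at `batchSize` items per
     * batch, i.e. ceil(total / batchSize) in integer arithmetic.
     * Hypothetical reconstruction; the real helper is not shown in the report.
     */
    public static int calcBatchCount(int total, int batchSize) {
        return (total + batchSize - 1) / batchSize;
    }

    public static void main(String[] args) {
        System.out.println(calcBatchCount(180000, 10000)); // 18
        System.out.println(calcBatchCount(180001, 10000)); // 19: a partial batch still counts
    }
}
```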
=========================== Task screenshot ===================================

[Screenshot](https://foruda.gitee.com/images/1762502429274646776/bbfcaa28_9543771.png "Screenshot")

============================ Logs ============================

```
instanceId=867071326349688960, subInstanceId=867071326349688960, taskId=0.16, taskName=L1_TEST_PROCESS, jobParams=测试分片任务参数, instanceParams=null, maxRetryTimes=1, currentRetryTimes=0, subTask=MapReduceProcessorDemo.SubTask(pageNumber=17, pageSize=10000), omsLogger=tech.powerjob.worker.log.impl.OmsServerLogger@38a68894, userContext=null, workflowContext=tech.powerjob.worker.core.processor.WorkflowContext@ba49278, instanceMeta=InstanceMeta(ett=1762501913608))
2025-11-07 15:51:58.718  INFO [emss-task-service,250c7e9e3b5f8115,250c7e9e3b5f8115] [41144] [ask-executor-10] c.sgcc.emss.task.MapReduceProcessorDemo : Entered sub-task, sub-task payload: MapReduceProcessorDemo.SubTask(pageNumber=17, pageSize=10000)
2025-11-07 15:51:58.875  WARN [emss-task-service,,] [41144] [orker-thread-16] t.p.w.p.DbTaskPersistenceService : [TaskPersistenceService] [Slow] [867071326349688960] batchSave cost 236ms
2025-11-07 15:51:58.886 ERROR [emss-task-service,,] [41144] [orker-thread-16] t.p.w.p.DbTaskPersistenceService : [TaskPersistenceService] batchSave tasks([{taskId='0.0', instanceId=867071326349688960, subInstanceId=867071326349688960, taskName='L1_TEST_PROCESS', address='null', status=1, result='null', failedCnt=0, createdTime=1762501918639, lastModifiedTime=1762501918639, lastReportTime=-1}]) failed.
org.h2.jdbc.JdbcBatchUpdateException: Unique index or primary key violation: "PUBLIC.pkey_INDEX_A ON PUBLIC.task_info(instance_id, task_id) VALUES 2"; SQL statement:
insert into task_info(task_id, instance_id, sub_instance_id, task_name, task_content, address, status, result, failed_cnt, created_time, last_modified_time, last_report_time) values (?,?,?,?,?,?,?,?,?,?,?,?) [23505-200]
	at org.h2.jdbc.JdbcPreparedStatement.executeBatch(JdbcPreparedStatement.java:1235) ~[h2-1.4.200.jar:1.4.200]
	at com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:127) ~[HikariCP-4.0.3.jar:na]
	at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java) ~[HikariCP-4.0.3.jar:na]
	at tech.powerjob.worker.persistence.db.TaskDAOImpl.batchSave(TaskDAOImpl.java:74) ~[powerjob-worker-5.1.2.jar:5.1.2]
	at tech.powerjob.worker.persistence.DbTaskPersistenceService.lambda$batchSave$0(DbTaskPersistenceService.java:70) ~[powerjob-worker-5.1.2.jar:5.1.2]
	at tech.powerjob.common.utils.CommonUtils.executeWithRetry(CommonUtils.java:47) ~[powerjob-common-5.1.2.jar:5.1.2]
	at tech.powerjob.worker.persistence.DbTaskPersistenceService.execute(DbTaskPersistenceService.java:330) ~[powerjob-worker-5.1.2.jar:5.1.2]
	at tech.powerjob.worker.persistence.DbTaskPersistenceService.batchSave(DbTaskPersistenceService.java:70) ~[powerjob-worker-5.1.2.jar:5.1.2]
	at tech.powerjob.worker.persistence.SwapTaskPersistenceService.persistTask2Db(SwapTaskPersistenceService.java:355) [powerjob-worker-5.1.2.jar:5.1.2]
	at tech.powerjob.worker.persistence.SwapTaskPersistenceService.batchSave(SwapTaskPersistenceService.java:156) [powerjob-worker-5.1.2.jar:5.1.2]
	at tech.powerjob.worker.core.tracker.task.heavy.HeavyTaskTracker.submitTask(HeavyTaskTracker.java:294) [powerjob-worker-5.1.2.jar:5.1.2]
	at tech.powerjob.worker.actors.TaskTrackerActor.onReceiveProcessorMapTaskRequest(TaskTrackerActor.java:108) [powerjob-worker-5.1.2.jar:5.1.2]
	at sun.reflect.GeneratedMethodAccessor112.invoke(Unknown Source) ~[na:na]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_361]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_361]
	at tech.powerjob.remote.http.HttpVertxCSInitializer.lambda$buildRequestHandler$3(HttpVertxCSInitializer.java:140) [powerjob-remote-impl-http-5.1.2.jar:5.1.2]
	at io.vertx.ext.web.impl.BlockingHandlerDecorator.lambda$handle$0(BlockingHandlerDecorator.java:48) ~[vertx-web-4.3.7.jar:4.3.7]
	at io.vertx.core.impl.ContextBase.lambda$null$0(ContextBase.java:137) ~[vertx-core-4.3.7.jar:4.3.7]
	at io.vertx.core.impl.ContextInternal.dispatch(ContextInternal.java:264) ~[vertx-core-4.3.7.jar:4.3.7]
	at io.vertx.core.impl.ContextBase.lambda$executeBlocking$1(ContextBase.java:135) ~[vertx-core-4.3.7.jar:4.3.7]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_361]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_361]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.84.Final.jar:4.1.84.Final]
	at java.lang.Thread.run(Thread.java:750) ~[na:1.8.0_361]
2025-11-07 15:51:58.890  WARN [emss-task-service,,] [41144] [           PPP-1] t.p.w.c.p.r.HeavyProcessorRunnable : [ProcessorRunnable-867071326349688960] task(id=0,name=OMS_ROOT_TASK) process failed.
tech.powerjob.common.exception.PowerJobCheckedException: map failed for task: L1_TEST_PROCESS
	at tech.powerjob.worker.core.processor.sdk.MapProcessor.map(MapProcessor.java:62) ~[powerjob-worker-5.1.2.jar:5.1.2]
	at com.sgcc.emss.task.MapReduceProcessorDemo.process(MapReduceProcessorDemo.java:60) ~[classes/:na]
	at tech.powerjob.worker.core.processor.runnable.HeavyProcessorRunnable.innerRun(HeavyProcessorRunnable.java:94) [powerjob-worker-5.1.2.jar:5.1.2]
	at tech.powerjob.worker.core.processor.runnable.HeavyProcessorRunnable.run(HeavyProcessorRunnable.java:247) [powerjob-worker-5.1.2.jar:5.1.2]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_361]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_361]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_361]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_361]
	at java.lang.Thread.run(Thread.java:750) [na:1.8.0_361]
2025-11-07 15:51:59.248  INFO [emss-task-service,,] [41144] [b-worker-core-0] t.p.w.b.heartbeat.WorkerHealthReporter : [WorkerHealthReporter] report health status,appId:1,appName:emss-task-service,isOverload:false,maxLightweightTaskNum:1024,currentLightweightTaskNum:0,maxHeavyweightTaskNum:64,currentHeavyweightTaskNum:1
2025-11-07 15:52:03.644  INFO [emss-task-service,120616032fa99200,120616032fa99200] [41144] [ask-executor-11] c.sgcc.emss.task.MapReduceProcessorDemo : Entered sub-task, task parameters: TaskContext(jobId=2, instanceId=867071326349688960, subInstanceId=867071326349688960, taskId=0.17, taskName=L1_TEST_PROCESS, jobParams=测试分片任务参数, instanceParams=null, maxRetryTimes=1, currentRetryTimes=0, subTask=MapReduceProcessorDemo.SubTask(pageNumber=18, pageSize=10000), omsLogger=tech.powerjob.worker.log.impl.OmsServerLogger@38a68894, userContext=null, workflowContext=tech.powerjob.worker.core.processor.WorkflowContext@6e6bc0d9, instanceMeta=InstanceMeta(ett=1762501913608))
2025-11-07 15:52:03.644  INFO [emss-task-service,120616032fa99200,120616032fa99200] [41144] [ask-executor-11] c.sgcc.emss.task.MapReduceProcessorDemo : Entered sub-task, sub-task payload: MapReduceProcessorDemo.SubTask(pageNumber=18, pageSize=10000)
2025-11-07 15:52:03.645  INFO [emss-task-service,9dcbbac8d3daa3c3,9dcbbac8d3daa3c3] [41144] [ask-executor-12] c.sgcc.emss.task.MapReduceProcessorDemo : Entered sub-task, task parameters: TaskContext(jobId=2,
```
Comments (2)