diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000000000000000000000000000000000000..6b72b339b038fdafe94d1633cb5cd72bcd0893e2
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,10 @@
+/.idea
+/.run
+.mvn
+/logs
+/*/mvnw
+/*/mvnw.cmd
+/*/logs
+/*.iml
+/target/*
+/*/target/*
\ No newline at end of file
diff --git a/README.md b/README.md
index 782811815e768e51320a66c3b97340fccb78e9bf..940029570304c93ef8b4e8fd44ff016dca047c60 100644
--- a/README.md
+++ b/README.md
@@ -1,25 +1,22 @@
-# openGauss-migration-portal
+# One-Click MySQL Migration
-### Feature Overview
+## Feature Overview
-opengauss-migration-portal is a tool written in Java that runs on Linux and integrates full migration, incremental migration, reverse migration, and data checking. opengauss-migration-portal supports one-click installation and startup of the tools above.
+gs_rep_portal is a tool written in Java that runs on Linux and integrates full migration, incremental migration, reverse migration, and data checking. gs_rep_portal supports one-click installation of these tools and lets you define a migration task; the task invokes the corresponding tool for each migration step in the order given by the user's execution plan, and shows each step's status, progress, and failure causes in real time.
-### Notes
+## Notes
-1. For the same MySQL instance and openGauss database, once reverse migration has been run after incremental migration, incremental migration must not be run again; otherwise data inconsistency will result.
+- portal requires the curl tool when running incremental migration, reverse migration, or incremental data checking.
+- Incremental migration and reverse migration of the same migration plan are never active at the same time. If a plan includes both, the user must manually stop incremental migration and then start reverse migration. Once reverse migration has been started, incremental migration can no longer be started.
+- The workspace.id used by portal may only contain lowercase letters and digits.
+- When starting multiple plans, portal requires each plan to use a distinct MySQL instance and a distinct openGauss database, and incremental migration and reverse migration must not run at the same time for the same MySQL instance and openGauss database.
-2. portal requires the curl tool when running incremental migration, reverse migration, or incremental data checking.
+## Default Directory Layout
-3. Incremental migration and reverse migration cannot run at the same time. If a plan includes both, the user must manually stop incremental migration and then start reverse migration.
-
-4. The workspace.id used by portal may only contain lowercase letters and digits.
-
-5. When starting multiple plans, portal requires each plan to use a distinct MySQL instance and a distinct openGauss database.
-
- ### Default Directory Layout
+The directory layout of a portal installed with the default configuration is as follows.
```
-/portal
+portal/
config/
migrationConfig.properties
toolspath.properties
@@ -27,60 +24,62 @@ opengauss-migration-portal is a tool written in Java that runs on Linux
currentPlan
input
chameleon/
- config-example.yml
- datacheck/
- application-source.yml
- application-sink.yml
- application.yml
- log4j2.xml
- log4j2source.xml
- log4j2sink.xml
- debezium/
- connect-avro-standalone.properties
- mysql-sink.properties
- mysql-source.properties
- opengauss-sink.properties
- opengauss-source.properties
+ config-example.yml
+ datacheck/
+ application-source.yml
+ application-sink.yml
+ application.yml
+ log4j2.xml
+ log4j2source.xml
+ log4j2sink.xml
+ debezium/
+ connect-avro-standalone.properties
+ mysql-sink.properties
+ mysql-source.properties
+ opengauss-sink.properties
+ opengauss-source.properties
logs/
portal.log
pkg/
- chameleon/
- chameleon-5.0.0-py3-none-any.whl
- datacheck/
- openGauss-datachecker-performance-5.0.0.tar.gz
- debezium/
- confluent-community-5.5.1-2.12.zip
- replicate-mysql2openGauss-5.0.0.tar.gz
- replicate-openGauss2mysql-5.0.0.tar.gz
- kafka_2.13-3.2.3.tgz
- tmp/
- tools/
- chameleon/
- datacheck/
- debezium/
- confluent-5.5.1/
- kafka_2.13-3.2.3/
- plugin/
- debezium-connector-mysql/
- debezium-connector-opengauss/
- portal.portId.lock
- portalControl-1.0-SNAPSHOT-exec.jar
- README.md
- ```
-
-### 安装教程
+ chameleon/
+ chameleon-7.0.0rc2-py3-none-any.whl
+ datacheck/
+ gs_datacheck-7.0.0rc2.tar.gz
+ debezium/
+ confluent-community-5.5.1-2.12.zip
+ replicate-mysql2openGauss-7.0.0rc2.tar.gz
+ replicate-openGauss2mysql-7.0.0rc2.tar.gz
+ tmp/
+ tools/
+ chameleon/
+ datacheck/
+ debezium/
+ confluent-5.5.1/
+ plugin/
+ debezium-connector-mysql/
+ debezium-connector-opengauss/
+ portal.portId.lock
+ portalControl-7.0.0rc2-exec.jar
+ gs_datacheck.sh
+ gs_mysync.sh
+ gs_rep_portal.sh
+ gs_replicate.sh
+ README.md
+ ```
+
+## Installation
portal is installed under /ops/portal by default; change the path as needed.
-#### Installing portal
+### Installing from source
-Download the source code with git and copy the portal folder from the source tree to /ops.
+1. Download the source code with git and copy the portal folder from the source tree to /ops.
```
- git clone https://gitee.com/opengauss/openGauss-migration-portal.git
+git clone https://gitee.com/opengauss/openGauss-migration-portal.git
```
-Compile the source code with Maven to get portalControl-1.0-SNAPSHOT-exec.jar and put the jar under /ops/portal.
+2. Compile the source code with Maven to get portalControl-7.0.0rc2-exec.jar and put the jar under /ops/portal.
```
mvn clean package -Dmaven.test.skip=true
@@ -90,72 +89,105 @@ Java version: OpenJDK 11 or later
Maven version: 3.8.1 or later
-### How to start
+3. When starting portal with the one-click script, move the .sh files from the /ops/portal/shell directory into /ops/portal/, i.e. the same directory as the jar.
+
+### Installing from a release package
+
+Download links for each OS version and architecture:
+
+| OS | Architecture | Download link |
+|:---------------| -------- |----------------------------------------------------------------------------------------------------------------------|
+| centos7 | x86_64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/centos7/PortalControl-7.0.0rc2-x86_64.tar.gz |
+| openEuler20.03 | x86_64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/openEuler20.03/PortalControl-7.0.0rc2-x86_64.tar.gz |
+| openEuler20.03 | aarch64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/openEuler20.03/PortalControl-7.0.0rc2-aarch64.tar.gz |
+| openEuler22.03 | x86_64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/openEuler22.03/PortalControl-7.0.0rc2-x86_64.tar.gz |
+| openEuler22.03 | aarch64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/openEuler22.03/PortalControl-7.0.0rc2-aarch64.tar.gz |
+| openEuler24.03 | x86_64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/openEuler24.03/PortalControl-7.0.0rc2-x86_64.tar.gz |
+| openEuler24.03 | aarch64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/openEuler24.03/PortalControl-7.0.0rc2-aarch64.tar.gz |
+
+1. Download the gs_rep_portal package
+
+ ```
+wget -c https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/centos7/PortalControl-7.0.0rc2-x86_64.tar.gz
+ ```
+
+2. Extract the gs_rep_portal package
+
+ ```
+tar -zxvf PortalControl-7.0.0rc2-x86_64.tar.gz
+ ```
+
+## How to start
-Start portal by entering a command of the following form on the command line, and use portal's features through commands.
+Start portal with the one-click script gs_rep_portal.sh and use portal's features through its parameters.
```
-java -Dpath=/ops/portal/ -Dskip=true -Dorder=command -Dworkspace.id=1 -jar portalControl-1.0-SNAPSHOT-exec.jar
+sh gs_rep_portal.sh command workspace.id &
```
-Here the value of path is the working directory; a wrong value here makes portal fail, and it must end with /.
+Here "command" is several words joined by underscores, e.g. "start_mysql_full_migration"; the commands fall into install, start, stop, and uninstall commands, described below.
-A command is several words joined by underscores, e.g. "start_mysql_full_migration".
+portal creates a folder named after the id under the workspace folder and stores the task's parameters and logs there. If workspace.id is not specified, it defaults to 1.
-portal creates a folder named after the id under the workspace folder and stores the task's parameters and logs there. If workspace.id is not specified, the default workspace id is 1.
+The following command prints the help (usage and available commands):
-Parameter priority: command-line input > parameters set in the workspace > public-space parameters. If the workspace.id is the same as an existing one, the parameters in that workspace are reused; otherwise portal copies the configuration files from the config folder into the workspace for that id as the task's configuration.
+ ```
+sh gs_rep_portal.sh help &
+ ```
+
+Parameter priority: parameters set in the workspace > public-space parameters. If the workspace.id is the same as an existing one, the parameters in that workspace are reused; otherwise portal copies the configuration files from the config folder into the workspace for that id as the task's configuration.
It is recommended to use a different workspace.id for each migration run.
-#### Installing the migration tools
+### Installing the migration tools
The migration features and their tools are listed below:
-| Migration feature | Tools used |
-| ---------------------------------- | ---------------------------------------------- |
-| Full migration | chameleon |
-| Incremental migration | kafka, confluent, debezium-connector-mysql |
-| Reverse migration | kafka, confluent, debezium-connector-opengauss |
-| Data checking (full and incremental) | kafka, confluent, datacheck |
+| Migration feature | Tools used |
+| ---------------------------------- | --------------------------------------- |
+| Full migration | chameleon |
+| Incremental migration | confluent, debezium-connector-mysql |
+| Reverse migration | confluent, debezium-connector-opengauss |
+| Data checking (full and incremental) | confluent, datacheck |
Recommended tool versions:
-| Tool | Version |
-| ---------------------------- |------------|
-| chameleon | 5.0.0 |
-| kafka | 2.13-3.2.3 |
-| confluent | 5.5.1 |
-| datacheck | 5.0.0 |
-| debezium-connector-mysql | 1.8.1 |
-| debezium-connector-opengauss | 1.8.1 |
+| Tool | Version |
+|-----------------------------|----------|
+| chameleon | 7.0.0rc2 |
+| confluent | 5.5.1 |
+| datacheck | 7.0.0rc2 |
+| replicate-mysql2openGauss | 7.0.0rc2 |
+| replicate-openGauss2mysql | 7.0.0rc2 |
-Edit the tool installation paths in the toolspath.properties file in the /ops/portal/config directory:
+Edit the tool installation paths in the toolspath.properties file in the /ops/portal/config directory; directory values must end with /:
| Parameter | Description |
| ---------------------------- | ------------------------------------------------------------ |
-| chameleon.venv.path | Location of the chameleon virtual environment |
+| chameleon.venv.path | Path of the chameleon virtual environment |
+| chameleon.path | Working directory of chameleon |
+| chameleon.pkg.url | Download URL of the chameleon package |
| chameleon.pkg.path | Path of the chameleon package |
| chameleon.pkg.name | File name of the chameleon package |
-| chameleon.pkg.url | Download URL of the chameleon package |
-| debezium.path | Path of debezium+kafka (by default kafka, confluent, and the connectors are installed here) |
-| kafka.path | Path of kafka |
+| debezium.path | Path of debezium+confluent (by default confluent and the connectors are installed here) |
| confluent.path | Path of confluent |
| connector.path | Path of the connectors |
-| debezium.pkg.path | Path of the debezium+kafka packages (by default the kafka, confluent, and connector packages are here) |
-| kafka.pkg.name | File name of the kafka package |
-| kafka.pkg.url | Download URL of the kafka package |
-| confluent.pkg.name | File name of the confluent package |
+| connector.mysql.path | Path of the mysql connector |
+| connector.opengauss.path | Path of the opengauss connector |
| confluent.pkg.url | Download URL of the confluent package |
-| connector.mysql.pkg.name | File name of the mysql connector package |
| connector.mysql.pkg.url | Download URL of the mysql connector package |
-| connector.opengauss.pkg.name | File name of the opengauss connector package |
| connector.opengauss.pkg.url | Download URL of the opengauss connector package |
+| debezium.pkg.path | Path of the debezium+confluent packages |
+| confluent.pkg.name | File name of the confluent package |
+| connector.mysql.pkg.name | File name of the mysql connector package |
+| connector.opengauss.pkg.name | File name of the opengauss connector package |
+| datacheck.pkg.url | Download URL of the datacheck package |
| datacheck.install.path | Installation path of datacheck |
| datacheck.path | Path of datacheck |
| datacheck.pkg.path | Path of the datacheck package |
| datacheck.pkg.name | File name of the datacheck package |
-| datacheck.pkg.url | Download URL of the datacheck package |
+| datacheck.extract.jar.name | File name of the datacheck extract jar |
+| datacheck.check.jar.name | File name of the datacheck check jar |
The tools can be installed either offline or online:
@@ -174,12 +206,24 @@ portal creates a folder named after the id under the workspace folder and stores
The following commands install the corresponding migration tools. For example:
```
-java -Dpath=/ops/portal/ -Dskip=true -Dorder=install_mysql_all_migration_tools -Dworkspace.id=1 -jar portalControl-1.0-SNAPSHOT-exec.jar
+sh gs_rep_portal.sh install_mysql_all_migration_tools 1 &
```
Running this command installs the migration tools used by all migration features.
-#### Installation commands
+#### Preparation step
+
+If portal installed any tools besides the full-migration tool, portal starts confluent (with built-in kafka) as a preparation step for running those tools. The preparation step runs automatically after installation.
+
+Command to stop the preparation step:
+
+sh gs_rep_portal.sh stop_kafka a
+
+Command to start the preparation step:
+
+sh gs_rep_portal.sh start_kafka a
+
+### Installation commands
| Command | Description |
| ------------------------------------------------- | ------------------------------------------------- |
@@ -197,11 +241,21 @@ java -Dpath=/ops/portal/ -Dskip=true -Dorder=install_mysql_all_migration_tools -
| install_mysql_datacheck_tools | Install the mysql data-checking tools (installation mode set in the configuration file) |
| install_mysql_all_migration_tools | Install all mysql migration tools (each tool's installation mode set in the configuration file) |
-#### Configuration parameters
+#### Notes on offline installation
+
+The full-migration tool chameleon is written in Python and, depending on the environment, requires the mariadb-devel (or mysql-devel, mysql5-devel), python-devel, and python3-devel packages. If these packages are missing and the environment has no network access, installing chameleon may fail.
+
+To improve the installation experience and allow installation without network access, portal bundles chameleon's dependencies into its package. Before using portal to install chameleon, grant the portal installation user passwordless sudo. During installation, portal automatically installs the dependencies before installing chameleon. After a successful installation the sudo privilege can be revoked; later migration steps do not need sudo.
+
+Download the portal package matching your OS and architecture from the links above to use this feature.
+
+If the portal installation user has no sudo privilege, portal skips installing the dependencies and installs chameleon directly; if the dependencies already exist in the environment, chameleon can still install successfully. So a user without sudo privilege does not block portal from installing the migration tools.
+
+### Configuration parameters
Users can edit the migration parameters in the migrationConfig.properties file in the /ops/portal/config directory.
-Parameter priority: command-line input > parameters set in the workspace > public-space parameters. If the workspace.id is the same as an existing one, the parameters in that workspace are reused; otherwise portal copies the configuration files from the config folder into the workspace for that id as the task's configuration.
+Parameter priority: parameters set in the workspace > public-space parameters. If the workspace.id is the same as an existing one, the parameters in that workspace are reused; otherwise portal copies the configuration files from the config folder into the workspace for that id as the task's configuration.
| Parameter | Description |
| ------------------------- | ----------------------- |
@@ -222,48 +276,84 @@ java -Dpath=/ops/portal/ -Dskip=true -Dorder=install_mysql_all_migration_tools -
Notes:
- The zookeeper default port 2181, kafka default port 9092, and schema-registry default port 8081 are not assigned automatically; all other tools get ports automatically. If you need to change a tool's port, do not change its IP. If you change kafka's port, also change the listeners parameter in kafka's file to PLAINTEXT://localhost:<the port to configure>.
-- In the table below, ${config} stands for /ops/portal/.
-- In the table below, ${kafka.path} stands for the value of kafka.path in the toolspath.properties file in the /ops/portal/config directory.
+- In the table below, ${config} stands for the /ops/portal/config directory, i.e. the public-space configuration. To change the parameters of a specific workspace, e.g. the plan with workspace.id=2, replace /ops/portal/config with /ops/portal/workspace/2/config.
- In the table below, ${confluent.path} stands for the value of confluent.path in the toolspath.properties file in the /ops/portal/config directory.
- Every time a new task is created, the connect-avro-standalone.properties file in the /ops/portal/config/debezium directory is automatically copied four times with modified ports.
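The kafka port change described in the notes above can be sketched as follows. This is a hypothetical example: the temp file stands in for kafka's server.properties under your confluent path, and port 9093 is an arbitrary choice.

```sh
# Sketch: switch kafka's listeners entry to a new port (here 9093).
# In a real setup, edit kafka's server.properties under your confluent path instead.
cfg=$(mktemp)
echo 'listeners=PLAINTEXT://localhost:9092' > "$cfg"
# Rewrite the listeners value in place, keeping the PLAINTEXT://localhost: prefix.
sed -i 's#^listeners=PLAINTEXT://localhost:[0-9]*$#listeners=PLAINTEXT://localhost:9093#' "$cfg"
grep '^listeners=' "$cfg"
```

After the edit, restart kafka so the new port takes effect.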
-| Tool | Config file location |
-| ------------------- | ------------------------------------------------------------ |
-| chameleon | ${config}/chameleon/config-example.yml |
-| datacheck | ${config}/datacheck/application-source.yml |
-| | ${config}/datacheck/application-sink.yml |
-| | ${config}/datacheck/application.yml |
-| zookeeper | ${kafka.path}/config/zookeeper.properties |
-| kafka | ${kafka.path}/config/server.properties |
-| schema-registry | ${confluent.path}/etc/schema-registry/schema-registry.properties |
-| connector-mysql | ${config}/debezium/connect-avro-standalone.properties |
-| | ${config}/debezium/mysql-source.properties |
-| | ${config}/debezium/mysql-sink.properties |
-| connector-opengauss | ${config}/debezium/connect-avro-standalone.properties |
-| | ${config}/debezium/opengauss-source.properties |
-| | ${config}/debezium/opengauss-sink.properties |
-
-### Running a migration plan
-
-portal supports starting multiple processes running different migration plans, but each plan must use a distinct MySQL instance and a distinct openGauss database.
-
-When starting a migration plan, add the parameter -Dworkspace.id="ID" so that different plans can be distinguished by workspace id; if omitted, the workspace id defaults to 1.
-
-Start full migration:
+| Tool                | Config file location                                             |
+| ------------------- | ---------------------------------------------------------------- |
+| chameleon           | ${config}/chameleon/config-example.yml                           |
+| zookeeper           | ${confluent.path}/etc/kafka/zookeeper.properties                 |
+| kafka               | ${confluent.path}/etc/kafka/server.properties                    |
+| schema-registry     | ${confluent.path}/etc/schema-registry/schema-registry.properties |
+| connector-mysql     | ${config}/debezium/connect-avro-standalone.properties            |
+|                     | ${config}/debezium/mysql-source.properties                       |
+|                     | ${config}/debezium/mysql-sink.properties                         |
+| connector-opengauss | ${config}/debezium/connect-avro-standalone.properties            |
+|                     | ${config}/debezium/opengauss-source.properties                   |
+|                     | ${config}/debezium/opengauss-sink.properties                     |
+| datacheck           | ${config}/datacheck/application-source.yml                       |
+|                     | ${config}/datacheck/application-sink.yml                         |
+|                     | ${config}/datacheck/application.yml                              |
+
+## Running a migration plan
+
+portal supports starting multiple tasks running different migration plans, but each plan must use a distinct MySQL instance and a distinct openGauss database.
+
+When starting a migration plan, add the parameter so that different plans can be distinguished by workspace.id; if omitted, workspace.id defaults to 1.
+
+Start a full migration with workspace.id 2:
```
-java -Dpath=/ops/portal/ -Dskip=true -Dorder=start_mysql_full_migration -Dworkspace.id=2 -jar portalControl-1.0-SNAPSHOT-exec.jar
+sh gs_rep_portal.sh start_mysql_full_migration 2 &
```
-Besides starting and stopping individual tasks, portal also provides some combined default plans:
+Besides starting and stopping individual features, portal also provides some combined default plans:
-Start a migration plan that includes full migration and full data check:
+Start a migration plan with workspace.id 2 that includes full migration and full data check:
```
-java -Dpath=/ops/portal/ -Dskip=true -Dorder=start_plan1 -Dworkspace.id=3 -jar portalControl-1.0-SNAPSHOT-exec.jar
+sh gs_rep_portal.sh start_plan1 2 &
```
-#### Plan list
+### Plan list
| Plan | Commands included |
| -------- | -------------------------------------------- |
@@ -271,42 +361,48 @@ java -Dpath=/ops/portal/ -Dskip=true -Dorder=start_plan1 -Dworkspace.id=3 -jar p
| plan2 | full migration → full check → incremental migration → incremental check |
| plan3 | full migration → full check → incremental migration → incremental check → reverse migration |
-#### Incremental migration and reverse migration
+### Incremental migration and reverse migration
Incremental migration continuously replicates data changes from the MySQL side to openGauss, and reverse migration continuously replicates data changes from openGauss back to MySQL, so neither stops automatically. To stop incremental migration, the user must open another window and enter the stop command; the same applies to reverse migration.
-Also note: incremental migration and reverse migration cannot run at the same time. If a plan includes both, the user must manually stop incremental migration and then start reverse migration. Taking default plan 3 as an example:
+Also note: incremental migration and reverse migration cannot run at the same time. If a plan includes both, the user must manually stop incremental migration and then start reverse migration. Between stopping incremental migration and starting reverse migration, do not write to openGauss; otherwise the data changed in between will be lost.
+
+Taking default plan 3 as an example:
-After editing the configuration files, enter the following command to start plan3:
+1. After editing the configuration files, enter the following command to start plan3 with workspace.id 3:
```
-java -Dpath=/ops/portal/ -Dskip=true -Dorder=start_plan3 -Dworkspace.id=3 -jar portalControl-1.0-SNAPSHOT-exec.jar
+sh gs_rep_portal.sh start_plan3 3 &
```
-portal then automatically runs full migration → full check → incremental migration → incremental check and stays in the incremental migration state (incremental migration and incremental check run at the same time). To stop incremental migration, open another window and enter the following command:
+portal then automatically runs full migration → full check → incremental migration → incremental check and stays in the incremental migration state (incremental migration and incremental check run at the same time).
+
+2. To stop incremental migration, open another window and enter the following command:
```
-java -Dpath=/ops/portal/ -Dskip=true -Dorder=stop_incremental_migration -Dworkspace.id=3 -jar portalControl-1.0-SNAPSHOT-exec.jar
+sh gs_rep_portal.sh stop_incremental_migration 3 &
```
-After the command is entered this process exits, and the portal that is running the plan receives the stop message, stops incremental migration, and waits for the next command.
+After the command is entered this process exits, and the portal main process running the plan with workspace.id 3 receives the stop message, stops incremental migration, and waits for the next command.
-To start reverse migration, enter:
+3. To start reverse migration, enter the following command:
```
-java -Dpath=/ops/portal/ -Dskip=true -Dorder=run_reverse_migration -Dworkspace.id=3 -jar portalControl-1.0-SNAPSHOT-exec.jar
+sh gs_rep_portal.sh run_reverse_migration 3 &
```
-After the command is entered this process exits, and the portal that is running the plan receives the start message and starts reverse migration; portal then stays in the reverse migration state.
+After the command is entered this process exits, and the portal main process running the plan with workspace.id 3 receives the start message and starts reverse migration; portal then stays in the reverse migration state.
To stop the whole migration plan, see the "Stopping a plan" section below.
The commands for starting migration plans are listed below:
-#### Command list
+### Start command list
-| Command | Description |
-| ------------------------------------------- | ------------------------------------------------------------ |
+| Command | Description |
+|---------------------------------------------|------------------------------------------------- |
+| verify_pre_migration | Pre-migration verification |
+| verify_reverse_migration | Pre-reverse-migration verification |
| start_mysql_full_migration | Start mysql full migration |
| start_mysql_incremental_migration | Start mysql incremental migration |
| start_mysql_reverse_migration | Start mysql reverse migration |
@@ -322,60 +418,60 @@ java -Dpath=/ops/portal/ -Dskip=true -Dorder=run_reverse_migration -Dworkspace.i
Users can also define a custom migration plan in the currentPlan file in the /ops/portal/config directory, subject to the following rules:
-1. Each line of currentPlan contains one command that starts a single migration task, e.g. start_mysql_full_migration, start_mysql_incremental_migration. The commands must follow this order:
+- Each line of currentPlan contains one command that starts a single migration task, e.g. start_mysql_full_migration, start_mysql_incremental_migration. The commands must follow this order:
-- start_mysql_full_migration
-- start_mysql_full_migration_datacheck
-- start_mysql_incremental_migration
-- start_mysql_incremental_migration_datacheck
-- start_mysql_reverse_migration
+ - start_mysql_full_migration
+ - start_mysql_full_migration_datacheck
+ - start_mysql_incremental_migration
+ - start_mysql_incremental_migration_datacheck
+ - start_mysql_reverse_migration
-portal reports an error if the order is wrong.
+ portal reports an error if the order is wrong.
-2. The item before an incremental check must be incremental migration, and the item before a full check must be full migration.
+- The item before an incremental check must be incremental migration, and the item before a full check must be full migration.
-3. Each single task may be added only once.
+- Each single task may be added only once.
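A currentPlan file that follows the rules above can be sketched as follows. The sketch writes to a temp file for illustration; the real file lives at /ops/portal/config/currentPlan, and this particular command sequence matches default plan2.

```sh
# Sketch: a hand-written currentPlan equivalent to plan2, obeying the ordering rules.
plan=$(mktemp)
cat > "$plan" <<'EOF'
start_mysql_full_migration
start_mysql_full_migration_datacheck
start_mysql_incremental_migration
start_mysql_incremental_migration_datacheck
EOF
cat "$plan"
```

Each data-check command directly follows the migration phase it checks, and no command appears twice.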
-#### Stopping a plan
+### Stopping a plan
Example:
-While portal is running a plan, open another window and enter the following command to stop the task with workspace.id 2:
+While portal is running a plan, open another window and enter the following command to stop the task with workspace.id 3:
```
-java -Dpath=/ops/portal/ -Dskip=true -Dorder=stop_plan -Dworkspace.id=2 -jar portalControl-1.0-SNAPSHOT-exec.jar
+sh gs_rep_portal.sh stop_plan 3 &
```
-After the command is entered this process exits, and the portal that is running the plan receives the stop message and stops the plan.
+After the command is entered this process exits, and the portal main process running the plan with workspace.id 3 receives the stop message and stops the plan.
-#### Starting multiple plans
+### Starting multiple plans
portal supports starting multiple plans at the same time, but the plans' MySQL sides must be distinct instances and their openGauss sides distinct databases:
First edit the configuration files; see the configuration-parameters section for details.
-Start the first migration plan with workspace.id p1 (here plan 3):
+Start the first migration plan with workspace.id p1 (here plan 3 as an example):
```
-java -Dpath=/ops/portal/ -Dskip=true -Dorder=start_plan3 -Dworkspace.id=p1 -jar portalControl-1.0-SNAPSHOT-exec.jar
+sh gs_rep_portal.sh start_plan3 p1 &
```
Then edit the configuration files again.
-Start the second migration plan with workspace.id p2 (here plan 3):
+Start the second migration plan with workspace.id p2 (here plan 3 as an example):
```
-java -Dpath=/ops/portal/ -Dskip=true -Dorder=start_plan3 -Dworkspace.id=p2 -jar portalControl-1.0-SNAPSHOT-exec.jar
+sh gs_rep_portal.sh start_plan3 p2 &
```
This starts multiple portals.
-#### Uninstalling the migration tools
+## Uninstalling the migration tools
The following commands uninstall the migration tools of the corresponding features. For example:
```
-java -Dpath=/ops/portal/ -Dskip=true -Dorder=uninstall_mysql_all_migration_tools -Dworkspace.id=1 -jar portalControl-1.0-SNAPSHOT-exec.jar
+sh gs_rep_portal.sh uninstall_mysql_all_migration_tools 1 &
```
Running this command uninstalls the migration tools used by all features.
@@ -388,7 +484,49 @@ java -Dpath=/ops/portal/ -Dskip=true -Dorder=uninstall_mysql_all_migration_tools
| uninstall_mysql_reverse_migration_tools | Uninstall the mysql reverse-migration tools |
| uninstall_mysql_all_migration_tools | Uninstall all mysql migration tools |
+## Complete migration workflow
+
+1. Download the gs_rep_portal package
+
+ ```
+wget -c https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/centos7/PortalControl-7.0.0rc2-x86_64.tar.gz
+ ```
+
+2. Extract the gs_rep_portal package
+
+ ```
+tar -zxvf PortalControl-7.0.0rc2-x86_64.tar.gz
+ ```
+
+3. Edit the installation paths in the toolspath.properties file in the /ops/portal/config directory, then run the installation command
+
+ ```
+sh gs_rep_portal.sh install_mysql_all_migration_tools 1 &
+ ```
+
+4. Edit the migration parameters in the migrationConfig.properties file in the /ops/portal/config directory, then start migration plan 3 with a new workspace.id of 2
+ ```
+sh gs_rep_portal.sh start_plan3 2 &
+ ```
+
+5. The program runs automatically until incremental migration and incremental check are both active; stop incremental migration for the task with workspace.id 2, after which the program enters a waiting state and reverse migration can be started or the plan stopped
+
+ ```
+sh gs_rep_portal.sh stop_incremental_migration 2 &
+ ```
+
+6. Start reverse migration; the program enters the reverse-migration state, after which the plan can be stopped
+
+ ```
+sh gs_rep_portal.sh run_reverse_migration 2 &
+ ```
+
+7. Stop the plan with workspace.id 2
+
+ ```
+sh gs_rep_portal.sh stop_plan 2 &
+ ```
#### Contributing
1. Fork this repository
diff --git a/multidb-portal/README.md b/multidb-portal/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..7db0455c465a5d052544635a57378ab3967b8e79
--- /dev/null
+++ b/multidb-portal/README.md
@@ -0,0 +1,275 @@
+# 1 Introduction
+
+## 1.1 About the tool
+
+multidb_portal is an openGauss data-migration portal tool developed in Java. It integrates openGauss full migration, incremental migration, reverse migration, and data checking, and supports one-stop migration from MySQL/PostgreSQL to openGauss.
+
+## 1.2 Restrictions
+
+(1) Server restrictions
+
+The tool currently runs only on Linux servers with the following system architectures:
+
+- CentOS7 x86_64
+- openEuler20.03 x86_64/aarch64
+- openEuler22.03 x86_64/aarch64
+- openEuler24.03 x86_64/aarch64
+
+(2) Runtime environment restrictions
+
+The tool is written in Java 11 and requires a Java 11 or later runtime on the server.
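The Java requirement above can be verified before installing. The helper below is a hypothetical sketch of the usual major-version parsing (legacy `1.x` strings report `x`; modern strings report the leading number); the portal's own install script performs an equivalent check.

```sh
# Sketch: extract the major Java version from a version string.
# "1.8.0_292" -> 8 (legacy scheme), "11.0.2" -> 11, "17" -> 17.
major_version() {
    case "$1" in
        1.*) echo "$1" | cut -d. -f2 ;;  # legacy 1.x.y scheme
        *)   echo "$1" | cut -d. -f1 ;;  # modern scheme
    esac
}

# In practice you would feed it the output of: java -version 2>&1
major_version "11.0.2"
```

If the reported major version is below 11, install a newer JDK before proceeding.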
+
+(3) Database version restrictions
+
+- MySQL 5.7 or later.
+- PostgreSQL 9.4.26 or later.
+- openGauss 5.0.0 or later is required for MySQL migration.
+- openGauss 6.0.0-RC1 or later is required for PostgreSQL migration.
+
+# 2 Installing the tool
+
+## 2.1 Getting the package
+
+Download links for each system architecture are listed below:
+
+| OS | Architecture | Download link |
+| :------------- | ------- | ------------------------------------------------------------ |
+| CentOS7 | x86_64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/centos7/openGauss-portal-7.0.0rc2-CentOS7-x86_64.tar.gz |
+| openEuler20.03 | x86_64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/openEuler20.03/openGauss-portal-7.0.0rc2-openEuler20.03-x86_64.tar.gz |
+| openEuler20.03 | aarch64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/openEuler20.03/openGauss-portal-7.0.0rc2-openEuler20.03-aarch64.tar.gz |
+| openEuler22.03 | x86_64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/openEuler22.03/openGauss-portal-7.0.0rc2-openEuler22.03-x86_64.tar.gz |
+| openEuler22.03 | aarch64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/openEuler22.03/openGauss-portal-7.0.0rc2-openEuler22.03-aarch64.tar.gz |
+| openEuler24.03 | x86_64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/openEuler24.03/openGauss-portal-7.0.0rc2-openEuler24.03-x86_64.tar.gz |
+| openEuler24.03 | aarch64 | https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/openEuler24.03/openGauss-portal-7.0.0rc2-openEuler24.03-aarch64.tar.gz |
+
+## 2.2 Installation steps
+
+This section uses a CentOS7 x86_64 server as an example.
+
+(1) Download the package
+
+Download the package matching your system architecture, for example
+
+```sh
+wget https://opengauss.obs.cn-south-1.myhuaweicloud.com/latest/tools/centos7/openGauss-portal-7.0.0rc2-CentOS7-x86_64.tar.gz
+```
+
+(2) Extract the package
+
+After the download completes, extract the package as follows
+
+```sh
+tar -zxvf openGauss-portal-7.0.0rc2-CentOS7-x86_64.tar.gz
+```
+
+(3) Check the directory layout
+
+Switch to the extracted portal directory and inspect its layout:
+
+```sh
+cd portal && ls -l
+```
+
+Check that it contains the following entries
+
+```sh
+bin # Tool command directory; run each command on its own to learn its usage from its help output
+config # Tool configuration directory
+openGauss-portal-7.0.0rc2.jar # Core jar file of the tool
+pkg # Storage directory for migration components
+template # Storage directory for migration template files
+```
+
+**Note: do not modify or delete anything in the directory layout listed above; otherwise the tool may not work correctly.**
+
+(4) Install the chameleon dependencies
+
+chameleon is the MySQL full-migration tool; skip this step if you do not need to migrate MySQL.
+
+Installing the dependencies requires the root user or a passwordless-sudo user. From the portal directory, run
+
+```sh
+./bin/install dependencies
+```
+
+(5) Install the migration tools
+
+The installation commands are listed below. Apart from the full-migration tools, which can be installed as needed, all tools must be installed
+
+```sh
+./bin/install tools # Install all migration tools in one go; requires the chameleon dependencies to be installed first
+./bin/install chameleon # Install the MySQL full-migration tool; requires the chameleon dependencies to be installed first
+./bin/install full_migration_tool # Install the PostgreSQL full-migration tool
+./bin/install debezium # Install the incremental- and reverse-migration tools
+./bin/install data_checker # Install the data-checking tool
+./bin/install kafka # Install the third-party tooling the tools depend on
+```
+
+(6) Check the installation status
+
+After installation, check each tool's status with the following command and make sure all required tools are installed
+
+```sh
+./bin/install check
+```
+
+# 3 Using the migration features
+
+## 3.1 Creating a migration task
+
+(1) Create a migration task
+
+The command template for creating a migration task is shown below; replace the parameters as appropriate.
+
+```sh
+./bin/task create <task_id> <source_db_type>
+```
+
+Where:
+
+- task_id: unique task identifier; must not repeat; may consist of letters, digits, underscores, and hyphens; at most 50 characters.
+- source_db_type: source database type; currently only MySQL and PostgreSQL are supported; valid values: mysql, MySQL, postgresql, PostgreSQL.
+
+Example:
+
+```sh
+./bin/task create 1 mysql
+```
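The task_id rules above can be expressed as a simple pattern check. The `valid_task_id` helper below is hypothetical, not part of the tool; it only illustrates the stated constraints (letters, digits, underscores, hyphens, 1-50 characters).

```sh
# Sketch: validate a task_id against the documented rules.
valid_task_id() {
    printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_-]{1,50}$'
}

valid_task_id "task-01" && echo "task-01 is valid"
```

Identifiers with spaces, other punctuation, or more than 50 characters are rejected.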
+
+(2) List existing tasks
+
+After creating tasks, list the existing tasks as follows
+
+```sh
+./bin/task list
+```
+
+**Note**: for the other task commands, run the task script itself to learn them.
+
+## 3.2 Configuring a migration task
+
+(1) Overview of the task configuration
+
+This section uses a MySQL migration configuration as an example and briefly introduces the main contents of the configuration file. Each option in the file also carries a comment you can read on your own.
+
+**Note: the options described here are mandatory when configuring a migration task.**
+
+```properties
+# Migration mode; controls which of full migration, incremental migration, reverse migration, and full check the task includes; manage modes with the ./bin/mode command
+migration.mode=plan1
+
+# MySQL service settings
+# IP of the host running the MySQL service
+mysql.database.ip=127.0.0.1
+
+# MySQL service port
+mysql.database.port=3306
+
+# Name of the MySQL database to migrate
+mysql.database.name=test_db
+
+# MySQL connection user
+mysql.database.username=test_user
+
+# Password of the MySQL connection user
+mysql.database.password=******
+
+# openGauss service settings
+# IP of the host running the openGauss service
+opengauss.database.ip=127.0.0.1
+
+# openGauss service port
+opengauss.database.port=5432
+
+# Name of the target openGauss database; it must be created on the openGauss side in advance with B compatibility
+# Reference statement: create database test_db with dbcompatibility = 'b';
+opengauss.database.name=test_db
+
+# openGauss connection user
+opengauss.database.username=test_user
+
+# Password of the openGauss connection user
+opengauss.database.password=******
+```
+
+(2) Configure the migration task
+
+After a migration task is created, a task directory named after the task_id is generated under portal's workspace directory.
+
+For the example task created above, the task directory is `./workspace/task_1` and the task configuration file is `./workspace/task_1/config/migration.properties`.
+
+Edit the migration configuration file in the task directory to complete the configuration, for example
+
+```sh
+vim ./workspace/task_1/config/migration.properties
+```
+
+When done, press `ESC`, then type `:wq` to save and exit.
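The same edit can be scripted instead of done interactively in vim, which is handy for automated setups. This is a hedged sketch: it works on a temp file with illustrative values, not on a real task directory.

```sh
# Sketch: set migration.properties keys non-interactively with sed.
f=$(mktemp)
cat > "$f" <<'EOF'
mysql.database.ip=127.0.0.1
mysql.database.port=3306
EOF
# Replace the value of mysql.database.ip in place (192.168.0.5 is illustrative).
sed -i 's/^mysql\.database\.ip=.*/mysql.database.ip=192.168.0.5/' "$f"
grep '^mysql.database.ip=' "$f"
```

Repeat one `sed` line per key you need to change, then start the task as usual.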
+
+## 3.3 Starting a migration task
+
+(1) Start the migration task
+
+The command template for starting a migration task is shown below; replace the parameters as appropriate.
+
+```sh
+./bin/migration start <task_id>
+```
+
+Where:
+
+- task_id: migration task ID, the same value used when creating the task.
+
+Example:
+
+```sh
+./bin/migration start 1
+```
+
+**Note: this command starts the migration main process; as long as the migration task is not stopped, the process stays alive and logs to the terminal.** If started in the background, the log file can be found at `./workspace/task_1/logs/portal.log`.
+
+(2) Check the migration task status
+
+After the task has started, open another terminal, switch to the portal directory, and check the task status as follows.
+
+```sh
+./bin/migration status 1
+```
+
+Or view the detailed migration progress with
+
+```sh
+./bin/migration status 1 --detail
+```
+
+(3) Stop incremental migration
+
+If the migration task includes an "incremental migration" phase, stop it as follows; otherwise skip this command.
+
+```sh
+./bin/migration stop_incremental 1
+```
+
+(4) Start reverse migration
+
+If the migration task includes a "reverse migration" phase, start it as follows; otherwise skip this command.
+
+```sh
+./bin/migration start_reverse 1
+```
+
+(5) Stop the migration
+
+Whatever phase the migration task is in, the whole task can be stopped as follows.
+
+```sh
+./bin/migration stop 1
+```
+
+After the stop command succeeds, the migration main process performs some cleanup and then exits automatically.
+
+(6) Tips
+
+1. If a migration task includes all phases: after full migration completes, full check starts automatically, and after full check completes, incremental migration starts automatically. Incremental migration keeps running unless the user intervenes, so it must be stopped manually; then reverse migration is started manually. Reverse migration also keeps running unless the user intervenes, and likewise must be stopped manually.
+2. Tasks that do not include all phases keep the same phase ordering; phases not included are skipped automatically.
+3. For a task that includes only the reverse-migration phase, reverse migration starts automatically when the task is started in step (1); there is no need to "start reverse migration" manually.
diff --git a/multidb-portal/build.sh b/multidb-portal/build.sh
new file mode 100644
index 0000000000000000000000000000000000000000..d11ece81cf973c3025c2fc2fcd628c9a42eeaea3
--- /dev/null
+++ b/multidb-portal/build.sh
@@ -0,0 +1,128 @@
+#!/bin/bash
+
+valid_system_archs=("CentOS7-x86_64" "openEuler20.03-x86_64" "openEuler20.03-aarch64" "openEuler22.03-x86_64" "openEuler22.03-aarch64" "openEuler24.03-x86_64" "openEuler24.03-aarch64")
+
+usage() {
+ temp=""
+
+ for ((i=0; i<${#valid_system_archs[@]}; i++))
+ do
+ if [ $i -eq 0 ]; then
+ temp="${valid_system_archs[i]}"
+ else
+ temp="${temp}|${valid_system_archs[i]}"
+ fi
+ done
+
+ echo "Usage: $0 <${temp}>"
+ exit 1
+}
+
+check_param() {
+ if [ $# -eq 0 ]; then
+ echo "No arguments provided"
+ usage
+ fi
+
+ if [ $# -gt 1 ]; then
+ echo "Too many arguments provided"
+ usage
+ fi
+
+ if [[ ! " ${valid_system_archs[@]} " =~ " $1 " ]]; then
+ echo "The '$1' parameter is invalid."
+ usage
+ fi
+}
+
+config_properties() {
+ system_arch=$1
+
+ IFS='-' read -ra parts <<< "$system_arch"
+ if [[ ${#parts[@]} -ne 2 ]]; then
+ echo "The '$1' parameter is invalid."
+ exit 1
+ fi
+
+ echo "system.name=${parts[0]}" > portal/config/application.properties
+ echo "system.arch=${parts[1]}" >> portal/config/application.properties
+ echo " Portal config file generated successfully"
+}
+
+download_dependencies() {
+ local base_dir="../portal/offline/install"
+ local target_dir="../../../multidb-portal/portal"
+ local platform="$1"
+ local script_args=()
+
+ if ! cd "${base_dir}"; then
+ echo "Error: Failed to enter directory ${base_dir}" >&2
+ exit 1;
+ fi
+
+ echo "Start to download the RPM packages"
+
+ case "$platform" in
+ "CentOS7-x86_64")
+ script_args=("CentOS7_x86_64" "$target_dir")
+ ;;
+ "openEuler20.03-x86_64")
+ script_args=("openEuler2003_x86_64" "$target_dir")
+ ;;
+ "openEuler20.03-aarch64")
+ script_args=("openEuler2003_aarch64" "$target_dir")
+ ;;
+ "openEuler22.03-x86_64")
+ script_args=("openEuler2203_x86_64" "$target_dir")
+ ;;
+ "openEuler22.03-aarch64")
+ script_args=("openEuler2203_aarch64" "$target_dir")
+ ;;
+ "openEuler24.03-x86_64")
+ script_args=("openEuler2403_x86_64" "$target_dir")
+ ;;
+ "openEuler24.03-aarch64")
+ script_args=("openEuler2403_aarch64" "$target_dir")
+ ;;
+ *)
+ echo "Error: Invalid platform parameter '$platform'" >&2
+ exit 1;
+ ;;
+ esac
+
+ if ! sh main.sh "${script_args[@]}"; then
+ echo "Error: Failed to download packages" >&2
+ exit 1;
+ fi
+
+ echo "Download the RPM packages successfully"
+
+ if ! cd - >/dev/null; then
+ echo "Warning: Failed to return to original directory" >&2
+ fi
+}
+
+package_portal() {
+ echo "Start to package the portal"
+ mvn clean package -DskipTests
+ echo "Package the portal successfully"
+}
+
+build_dirs() {
+ echo "Start to build the directories"
+    cd portal || exit 1
+ chmod +x ./bin/*
+ cp ../target/openGauss-portal-*.jar ./
+
+ mkdir -p pkg/chameleon pkg/confluent pkg/datachecker pkg/debezium pkg/full-migration
+ mkdir -p template/config/chameleon template/config/datachecker template/config/debezium template/config/full-migration
+ echo "Build the directories successfully"
+}
+
+check_param "$@"
+config_properties "$@"
+download_dependencies "$@"
+package_portal
+build_dirs
+
+# Next, copy the migration tools installation packages and configuration files to the specified directories, and package the entire portal directory as a tar.gz file, complete the packaging.
\ No newline at end of file
diff --git a/multidb-portal/pom.xml b/multidb-portal/pom.xml
new file mode 100644
index 0000000000000000000000000000000000000000..26fe4a281cef8cf2716e8c87c52ae94d835c443a
--- /dev/null
+++ b/multidb-portal/pom.xml
@@ -0,0 +1,154 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- Tags reconstructed from the stripped text; property names are inferred from the ${...} references below -->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <groupId>org.opengauss</groupId>
+    <artifactId>multidb-portal</artifactId>
+    <version>7.0.0rc2</version>
+
+    <properties>
+        <quarkus.version>3.6.9</quarkus.version>
+        <lombok.version>1.18.32</lombok.version>
+        <snakeyaml.version>2.0</snakeyaml.version>
+        <fastjson2.version>2.0.57</fastjson2.version>
+        <log4j2.version>2.24.2</log4j2.version>
+        <commons-cli.version>1.9.0</commons-cli.version>
+        <opencsv.version>5.7.1</opencsv.version>
+        <opengauss.jdbc.version>3.0.0</opengauss.jdbc.version>
+        <mysql.jdbc.version>8.0.27</mysql.jdbc.version>
+        <postgresql.jdbc.version>42.7.6</postgresql.jdbc.version>
+
+        <maven.compiler.source>11</maven.compiler.source>
+        <maven.compiler.target>11</maven.compiler.target>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    </properties>
+
+    <dependencyManagement>
+        <dependencies>
+            <dependency>
+                <groupId>io.quarkus</groupId>
+                <artifactId>quarkus-bom</artifactId>
+                <version>${quarkus.version}</version>
+                <type>pom</type>
+                <scope>import</scope>
+            </dependency>
+        </dependencies>
+    </dependencyManagement>
+
+    <dependencies>
+        <dependency>
+            <groupId>io.quarkus</groupId>
+            <artifactId>quarkus-resteasy</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>io.quarkus</groupId>
+            <artifactId>quarkus-arc</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>io.quarkus</groupId>
+            <artifactId>quarkus-smallrye-fault-tolerance</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.fasterxml.jackson.core</groupId>
+            <artifactId>jackson-databind</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.projectlombok</groupId>
+            <artifactId>lombok</artifactId>
+            <version>${lombok.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.yaml</groupId>
+            <artifactId>snakeyaml</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.alibaba.fastjson2</groupId>
+            <artifactId>fastjson2</artifactId>
+            <version>${fastjson2.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.logging.log4j</groupId>
+            <artifactId>log4j-api</artifactId>
+            <version>${log4j2.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.logging.log4j</groupId>
+            <artifactId>log4j-core</artifactId>
+            <version>${log4j2.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>commons-cli</groupId>
+            <artifactId>commons-cli</artifactId>
+            <version>${commons-cli.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>com.opencsv</groupId>
+            <artifactId>opencsv</artifactId>
+            <version>${opencsv.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.commons</groupId>
+            <artifactId>commons-compress</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.opengauss</groupId>
+            <artifactId>opengauss-jdbc</artifactId>
+            <version>${opengauss.jdbc.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>mysql</groupId>
+            <artifactId>mysql-connector-java</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.postgresql</groupId>
+            <artifactId>postgresql</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>io.quarkus</groupId>
+            <artifactId>quarkus-junit5</artifactId>
+            <scope>test</scope>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <finalName>openGauss-portal-${version}</finalName>
+        <plugins>
+            <plugin>
+                <groupId>io.quarkus</groupId>
+                <artifactId>quarkus-maven-plugin</artifactId>
+                <version>${quarkus.version}</version>
+                <extensions>true</extensions>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>build</goal>
+                            <goal>generate-code</goal>
+                        </goals>
+                    </execution>
+                </executions>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-compiler-plugin</artifactId>
+                <version>3.11.0</version>
+                <configuration>
+                    <source>${maven.compiler.source}</source>
+                    <target>${maven.compiler.target}</target>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+</project>
diff --git a/multidb-portal/portal/bin/install b/multidb-portal/portal/bin/install
new file mode 100644
index 0000000000000000000000000000000000000000..d22b20c1afd2af778aecb5b4343d04592976b0f6
--- /dev/null
+++ b/multidb-portal/portal/bin/install
@@ -0,0 +1,118 @@
+#!/bin/bash
+
+# Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+# Description: Migration Tool Installation Script
+
+set -euo pipefail
+
+usage() {
+    cat <<EOF
+Usage: $0 <tools|chameleon|full_migration_tool|debezium|data_checker|kafka|dependencies|check> [-f|--force]
+EOF
+    exit 1
+}
+
+check_java_version() {
+    local java_version version_num
+    if ! java_version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}'); then
+        echo "Error: Java is not installed or not in PATH"
+        return 1
+    fi
+
+ if [[ "$java_version" =~ ^1\. ]]; then
+ version_num=$(echo "$java_version" | cut -d. -f2)
+ else
+ version_num=$(echo "$java_version" | cut -d. -f1)
+ fi
+
+ if [ "$version_num" -lt 11 ]; then
+ echo "Error: Java 11 or later is required (found Java $java_version)"
+ return 1
+ fi
+ return 0
+}
+
+# Verify Java is available and version >= 11
+if ! check_java_version; then
+ exit 1
+fi
+
+COMPONENT="$1"
+shift
+
+# Validate component
+case "$COMPONENT" in
+ tools|chameleon|full_migration_tool|debezium|data_checker|kafka|dependencies|check)
+ ;;
+ *)
+ echo "Error: Invalid component name '$COMPONENT'"
+ usage
+ ;;
+esac
+
+# Build Java command arguments
+ARGS=("--install" "$COMPONENT")
+
+if [ $# -gt 0 ] && [[ "$1" == "-f" || "$1" == "--force" ]]; then
+ ARGS+=("--force")
+ shift
+fi
+
+if [ $# -gt 0 ]; then
+ echo "Error: Unknown parameter '$1'"
+ usage
+fi
+
+# Change to project root
+SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" &> /dev/null && pwd)
+PROJECT_ROOT=$(dirname "$SCRIPT_DIR")
+cd "$PROJECT_ROOT" || {
+ echo "Error: Failed to change to project directory"
+ exit 1
+}
+
+# Find the JAR file
+JAVA_PROGRAM=$(ls openGauss-portal-*.jar 2> /dev/null | head -n 1)
+if [[ -z "$JAVA_PROGRAM" ]]; then
+ echo "Error: No openGauss-portal-*.jar file found in $PROJECT_ROOT"
+ exit 1
+fi
+
+if [[ "${ARGS[*]}" =~ "--force" ]]; then
+ echo "Warning: Force mode enabled"
+fi
+
+# Execute Java program
+exec java -Dfile.encoding=UTF-8 -jar "$JAVA_PROGRAM" "${ARGS[@]}"
\ No newline at end of file
diff --git a/multidb-portal/portal/bin/kafka b/multidb-portal/portal/bin/kafka
new file mode 100644
index 0000000000000000000000000000000000000000..968740842d506264c47e5c979cf58f9ef1e2b31d
--- /dev/null
+++ b/multidb-portal/portal/bin/kafka
@@ -0,0 +1,113 @@
+#!/bin/bash
+
+# Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+# Description: Kafka Operation Script
+
+set -euo pipefail
+
+usage() {
+    cat << EOF
+Usage:
+    $0 start        Start Kafka
+    $0 stop         Stop Kafka
+    $0 status       Check Kafka status
+    $0 clean        Clean all Kafka data
+    $0 -h|--help    Show this help message
+
+Tips:
+    1. Requires Java 11 or later to be installed.
+EOF
+    exit 1
+}
+
+# Check if the first argument is provided
+if [ $# -eq 0 ]; then
+    usage
+fi
+
+# Check for help argument
+if [[ "$1" == "-h" || "$1" == "--help" ]]; then
+    usage
+fi
+
+# Function to check Java version
+check_java_version() {
+    local java_version
+    local version_num
+
+    if ! java_version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}'); then
+ echo "Error: Java is not installed or not in PATH"
+ return 1
+ fi
+
+ if [[ "$java_version" =~ ^1\. ]]; then
+ version_num=$(echo "$java_version" | cut -d. -f2)
+ else
+ version_num=$(echo "$java_version" | cut -d. -f1)
+ fi
+
+ if [ "$version_num" -lt 11 ]; then
+ echo "Error: Java 11 or later is required (found Java $java_version)"
+ return 1
+ fi
+ return 0
+}
+
+# Verify Java is available and version >= 11
+if ! check_java_version; then
+ exit 1
+fi
+
+OPERATION="$1"
+shift
+
+# Validate operation
+case "$OPERATION" in
+ status|start|stop|clean)
+ ;;
+ *)
+ echo "Error: Invalid operation name '$OPERATION'"
+ usage
+ ;;
+esac
+
+# Build Java command arguments
+ARGS=("--kafka" "$OPERATION")
+
+if [ $# -gt 0 ]; then
+ echo "Error: Unknown parameter '$1'"
+ usage
+fi
+
+# Change to project root
+SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" &> /dev/null && pwd)
+PROJECT_ROOT=$(dirname "$SCRIPT_DIR")
+cd "$PROJECT_ROOT" || {
+ echo "Error: Failed to change to project directory"
+ exit 1
+}
+
+# Find the JAR file
+JAVA_PROGRAM=$(ls openGauss-portal-*.jar 2> /dev/null | head -n 1)
+if [[ -z "$JAVA_PROGRAM" ]]; then
+ echo "Error: No openGauss-portal-*.jar file found in $PROJECT_ROOT"
+ exit 1
+fi
+
+# Special handling for clean operation
+if [[ "$OPERATION" == "clean" ]]; then
+ read -p "WARNING: This will remove all Kafka data. Are you sure? (y/n) " -n 1 -r
+ echo
+ if [[ ! $REPLY =~ ^[Yy]$ ]]; then
+ echo "Clean operation cancelled."
+ exit 0
+ fi
+ echo "Warning: Force clean mode enabled - all Kafka data will be removed immediately"
+fi
+
+# Execute Java program
+exec java -Dfile.encoding=UTF-8 -jar "$JAVA_PROGRAM" "${ARGS[@]}"
\ No newline at end of file
diff --git a/multidb-portal/portal/bin/migration b/multidb-portal/portal/bin/migration
new file mode 100644
index 0000000000000000000000000000000000000000..c331ff47994a0b4954aeddcc5d7cecd0a98a9526
--- /dev/null
+++ b/multidb-portal/portal/bin/migration
@@ -0,0 +1,147 @@
+#!/bin/bash
+
+# Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+# Description: Migration Operation Script
+
+set -euo pipefail
+
+usage() {
+    cat << EOF
+Usage:
+    $0 start <task_id>                  Start migration
+    $0 status <task_id>                 Check migration status
+    $0 status <task_id> [-d|--detail]   Generate migration detail csv file
+    $0 stop <task_id>                   Stop migration
+    $0 start_incremental <task_id>      Start incremental migration
+    $0 resume_incremental <task_id>     Resume incremental migration
+    $0 stop_incremental <task_id>       Stop incremental migration
+    $0 restart_incremental <task_id>    Restart incremental migration
+    $0 start_reverse <task_id>          Start reverse migration
+    $0 resume_reverse <task_id>         Resume reverse migration
+    $0 stop_reverse <task_id>           Stop reverse migration
+    $0 restart_reverse <task_id>        Restart reverse migration
+    $0 -h|--help                        Show this help message
+
+ Examples:
+ $0 start 1
+ $0 status 1
+ $0 status 1 -d
+ $0 stop 1
+ $0 start_incremental 1
+
+Tips:
+ 1. Requires Java 11 or later to be installed.
+ 2. Task ID must correspond to an existing migration task.
+EOF
+ exit 1
+}
+
+# Function to check if task exists
+task_exists() {
+ local task_id=$1
+ local task_dir="$PROJECT_ROOT/workspace/task_$task_id"
+ [[ -d "$task_dir" ]]
+}
+
+# Check if the first argument is provided
+if [ $# -eq 0 ]; then
+ usage
+fi
+
+# Check for help argument
+if [[ "$1" == "-h" || "$1" == "--help" ]]; then
+ usage
+fi
+
+# Function to check Java version
+check_java_version() {
+ local java_version
+ local version_num
+
+ if ! java_version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}'); then
+ echo "Error: Java is not installed or not in PATH"
+ return 1
+ fi
+
+ if [[ "$java_version" =~ ^1\. ]]; then
+ version_num=$(echo "$java_version" | cut -d. -f2)
+ else
+ version_num=$(echo "$java_version" | cut -d. -f1)
+ fi
+
+ if [ "$version_num" -lt 11 ]; then
+ echo "Error: Java 11 or later is required (found Java $java_version)"
+ return 1
+ fi
+ return 0
+}
+
+# Verify Java is available and version >= 11
+if ! check_java_version; then
+ exit 1
+fi
+
+# Change to project root
+SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" &> /dev/null && pwd)
+PROJECT_ROOT=$(dirname "$SCRIPT_DIR")
+cd "$PROJECT_ROOT" || {
+ echo "Error: Failed to change to project directory"
+ exit 1
+}
+
+# Ensure workspace directory exists
+mkdir -p "$PROJECT_ROOT/workspace"
+
+OPERATION="$1"
+shift
+
+# Validate operation and arguments
+case "$OPERATION" in
+ start|status|stop|start_incremental|stop_incremental|start_reverse|stop_reverse| \
+ resume_incremental|restart_incremental|resume_reverse|restart_reverse)
+ if [ $# -lt 1 ]; then
+ echo "Error: '$OPERATION' operation requires task_id"
+ usage
+ fi
+ TASK_ID="$1"
+
+ # Check if task already exists
+ if ! task_exists "$TASK_ID"; then
+ echo "Error: Task $TASK_ID does not exist in $PROJECT_ROOT/workspace/task_$TASK_ID"
+ exit 1
+ fi
+ shift
+ ;;
+ *)
+ echo "Error: Invalid operation name '$OPERATION'"
+ usage
+ ;;
+esac
+
+# Build Java command arguments
+ARGS=("--migration" "$OPERATION" "$TASK_ID")
+if [ $# -gt 0 ] && [[ "$1" == "-d" || "$1" == "--detail" ]]; then
+ ARGS+=("--detail")
+ shift
+fi
+
+if [ $# -gt 0 ]; then
+ echo "Error: Too many arguments provided"
+ usage
+fi
+
+# Find the JAR file
+JAVA_PROGRAM=$(ls openGauss-portal-*.jar 2> /dev/null | head -n 1)
+if [[ -z "$JAVA_PROGRAM" ]]; then
+ echo "Error: No openGauss-portal-*.jar file found in $PROJECT_ROOT"
+ exit 1
+fi
+
+# Set workspace directory
+WORKSPACE_DIR="$PROJECT_ROOT/workspace/task_$TASK_ID"
+
+# Execute Java program in the workspace directory
+(cd "$WORKSPACE_DIR" && exec java -Dfile.encoding=UTF-8 -jar "$PROJECT_ROOT/$JAVA_PROGRAM" "${ARGS[@]}")
\ No newline at end of file
diff --git a/multidb-portal/portal/bin/mode b/multidb-portal/portal/bin/mode
new file mode 100644
index 0000000000000000000000000000000000000000..2ca4b0e7e4596879a62691872b78d030287b79b7
--- /dev/null
+++ b/multidb-portal/portal/bin/mode
@@ -0,0 +1,145 @@
+#!/bin/bash
+
+# Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+# Description: Migration Mode Management Script
+
+set -euo pipefail
+
+usage() {
+    cat << EOF
+Usage:
+    $0 list                    List all migration modes
+    $0 template                Generate a mode template file
+    $0 add <mode_file>         Add a new migration mode from file
+    $0 update <mode_file>      Update an existing migration mode
+    $0 delete <mode_name>      Delete a migration mode
+    $0 -h|--help               Show this help message
+
+ Examples:
+ $0 list
+ $0 template
+ $0 add ../tmp/mode-template.properties
+ $0 update ../tmp/mode-template.properties
+ $0 delete old_mode
+
+Tips:
+ 1. Requires Java 11 or later to be installed.
+ 2. Mode file should be a valid properties configuration.
+ 3. For 'add' and 'update' operations, the file path must be specified.
+ 4. For 'delete' operation, the mode name must be specified.
+EOF
+ exit 1
+}
+
+# Check if the first argument is provided
+if [ $# -eq 0 ]; then
+ usage
+fi
+
+# Check for help argument
+if [[ "$1" == "-h" || "$1" == "--help" ]]; then
+ usage
+fi
+
+# Function to check Java version
+check_java_version() {
+ local java_version
+ local version_num
+
+ if ! java_version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}'); then
+ echo "Error: Java is not installed or not in PATH"
+ return 1
+ fi
+
+ if [[ "$java_version" =~ ^1\. ]]; then
+ version_num=$(echo "$java_version" | cut -d. -f2)
+ else
+ version_num=$(echo "$java_version" | cut -d. -f1)
+ fi
+
+ if [ "$version_num" -lt 11 ]; then
+ echo "Error: Java 11 or later is required (found Java $java_version)"
+ return 1
+ fi
+ return 0
+}
+
+# Verify Java is available and version >= 11
+if ! check_java_version; then
+ exit 1
+fi
+
+OPERATION="$1"
+shift
+
+# Validate operation and arguments
+case "$OPERATION" in
+ list|template)
+ if [ $# -gt 0 ]; then
+ echo "Error: '$OPERATION' operation does not require additional arguments"
+ usage
+ fi
+ ;;
+ add|update)
+ if [ $# -eq 0 ]; then
+ echo "Error: '$OPERATION' operation requires a mode file path"
+ usage
+ fi
+ MODE_FILE="$1"
+ if [ ! -f "$MODE_FILE" ]; then
+ echo "Error: Mode file '$MODE_FILE' does not exist or is not readable"
+ exit 1
+ fi
+ shift
+ ;;
+ delete)
+ if [ $# -eq 0 ]; then
+ echo "Error: 'delete' operation requires a mode name"
+ usage
+ fi
+ MODE_NAME="$1"
+ shift
+ ;;
+ *)
+ echo "Error: Invalid operation name '$OPERATION'"
+ usage
+ ;;
+esac
+
+if [ $# -gt 0 ]; then
+ echo "Error: Too many arguments provided"
+ usage
+fi
+
+# Build Java command arguments
+ARGS=("--mode" "$OPERATION")
+
+case "$OPERATION" in
+ add|update)
+ ARGS+=("$MODE_FILE")
+ ;;
+ delete)
+ ARGS+=("$MODE_NAME")
+ ;;
+esac
+
+# Change to project root
+SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" &> /dev/null && pwd)
+PROJECT_ROOT=$(dirname "$SCRIPT_DIR")
+cd "$PROJECT_ROOT" || {
+ echo "Error: Failed to change to project directory"
+ exit 1
+}
+
+# Find the JAR file
+JAVA_PROGRAM=$(ls openGauss-portal-*.jar 2> /dev/null | head -n 1)
+if [[ -z "$JAVA_PROGRAM" ]]; then
+ echo "Error: No openGauss-portal-*.jar file found in $PROJECT_ROOT"
+ exit 1
+fi
+
+# Execute Java program
+exec java -Dfile.encoding=UTF-8 -jar "$JAVA_PROGRAM" "${ARGS[@]}"
\ No newline at end of file
diff --git a/multidb-portal/portal/bin/task b/multidb-portal/portal/bin/task
new file mode 100644
index 0000000000000000000000000000000000000000..a31f36c3422f38d87594cf9f2f571c9233ff6684
--- /dev/null
+++ b/multidb-portal/portal/bin/task
@@ -0,0 +1,174 @@
+#!/bin/bash
+
+# Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+# Description: Migration Task Management Script
+
+set -euo pipefail
+
+usage() {
+    cat << EOF
+Usage:
+    $0 list                                 List all migration tasks
+    $0 create <task_id> <source_db_type>    Create a new migration task
+    $0 delete <task_id>                     Delete a migration task
+    $0 -h|--help                            Show this help message
+
+ Supported Source DB Types:
+ mysql/MySQL
+ postgresql/PostgreSQL
+
+ Examples:
+ $0 list
+ $0 create 1 mysql
+ $0 create 2 PostgreSQL
+ $0 delete 1
+
+Tips:
+ 1. Requires Java 11 or later to be installed.
+ 2. Task ID must be unique.
+ 3. Source database type must be one of the supported types.
+EOF
+ exit 1
+}
+
+# Function to check if task exists
+task_exists() {
+ local task_id=$1
+ local task_dir="$PROJECT_ROOT/workspace/task_$task_id"
+ [[ -d "$task_dir" ]]
+}
+
+# Check if the first argument is provided
+if [ $# -eq 0 ]; then
+ usage
+fi
+
+# Check for help argument
+if [[ "$1" == "-h" || "$1" == "--help" ]]; then
+ usage
+fi
+
+# Function to check Java version
+check_java_version() {
+ local java_version
+ local version_num
+
+ if ! java_version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}'); then
+ echo "Error: Java is not installed or not in PATH"
+ return 1
+ fi
+
+ if [[ "$java_version" =~ ^1\. ]]; then
+ version_num=$(echo "$java_version" | cut -d. -f2)
+ else
+ version_num=$(echo "$java_version" | cut -d. -f1)
+ fi
+
+ if [ "$version_num" -lt 11 ]; then
+ echo "Error: Java 11 or later is required (found Java $java_version)"
+ return 1
+ fi
+ return 0
+}
+
+# Verify Java is available and version >= 11
+if ! check_java_version; then
+ exit 1
+fi
+
+# Change to project root
+SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" &> /dev/null && pwd)
+PROJECT_ROOT=$(dirname "$SCRIPT_DIR")
+cd "$PROJECT_ROOT" || {
+ echo "Error: Failed to change to project directory"
+ exit 1
+}
+
+# Ensure workspace directory exists
+mkdir -p "$PROJECT_ROOT/workspace"
+
+OPERATION="$1"
+shift
+
+# Validate operation and arguments
+case "$OPERATION" in
+ list)
+ if [ $# -gt 0 ]; then
+ echo "Error: 'list' operation does not require additional arguments"
+ usage
+ fi
+ ;;
+ create)
+ if [ $# -lt 2 ]; then
+ echo "Error: 'create' operation requires task_id and source_db_type"
+ usage
+ fi
+ TASK_ID="$1"
+ SOURCE_DB_TYPE="$2"
+
+ # Check if task already exists
+ if task_exists "$TASK_ID"; then
+ echo "Error: Task $TASK_ID already exists in $PROJECT_ROOT/workspace/task_$TASK_ID"
+ exit 1
+ fi
+
+ # Validate source_db_type
+ case "$SOURCE_DB_TYPE" in
+ mysql|MySQL|postgresql|PostgreSQL)
+ ;;
+ *)
+ echo "Error: Invalid source_db_type '$SOURCE_DB_TYPE'. Supported types: mysql/MySQL/postgresql/PostgreSQL"
+ usage
+ ;;
+ esac
+ shift 2
+ ;;
+ delete)
+ if [ $# -lt 1 ]; then
+ echo "Error: 'delete' operation requires task_id"
+ usage
+ fi
+ TASK_ID="$1"
+
+ # Check if task exists
+ if ! task_exists "$TASK_ID"; then
+ echo "Error: Task $TASK_ID does not exist in $PROJECT_ROOT/workspace/task_$TASK_ID"
+ exit 1
+ fi
+ shift
+ ;;
+ *)
+ echo "Error: Invalid operation name '$OPERATION'"
+ usage
+ ;;
+esac
+
+if [ $# -gt 0 ]; then
+ echo "Error: Too many arguments provided"
+ usage
+fi
+
+# Build Java command arguments
+ARGS=("--task" "$OPERATION")
+
+case "$OPERATION" in
+ create)
+ ARGS+=("$TASK_ID" "$SOURCE_DB_TYPE")
+ ;;
+ delete)
+ ARGS+=("$TASK_ID")
+ ;;
+esac
+
+# Find the JAR file
+JAVA_PROGRAM=$(ls openGauss-portal-*.jar 2> /dev/null | head -n 1)
+if [[ -z "$JAVA_PROGRAM" ]]; then
+ echo "Error: No openGauss-portal-*.jar file found in $PROJECT_ROOT"
+ exit 1
+fi
+
+# Execute Java program
+exec java -Dfile.encoding=UTF-8 -jar "$JAVA_PROGRAM" "${ARGS[@]}"
\ No newline at end of file
diff --git a/multidb-portal/portal/bin/uninstall b/multidb-portal/portal/bin/uninstall
new file mode 100644
index 0000000000000000000000000000000000000000..d1c299b32d3bf45188bb5c2f5b37bb86fe156db6
--- /dev/null
+++ b/multidb-portal/portal/bin/uninstall
@@ -0,0 +1,98 @@
+#!/bin/bash
+
+# Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+# Description: Migration Tool Uninstallation Script
+
+set -euo pipefail
+
+usage() {
+    cat << EOF
+Usage:
+    $0 tools        Uninstall the migration tools
+    $0 -h|--help    Show this help message
+
+Tips:
+    1. Requires Java 11 or later to be installed.
+EOF
+    exit 1
+}
+
+# Check if the first argument is provided
+if [ $# -eq 0 ]; then
+    usage
+fi
+
+# Check for help argument
+if [[ "$1" == "-h" || "$1" == "--help" ]]; then
+    usage
+fi
+
+# Function to check Java version
+check_java_version() {
+    local java_version
+    local version_num
+
+    if ! java_version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}'); then
+ echo "Error: Java is not installed or not in PATH"
+ return 1
+ fi
+
+ if [[ "$java_version" =~ ^1\. ]]; then
+ version_num=$(echo "$java_version" | cut -d. -f2)
+ else
+ version_num=$(echo "$java_version" | cut -d. -f1)
+ fi
+
+ if [ "$version_num" -lt 11 ]; then
+ echo "Error: Java 11 or later is required (found Java $java_version)"
+ return 1
+ fi
+ return 0
+}
+
+# Verify Java is available and version >= 11
+if ! check_java_version; then
+ exit 1
+fi
+
+COMPONENT="$1"
+shift
+
+# Validate component - only "tools" is supported for uninstall
+case "$COMPONENT" in
+ tools)
+ ;;
+ *)
+ echo "Error: Invalid component name '$COMPONENT'. Only 'tools' can be uninstalled."
+ usage
+ ;;
+esac
+
+# Build Java command arguments
+ARGS=("--uninstall" "$COMPONENT")
+
+if [ $# -gt 0 ]; then
+ echo "Error: Unknown parameter '$1'"
+ usage
+fi
+
+# Change to project root
+SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" &> /dev/null && pwd)
+PROJECT_ROOT=$(dirname "$SCRIPT_DIR")
+cd "$PROJECT_ROOT" || {
+ echo "Error: Failed to change to project directory"
+ exit 1
+}
+
+# Find the JAR file
+JAVA_PROGRAM=$(ls openGauss-portal-*.jar 2> /dev/null | head -n 1)
+if [[ -z "$JAVA_PROGRAM" ]]; then
+ echo "Error: No openGauss-portal-*.jar file found in $PROJECT_ROOT"
+ exit 1
+fi
+
+# Execute Java program
+exec java -Dfile.encoding=UTF-8 -jar "$JAVA_PROGRAM" "${ARGS[@]}"
\ No newline at end of file
diff --git a/multidb-portal/portal/config/application.properties b/multidb-portal/portal/config/application.properties
new file mode 100644
index 0000000000000000000000000000000000000000..6bf634123446d3f1507d0116963850131139dc49
--- /dev/null
+++ b/multidb-portal/portal/config/application.properties
@@ -0,0 +1,2 @@
+system.name=openEuler20.03
+system.arch=aarch64
diff --git a/multidb-portal/src/main/java/org/opengauss/Main.java b/multidb-portal/src/main/java/org/opengauss/Main.java
new file mode 100644
index 0000000000000000000000000000000000000000..dbacd4c80d4cc3732d30c4fd8116850e693ac1b0
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/Main.java
@@ -0,0 +1,72 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss;
+
+import io.quarkus.runtime.Quarkus;
+import io.quarkus.runtime.annotations.QuarkusMain;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.ParseException;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.command.Command;
+import org.opengauss.command.CommandFactory;
+import org.opengauss.command.HelpCommand;
+import org.opengauss.command.parser.CommandParser;
+import org.opengauss.handler.PortalExceptionHandler;
+
+/**
+ * Main class
+ *
+ * @since 2025/2/27
+ */
+@QuarkusMain
+public class Main {
+ private static final Logger LOGGER = LogManager.getLogger(Main.class);
+
+ private static String[] args;
+
+ /**
+ * Main method
+ *
+ * @param args command line arguments
+ */
+ public static void main(String[] args) {
+ Thread.currentThread().setUncaughtExceptionHandler(new PortalExceptionHandler());
+ Main.args = args;
+ Command command = parseCommand(args);
+ if (command != null) {
+ command.execute();
+ }
+ }
+
+ /**
+ * Start quarkus
+ */
+ public static void startQuarkus() {
+ Quarkus.run(args);
+ }
+
+ /**
+ * Stop quarkus
+ */
+ public static void stopQuarkus() {
+ Quarkus.asyncExit();
+ }
+
+ private static Command parseCommand(String[] args) {
+ Command command = null;
+ try {
+ CommandLine commandLine = new CommandParser().parse(args);
+ command = CommandFactory.createCommand(commandLine);
+ } catch (ParseException e) {
+ LOGGER.error("Failed to parse command line arguments:", e);
+ new HelpCommand().execute();
+ } catch (IllegalArgumentException e) {
+ LOGGER.error("Invalid command: ", e);
+ new HelpCommand().execute();
+ }
+ return command;
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/Command.java b/multidb-portal/src/main/java/org/opengauss/command/Command.java
new file mode 100644
index 0000000000000000000000000000000000000000..556eaddde8a8ebb54c0e1eabe5725d0cd85d9340
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/Command.java
@@ -0,0 +1,17 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command;
+
+/**
+ * command interface
+ *
+ * @since 2025/3/26
+ */
+public interface Command {
+ /**
+ * execute command
+ */
+ void execute();
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/CommandFactory.java b/multidb-portal/src/main/java/org/opengauss/command/CommandFactory.java
new file mode 100644
index 0000000000000000000000000000000000000000..a2b99c6836053b964b0ebfad92307b9d8495a534
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/CommandFactory.java
@@ -0,0 +1,87 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command;
+
+import org.apache.commons.cli.CommandLine;
+
+import java.util.Map;
+import java.util.function.Function;
+
+/**
+ * Command factory
+ *
+ * @since 2025/3/26
+ */
+public class CommandFactory {
+    private static final Map<String, Function<CommandLine, Command>> COMMAND_MAP = Map.of(
+ "help", CommandFactory::generateHelpCommand,
+ "install", CommandFactory::generateInstallCommand,
+ "uninstall", CommandFactory::generateUninstallCommand,
+ "kafka", CommandFactory::generateKafkaCommand,
+ "mode", CommandFactory::generateModeCommand,
+ "task", CommandFactory::generateTaskCommand,
+ "migration", CommandFactory::generateMigrationCommand,
+ "config_description", CommandFactory::generateConfigDescCommand
+ );
+
+ /**
+ * Create command
+ *
+ * @param cmd command line
+ * @return command
+ */
+ public static Command createCommand(CommandLine cmd) {
+        for (Map.Entry<String, Function<CommandLine, Command>> entry : COMMAND_MAP.entrySet()) {
+ if (cmd.hasOption(entry.getKey())) {
+ return entry.getValue().apply(cmd);
+ }
+ }
+
+ throw new IllegalArgumentException("Invalid command");
+ }
+
+ private static HelpCommand generateHelpCommand(CommandLine cmd) {
+ return new HelpCommand();
+ }
+
+ private static InstallCommand generateInstallCommand(CommandLine cmd) {
+ if (cmd.hasOption("force")) {
+ return new InstallCommand(cmd.getOptionValue("install"), true);
+ }
+ return new InstallCommand(cmd.getOptionValue("install"), false);
+ }
+
+ private static UninstallCommand generateUninstallCommand(CommandLine cmd) {
+ return new UninstallCommand(cmd.getOptionValue("uninstall"));
+ }
+
+ private static KafkaCommand generateKafkaCommand(CommandLine cmd) {
+ return new KafkaCommand(cmd.getOptionValue("kafka"));
+ }
+
+ private static ModeCommand generateModeCommand(CommandLine cmd) {
+ return new ModeCommand(cmd.getOptionValues("mode"));
+ }
+
+ private static TaskCommand generateTaskCommand(CommandLine cmd) {
+ return new TaskCommand(cmd.getOptionValues("task"));
+ }
+
+ private static MigrationCommand generateMigrationCommand(CommandLine cmd) {
+ String[] args = cmd.getOptionValues("migration");
+ if (args == null || args.length != 2) {
+ throw new IllegalArgumentException("Command migration requires two arguments: operation taskId");
+ }
+ boolean hasDetail = cmd.hasOption("detail");
+ if (hasDetail) {
+ return new MigrationCommand(args[0], args[1], true);
+ }
+ return new MigrationCommand(args[0], args[1]);
+ }
+
+ private static ConfigDescCommand generateConfigDescCommand(CommandLine cmd) {
+ return new ConfigDescCommand(cmd.getOptionValue("config_description"));
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/ConfigDescCommand.java b/multidb-portal/src/main/java/org/opengauss/command/ConfigDescCommand.java
new file mode 100644
index 0000000000000000000000000000000000000000..24bfdbc5b58267f3a1f83e58a6fdab8a51f48fa8
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/ConfigDescCommand.java
@@ -0,0 +1,63 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.command.receiver.ConfigDescCommandReceiver;
+import org.opengauss.constants.TaskConstants;
+import org.opengauss.enums.DatabaseType;
+
+import java.util.Locale;
+
+/**
+ * config description command
+ *
+ * @since 2025/6/24
+ */
+public class ConfigDescCommand implements Command {
+ private static final Logger LOGGER = LogManager.getLogger(ConfigDescCommand.class);
+
+ private final String databaseType;
+
+ public ConfigDescCommand(String databaseType) {
+ if (databaseType == null) {
+ throw new IllegalArgumentException("Missing argument: databaseType");
+ }
+ this.databaseType = databaseType;
+ }
+
+ @Override
+ public void execute() {
+ DatabaseType type = parseDatabaseType();
+ ConfigDescCommandReceiver commandReceiver = new ConfigDescCommandReceiver();
+
+ switch (type) {
+ case MYSQL:
+ LOGGER.info("Start command to get MySQL migration configuration description");
+ commandReceiver.mysqlConfigDesc();
+ break;
+ case POSTGRESQL:
+ LOGGER.info("Start command to get PostgreSQL migration configuration description");
+ commandReceiver.pgsqlConfigDesc();
+ break;
+ default:
+ throw new IllegalArgumentException("Unsupported database type: " + databaseType);
+ }
+ }
+
+ private DatabaseType parseDatabaseType() {
+ try {
+ DatabaseType type = DatabaseType.valueOf(databaseType.toUpperCase(Locale.ROOT));
+ if (TaskConstants.SUPPORTED_SOURCE_DB_TYPES.contains(type)) {
+ return type;
+ } else {
+ throw new IllegalArgumentException("Unsupported database type: " + databaseType);
+ }
+ } catch (IllegalArgumentException e) {
+ throw new IllegalArgumentException("Unsupported database type: " + databaseType);
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/HelpCommand.java b/multidb-portal/src/main/java/org/opengauss/command/HelpCommand.java
new file mode 100644
index 0000000000000000000000000000000000000000..c0d0fc8e87a609cabf4efe30bfab15c25241a54e
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/HelpCommand.java
@@ -0,0 +1,23 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command;
+
+import org.apache.commons.cli.HelpFormatter;
+import org.opengauss.command.parser.CommandParser;
+
+/**
+ * help command
+ *
+ * @since 2025/3/26
+ */
+public class HelpCommand implements Command {
+ public HelpCommand() {
+ }
+
+ @Override
+ public void execute() {
+ new HelpFormatter().printHelp("数据迁移工具", new CommandParser().getOptions());
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/InstallCommand.java b/multidb-portal/src/main/java/org/opengauss/command/InstallCommand.java
new file mode 100644
index 0000000000000000000000000000000000000000..667a82d267c543cad868a3d10dd784e58bd03793
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/InstallCommand.java
@@ -0,0 +1,67 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.command.receiver.InstallCommandReceiver;
+
+/**
+ * install command
+ *
+ * @since 2025/3/26
+ */
+public class InstallCommand implements Command {
+ private static final Logger LOGGER = LogManager.getLogger(InstallCommand.class);
+
+ private final String component;
+ private final boolean isForce;
+
+ InstallCommand(String component, boolean isForce) {
+ this.component = component;
+ this.isForce = isForce;
+ }
+
+ @Override
+ public void execute() {
+ InstallCommandReceiver commandReceiver = new InstallCommandReceiver();
+ switch (component) {
+ case "dependencies":
+ LOGGER.info("Start command to install dependencies");
+ commandReceiver.dependencies(isForce);
+ break;
+ case "tools":
+ LOGGER.info("Start command to install migration tools");
+ commandReceiver.migrationTools();
+ break;
+ case "chameleon":
+ LOGGER.info("Start command to install chameleon");
+ commandReceiver.chameleon();
+ break;
+ case "full_migration_tool":
+ LOGGER.info("Start command to install full-migration-tool");
+ commandReceiver.fullMigrationTool();
+ break;
+ case "data_checker":
+ LOGGER.info("Start command to install data-checker");
+ commandReceiver.dataChecker();
+ break;
+ case "debezium":
+ LOGGER.info("Start command to install debezium");
+ commandReceiver.debezium();
+ break;
+ case "kafka":
+ LOGGER.info("Start command to install kafka");
+ commandReceiver.kafka();
+ break;
+ case "check":
+ LOGGER.info("Start command to check installation");
+ commandReceiver.check();
+ break;
+ default:
+ throw new IllegalArgumentException("Unsupported component: " + component + " for install");
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/KafkaCommand.java b/multidb-portal/src/main/java/org/opengauss/command/KafkaCommand.java
new file mode 100644
index 0000000000000000000000000000000000000000..46a7c3d3f273679aa916e37a47fcd62d67aaa280
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/KafkaCommand.java
@@ -0,0 +1,50 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.command.receiver.KafkaCommandReceiver;
+
+/**
+ * Kafka command
+ *
+ * @since 2025/3/26
+ */
+public class KafkaCommand implements Command {
+ private static final Logger LOGGER = LogManager.getLogger(KafkaCommand.class);
+
+ private final String operation;
+
+ KafkaCommand(String operation) {
+ this.operation = operation;
+ }
+
+ @Override
+ public void execute() {
+ KafkaCommandReceiver commandReceiver = new KafkaCommandReceiver();
+
+ switch (operation) {
+ case "start":
+ LOGGER.info("Start command to start Kafka");
+ commandReceiver.start();
+ break;
+ case "stop":
+ LOGGER.info("Start command to stop Kafka");
+ commandReceiver.stop();
+ break;
+ case "status":
+ LOGGER.info("Start command to get Kafka status");
+ commandReceiver.status();
+ break;
+ case "clean":
+ LOGGER.info("Start command to clean Kafka data");
+ commandReceiver.clean();
+ break;
+ default:
+ throw new IllegalArgumentException("Unsupported Kafka operation: " + operation);
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/MigrationCommand.java b/multidb-portal/src/main/java/org/opengauss/command/MigrationCommand.java
new file mode 100644
index 0000000000000000000000000000000000000000..850ed2a07ab939dc83082ff8bf68e8a713cb9174
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/MigrationCommand.java
@@ -0,0 +1,90 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.command.receiver.MigrationCommandReceiver;
+import org.opengauss.utils.StringUtils;
+
+/**
+ * migration command
+ *
+ * @since 2025/3/26
+ */
+public class MigrationCommand implements Command {
+ private static final Logger LOGGER = LogManager.getLogger(MigrationCommand.class);
+
+ private final String operation;
+ private final String taskId;
+ private final boolean isDetail;
+
+ MigrationCommand(String operation, String taskId) {
+ this(operation, taskId, false);
+ }
+
+ MigrationCommand(String operation, String taskId, boolean isDetail) {
+ this.operation = operation;
+ this.taskId = taskId;
+ this.isDetail = isDetail;
+ }
+
+ @Override
+ public void execute() {
+ validateArgs();
+
+ MigrationCommandReceiver migrationExecutor = new MigrationCommandReceiver(taskId);
+ switch (operation) {
+ case "start":
+ LOGGER.info("Start command to start migration");
+ migrationExecutor.start();
+ break;
+ case "status":
+ LOGGER.info("Start command to check migration status");
+ migrationExecutor.status(isDetail);
+ break;
+ case "stop":
+ LOGGER.info("Start command to stop migration");
+ migrationExecutor.stop();
+ break;
+ case "stop_incremental":
+ LOGGER.info("Start command to stop incremental migration");
+ migrationExecutor.stopIncremental();
+ break;
+ case "resume_incremental":
+ LOGGER.info("Start command to resume incremental migration");
+ migrationExecutor.resumeIncremental();
+ break;
+ case "restart_incremental":
+ LOGGER.info("Start command to restart incremental migration");
+ migrationExecutor.restartIncremental();
+ break;
+ case "start_reverse":
+ LOGGER.info("Start command to start reverse migration");
+ migrationExecutor.startReverse();
+ break;
+ case "resume_reverse":
+ LOGGER.info("Start command to resume reverse migration");
+ migrationExecutor.resumeReverse();
+ break;
+ case "restart_reverse":
+ LOGGER.info("Start command to restart reverse migration");
+ migrationExecutor.restartReverse();
+ break;
+ case "stop_reverse":
+ LOGGER.info("Start command to stop reverse migration");
+ migrationExecutor.stopReverse();
+ break;
+ default:
+ throw new IllegalArgumentException("Unsupported migration operation: " + operation);
+ }
+ }
+
+ private void validateArgs() {
+ if (StringUtils.isNullOrBlank(operation) || StringUtils.isNullOrBlank(taskId)) {
+ throw new IllegalArgumentException("Migration operation and workspace id cannot be empty");
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/ModeCommand.java b/multidb-portal/src/main/java/org/opengauss/command/ModeCommand.java
new file mode 100644
index 0000000000000000000000000000000000000000..3994293e334a9ad317869ec9170b746dd031809e
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/ModeCommand.java
@@ -0,0 +1,70 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.command.receiver.ModeCommandReceiver;
+
+/**
+ * mode command
+ *
+ * @since 2025/3/26
+ */
+public class ModeCommand implements Command {
+ private static final Logger LOGGER = LogManager.getLogger(ModeCommand.class);
+ private final String[] args;
+
+ ModeCommand(String[] args) {
+ this.args = args;
+ }
+
+ @Override
+ public void execute() {
+ validateArgs(args);
+
+ ModeCommandReceiver commandReceiver = new ModeCommandReceiver();
+ String operation = args[0];
+ switch (operation) {
+ case "list":
+ LOGGER.info("Start command to list migration modes");
+ commandReceiver.list();
+ break;
+ case "add":
+ LOGGER.info("Start command to add migration mode");
+ validateOptionArgs(args, "add");
+ commandReceiver.add(args[1]);
+ break;
+ case "update":
+ LOGGER.info("Start command to update migration mode");
+ validateOptionArgs(args, "update");
+ commandReceiver.update(args[1]);
+ break;
+ case "delete":
+ LOGGER.info("Start command to delete migration mode");
+ validateOptionArgs(args, "delete");
+ commandReceiver.delete(args[1]);
+ break;
+ case "template":
+ LOGGER.info("Start command to get mode template file content");
+ commandReceiver.template();
+ break;
+ default:
+ throw new IllegalArgumentException("Unsupported migration mode operation: " + operation);
+ }
+ }
+
+ private void validateArgs(String[] args) {
+ if (args == null || args.length == 0) {
+ throw new IllegalArgumentException("Missing argument for command: mode");
+ }
+ }
+
+ private void validateOptionArgs(String[] args, String optionName) {
+ if (args.length < 2) {
+ throw new IllegalArgumentException("Missing argument for command: mode " + optionName);
+ }
+ }
+}
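ModeCommand and TaskCommand share a validate-then-dispatch shape: check that an operation was given, then guard each sub-operation that needs an extra argument before calling the receiver. A minimal standalone sketch of that pattern (class and return strings here are illustrative, not part of the portal):

```java
// Minimal sketch of the validate-then-dispatch pattern used by ModeCommand and
// TaskCommand. The returned strings stand in for the receiver calls.
public class ArgDispatchDemo {
    /** Dispatches args[0] as the operation, validating argument counts first. */
    public static String dispatch(String[] args) {
        if (args == null || args.length == 0) {
            throw new IllegalArgumentException("Missing argument for command: mode");
        }
        String operation = args[0];
        switch (operation) {
            case "list":
                return "listing modes";
            case "add":
            case "update":
            case "delete":
                // These operations need a second argument, like validateOptionArgs
                if (args.length < 2) {
                    throw new IllegalArgumentException("Missing argument for command: mode " + operation);
                }
                return operation + " mode " + args[1];
            default:
                throw new IllegalArgumentException("Unsupported migration mode operation: " + operation);
        }
    }

    public static void main(String[] args) {
        System.out.println(dispatch(new String[]{"add", "custom-mode.properties"}));
    }
}
```

The same guard-before-dispatch structure repeats in TaskCommand with `create` requiring two arguments and `delete` requiring one.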
diff --git a/multidb-portal/src/main/java/org/opengauss/command/TaskCommand.java b/multidb-portal/src/main/java/org/opengauss/command/TaskCommand.java
new file mode 100644
index 0000000000000000000000000000000000000000..289d588ced7bce41250fbd1365e982b9a2eece63
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/TaskCommand.java
@@ -0,0 +1,68 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.command.receiver.TaskCommandReceiver;
+
+/**
+ * Task command
+ *
+ * @since 2025/3/26
+ */
+public class TaskCommand implements Command {
+ private static final Logger LOGGER = LogManager.getLogger(TaskCommand.class);
+
+ private final String[] args;
+
+ TaskCommand(String[] args) {
+ this.args = args;
+ }
+
+ @Override
+ public void execute() {
+ validateArgs(args);
+
+ TaskCommandReceiver commandReceiver = new TaskCommandReceiver();
+ String operation = args[0];
+ switch (operation) {
+ case "list":
+ LOGGER.info("Start command to list migration tasks");
+ commandReceiver.list();
+ break;
+ case "create":
+ LOGGER.info("Start command to create migration task");
+ validateCreateArgs(args);
+ commandReceiver.create(args[1], args[2]);
+ break;
+ case "delete":
+ LOGGER.info("Start command to delete migration task");
+ validateDeleteArgs(args);
+ commandReceiver.delete(args[1]);
+ break;
+ default:
+ throw new IllegalArgumentException("Unsupported task operation: " + operation);
+ }
+ }
+
+ private void validateArgs(String[] args) {
+ if (args == null || args.length == 0) {
+ throw new IllegalArgumentException("Missing argument for command: task");
+ }
+ }
+
+ private void validateCreateArgs(String[] args) {
+ if (args.length < 3) {
+ throw new IllegalArgumentException("Missing argument for command: task create");
+ }
+ }
+
+ private void validateDeleteArgs(String[] args) {
+ if (args.length < 2) {
+ throw new IllegalArgumentException("Missing argument for command: task delete");
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/UninstallCommand.java b/multidb-portal/src/main/java/org/opengauss/command/UninstallCommand.java
new file mode 100644
index 0000000000000000000000000000000000000000..79e857f8cc7ac90cf8630787ba3b97a0c2f8cb2f
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/UninstallCommand.java
@@ -0,0 +1,34 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.command.receiver.UninstallCommandReceiver;
+
+/**
+ * uninstall command
+ *
+ * @since 2025/3/28
+ */
+public class UninstallCommand implements Command {
+ private static final Logger LOGGER = LogManager.getLogger(UninstallCommand.class);
+ private final String component;
+
+ UninstallCommand(String component) {
+ this.component = component;
+ }
+
+ @Override
+ public void execute() {
+ if (component.equals("tools")) {
+ UninstallCommandReceiver commandReceiver = new UninstallCommandReceiver();
+ LOGGER.info("Start command to uninstall migration tools");
+ commandReceiver.migrationTools();
+ } else {
+            throw new IllegalArgumentException("Unsupported component: " + component + " for uninstall");
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/parser/CommandParser.java b/multidb-portal/src/main/java/org/opengauss/command/parser/CommandParser.java
new file mode 100644
index 0000000000000000000000000000000000000000..7dda887727fdbc1488ad28e38985312f8b1e40a7
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/parser/CommandParser.java
@@ -0,0 +1,156 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command.parser;
+
+import lombok.Getter;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.DefaultParser;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+/**
+ * command parser
+ *
+ * @since 2025/3/26
+ */
+public class CommandParser {
+ @Getter
+ private final Options options = new Options();
+ private final CommandLineParser parser = new DefaultParser();
+
+ public CommandParser() {
+ buildInstallOptions();
+ buildUnInstallOptions();
+ buildKafkaOptions();
+ buildModeOptions();
+ buildTaskOptions();
+ buildMigrationOptions();
+ buildConfigDescriptionOptions();
+ buildForceOptions();
+ buildDetailOptions();
+ buildHelpOptions();
+ }
+
+ /**
+ * Parse command options
+ *
+ * @param args args
+ * @return CommandLine
+ * @throws ParseException ParseException
+ */
+ public CommandLine parse(String[] args) throws ParseException {
+ return parser.parse(options, args);
+ }
+
+ private void buildInstallOptions() {
+ Option install = Option.builder()
+ .option("i")
+ .longOpt("install")
+ .desc("install component [tools|chameleon|full_migration_tool|datachecker|debezium|"
+ + "kafka|dependencies|check] <--force>")
+ .hasArg()
+ .argName("component")
+ .build();
+
+ options.addOption(install);
+ }
+
+ private void buildUnInstallOptions() {
+ Option uninstall = Option.builder()
+ .option("u")
+ .longOpt("uninstall")
+ .desc("uninstall component [tools]")
+ .hasArg()
+ .argName("component")
+ .build();
+ options.addOption(uninstall);
+ }
+
+ private void buildKafkaOptions() {
+ Option kafka = Option.builder()
+ .option("k")
+ .longOpt("kafka")
+ .desc("Kafka operation [status|start|stop|clean]")
+ .hasArg()
+ .argName("operation")
+ .build();
+ options.addOption(kafka);
+ }
+
+    private void buildModeOptions() {
+        Option mode = Option.builder()
+                .option("mo")
+                .longOpt("mode")
+                .desc("Migration mode management "
+                        + "[list|add|delete|update|template] <operation>")
+                .numberOfArgs(Option.UNLIMITED_VALUES)
+                .argName("operation")
+                .build();
+        options.addOption(mode);
+    }
+
+    private void buildTaskOptions() {
+        Option task = Option.builder()
+                .option("t")
+                .longOpt("task")
+                .desc("Migration task management [list|create|delete] <operation>")
+                .numberOfArgs(Option.UNLIMITED_VALUES)
+                .argName("operation")
+                .build();
+        options.addOption(task);
+    }
+
+    private void buildMigrationOptions() {
+        Option migration = Option.builder()
+                .option("m")
+                .longOpt("migration")
+                .desc("Migration operation [start|status|stop|stop_incremental|resume_incremental"
+                        + "|restart_incremental|start_reverse|resume_reverse|restart_reverse"
+                        + "|stop_reverse] <operation> <task id>")
+                .numberOfArgs(2)
+                .argName("operation")
+                .build();
+        options.addOption(migration);
+    }
+
+    private void buildConfigDescriptionOptions() {
+        Option configDesc = Option.builder()
+                .option("cd")
+                .longOpt("config_description")
+                .desc("Export migration config description [mysql|pgsql]")
+                .hasArg()
+                .argName("database type")
+                .build();
+        options.addOption(configDesc);
+    }
+
+    private void buildForceOptions() {
+        Option force = Option.builder()
+                .option("f")
+                .longOpt("force")
+                .desc("force to execute install command")
+                .build();
+        options.addOption(force);
+    }
+
+    private void buildDetailOptions() {
+        Option detail = Option.builder()
+                .option("d")
+                .longOpt("detail")
+                .desc("show detailed migration status")
+                .build();
+        options.addOption(detail);
+    }
+
+    private void buildHelpOptions() {
+        Option help = Option.builder()
+                .option("h")
+                .longOpt("help")
+                .desc("show help information")
+                .build();
+        options.addOption(help);
+    }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/printer/TablePrinter.java b/multidb-portal/src/main/java/org/opengauss/command/printer/TablePrinter.java
new file mode 100644
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/printer/TablePrinter.java
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command.printer;
+
+import java.util.List;
+import java.util.Locale;
+
+/**
+ * table printer
+ *
+ * @since 2025/3/29
+ */
+public class TablePrinter {
+    private static final String PLUS_SIGN = "+";
+    private static final String MINUS_SIGN = "-";
+    private static final String PIPE_SIGN = "|";
+
+    /**
+     * Print table with given headers and rows
+     *
+     * @param headers table headers
+     * @param rows table rows
+     * @return formatted table string
+     */
+    public static String printTable(List<String> headers, List<List<String>> rows) {
+ validateArgs(headers, rows);
+
+ int[] columnWidths = new int[headers.size()];
+ for (int i = 0; i < headers.size(); i++) {
+ columnWidths[i] = headers.get(i).length();
+ }
+
+        for (List<String> row : rows) {
+ for (int i = 0; i < row.size(); i++) {
+ String cell = row.get(i);
+ if (cell != null && cell.length() > columnWidths[i]) {
+ columnWidths[i] = cell.length();
+ }
+ }
+ }
+
+ for (int i = 0; i < columnWidths.length; i++) {
+ columnWidths[i] += 2;
+ }
+
+ StringBuilder table = new StringBuilder();
+ appendLine(table, columnWidths);
+ appendRow(table, headers, columnWidths);
+ appendLine(table, columnWidths);
+
+        for (List<String> row : rows) {
+ appendRow(table, row, columnWidths);
+ }
+
+ appendLine(table, columnWidths);
+ return table.toString();
+ }
+
+ private static void appendLine(StringBuilder sb, int[] columnWidths) {
+ sb.append(PLUS_SIGN);
+ for (int width : columnWidths) {
+ sb.append(MINUS_SIGN.repeat(width));
+ sb.append(PLUS_SIGN);
+ }
+ sb.append(System.lineSeparator());
+ }
+
+    private static void appendRow(StringBuilder sb, List<String> cells, int[] columnWidths) {
+ sb.append(PIPE_SIGN);
+ for (int i = 0; i < cells.size(); i++) {
+ String cell = cells.get(i) != null ? cells.get(i) : "";
+ sb.append(String.format(" %-" + (columnWidths[i] - 1) + "s|", cell));
+ }
+ sb.append(System.lineSeparator());
+ }
+
+    private static void validateArgs(List<String> headers, List<List<String>> rows) {
+ if (headers == null || headers.isEmpty()) {
+ throw new IllegalArgumentException("Headers cannot be null or empty");
+ }
+
+ if (rows == null) {
+ throw new IllegalArgumentException("Rows cannot be null");
+ }
+
+ int headerSize = headers.size();
+ for (int i = 0; i < rows.size(); i++) {
+            List<String> row = rows.get(i);
+ if (row == null || row.size() != headerSize) {
+ throw new IllegalArgumentException(
+ String.format(Locale.ROOT, "Row %d has invalid size (header size: %d, row size: %s)",
+ i + 1, headerSize, row == null ? "null" : row.size()));
+ }
+ }
+ }
+}
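The column-width rule in TablePrinter — each column is as wide as its longest cell or header, plus two characters of padding — can be exercised on its own. This sketch copies just that computation (the class name is illustrative):

```java
import java.util.List;

// Standalone sketch of TablePrinter's width rule: column width equals the
// longest cell (or header) in that column, plus two characters of padding.
public class ColumnWidthDemo {
    public static int[] computeWidths(List<String> headers, List<List<String>> rows) {
        int[] widths = new int[headers.size()];
        for (int i = 0; i < headers.size(); i++) {
            widths[i] = headers.get(i).length();
        }
        for (List<String> row : rows) {
            for (int i = 0; i < row.size(); i++) {
                String cell = row.get(i);
                if (cell != null && cell.length() > widths[i]) {
                    widths[i] = cell.length();
                }
            }
        }
        for (int i = 0; i < widths.length; i++) {
            widths[i] += 2; // padding around the cell text
        }
        return widths;
    }

    public static void main(String[] args) {
        // "Schema Registry" (15 chars) beats the "Component" header (9 chars)
        int[] w = computeWidths(List.of("Component", "Running"),
                List.of(List.of("Schema Registry", "Y")));
        System.out.println(w[0] + "," + w[1]);
    }
}
```

The border lines in `appendLine` then repeat `-` exactly `width` times per column, so borders and padded cells always align.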
diff --git a/multidb-portal/src/main/java/org/opengauss/command/receiver/CommandReceiver.java b/multidb-portal/src/main/java/org/opengauss/command/receiver/CommandReceiver.java
new file mode 100644
index 0000000000000000000000000000000000000000..6119e7f3253d6509fae29b4e806790f2b2fc05cf
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/receiver/CommandReceiver.java
@@ -0,0 +1,13 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command.receiver;
+
+/**
+ * command receiver interface
+ *
+ * @since 2025/4/10
+ */
+public interface CommandReceiver {
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/receiver/ConfigDescCommandReceiver.java b/multidb-portal/src/main/java/org/opengauss/command/receiver/ConfigDescCommandReceiver.java
new file mode 100644
index 0000000000000000000000000000000000000000..ece6d08bc89a85640b69b4f5111acd5f625684b7
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/receiver/ConfigDescCommandReceiver.java
@@ -0,0 +1,55 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command.receiver;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.enums.TemplateConfigType;
+import org.opengauss.exceptions.PortalException;
+import org.opengauss.config.ApplicationConfig;
+import org.opengauss.utils.FileUtils;
+
+import java.io.IOException;
+
+/**
+ * Config description command receiver
+ *
+ * @since 2025/6/24
+ */
+public class ConfigDescCommandReceiver implements CommandReceiver {
+ private static final Logger LOGGER = LogManager.getLogger(ConfigDescCommandReceiver.class);
+
+ /**
+ * Export mysql config desc
+ */
+ public void mysqlConfigDesc() {
+ exportDescFile(TemplateConfigType.MYSQL_MIGRATION_CONFIG);
+ }
+
+ /**
+ * Export pgsql config desc
+ */
+ public void pgsqlConfigDesc() {
+ exportDescFile(TemplateConfigType.PGSQL_MIGRATION_CONFIG);
+ }
+
+ private void exportDescFile(TemplateConfigType configType) {
+ String configFilePath = configType.getFilePath();
+ String configDescFilePath = configType.getConfigDescFilePath();
+ String targetDirPath = ApplicationConfig.getInstance().getPortalTmpDirPath();
+ String targetConfigFilePath = String.format("%s/%s", targetDirPath, configType.getName());
+ String targetConfigDescFilePath = String.format("%s/%s", targetDirPath, configType.getConfigDescFileName());
+
+ try {
+ FileUtils.exportResource(configFilePath, targetConfigFilePath);
+ FileUtils.exportResource(configDescFilePath, targetConfigDescFilePath);
+ } catch (IOException e) {
+ throw new PortalException("Failed to export config desc file", e);
+ }
+ LOGGER.info("Config description exported successfully");
+ LOGGER.info("Config file path: {}", targetConfigFilePath);
+ LOGGER.info("Config description file path: {}", targetConfigDescFilePath);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/receiver/InstallCommandReceiver.java b/multidb-portal/src/main/java/org/opengauss/command/receiver/InstallCommandReceiver.java
new file mode 100644
index 0000000000000000000000000000000000000000..30972f97130d98596b4395165ec5d1d9f50eb613
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/receiver/InstallCommandReceiver.java
@@ -0,0 +1,240 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command.receiver;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.PortalConstants;
+import org.opengauss.exceptions.InstallException;
+import org.opengauss.migration.tools.Chameleon;
+import org.opengauss.migration.tools.DataChecker;
+import org.opengauss.migration.tools.Debezium;
+import org.opengauss.migration.tools.FullMigrationTool;
+import org.opengauss.migration.tools.Kafka;
+import org.opengauss.config.ApplicationConfig;
+import org.opengauss.utils.FileUtils;
+import org.opengauss.utils.ProcessUtils;
+
+import java.io.IOException;
+
+/**
+ * install command receiver
+ *
+ * @since 2025/3/27
+ */
+public class InstallCommandReceiver implements CommandReceiver {
+ private static final Logger LOGGER = LogManager.getLogger(InstallCommandReceiver.class);
+
+ /**
+ * install chameleon dependencies
+ *
+ * @param isForce force to install dependencies
+ **/
+ public void dependencies(boolean isForce) {
+ LOGGER.info("Start to install dependencies");
+ if (!isForce && !checkSystemAndArch()) {
+ return;
+ }
+ LOGGER.info("Check user sudo permission");
+ checkSudoPermission();
+ installDependencies();
+ LOGGER.info("Install dependencies finished");
+ }
+
+ /**
+ * install all migration tools
+ **/
+ public void migrationTools() {
+ checkLeastSpace();
+ FullMigrationTool.getInstance().install();
+ Chameleon.getInstance().install();
+ DataChecker.getInstance().install();
+ Debezium.getInstance().install();
+ Kafka.getInstance().install();
+ LOGGER.info("Install all migration tools successfully");
+ }
+
+ /**
+ * install chameleon
+ **/
+ public void chameleon() {
+ Chameleon.getInstance().install();
+ }
+
+ /**
+ * install full-migration-tool
+ **/
+ public void fullMigrationTool() {
+ FullMigrationTool.getInstance().install();
+ }
+
+ /**
+ * install debezium
+ **/
+ public void debezium() {
+ Debezium.getInstance().install();
+ }
+
+ /**
+ * install data-checker
+ **/
+ public void dataChecker() {
+ DataChecker.getInstance().install();
+ }
+
+ /**
+ * install kafka
+ **/
+ public void kafka() {
+ Kafka.getInstance().install();
+ }
+
+ /**
+ * check all migration tools
+ **/
+ public void check() {
+ boolean isAllInstalled = true;
+ if (Chameleon.getInstance().checkInstall()) {
+ LOGGER.info("Chameleon is already installed");
+ } else {
+ LOGGER.error("Chameleon is not installed");
+ isAllInstalled = false;
+ }
+
+ if (FullMigrationTool.getInstance().checkInstall()) {
+ LOGGER.info("Full-Migration tool is already installed");
+ } else {
+ LOGGER.error("Full-Migration tool is not installed");
+ isAllInstalled = false;
+ }
+
+ if (DataChecker.getInstance().checkInstall()) {
+ LOGGER.info("DataChecker is already installed");
+ } else {
+ LOGGER.error("DataChecker is not installed");
+ isAllInstalled = false;
+ }
+
+ if (Debezium.getInstance().checkInstall()) {
+ LOGGER.info("Debezium is already installed");
+ } else {
+ LOGGER.error("Debezium is not installed");
+ isAllInstalled = false;
+ }
+
+ if (Kafka.getInstance().checkInstall()) {
+ LOGGER.info("Kafka is already installed");
+ } else {
+ LOGGER.error("Kafka is not installed");
+ isAllInstalled = false;
+ }
+
+ if (isAllInstalled) {
+ LOGGER.info("All migration tools are already installed");
+ } else {
+ LOGGER.error("Some migration tools are not installed");
+ }
+ }
+
+ private void checkLeastSpace() {
+ LOGGER.info("Check space is sufficient");
+ String portalHomeDir = ApplicationConfig.getInstance().getPortalHomeDirPath();
+ try {
+ if (!FileUtils.isSpaceSufficient(portalHomeDir, PortalConstants.LEAST_SPACE_MB)) {
+                throw new InstallException("Not enough space in portal home directory to install migration tools, "
+                        + "at least " + PortalConstants.LEAST_SPACE_MB + " MB is required");
+ }
+ } catch (IOException e) {
+            throw new InstallException("Failed to check whether space is sufficient in portal home directory", e);
+ }
+ }
+
+ private boolean checkSystemAndArch() {
+ String osName = getSystemOs() + getSystemOsVersion();
+ String osArch = getOsArch();
+
+ String portalSystemName = ApplicationConfig.getInstance().getSystemName();
+ String portalSystemArch = ApplicationConfig.getInstance().getSystemArch();
+
+ if (!osName.equalsIgnoreCase(portalSystemName) || !osArch.equalsIgnoreCase(portalSystemArch)) {
+ LOGGER.warn("System and architecture do not match, current portal install package supported "
+ + "system and architecture is {}_{}", portalSystemName, portalSystemArch);
+            LOGGER.warn("Current system and architecture is {}_{}", osName, osArch);
+ LOGGER.warn("If you still want to install, you can add --force option to the end of the install command");
+ return false;
+ }
+ LOGGER.debug("System and architecture match");
+ return true;
+ }
+
+ private String getOsArch() {
+ String arch = System.getProperty("os.arch").toLowerCase();
+ if (arch.contains("aarch64")) {
+ return "aarch64";
+ } else if (arch.contains("x86_64") || arch.contains("amd64")) {
+ return "x86_64";
+ } else if (arch.contains("x86") || arch.contains("i386")) {
+ return "x86";
+ } else {
+ return arch;
+ }
+ }
+
+ private String getSystemOs() {
+ try {
+ return ProcessUtils.executeCommandWithResult(PortalConstants.COMMAND_OS).trim();
+ } catch (IOException | InterruptedException e) {
+ throw new InstallException("Failed to get system os", e);
+ }
+ }
+
+ private String getSystemOsVersion() {
+ try {
+ return ProcessUtils.executeCommandWithResult(PortalConstants.COMMAND_OS_VERSION).trim();
+ } catch (IOException | InterruptedException e) {
+ throw new InstallException("Failed to get system os version", e);
+ }
+ }
+
+ private void checkSudoPermission() {
+ try {
+ ProcessBuilder processBuilder = new ProcessBuilder(
+ "/bin/bash", "-c", "sudo -n true &> /dev/null && echo 0 || echo 1"
+ );
+ String exitCode = ProcessUtils.executeCommandWithResult(processBuilder);
+
+ if (exitCode.equals("0")) {
+ LOGGER.debug("The installation user has the sudo permission");
+ } else {
+ throw new InstallException("The installation user does not have the sudo permission, "
+ + "or a password is required.");
+ }
+ } catch (IOException | InterruptedException e) {
+ throw new InstallException("Failed to check sudo permission", e);
+ }
+ }
+
+ private void installDependencies() {
+ LOGGER.info("Check dependencies install script");
+ String installScriptName = PortalConstants.DEPENDENCIES_INSTALL_SCRIPT_NAME;
+ String installScriptDirPath = String.format("%s/%s", ApplicationConfig.getInstance().getPortalPkgDirPath(),
+ PortalConstants.DEPENDENCIES_INSTALL_SCRIPT_DIR_RELATIVE_PATH);
+ String installScriptPath = String.format("%s/%s", installScriptDirPath, installScriptName);
+ if (!FileUtils.checkFileExists(installScriptPath)) {
+ throw new InstallException("Failed to install dependencies, required file not found - "
+ + installScriptPath);
+ }
+
+ try {
+ LOGGER.info("Run dependencies install script");
+ String installLogPath = String.format("%s/execute_%s.log", installScriptDirPath, installScriptName);
+ ProcessUtils.executeShellScript(installScriptName, installScriptDirPath, installLogPath, 60000L);
+ String installLog = FileUtils.readFileContents(installLogPath);
+ LOGGER.info("Install script logs: \n{}", installLog);
+ } catch (IOException | InterruptedException e) {
+ throw new InstallException("Failed to install dependencies", e);
+ }
+ }
+}
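The `getOsArch` helper above normalizes the JVM's `os.arch` string onto the architecture names the portal packages are built for. The mapping is small enough to lift out as a standalone sketch (class name is illustrative):

```java
// Sketch of the os.arch normalization in InstallCommandReceiver: JVM architecture
// strings are folded onto the names used by the portal install packages.
public class ArchNormalizeDemo {
    public static String normalize(String arch) {
        String lower = arch.toLowerCase();
        if (lower.contains("aarch64")) {
            return "aarch64";
        } else if (lower.contains("x86_64") || lower.contains("amd64")) {
            return "x86_64";
        } else if (lower.contains("x86") || lower.contains("i386")) {
            return "x86";
        }
        return lower; // unknown architectures pass through unchanged
    }

    public static void main(String[] args) {
        System.out.println(normalize(System.getProperty("os.arch")));
    }
}
```

Note the order matters: `x86_64`/`amd64` must be checked before the bare `x86` substring, otherwise every 64-bit Intel host would be classified as 32-bit.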
diff --git a/multidb-portal/src/main/java/org/opengauss/command/receiver/KafkaCommandReceiver.java b/multidb-portal/src/main/java/org/opengauss/command/receiver/KafkaCommandReceiver.java
new file mode 100644
index 0000000000000000000000000000000000000000..4c8c753bb71d4cc1e1c1e777215a10d8c0094c7a
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/receiver/KafkaCommandReceiver.java
@@ -0,0 +1,106 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command.receiver;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.command.printer.TablePrinter;
+import org.opengauss.domain.dto.KafkaStatusDto;
+import org.opengauss.migration.tools.Kafka;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * kafka command receiver
+ *
+ * @since 2025/3/29
+ */
+public class KafkaCommandReceiver implements CommandReceiver {
+ private static final Logger LOGGER = LogManager.getLogger(KafkaCommandReceiver.class);
+
+ private final Kafka kafka;
+
+ public KafkaCommandReceiver() {
+ kafka = Kafka.getInstance();
+ }
+
+ /**
+ * start kafka processes
+ */
+ public void start() {
+ kafka.start();
+ }
+
+ /**
+ * stop kafka processes
+ */
+ public void stop() {
+ kafka.stop();
+ }
+
+ /**
+ * get kafka processes status
+ */
+ public void status() {
+        Optional<KafkaStatusDto> statusOptional = kafka.getStatusDetail();
+ if (statusOptional.isEmpty()) {
+ return;
+ }
+
+        List<String> header = new ArrayList<>();
+ header.add("Component");
+ header.add("Running");
+ header.add("Stopped");
+
+ KafkaStatusDto kafkaStatusDto = statusOptional.get();
+        List<String> row = new ArrayList<>();
+ if (kafkaStatusDto.isZookeeperRunning()) {
+ row.add("Zookeeper");
+ row.add("Y");
+ row.add("");
+ } else {
+ row.add("Zookeeper");
+ row.add("");
+ row.add("Y");
+ }
+        List<List<String>> tableInfoList = new ArrayList<>();
+ tableInfoList.add(row);
+
+ row = new ArrayList<>();
+ if (kafkaStatusDto.isKafkaRunning()) {
+ row.add("Kafka");
+ row.add("Y");
+ row.add("");
+ } else {
+ row.add("Kafka");
+ row.add("");
+ row.add("Y");
+ }
+ tableInfoList.add(row);
+
+ row = new ArrayList<>();
+ if (kafkaStatusDto.isSchemaRegistryRunning()) {
+ row.add("Schema Registry");
+ row.add("Y");
+ row.add("");
+ } else {
+ row.add("Schema Registry");
+ row.add("");
+ row.add("Y");
+ }
+ tableInfoList.add(row);
+ String table = TablePrinter.printTable(header, tableInfoList);
+ LOGGER.info("Kafka Processes Status:{}{}", System.lineSeparator(), table);
+ }
+
+ /**
+ * clean kafka logs
+ */
+ public void clean() {
+ kafka.clean();
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/receiver/MigrationCommandReceiver.java b/multidb-portal/src/main/java/org/opengauss/command/receiver/MigrationCommandReceiver.java
new file mode 100644
index 0000000000000000000000000000000000000000..7f1e802deb14718aaacc0bd873c3ffab3a2f0042
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/receiver/MigrationCommandReceiver.java
@@ -0,0 +1,272 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command.receiver;
+
+import com.opencsv.CSVWriter;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.Main;
+import org.opengauss.config.ApplicationConfig;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.DatabaseType;
+import org.opengauss.exceptions.PortalException;
+import org.opengauss.migration.MigrationManager;
+import org.opengauss.migration.helper.TaskHelper;
+import org.opengauss.migration.status.StatusManager;
+import org.opengauss.migration.status.model.ObjectStatusEntry;
+import org.opengauss.migration.workspace.TaskWorkspaceManager;
+import org.opengauss.utils.FileUtils;
+import org.opengauss.utils.PortUtils;
+import org.opengauss.utils.ProcessUtils;
+
+import java.io.FileWriter;
+import java.io.IOException;
+import java.net.SocketException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Locale;
+
+/**
+ * Migration command receiver
+ *
+ * @since 2025/3/27
+ */
+public class MigrationCommandReceiver implements CommandReceiver {
+ private static final Logger LOGGER = LogManager.getLogger(MigrationCommandReceiver.class);
+
+ private final String taskId;
+
+ public MigrationCommandReceiver(String taskId) {
+ this.taskId = taskId;
+ }
+
+ /**
+ * Start migration
+ */
+ public void start() {
+ TaskWorkspaceManager workspaceManager = new TaskWorkspaceManager();
+ TaskWorkspace taskWorkspace = new TaskWorkspace(taskId);
+
+ if (workspaceManager.isTaskRunning(taskWorkspace)) {
+ LOGGER.error("Task {} is already running", taskId);
+ return;
+ }
+
+ if (workspaceManager.checkTaskIdExists(taskId)) {
+ MigrationManager.initialize(taskWorkspace);
+ setQuarkusPort(taskWorkspace);
+ Main.startQuarkus();
+ } else {
+ LOGGER.error("Task {} does not exist", taskId);
+ }
+ }
+
+ /**
+ * Stop incremental migration
+ */
+ public void stopIncremental() {
+ TaskWorkspace taskWorkspace = new TaskWorkspace(taskId);
+ if (isTaskStopped(taskWorkspace)) {
+ return;
+ }
+ sendRequest(taskWorkspace, "stopIncremental");
+ }
+
+ /**
+ * Start reverse migration
+ */
+ public void startReverse() {
+ TaskWorkspace taskWorkspace = new TaskWorkspace(taskId);
+ if (isTaskStopped(taskWorkspace)) {
+ return;
+ }
+ sendRequest(taskWorkspace, "startReverse");
+ }
+
+ /**
+ * Restart incremental migration
+ */
+ public void restartIncremental() {
+ TaskWorkspace taskWorkspace = new TaskWorkspace(taskId);
+ if (isTaskStopped(taskWorkspace)) {
+ return;
+ }
+ sendRequest(taskWorkspace, "restartIncremental");
+ }
+
+ /**
+ * Restart reverse migration
+ */
+ public void restartReverse() {
+ TaskWorkspace taskWorkspace = new TaskWorkspace(taskId);
+ if (isTaskStopped(taskWorkspace)) {
+ return;
+ }
+ sendRequest(taskWorkspace, "restartReverse");
+ }
+
+ /**
+ * Resume incremental migration
+ */
+ public void resumeIncremental() {
+ TaskWorkspace taskWorkspace = new TaskWorkspace(taskId);
+ if (isTaskStopped(taskWorkspace)) {
+ return;
+ }
+ sendRequest(taskWorkspace, "resumeIncremental");
+ }
+
+ /**
+ * Resume reverse migration
+ */
+ public void resumeReverse() {
+ TaskWorkspace taskWorkspace = new TaskWorkspace(taskId);
+ if (isTaskStopped(taskWorkspace)) {
+ return;
+ }
+ sendRequest(taskWorkspace, "resumeReverse");
+ }
+
+ /**
+ * Stop reverse migration
+ */
+ public void stopReverse() {
+ TaskWorkspace taskWorkspace = new TaskWorkspace(taskId);
+ if (isTaskStopped(taskWorkspace)) {
+ return;
+ }
+ sendRequest(taskWorkspace, "stopReverse");
+ }
+
+ /**
+ * Stop migration
+ */
+ public void stop() {
+ TaskWorkspace taskWorkspace = new TaskWorkspace(taskId);
+ if (isTaskStopped(taskWorkspace)) {
+ return;
+ }
+ sendRequest(taskWorkspace, "stop");
+ }
+
+ /**
+ * Get migration status
+ *
+ * @param isDetail whether to print detailed information
+ */
+ public void status(boolean isDetail) {
+ TaskWorkspace taskWorkspace = new TaskWorkspace(taskId);
+ StatusManager statusManager = new StatusManager(taskWorkspace);
+ if (!isDetail) {
+ String status = statusManager.getStatus();
+ LOGGER.info("Migration status: {}{}", System.lineSeparator(), status);
+ } else {
+ DatabaseType sourceDbType = TaskHelper.loadSourceDbType(taskWorkspace);
+            List<ObjectStatusEntry> statusEntryList;
+ if (sourceDbType == DatabaseType.MYSQL) {
+ statusEntryList = statusManager.getMysqlObjectStatusEntryList();
+ } else if (sourceDbType == DatabaseType.POSTGRESQL) {
+ statusEntryList = statusManager.getPgsqlObjectStatusEntryList();
+ } else {
+ LOGGER.error("Unsupported database type: {}", sourceDbType);
+ return;
+ }
+
+ if (statusEntryList.isEmpty()) {
+                LOGGER.info("No detailed migration status found");
+ } else {
+ exportCsv(statusEntryList);
+ }
+ }
+ }
+
+ private void exportCsv(List statusEntryList) {
+ String csvFilePath = String.format("%s/task_%s_status.csv",
+ ApplicationConfig.getInstance().getPortalTmpDirPath(), taskId);
+ try (CSVWriter writer = new CSVWriter(new FileWriter(csvFilePath))) {
+ String[] header = {
+ "Schema", "Name", "Type", "Status(1 - pending, 2 - migrating, 3,4,5 - completed, 6,7 - failed)",
+ "Percent", "Migration error", "Check Status(0 - success, 1 - fail)", "Check Message", "Repair File Path"
+ };
+ writer.writeNext(header);
+
+            ArrayList<String[]> rows = new ArrayList<>();
+ for (ObjectStatusEntry statusEntry : statusEntryList) {
+ rows.add(new String[] {
+ statusEntry.getSchema(),
+ statusEntry.getName(),
+ statusEntry.getType(),
+ String.valueOf(statusEntry.getStatus()),
+ String.valueOf(statusEntry.getPercent()),
+ statusEntry.getError(),
+ statusEntry.getCheckStatus() == null ? "" : String.valueOf(statusEntry.getCheckStatus()),
+ statusEntry.getCheckMessage(),
+ statusEntry.getRepairFilePath()
+ });
+ }
+ writer.writeAll(rows);
+ LOGGER.info("Export csv file successfully, file path: {}", csvFilePath);
+ } catch (IOException e) {
+ LOGGER.error("Failed to export csv file", e);
+ }
+ }
+
+ private void sendRequest(TaskWorkspace taskWorkspace, String api) {
+ String curl = String.format(Locale.ROOT, "curl -X POST http://localhost:%d/task/%s",
+ readQuarkusPort(taskWorkspace), api);
+ try {
+ String curlResult = ProcessUtils.executeCommandWithResult(curl);
+ if (curlResult != null && curlResult.contains("SUCCESS")) {
+ LOGGER.info("Task {} {} command was sent successfully. For detail, please refer to the main "
+ + "migration process log.", taskId, api);
+ } else {
+                LOGGER.error("Task {} {} command failed to be sent, response: {}{}",
+                        taskId, api, System.lineSeparator(), curlResult);
+ }
+ } catch (IOException | InterruptedException e) {
+ LOGGER.error("Execute curl command failed, command: {}", curl, e);
+ }
+ }
+
+ private int readQuarkusPort(TaskWorkspace taskWorkspace) {
+ try {
+ String portFilePath = taskWorkspace.getQuarkusPortFilePath();
+ return Integer.parseInt(FileUtils.readFileContents(portFilePath).trim());
+ } catch (IOException e) {
+ throw new PortalException("Failed to read quarkus port from file", e);
+ } catch (NumberFormatException e) {
+ throw new PortalException("Port is not a number in port file, please restart migration", e);
+ }
+ }
+
+ private boolean isTaskStopped(TaskWorkspace taskWorkspace) {
+ TaskWorkspaceManager workspaceManager = new TaskWorkspaceManager();
+ if (!workspaceManager.isTaskRunning(taskWorkspace)) {
+ LOGGER.error("Task {} is already stopped", taskId);
+ return true;
+ }
+ return false;
+ }
+
+ private void setQuarkusPort(TaskWorkspace taskWorkspace) {
+ try {
+ String quarkusPort = System.getProperty("quarkus.http.port");
+ if (quarkusPort == null) {
+ int expectedPort = 6000;
+ quarkusPort = String.valueOf(PortUtils.getUsefulPort(expectedPort));
+ }
+ System.setProperty("quarkus.http.port", quarkusPort);
+
+ String portFilePath = taskWorkspace.getQuarkusPortFilePath();
+ FileUtils.deletePath(portFilePath);
+ FileUtils.writeToFile(portFilePath, quarkusPort, false);
+ FileUtils.setFileReadOnly(portFilePath);
+ } catch (SocketException e) {
+ throw new PortalException("Can not get useful port used as quarkus port", e);
+ } catch (IOException e) {
+ throw new PortalException("Failed to write quarkus port to file", e);
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/receiver/ModeCommandReceiver.java b/multidb-portal/src/main/java/org/opengauss/command/receiver/ModeCommandReceiver.java
new file mode 100644
index 0000000000000000000000000000000000000000..277951eb9bb2647087d3411967de88c6034b11e3
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/receiver/ModeCommandReceiver.java
@@ -0,0 +1,95 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command.receiver;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.command.printer.TablePrinter;
+import org.opengauss.enums.MigrationPhase;
+import org.opengauss.migration.mode.MigrationMode;
+import org.opengauss.migration.mode.ModeManager;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * mode command receiver
+ *
+ * @since 2025/3/29
+ */
+public class ModeCommandReceiver implements CommandReceiver {
+ private static final Logger LOGGER = LogManager.getLogger(ModeCommandReceiver.class);
+
+ private final ModeManager modeManager;
+
+ public ModeCommandReceiver() {
+ modeManager = new ModeManager();
+ }
+
+ /**
+ * list all migration modes
+ */
+ public void list() {
+ List<MigrationMode> modeList = modeManager.list();
+ MigrationPhase[] allPhases = MigrationPhase.values();
+
+ List<String> header = new ArrayList<>();
+ header.add("Mode Name");
+ for (MigrationPhase phase : allPhases) {
+ header.add(phase.getPhaseName());
+ }
+
+ List<List<String>> tableInfoList = new ArrayList<>();
+ for (MigrationMode mode : modeList) {
+ List<String> row = new ArrayList<>();
+ row.add(mode.getModeName());
+ for (MigrationPhase phase : allPhases) {
+ if (mode.hasPhase(phase)) {
+ row.add("Y");
+ } else {
+ row.add("");
+ }
+ }
+ tableInfoList.add(row);
+ }
+
+ String table = TablePrinter.printTable(header, tableInfoList);
+ LOGGER.info("Migration Modes:{}{}", System.lineSeparator(), table);
+ }
+
+ /**
+ * add a migration mode
+ *
+ * @param modeFilePath mode file path
+ */
+ public void add(String modeFilePath) {
+ modeManager.add(modeFilePath);
+ }
+
+ /**
+ * update a migration mode
+ *
+ * @param modeFilePath mode file path
+ */
+ public void update(String modeFilePath) {
+ modeManager.update(modeFilePath);
+ }
+
+ /**
+ * delete a migration mode
+ *
+ * @param modeName mode name
+ */
+ public void delete(String modeName) {
+ modeManager.delete(modeName);
+ }
+
+ /**
+ * get a migration mode define template file
+ */
+ public void template() {
+ modeManager.template();
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/receiver/TaskCommandReceiver.java b/multidb-portal/src/main/java/org/opengauss/command/receiver/TaskCommandReceiver.java
new file mode 100644
index 0000000000000000000000000000000000000000..d0e49fb35d6fcaedeba73edcfcbb01fa80ce116b
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/receiver/TaskCommandReceiver.java
@@ -0,0 +1,81 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command.receiver;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.command.printer.TablePrinter;
+import org.opengauss.domain.vo.TaskListVo;
+import org.opengauss.migration.workspace.TaskWorkspaceManager;
+
+import java.util.ArrayList;
+import java.util.Comparator;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Task command receiver
+ *
+ * @since 2025/3/29
+ */
+public class TaskCommandReceiver implements CommandReceiver {
+ private static final Logger LOGGER = LogManager.getLogger(TaskCommandReceiver.class);
+
+ private final TaskWorkspaceManager workspaceManager;
+
+ public TaskCommandReceiver() {
+ workspaceManager = new TaskWorkspaceManager();
+ }
+
+ /**
+ * List all migration tasks
+ */
+ public void list() {
+ List<TaskListVo> taskListVoList = workspaceManager.list();
+ printTaskTable(taskListVoList);
+ }
+
+ /**
+ * Create migration task
+ *
+ * @param taskId task id
+ * @param sourceDbType source database type
+ */
+ public void create(String taskId, String sourceDbType) {
+ workspaceManager.create(taskId, sourceDbType);
+ }
+
+ /**
+ * Delete migration task
+ *
+ * @param taskId task id
+ */
+ public void delete(String taskId) {
+ workspaceManager.delete(taskId);
+ }
+
+ private void printTaskTable(List<TaskListVo> taskListVoList) {
+ List<String> header = new ArrayList<>();
+ header.add("Task ID");
+ header.add("Source Database Type");
+ header.add("Is Running");
+
+ List<TaskListVo> taskList = taskListVoList.stream()
+ .sorted(Comparator.comparing(TaskListVo::getTaskId))
+ .collect(Collectors.toList());
+
+ List<List<String>> tableInfoList = new ArrayList<>();
+ for (TaskListVo taskListVo : taskList) {
+ List<String> row = new ArrayList<>();
+ row.add(taskListVo.getTaskId());
+ row.add(taskListVo.getSourceDbType());
+ row.add(taskListVo.isRunning() ? "Y" : "N");
+ tableInfoList.add(row);
+ }
+
+ String table = TablePrinter.printTable(header, tableInfoList);
+ LOGGER.info("Task List:{}{}", System.lineSeparator(), table);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/command/receiver/UninstallCommandReceiver.java b/multidb-portal/src/main/java/org/opengauss/command/receiver/UninstallCommandReceiver.java
new file mode 100644
index 0000000000000000000000000000000000000000..b366e05243d9ee2ffa346a9b1f082bc49c6ca796
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/command/receiver/UninstallCommandReceiver.java
@@ -0,0 +1,34 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.command.receiver;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.migration.tools.Chameleon;
+import org.opengauss.migration.tools.DataChecker;
+import org.opengauss.migration.tools.Debezium;
+import org.opengauss.migration.tools.FullMigrationTool;
+import org.opengauss.migration.tools.Kafka;
+
+/**
+ * uninstall command receiver
+ *
+ * @since 2025/3/29
+ */
+public class UninstallCommandReceiver implements CommandReceiver {
+ private static final Logger LOGGER = LogManager.getLogger(UninstallCommandReceiver.class);
+
+ /**
+ * uninstall all migration tools
+ */
+ public void migrationTools() {
+ Kafka.getInstance().unInstall();
+ Chameleon.getInstance().unInstall();
+ FullMigrationTool.getInstance().unInstall();
+ DataChecker.getInstance().unInstall();
+ Debezium.getInstance().unInstall();
+ LOGGER.info("Successfully uninstalled all migration tools");
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/config/ApplicationConfig.java b/multidb-portal/src/main/java/org/opengauss/config/ApplicationConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..83126cfc616291b59bb31b7211899ee77b2bf36d
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/config/ApplicationConfig.java
@@ -0,0 +1,171 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.config;
+
+import lombok.Getter;
+import org.opengauss.constants.PortalConstants;
+import org.opengauss.exceptions.PortalException;
+import org.opengauss.utils.FileUtils;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.util.Properties;
+
+/**
+ * Application config
+ *
+ * @since 2025/3/21
+ */
+@Getter
+public class ApplicationConfig {
+ private static volatile ApplicationConfig instance;
+
+ private String portalHomeDirPath;
+ private String systemName;
+ private String systemArch;
+
+ private ApplicationConfig() {}
+
+ /**
+ * Get instance of ApplicationConfig
+ *
+ * @return instance of ApplicationConfig
+ */
+ public static ApplicationConfig getInstance() {
+ if (instance == null) {
+ synchronized (ApplicationConfig.class) {
+ if (instance == null) {
+ instance = new ApplicationConfig();
+ instance.loadConfig();
+ instance.initPortalDir();
+ }
+ }
+ }
+
+ return instance;
+ }
+
+ /**
+ * Get portal bin dir path
+ *
+ * @return String portal bin dir path
+ */
+ public String getPortalBinDirPath() {
+ return String.format("%s/%s", portalHomeDirPath, PortalConstants.BIN_DIR_NANE);
+ }
+
+ /**
+ * Get portal config dir path
+ *
+ * @return String portal config dir path
+ */
+ public String getPortalConfigDirPath() {
+ return String.format("%s/%s", portalHomeDirPath, PortalConstants.CONFIG_DIR_NANE);
+ }
+
+ /**
+ * Get portal data dir path
+ *
+ * @return String portal data dir path
+ */
+ public String getPortalDataDirPath() {
+ return String.format("%s/%s", portalHomeDirPath, PortalConstants.DATA_DIR_NANE);
+ }
+
+ /**
+ * Get portal logs dir path
+ *
+ * @return String portal logs dir path
+ */
+ public String getPortalLogsDirPath() {
+ return String.format("%s/%s", portalHomeDirPath, PortalConstants.LOGS_DIR_NANE);
+ }
+
+ /**
+ * Get portal pkg dir path
+ *
+ * @return String portal pkg dir path
+ */
+ public String getPortalPkgDirPath() {
+ return String.format("%s/%s", portalHomeDirPath, PortalConstants.PKG_DIR_NANE);
+ }
+
+ /**
+ * Get portal template dir path
+ *
+ * @return String portal template dir path
+ */
+ public String getPortalTemplateDirPath() {
+ return String.format("%s/%s", portalHomeDirPath, PortalConstants.TEMPLATE_DIR_NANE);
+ }
+
+ /**
+ * Get portal tmp dir path
+ *
+ * @return String portal tmp dir path
+ */
+ public String getPortalTmpDirPath() {
+ return String.format("%s/%s", portalHomeDirPath, PortalConstants.TMP_DIR_NANE);
+ }
+
+ /**
+ * Get portal tools dir path
+ *
+ * @return String portal tools dir path
+ */
+ public String getPortalToolsDirPath() {
+ return String.format("%s/%s", portalHomeDirPath, PortalConstants.TOOLS_DIR_NANE);
+ }
+
+ /**
+ * Get portal workspace dir path
+ *
+ * @return String portal workspace dir path
+ */
+ public String getPortalWorkspaceDirPath() {
+ return String.format("%s/%s", portalHomeDirPath, PortalConstants.WORKSPACE_DIR_NANE);
+ }
+
+ private void loadConfig() {
+ instance.portalHomeDirPath = loadPortalHomeDir();
+
+ String configPath = instance.portalHomeDirPath + "/config/application.properties";
+ Properties properties = new Properties();
+ try (FileInputStream fis = new FileInputStream(configPath)) {
+ properties.load(fis);
+ } catch (IOException e) {
+ throw new PortalException("Load portal application config failed, file path: " + configPath, e);
+ }
+
+ instance.systemName = properties.getProperty("system.name");
+ instance.systemArch = properties.getProperty("system.arch");
+ }
+
+ private void initPortalDir() {
+ String[] dirs = {
+ getPortalBinDirPath(),
+ getPortalConfigDirPath(),
+ getPortalDataDirPath(),
+ getPortalLogsDirPath(),
+ getPortalPkgDirPath(),
+ getPortalTemplateDirPath(),
+ getPortalTmpDirPath(),
+ getPortalToolsDirPath(),
+ getPortalWorkspaceDirPath()
+ };
+
+ try {
+ FileUtils.createDirectories(dirs);
+ } catch (IOException e) {
+ throw new PortalException("Create portal directories failed", e);
+ }
+ }
+
+ private static String loadPortalHomeDir() {
+ String classPath = ApplicationConfig.class.getProtectionDomain().getCodeSource().getLocation().getPath();
+ return new File(classPath).getParent();
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/ConfigValidationConstants.java b/multidb-portal/src/main/java/org/opengauss/constants/ConfigValidationConstants.java
new file mode 100644
index 0000000000000000000000000000000000000000..600b3e770ae82c262ffb5492521db85504748290
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/ConfigValidationConstants.java
@@ -0,0 +1,36 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants;
+
+import java.util.regex.Pattern;
+
+/**
+ * Config validation constants
+ *
+ * @since 2025/5/6
+ */
+public class ConfigValidationConstants {
+ /**
+ * Regular expression for IP address, including IPv4 and IPv6 formats
+ */
+ public static final Pattern IP_REGEX = Pattern.compile("((2(5[0-5]|[0-4]\\d))|[0-1]?\\d{1,2})"
+ + "(\\.((2(5[0-5]|[0-4]\\d))|[0-1]?\\d{1,2})){3}"
+ + "|([0-9a-fA-F]{1,4}:){7}([0-9a-fA-F]{1,4}|:)"
+ + "|::([0-9a-fA-F]{1,4}:){0,6}[0-9a-fA-F]{1,4}");
+
+ /**
+ * Regular expression for port number
+ */
+ public static final Pattern PORT_REGEX = Pattern.compile("^("
+ + "(102[4-9]|10[3-9]\\d|1[1-9]\\d{2}|[2-9]\\d{3}|"
+ + "[1-5]\\d{4}|"
+ + "6[0-4]\\d{3}|"
+ + "65[0-4]\\d{2}|655[0-2]\\d|"
+ + "6553[0-5])"
+ + ")$");
+
+ private ConfigValidationConstants() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/MigrationModeConstants.java b/multidb-portal/src/main/java/org/opengauss/constants/MigrationModeConstants.java
new file mode 100644
index 0000000000000000000000000000000000000000..adb57c56817be8108c81480617cd87840d7ff2a6
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/MigrationModeConstants.java
@@ -0,0 +1,83 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants;
+
+import org.opengauss.enums.MigrationPhase;
+import org.opengauss.migration.mode.MigrationMode;
+
+import java.util.List;
+
+/**
+ * migration mode constants
+ *
+ * @since 2025/4/22
+ */
+public class MigrationModeConstants {
+ /**
+ * custom mode storage file name
+ */
+ public static final String CUSTOM_MODE_STORAGE_FILE_NAME = "migration-mode.txt";
+
+ /**
+ * object separator
+ */
+ public static final String OBJECT_SEPARATOR = "<<>>";
+
+ /**
+ * define mode template name
+ */
+ public static final String DEFINE_MODE_TEMPLATE_NAME = "mode-template.properties";
+
+ /**
+ * define mode template resources path
+ */
+ public static final String DEFINE_MODE_TEMPLATE_RESOURCES_PATH = "mode/" + DEFINE_MODE_TEMPLATE_NAME;
+
+ /**
+ * template key: mode name
+ */
+ public static final String TEMPLATE_KEY_MODE_NAME = "mode.name";
+
+ /**
+ * template key: migration phase list
+ */
+ public static final String TEMPLATE_KEY_MIGRATION_PHASE_LIST = "migration.phases";
+
+ /**
+ * mode name max length
+ */
+ public static final int MODE_NAME_MAX_LENGTH = 50;
+
+ /**
+ * mode name pattern
+ */
+ public static final String MODE_NAME_PATTERN = "^[a-zA-Z0-9_-]+$";
+
+ /**
+ * default mode list
+ */
+ public static final List<MigrationMode> DEFALUT_MODE_LIST = List.of(
+ new MigrationMode("plan1",
+ List.of(MigrationPhase.FULL_MIGRATION, MigrationPhase.FULL_DATA_CHECK)
+ ),
+ new MigrationMode("plan2",
+ List.of(MigrationPhase.FULL_MIGRATION, MigrationPhase.FULL_DATA_CHECK,
+ MigrationPhase.INCREMENTAL_MIGRATION)
+ ),
+ new MigrationMode("plan3",
+ List.of(MigrationPhase.FULL_MIGRATION, MigrationPhase.FULL_DATA_CHECK,
+ MigrationPhase.INCREMENTAL_MIGRATION, MigrationPhase.REVERSE_MIGRATION)
+ ),
+ new MigrationMode(MigrationPhase.FULL_MIGRATION.getPhaseName(), List.of(MigrationPhase.FULL_MIGRATION)),
+ new MigrationMode(MigrationPhase.FULL_DATA_CHECK.getPhaseName(), List.of(MigrationPhase.FULL_DATA_CHECK)),
+ new MigrationMode(MigrationPhase.INCREMENTAL_MIGRATION.getPhaseName(),
+ List.of(MigrationPhase.INCREMENTAL_MIGRATION)),
+ new MigrationMode(MigrationPhase.REVERSE_MIGRATION.getPhaseName(),
+ List.of(MigrationPhase.REVERSE_MIGRATION))
+ );
+
+ private MigrationModeConstants() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/MigrationStatusConstants.java b/multidb-portal/src/main/java/org/opengauss/constants/MigrationStatusConstants.java
new file mode 100644
index 0000000000000000000000000000000000000000..4643a5b56ec2595884c83ff8513a7001ea912fe7
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/MigrationStatusConstants.java
@@ -0,0 +1,112 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants;
+
+import org.opengauss.enums.MigrationStatusEnum;
+
+import java.util.List;
+
+/**
+ * migration status constants
+ *
+ * @since 2025/5/13
+ */
+public class MigrationStatusConstants {
+ /**
+ * migration status file name
+ */
+ public static final String MIGRATION_STATUS_FILE_NAME = "migration-status.txt";
+
+ /**
+ * full migration status file name: total.txt
+ */
+ public static final String FULL_TOTAL_INFO_STATUS_FILE_NAME = "total.txt";
+
+ /**
+ * full migration status file name: table.txt
+ */
+ public static final String FULL_TABLE_STATUS_FILE_NAME = "table.txt";
+
+ /**
+ * full migration status file name: trigger.txt
+ */
+ public static final String FULL_TRIGGER_STATUS_FILE_NAME = "trigger.txt";
+
+ /**
+ * full migration status file name: view.txt
+ */
+ public static final String FULL_VIEW_STATUS_FILE_NAME = "view.txt";
+
+ /**
+ * full migration status file name: function.txt
+ */
+ public static final String FULL_FUNCTION_STATUS_FILE_NAME = "function.txt";
+
+ /**
+ * full migration status file name: procedure.txt
+ */
+ public static final String FULL_PROCEDURE_STATUS_FILE_NAME = "procedure.txt";
+
+ /**
+ * full migration status file name: success.txt
+ */
+ public static final String FULL_CHECK_SUCCESS_OBJECT_STATUS_FILE_NAME = "success.txt";
+
+ /**
+ * full migration status file name: failed.txt
+ */
+ public static final String FULL_CHECK_FAILED_OBJECT_STATUS_FILE_NAME = "failed.txt";
+
+ /**
+ * incremental migration status file name
+ */
+ public static final String INCREMENTAL_STATUS_FILE_NAME = "incremental.txt";
+
+ /**
+ * reverse migration status file name
+ */
+ public static final String REVERSE_STATUS_FILE_NAME = "reverse.txt";
+
+ /**
+ * migration status in full phase list
+ */
+ public static final List<MigrationStatusEnum> MIGRATION_STATUS_IN_FULL_PHASE_LIST = List.of(
+ MigrationStatusEnum.START_FULL_MIGRATION,
+ MigrationStatusEnum.FULL_MIGRATION_RUNNING,
+ MigrationStatusEnum.FULL_MIGRATION_FINISHED
+ );
+
+ /**
+ * migration status in full check phase list
+ */
+ public static final List<MigrationStatusEnum> MIGRATION_STATUS_IN_FULL_CHECK_PHASE_LIST = List.of(
+ MigrationStatusEnum.START_FULL_DATA_CHECK,
+ MigrationStatusEnum.FULL_DATA_CHECK_RUNNING,
+ MigrationStatusEnum.FULL_DATA_CHECK_FINISHED
+ );
+
+ /**
+ * migration status in incremental phase list
+ */
+ public static final List<MigrationStatusEnum> MIGRATION_STATUS_IN_INCREMENTAL_PHASE_LIST = List.of(
+ MigrationStatusEnum.START_INCREMENTAL_MIGRATION,
+ MigrationStatusEnum.INCREMENTAL_MIGRATION_RUNNING,
+ MigrationStatusEnum.INCREMENTAL_MIGRATION_INTERRUPTED,
+ MigrationStatusEnum.INCREMENTAL_MIGRATION_FINISHED
+ );
+
+ /**
+ * migration status in reverse phase list
+ */
+ public static final List<MigrationStatusEnum> MIGRATION_STATUS_IN_REVERSE_PHASE_LIST = List.of(
+ MigrationStatusEnum.START_REVERSE_MIGRATION,
+ MigrationStatusEnum.REVERSE_MIGRATION_RUNNING,
+ MigrationStatusEnum.REVERSE_MIGRATION_INTERRUPTED,
+ MigrationStatusEnum.REVERSE_MIGRATION_FINISHED
+ );
+
+ private MigrationStatusConstants() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/PortalConstants.java b/multidb-portal/src/main/java/org/opengauss/constants/PortalConstants.java
new file mode 100644
index 0000000000000000000000000000000000000000..522aa72d1e25b342663c530fb371b9b81413b7ab
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/PortalConstants.java
@@ -0,0 +1,92 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants;
+
+/**
+ * Portal constants
+ *
+ * @since 2025/4/14
+ */
+public class PortalConstants {
+ /**
+ * portal version
+ */
+ public static final String PORTAL_VERSION = "7.0.0rc2";
+
+ /**
+ * bin dir name
+ */
+ public static final String BIN_DIR_NANE = "bin";
+
+ /**
+ * config dir name
+ */
+ public static final String CONFIG_DIR_NANE = "config";
+
+ /**
+ * data dir name
+ */
+ public static final String DATA_DIR_NANE = "data";
+
+ /**
+ * logs dir name
+ */
+ public static final String LOGS_DIR_NANE = "logs";
+
+ /**
+ * pkg dir name
+ */
+ public static final String PKG_DIR_NANE = "pkg";
+
+ /**
+ * template dir name
+ */
+ public static final String TEMPLATE_DIR_NANE = "template";
+
+ /**
+ * tmp dir name
+ */
+ public static final String TMP_DIR_NANE = "tmp";
+
+ /**
+ * tools dir name
+ */
+ public static final String TOOLS_DIR_NANE = "tools";
+
+ /**
+ * workspace dir name
+ */
+ public static final String WORKSPACE_DIR_NANE = "workspace";
+
+ /**
+ * least space mb
+ */
+ public static final long LEAST_SPACE_MB = 900L;
+
+ /**
+ * command os
+ */
+ public static final String COMMAND_OS =
+ "cat /etc/os-release | grep ID= | head -n 1 | awk -F '=' '{print $2}' | sed 's/\\\"//g'";
+
+ /**
+ * command os version
+ */
+ public static final String COMMAND_OS_VERSION =
+ "cat /etc/os-release | grep VERSION_ID= | head -n 1|awk -F '=' '{print $2}' | sed 's/\\\"//g'";
+
+ /**
+ * dependencies install script dir relative path
+ */
+ public static final String DEPENDENCIES_INSTALL_SCRIPT_DIR_RELATIVE_PATH = "dependencies";
+
+ /**
+ * dependencies install script name
+ */
+ public static final String DEPENDENCIES_INSTALL_SCRIPT_NAME = "install_dependencies.sh";
+
+ private PortalConstants() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/ProcessNameConstants.java b/multidb-portal/src/main/java/org/opengauss/constants/ProcessNameConstants.java
new file mode 100644
index 0000000000000000000000000000000000000000..2e44f494b95c9fd793d7324aeb126e2d04c14724
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/ProcessNameConstants.java
@@ -0,0 +1,175 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants;
+
+/**
+ * Process name constants
+ *
+ * @since 2025/3/3
+ */
+public class ProcessNameConstants {
+ /**
+ * chameleon drop_replica_schema order process name
+ */
+ public static final String CHAMELEON_DROP_REPLICA_SCHEMA = "chameleon full drop replica schema process";
+
+ /**
+ * chameleon create_replica_schema order process name
+ */
+ public static final String CHAMELEON_CREATE_REPLICA_SCHEMA = "chameleon full create replica schema process";
+
+ /**
+ * chameleon add_source order process name
+ */
+ public static final String CHAMELEON_ADD_SOURCE = "chameleon full add source process";
+
+ /**
+ * chameleon init_replica order process name
+ */
+ public static final String CHAMELEON_INIT_REPLICA = "chameleon full init replica process";
+
+ /**
+ * chameleon start_trigger_replica order process name
+ */
+ public static final String CHAMELEON_START_TRIGGER_REPLICA = "chameleon full start trigger replica process";
+
+ /**
+ * chameleon start_view_replica order process name
+ */
+ public static final String CHAMELEON_START_VIEW_REPLICA = "chameleon full start view replica process";
+
+ /**
+ * chameleon start_func_replica order process name
+ */
+ public static final String CHAMELEON_START_FUNC_REPLICA = "chameleon full start func replica process";
+
+ /**
+ * chameleon start_proc_replica order process name
+ */
+ public static final String CHAMELEON_START_PROC_REPLICA = "chameleon full start proc replica process";
+
+ /**
+ * chameleon detach_replica order process name
+ */
+ public static final String CHAMELEON_DETACH_REPLICA = "chameleon full detach replica process";
+
+ /**
+ * full migration tool order table process name
+ */
+ public static final String FULL_MIGRATION_TOOL_MIGRATION_TABLE = "full-migration tool migration table process";
+
+ /**
+ * full migration tool order sequence process name
+ */
+ public static final String FULL_MIGRATION_TOOL_MIGRATION_SEQUENCE =
+ "full-migration tool migration sequence process";
+
+ /**
+ * full migration tool order primary key process name
+ */
+ public static final String FULL_MIGRATION_TOOL_MIGRATION_PRIMARY_KEY =
+ "full-migration tool migration primary key process";
+
+ /**
+ * full migration tool order index process name
+ */
+ public static final String FULL_MIGRATION_TOOL_MIGRATION_INDEX =
+ "full-migration tool migration index process";
+
+ /**
+ * full migration tool order constraint process name
+ */
+ public static final String FULL_MIGRATION_TOOL_MIGRATION_CONSTRAINT =
+ "full-migration tool migration constraint process";
+
+ /**
+ * full migration tool order view process name
+ */
+ public static final String FULL_MIGRATION_TOOL_MIGRATION_VIEW =
+ "full-migration tool migration view process";
+
+ /**
+ * full migration tool order function process name
+ */
+ public static final String FULL_MIGRATION_TOOL_MIGRATION_FUNCTION =
+ "full-migration tool migration function process";
+
+ /**
+ * full migration tool order procedure process name
+ */
+ public static final String FULL_MIGRATION_TOOL_MIGRATION_PROCEDURE =
+ "full-migration tool migration procedure process";
+
+ /**
+ * full migration tool order trigger process name
+ */
+ public static final String FULL_MIGRATION_TOOL_MIGRATION_TRIGGER =
+ "full-migration tool migration trigger process";
+
+ /**
+ * full migration tool order foreign key process name
+ */
+ public static final String FULL_MIGRATION_TOOL_MIGRATION_FOREIGN_KEY =
+ "full-migration tool migration foreign key process";
+
+ /**
+ * full migration tool order drop replica schema process name
+ */
+ public static final String FULL_MIGRATION_TOOL_DROP_REPLICA_SCHEMA =
+ "full-migration tool drop replica schema process";
+
+ /**
+ * debezium incremental connect source process name
+ */
+ public static final String DEBEZIUM_INCREMENTAL_CONNECT_SOURCE = "debezium incremental connect source process";
+
+ /**
+ * debezium incremental connect sink process name
+ */
+ public static final String DEBEZIUM_INCREMENTAL_CONNECT_SINK = "debezium incremental connect sink process";
+
+ /**
+ * debezium reverse connect source process name
+ */
+ public static final String DEBEZIUM_REVERSE_CONNECT_SOURCE = "debezium reverse connect source process";
+
+ /**
+ * debezium reverse connect sink process name
+ */
+ public static final String DEBEZIUM_REVERSE_CONNECT_SINK = "debezium reverse connect sink process";
+
+ /**
+ * data checker full sink process name
+ */
+ public static final String DATA_CHECKER_FULL_SINK = "data checker sink process";
+
+ /**
+ * data checker full source process name
+ */
+ public static final String DATA_CHECKER_FULL_SOURCE = "data checker source process";
+
+ /**
+ * data checker full check process name
+ */
+ public static final String DATA_CHECKER_FULL_CHECK = "data checker check process";
+
+ /**
+ * data checker incremental sink process name
+ */
+ public static final String DATA_CHECKER_INCREMENTAL_SINK = "data checker incremental sink process";
+
+ /**
+ * data checker incremental source process name
+ */
+ public static final String DATA_CHECKER_INCREMENTAL_SOURCE = "data checker incremental source process";
+
+ /**
+ * data checker incremental check process name
+ */
+ public static final String DATA_CHECKER_INCREMENTAL_CHECK = "data checker incremental check process";
+
+ private ProcessNameConstants() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/SqlConstants.java b/multidb-portal/src/main/java/org/opengauss/constants/SqlConstants.java
new file mode 100644
index 0000000000000000000000000000000000000000..37bf2d7b0d4069914e58f4ab4fe5839fa498fb5c
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/SqlConstants.java
@@ -0,0 +1,127 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants;
+
+/**
+ * Sql constants
+ *
+ * @since 2025/7/7
+ */
+public class SqlConstants {
+ /**
+ * Select version, support MySQL, openGauss, PostgreSQL
+ */
+ public static final String SELECT_VERSION = "SELECT version();";
+
+ /**
+ * Show tables, support openGauss, PostgreSQL
+ */
+ public static final String SHOW_TABLES = "SELECT tablename FROM pg_tables WHERE SCHEMANAME = ?;";
+
+ /**
+ * Check schema exists, support openGauss, PostgreSQL
+ */
+ public static final String IS_SCHEMA_EXISTS =
+ "SELECT EXISTS (SELECT 1 FROM information_schema.schemata WHERE schema_name = ?);";
+
+ /**
+ * Show variable, support openGauss, PostgreSQL
+ */
+ public static final String SHOW_VARIABLE = "SHOW %s;";
+
+ /**
+ * Count replication slots, support openGauss, PostgreSQL
+ */
+ public static final String COUNT_REPLICATION_SLOTS = "select count(*) from pg_get_replication_slots();";
+
+ /**
+ * Select replication slot names, support openGauss, PostgreSQL
+ */
+ public static final String SELECT_REPLICATION_SLOT_NAMES = "select slot_name from pg_get_replication_slots();";
+
+ /**
+ * Create replication slot, support openGauss, PostgreSQL
+ */
+ public static final String CREATE_REPLICATION_SLOT = "SELECT * FROM pg_create_logical_replication_slot(?, ?);";
+
+ /**
+ * Drop replication slot, support openGauss, PostgreSQL
+ */
+ public static final String DROP_REPLICATION_SLOT = "SELECT * FROM pg_drop_replication_slot(?);";
+
+ /**
+ * Select publication names, support openGauss, PostgreSQL
+ */
+ public static final String SELECT_PUBLICATION_NAMES = "SELECT pubname from pg_publication;";
+
+ /**
+ * Create publication for all tables, support openGauss, PostgreSQL
+ */
+ public static final String CREATE_PUBLICATION_ALL_TABLES = "CREATE PUBLICATION %s FOR ALL TABLES;";
+
+ /**
+ * Create publication for table list, support openGauss, PostgreSQL
+ */
+ public static final String CREATE_PUBLICATION_FOR_TABLE = "CREATE PUBLICATION %s FOR TABLE %s;";
+
+ /**
+ * Drop publication, support openGauss, PostgreSQL
+ */
+ public static final String DROP_PUBLICATION = "DROP PUBLICATION %s;";
+
+ /**
+ * Alter table replica identity full, support openGauss, PostgreSQL
+ */
+ public static final String ALTER_TABLE_REPLICA_IDENTITY_FULL = "ALTER TABLE \"%s\".\"%s\" REPLICA IDENTITY full;";
+
+ /**
+ * Alter table replica identity default, support openGauss, PostgreSQL
+ */
+ public static final String ALTER_TABLE_REPLICA_IDENTITY_DEFAULT =
+ "ALTER TABLE \"%s\".\"%s\" REPLICA IDENTITY default;";
+
+ /**
+ * Is user system admin, support openGauss
+ */
+ public static final String OPENGAUSS_IS_SYSTEM_ADMIN = "select rolsystemadmin from pg_roles where rolname = ?;";
+
+ /**
+ * Is user replication role, support openGauss
+ */
+ public static final String OPENGAUSS_IS_REPLICATION_ROLE = "select rolreplication from pg_roles where rolname = ?;";
+
+ /**
+ * Alter system set, support openGauss
+ */
+ public static final String OPENGAUSS_ALTER_SYSTEM_SET = "ALTER SYSTEM SET %s TO %s;";
+
+ /**
+ * Get database access permissions, support openGauss
+ */
+ public static final String OPENGAUSS_ACCESS_PERMISSIONS = "select datacl from pg_database where datname = ?;";
+
+ /**
+ * Select user auth plugin, support MySQL
+ */
+ public static final String MYSQL_SELECT_USER_AUTH_PLUGIN = "SELECT USER,PLUGIN FROM mysql.user WHERE USER = ?;";
+
+ /**
+ * Show variable, support MySQL
+ */
+ public static final String MYSQL_SHOW_VARIABLE = "show variables like ?;";
+
+ /**
+ * Select user column, support MySQL
+ */
+ public static final String MYSQL_SELECT_USER_COLUMN = "select %s from mysql.user where user = '%s';";
+
+ /**
+ * Show master status, support MySQL
+ */
+ public static final String MYSQL_SHOW_MASTER_STATUS = "SHOW MASTER STATUS;";
+
+ private SqlConstants() {
+ }
+}
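The `%s` placeholders in the statements above are intended to be filled in with `String.format` before execution. A minimal sketch of that usage, assuming illustrative publication, schema, and table names (the helper methods here are not part of the portal code):

```java
public class SqlFormatDemo {
    // Mirrors the SqlConstants values above.
    static final String CREATE_PUBLICATION_FOR_TABLE = "CREATE PUBLICATION %s FOR TABLE %s;";
    static final String ALTER_TABLE_REPLICA_IDENTITY_FULL =
            "ALTER TABLE \"%s\".\"%s\" REPLICA IDENTITY full;";

    // Fill in the publication name and table list.
    static String createPublication(String name, String tables) {
        return String.format(CREATE_PUBLICATION_FOR_TABLE, name, tables);
    }

    // Fill in the quoted schema and table identifiers.
    static String replicaIdentityFull(String schema, String table) {
        return String.format(ALTER_TABLE_REPLICA_IDENTITY_FULL, schema, table);
    }

    public static void main(String[] args) {
        System.out.println(createPublication("dbz_publication", "public.t1"));
        System.out.println(replicaIdentityFull("public", "t1"));
    }
}
```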
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/TaskConstants.java b/multidb-portal/src/main/java/org/opengauss/constants/TaskConstants.java
new file mode 100644
index 0000000000000000000000000000000000000000..25b7f35f86c74989e3b70e8f2fa340155c933e95
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/TaskConstants.java
@@ -0,0 +1,57 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants;
+
+import org.opengauss.enums.DatabaseType;
+
+import java.util.List;
+
+/**
+ * task constants
+ *
+ * @since 2025/4/28
+ */
+public class TaskConstants {
+ /**
+ * max task id length
+ */
+ public static final int MAX_TASK_ID_LENGTH = 50;
+
+ /**
+ * task id verify pattern
+ */
+ public static final String TASK_ID_PATTERN = "^[a-zA-Z0-9_-]+$";
+
+ /**
+ * supported source db types
+ */
+    public static final List<DatabaseType> SUPPORTED_SOURCE_DB_TYPES = List.of(
+ DatabaseType.MYSQL,
+ DatabaseType.POSTGRESQL
+ );
+
+ /**
+ * task workspace dir suffix
+ */
+ public static final String TASK_WORKSPACE_DIR_SUFFIX = "task_";
+
+ /**
+ * source db type config file name
+ */
+ public static final String SOURCE_DB_TYPE_CONFIG_FILE_NAME = "source-database-type";
+
+ /**
+ * quarkus port file name
+ */
+ public static final String QUARKUS_PORT_FILE_NAME = "port";
+
+ /**
+ * migration heartbeat file name
+ */
+ public static final String HEARTBEAT_FILE = "migration.heartbeat";
+
+ private TaskConstants() {
+ }
+}
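A task id accepted under these constants must match `TASK_ID_PATTERN` and stay within `MAX_TASK_ID_LENGTH`. A hedged validation sketch (the `isValidTaskId` helper is an assumption for illustration, not a method of the portal):

```java
public class TaskIdCheckDemo {
    // Mirrors the TaskConstants values above.
    static final int MAX_TASK_ID_LENGTH = 50;
    static final String TASK_ID_PATTERN = "^[a-zA-Z0-9_-]+$";

    // Hypothetical helper: valid ids are non-empty, within length,
    // and contain only letters, digits, underscores, and hyphens.
    static boolean isValidTaskId(String taskId) {
        return taskId != null
                && !taskId.isEmpty()
                && taskId.length() <= MAX_TASK_ID_LENGTH
                && taskId.matches(TASK_ID_PATTERN);
    }

    public static void main(String[] args) {
        System.out.println(isValidTaskId("task_01"));  // valid
        System.out.println(isValidTaskId("task 01"));  // invalid: contains a space
    }
}
```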
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/ChameleonConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/ChameleonConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..06a2d4bea8ef2a04412bd19a028c4bb6be50abfb
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/ChameleonConfig.java
@@ -0,0 +1,115 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * Chameleon config
+ *
+ * @since 2025/5/6
+ */
+public class ChameleonConfig {
+ /**
+ * pg database ip
+ */
+ public static final String PG_DATABASE_IP = "pg_conn.host";
+
+ /**
+ * pg database port
+ */
+ public static final String PG_DATABASE_PORT = "pg_conn.port";
+
+ /**
+ * pg database user
+ */
+ public static final String PG_DATABASE_USER = "pg_conn.user";
+
+ /**
+ * pg database password
+ */
+ public static final String PG_DATABASE_PASSWORD = "pg_conn.password";
+
+ /**
+ * pg database name
+ */
+ public static final String PG_DATABASE_NAME = "pg_conn.database";
+
+ /**
+ * mysql database ip
+ */
+ public static final String MYSQL_DATABASE_IP = "sources.mysql.db_conn.host";
+
+ /**
+ * mysql database port
+ */
+ public static final String MYSQL_DATABASE_PORT = "sources.mysql.db_conn.port";
+
+ /**
+ * mysql database user
+ */
+ public static final String MYSQL_DATABASE_USER = "sources.mysql.db_conn.user";
+
+ /**
+ * mysql database password
+ */
+ public static final String MYSQL_DATABASE_PASSWORD = "sources.mysql.db_conn.password";
+
+ /**
+ * mysql database name
+ */
+ public static final String MYSQL_DATABASE_NAME = "sources.mysql.db_conn.database";
+
+ /**
+ * mysql schema mappings
+ */
+ public static final String MYSQL_SCHEMA_MAPPINGS = "sources.mysql.schema_mappings";
+
+ /**
+ * mysql limit tables
+ */
+ public static final String MYSQL_LIMIT_TABLES = "sources.mysql.limit_tables";
+
+ /**
+ * mysql csv dir
+ */
+ public static final String MYSQL_CSV_DIR = "sources.mysql.csv_dir";
+
+ /**
+ * mysql out dir
+ */
+ public static final String MYSQL_OUT_DIR = "sources.mysql.out_dir";
+
+ /**
+ * pid dir
+ */
+ public static final String PID_DIR = "pid_dir";
+
+ /**
+ * dump json
+ */
+ public static final String DUMP_JSON = "dump_json";
+
+ /**
+ * log level
+ */
+ public static final String LOG_LEVEL = "log_level";
+
+ /**
+ * alert log collection enable
+ */
+ public static final String ALERT_LOG_COLLECTION_ENABLE = "alert_log_collection_enable";
+
+ /**
+ * alert log kafka server
+ */
+ public static final String ALERT_LOG_KAFKA_SERVER = "alert_log_kafka_server";
+
+ /**
+ * alert log kafka topic
+ */
+ public static final String ALERT_LOG_KAFKA_TOPIC = "alert_log_kafka_topic";
+
+ private ChameleonConfig() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/ConnectAvroStandaloneConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/ConnectAvroStandaloneConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..bcf4efb158dc257fce7f6744391cd68ae265bc8f
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/ConnectAvroStandaloneConfig.java
@@ -0,0 +1,50 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * Connect Avro Standalone Config
+ *
+ * @since 2025/5/7
+ */
+public class ConnectAvroStandaloneConfig {
+ /**
+ * key converter schema registry url
+ */
+ public static final String SCHEMA_REGISTRY_URL_FOR_KEY_CONVERTER = "key.converter.schema.registry.url";
+
+ /**
+ * rest port
+ */
+ public static final String REST_PORT = "rest.port";
+
+ /**
+ * plugin path
+ */
+ public static final String PLUGIN_PATH = "plugin.path";
+
+ /**
+ * offset storage file filename
+ */
+ public static final String OFFSET_STORAGE_FILE_FILENAME = "offset.storage.file.filename";
+
+ /**
+ * connector client config override policy
+ */
+ public static final String CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY = "connector.client.config.override.policy";
+
+ /**
+ * bootstrap servers
+ */
+ public static final String KAFKA_SERVERS = "bootstrap.servers";
+
+ /**
+ * value converter schema registry url
+ */
+ public static final String SCHEMA_REGISTRY_URL_FOR_VALUE_CONVERTER = "value.converter.schema.registry.url";
+
+ private ConnectAvroStandaloneConfig() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/DataCheckerCheckConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/DataCheckerCheckConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..1af45e4c829e7454ef10e94af1ceedca011a7083
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/DataCheckerCheckConfig.java
@@ -0,0 +1,45 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * data checker check config
+ *
+ * @since 2025/5/8
+ */
+public class DataCheckerCheckConfig {
+ /**
+ * data check data path
+ */
+ public static final String DATA_CHECK_DATA_PATH = "data.check.data-path";
+
+ /**
+ * kafka bootstrap servers
+ */
+ public static final String KAFKA_BOOTSTRAP_SERVERS = "spring.kafka.bootstrap-servers";
+
+ /**
+ * logging config file path
+ */
+ public static final String LOGGING_CONFIG = "logging.config";
+
+ /**
+ * check source process uri
+ */
+ public static final String CHECK_SOURCE_URI = "data.check.source-uri";
+
+ /**
+ * check sink process uri
+ */
+ public static final String CHECK_SINK_URI = "data.check.sink-uri";
+
+ /**
+ * check server port
+ */
+ public static final String SERVER_PORT = "server.port";
+
+ private DataCheckerCheckConfig() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/DataCheckerSinkConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/DataCheckerSinkConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..33c8566b8a9e3b0449129b4a9d121828705f09e3
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/DataCheckerSinkConfig.java
@@ -0,0 +1,70 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * datachecker sink config
+ *
+ * @since 2025/5/8
+ */
+public class DataCheckerSinkConfig {
+ /**
+ * database url
+ */
+ public static final String DATABASE_URL = "spring.datasource.url";
+
+ /**
+ * database username
+ */
+ public static final String DATABASE_USERNAME = "spring.datasource.username";
+
+ /**
+ * database password
+ */
+ public static final String DATABASE_PASSWORD = "spring.datasource.password";
+
+ /**
+ * extract schema
+ */
+ public static final String EXTRACT_SCHEMA = "spring.extract.schema";
+
+ /**
+ * extract debezium enable
+ */
+ public static final String EXTRACT_DEBEZIUM_ENABLE = "spring.extract.debezium-enable";
+
+ /**
+ * extract debezium avro registry
+ */
+ public static final String EXTRACT_DEBEZIUM_AVRO_REGISTRY = "spring.extract.debezium-avro-registry";
+
+ /**
+ * extract debezium topic
+ */
+ public static final String EXTRACT_DEBEZIUM_TOPIC = "spring.extract.debezium-topic";
+
+ /**
+ * kafka bootstrap servers
+ */
+ public static final String KAFKA_BOOTSTRAP_SERVERS = "spring.kafka.bootstrap-servers";
+
+ /**
+ * logging config file path
+ */
+ public static final String LOGGING_CONFIG = "logging.config";
+
+ /**
+ * check process uri
+ */
+ public static final String CHECK_SERVER_URI = "spring.check.server-uri";
+
+ /**
+ * check server port
+ */
+ public static final String SERVER_PORT = "server.port";
+
+ private DataCheckerSinkConfig() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/DataCheckerSourceConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/DataCheckerSourceConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..7740b932498413f7e58958e0de7b82f3c02514b5
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/DataCheckerSourceConfig.java
@@ -0,0 +1,70 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * datachecker source config
+ *
+ * @since 2025/5/8
+ */
+public class DataCheckerSourceConfig {
+ /**
+ * database url
+ */
+ public static final String DATABASE_URL = "spring.datasource.url";
+
+ /**
+ * database username
+ */
+ public static final String DATABASE_USERNAME = "spring.datasource.username";
+
+ /**
+ * database password
+ */
+ public static final String DATABASE_PASSWORD = "spring.datasource.password";
+
+ /**
+ * extract schema
+ */
+ public static final String EXTRACT_SCHEMA = "spring.extract.schema";
+
+ /**
+ * extract debezium enable
+ */
+ public static final String EXTRACT_DEBEZIUM_ENABLE = "spring.extract.debezium-enable";
+
+ /**
+ * extract debezium avro registry
+ */
+ public static final String EXTRACT_DEBEZIUM_AVRO_REGISTRY = "spring.extract.debezium-avro-registry";
+
+ /**
+ * extract debezium topic
+ */
+ public static final String EXTRACT_DEBEZIUM_TOPIC = "spring.extract.debezium-topic";
+
+ /**
+ * kafka bootstrap servers
+ */
+ public static final String KAFKA_BOOTSTRAP_SERVERS = "spring.kafka.bootstrap-servers";
+
+ /**
+ * logging config
+ */
+ public static final String LOGGING_CONFIG = "logging.config";
+
+ /**
+ * check process uri
+ */
+ public static final String CHECK_SERVER_URI = "spring.check.server-uri";
+
+ /**
+ * check server port
+ */
+ public static final String SERVER_PORT = "server.port";
+
+ private DataCheckerSourceConfig() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumConnectLog4jConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumConnectLog4jConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..5abcff85fae47e1b473ce8e280d9e48cb8042596
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumConnectLog4jConfig.java
@@ -0,0 +1,46 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * debezium connect log4j config
+ *
+ * @since 2025/5/19
+ */
+public class DebeziumConnectLog4jConfig {
+ /**
+ * connect appender file
+ */
+ public static final String CONNECT_APPENDER_FILE = "log4j.appender.connectAppender.File";
+
+ /**
+ * kafka error logger
+ */
+ public static final String KAFKA_ERROR_LOGGER = "log4j.logger.org.apache.kafka";
+
+ /**
+ * kafka error appender
+ */
+ public static final String KAFKA_ERROR_APPENDER = "log4j.appender.kafkaErrorAppender";
+
+ /**
+ * kafka error appender file
+ */
+ public static final String KAFKA_ERROR_APPENDER_FILE = "log4j.appender.kafkaErrorAppender.File";
+
+ /**
+ * kafka error appender layout
+ */
+ public static final String KAFKA_ERROR_APPENDER_LAYOUT = "log4j.appender.kafkaErrorAppender.layout";
+
+ /**
+ * kafka error appender layout conversion pattern
+ */
+ public static final String KAFKA_ERROR_APPENDER_LAYOUT_CONVERSION_PATTERN =
+ "log4j.appender.kafkaErrorAppender.layout.ConversionPattern";
+
+ private DebeziumConnectLog4jConfig() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumMysqlSinkConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumMysqlSinkConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..3afe42edcb782500857592b19ec586f58d58e9b7
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumMysqlSinkConfig.java
@@ -0,0 +1,85 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * debezium mysql sink config
+ *
+ * @since 2025/5/7
+ */
+public class DebeziumMysqlSinkConfig {
+ /**
+ * opengauss username
+ */
+ public static final String OPENGAUSS_USERNAME = "opengauss.username";
+
+ /**
+ * opengauss password
+ */
+ public static final String OPENGAUSS_PASSWORD = "opengauss.password";
+
+ /**
+ * opengauss url
+ */
+ public static final String OPENGAUSS_URL = "opengauss.url";
+
+ /**
+ * schema mappings
+ */
+ public static final String SCHEMA_MAPPINGS = "schema.mappings";
+
+ /**
+ * opengauss standby hosts
+ */
+ public static final String OPENGAUSS_STANDBY_HOSTS = "database.standby.hostnames";
+
+ /**
+ * opengauss standby ports
+ */
+ public static final String OPENGAUSS_STANDBY_PORTS = "database.standby.ports";
+
+ /**
+ * record breakpoint kafka bootstrap servers
+ */
+ public static final String RECORD_BREAKPOINT_KAFKA_BOOTSTRAP_SERVERS = "record.breakpoint.kafka.bootstrap.servers";
+
+ /**
+ * debezium connect name
+ */
+ public static final String NAME = "name";
+
+ /**
+ * debezium connect topics
+ */
+ public static final String TOPICS = "topics";
+
+ /**
+ * debezium connect record breakpoint kafka topic
+ */
+ public static final String RECORD_BREAKPOINT_KAFKA_TOPIC = "record.breakpoint.kafka.topic";
+
+ /**
+ * debezium connect sink process file path
+ */
+ public static final String SINK_PROCESS_FILE_PATH = "sink.process.file.path";
+
+ /**
+ * debezium connect create count info path
+ */
+ public static final String CREATE_COUNT_INFO_PATH = "create.count.info.path";
+
+ /**
+ * debezium connect fail sql path
+ */
+ public static final String FAIL_SQL_PATH = "fail.sql.path";
+
+ /**
+ * debezium connect openGauss xlog
+ */
+ public static final String XLOG_LOCATION = "xlog.location";
+
+ private DebeziumMysqlSinkConfig() {
+ }
+}
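These keys end up in an ordinary `.properties` file consumed by the Debezium sink connector. A sketch of assembling them with `java.util.Properties`; every value below is a placeholder assumption, not a real deployment setting:

```java
import java.util.Properties;

public class MysqlSinkPropsDemo {
    // Hypothetical helper: builds a minimal sink property map using
    // keys from DebeziumMysqlSinkConfig. All values are placeholders.
    static Properties buildSinkProps() {
        Properties props = new Properties();
        props.setProperty("opengauss.username", "og_user");
        props.setProperty("opengauss.url", "jdbc:opengauss://127.0.0.1:5432/postgres");
        props.setProperty("schema.mappings", "mysql_db:og_schema");
        props.setProperty("record.breakpoint.kafka.topic", "bp_topic");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildSinkProps().getProperty("schema.mappings"));
    }
}
```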
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumMysqlSourceConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumMysqlSourceConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..cc68727cc6ecda05bf8080ab438e5453a1122452
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumMysqlSourceConfig.java
@@ -0,0 +1,110 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * debezium mysql source config
+ *
+ * @since 2025/5/7
+ */
+public class DebeziumMysqlSourceConfig {
+ /**
+ * database hostname
+ */
+ public static final String DATABASE_HOSTNAME = "database.hostname";
+
+ /**
+ * database port
+ */
+ public static final String DATABASE_PORT = "database.port";
+
+ /**
+ * database user
+ */
+ public static final String DATABASE_USER = "database.user";
+
+ /**
+ * database history kafka bootstrap servers
+ */
+ public static final String DATABASE_HISTORY_KAFKA_SERVERS = "database.history.kafka.bootstrap.servers";
+
+ /**
+ * database password
+ */
+ public static final String DATABASE_PASSWORD = "database.password";
+
+ /**
+ * debezium connector name
+ */
+ public static final String NAME = "name";
+
+ /**
+ * transforms route regex
+ */
+ public static final String TRANSFORMS_ROUTE_REGEX = "transforms.route.regex";
+
+ /**
+ * database server name
+ */
+ public static final String DATABASE_SERVER_NAME = "database.server.name";
+
+ /**
+ * database server id
+ */
+ public static final String DATABASE_SERVER_ID = "database.server.id";
+
+ /**
+ * database history kafka topic
+ */
+ public static final String DATABASE_HISTORY_KAFKA_TOPIC = "database.history.kafka.topic";
+
+ /**
+ * transforms route replacement
+ */
+ public static final String TRANSFORMS_ROUTE_REPLACEMENT = "transforms.route.replacement";
+
+ /**
+ * source process file path
+ */
+ public static final String SOURCE_PROCESS_FILE_PATH = "source.process.file.path";
+
+ /**
+ * create count info path
+ */
+ public static final String CREATE_COUNT_INFO_PATH = "create.count.info.path";
+
+ /**
+ * snapshot offset binlog filename
+ */
+ public static final String SNAPSHOT_OFFSET_BINLOG_FILENAME = "snapshot.offset.binlog.filename";
+
+ /**
+ * snapshot offset binlog position
+ */
+ public static final String SNAPSHOT_OFFSET_BINLOG_POSITION = "snapshot.offset.binlog.position";
+
+ /**
+ * snapshot offset gtid set
+ */
+ public static final String SNAPSHOT_OFFSET_GTID_SET = "snapshot.offset.gtid.set";
+
+ /**
+ * kafka bootstrap servers
+ */
+ public static final String KAFKA_BOOTSTRAP_SERVERS = "kafka.bootstrap.server";
+
+ /**
+ * database include list
+ */
+ public static final String DATABASE_INCLUDE_LIST = "database.include.list";
+
+ /**
+ * database table include list
+ */
+ public static final String TABLE_INCLUDE_LIST = "table.include.list";
+
+ private DebeziumMysqlSourceConfig() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumOpenGaussSinkConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumOpenGaussSinkConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..55befec5a3ba80a6030271537b8da9acda5f191b
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumOpenGaussSinkConfig.java
@@ -0,0 +1,90 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * debezium openGauss sink config
+ *
+ * @since 2025/5/8
+ */
+public class DebeziumOpenGaussSinkConfig {
+ /**
+ * database type
+ */
+ public static final String DATABASE_TYPE = "database.type";
+
+ /**
+ * database username
+ */
+ public static final String DATABASE_USERNAME = "database.username";
+
+ /**
+ * database password
+ */
+ public static final String DATABASE_PASSWORD = "database.password";
+
+ /**
+ * database name
+ */
+ public static final String DATABASE_NAME = "database.name";
+
+ /**
+ * database port
+ */
+ public static final String DATABASE_PORT = "database.port";
+
+ /**
+ * database ip
+ */
+ public static final String DATABASE_IP = "database.ip";
+
+ /**
+ * schema mappings
+ */
+ public static final String SCHEMA_MAPPINGS = "schema.mappings";
+
+ /**
+ * table include list
+ */
+ public static final String TABLE_INCLUDE_LIST = "table.include.list";
+
+ /**
+ * debezium sink connect name
+ */
+ public static final String NAME = "name";
+
+ /**
+ * debezium sink topics
+ */
+ public static final String TOPICS = "topics";
+
+ /**
+ * debezium sink record breakpoint kafka topic
+ */
+ public static final String RECORD_BREAKPOINT_KAFKA_TOPIC = "record.breakpoint.kafka.topic";
+
+ /**
+ * debezium sink record breakpoint kafka bootstrap servers
+ */
+ public static final String RECORD_BREAKPOINT_KAFKA_BOOTSTRAP_SERVERS = "record.breakpoint.kafka.bootstrap.servers";
+
+ /**
+ * debezium sink process file path
+ */
+ public static final String SINK_PROCESS_FILE_PATH = "sink.process.file.path";
+
+ /**
+ * debezium sink create count info path
+ */
+ public static final String CREATE_COUNT_INFO_PATH = "create.count.info.path";
+
+ /**
+ * debezium sink fail sql path
+ */
+ public static final String FAIL_SQL_PATH = "fail.sql.path";
+
+ private DebeziumOpenGaussSinkConfig() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumOpenGaussSourceConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumOpenGaussSourceConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..793ca7f033ae5b378b1b4264b66dad687e8f8ed0
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumOpenGaussSourceConfig.java
@@ -0,0 +1,125 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * debezium openGauss source config
+ *
+ * @since 2025/5/8
+ */
+public class DebeziumOpenGaussSourceConfig {
+ /**
+ * database ip
+ */
+ public static final String DATABASE_HOSTNAME = "database.hostname";
+
+ /**
+ * database port
+ */
+ public static final String DATABASE_PORT = "database.port";
+
+ /**
+ * database username
+ */
+ public static final String DATABASE_USER = "database.user";
+
+ /**
+ * database password
+ */
+ public static final String DATABASE_PASSWORD = "database.password";
+
+ /**
+ * database name
+ */
+ public static final String DATABASE_NAME = "database.dbname";
+
+ /**
+     * table include list
+ */
+ public static final String TABLE_INCLUDE_LIST = "table.include.list";
+
+ /**
+     * schema include list
+ */
+ public static final String SCHEMA_INCLUDE_LIST = "schema.include.list";
+
+ /**
+ * database is cluster
+ */
+ public static final String DATABASE_IS_CLUSTER = "database.iscluster";
+
+ /**
+ * database standby hostnames
+ */
+ public static final String DATABASE_STANDBY_HOSTNAMES = "database.standby.hostnames";
+
+ /**
+ * database standby ports
+ */
+ public static final String DATABASE_STANDBY_PORTS = "database.standby.ports";
+
+ /**
+ * debezium source connector name
+ */
+ public static final String NAME = "name";
+
+ /**
+ * database server name
+ */
+ public static final String DATABASE_SERVER_NAME = "database.server.name";
+
+ /**
+ * database history kafka topic
+ */
+ public static final String DATABASE_HISTORY_KAFKA_TOPIC = "database.history.kafka.topic";
+
+ /**
+ * transform route regex
+ */
+ public static final String TRANSFORMS_ROUTE_REGEX = "transforms.route.regex";
+
+ /**
+ * transform route replacement
+ */
+ public static final String TRANSFORMS_ROUTE_REPLACEMENT = "transforms.route.replacement";
+
+ /**
+ * source process file path
+ */
+ public static final String SOURCE_PROCESS_FILE_PATH = "source.process.file.path";
+
+ /**
+ * create count info path
+ */
+ public static final String CREATE_COUNT_INFO_PATH = "create.count.info.path";
+
+ /**
+ * database slot name
+ */
+ public static final String SLOT_NAME = "slot.name";
+
+ /**
+ * database slot drop on stop
+ */
+ public static final String SLOT_DROP_ON_STOP = "slot.drop.on.stop";
+
+ /**
+ * debezium connect openGauss xlog
+ */
+ public static final String XLOG_LOCATION = "xlog.location";
+
+ /**
+ * debezium plugin name
+ */
+ public static final String PLUGIN_NAME = "plugin.name";
+
+ /**
+ * publication auto create mode
+ */
+ public static final String PUBLICATION_AUTO_CREATE_MODE = "publication.autocreate.mode";
+
+ private DebeziumOpenGaussSourceConfig() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumPgsqlSinkConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumPgsqlSinkConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..7dfb030e557253954fb6db7c92db3f1e111f8371
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumPgsqlSinkConfig.java
@@ -0,0 +1,80 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * debezium pgsql sink config
+ *
+ * @since 2025/6/10
+ */
+public class DebeziumPgsqlSinkConfig {
+ /**
+ * database username
+ */
+ public static final String DATABASE_USERNAME = "database.username";
+
+ /**
+ * database password
+ */
+ public static final String DATABASE_PASSWORD = "database.password";
+
+ /**
+ * database name
+ */
+ public static final String DATABASE_NAME = "database.name";
+
+ /**
+ * database port
+ */
+ public static final String DATABASE_PORT = "database.port";
+
+ /**
+ * database ip
+ */
+ public static final String DATABASE_IP = "database.ip";
+
+ /**
+ * schema mappings
+ */
+ public static final String SCHEMA_MAPPINGS = "schema.mappings";
+
+ /**
+ * debezium sink connector name
+ */
+ public static final String NAME = "name";
+
+ /**
+ * kafka topic
+ */
+ public static final String TOPICS = "topics";
+
+ /**
+     * commit process while running
+ */
+ public static final String COMMIT_PROCESS_WHILE_RUNNING = "commit.process.while.running";
+
+ /**
+ * sink process file path
+ */
+ public static final String SINK_PROCESS_FILE_PATH = "sink.process.file.path";
+
+ /**
+ * create count info path
+ */
+ public static final String CREATE_COUNT_INFO_PATH = "create.count.info.path";
+
+ /**
+ * fail sql path
+ */
+ public static final String FAIL_SQL_PATH = "fail.sql.path";
+
+ /**
+ * xlog location save path
+ */
+ public static final String XLOG_LOCATION = "xlog.location";
+
+ private DebeziumPgsqlSinkConfig() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumPgsqlSourceConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumPgsqlSourceConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..91cc97dcd14907ef9d98c3c3f0ec2c574f8d8a5d
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/DebeziumPgsqlSourceConfig.java
@@ -0,0 +1,120 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * debezium pgsql source config
+ *
+ * @since 2025/6/10
+ */
+public class DebeziumPgsqlSourceConfig {
+ /**
+ * database ip
+ */
+ public static final String DATABASE_HOSTNAME = "database.hostname";
+
+ /**
+ * database port
+ */
+ public static final String DATABASE_PORT = "database.port";
+
+ /**
+ * database user
+ */
+ public static final String DATABASE_USER = "database.user";
+
+ /**
+ * database password
+ */
+ public static final String DATABASE_PASSWORD = "database.password";
+
+ /**
+ * database name
+ */
+ public static final String DATABASE_NAME = "database.dbname";
+
+ /**
+ * schema include list
+ */
+ public static final String SCHEMA_INCLUDE_LIST = "schema.include.list";
+
+ /**
+ * table include list
+ */
+ public static final String TABLE_INCLUDE_LIST = "table.include.list";
+
+ /**
+ * schema exclude list
+ */
+ public static final String SCHEMA_EXCLUDE_LIST = "schema.exclude.list";
+
+ /**
+ * table exclude list
+ */
+ public static final String TABLE_EXCLUDE_LIST = "table.exclude.list";
+
+ /**
+ * debezium connect name
+ */
+ public static final String NAME = "name";
+
+ /**
+ * database server name
+ */
+ public static final String DATABASE_SERVER_NAME = "database.server.name";
+
+ /**
+ * transforms route regex
+ */
+ public static final String TRANSFORMS_ROUTE_REGEX = "transforms.route.regex";
+
+ /**
+ * transforms route replacement
+ */
+ public static final String TRANSFORMS_ROUTE_REPLACEMENT = "transforms.route.replacement";
+
+ /**
+ * commit process while running
+ */
+ public static final String COMMIT_PROCESS_WHILE_RUNNING = "commit.process.while.running";
+
+ /**
+ * source process file path
+ */
+ public static final String SOURCE_PROCESS_FILE_PATH = "source.process.file.path";
+
+ /**
+ * create count info path
+ */
+ public static final String CREATE_COUNT_INFO_PATH = "create.count.info.path";
+
+ /**
+ * database slot name
+ */
+ public static final String SLOT_NAME = "slot.name";
+
+ /**
+ * database slot drop on stop
+ */
+ public static final String SLOT_DROP_ON_STOP = "slot.drop.on.stop";
+
+ /**
+ * plugin name
+ */
+ public static final String PLUGIN_NAME = "plugin.name";
+
+ /**
+ * migration type
+ */
+ public static final String MIGRATION_TYPE = "migration.type";
+
+ /**
+ * truncate handling mode
+ */
+ public static final String TRUNCATE_HANDLING_MODE = "truncate.handling.mode";
+
+ private DebeziumPgsqlSourceConfig() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/FullMigrationToolConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/FullMigrationToolConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..26949eb72b98657beada12b7636513ce556efaa2
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/FullMigrationToolConfig.java
@@ -0,0 +1,105 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * full migration tool config
+ *
+ * @since 2025/5/29
+ */
+public class FullMigrationToolConfig {
+ /**
+ * is dump json
+ */
+ public static final String IS_DUMP_JSON = "isDumpJson";
+
+ /**
+ * status dir
+ */
+ public static final String STATUS_DIR = "statusDir";
+
+ /**
+ * openGauss ip
+ */
+ public static final String OG_CONN_HOST = "ogConn.host";
+
+ /**
+ * openGauss port
+ */
+ public static final String OG_CONN_PORT = "ogConn.port";
+
+ /**
+ * openGauss user
+ */
+ public static final String OG_CONN_USER = "ogConn.user";
+
+ /**
+ * openGauss password
+ */
+ public static final String OG_CONN_PASSWORD = "ogConn.password";
+
+ /**
+ * openGauss database
+ */
+ public static final String OG_CONN_DATABASE = "ogConn.database";
+
+ /**
+ * source database host
+ */
+ public static final String SOURCE_DB_CONN_HOST = "sourceConfig.dbConn.host";
+
+ /**
+ * source database port
+ */
+ public static final String SOURCE_DB_CONN_PORT = "sourceConfig.dbConn.port";
+
+ /**
+ * source database user
+ */
+ public static final String SOURCE_DB_CONN_USER = "sourceConfig.dbConn.user";
+
+ /**
+ * source database password
+ */
+ public static final String SOURCE_DB_CONN_PASSWORD = "sourceConfig.dbConn.password";
+
+ /**
+     * source database name
+ */
+ public static final String SOURCE_DB_CONN_DATABASE = "sourceConfig.dbConn.database";
+
+ /**
+ * source database schema mappings
+ */
+ public static final String SOURCE_SCHEMA_MAPPINGS = "sourceConfig.schemaMappings";
+
+ /**
+     * whether to delete csv files when the migration finishes
+ */
+ public static final String IS_DELETE_CSV = "isDeleteCsv";
+
+ /**
+ * source csv dir
+ */
+ public static final String SOURCE_CSV_DIR = "sourceConfig.csvDir";
+
+ /**
+ * is record snapshot, default false
+ */
+ public static final String IS_RECORD_SNAPSHOT = "sourceConfig.isRecordSnapshot";
+
+ /**
+ * source database slot name
+ */
+ public static final String SLOT_NAME = "sourceConfig.slotName";
+
+ /**
+ * source database plugin name
+ */
+ public static final String PLUGIN_NAME = "sourceConfig.pluginName";
+
+ private FullMigrationToolConfig() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/config/MigrationConfig.java b/multidb-portal/src/main/java/org/opengauss/constants/config/MigrationConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..dada16d91284c0b99f866046ea176ff7b5b7b713
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/config/MigrationConfig.java
@@ -0,0 +1,190 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.config;
+
+/**
+ * migration config
+ *
+ * @since 2025/4/30
+ */
+public class MigrationConfig {
+ /**
+ * Migration mode
+ */
+ public static final String MIGRATION_MODE = "migration.mode";
+
+ /**
+ * Whether to migrate objects. Default value is true
+ */
+ public static final String IS_MIGRATION_OBJECT = "is.migration.object";
+
+ /**
+     * Whether to adjust openGauss kernel parameters. Default value is false.
+ */
+ public static final String IS_ADJUST_KERNEL_PARAM = "is.adjust.kernel.param";
+
+ /**
+ * MySQL server IP address
+ */
+ public static final String MYSQL_DATABASE_IP = "mysql.database.ip";
+
+ /**
+ * MySQL server port
+ */
+ public static final String MYSQL_DATABASE_PORT = "mysql.database.port";
+
+ /**
+ * MySQL database name
+ */
+ public static final String MYSQL_DATABASE_NAME = "mysql.database.name";
+
+ /**
+ * MySQL server user name
+ */
+ public static final String MYSQL_DATABASE_USERNAME = "mysql.database.username";
+
+ /**
+ * MySQL server user password
+ */
+ public static final String MYSQL_DATABASE_PASSWORD = "mysql.database.password";
+
+ /**
+ * MySQL tables to be migrated
+ */
+ public static final String MYSQL_DATABASE_TABLES = "mysql.database.tables";
+
+ /**
+ * PostgreSQL server IP address
+ */
+ public static final String PGSQL_DATABASE_IP = "pgsql.database.ip";
+
+ /**
+ * PostgreSQL server port
+ */
+ public static final String PGSQL_DATABASE_PORT = "pgsql.database.port";
+
+ /**
+ * PostgreSQL database name
+ */
+ public static final String PGSQL_DATABASE_NAME = "pgsql.database.name";
+
+ /**
+ * PostgreSQL server user name
+ */
+ public static final String PGSQL_DATABASE_USERNAME = "pgsql.database.username";
+
+ /**
+ * PostgreSQL server user password
+ */
+ public static final String PGSQL_DATABASE_PASSWORD = "pgsql.database.password";
+
+ /**
+ * PostgreSQL schemas to be migrated
+ */
+ public static final String PGSQL_DATABASE_SCHEMAS = "pgsql.database.schemas";
+
+ /**
+     * openGauss server IP address
+ */
+ public static final String OPENGAUSS_DATABASE_IP = "opengauss.database.ip";
+
+ /**
+     * openGauss server port
+ */
+ public static final String OPENGAUSS_DATABASE_PORT = "opengauss.database.port";
+
+ /**
+     * openGauss database name
+ */
+ public static final String OPENGAUSS_DATABASE_NAME = "opengauss.database.name";
+
+ /**
+     * openGauss server user name
+ */
+ public static final String OPENGAUSS_DATABASE_USERNAME = "opengauss.database.username";
+
+ /**
+     * openGauss server user password
+ */
+ public static final String OPENGAUSS_DATABASE_PASSWORD = "opengauss.database.password";
+
+ /**
+     * openGauss schema of the migration
+ */
+ public static final String OPENGAUSS_DATABASE_SCHEMA = "opengauss.database.schema";
+
+ /**
+     * openGauss database standby node IPs
+ */
+ public static final String OPENGAUSS_DATABASE_STANDBY_HOSTS = "opengauss.database.standby.hosts";
+
+ /**
+     * openGauss database standby node ports
+ */
+ public static final String OPENGAUSS_DATABASE_STANDBY_PORTS = "opengauss.database.standby.ports";
+
+ /**
+ * Schema mappings
+ */
+ public static final String SCHEMA_MAPPINGS = "schema.mappings";
+
+ /**
+ * Full migration process JVM configuration
+ */
+ public static final String FULL_PROCESS_JVM = "full.process.jvm";
+
+ /**
+ * Full data check source process JVM configuration
+ */
+ public static final String FULL_CHECK_SOURCE_PROCESS_JVM = "full.check.source.jvm";
+
+ /**
+ * Full data check sink process JVM configuration
+ */
+ public static final String FULL_CHECK_SINK_PROCESS_JVM = "full.check.sink.jvm";
+
+ /**
+ * Full data check process JVM configuration
+ */
+ public static final String FULL_CHECK_CHECK_PROCESS_JVM = "full.check.jvm";
+
+ /**
+ * Incremental data check source process JVM configuration
+ */
+ public static final String INCREMENTAL_CHECK_SOURCE_PROCESS_JVM = "incremental.check.source.jvm";
+
+ /**
+ * Incremental data check sink process JVM configuration
+ */
+ public static final String INCREMENTAL_CHECK_SINK_PROCESS_JVM = "incremental.check.sink.jvm";
+
+ /**
+ * Incremental data check process JVM configuration
+ */
+ public static final String INCREMENTAL_CHECK_CHECK_PROCESS_JVM = "incremental.check.jvm";
+
+ /**
+ * Incremental migration source process JVM configuration
+ */
+ public static final String INCREMENTAL_MIGRATION_SOURCE_PROCESS_JVM = "incremental.source.jvm";
+
+ /**
+ * Incremental migration sink process JVM configuration
+ */
+ public static final String INCREMENTAL_MIGRATION_SINK_PROCESS_JVM = "incremental.sink.jvm";
+
+ /**
+ * Reverse migration source process JVM configuration
+ */
+ public static final String REVERSE_MIGRATION_SOURCE_PROCESS_JVM = "reverse.source.jvm";
+
+ /**
+ * Reverse migration sink process JVM configuration
+ */
+ public static final String REVERSE_MIGRATION_SINK_PROCESS_JVM = "reverse.sink.jvm";
+
+ private MigrationConfig() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/tool/ChameleonConstants.java b/multidb-portal/src/main/java/org/opengauss/constants/tool/ChameleonConstants.java
new file mode 100644
index 0000000000000000000000000000000000000000..8f3f8883d003e0af05748ca088f28f8f03c5b034
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/tool/ChameleonConstants.java
@@ -0,0 +1,120 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.tool;
+
+import java.util.List;
+
+/**
+ * chameleon constants
+ *
+ * @since 2025/4/19
+ */
+public class ChameleonConstants {
+ /**
+ * tool name
+ */
+ public static final String TOOL_NAME = "Chameleon";
+
+ /**
+ * pg chameleon dir path
+ */
+ public static final String PG_CHAMELEON_DIR_PATH = "~/.pg_chameleon";
+
+ /**
+ * pg chameleon config dir path
+ */
+ public static final String PG_CHAMELEON_CONFIG_DIR_PATH = PG_CHAMELEON_DIR_PATH + "/configuration";
+
+ /**
+ * install pkg dir name
+ */
+ public static final String INSTALL_PKG_DIR_NAME = "chameleon";
+
+ /**
+ * install pkg name model
+ */
+ public static final String INSTALL_PKG_NAME_MODEL = "chameleon-%s-%s.tar.gz";
+
+ /**
+ * install dir name
+ */
+ public static final String INSTALL_DIR_NAME = "chameleon";
+
+ /**
+ * chameleon dir home name model
+ */
+ public static final String CHAMELEON_DIR_HOME_NAME_MODEL = "chameleon-%s";
+
+ /**
+ * chameleon file relative path
+ */
+ public static final String CHAMELEON_FILE_RELATIVE_PATH = "venv/bin/chameleon";
+
+ /**
+ * wait chameleon process start millis
+ */
+ public static final int WAIT_PROCESS_START_MILLIS = 2000;
+
+ /**
+ * set configuration files order
+ */
+ public static final String ORDER_SET_CONFIGURATION_FILES = "set_configuration_files";
+
+ /**
+ * drop replica schema order
+ */
+ public static final String ORDER_DROP_REPLICA_SCHEMA = "drop_replica_schema";
+
+ /**
+ * create replica schema order
+ */
+ public static final String ORDER_CREATE_REPLICA_SCHEMA = "create_replica_schema";
+
+ /**
+ * add source order
+ */
+ public static final String ORDER_ADD_SOURCE = "add_source";
+
+ /**
+ * init replica order
+ */
+ public static final String ORDER_INIT_REPLICA = "init_replica";
+
+ /**
+ * start trigger replica order
+ */
+ public static final String ORDER_START_TRIGGER_REPLICA = "start_trigger_replica";
+
+ /**
+ * start view replica order
+ */
+ public static final String ORDER_START_VIEW_REPLICA = "start_view_replica";
+
+ /**
+ * start func replica order
+ */
+ public static final String ORDER_START_FUNC_REPLICA = "start_func_replica";
+
+ /**
+ * start proc replica order
+ */
+ public static final String ORDER_START_PROC_REPLICA = "start_proc_replica";
+
+ /**
+ * detach replica order
+ */
+ public static final String ORDER_DETACH_REPLICA = "detach_replica";
+
+    /**
+     * orders that need the source config parameter
+     */
+    public static final List<String> ORDER_NEED_CONFIG_SOURCE_LIST = List.of(
+ ORDER_ADD_SOURCE, ORDER_INIT_REPLICA, ORDER_START_TRIGGER_REPLICA, ORDER_START_VIEW_REPLICA,
+ ORDER_START_FUNC_REPLICA, ORDER_START_PROC_REPLICA, ORDER_DETACH_REPLICA
+ );
+
+ private ChameleonConstants() {
+ }
+}
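Outside the patch itself: the `ORDER_*` constants above are the names of pg_chameleon subcommands invoked through the binary at `CHAMELEON_FILE_RELATIVE_PATH`. A minimal standalone sketch of how such a command line could be assembled — the install directory `/opt/chameleon` and the `default`/`mysql` config and source names are illustrative assumptions, and the constants are inlined so the sketch compiles on its own:

```java
import java.util.List;

public class ChameleonCommandSketch {
    // Inlined copies of ChameleonConstants values so the sketch compiles standalone
    static final String CHAMELEON_FILE_RELATIVE_PATH = "venv/bin/chameleon";
    static final String ORDER_INIT_REPLICA = "init_replica";

    /**
     * Assembles an argument list such as:
     * /opt/chameleon/venv/bin/chameleon init_replica --config default --source mysql
     */
    static List<String> buildCommand(String installDir, String order, String configName, String sourceName) {
        return List.of(installDir + "/" + CHAMELEON_FILE_RELATIVE_PATH,
                order, "--config", configName, "--source", sourceName);
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ",
                buildCommand("/opt/chameleon", ORDER_INIT_REPLICA, "default", "mysql")));
    }
}
```

The resulting list can be handed to a `ProcessBuilder`; how the real portal wires this up is not shown in this diff chunk.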
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/tool/DataCheckerConstants.java b/multidb-portal/src/main/java/org/opengauss/constants/tool/DataCheckerConstants.java
new file mode 100644
index 0000000000000000000000000000000000000000..b5c914a013a2893dc010266e9c25ff2e0c46a7a1
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/tool/DataCheckerConstants.java
@@ -0,0 +1,110 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.tool;
+
+/**
+ * data checker constants
+ *
+ * @since 2025/4/19
+ */
+public class DataCheckerConstants {
+ /**
+ * tool name
+ */
+ public static final String TOOL_NAME = "DataChecker";
+
+ /**
+ * install pkg dir name
+ */
+ public static final String INSTALL_PKG_DIR_NAME = "datachecker";
+
+ /**
+ * install pkg name model
+ */
+ public static final String INSTALL_PKG_NAME_MODEL = "gs_datacheck-%s.tar.gz";
+
+ /**
+ * install dir name
+ */
+ public static final String INSTALL_DIR_NAME = "datachecker";
+
+ /**
+ * data checker home dir name model
+ */
+ public static final String DATA_CHECKER_HOME_DIR_NAME_MODEL = "gs_datacheck-%s";
+
+ /**
+ * check jar name model
+ */
+ public static final String CHECK_JAR_NAME_MODEL = "datachecker-check-%s.jar";
+
+ /**
+ * extract jar name model
+ */
+ public static final String EXTRACT_JAR_NAME_MODEL = "datachecker-extract-%s.jar";
+
+ /**
+ * data checker lib dir name
+ */
+ public static final String DATA_CHECKER_LIB_DIR_NAME = "lib";
+
+ /**
+ * wait process start millis
+ */
+ public static final int WAIT_PROCESS_START_MILLIS = 5000;
+
+ /**
+ * check result success file name
+ */
+ public static final String CHECK_RESULT_SUCCESS_FILE_NAME = "success.log";
+
+ /**
+ * check result failed file name
+ */
+ public static final String CHECK_RESULT_FAILED_FILE_NAME = "failed.log";
+
+ /**
+ * check result repair file name model
+ */
+ public static final String CHECK_RESULT_REPAIR_FILE_NAME_MODEL = "repair_%s_%s_0_0.txt";
+
+ /**
+ * process sign file name
+ */
+ public static final String PROCESS_SIGN_FILE_NAME = "process.pid";
+
+ /**
+ * source process start sign
+ */
+ public static final String SOURCE_PROCESS_START_SIGN = "\"endpoint\":\"SOURCE\",\"event\":\"start\"";
+
+ /**
+ * sink process start sign
+ */
+ public static final String SINK_PROCESS_START_SIGN = "\"endpoint\":\"SINK\",\"event\":\"start\"";
+
+ /**
+ * check process start sign
+ */
+ public static final String CHECK_PROCESS_START_SIGN = "\"endpoint\":\"CHECK\",\"event\":\"start\"";
+
+ /**
+ * source process stop sign
+ */
+ public static final String SOURCE_PROCESS_STOP_SIGN = "\"endpoint\":\"SOURCE\",\"event\":\"stop\"";
+
+ /**
+ * sink process stop sign
+ */
+ public static final String SINK_PROCESS_STOP_SIGN = "\"endpoint\":\"SINK\",\"event\":\"stop\"";
+
+ /**
+ * check process stop sign
+ */
+ public static final String CHECK_PROCESS_STOP_SIGN = "\"endpoint\":\"CHECK\",\"event\":\"stop\"";
+
+ private DataCheckerConstants() {
+ }
+}
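Outside the patch itself: the `*_START_SIGN` / `*_STOP_SIGN` strings above are JSON fragments that identify lifecycle events in the process sign file (`process.pid`). A standalone sketch of how such a sign might be matched — a plain substring check is an assumption here, and the actual portal logic may differ:

```java
public class ProcessSignSketch {
    // Inlined copy of DataCheckerConstants.SOURCE_PROCESS_START_SIGN
    static final String SOURCE_PROCESS_START_SIGN = "\"endpoint\":\"SOURCE\",\"event\":\"start\"";

    /**
     * Returns true if the content of the process sign file records
     * a source-endpoint start event.
     */
    static boolean hasSourceStarted(String signFileContent) {
        return signFileContent != null && signFileContent.contains(SOURCE_PROCESS_START_SIGN);
    }

    public static void main(String[] args) {
        String line = "{\"endpoint\":\"SOURCE\",\"event\":\"start\",\"pid\":12345}";
        System.out.println(hasSourceStarted(line)); // true
    }
}
```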
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/tool/DebeziumConstants.java b/multidb-portal/src/main/java/org/opengauss/constants/tool/DebeziumConstants.java
new file mode 100644
index 0000000000000000000000000000000000000000..52a62d8a89290938145ad99c1ed9ac2d8167d114
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/tool/DebeziumConstants.java
@@ -0,0 +1,93 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.tool;
+
+/**
+ * debezium constants
+ *
+ * @since 2025/4/19
+ */
+public class DebeziumConstants {
+ /**
+ * tool name
+ */
+ public static final String TOOL_NAME = "Debezium";
+
+ /**
+ * install pkg dir name
+ */
+ public static final String INSTALL_PKG_DIR_NAME = "debezium";
+
+ /**
+ * connect mysql install pkg name model
+ */
+ public static final String CONNECT_MYSQL_INSTALL_PKG_NAME_MODEL = "replicate-mysql2openGauss-%s.tar.gz";
+
+ /**
+ * connect openGauss install pkg name model
+ */
+ public static final String CONNECT_OPENGAUSS_INSTALL_PKG_NAME_MODEL = "replicate-openGauss2mysql-%s.tar.gz";
+
+ /**
+ * connect postgresql install pkg name model
+ */
+ public static final String CONNECT_PGSQL_INSTALL_PKG_NAME_MODEL = "replicate-postgresql2openGauss-%s.tar.gz";
+
+ /**
+ * install dir name
+ */
+ public static final String INSTALL_DIR_NAME = "debezium";
+
+ /**
+ * connect mysql jar relative path
+ */
+ public static final String CONNECT_MYSQL_JAR_RELATIVE_PATH =
+ "debezium-connector-mysql/debezium-connector-mysql-1.8.1.Final.jar";
+
+ /**
+ * connect openGauss jar relative path
+ */
+ public static final String CONNECT_OPENGAUSS_JAR_RELATIVE_PATH =
+ "debezium-connector-opengauss/debezium-connector-opengauss-1.8.1.Final.jar";
+
+ /**
+ * connect postgresql jar relative path
+ */
+ public static final String CONNECT_PGSQL_JAR_RELATIVE_PATH =
+ "debezium-connector-postgres/debezium-connector-postgres-1.8.1.Final.jar";
+
+ /**
+ * wait process start millis
+ */
+ public static final int WAIT_PROCESS_START_MILLIS = 3000;
+
+ /**
+     * incremental source process status file name prefix
+ */
+ public static final String INCREMENTAL_SOURCE_STATUS_FILE_PREFIX = "forward-source-process";
+
+ /**
+     * incremental sink process status file name prefix
+ */
+ public static final String INCREMENTAL_SINK_STATUS_FILE_PREFIX = "forward-sink-process";
+
+ /**
+ * reverse source process status file name prefix
+ */
+ public static final String REVERSE_SOURCE_STATUS_FILE_PREFIX = "reverse-source-process";
+
+ /**
+ * reverse sink process status file name prefix
+ */
+ public static final String REVERSE_SINK_STATUS_FILE_PREFIX = "reverse-sink-process";
+
+ /**
+ * fail sql file name
+ */
+ public static final String FAIL_SQL_FILE_NAME = "fail-sql.txt";
+
+ private DebeziumConstants() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/tool/FullMigrationToolConstants.java b/multidb-portal/src/main/java/org/opengauss/constants/tool/FullMigrationToolConstants.java
new file mode 100644
index 0000000000000000000000000000000000000000..bfe66edd47362d6c803e31b0736ff86ef298bd53
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/tool/FullMigrationToolConstants.java
@@ -0,0 +1,110 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.tool;
+
+/**
+ * full migration tool constants
+ *
+ * @since 2025/5/29
+ */
+public class FullMigrationToolConstants {
+ /**
+ * tool name
+ */
+ public static final String TOOL_NAME = "Full-Migration";
+
+ /**
+ * install package directory name
+ */
+ public static final String INSTALL_PKG_DIR_NAME = "full-migration";
+
+ /**
+ * install package name
+ */
+ public static final String INSTALL_PKG_NAME = "full-migration-tool-%s.tar.gz";
+
+ /**
+ * install directory name
+ */
+ public static final String INSTALL_DIR_NAME = "full-migration";
+
+ /**
+ * full migration jar name model
+ */
+ public static final String FULL_MIGRATION_JAR_NAME_MODEL = "full-migration-tool-%s.jar";
+
+ /**
+ * full migration jar name
+ */
+ public static final String FULL_MIGRATION_JAR_HOME_NAME = "full-migration-tool";
+
+ /**
+ * wait process start millis
+ */
+ public static final int WAIT_PROCESS_START_MILLIS = 2000;
+
+ /**
+ * order table
+ */
+ public static final String ORDER_TABLE = "table";
+
+ /**
+ * order sequence
+ */
+ public static final String ORDER_SEQUENCE = "sequence";
+
+ /**
+ * order primary key
+ */
+ public static final String ORDER_PRIMARY_KEY = "primarykey";
+
+ /**
+ * order index
+ */
+ public static final String ORDER_INDEX = "index";
+
+ /**
+ * order constraint
+ */
+ public static final String ORDER_CONSTRAINT = "constraint";
+
+ /**
+ * order view
+ */
+ public static final String ORDER_VIEW = "view";
+
+ /**
+ * order function
+ */
+ public static final String ORDER_FUNCTION = "function";
+
+ /**
+ * order procedure
+ */
+ public static final String ORDER_PROCEDURE = "procedure";
+
+ /**
+ * order trigger
+ */
+ public static final String ORDER_TRIGGER = "trigger";
+
+ /**
+ * order foreignkey
+ */
+ public static final String ORDER_FOREIGN_KEY = "foreignkey";
+
+ /**
+ * order drop_replica_schema
+ */
+ public static final String ORDER_DROP_REPLICA_SCHEMA = "drop_replica_schema";
+
+ /**
+ * support source db type: postgresql
+ */
+ public static final String SUPPORT_SOURCE_DB_TYPE_PGSQL = "postgresql";
+
+ private FullMigrationToolConstants() {
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/constants/tool/KafkaConstants.java b/multidb-portal/src/main/java/org/opengauss/constants/tool/KafkaConstants.java
new file mode 100644
index 0000000000000000000000000000000000000000..848791fd6f16b6943858e78951fb8788cd54f7a1
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/constants/tool/KafkaConstants.java
@@ -0,0 +1,115 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.constants.tool;
+
+/**
+ * kafka constants
+ *
+ * @since 2025/4/19
+ */
+public class KafkaConstants {
+ /**
+ * tool name
+ */
+ public static final String TOOL_NAME = "Kafka";
+
+ /**
+ * install pkg dir name
+ */
+ public static final String INSTALL_PKG_DIR_NAME = "confluent";
+
+ /**
+ * install pkg name
+ */
+ public static final String INSTALL_PKG_NAME = "confluent-community-5.5.1-2.12.zip";
+
+ /**
+ * install dir name
+ */
+ public static final String INSTALL_DIR_NAME = "confluent";
+
+ /**
+ * confluent dir name
+ */
+ public static final String CONFLUENT_DIR_NAME = "confluent-5.5.1";
+
+ /**
+ * kafka tmp dir name
+ */
+ public static final String KAFKA_TMP_DIR_NAME = "kafka-logs";
+
+ /**
+ * kafka starter relative path
+ */
+ public static final String KAFKA_STARTER_RELATIVE_PATH = "bin/kafka-server-start";
+
+ /**
+ * kafka config relative path
+ */
+ public static final String KAFKA_CONFIG_RELATIVE_PATH = "etc/kafka/server.properties";
+
+ /**
+ * zookeeper tmp dir name
+ */
+ public static final String ZOOKEEPER_TMP_DIR_NAME = "zookeeper";
+
+ /**
+ * zookeeper starter relative path
+ */
+ public static final String ZOOKEEPER_STARTER_RELATIVE_PATH = "bin/zookeeper-server-start";
+
+ /**
+ * zookeeper config relative path
+ */
+ public static final String ZOOKEEPER_CONFIG_RELATIVE_PATH = "etc/kafka/zookeeper.properties";
+
+ /**
+ * schema registry starter relative path
+ */
+ public static final String SCHEMA_REGISTRY_STARTER_RELATIVE_PATH = "bin/schema-registry-start";
+
+ /**
+ * schema registry config relative path
+ */
+ public static final String SCHEMA_REGISTRY_CONFIG_RELATIVE_PATH = "etc/schema-registry/schema-registry.properties";
+
+ /**
+ * connect standalone relative path
+ */
+ public static final String CONNECT_STANDALONE_RELATIVE_PATH = "bin/connect-standalone";
+
+ /**
+ * kafka port config name
+ */
+ public static final String PORT_CONFIG_NAME = "kafka-port.properties";
+
+ /**
+ * kafka port config key
+ */
+ public static final String KAFKA_PORT_CONFIG_KEY = "kafka.port";
+
+ /**
+ * zookeeper port config key
+ */
+ public static final String ZOOKEEPER_PORT_CONFIG_KEY = "zookeeper.port";
+
+ /**
+ * schema registry port config key
+ */
+ public static final String SCHEMA_REGISTRY_PORT_CONFIG_KEY = "schema.registry.port";
+
+ /**
+     * confluent server IP
+ */
+ public static final String CONFLUENT_IP = "localhost";
+
+ /**
+ * confluent url prefix
+ */
+ public static final String CONFLUENT_URL_PREFIX = "http://";
+
+ private KafkaConstants() {
+ }
+}
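Outside the patch itself: `CONFLUENT_URL_PREFIX` and `CONFLUENT_IP` suggest that component URLs are built by concatenation with a port read from `kafka-port.properties`. A standalone sketch under that assumption — the port value 8081 is only an example, not taken from this diff:

```java
public class ConfluentUrlSketch {
    // Inlined copies of KafkaConstants values
    static final String CONFLUENT_URL_PREFIX = "http://";
    static final String CONFLUENT_IP = "localhost";

    /** Builds a local Confluent component URL, e.g. for the schema registry. */
    static String buildUrl(int port) {
        return CONFLUENT_URL_PREFIX + CONFLUENT_IP + ":" + port;
    }

    public static void main(String[] args) {
        // 8081 is an illustrative port; the real value would come from kafka-port.properties
        System.out.println(buildUrl(8081)); // http://localhost:8081
    }
}
```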
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/dto/AbstractMigrationConfigDto.java b/multidb-portal/src/main/java/org/opengauss/domain/dto/AbstractMigrationConfigDto.java
new file mode 100644
index 0000000000000000000000000000000000000000..250632fce992300523c544e991cdac8b65419341
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/dto/AbstractMigrationConfigDto.java
@@ -0,0 +1,97 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.dto;
+
+import lombok.Getter;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.config.MigrationConfig;
+import org.opengauss.domain.model.OpenGaussDatabaseConnectInfo;
+import org.opengauss.utils.StringUtils;
+
+import java.util.Map;
+
+/**
+ * Abstract migration configuration dto
+ *
+ * @since 2025/6/30
+ */
+@Getter
+public abstract class AbstractMigrationConfigDto {
+ private static final Logger LOGGER = LogManager.getLogger(AbstractMigrationConfigDto.class);
+
+ /**
+ * Migration mode
+ */
+ protected String migrationMode;
+
+ /**
+ * Is migration object
+ */
+ protected String isMigrationObject;
+
+ /**
+ * Is adjust kernel param
+ */
+ protected String isAdjustKernelParam;
+
+ /**
+ * Get config from map
+ *
+ * @param key config key
+ * @param configMap config map
+ * @return config value
+ */
+    protected static String getConfigFromMap(String key, Map<String, Object> configMap) {
+ Object value = configMap.get(key);
+ if (value == null) {
+ throw new IllegalArgumentException("Migration config key '" + key + "' cannot be null");
+ }
+ return value.toString();
+ }
+
+ /**
+ * Get config from map, if value is null, return default value
+ *
+ * @param key config key
+ * @param configMap config map
+ * @param defaultValue default value
+ * @return config value
+ */
+    protected static String getConfigFromMap(String key, Map<String, Object> configMap, String defaultValue) {
+ Object value = configMap.get(key);
+ if (value == null) {
+ return defaultValue;
+ }
+ return value.toString().trim();
+ }
+
+ /**
+ * Check whether the openGauss cluster is available
+ *
+ * @param hosts openGauss cluster hostnames
+ * @param ports openGauss cluster ports
+ * @return true if the openGauss cluster is available
+ */
+ protected boolean isOpenGaussClusterAvailable(String hosts, String ports) {
+ if (StringUtils.isNullOrBlank(hosts) || StringUtils.isNullOrBlank(ports)) {
+ return false;
+ }
+
+ if (hosts.split(",").length != ports.split(",").length) {
+            LOGGER.warn("The number of hostnames in {} does not match the number of ports in {}",
+ MigrationConfig.OPENGAUSS_DATABASE_STANDBY_HOSTS, MigrationConfig.OPENGAUSS_DATABASE_STANDBY_PORTS);
+ return false;
+ }
+ return true;
+ }
+
+ /**
+ * Get openGauss database connect info
+ *
+ * @return OpenGaussDatabaseConnectInfo openGauss database connect info
+ */
+ public abstract OpenGaussDatabaseConnectInfo getOpenGaussConnectInfo();
+}
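Outside the patch itself: the standby-cluster check above accepts the configuration only when both comma-separated lists are non-blank and equal in length. A standalone sketch of that validation rule (the class and method names here are illustrative, not part of the patch):

```java
public class ClusterCheckSketch {
    /**
     * Mirrors the validation in AbstractMigrationConfigDto#isOpenGaussClusterAvailable:
     * both values must be non-blank, and the comma-separated host and port
     * lists must contain the same number of entries.
     */
    static boolean isClusterAvailable(String hosts, String ports) {
        if (hosts == null || hosts.isBlank() || ports == null || ports.isBlank()) {
            return false;
        }
        return hosts.split(",").length == ports.split(",").length;
    }

    public static void main(String[] args) {
        System.out.println(isClusterAvailable("10.0.0.1,10.0.0.2", "5432,5432")); // true
        System.out.println(isClusterAvailable("10.0.0.1,10.0.0.2", "5432"));      // false
    }
}
```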
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/dto/KafkaStatusDto.java b/multidb-portal/src/main/java/org/opengauss/domain/dto/KafkaStatusDto.java
new file mode 100644
index 0000000000000000000000000000000000000000..a21a92492824205c9c59aa955ea1ee765ce572b9
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/dto/KafkaStatusDto.java
@@ -0,0 +1,19 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.dto;
+
+import lombok.Data;
+
+/**
+ * kafka status dto
+ *
+ * @since 2025/4/24
+ */
+@Data
+public class KafkaStatusDto {
+ private boolean isZookeeperRunning;
+ private boolean isKafkaRunning;
+ private boolean isSchemaRegistryRunning;
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/dto/MysqlMigrationConfigDto.java b/multidb-portal/src/main/java/org/opengauss/domain/dto/MysqlMigrationConfigDto.java
new file mode 100644
index 0000000000000000000000000000000000000000..e677db89adff352b2131c87551f695933bc81ec2
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/dto/MysqlMigrationConfigDto.java
@@ -0,0 +1,155 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.dto;
+
+import lombok.Getter;
+import org.opengauss.constants.config.MigrationConfig;
+import org.opengauss.domain.model.DatabaseConnectInfo;
+import org.opengauss.domain.model.OpenGaussDatabaseConnectInfo;
+
+import java.util.Map;
+
+/**
+ * MySQL migration configuration dto
+ *
+ * @since 2025/6/30
+ */
+@Getter
+public class MysqlMigrationConfigDto extends AbstractMigrationConfigDto {
+ /**
+ * MySQL database configuration
+ */
+ private String mysqlDatabaseIp;
+ private String mysqlDatabasePort;
+ private String mysqlDatabaseName;
+ private String mysqlDatabaseUsername;
+ private String mysqlDatabasePassword;
+ private String mysqlDatabaseTables;
+
+ /**
+ * openGauss database configuration
+ */
+ private String opengaussDatabaseIp;
+ private String opengaussDatabasePort;
+ private String opengaussDatabaseName;
+ private String opengaussDatabaseUsername;
+ private String opengaussDatabasePassword;
+ private String opengaussDatabaseSchema;
+
+ /**
+ * openGauss database standby nodes configuration
+ */
+ private String opengaussDatabaseStandbyHosts;
+ private String opengaussDatabaseStandbyPorts;
+
+ /**
+ * data check process jvm configuration
+ */
+ private String fullCheckSourceProcessJvm;
+ private String fullCheckSinkProcessJvm;
+ private String fullCheckCheckProcessJvm;
+ private String incrementalCheckSourceProcessJvm;
+ private String incrementalCheckSinkProcessJvm;
+ private String incrementalCheckCheckProcessJvm;
+
+ /**
+ * incremental process jvm configuration
+ */
+ private String incrementalMigrationSourceProcessJvm;
+ private String incrementalMigrationSinkProcessJvm;
+
+ /**
+ * reverse process jvm configuration
+ */
+ private String reverseMigrationSourceProcessJvm;
+ private String reverseMigrationSinkProcessJvm;
+
+ private MysqlMigrationConfigDto() {
+ }
+
+ /**
+ * Generate mysql migration config dto
+ *
+ * @param configMap migration config map
+ * @return MysqlMigrationConfigDto
+ */
+    public static MysqlMigrationConfigDto generateMysqlMigrationConfigDto(Map<String, Object> configMap) {
+ if (configMap == null) {
+ throw new IllegalArgumentException(
+ "Config map that is used to generate MySQL migration config dto cannot be null");
+ }
+ MysqlMigrationConfigDto dto = new MysqlMigrationConfigDto();
+ dto.migrationMode = getConfigFromMap(MigrationConfig.MIGRATION_MODE, configMap);
+ dto.isMigrationObject = getConfigFromMap(MigrationConfig.IS_MIGRATION_OBJECT, configMap, "true");
+ dto.isAdjustKernelParam = getConfigFromMap(MigrationConfig.IS_ADJUST_KERNEL_PARAM, configMap, "false");
+
+ dto.mysqlDatabaseIp = getConfigFromMap(MigrationConfig.MYSQL_DATABASE_IP, configMap);
+ dto.mysqlDatabasePort = getConfigFromMap(MigrationConfig.MYSQL_DATABASE_PORT, configMap);
+ String mysqlDbName = getConfigFromMap(MigrationConfig.MYSQL_DATABASE_NAME, configMap);
+ dto.mysqlDatabaseName = mysqlDbName;
+ dto.mysqlDatabaseUsername = getConfigFromMap(MigrationConfig.MYSQL_DATABASE_USERNAME, configMap);
+ dto.mysqlDatabasePassword = getConfigFromMap(MigrationConfig.MYSQL_DATABASE_PASSWORD, configMap);
+ dto.mysqlDatabaseTables = getConfigFromMap(MigrationConfig.MYSQL_DATABASE_TABLES, configMap, "");
+
+ dto.opengaussDatabaseIp = getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_IP, configMap);
+ dto.opengaussDatabasePort = getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_PORT, configMap);
+ dto.opengaussDatabaseName = getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_NAME, configMap);
+ dto.opengaussDatabaseUsername = getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_USERNAME, configMap);
+ dto.opengaussDatabasePassword = getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_PASSWORD, configMap);
+ dto.opengaussDatabaseSchema =
+ getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_SCHEMA, configMap, mysqlDbName);
+
+ dto.opengaussDatabaseStandbyHosts =
+ getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_STANDBY_HOSTS, configMap, "");
+ dto.opengaussDatabaseStandbyPorts =
+ getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_STANDBY_PORTS, configMap, "");
+
+ dto.fullCheckSourceProcessJvm = getConfigFromMap(MigrationConfig.FULL_CHECK_SOURCE_PROCESS_JVM, configMap);
+ dto.fullCheckSinkProcessJvm = getConfigFromMap(MigrationConfig.FULL_CHECK_SINK_PROCESS_JVM, configMap);
+ dto.fullCheckCheckProcessJvm = getConfigFromMap(MigrationConfig.FULL_CHECK_CHECK_PROCESS_JVM, configMap);
+ dto.incrementalCheckSourceProcessJvm =
+ getConfigFromMap(MigrationConfig.INCREMENTAL_CHECK_SOURCE_PROCESS_JVM, configMap);
+ dto.incrementalCheckSinkProcessJvm =
+ getConfigFromMap(MigrationConfig.INCREMENTAL_CHECK_SINK_PROCESS_JVM, configMap);
+ dto.incrementalCheckCheckProcessJvm =
+ getConfigFromMap(MigrationConfig.INCREMENTAL_CHECK_CHECK_PROCESS_JVM, configMap);
+
+ dto.incrementalMigrationSourceProcessJvm =
+ getConfigFromMap(MigrationConfig.INCREMENTAL_MIGRATION_SOURCE_PROCESS_JVM, configMap);
+ dto.incrementalMigrationSinkProcessJvm =
+ getConfigFromMap(MigrationConfig.INCREMENTAL_MIGRATION_SINK_PROCESS_JVM, configMap);
+ dto.reverseMigrationSourceProcessJvm =
+ getConfigFromMap(MigrationConfig.REVERSE_MIGRATION_SOURCE_PROCESS_JVM, configMap);
+ dto.reverseMigrationSinkProcessJvm =
+ getConfigFromMap(MigrationConfig.REVERSE_MIGRATION_SINK_PROCESS_JVM, configMap);
+ return dto;
+ }
+
+ /**
+ * Check whether openGauss cluster is available
+ *
+ * @return true if openGauss cluster is available
+ */
+ public boolean isOpenGaussClusterAvailable() {
+ return isOpenGaussClusterAvailable(opengaussDatabaseStandbyHosts, opengaussDatabaseStandbyPorts);
+ }
+
+ /**
+ * Get mysql database connect info
+ *
+ * @return DatabaseConnectInfo mysql database connect info
+ */
+ public DatabaseConnectInfo getMysqlConnectInfo() {
+ return new DatabaseConnectInfo(mysqlDatabaseIp, mysqlDatabasePort, mysqlDatabaseName,
+ mysqlDatabaseUsername, mysqlDatabasePassword);
+ }
+
+ @Override
+ public OpenGaussDatabaseConnectInfo getOpenGaussConnectInfo() {
+ return new OpenGaussDatabaseConnectInfo(opengaussDatabaseIp, opengaussDatabasePort, opengaussDatabaseName,
+ opengaussDatabaseUsername, opengaussDatabasePassword, isOpenGaussClusterAvailable(),
+ opengaussDatabaseStandbyHosts, opengaussDatabaseStandbyPorts);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/dto/PgsqlMigrationConfigDto.java b/multidb-portal/src/main/java/org/opengauss/domain/dto/PgsqlMigrationConfigDto.java
new file mode 100644
index 0000000000000000000000000000000000000000..9e4286af8cce8f09e25a62a5d212027a546bc3a6
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/dto/PgsqlMigrationConfigDto.java
@@ -0,0 +1,143 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.dto;
+
+import lombok.Getter;
+import org.opengauss.constants.config.MigrationConfig;
+import org.opengauss.domain.model.DatabaseConnectInfo;
+import org.opengauss.domain.model.OpenGaussDatabaseConnectInfo;
+
+import java.util.Map;
+
+/**
+ * PostgreSQL migration configuration dto
+ *
+ * @since 2025/6/30
+ */
+@Getter
+public class PgsqlMigrationConfigDto extends AbstractMigrationConfigDto {
+ /**
+ * PostgreSQL database configuration
+ */
+ private String pgsqlDatabaseIp;
+ private String pgsqlDatabasePort;
+ private String pgsqlDatabaseName;
+ private String pgsqlDatabaseUsername;
+ private String pgsqlDatabasePassword;
+ private String pgsqlDatabaseSchemas;
+
+ /**
+ * openGauss database configuration
+ */
+ private String opengaussDatabaseIp;
+ private String opengaussDatabasePort;
+ private String opengaussDatabaseName;
+ private String opengaussDatabaseUsername;
+ private String opengaussDatabasePassword;
+
+ /**
+ * openGauss database standby nodes configuration
+ */
+ private String opengaussDatabaseStandbyHosts;
+ private String opengaussDatabaseStandbyPorts;
+
+ /**
+ * schema mapping configuration
+ */
+ private String schemaMappings;
+
+ /**
+ * full migration process jvm configuration
+ */
+ private String fullProcessJvm;
+
+ /**
+ * incremental process jvm configuration
+ */
+ private String incrementalMigrationSourceProcessJvm;
+ private String incrementalMigrationSinkProcessJvm;
+
+ /**
+ * reverse process jvm configuration
+ */
+ private String reverseMigrationSourceProcessJvm;
+ private String reverseMigrationSinkProcessJvm;
+
+ /**
+ * Generate pgsql migration config dto
+ *
+ * @param migrationConfigMap migration config map
+ * @return PgsqlMigrationConfigDto
+ */
+ public static PgsqlMigrationConfigDto generatePgsqlMigrationConfigDto(Map migrationConfigMap) {
+ if (migrationConfigMap == null) {
+ throw new IllegalArgumentException(
+ "Config map that is used to generate PostgreSQL migration config dto cannot be null");
+ }
+ PgsqlMigrationConfigDto dto = new PgsqlMigrationConfigDto();
+ dto.migrationMode = getConfigFromMap(MigrationConfig.MIGRATION_MODE, migrationConfigMap);
+ dto.isMigrationObject = getConfigFromMap(MigrationConfig.IS_MIGRATION_OBJECT, migrationConfigMap, "true");
+ dto.isAdjustKernelParam = getConfigFromMap(MigrationConfig.IS_ADJUST_KERNEL_PARAM, migrationConfigMap, "false");
+
+ dto.pgsqlDatabaseIp = getConfigFromMap(MigrationConfig.PGSQL_DATABASE_IP, migrationConfigMap);
+ dto.pgsqlDatabasePort = getConfigFromMap(MigrationConfig.PGSQL_DATABASE_PORT, migrationConfigMap);
+ dto.pgsqlDatabaseName = getConfigFromMap(MigrationConfig.PGSQL_DATABASE_NAME, migrationConfigMap);
+ dto.pgsqlDatabaseUsername = getConfigFromMap(MigrationConfig.PGSQL_DATABASE_USERNAME, migrationConfigMap);
+ dto.pgsqlDatabasePassword = getConfigFromMap(MigrationConfig.PGSQL_DATABASE_PASSWORD, migrationConfigMap);
+ dto.pgsqlDatabaseSchemas = getConfigFromMap(MigrationConfig.PGSQL_DATABASE_SCHEMAS, migrationConfigMap);
+
+ dto.opengaussDatabaseIp = getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_IP, migrationConfigMap);
+ dto.opengaussDatabasePort = getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_PORT, migrationConfigMap);
+ dto.opengaussDatabaseName = getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_NAME, migrationConfigMap);
+ dto.opengaussDatabaseUsername =
+ getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_USERNAME, migrationConfigMap);
+ dto.opengaussDatabasePassword =
+ getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_PASSWORD, migrationConfigMap);
+
+ dto.opengaussDatabaseStandbyHosts =
+ getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_STANDBY_HOSTS, migrationConfigMap, "");
+ dto.opengaussDatabaseStandbyPorts =
+ getConfigFromMap(MigrationConfig.OPENGAUSS_DATABASE_STANDBY_PORTS, migrationConfigMap, "");
+
+ dto.schemaMappings = getConfigFromMap(MigrationConfig.SCHEMA_MAPPINGS, migrationConfigMap, "");
+
+ dto.fullProcessJvm = getConfigFromMap(MigrationConfig.FULL_PROCESS_JVM, migrationConfigMap);
+ dto.incrementalMigrationSourceProcessJvm =
+ getConfigFromMap(MigrationConfig.INCREMENTAL_MIGRATION_SOURCE_PROCESS_JVM, migrationConfigMap);
+ dto.incrementalMigrationSinkProcessJvm =
+ getConfigFromMap(MigrationConfig.INCREMENTAL_MIGRATION_SINK_PROCESS_JVM, migrationConfigMap);
+ dto.reverseMigrationSourceProcessJvm =
+ getConfigFromMap(MigrationConfig.REVERSE_MIGRATION_SOURCE_PROCESS_JVM, migrationConfigMap);
+ dto.reverseMigrationSinkProcessJvm =
+ getConfigFromMap(MigrationConfig.REVERSE_MIGRATION_SINK_PROCESS_JVM, migrationConfigMap);
+ return dto;
+ }
+
+ /**
+ * Check whether openGauss cluster is available
+ *
+ * @return true if openGauss cluster is available
+ */
+ public boolean isOpenGaussClusterAvailable() {
+ return isOpenGaussClusterAvailable(opengaussDatabaseStandbyHosts, opengaussDatabaseStandbyPorts);
+ }
+
+ /**
+ * Get PostgreSQL database connect info
+ *
+ * @return DatabaseConnectInfo PostgreSQL database connect info
+ */
+ public DatabaseConnectInfo getPgsqlConnectInfo() {
+ return new DatabaseConnectInfo(pgsqlDatabaseIp, pgsqlDatabasePort, pgsqlDatabaseName, pgsqlDatabaseUsername,
+ pgsqlDatabasePassword);
+ }
+
+ @Override
+ public OpenGaussDatabaseConnectInfo getOpenGaussConnectInfo() {
+ return new OpenGaussDatabaseConnectInfo(opengaussDatabaseIp, opengaussDatabasePort, opengaussDatabaseName,
+ opengaussDatabaseUsername, opengaussDatabasePassword, isOpenGaussClusterAvailable(),
+ opengaussDatabaseStandbyHosts, opengaussDatabaseStandbyPorts);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/model/AbstractToolConfigBundle.java b/multidb-portal/src/main/java/org/opengauss/domain/model/AbstractToolConfigBundle.java
new file mode 100644
index 0000000000000000000000000000000000000000..50a6ff6ad205315d735993a09561179aaa22fa0d
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/model/AbstractToolConfigBundle.java
@@ -0,0 +1,27 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.model;
+
+/**
+ * Abstract tool config bundle
+ *
+ * @since 2025/7/2
+ */
+public abstract class AbstractToolConfigBundle {
+ /**
+ * load config map from config file
+ */
+ public abstract void loadConfigMap();
+
+ /**
+ * save config map to config file
+ */
+ public abstract void saveConfigMap();
+
+    /**
+     * generate config file when creating a task
+     */
+ public abstract void generateFile();
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/model/ChameleonConfigBundle.java b/multidb-portal/src/main/java/org/opengauss/domain/model/ChameleonConfigBundle.java
new file mode 100644
index 0000000000000000000000000000000000000000..c5df3a2ba4c86ebd513e5db0a426d8eab72e5ebe
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/model/ChameleonConfigBundle.java
@@ -0,0 +1,32 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.model;
+
+import lombok.Data;
+
+/**
+ * chameleon config file bundle
+ *
+ * @since 2025/7/2
+ */
+@Data
+public class ChameleonConfigBundle extends AbstractToolConfigBundle {
+ private ConfigFile configFile;
+
+ @Override
+ public void loadConfigMap() {
+ configFile.loadConfigMap();
+ }
+
+ @Override
+ public void saveConfigMap() {
+ configFile.saveConfigMap();
+ }
+
+ @Override
+ public void generateFile() {
+ configFile.generateFile();
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/model/ConfigFile.java b/multidb-portal/src/main/java/org/opengauss/domain/model/ConfigFile.java
new file mode 100644
index 0000000000000000000000000000000000000000..fd7480770947519364884131be645caaae4a8e9e
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/model/ConfigFile.java
@@ -0,0 +1,163 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.model;
+
+import lombok.Getter;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.enums.FileFormat;
+import org.opengauss.enums.TemplateConfigType;
+import org.opengauss.exceptions.ConfigException;
+import org.opengauss.utils.FileUtils;
+import org.opengauss.utils.PropertiesUtils;
+import org.opengauss.utils.YmlUtils;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * config file
+ *
+ * @since 2025/4/29
+ */
+@Getter
+public class ConfigFile {
+ private static final Logger LOGGER = LogManager.getLogger(ConfigFile.class);
+
+ private final String name;
+ private final String fileDirPath;
+ private final TaskWorkspace taskWorkspace;
+ private final TemplateConfigType templateConfigType;
+    private final Map<String, Object> configMap;
+    private final Set<String> deleteConfigKeySet;
+
+ public ConfigFile(String name, String fileDirPath, TaskWorkspace taskWorkspace,
+ TemplateConfigType templateConfigType) {
+ this.name = name;
+ this.fileDirPath = fileDirPath;
+ this.taskWorkspace = taskWorkspace;
+ this.templateConfigType = templateConfigType;
+ this.configMap = new HashMap<>();
+ this.deleteConfigKeySet = new HashSet<>();
+ }
+
+ /**
+ * get file path
+ *
+ * @return file path
+ */
+ public String getFilePath() {
+ return String.format("%s/%s", fileDirPath, name);
+ }
+
+ /**
+ * get config map
+ *
+ * @return config map
+ */
+    public Map<String, Object> getConfigMap() {
+        if (configMap.isEmpty() && !templateConfigType.getFileFormat().equals(FileFormat.XML)) {
+            throw new IllegalStateException("Config map has not been loaded yet. Please call loadConfigMap() first.");
+        }
+ }
+ return configMap;
+ }
+
+ /**
+ * load config map
+ */
+ public void loadConfigMap() {
+ try {
+ if (templateConfigType.getFileFormat().equals(FileFormat.PROPERTIES)) {
+ configMap.putAll(PropertiesUtils.readPropertiesAsMap(getFilePath()));
+ return;
+ }
+
+ if (templateConfigType.getFileFormat().equals(FileFormat.YML)) {
+ configMap.putAll(YmlUtils.loadYaml(getFilePath()));
+ return;
+ }
+
+ if (templateConfigType.getFileFormat().equals(FileFormat.XML)) {
+ return;
+ }
+ } catch (IOException e) {
+ throw new ConfigException("Failed to load config map from file: " + getFilePath(), e);
+ }
+        LOGGER.warn("Unsupported file format: {}, unable to load config map", templateConfigType.getFileFormat());
+ }
+
+ /**
+ * generate config file from template
+ */
+ public void generateFile() {
+ String configTemplatePath = templateConfigType.getFilePath();
+ boolean isInResources = templateConfigType.isInResources();
+ String configFilePath = getFilePath();
+ try {
+ if (isInResources) {
+ FileUtils.exportResource(configTemplatePath, configFilePath);
+ } else {
+ FileUtils.copyFile(configTemplatePath, configFilePath);
+ }
+ } catch (IOException e) {
+ throw new ConfigException("Failed to prepare migration config file: " + configFilePath, e);
+ }
+ }
+
+ /**
+ * save config map to file
+ */
+ public void saveConfigMap() {
+ changeConfig(configMap);
+ deleteConfigKeys();
+ }
+
+ /**
+ * change config file params in config map
+ *
+ * @param configMap config map
+ */
+    public void changeConfig(Map<String, Object> configMap) {
+        try {
+            if (templateConfigType.getFileFormat().equals(FileFormat.PROPERTIES)) {
+                HashMap<String, String> changeParams = new HashMap<>();
+                for (Map.Entry<String, Object> entry : configMap.entrySet()) {
+                    changeParams.put(entry.getKey(), String.valueOf(entry.getValue()));
+                }
+                PropertiesUtils.updateProperties(getFilePath(), changeParams);
+                return;
+            }
+            if (templateConfigType.getFileFormat().equals(FileFormat.YML)) {
+                YmlUtils.updateYaml(getFilePath(), configMap);
+                return;
+            }
+            if (templateConfigType.getFileFormat().equals(FileFormat.XML)) {
+                for (Map.Entry<String, Object> entry : configMap.entrySet()) {
+                    FileUtils.replaceInFile(getFilePath(), entry.getKey(), String.valueOf(entry.getValue()));
+                }
+                return;
+            }
+ } catch (IOException e) {
+ throw new ConfigException("Failed to save config map to file: " + getFilePath(), e);
+ }
+        LOGGER.warn("Unsupported file format: {}, unable to save config map", templateConfigType.getFileFormat());
+ }
+
+ /**
+ * delete config keys in config file
+ */
+ public void deleteConfigKeys() {
+ try {
+ if (templateConfigType.getFileFormat().equals(FileFormat.PROPERTIES)) {
+ PropertiesUtils.commentProperties(getFilePath(), deleteConfigKeySet);
+ }
+ } catch (IOException e) {
+ throw new ConfigException("Failed to comment keys from file: " + getFilePath(), e);
+ }
+ }
+}
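`ConfigFile.changeConfig()` dispatches on the template's `FileFormat`: properties keys are stringified and merged via `PropertiesUtils.updateProperties()`, YAML maps go to `YmlUtils.updateYaml()`, and XML entries are applied as plain text replacements. A rough, self-contained sketch of the properties branch, using `java.util.Properties` in place of the portal's `PropertiesUtils` (whose implementation is not part of this diff):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.Properties;

// Hypothetical stand-in for PropertiesUtils.updateProperties(): load the existing
// file, overwrite only the given keys, and write the merged result back.
public class PropertiesUpdateSketch {
    public static void updateProperties(Path file, Map<String, String> changes) throws IOException {
        Properties props = new Properties();
        if (Files.exists(file)) {
            try (Reader reader = Files.newBufferedReader(file)) {
                props.load(reader);
            }
        }
        props.putAll(changes);
        try (Writer writer = Files.newBufferedWriter(file)) {
            props.store(writer, null);
        }
    }

    // Helper used below to read a single key back from the file
    public static String readProperty(Path file, String key) throws IOException {
        Properties props = new Properties();
        try (Reader reader = Files.newBufferedReader(file)) {
            props.load(reader);
        }
        return props.getProperty(key);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("migration", ".properties");
        updateProperties(tmp, Map.of("opengauss.database.port", "5432"));
        System.out.println(readProperty(tmp, "opengauss.database.port"));
    }
}
```

Note that `Properties.store()` does not preserve comments or key order, which is presumably why the portal keeps its own `PropertiesUtils` with in-place updates and key commenting.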
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/model/DataCheckerConfigBundle.java b/multidb-portal/src/main/java/org/opengauss/domain/model/DataCheckerConfigBundle.java
new file mode 100644
index 0000000000000000000000000000000000000000..2c0040102a40ae23113c781b0d36b05eabce6228
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/model/DataCheckerConfigBundle.java
@@ -0,0 +1,43 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.model;
+
+import lombok.Data;
+
+/**
+ * data-checker config file bundle
+ *
+ * @since 2025/7/2
+ */
+@Data
+public class DataCheckerConfigBundle extends AbstractToolConfigBundle {
+ private ConfigFile checkConfigFile;
+ private ConfigFile sinkConfigFile;
+ private ConfigFile sourceConfigFile;
+ private ConfigFile log4j2ConfigFile;
+
+ @Override
+ public void loadConfigMap() {
+ checkConfigFile.loadConfigMap();
+ sinkConfigFile.loadConfigMap();
+ sourceConfigFile.loadConfigMap();
+ }
+
+ @Override
+ public void saveConfigMap() {
+ checkConfigFile.saveConfigMap();
+ sinkConfigFile.saveConfigMap();
+ sourceConfigFile.saveConfigMap();
+ log4j2ConfigFile.saveConfigMap();
+ }
+
+ @Override
+ public void generateFile() {
+ checkConfigFile.generateFile();
+ sinkConfigFile.generateFile();
+ sourceConfigFile.generateFile();
+ log4j2ConfigFile.generateFile();
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/model/DatabaseConnectInfo.java b/multidb-portal/src/main/java/org/opengauss/domain/model/DatabaseConnectInfo.java
new file mode 100644
index 0000000000000000000000000000000000000000..07e24d05cc41c9048e002b7c98bc7041b1e2808e
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/model/DatabaseConnectInfo.java
@@ -0,0 +1,44 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.model;
+
+import lombok.AllArgsConstructor;
+import lombok.Data;
+import lombok.NoArgsConstructor;
+
+/**
+ * Database server connection information
+ *
+ * @since 2025/7/1
+ */
+@Data
+@NoArgsConstructor
+@AllArgsConstructor
+public class DatabaseConnectInfo {
+ /**
+ * Database server ip
+ */
+ protected String ip;
+
+ /**
+ * Database server port
+ */
+ protected String port;
+
+ /**
+ * Database name
+ */
+ protected String databaseName;
+
+ /**
+ * Database connect username
+ */
+ protected String username;
+
+ /**
+ * Database connect user password
+ */
+ protected String password;
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/model/DebeziumConfigBundle.java b/multidb-portal/src/main/java/org/opengauss/domain/model/DebeziumConfigBundle.java
new file mode 100644
index 0000000000000000000000000000000000000000..99a2c1f0c8f08a2f9bcd2ead855fd50f4e6b509f
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/model/DebeziumConfigBundle.java
@@ -0,0 +1,52 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.model;
+
+import lombok.Data;
+
+/**
+ * debezium config file bundle
+ *
+ * @since 2025/7/2
+ */
+@Data
+public class DebeziumConfigBundle extends AbstractToolConfigBundle {
+ private ConfigFile connectSinkConfigFile;
+ private ConfigFile connectSourceConfigFile;
+ private ConfigFile workerSinkConfigFile;
+ private ConfigFile workerSourceConfigFile;
+ private ConfigFile log4jSinkConfigFile;
+ private ConfigFile log4jSourceConfigFile;
+
+ @Override
+ public void loadConfigMap() {
+ connectSinkConfigFile.loadConfigMap();
+ connectSourceConfigFile.loadConfigMap();
+ workerSinkConfigFile.loadConfigMap();
+ workerSourceConfigFile.loadConfigMap();
+ log4jSinkConfigFile.loadConfigMap();
+ log4jSourceConfigFile.loadConfigMap();
+ }
+
+ @Override
+ public void saveConfigMap() {
+ connectSinkConfigFile.saveConfigMap();
+ connectSourceConfigFile.saveConfigMap();
+ workerSinkConfigFile.saveConfigMap();
+ workerSourceConfigFile.saveConfigMap();
+ log4jSinkConfigFile.saveConfigMap();
+ log4jSourceConfigFile.saveConfigMap();
+ }
+
+ @Override
+ public void generateFile() {
+ connectSinkConfigFile.generateFile();
+ connectSourceConfigFile.generateFile();
+ workerSinkConfigFile.generateFile();
+ workerSourceConfigFile.generateFile();
+ log4jSinkConfigFile.generateFile();
+ log4jSourceConfigFile.generateFile();
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/model/FullMigrationToolConfigBundle.java b/multidb-portal/src/main/java/org/opengauss/domain/model/FullMigrationToolConfigBundle.java
new file mode 100644
index 0000000000000000000000000000000000000000..f93550ddefd03d64f3ea58f6ce0257f8194686b9
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/model/FullMigrationToolConfigBundle.java
@@ -0,0 +1,32 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.model;
+
+import lombok.Data;
+
+/**
+ * full migration tool config file bundle
+ *
+ * @since 2025/7/2
+ */
+@Data
+public class FullMigrationToolConfigBundle extends AbstractToolConfigBundle {
+ private ConfigFile configFile;
+
+ @Override
+ public void loadConfigMap() {
+ configFile.loadConfigMap();
+ }
+
+ @Override
+ public void saveConfigMap() {
+ configFile.saveConfigMap();
+ }
+
+ @Override
+ public void generateFile() {
+ configFile.generateFile();
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/model/MigrationStopIndicator.java b/multidb-portal/src/main/java/org/opengauss/domain/model/MigrationStopIndicator.java
new file mode 100644
index 0000000000000000000000000000000000000000..f09514666ebf87f494aecf5219f9a8d59804ddeb
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/model/MigrationStopIndicator.java
@@ -0,0 +1,34 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.model;
+
+/**
+ * Indicator used to signal that a migration task should stop
+ *
+ * @since 2025/3/1
+ */
+public class MigrationStopIndicator {
+ private volatile boolean isStop;
+
+ public MigrationStopIndicator() {
+ isStop = false;
+ }
+
+    /**
+     * Check whether the migration has been stopped
+     *
+     * @return true if the migration has been stopped
+     */
+ public boolean isStopped() {
+ return isStop;
+ }
+
+ /**
+ * set stop
+ */
+ public void setStop() {
+ isStop = true;
+ }
+}
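`MigrationStopIndicator` is a one-way flag: because the field is `volatile`, once any thread calls `setStop()`, every other thread's next `isStopped()` check observes `true` without locking. A minimal sketch of the same pattern, with a worker thread that spins until the flag flips (class and variable names here are illustrative, not the portal's):

```java
// Minimal sketch of the volatile stop-flag pattern: a write by one thread is
// guaranteed to become visible to readers on other threads, with no locking.
public class StopFlagSketch {
    private volatile boolean isStop = false;

    public boolean isStopped() {
        return isStop;
    }

    public void setStop() {
        isStop = true;
    }

    public static void main(String[] args) throws InterruptedException {
        StopFlagSketch indicator = new StopFlagSketch();
        Thread worker = new Thread(() -> {
            // spin until the flag becomes visible; volatile guarantees it will
            while (!indicator.isStopped()) {
                Thread.onSpinWait();
            }
        });
        worker.start();
        indicator.setStop();
        worker.join();
        System.out.println("worker stopped");
    }
}
```

Without `volatile`, the worker's loop could legally cache the field and never observe the stop request.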
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/model/OpenGaussDatabaseConnectInfo.java b/multidb-portal/src/main/java/org/opengauss/domain/model/OpenGaussDatabaseConnectInfo.java
new file mode 100644
index 0000000000000000000000000000000000000000..409598a458b1dcc4910beb2717ab582ccc2b593d
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/model/OpenGaussDatabaseConnectInfo.java
@@ -0,0 +1,29 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.model;
+
+import lombok.Data;
+import lombok.EqualsAndHashCode;
+
+/**
+ * openGauss database connect information
+ *
+ * @since 2025/7/1
+ */
+@Data
+@EqualsAndHashCode(callSuper = true)
+public class OpenGaussDatabaseConnectInfo extends DatabaseConnectInfo {
+ private boolean isClusterAvailable;
+ private String standbyHosts;
+ private String standbyPorts;
+
+ public OpenGaussDatabaseConnectInfo(String ip, String port, String databaseName, String username, String password,
+ boolean isClusterAvailable, String standbyHosts, String standbyPorts) {
+ super(ip, port, databaseName, username, password);
+ this.isClusterAvailable = isClusterAvailable;
+ this.standbyHosts = standbyHosts;
+ this.standbyPorts = standbyPorts;
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/model/TaskWorkspace.java b/multidb-portal/src/main/java/org/opengauss/domain/model/TaskWorkspace.java
new file mode 100644
index 0000000000000000000000000000000000000000..f310756116b38fbd10748168ab65ccc9ed8b473b
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/model/TaskWorkspace.java
@@ -0,0 +1,113 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.model;
+
+import lombok.Getter;
+import org.opengauss.config.ApplicationConfig;
+import org.opengauss.constants.TaskConstants;
+import org.opengauss.exceptions.TaskException;
+import org.opengauss.utils.FileUtils;
+
+import java.io.IOException;
+
+/**
+ * workspace
+ *
+ * @since 2025/2/27
+ */
+@Getter
+public class TaskWorkspace {
+ private final String id;
+ private final String homeDir;
+
+ private final String configDirPath;
+ private final String logsDirPath;
+ private final String statusDirPath;
+ private final String tmpDirPath;
+
+ private final String configFullDirPath;
+ private final String configFullDataCheckDirPath;
+ private final String configIncrementalDirPath;
+ private final String configIncrementalDataCheckDirPath;
+ private final String configReverseDirPath;
+
+ private final String logsFullDirPath;
+ private final String logsFullDataCheckDirPath;
+ private final String logsIncrementalDirPath;
+ private final String logsIncrementalDataCheckDirPath;
+ private final String logsReverseDirPath;
+
+ private final String statusFullDirPath;
+ private final String statusFullDataCheckDirPath;
+ private final String statusIncrementalDirPath;
+ private final String statusIncrementalDataCheckDirPath;
+ private final String statusReverseDirPath;
+
+ private final String quarkusPortFilePath;
+ private final String sourceDbTypeFilePath;
+
+ public TaskWorkspace(String taskId) {
+ String portalWorkspaceDirPath = ApplicationConfig.getInstance().getPortalWorkspaceDirPath();
+ id = taskId;
+ homeDir = String.format("%s/%s%s", portalWorkspaceDirPath, TaskConstants.TASK_WORKSPACE_DIR_SUFFIX, taskId);
+
+ configDirPath = String.format("%s/config", homeDir);
+ logsDirPath = String.format("%s/logs", homeDir);
+ statusDirPath = String.format("%s/status", homeDir);
+ tmpDirPath = String.format("%s/tmp", homeDir);
+
+ configFullDirPath = String.format("%s/full", configDirPath);
+ configFullDataCheckDirPath = String.format("%s/data-check/full", configDirPath);
+ configIncrementalDirPath = String.format("%s/incremental", configDirPath);
+ configIncrementalDataCheckDirPath = String.format("%s/data-check/incremental", configDirPath);
+ configReverseDirPath = String.format("%s/reverse", configDirPath);
+
+ logsFullDirPath = String.format("%s/full", logsDirPath);
+ logsFullDataCheckDirPath = String.format("%s/data-check/full", logsDirPath);
+ logsIncrementalDirPath = String.format("%s/incremental", logsDirPath);
+ logsIncrementalDataCheckDirPath = String.format("%s/data-check/incremental", logsDirPath);
+ logsReverseDirPath = String.format("%s/reverse", logsDirPath);
+
+ statusFullDirPath = String.format("%s/full", statusDirPath);
+ statusFullDataCheckDirPath = String.format("%s/data-check/full", statusDirPath);
+ statusIncrementalDirPath = String.format("%s/incremental", statusDirPath);
+ statusIncrementalDataCheckDirPath = String.format("%s/data-check/incremental", statusDirPath);
+ statusReverseDirPath = String.format("%s/reverse", statusDirPath);
+
+ sourceDbTypeFilePath = String.format("%s/%s", configDirPath, TaskConstants.SOURCE_DB_TYPE_CONFIG_FILE_NAME);
+ quarkusPortFilePath = String.format("%s/%s", configDirPath, TaskConstants.QUARKUS_PORT_FILE_NAME);
+ }
+
+ /**
+ * create task workspace directory structure
+ */
+ public void create() {
+ try {
+ FileUtils.createDirectories(homeDir, configDirPath, logsDirPath, statusDirPath, tmpDirPath,
+ configFullDirPath, configFullDataCheckDirPath, configIncrementalDirPath,
+ configIncrementalDataCheckDirPath, configReverseDirPath,
+ logsFullDirPath, logsFullDataCheckDirPath, logsIncrementalDirPath,
+ logsIncrementalDataCheckDirPath, logsReverseDirPath,
+ statusFullDirPath, statusFullDataCheckDirPath, statusIncrementalDirPath,
+ statusIncrementalDataCheckDirPath, statusReverseDirPath);
+
+ FileUtils.createFile(sourceDbTypeFilePath);
+ FileUtils.createFile(quarkusPortFilePath);
+ } catch (IOException e) {
+ throw new TaskException("Failed to create workspace directories", e);
+ }
+ }
+
+ /**
+ * delete task workspace directory
+ */
+ public void delete() {
+ try {
+ FileUtils.deletePath(homeDir);
+ } catch (IOException e) {
+ throw new TaskException("Failed to clean up task workspace directory: " + homeDir, e);
+ }
+ }
+}
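`TaskWorkspace` derives roughly twenty fixed paths from one home directory and creates them all in `create()`. The resulting tree can be sketched with `java.nio.file` as follows (directory names follow the constructor above; the `task_` prefix stands in for `TaskConstants.TASK_WORKSPACE_DIR_SUFFIX`, whose value is not shown in this diff):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the per-task directory tree built by TaskWorkspace.create():
// config/, logs/ and status/ each contain full, incremental, reverse and
// data-check subdirectories, plus a shared tmp/ directory.
public class WorkspaceLayoutSketch {
    public static Path create(Path workspaceRoot, String taskId) throws IOException {
        Path home = workspaceRoot.resolve("task_" + taskId);
        String[] phases = {"full", "data-check/full", "incremental", "data-check/incremental", "reverse"};
        for (String top : new String[] {"config", "logs", "status"}) {
            for (String phase : phases) {
                // createDirectories also creates missing parents and is a
                // no-op for directories that already exist
                Files.createDirectories(home.resolve(top).resolve(phase));
            }
        }
        Files.createDirectories(home.resolve("tmp"));
        return home;
    }

    public static void main(String[] args) throws IOException {
        Path home = create(Files.createTempDirectory("portal-workspace"), "task1");
        System.out.println(Files.isDirectory(home.resolve("status").resolve("data-check").resolve("incremental")));
    }
}
```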
diff --git a/multidb-portal/src/main/java/org/opengauss/domain/vo/TaskListVo.java b/multidb-portal/src/main/java/org/opengauss/domain/vo/TaskListVo.java
new file mode 100644
index 0000000000000000000000000000000000000000..41061dd51cc56abe6c88de152c26c10945cb1437
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/domain/vo/TaskListVo.java
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.domain.vo;
+
+import lombok.Data;
+import lombok.NoArgsConstructor;
+
+/**
+ * task list vo
+ *
+ * @since 2025/4/24
+ */
+@Data
+@NoArgsConstructor
+public class TaskListVo {
+ private String taskId;
+ private String sourceDbType;
+
+ /**
+ * task is running or not
+ */
+ private boolean isRunning;
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/enums/DataCheckerProcessType.java b/multidb-portal/src/main/java/org/opengauss/enums/DataCheckerProcessType.java
new file mode 100644
index 0000000000000000000000000000000000000000..0eb67534b4c26e5614d54c2c7b6ffbe516cff783
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/enums/DataCheckerProcessType.java
@@ -0,0 +1,26 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.enums;
+
+import lombok.Getter;
+
+/**
+ * data checker process type
+ *
+ * @since 2025/5/14
+ */
+@Getter
+public enum DataCheckerProcessType {
+ SINK("sink"),
+ SOURCE("source"),
+ CHECK("check")
+ ;
+
+ DataCheckerProcessType(String type) {
+ this.type = type;
+ }
+
+ private final String type;
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/enums/DatabaseType.java b/multidb-portal/src/main/java/org/opengauss/enums/DatabaseType.java
new file mode 100644
index 0000000000000000000000000000000000000000..2b6a0fb7c8abe03cdd6136d79a3b668367483f03
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/enums/DatabaseType.java
@@ -0,0 +1,26 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.enums;
+
+import lombok.Getter;
+
+/**
+ * database type
+ *
+ * @since 2025/2/27
+ */
+@Getter
+public enum DatabaseType {
+ MYSQL("MySQL"),
+ OPENGAUSS("openGauss"),
+ POSTGRESQL("PostgreSQL"),
+ ;
+
+ DatabaseType(String standardName) {
+ this.standardName = standardName;
+ }
+
+ private final String standardName;
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/enums/DebeziumProcessType.java b/multidb-portal/src/main/java/org/opengauss/enums/DebeziumProcessType.java
new file mode 100644
index 0000000000000000000000000000000000000000..d6de3fa067075a9433e98b12019527bd922d511f
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/enums/DebeziumProcessType.java
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.enums;
+
+import lombok.Getter;
+
+/**
+ * debezium process type
+ *
+ * @since 2025/5/19
+ */
+@Getter
+public enum DebeziumProcessType {
+ SINK("sink"),
+ SOURCE("source"),
+ ;
+
+ DebeziumProcessType(String type) {
+ this.type = type;
+ }
+
+ private final String type;
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/enums/FileFormat.java b/multidb-portal/src/main/java/org/opengauss/enums/FileFormat.java
new file mode 100644
index 0000000000000000000000000000000000000000..b036cfdcaa65a8f6342398d0214b56abfe4d7f93
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/enums/FileFormat.java
@@ -0,0 +1,16 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.enums;
+
+/**
+ * file format
+ *
+ * @since 2025/2/27
+ */
+public enum FileFormat {
+ YML,
+ PROPERTIES,
+ XML
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/enums/MigrationPhase.java b/multidb-portal/src/main/java/org/opengauss/enums/MigrationPhase.java
new file mode 100644
index 0000000000000000000000000000000000000000..a2b91012b962bd6621c807be669e98299cf3aeba
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/enums/MigrationPhase.java
@@ -0,0 +1,27 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.enums;
+
+import lombok.AllArgsConstructor;
+import lombok.Getter;
+
+/**
+ * migration phase
+ *
+ * @since 2025/2/27
+ */
+@Getter
+@AllArgsConstructor
+public enum MigrationPhase {
+ FULL_MIGRATION("full_migration", "full migration phase"),
+ FULL_DATA_CHECK("full_data_check", "full data check phase"),
+ INCREMENTAL_MIGRATION("incremental_migration", "incremental migration phase"),
+ INCREMENTAL_DATA_CHECK("incremental_data_check", "incremental data check phase"),
+ REVERSE_MIGRATION("reverse_migration", "reverse migration phase")
+ ;
+
+ private final String phaseName;
+ private final String phaseDesc;
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/enums/MigrationStatusEnum.java b/multidb-portal/src/main/java/org/opengauss/enums/MigrationStatusEnum.java
new file mode 100644
index 0000000000000000000000000000000000000000..39f791164d3ff1b7a273087410319be8e428b547
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/enums/MigrationStatusEnum.java
@@ -0,0 +1,50 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.enums;
+
+import lombok.Getter;
+
+/**
+ * Migration status enum
+ *
+ * @since 2025/3/1
+ */
+@Getter
+public enum MigrationStatusEnum {
+ NOT_START(0, "Migration not started"),
+
+ START_FULL_MIGRATION(100, "Full migration started"),
+ FULL_MIGRATION_RUNNING(101, "Full migration running"),
+ FULL_MIGRATION_FINISHED(102, "Full migration finished"),
+
+ START_FULL_DATA_CHECK(200, "Full data check started"),
+ FULL_DATA_CHECK_RUNNING(201, "Full data check running"),
+ FULL_DATA_CHECK_FINISHED(202, "Full data check finished"),
+
+ START_INCREMENTAL_MIGRATION(300, "Incremental migration started"),
+ INCREMENTAL_MIGRATION_RUNNING(301, "Incremental migration running"),
+ INCREMENTAL_MIGRATION_FINISHED(302, "Incremental migration finished"),
+
+ START_REVERSE_MIGRATION(401, "Reverse migration started"),
+ REVERSE_MIGRATION_RUNNING(402, "Reverse migration running"),
+ REVERSE_MIGRATION_FINISHED(403, "Reverse migration finished"),
+
+ MIGRATION_FINISHED(600, "Migration finished"),
+ PRE_MIGRATION_VERIFY_FAILED(601, "Pre migration verify failed"),
+ PRE_REVERSE_PHASE_VERIFY_FAILED(602, "Pre reverse phase verify failed"),
+ MIGRATION_FAILED(500, "Migration failed"),
+
+ INCREMENTAL_MIGRATION_INTERRUPTED(501, "Incremental migration interrupted"),
+ REVERSE_MIGRATION_INTERRUPTED(502, "Reverse migration interrupted"),
+ ;
+
+ MigrationStatusEnum(int status, String description) {
+ this.status = status;
+ this.description = description;
+ }
+
+ private final int status;
+ private final String description;
+}
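The status codes are grouped by hundreds (1xx full migration, 2xx full data check, 3xx incremental, 4xx reverse, 5xx failures and interruptions, 6xx terminal states). Nothing in this diff maps a numeric code back to its constant; if that were needed, a reverse lookup could be sketched like this (a hypothetical helper over a reduced set of constants, not part of the portal):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical code-to-constant lookup, mirroring MigrationStatusEnum's
// (code, description) shape on a small subset of its constants.
public class StatusLookupSketch {
    public enum Status {
        NOT_START(0, "Migration not started"),
        FULL_MIGRATION_RUNNING(101, "Full migration running"),
        MIGRATION_FINISHED(600, "Migration finished");

        private final int status;
        private final String description;

        Status(int status, String description) {
            this.status = status;
            this.description = description;
        }

        public int getStatus() {
            return status;
        }

        public String getDescription() {
            return description;
        }
    }

    private static final Map<Integer, Status> BY_CODE = new HashMap<>();

    static {
        for (Status s : Status.values()) {
            BY_CODE.put(s.getStatus(), s);
        }
    }

    // Returns null for unknown codes; callers decide how to handle that
    public static Status fromCode(int code) {
        return BY_CODE.get(code);
    }

    public static void main(String[] args) {
        System.out.println(fromCode(101));
    }
}
```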
diff --git a/multidb-portal/src/main/java/org/opengauss/enums/TemplateConfigType.java b/multidb-portal/src/main/java/org/opengauss/enums/TemplateConfigType.java
new file mode 100644
index 0000000000000000000000000000000000000000..bb0330750026f5f483f1ecda11eedb376a2061cd
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/enums/TemplateConfigType.java
@@ -0,0 +1,118 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.enums;
+
+import lombok.Getter;
+import org.opengauss.config.ApplicationConfig;
+
+/**
+ * template config type
+ *
+ * @since 2025/4/29
+ */
+@Getter
+public enum TemplateConfigType {
+ /**
+ * migration config template files
+ */
+ MYSQL_MIGRATION_CONFIG("mysql-migration.properties", FileFormat.PROPERTIES, true, "config",
+ "the migration config file with MySQL source database", "mysql-migration-desc.properties"),
+ PGSQL_MIGRATION_CONFIG("pgsql-migration.properties", FileFormat.PROPERTIES, true, "config",
+ "the migration config file with PostgreSQL source database", "pgsql-migration-desc.properties"),
+
+ /**
+ * chameleon config template file
+ */
+ CHAMELEON_CONFIG("config-example.yml", FileFormat.YML, false, "config/chameleon",
+ "the chameleon config file", null),
+
+ /**
+ * full migration tool config template file
+ */
+ FULL_MIGRATION_TOOL_CONFIG("config.yml", FileFormat.YML, false, "config/full-migration",
+ "the full migration tool config file", null),
+
+ /**
+ * datachecker config template files
+ */
+ DATACHECKER_SINK_CONFIG("application-sink.yml", FileFormat.YML, false, "config/datachecker",
+ "the datachecker sink process config file", null),
+ DATACHECKER_SOURCE_CONFIG("application-source.yml", FileFormat.YML, false, "config/datachecker",
+ "the datachecker source process config file", null),
+ DATACHECKER_CHECK_CONFIG("application.yml", FileFormat.YML, false, "config/datachecker",
+ "the datachecker check process config file", null),
+ DATACHECKER_LOG4J2_CONFIG("log4j2.xml", FileFormat.XML, false, "config/datachecker",
+ "the datachecker log4j2 config file", null),
+
+ /**
+ * debezium config template files
+ */
+ DEBEZIUM_CONNECT_AVRO_STANDALONE_CONFIG("connect-avro-standalone.properties", FileFormat.PROPERTIES, false,
+ "config/debezium", "the debezium connect standalone config file", null),
+ DEBEZIUM_CONNECT_LOG4J2_CONFIG("connect-log4j.properties", FileFormat.PROPERTIES, false, "config/debezium",
+ "the debezium connect log4j config file", null),
+ DEBEZIUM_CONNECT_MYSQL_SINK_CONFIG("mysql-sink.properties", FileFormat.PROPERTIES, false, "config/debezium",
+ "the debezium connect MySQL sink process config file", null),
+ DEBEZIUM_CONNECT_MYSQL_SOURCE_CONFIG("mysql-source.properties", FileFormat.PROPERTIES, false, "config/debezium",
+ "the debezium connect MySQL source process config file", null),
+ DEBEZIUM_CONNECT_OPENGAUSS_SINK_CONFIG("opengauss-sink.properties", FileFormat.PROPERTIES, false, "config/debezium",
+ "the debezium connect openGauss sink process config file", null),
+ DEBEZIUM_CONNECT_OPENGAUSS_SOURCE_CONFIG("opengauss-source.properties", FileFormat.PROPERTIES, false,
+ "config/debezium", "the debezium connect openGauss source process config file", null),
+ DEBEZIUM_CONNECT_PGSQL_SINK_CONFIG("postgres-sink.properties", FileFormat.PROPERTIES, false, "config/debezium",
+ "the debezium connect PostgreSQL sink process config file", null),
+ DEBEZIUM_CONNECT_PGSQL_SOURCE_CONFIG("postgres-source.properties", FileFormat.PROPERTIES, false, "config/debezium",
+ "the debezium connect PostgreSQL source process config file", null),
+ ;
+
+ TemplateConfigType(String name, FileFormat fileFormat, boolean isInResources, String filePath, String description,
+ String configDescFileName) {
+ this.name = name;
+ this.fileFormat = fileFormat;
+ this.isInResources = isInResources;
+ this.filePath = filePath;
+ this.description = description;
+ this.configDescFileName = configDescFileName;
+ }
+
+ private final String name;
+ private final FileFormat fileFormat;
+ private final boolean isInResources;
+ private final String filePath;
+ private final String description;
+ private final String configDescFileName;
+
+ /**
+ * get template config file path
+ *
+ * @return String file path
+ */
+ public String getFilePath() {
+ if (isInResources) {
+ return String.format("%s/%s", filePath, name);
+ }
+
+ String templateDirPath = ApplicationConfig.getInstance().getPortalTemplateDirPath();
+ return String.format("%s/%s/%s", templateDirPath, filePath, name);
+ }
+
+ /**
+ * get template config description file path
+ *
+ * @return String file path
+ */
+ public String getConfigDescFilePath() {
+ if (configDescFileName == null) {
+ throw new UnsupportedOperationException("Config file " + name + " does not have a config description file");
+ }
+
+ if (isInResources) {
+ return String.format("%s/%s", filePath, configDescFileName);
+ }
+
+ String templateDirPath = ApplicationConfig.getInstance().getPortalTemplateDirPath();
+ return String.format("%s/%s/%s", templateDirPath, filePath, configDescFileName);
+ }
+}
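`TemplateConfigType` resolves paths two ways: templates bundled in resources are addressed by a classpath-relative path, external templates live under the portal template directory. A standalone sketch of just that rule; the directory value and the public `resolve` helper are illustrative, not part of the portal API:

```java
public class TemplatePathSketch {
    // Mirrors getFilePath(): resource-bundled templates use a relative path,
    // external templates are rooted at the configured template directory.
    public static String resolve(boolean isInResources, String templateDirPath, String filePath, String name) {
        if (isInResources) {
            return String.format("%s/%s", filePath, name);
        }
        return String.format("%s/%s/%s", templateDirPath, filePath, name);
    }

    public static void main(String[] args) {
        // Resource-bundled template (e.g. MYSQL_MIGRATION_CONFIG)
        System.out.println(resolve(true, "/opt/portal/template", "config", "mysql-migration.properties"));
        // External template (e.g. CHAMELEON_CONFIG)
        System.out.println(resolve(false, "/opt/portal/template", "config/chameleon", "config-example.yml"));
    }
}
```

The first call prints `config/mysql-migration.properties`, the second `/opt/portal/template/config/chameleon/config-example.yml`.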
diff --git a/multidb-portal/src/main/java/org/opengauss/exceptions/ConfigException.java b/multidb-portal/src/main/java/org/opengauss/exceptions/ConfigException.java
new file mode 100644
index 0000000000000000000000000000000000000000..02ea347cfeb920a8748bbcd06ce8ea91b6895953
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/exceptions/ConfigException.java
@@ -0,0 +1,24 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.exceptions;
+
+/**
+ * config exception
+ *
+ * @since 2025/2/27
+ */
+public class ConfigException extends RuntimeException {
+ public ConfigException(String msg) {
+ super(msg);
+ }
+
+ public ConfigException(Throwable throwable) {
+ super(throwable);
+ }
+
+ public ConfigException(String msg, Throwable throwable) {
+ super(msg, throwable);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/exceptions/InstallException.java b/multidb-portal/src/main/java/org/opengauss/exceptions/InstallException.java
new file mode 100644
index 0000000000000000000000000000000000000000..d933d03fb8764288ebe62404a29cfde23ef59f91
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/exceptions/InstallException.java
@@ -0,0 +1,24 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.exceptions;
+
+/**
+ * install exception
+ *
+ * @since 2025/4/15
+ */
+public class InstallException extends RuntimeException {
+ public InstallException(String msg) {
+ super(msg);
+ }
+
+ public InstallException(Throwable throwable) {
+ super(throwable);
+ }
+
+ public InstallException(String msg, Throwable throwable) {
+ super(msg, throwable);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/exceptions/KafkaException.java b/multidb-portal/src/main/java/org/opengauss/exceptions/KafkaException.java
new file mode 100644
index 0000000000000000000000000000000000000000..0f84d2d4b65d5f30ddd80f37d60db170428486f6
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/exceptions/KafkaException.java
@@ -0,0 +1,24 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.exceptions;
+
+/**
+ * kafka exception
+ *
+ * @since 2025/4/18
+ */
+public class KafkaException extends RuntimeException {
+ public KafkaException(String msg) {
+ super(msg);
+ }
+
+ public KafkaException(Throwable throwable) {
+ super(throwable);
+ }
+
+ public KafkaException(String msg, Throwable throwable) {
+ super(msg, throwable);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/exceptions/MigrationException.java b/multidb-portal/src/main/java/org/opengauss/exceptions/MigrationException.java
new file mode 100644
index 0000000000000000000000000000000000000000..a5c32b21b4a0d6495b2a42763cf9d78fbd69a677
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/exceptions/MigrationException.java
@@ -0,0 +1,24 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.exceptions;
+
+/**
+ * migration exception
+ *
+ * @since 2025/4/30
+ */
+public class MigrationException extends RuntimeException {
+ public MigrationException(String msg) {
+ super(msg);
+ }
+
+ public MigrationException(Throwable throwable) {
+ super(throwable);
+ }
+
+ public MigrationException(String msg, Throwable throwable) {
+ super(msg, throwable);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/exceptions/MigrationModeException.java b/multidb-portal/src/main/java/org/opengauss/exceptions/MigrationModeException.java
new file mode 100644
index 0000000000000000000000000000000000000000..397c7eaa07e7f6baaaa54d070c460d785b3b9a39
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/exceptions/MigrationModeException.java
@@ -0,0 +1,24 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.exceptions;
+
+/**
+ * migration mode exception
+ *
+ * @since 2025/4/23
+ */
+public class MigrationModeException extends RuntimeException {
+ public MigrationModeException(String message) {
+ super(message);
+ }
+
+ public MigrationModeException(Throwable e) {
+ super(e);
+ }
+
+ public MigrationModeException(String message, Throwable e) {
+ super(message, e);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/exceptions/PortalException.java b/multidb-portal/src/main/java/org/opengauss/exceptions/PortalException.java
new file mode 100644
index 0000000000000000000000000000000000000000..99feb0cc219325a0164b91567fed13ecd98d2ac4
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/exceptions/PortalException.java
@@ -0,0 +1,24 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.exceptions;
+
+/**
+ * Portal exception
+ *
+ * @since 2025/6/5
+ */
+public class PortalException extends RuntimeException {
+ public PortalException(String message) {
+ super(message);
+ }
+
+ public PortalException(Throwable throwable) {
+ super(throwable);
+ }
+
+ public PortalException(String message, Throwable throwable) {
+ super(message, throwable);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/exceptions/TaskException.java b/multidb-portal/src/main/java/org/opengauss/exceptions/TaskException.java
new file mode 100644
index 0000000000000000000000000000000000000000..b450e89ebcd3e52c87903aed45b0b78ddf2846e0
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/exceptions/TaskException.java
@@ -0,0 +1,24 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.exceptions;
+
+/**
+ * task exception
+ *
+ * @since 2025/4/24
+ */
+public class TaskException extends RuntimeException {
+ public TaskException(String msg) {
+ super(msg);
+ }
+
+ public TaskException(Throwable throwable) {
+ super(throwable);
+ }
+
+ public TaskException(String msg, Throwable throwable) {
+ super(msg, throwable);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/exceptions/VerifyException.java b/multidb-portal/src/main/java/org/opengauss/exceptions/VerifyException.java
new file mode 100644
index 0000000000000000000000000000000000000000..a234cf06e07258589ce8bcdbb9552dc640009db1
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/exceptions/VerifyException.java
@@ -0,0 +1,24 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.exceptions;
+
+/**
+ * verify exception
+ *
+ * @since 2025/6/7
+ */
+public class VerifyException extends RuntimeException {
+ public VerifyException(String msg) {
+ super(msg);
+ }
+
+ public VerifyException(Throwable throwable) {
+ super(throwable);
+ }
+
+ public VerifyException(String msg, Throwable throwable) {
+ super(msg, throwable);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/handler/PortalExceptionHandler.java b/multidb-portal/src/main/java/org/opengauss/handler/PortalExceptionHandler.java
new file mode 100644
index 0000000000000000000000000000000000000000..95a3b030f9ea81d461212c36d8b72463c8545770
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/handler/PortalExceptionHandler.java
@@ -0,0 +1,23 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.handler;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+/**
+ * Portal exception handler
+ *
+ * @since 2025/4/19
+ */
+public class PortalExceptionHandler implements Thread.UncaughtExceptionHandler {
+ private static final Logger LOGGER = LogManager.getLogger(PortalExceptionHandler.class);
+
+ @Override
+ public void uncaughtException(Thread t, Throwable e) {
+ String errorMessage = String.format("thread %s threw an uncaught exception: ", t.getName());
+ LOGGER.error(errorMessage, e);
+ }
+}
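`PortalExceptionHandler` only takes effect once a thread registers it. A self-contained sketch of that wiring; the logger is replaced by a captured string (an assumption made so the example runs on its own):

```java
public class HandlerSketch {
    public static volatile String lastError;

    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            throw new IllegalStateException("boom");
        }, "migration-worker");
        // Equivalent to installing PortalExceptionHandler on the thread:
        // the handler runs on the dying thread before it terminates.
        worker.setUncaughtExceptionHandler((t, e) ->
                lastError = String.format("thread %s threw: %s", t.getName(), e.getMessage()));
        worker.start();
        try {
            worker.join();
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
        System.out.println(lastError); // thread migration-worker threw: boom
    }
}
```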
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/MigrationManager.java b/multidb-portal/src/main/java/org/opengauss/migration/MigrationManager.java
new file mode 100644
index 0000000000000000000000000000000000000000..e247273776d81a8e2aca1211f5744feb84d2ed31
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/MigrationManager.java
@@ -0,0 +1,217 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration;
+
+import org.opengauss.Main;
+import org.opengauss.domain.model.MigrationStopIndicator;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.DatabaseType;
+import org.opengauss.enums.MigrationStatusEnum;
+import org.opengauss.exceptions.MigrationException;
+import org.opengauss.migration.config.AbstractMigrationJobConfig;
+import org.opengauss.migration.config.MysqlMigrationJobConfig;
+import org.opengauss.migration.config.PgsqlMigrationJobConfig;
+import org.opengauss.migration.helper.TaskHelper;
+import org.opengauss.migration.job.AbstractMigrationJob;
+import org.opengauss.migration.job.MysqlMigrationJob;
+import org.opengauss.migration.job.PgsqlMigrationJob;
+import org.opengauss.migration.monitor.MigrationAliveMonitor;
+import org.opengauss.migration.process.ProcessMonitor;
+import org.opengauss.migration.progress.ProgressMonitor;
+import org.opengauss.migration.progress.ProgressMonitorFactory;
+import org.opengauss.migration.status.StatusMonitor;
+
+/**
+ * Migration manager
+ *
+ * @since 2025/7/3
+ */
+public class MigrationManager {
+ private static volatile MigrationManager instance;
+
+ private TaskWorkspace taskWorkspace;
+ private DatabaseType sourceDbType;
+ private AbstractMigrationJobConfig migrationJobConfig;
+ private MigrationStopIndicator migrationStopIndicator;
+ private ProgressMonitor progressMonitor;
+ private ProcessMonitor processMonitor;
+ private StatusMonitor statusMonitor;
+ private MigrationAliveMonitor migrationAliveMonitor;
+ private AbstractMigrationJob migrationJob;
+
+ private MigrationManager() {
+ }
+
+ /**
+ * Initialize migration context
+ *
+ * @param taskWorkspace task workspace
+ */
+ public static void initialize(TaskWorkspace taskWorkspace) {
+ if (instance == null) {
+ synchronized (MigrationManager.class) {
+ if (instance == null) {
+ initMigrationContext(taskWorkspace);
+ }
+ }
+ } else {
+ throw new IllegalStateException("Migration context already initialized");
+ }
+ }
+
+ /**
+ * Get migration manager
+ *
+ * @return MigrationManager migration manager
+ */
+ public static MigrationManager getInstance() {
+ if (instance == null) {
+ synchronized (MigrationManager.class) {
+ if (instance == null) {
+ throw new IllegalStateException("Migration context has not initialized");
+ }
+ }
+ }
+ return instance;
+ }
+
+ /**
+ * Start migration
+ */
+ public void start() {
+ if (!migrationJob.preMigrationVerify()) {
+ migrationStopIndicator.setStop();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.PRE_MIGRATION_VERIFY_FAILED);
+ Main.stopQuarkus();
+ return;
+ }
+
+ startMonitor();
+ migrationJob.beforeTask();
+ migrationJob.startTask(migrationStopIndicator, processMonitor, statusMonitor);
+
+ if (!migrationJobConfig.hasIncrementalMigration() && !migrationJobConfig.hasReverseMigration()) {
+ Main.stopQuarkus();
+ }
+ }
+
+ /**
+ * Stop migration
+ */
+ public void stop() {
+ if (!migrationStopIndicator.isStopped()) {
+ doStop();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.MIGRATION_FINISHED);
+ }
+ }
+
+ /**
+ * Stop migration on error
+ */
+ public void stopOnError() {
+ if (!migrationStopIndicator.isStopped()) {
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.MIGRATION_FAILED);
+ doStop();
+ }
+ }
+
+ /**
+ * Stop incremental migration
+ */
+ public void stopIncremental() {
+ migrationJob.stopIncremental(migrationStopIndicator, statusMonitor);
+ }
+
+ /**
+ * Resume incremental migration
+ */
+ public void resumeIncremental() {
+ migrationJob.resumeIncremental(statusMonitor);
+ }
+
+ /**
+ * Restart incremental migration
+ */
+ public void restartIncremental() {
+ migrationJob.restartIncremental(migrationStopIndicator, statusMonitor);
+ }
+
+ /**
+ * Start reverse migration
+ */
+ public void startReverse() {
+ migrationJob.startReverse(migrationStopIndicator, statusMonitor);
+ }
+
+ /**
+ * Stop reverse migration
+ */
+ public void stopReverse() {
+ migrationJob.stopReverse(statusMonitor);
+ }
+
+ /**
+ * Resume reverse migration
+ */
+ public void resumeReverse() {
+ migrationJob.resumeReverse(statusMonitor);
+ }
+
+ /**
+ * Restart reverse migration
+ */
+ public void restartReverse() {
+ migrationJob.restartReverse(migrationStopIndicator, statusMonitor);
+ }
+
+ private void doStop() {
+ migrationStopIndicator.setStop();
+ migrationJob.stopTask();
+ stopMonitor();
+ }
+
+ private void stopMonitor() {
+ processMonitor.stopMonitoring();
+ progressMonitor.stopMonitoring();
+ migrationAliveMonitor.stop();
+ }
+
+ private void startMonitor() {
+ processMonitor.startMonitoring(this, statusMonitor);
+ progressMonitor.start();
+ migrationAliveMonitor.start();
+ }
+
+ private static void initMigrationContext(TaskWorkspace taskWorkspace) {
+ MigrationManager migrationManager = new MigrationManager();
+ DatabaseType sourceDbType = TaskHelper.loadSourceDbType(taskWorkspace);
+ migrationManager.taskWorkspace = taskWorkspace;
+ migrationManager.sourceDbType = sourceDbType;
+
+ if (DatabaseType.MYSQL.equals(sourceDbType)) {
+ MysqlMigrationJobConfig migrationJobConfig = new MysqlMigrationJobConfig(taskWorkspace);
+ TaskHelper.loadConfig(migrationJobConfig);
+ migrationManager.migrationJobConfig = migrationJobConfig;
+ migrationManager.migrationJob = new MysqlMigrationJob(migrationJobConfig);
+ } else if (DatabaseType.POSTGRESQL.equals(sourceDbType)) {
+ PgsqlMigrationJobConfig migrationJobConfig = new PgsqlMigrationJobConfig(taskWorkspace);
+ TaskHelper.loadConfig(migrationJobConfig);
+ migrationManager.migrationJobConfig = migrationJobConfig;
+ migrationManager.migrationJob = new PgsqlMigrationJob(migrationJobConfig);
+ } else {
+ throw new MigrationException("Unsupported source database type: " + sourceDbType);
+ }
+
+ StatusMonitor statusMonitor = new StatusMonitor(taskWorkspace);
+ migrationManager.statusMonitor = statusMonitor;
+ migrationManager.progressMonitor = ProgressMonitorFactory.createProgressMonitor(
+ sourceDbType, statusMonitor, taskWorkspace);
+ migrationManager.migrationStopIndicator = new MigrationStopIndicator();
+ migrationManager.migrationAliveMonitor = new MigrationAliveMonitor(taskWorkspace);
+ migrationManager.processMonitor = new ProcessMonitor();
+
+ instance = migrationManager;
+ }
+}
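`MigrationManager` uses an initialize-once, fail-fast singleton: `initialize()` performs double-checked locking and a second call throws, while `getInstance()` refuses to return before initialization. A minimal sketch of the same pattern (class and field names here are illustrative):

```java
public class SingletonSketch {
    private static volatile SingletonSketch instance;

    private final String workspaceId;

    private SingletonSketch(String workspaceId) {
        this.workspaceId = workspaceId;
    }

    // Mirrors MigrationManager.initialize(): the first caller creates the
    // instance; any later caller gets an IllegalStateException.
    public static void initialize(String workspaceId) {
        if (instance == null) {
            synchronized (SingletonSketch.class) {
                if (instance == null) {
                    instance = new SingletonSketch(workspaceId);
                    return;
                }
            }
        }
        throw new IllegalStateException("already initialized");
    }

    // Mirrors MigrationManager.getInstance(): fails fast when initialize()
    // has not run yet.
    public static SingletonSketch getInstance() {
        if (instance == null) {
            throw new IllegalStateException("not initialized");
        }
        return instance;
    }

    public String getWorkspaceId() {
        return workspaceId;
    }

    public static void main(String[] args) {
        initialize("task1");
        System.out.println(getInstance().getWorkspaceId()); // task1
    }
}
```

The `volatile` field plus the second null check inside the lock is what makes the first-caller-wins behavior safe across threads.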
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/config/AbstractMigrationJobConfig.java b/multidb-portal/src/main/java/org/opengauss/migration/config/AbstractMigrationJobConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..6894af98da9f2bc94c7704508e6c8a44eff4b375
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/config/AbstractMigrationJobConfig.java
@@ -0,0 +1,144 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.config;
+
+import lombok.Getter;
+import org.opengauss.constants.config.MigrationConfig;
+import org.opengauss.domain.model.ConfigFile;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.MigrationPhase;
+import org.opengauss.migration.mode.MigrationMode;
+import org.opengauss.migration.mode.ModeManager;
+
+import java.util.List;
+
+/**
+ * Abstract migration job config
+ *
+ * @since 2025/7/2
+ */
+@Getter
+public abstract class AbstractMigrationJobConfig {
+ /**
+ * Task workspace
+ */
+ protected final TaskWorkspace taskWorkspace;
+
+ /**
+ * Migration config file
+ */
+ protected final ConfigFile migrationConfigFile;
+
+ private volatile List<MigrationPhase> migrationPhaseList;
+ private volatile Boolean hasFullMigration;
+ private volatile Boolean hasFullDataCheck;
+ private volatile Boolean hasIncrementalMigration;
+ private volatile Boolean hasIncrementalDataCheck;
+ private volatile Boolean hasReverseMigration;
+
+ AbstractMigrationJobConfig(TaskWorkspace taskWorkspace, ConfigFile migrationConfigFile) {
+ this.taskWorkspace = taskWorkspace;
+ this.migrationConfigFile = migrationConfigFile;
+ }
+
+ /**
+ * Load migration phase list from migration.properties
+ *
+ * @return List<MigrationPhase> migration phase list
+ */
+ public List<MigrationPhase> getMigrationPhaseList() {
+ if (migrationPhaseList == null) {
+ String modeName = migrationConfigFile.getConfigMap().get(MigrationConfig.MIGRATION_MODE).toString();
+ MigrationMode migrationMode = new ModeManager().getModeByName(modeName);
+ migrationPhaseList = migrationMode.getMigrationPhaseList();
+ }
+ return migrationPhaseList;
+ }
+
+ /**
+ * Check whether migration phase list has full migration
+ *
+ * @return boolean has full migration
+ */
+ public boolean hasFullMigration() {
+ if (migrationPhaseList == null || hasFullMigration == null) {
+ hasFullMigration = getMigrationPhaseList().contains(MigrationPhase.FULL_MIGRATION);
+ }
+ return hasFullMigration;
+ }
+
+ /**
+ * Check whether migration phase list has full data check
+ *
+ * @return boolean has full data check
+ */
+ public boolean hasFullDataCheck() {
+ if (migrationPhaseList == null || hasFullDataCheck == null) {
+ hasFullDataCheck = getMigrationPhaseList().contains(MigrationPhase.FULL_DATA_CHECK);
+ }
+ return hasFullDataCheck;
+ }
+
+ /**
+ * Check whether migration phase list has incremental migration
+ *
+ * @return boolean has incremental migration
+ */
+ public boolean hasIncrementalMigration() {
+ if (migrationPhaseList == null || hasIncrementalMigration == null) {
+ hasIncrementalMigration = getMigrationPhaseList().contains(MigrationPhase.INCREMENTAL_MIGRATION);
+ }
+ return hasIncrementalMigration;
+ }
+
+ /**
+ * Check whether migration phase list has incremental data check
+ *
+ * @return boolean has incremental data check
+ */
+ public boolean hasIncrementalDataCheck() {
+ if (migrationPhaseList == null || hasIncrementalDataCheck == null) {
+ hasIncrementalDataCheck = getMigrationPhaseList().contains(MigrationPhase.INCREMENTAL_DATA_CHECK);
+ }
+ return hasIncrementalDataCheck;
+ }
+
+ /**
+ * Check whether migration phase list has reverse migration
+ *
+ * @return boolean has reverse migration
+ */
+ public boolean hasReverseMigration() {
+ if (migrationPhaseList == null || hasReverseMigration == null) {
+ hasReverseMigration = getMigrationPhaseList().contains(MigrationPhase.REVERSE_MIGRATION);
+ }
+ return hasReverseMigration;
+ }
+
+ /**
+ * Load migration config from config files
+ */
+ public abstract void loadConfig();
+
+ /**
+ * Validate migration config
+ */
+ public abstract void validateConfig();
+
+ /**
+ * Change migration tools config
+ */
+ public abstract void changeToolsConfig();
+
+ /**
+ * Save the changed migration config
+ */
+ public abstract void saveChangeConfig();
+
+ /**
+ * Generate migration tools config files when creating a task
+ */
+ public abstract void generateToolsConfigFiles();
+}
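The `hasXxx()` checks above lazily load the phase list and memoize each containment check in a boxed `Boolean`, so repeated calls avoid re-reading the config. A trimmed sketch of that pattern; the hard-coded phase list stands in for reading migration.properties through `ModeManager`:

```java
import java.util.List;

public class PhaseConfigSketch {
    enum Phase { FULL_MIGRATION, FULL_DATA_CHECK, INCREMENTAL_MIGRATION, REVERSE_MIGRATION }

    private volatile List<Phase> phases;
    private volatile Boolean hasFullMigration;

    // Stand-in for resolving the configured migration mode; the real code
    // reads migration.properties and asks ModeManager for the phase list.
    private List<Phase> loadPhases() {
        return List.of(Phase.FULL_MIGRATION, Phase.INCREMENTAL_MIGRATION);
    }

    public List<Phase> getPhases() {
        if (phases == null) {
            phases = loadPhases();
        }
        return phases;
    }

    // Mirrors hasFullMigration(): the containment check runs once and the
    // cached Boolean result is reused on later calls.
    public boolean hasFullMigration() {
        if (phases == null || hasFullMigration == null) {
            hasFullMigration = getPhases().contains(Phase.FULL_MIGRATION);
        }
        return hasFullMigration;
    }

    public static void main(String[] args) {
        PhaseConfigSketch config = new PhaseConfigSketch();
        System.out.println(config.hasFullMigration()); // true
    }
}
```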
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/config/MysqlMigrationJobConfig.java b/multidb-portal/src/main/java/org/opengauss/migration/config/MysqlMigrationJobConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..0a8354de0d2bdd530f47e449aad1bc52dedd06dc
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/config/MysqlMigrationJobConfig.java
@@ -0,0 +1,315 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.config;
+
+import lombok.Getter;
+import org.opengauss.constants.ConfigValidationConstants;
+import org.opengauss.constants.config.MigrationConfig;
+import org.opengauss.domain.dto.MysqlMigrationConfigDto;
+import org.opengauss.domain.model.ChameleonConfigBundle;
+import org.opengauss.domain.model.ConfigFile;
+import org.opengauss.domain.model.DataCheckerConfigBundle;
+import org.opengauss.domain.model.DebeziumConfigBundle;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.DebeziumProcessType;
+import org.opengauss.enums.TemplateConfigType;
+import org.opengauss.exceptions.ConfigException;
+import org.opengauss.migration.helper.config.ChameleonMysqlMigrationConfigHelper;
+import org.opengauss.migration.helper.config.DataCheckerMysqlMigrationConfigHelper;
+import org.opengauss.migration.helper.config.DebeziumMysqlMigrationConfigHelper;
+import org.opengauss.migration.helper.tool.ChameleonHelper;
+import org.opengauss.migration.helper.tool.DataCheckerHelper;
+
+import java.util.Map;
+
+/**
+ * MySQL Migration Job Config
+ *
+ * @since 2025/7/2
+ */
+@Getter
+public class MysqlMigrationJobConfig extends AbstractMigrationJobConfig {
+ private final ChameleonConfigBundle fullConfigBundle;
+ private final DataCheckerConfigBundle fullDataCheckConfigBundle;
+ private final DataCheckerConfigBundle incrementalDataCheckConfigBundle;
+ private final DebeziumConfigBundle incrementalConfigBundle;
+ private final DebeziumConfigBundle reverseConfigBundle;
+
+ private volatile MysqlMigrationConfigDto migrationConfigDto;
+
+ public MysqlMigrationJobConfig(TaskWorkspace taskWorkspace) {
+ super(taskWorkspace, new ConfigFile("migration.properties", taskWorkspace.getConfigDirPath(),
+ taskWorkspace, TemplateConfigType.MYSQL_MIGRATION_CONFIG));
+
+ this.fullConfigBundle = getFullConfigBundle(taskWorkspace);
+ this.fullDataCheckConfigBundle = getFullDataCheckConfigBundle(taskWorkspace);
+ this.incrementalConfigBundle = getIncrementalConfigBundle(taskWorkspace);
+ this.incrementalDataCheckConfigBundle = getIncrementalDataCheckConfigBundle(taskWorkspace);
+ this.reverseConfigBundle = getReverseConfigBundle(taskWorkspace);
+ }
+
+ /**
+ * Get migration config dto
+ *
+ * @return MysqlMigrationConfigDto MySQL migration config dto
+ */
+ public MysqlMigrationConfigDto getMigrationConfigDto() {
+ if (migrationConfigDto == null) {
+ throw new IllegalStateException("MySQL migration config is not loaded");
+ }
+ return migrationConfigDto;
+ }
+
+ @Override
+ public void loadConfig() {
+ migrationConfigFile.loadConfigMap();
+ migrationConfigDto = MysqlMigrationConfigDto.generateMysqlMigrationConfigDto(
+ migrationConfigFile.getConfigMap());
+
+ if (hasFullMigration()) {
+ fullConfigBundle.loadConfigMap();
+ }
+ if (hasFullDataCheck()) {
+ fullDataCheckConfigBundle.loadConfigMap();
+ }
+
+ if (hasIncrementalMigration()) {
+ incrementalConfigBundle.loadConfigMap();
+ if (hasIncrementalDataCheck()) {
+ incrementalDataCheckConfigBundle.loadConfigMap();
+ }
+ }
+
+ if (hasReverseMigration()) {
+ reverseConfigBundle.loadConfigMap();
+ }
+ }
+
+ @Override
+ public void validateConfig() {
+ Map<String, Object> migrationConfig = migrationConfigFile.getConfigMap();
+ String mysqlIp = migrationConfig.get(MigrationConfig.MYSQL_DATABASE_IP).toString();
+ String mysqlPort = migrationConfig.get(MigrationConfig.MYSQL_DATABASE_PORT).toString();
+ String opengaussIp = migrationConfig.get(MigrationConfig.OPENGAUSS_DATABASE_IP).toString();
+ String opengaussPort = migrationConfig.get(MigrationConfig.OPENGAUSS_DATABASE_PORT).toString();
+
+ if (!ConfigValidationConstants.IP_REGEX.matcher(mysqlIp).matches()
+ || !ConfigValidationConstants.PORT_REGEX.matcher(mysqlPort).matches()
+ || !ConfigValidationConstants.IP_REGEX.matcher(opengaussIp).matches()
+ || !ConfigValidationConstants.PORT_REGEX.matcher(opengaussPort).matches()) {
+ throw new ConfigException("IP or Port is invalid");
+ }
+ }
+
+ @Override
+ public void changeToolsConfig() {
+ if (hasFullMigration()) {
+ changeFullConfig();
+ }
+ if (hasFullDataCheck()) {
+ changeFullDataCheckConfig();
+ }
+
+ if (hasIncrementalMigration()) {
+ changeIncrementalConfig();
+ if (hasIncrementalDataCheck()) {
+ changeIncrementalDataCheckConfig();
+ }
+ }
+
+ if (hasReverseMigration()) {
+ changeReverseConfig();
+ }
+ }
+
+ @Override
+ public void saveChangeConfig() {
+ if (hasFullMigration()) {
+ fullConfigBundle.saveConfigMap();
+ }
+ if (hasFullDataCheck()) {
+ fullDataCheckConfigBundle.saveConfigMap();
+ }
+
+ if (hasIncrementalMigration()) {
+ incrementalConfigBundle.saveConfigMap();
+ if (hasIncrementalDataCheck()) {
+ incrementalDataCheckConfigBundle.saveConfigMap();
+ }
+ }
+
+ if (hasReverseMigration()) {
+ reverseConfigBundle.saveConfigMap();
+ }
+ }
+
+ @Override
+ public void generateToolsConfigFiles() {
+ migrationConfigFile.generateFile();
+ fullConfigBundle.generateFile();
+ fullDataCheckConfigBundle.generateFile();
+ incrementalConfigBundle.generateFile();
+ incrementalDataCheckConfigBundle.generateFile();
+ reverseConfigBundle.generateFile();
+ }
+
+ private void changeFullConfig() {
+ fullConfigBundle.getConfigFile().getConfigMap().putAll(
+ ChameleonMysqlMigrationConfigHelper.mysqlFullMigrationConfig(migrationConfigDto, taskWorkspace));
+ }
+
+ private void changeFullDataCheckConfig() {
+ String logConfigPath = fullDataCheckConfigBundle.getLog4j2ConfigFile().getFilePath();
+ Map<String, Object> checkParams = DataCheckerMysqlMigrationConfigHelper.mysqlFullDataCheckCheckConfig(
+ taskWorkspace, logConfigPath);
+ Map<String, Object> sinkParams = DataCheckerMysqlMigrationConfigHelper.mysqlFullDataCheckSinkConfig(
+ migrationConfigDto, logConfigPath);
+ Map<String, Object> sourceParams = DataCheckerMysqlMigrationConfigHelper.mysqlFullDataCheckSourceConfig(
+ migrationConfigDto, logConfigPath);
+ Map<String, Object> log4j2Config = DataCheckerHelper.getFullCheckLog4j2Config(
+ taskWorkspace);
+
+ fullDataCheckConfigBundle.getCheckConfigFile().getConfigMap().putAll(checkParams);
+ fullDataCheckConfigBundle.getSinkConfigFile().getConfigMap().putAll(sinkParams);
+ fullDataCheckConfigBundle.getSourceConfigFile().getConfigMap().putAll(sourceParams);
+ fullDataCheckConfigBundle.getLog4j2ConfigFile().getConfigMap().putAll(log4j2Config);
+ }
+
+ private void changeIncrementalConfig() {
+ Map<String, Object> connectSourceParams = DebeziumMysqlMigrationConfigHelper.incrementalSourceConfig(
+ migrationConfigDto, taskWorkspace);
+ Map<String, Object> connectSinkParams = DebeziumMysqlMigrationConfigHelper.incrementalSinkConfig(
+ migrationConfigDto, taskWorkspace);
+ Map<String, Object> workerSourceParams = DebeziumMysqlMigrationConfigHelper.incrementalWorkerSourceConfig(
+ taskWorkspace);
+ Map<String, Object> workerSinkParams = DebeziumMysqlMigrationConfigHelper.incrementalWorkerSinkConfig(
+ taskWorkspace);
+ Map<String, Object> log4jSourceParams = DebeziumMysqlMigrationConfigHelper.incrementalLog4jConfig(
+ taskWorkspace, DebeziumProcessType.SOURCE);
+ Map<String, Object> log4jSinkParams = DebeziumMysqlMigrationConfigHelper.incrementalLog4jConfig(
+ taskWorkspace, DebeziumProcessType.SINK);
+
+ incrementalConfigBundle.getConnectSourceConfigFile().getConfigMap().putAll(connectSourceParams);
+ incrementalConfigBundle.getConnectSinkConfigFile().getConfigMap().putAll(connectSinkParams);
+ incrementalConfigBundle.getWorkerSourceConfigFile().getConfigMap().putAll(workerSourceParams);
+ incrementalConfigBundle.getWorkerSinkConfigFile().getConfigMap().putAll(workerSinkParams);
+ incrementalConfigBundle.getLog4jSourceConfigFile().getConfigMap().putAll(log4jSourceParams);
+ incrementalConfigBundle.getLog4jSinkConfigFile().getConfigMap().putAll(log4jSinkParams);
+ }
+
+ private void changeIncrementalDataCheckConfig() {
+ String logConfigPath = incrementalDataCheckConfigBundle.getLog4j2ConfigFile().getFilePath();
+ String incrementalKafkaTopic = DebeziumMysqlMigrationConfigHelper.generateIncrementalKafkaTopic(taskWorkspace);
+
+ Map<String, Object> checkParams = DataCheckerMysqlMigrationConfigHelper.mysqlIncrementalDataCheckCheckConfig(
+ taskWorkspace, logConfigPath);
+ incrementalDataCheckConfigBundle.getCheckConfigFile().getConfigMap().putAll(checkParams);
+ Map<String, Object> sinkParams = DataCheckerMysqlMigrationConfigHelper.mysqlIncrementalDataCheckSinkConfig(
+ migrationConfigDto, logConfigPath, incrementalKafkaTopic);
+ incrementalDataCheckConfigBundle.getSinkConfigFile().getConfigMap().putAll(sinkParams);
+ Map<String, Object> sourceParams = DataCheckerMysqlMigrationConfigHelper.mysqlIncrementalDataCheckSourceConfig(
+ migrationConfigDto, logConfigPath, incrementalKafkaTopic);
+ incrementalDataCheckConfigBundle.getSourceConfigFile().getConfigMap().putAll(sourceParams);
+
+ Map<String, Object> log4j2Config = DataCheckerHelper.getIncrementalCheckLog4j2Config(
+ taskWorkspace);
+ incrementalDataCheckConfigBundle.getLog4j2ConfigFile().getConfigMap().putAll(log4j2Config);
+ }
+
+ private void changeReverseConfig() {
+ Map<String, Object> connectSourceParams = DebeziumMysqlMigrationConfigHelper.reverseSourceConfig(
+ migrationConfigDto, taskWorkspace);
+ Map<String, Object> connectSinkParams = DebeziumMysqlMigrationConfigHelper.reverseSinkConfig(
+ migrationConfigDto, taskWorkspace);
+ reverseConfigBundle.getConnectSourceConfigFile().getConfigMap().putAll(connectSourceParams);
+ reverseConfigBundle.getConnectSinkConfigFile().getConfigMap().putAll(connectSinkParams);
+
+ Map<String, Object> workerSourceParams = DebeziumMysqlMigrationConfigHelper.reverseWorkerSourceConfig(
+ taskWorkspace);
+ Map<String, Object> workerSinkParams = DebeziumMysqlMigrationConfigHelper.reverseWorkerSinkConfig(
+ taskWorkspace);
+ reverseConfigBundle.getWorkerSourceConfigFile().getConfigMap().putAll(workerSourceParams);
+ reverseConfigBundle.getWorkerSinkConfigFile().getConfigMap().putAll(workerSinkParams);
+
+ Map<String, Object> log4jSourceParams = DebeziumMysqlMigrationConfigHelper.reverseLog4jConfig(taskWorkspace,
+ DebeziumProcessType.SOURCE);
+ Map<String, Object> log4jSinkParams = DebeziumMysqlMigrationConfigHelper.reverseLog4jConfig(taskWorkspace,
+ DebeziumProcessType.SINK);
+ reverseConfigBundle.getLog4jSourceConfigFile().getConfigMap().putAll(log4jSourceParams);
+ reverseConfigBundle.getLog4jSinkConfigFile().getConfigMap().putAll(log4jSinkParams);
+ }
+
+ private ChameleonConfigBundle getFullConfigBundle(TaskWorkspace taskWorkspace) {
+ ChameleonConfigBundle result = new ChameleonConfigBundle();
+ String fullConfigName = ChameleonHelper.generateFullMigrationConfigFileName(taskWorkspace);
+ result.setConfigFile(new ConfigFile(fullConfigName, taskWorkspace.getConfigFullDirPath(), taskWorkspace,
+ TemplateConfigType.CHAMELEON_CONFIG));
+ return result;
+ }
+
+ private DataCheckerConfigBundle getFullDataCheckConfigBundle(TaskWorkspace taskWorkspace) {
+ DataCheckerConfigBundle result = new DataCheckerConfigBundle();
+ String configFullDataCheckDirPath = taskWorkspace.getConfigFullDataCheckDirPath();
+ result.setCheckConfigFile(new ConfigFile("application.yml", configFullDataCheckDirPath,
+ taskWorkspace, TemplateConfigType.DATACHECKER_CHECK_CONFIG));
+ result.setSinkConfigFile(new ConfigFile("application-sink.yml", configFullDataCheckDirPath,
+ taskWorkspace, TemplateConfigType.DATACHECKER_SINK_CONFIG));
+ result.setSourceConfigFile(new ConfigFile("application-source.yml", configFullDataCheckDirPath,
+ taskWorkspace, TemplateConfigType.DATACHECKER_SOURCE_CONFIG));
+ result.setLog4j2ConfigFile(new ConfigFile("log4j2.xml", configFullDataCheckDirPath,
+ taskWorkspace, TemplateConfigType.DATACHECKER_LOG4J2_CONFIG));
+ return result;
+ }
+
+ private DebeziumConfigBundle getIncrementalConfigBundle(TaskWorkspace taskWorkspace) {
+ DebeziumConfigBundle result = new DebeziumConfigBundle();
+ String configIncrementalDirPath = taskWorkspace.getConfigIncrementalDirPath();
+ result.setConnectSinkConfigFile(new ConfigFile("incremental-connect-sink.properties",
+ configIncrementalDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_MYSQL_SINK_CONFIG));
+ result.setConnectSourceConfigFile(new ConfigFile("incremental-connect-source.properties",
+ configIncrementalDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_MYSQL_SOURCE_CONFIG));
+ result.setWorkerSinkConfigFile(new ConfigFile("incremental-worker-sink.properties",
+ configIncrementalDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_AVRO_STANDALONE_CONFIG));
+ result.setWorkerSourceConfigFile(new ConfigFile("incremental-worker-source.properties",
+ configIncrementalDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_AVRO_STANDALONE_CONFIG));
+ result.setLog4jSinkConfigFile(new ConfigFile("incremental-log4j-sink.properties", configIncrementalDirPath,
+ taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_LOG4J2_CONFIG));
+ result.setLog4jSourceConfigFile(new ConfigFile("incremental-log4j-source.properties",
+ configIncrementalDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_LOG4J2_CONFIG));
+ return result;
+ }
+
+ private DataCheckerConfigBundle getIncrementalDataCheckConfigBundle(TaskWorkspace taskWorkspace) {
+ DataCheckerConfigBundle result = new DataCheckerConfigBundle();
+ String configIncrementalDataCheckDirPath = taskWorkspace.getConfigIncrementalDataCheckDirPath();
+ result.setCheckConfigFile(new ConfigFile("application.yml",
+ configIncrementalDataCheckDirPath, taskWorkspace, TemplateConfigType.DATACHECKER_CHECK_CONFIG));
+ result.setSinkConfigFile(new ConfigFile("application-sink.yml",
+ configIncrementalDataCheckDirPath, taskWorkspace, TemplateConfigType.DATACHECKER_SINK_CONFIG));
+ result.setSourceConfigFile(new ConfigFile("application-source.yml",
+ configIncrementalDataCheckDirPath, taskWorkspace, TemplateConfigType.DATACHECKER_SOURCE_CONFIG));
+ result.setLog4j2ConfigFile(new ConfigFile("log4j2.xml",
+ configIncrementalDataCheckDirPath, taskWorkspace, TemplateConfigType.DATACHECKER_LOG4J2_CONFIG));
+ return result;
+ }
+
+ private DebeziumConfigBundle getReverseConfigBundle(TaskWorkspace taskWorkspace) {
+ DebeziumConfigBundle result = new DebeziumConfigBundle();
+ String configReverseDirPath = taskWorkspace.getConfigReverseDirPath();
+ result.setConnectSinkConfigFile(new ConfigFile("reverse-connect-sink.properties",
+ configReverseDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_OPENGAUSS_SINK_CONFIG));
+ result.setConnectSourceConfigFile(new ConfigFile("reverse-connect-source.properties",
+ configReverseDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_OPENGAUSS_SOURCE_CONFIG));
+ result.setWorkerSinkConfigFile(new ConfigFile("reverse-worker-sink.properties",
+ configReverseDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_AVRO_STANDALONE_CONFIG));
+ result.setWorkerSourceConfigFile(new ConfigFile("reverse-worker-source.properties",
+ configReverseDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_AVRO_STANDALONE_CONFIG));
+ result.setLog4jSinkConfigFile(new ConfigFile("reverse-log4j-sink.properties", configReverseDirPath,
+ taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_LOG4J2_CONFIG));
+ result.setLog4jSourceConfigFile(new ConfigFile("reverse-log4j-source.properties", configReverseDirPath,
+ taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_LOG4J2_CONFIG));
+ return result;
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/config/PgsqlMigrationJobConfig.java b/multidb-portal/src/main/java/org/opengauss/migration/config/PgsqlMigrationJobConfig.java
new file mode 100644
index 0000000000000000000000000000000000000000..c7b40d9a61fa331dd091ba10d08a03b3125aa771
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/config/PgsqlMigrationJobConfig.java
@@ -0,0 +1,232 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.config;
+
+import lombok.Getter;
+import org.opengauss.constants.ConfigValidationConstants;
+import org.opengauss.constants.config.MigrationConfig;
+import org.opengauss.domain.dto.PgsqlMigrationConfigDto;
+import org.opengauss.domain.model.ConfigFile;
+import org.opengauss.domain.model.DebeziumConfigBundle;
+import org.opengauss.domain.model.FullMigrationToolConfigBundle;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.DebeziumProcessType;
+import org.opengauss.enums.TemplateConfigType;
+import org.opengauss.exceptions.ConfigException;
+import org.opengauss.migration.helper.config.DebeziumPgsqlMigrationConfigHelper;
+import org.opengauss.migration.helper.config.FullMigrationToolPgsqlMigrationConfigHelper;
+
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * PostgreSQL Migration Job Config
+ *
+ * @since 2025/7/2
+ */
+@Getter
+public class PgsqlMigrationJobConfig extends AbstractMigrationJobConfig {
+ private final FullMigrationToolConfigBundle fullConfigBundle;
+ private final DebeziumConfigBundle incrementalConfigBundle;
+ private final DebeziumConfigBundle reverseConfigBundle;
+
+ private volatile PgsqlMigrationConfigDto migrationConfigDto;
+
+ public PgsqlMigrationJobConfig(TaskWorkspace taskWorkspace) {
+ super(taskWorkspace, new ConfigFile("migration.properties", taskWorkspace.getConfigDirPath(),
+ taskWorkspace, TemplateConfigType.PGSQL_MIGRATION_CONFIG));
+
+ this.fullConfigBundle = getFullConfigBundle(taskWorkspace);
+ this.incrementalConfigBundle = getIncrementalConfigBundle(taskWorkspace);
+ this.reverseConfigBundle = getReverseConfigBundle(taskWorkspace);
+ }
+
+ /**
+ * Get migration config dto.
+ *
+ * @return pgsql migration config dto
+ */
+ public PgsqlMigrationConfigDto getMigrationConfigDto() {
+ if (migrationConfigDto == null) {
+ throw new IllegalStateException("PostgreSQL migration config is not loaded");
+ }
+ return migrationConfigDto;
+ }
+
+ @Override
+ public void loadConfig() {
+ migrationConfigFile.loadConfigMap();
+ migrationConfigDto = PgsqlMigrationConfigDto.generatePgsqlMigrationConfigDto(
+ migrationConfigFile.getConfigMap());
+
+ if (hasFullMigration()) {
+ fullConfigBundle.loadConfigMap();
+ }
+
+ if (hasIncrementalMigration()) {
+ incrementalConfigBundle.loadConfigMap();
+ }
+
+ if (hasReverseMigration()) {
+ reverseConfigBundle.loadConfigMap();
+ }
+ }
+
+ @Override
+ public void validateConfig() {
+ Map<String, Object> migrationConfig = migrationConfigFile.getConfigMap();
+ String pgsqlIp = migrationConfig.get(MigrationConfig.PGSQL_DATABASE_IP).toString();
+ String pgsqlPort = migrationConfig.get(MigrationConfig.PGSQL_DATABASE_PORT).toString();
+ String opengaussIp = migrationConfig.get(MigrationConfig.OPENGAUSS_DATABASE_IP).toString();
+ String opengaussPort = migrationConfig.get(MigrationConfig.OPENGAUSS_DATABASE_PORT).toString();
+
+ if (!ConfigValidationConstants.IP_REGEX.matcher(pgsqlIp).matches()
+ || !ConfigValidationConstants.PORT_REGEX.matcher(pgsqlPort).matches()
+ || !ConfigValidationConstants.IP_REGEX.matcher(opengaussIp).matches()
+ || !ConfigValidationConstants.PORT_REGEX.matcher(opengaussPort).matches()) {
+ throw new ConfigException("IP or Port is invalid");
+ }
+ }
+
+ @Override
+ public void changeToolsConfig() {
+ if (hasFullMigration()) {
+ changeFullConfig(hasIncrementalMigration());
+ }
+
+ if (hasIncrementalMigration()) {
+ changeIncrementalConfig();
+ }
+
+ if (hasReverseMigration()) {
+ changeReverseConfig();
+ }
+ }
+
+ @Override
+ public void saveChangeConfig() {
+ if (hasFullMigration()) {
+ fullConfigBundle.saveConfigMap();
+ }
+
+ if (hasIncrementalMigration()) {
+ incrementalConfigBundle.saveConfigMap();
+ }
+
+ if (hasReverseMigration()) {
+ reverseConfigBundle.saveConfigMap();
+ }
+ }
+
+ @Override
+ public void generateToolsConfigFiles() {
+ migrationConfigFile.generateFile();
+ fullConfigBundle.generateFile();
+ incrementalConfigBundle.generateFile();
+ reverseConfigBundle.generateFile();
+ }
+
+ private void changeFullConfig(boolean hasIncremental) {
+ Map<String, Object> configMap = FullMigrationToolPgsqlMigrationConfigHelper.pgsqlFullMigrationConfig(
+ migrationConfigDto, taskWorkspace);
+ if (hasIncremental) {
+ configMap.putAll(FullMigrationToolPgsqlMigrationConfigHelper.pgsqlFullMigrationRecordSnapshotConfig(
+ migrationConfigDto));
+ }
+ fullConfigBundle.getConfigFile().getConfigMap().putAll(configMap);
+ }
+
+ private void changeIncrementalConfig() {
+ Map<String, Object> connectSourceParams = DebeziumPgsqlMigrationConfigHelper.incrementalSourceConfig(
+ migrationConfigDto, taskWorkspace);
+ incrementalConfigBundle.getConnectSourceConfigFile().getConfigMap().putAll(connectSourceParams);
+ Set<String> deleteKeySet = DebeziumPgsqlMigrationConfigHelper.incrementalSourceConfigDeleteKeySet();
+ incrementalConfigBundle.getConnectSourceConfigFile().getDeleteConfigKeySet().addAll(deleteKeySet);
+
+ Map<String, Object> connectSinkParams = DebeziumPgsqlMigrationConfigHelper.incrementalSinkConfig(
+ migrationConfigDto, taskWorkspace);
+ incrementalConfigBundle.getConnectSinkConfigFile().getConfigMap().putAll(connectSinkParams);
+
+ Map<String, Object> workerSourceParams = DebeziumPgsqlMigrationConfigHelper.incrementalWorkerSourceConfig(
+ taskWorkspace);
+ incrementalConfigBundle.getWorkerSourceConfigFile().getConfigMap().putAll(workerSourceParams);
+ Map<String, Object> workerSinkParams = DebeziumPgsqlMigrationConfigHelper.incrementalWorkerSinkConfig(
+ taskWorkspace);
+ incrementalConfigBundle.getWorkerSinkConfigFile().getConfigMap().putAll(workerSinkParams);
+
+ Map<String, Object> log4jSourceParams = DebeziumPgsqlMigrationConfigHelper.incrementalLog4jConfig(
+ taskWorkspace, DebeziumProcessType.SOURCE);
+ incrementalConfigBundle.getLog4jSourceConfigFile().getConfigMap().putAll(log4jSourceParams);
+ Map<String, Object> log4jSinkParams = DebeziumPgsqlMigrationConfigHelper.incrementalLog4jConfig(
+ taskWorkspace, DebeziumProcessType.SINK);
+ incrementalConfigBundle.getLog4jSinkConfigFile().getConfigMap().putAll(log4jSinkParams);
+ }
+
+ private void changeReverseConfig() {
+ Map<String, Object> connectSourceParams = DebeziumPgsqlMigrationConfigHelper.reverseSourceConfig(
+ migrationConfigDto, taskWorkspace);
+ reverseConfigBundle.getConnectSourceConfigFile().getConfigMap().putAll(connectSourceParams);
+ Map<String, Object> connectSinkParams = DebeziumPgsqlMigrationConfigHelper.reverseSinkConfig(
+ migrationConfigDto, taskWorkspace);
+ reverseConfigBundle.getConnectSinkConfigFile().getConfigMap().putAll(connectSinkParams);
+
+ Map<String, Object> workerSourceParams = DebeziumPgsqlMigrationConfigHelper.reverseWorkerSourceConfig(
+ taskWorkspace);
+ reverseConfigBundle.getWorkerSourceConfigFile().getConfigMap().putAll(workerSourceParams);
+ Map<String, Object> workerSinkParams = DebeziumPgsqlMigrationConfigHelper.reverseWorkerSinkConfig(
+ taskWorkspace);
+ reverseConfigBundle.getWorkerSinkConfigFile().getConfigMap().putAll(workerSinkParams);
+
+ Map<String, Object> log4jSourceParams = DebeziumPgsqlMigrationConfigHelper.reverseLog4jConfig(
+ taskWorkspace, DebeziumProcessType.SOURCE);
+ reverseConfigBundle.getLog4jSourceConfigFile().getConfigMap().putAll(log4jSourceParams);
+ Map<String, Object> log4jSinkParams = DebeziumPgsqlMigrationConfigHelper.reverseLog4jConfig(
+ taskWorkspace, DebeziumProcessType.SINK);
+ reverseConfigBundle.getLog4jSinkConfigFile().getConfigMap().putAll(log4jSinkParams);
+ }
+
+ private FullMigrationToolConfigBundle getFullConfigBundle(TaskWorkspace taskWorkspace) {
+ FullMigrationToolConfigBundle result = new FullMigrationToolConfigBundle();
+ result.setConfigFile(new ConfigFile("config.yml", taskWorkspace.getConfigFullDirPath(), taskWorkspace,
+ TemplateConfigType.FULL_MIGRATION_TOOL_CONFIG));
+ return result;
+ }
+
+ private DebeziumConfigBundle getIncrementalConfigBundle(TaskWorkspace taskWorkspace) {
+ DebeziumConfigBundle result = new DebeziumConfigBundle();
+ String configIncrementalDirPath = taskWorkspace.getConfigIncrementalDirPath();
+ result.setConnectSinkConfigFile(new ConfigFile("incremental-connect-sink.properties",
+ configIncrementalDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_PGSQL_SINK_CONFIG));
+ result.setConnectSourceConfigFile(new ConfigFile("incremental-connect-source.properties",
+ configIncrementalDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_PGSQL_SOURCE_CONFIG));
+ result.setWorkerSinkConfigFile(new ConfigFile("incremental-worker-sink.properties",
+ configIncrementalDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_AVRO_STANDALONE_CONFIG));
+ result.setWorkerSourceConfigFile(new ConfigFile("incremental-worker-source.properties",
+ configIncrementalDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_AVRO_STANDALONE_CONFIG));
+ result.setLog4jSinkConfigFile(new ConfigFile("incremental-log4j-sink.properties", configIncrementalDirPath,
+ taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_LOG4J2_CONFIG));
+ result.setLog4jSourceConfigFile(new ConfigFile("incremental-log4j-source.properties", configIncrementalDirPath,
+ taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_LOG4J2_CONFIG));
+ return result;
+ }
+
+ private DebeziumConfigBundle getReverseConfigBundle(TaskWorkspace taskWorkspace) {
+ DebeziumConfigBundle result = new DebeziumConfigBundle();
+ String configReverseDirPath = taskWorkspace.getConfigReverseDirPath();
+ result.setConnectSinkConfigFile(new ConfigFile("reverse-connect-sink.properties",
+ configReverseDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_OPENGAUSS_SINK_CONFIG));
+ result.setConnectSourceConfigFile(new ConfigFile("reverse-connect-source.properties",
+ configReverseDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_OPENGAUSS_SOURCE_CONFIG));
+ result.setWorkerSinkConfigFile(new ConfigFile("reverse-worker-sink.properties",
+ configReverseDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_AVRO_STANDALONE_CONFIG));
+ result.setWorkerSourceConfigFile(new ConfigFile("reverse-worker-source.properties",
+ configReverseDirPath, taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_AVRO_STANDALONE_CONFIG));
+ result.setLog4jSinkConfigFile(new ConfigFile("reverse-log4j-sink.properties", configReverseDirPath,
+ taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_LOG4J2_CONFIG));
+ result.setLog4jSourceConfigFile(new ConfigFile("reverse-log4j-source.properties", configReverseDirPath,
+ taskWorkspace, TemplateConfigType.DEBEZIUM_CONNECT_LOG4J2_CONFIG));
+ return result;
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/executor/TaskAssistantExecutor.java b/multidb-portal/src/main/java/org/opengauss/migration/executor/TaskAssistantExecutor.java
new file mode 100644
index 0000000000000000000000000000000000000000..fbdfd5c9b16ea918acedbbe1d8ba82a8c673f2e1
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/executor/TaskAssistantExecutor.java
@@ -0,0 +1,43 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.executor;
+
+import org.opengauss.domain.model.MigrationStopIndicator;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Migration task assistant executor
+ *
+ * @since 2025/3/25
+ */
+public class TaskAssistantExecutor {
+ private final MigrationStopIndicator migrationStopIndicator;
+ private final List<Runnable> steps = new ArrayList<>();
+ private int currentTaskIndex = 0;
+
+ public TaskAssistantExecutor(MigrationStopIndicator taskControlOrder) {
+ this.migrationStopIndicator = taskControlOrder;
+ }
+
+ /**
+ * Add migration step
+ *
+ * @param step migration step
+ */
+ public void addStep(Runnable step) {
+ steps.add(step);
+ }
+
+ /**
+ * Execute migration steps
+ */
+ public void execute() {
+ while (currentTaskIndex < steps.size() && !migrationStopIndicator.isStopped()) {
+ steps.get(currentTaskIndex++).run();
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/handler/ThreadExceptionHandler.java b/multidb-portal/src/main/java/org/opengauss/migration/handler/ThreadExceptionHandler.java
new file mode 100644
index 0000000000000000000000000000000000000000..fd457fb56a680e6929719df848e60e890e0a098a
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/handler/ThreadExceptionHandler.java
@@ -0,0 +1,31 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.handler;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import org.opengauss.Main;
+import org.opengauss.migration.MigrationManager;
+
+/**
+ * Migration thread uncaught exception handler
+ *
+ * @since 2025/4/1
+ */
+public class ThreadExceptionHandler implements Thread.UncaughtExceptionHandler {
+ private static final Logger LOGGER = LogManager.getLogger(ThreadExceptionHandler.class);
+
+ @Override
+ public void uncaughtException(Thread t, Throwable throwable) {
+ LOGGER.error("Uncaught exception in thread {}: ", t.getName(), throwable);
+
+ try {
+ MigrationManager.getInstance().stopOnError();
+ } finally {
+ Main.stopQuarkus();
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/helper/MigrationStatusHelper.java b/multidb-portal/src/main/java/org/opengauss/migration/helper/MigrationStatusHelper.java
new file mode 100644
index 0000000000000000000000000000000000000000..d7d57c99ff241d3e84863ffcad07acc00000651b
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/helper/MigrationStatusHelper.java
@@ -0,0 +1,180 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.helper;
+
+import org.opengauss.constants.MigrationStatusConstants;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.MigrationStatusEnum;
+
+/**
+ * migration status helper
+ *
+ * @since 2025/5/13
+ */
+public class MigrationStatusHelper {
+ private MigrationStatusHelper() {
+ }
+
+ /**
+ * generate migration status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String file path
+ */
+ public static String generateMigrationStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusDirPath = taskWorkspace.getStatusDirPath();
+ return String.format("%s/%s", statusDirPath, MigrationStatusConstants.MIGRATION_STATUS_FILE_NAME);
+ }
+
+ /**
+ * generate full migration total info status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String file path
+ */
+ public static String generateFullTotalInfoStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusFullDirPath = taskWorkspace.getStatusFullDirPath();
+ return String.format("%s/%s", statusFullDirPath, MigrationStatusConstants.FULL_TOTAL_INFO_STATUS_FILE_NAME);
+ }
+
+ /**
+ * generate full migration table status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String file path
+ */
+ public static String generateFullTableStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusFullDirPath = taskWorkspace.getStatusFullDirPath();
+ return String.format("%s/%s", statusFullDirPath, MigrationStatusConstants.FULL_TABLE_STATUS_FILE_NAME);
+ }
+
+ /**
+ * generate full migration trigger status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String file path
+ */
+ public static String generateFullTriggerStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusFullDirPath = taskWorkspace.getStatusFullDirPath();
+ return String.format("%s/%s", statusFullDirPath, MigrationStatusConstants.FULL_TRIGGER_STATUS_FILE_NAME);
+ }
+
+ /**
+ * generate full migration view status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String file path
+ */
+ public static String generateFullViewStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusFullDirPath = taskWorkspace.getStatusFullDirPath();
+ return String.format("%s/%s", statusFullDirPath, MigrationStatusConstants.FULL_VIEW_STATUS_FILE_NAME);
+ }
+
+ /**
+ * generate full migration function status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String file path
+ */
+ public static String generateFullFuncStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusFullDirPath = taskWorkspace.getStatusFullDirPath();
+ return String.format("%s/%s", statusFullDirPath, MigrationStatusConstants.FULL_FUNCTION_STATUS_FILE_NAME);
+ }
+
+ /**
+ * generate full migration procedure status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String file path
+ */
+ public static String generateFullProcStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusFullDirPath = taskWorkspace.getStatusFullDirPath();
+ return String.format("%s/%s", statusFullDirPath, MigrationStatusConstants.FULL_PROCEDURE_STATUS_FILE_NAME);
+ }
+
+ /**
+ * generate full migration check success object status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String file path
+ */
+ public static String generateFullCheckSuccessObjectStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusDir = taskWorkspace.getStatusFullDataCheckDirPath();
+ return String.format("%s/%s", statusDir, MigrationStatusConstants.FULL_CHECK_SUCCESS_OBJECT_STATUS_FILE_NAME);
+ }
+
+ /**
+ * generate full migration check failed object status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String file path
+ */
+ public static String generateFullCheckFailedObjectStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusDir = taskWorkspace.getStatusFullDataCheckDirPath();
+ return String.format("%s/%s", statusDir, MigrationStatusConstants.FULL_CHECK_FAILED_OBJECT_STATUS_FILE_NAME);
+ }
+
+ /**
+ * generate incremental migration status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String file path
+ */
+ public static String generateIncrementalStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusDirPath = taskWorkspace.getStatusIncrementalDirPath();
+ return String.format("%s/%s", statusDirPath, MigrationStatusConstants.INCREMENTAL_STATUS_FILE_NAME);
+ }
+
+ /**
+ * generate reverse migration status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String file path
+ */
+ public static String generateReverseStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusDirPath = taskWorkspace.getStatusReverseDirPath();
+ return String.format("%s/%s", statusDirPath, MigrationStatusConstants.REVERSE_STATUS_FILE_NAME);
+ }
+
+ /**
+ * Is full migration status
+ *
+ * @param status migration status enum
+ * @return boolean
+ */
+ public static boolean isFullMigrationStatus(MigrationStatusEnum status) {
+ return MigrationStatusConstants.MIGRATION_STATUS_IN_FULL_PHASE_LIST.contains(status);
+ }
+
+ /**
+ * Is full data check status
+ *
+ * @param status migration status enum
+ * @return boolean
+ */
+ public static boolean isFullDataCheckStatus(MigrationStatusEnum status) {
+ return MigrationStatusConstants.MIGRATION_STATUS_IN_FULL_CHECK_PHASE_LIST.contains(status);
+ }
+
+ /**
+ * Is incremental migration status
+ *
+ * @param status migration status enum
+ * @return boolean
+ */
+ public static boolean isIncrementalMigrationStatus(MigrationStatusEnum status) {
+ return MigrationStatusConstants.MIGRATION_STATUS_IN_INCREMENTAL_PHASE_LIST.contains(status);
+ }
+
+ /**
+ * Is reverse migration status
+ *
+ * @param status migration status enum
+ * @return boolean
+ */
+ public static boolean isReverseMigrationStatus(MigrationStatusEnum status) {
+ return MigrationStatusConstants.MIGRATION_STATUS_IN_REVERSE_PHASE_LIST.contains(status);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/helper/TaskHelper.java b/multidb-portal/src/main/java/org/opengauss/migration/helper/TaskHelper.java
new file mode 100644
index 0000000000000000000000000000000000000000..f13d546118b40655c56e662e20592aa313f7178b
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/helper/TaskHelper.java
@@ -0,0 +1,66 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.helper;
+
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.DatabaseType;
+import org.opengauss.exceptions.MigrationException;
+import org.opengauss.migration.config.AbstractMigrationJobConfig;
+import org.opengauss.utils.FileUtils;
+
+import java.io.IOException;
+
+/**
+ * Migration task helper
+ *
+ * @since 2025/7/9
+ */
+public class TaskHelper {
+ private TaskHelper() {
+ }
+
+ /**
+ * Load source database type
+ *
+ * @param taskWorkspace task workspace
+ * @return DatabaseType source database type
+ */
+ public static DatabaseType loadSourceDbType(TaskWorkspace taskWorkspace) {
+ String sourceDbTypeFilePath = taskWorkspace.getSourceDbTypeFilePath();
+ try {
+ if (FileUtils.checkFileExists(sourceDbTypeFilePath)) {
+ return DatabaseType.valueOf(FileUtils.readFileContents(sourceDbTypeFilePath).trim());
+ }
+ } catch (IOException e) {
+ throw new MigrationException("Failed to read source database type", e);
+ } catch (IllegalArgumentException e) {
+ throw new MigrationException("The source database type file is abnormal. "
+ + "Please create the migration task correctly", e);
+ }
+ throw new MigrationException("The source database type file does not exist. "
+ + "Please do not delete the file or modify the file name, "
+ + "and do not modify the directory structure of the task");
+ }
+
+ /**
+ * Load migration config from config file
+ *
+ * @param migrationJobConfig migration job config
+ */
+ public static void loadConfig(AbstractMigrationJobConfig migrationJobConfig) {
+ migrationJobConfig.loadConfig();
+ migrationJobConfig.validateConfig();
+ }
+
+ /**
+ * Change each migration phase's config
+ *
+ * @param migrationJobConfig migration job config
+ */
+ public static void changePhasesConfig(AbstractMigrationJobConfig migrationJobConfig) {
+ migrationJobConfig.changeToolsConfig();
+ migrationJobConfig.saveChangeConfig();
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/helper/config/ChameleonMysqlMigrationConfigHelper.java b/multidb-portal/src/main/java/org/opengauss/migration/helper/config/ChameleonMysqlMigrationConfigHelper.java
new file mode 100644
index 0000000000000000000000000000000000000000..548e6079f467f3fdb9b8430cc70d38339d83788d
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/helper/config/ChameleonMysqlMigrationConfigHelper.java
@@ -0,0 +1,87 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.helper.config;
+
+import org.opengauss.constants.config.ChameleonConfig;
+import org.opengauss.domain.dto.MysqlMigrationConfigDto;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.utils.StringUtils;
+
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * chameleon mysql migration config helper
+ *
+ * @since 2025/5/7
+ */
+public class ChameleonMysqlMigrationConfigHelper {
+ private ChameleonMysqlMigrationConfigHelper() {
+ }
+
+ /**
+ * get mysql full migration config map
+ *
+ * @param dto mysql migration config dto
+ * @param workspace task workspace
+ * @return mysql full migration config
+ */
+ public static Map<String, Object> mysqlFullMigrationConfig(MysqlMigrationConfigDto dto, TaskWorkspace workspace) {
+ HashMap<String, Object> changeParams = new HashMap<>();
+ changeParams.put(ChameleonConfig.MYSQL_DATABASE_IP, dto.getMysqlDatabaseIp());
+ changeParams.put(ChameleonConfig.MYSQL_DATABASE_PORT, dto.getMysqlDatabasePort());
+ changeParams.put(ChameleonConfig.MYSQL_DATABASE_USER, dto.getMysqlDatabaseUsername());
+ changeParams.put(ChameleonConfig.MYSQL_DATABASE_PASSWORD, dto.getMysqlDatabasePassword());
+ String mysqlDbName = dto.getMysqlDatabaseName();
+ changeParams.put(ChameleonConfig.MYSQL_DATABASE_NAME, mysqlDbName);
+
+ String schemaMappingKey = String.format("%s.%s", ChameleonConfig.MYSQL_SCHEMA_MAPPINGS, mysqlDbName);
+ String schemaMappingValue = mysqlDbName;
+ if (!StringUtils.isNullOrBlank(dto.getOpengaussDatabaseSchema())) {
+ schemaMappingValue = dto.getOpengaussDatabaseSchema();
+ }
+ changeParams.put(schemaMappingKey, schemaMappingValue);
+
+ if (!StringUtils.isNullOrBlank(dto.getMysqlDatabaseTables())) {
+ List<String> limitTables = Arrays.asList(dto.getMysqlDatabaseTables().split(","));
+ changeParams.put(ChameleonConfig.MYSQL_LIMIT_TABLES, limitTables);
+ }
+
+ changeParams.put(ChameleonConfig.PG_DATABASE_IP, dto.getOpengaussDatabaseIp());
+ changeParams.put(ChameleonConfig.PG_DATABASE_PORT, dto.getOpengaussDatabasePort());
+ changeParams.put(ChameleonConfig.PG_DATABASE_USER, dto.getOpengaussDatabaseUsername());
+ changeParams.put(ChameleonConfig.PG_DATABASE_PASSWORD, dto.getOpengaussDatabasePassword());
+ changeParams.put(ChameleonConfig.PG_DATABASE_NAME, dto.getOpengaussDatabaseName());
+
+ String csvDir = generateCsvDir(workspace);
+ changeParams.put(ChameleonConfig.MYSQL_CSV_DIR, csvDir);
+ changeParams.put(ChameleonConfig.MYSQL_OUT_DIR, csvDir);
+ changeParams.put(ChameleonConfig.PID_DIR, generatePidDir(workspace));
+ changeParams.put(ChameleonConfig.DUMP_JSON, "yes");
+ return changeParams;
+ }
+
+ /**
+ * get mysql pid dir
+ *
+ * @param taskWorkspace task workspace
+ * @return mysql pid dir
+ */
+ public static String generatePidDir(TaskWorkspace taskWorkspace) {
+ return String.format("%s/%s", taskWorkspace.getTmpDirPath(), "pid");
+ }
+
+ /**
+ * get mysql csv dir
+ *
+ * @param taskWorkspace task workspace
+ * @return mysql csv dir
+ */
+ public static String generateCsvDir(TaskWorkspace taskWorkspace) {
+ return taskWorkspace.getTmpDirPath();
+ }
+}
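For readers skimming the Chameleon helper above: the schema-mapping fallback (reuse the MySQL database name as the openGauss schema when none is configured) is the one piece of logic beyond straight key copying. A standalone sketch under assumed names — the `schema_mappings` prefix here stands in for the real `ChameleonConfig.MYSQL_SCHEMA_MAPPINGS` constant:

```java
import java.util.HashMap;
import java.util.Map;

public class SchemaMappingSketch {
    // Maps a MySQL database to an openGauss schema; falls back to the MySQL
    // database name when no explicit schema is configured, mirroring the helper above.
    static Map<String, Object> schemaMapping(String mysqlDbName, String opengaussSchema) {
        Map<String, Object> params = new HashMap<>();
        String key = String.format("%s.%s", "schema_mappings", mysqlDbName);
        String value = (opengaussSchema == null || opengaussSchema.isBlank())
                ? mysqlDbName
                : opengaussSchema;
        params.put(key, value);
        return params;
    }

    public static void main(String[] args) {
        System.out.println(schemaMapping("mydb", null));     // {schema_mappings.mydb=mydb}
        System.out.println(schemaMapping("mydb", "target")); // {schema_mappings.mydb=target}
    }
}
```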
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/helper/config/DataCheckerMysqlMigrationConfigHelper.java b/multidb-portal/src/main/java/org/opengauss/migration/helper/config/DataCheckerMysqlMigrationConfigHelper.java
new file mode 100644
index 0000000000000000000000000000000000000000..ddae6fa8624373012f927ee214fb3cbf698d1269
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/helper/config/DataCheckerMysqlMigrationConfigHelper.java
@@ -0,0 +1,163 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.helper.config;
+
+import org.opengauss.constants.config.DataCheckerCheckConfig;
+import org.opengauss.constants.config.DataCheckerSinkConfig;
+import org.opengauss.constants.config.DataCheckerSourceConfig;
+import org.opengauss.domain.dto.MysqlMigrationConfigDto;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.migration.helper.tool.DataCheckerHelper;
+import org.opengauss.migration.tools.Kafka;
+import org.opengauss.utils.StringUtils;
+
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * data checker mysql migration config helper
+ *
+ * @since 2025/5/8
+ */
+public class DataCheckerMysqlMigrationConfigHelper {
+ private DataCheckerMysqlMigrationConfigHelper() {
+ }
+
+ /**
+ * get mysql full data check source process config map
+ *
+ * @param dto mysql migration config dto
+ * @param logConfigPath log config path
+ * @return mysql full data check source config
+ */
+ public static Map<String, Object> mysqlFullDataCheckSourceConfig(
+ MysqlMigrationConfigDto dto, String logConfigPath) {
+ HashMap<String, Object> changeParams = new HashMap<>();
+
+ String mysqlDatabaseIp = dto.getMysqlDatabaseIp();
+ String mysqlDatabasePort = dto.getMysqlDatabasePort();
+ String mysqlDatabaseName = dto.getMysqlDatabaseName();
+ String mysqlDatabaseUrl = String.format("jdbc:mysql://%s:%s/%s?useSSL=false&useUnicode=true"
+ + "&characterEncoding=utf-8&serverTimezone=UTC&allowPublicKeyRetrieval=true",
+ mysqlDatabaseIp, mysqlDatabasePort, mysqlDatabaseName);
+ changeParams.put(DataCheckerSourceConfig.DATABASE_URL, mysqlDatabaseUrl);
+ changeParams.put(DataCheckerSourceConfig.DATABASE_USERNAME, dto.getMysqlDatabaseUsername());
+ changeParams.put(DataCheckerSourceConfig.DATABASE_PASSWORD, dto.getMysqlDatabasePassword());
+ changeParams.put(DataCheckerSourceConfig.EXTRACT_SCHEMA, mysqlDatabaseName);
+
+ Kafka kafka = Kafka.getInstance();
+ String kafkaIpPort = kafka.getKafkaIpPort();
+ String schemaRegistryUrl = kafka.getSchemaRegistryUrl();
+ changeParams.put(DataCheckerSourceConfig.EXTRACT_DEBEZIUM_AVRO_REGISTRY, schemaRegistryUrl);
+ changeParams.put(DataCheckerSourceConfig.KAFKA_BOOTSTRAP_SERVERS, kafkaIpPort);
+
+ changeParams.put(DataCheckerSourceConfig.EXTRACT_DEBEZIUM_ENABLE, false);
+ changeParams.put(DataCheckerSourceConfig.LOGGING_CONFIG, logConfigPath);
+ return changeParams;
+ }
+
+ /**
+ * get mysql full data check sink process config map
+ *
+ * @param dto mysql migration config dto
+ * @param logConfigPath log config path
+ * @return mysql full data check sink config
+ */
+ public static Map<String, Object> mysqlFullDataCheckSinkConfig(MysqlMigrationConfigDto dto, String logConfigPath) {
+ HashMap<String, Object> changeParams = new HashMap<>();
+
+ String opengaussDatabaseIp = dto.getOpengaussDatabaseIp();
+ String opengaussDatabasePort = dto.getOpengaussDatabasePort();
+ String opengaussDatabaseName = dto.getOpengaussDatabaseName();
+ String opengaussDatabaseUrl = String.format(
+ "jdbc:opengauss://%s:%s/%s?useSSL=false&useUnicode=true&characterEncoding=utf-8&serverTimezone=UTC",
+ opengaussDatabaseIp, opengaussDatabasePort, opengaussDatabaseName);
+ changeParams.put(DataCheckerSinkConfig.DATABASE_URL, opengaussDatabaseUrl);
+ changeParams.put(DataCheckerSinkConfig.DATABASE_USERNAME, dto.getOpengaussDatabaseUsername());
+ changeParams.put(DataCheckerSinkConfig.DATABASE_PASSWORD, dto.getOpengaussDatabasePassword());
+
+ if (StringUtils.isNullOrBlank(dto.getOpengaussDatabaseSchema())) {
+ changeParams.put(DataCheckerSinkConfig.EXTRACT_SCHEMA, dto.getMysqlDatabaseName());
+ } else {
+ changeParams.put(DataCheckerSinkConfig.EXTRACT_SCHEMA, dto.getOpengaussDatabaseSchema());
+ }
+
+ Kafka kafka = Kafka.getInstance();
+ String kafkaIpPort = kafka.getKafkaIpPort();
+ String schemaRegistryUrl = kafka.getSchemaRegistryUrl();
+ changeParams.put(DataCheckerSinkConfig.EXTRACT_DEBEZIUM_AVRO_REGISTRY, schemaRegistryUrl);
+ changeParams.put(DataCheckerSinkConfig.KAFKA_BOOTSTRAP_SERVERS, kafkaIpPort);
+
+ changeParams.put(DataCheckerSinkConfig.EXTRACT_DEBEZIUM_ENABLE, false);
+ changeParams.put(DataCheckerSinkConfig.LOGGING_CONFIG, logConfigPath);
+ return changeParams;
+ }
+
+ /**
+ * get the check process config map for mysql full data check
+ *
+ * @param taskWorkspace task workspace
+ * @param logConfigPath log config path
+ * @return mysql full data check check process config
+ */
+ public static Map<String, Object> mysqlFullDataCheckCheckConfig(TaskWorkspace taskWorkspace, String logConfigPath) {
+ HashMap<String, Object> changeParams = new HashMap<>();
+
+ changeParams.put(DataCheckerCheckConfig.DATA_CHECK_DATA_PATH,
+ DataCheckerHelper.generateFullDataCheckDataPath(taskWorkspace));
+ changeParams.put(DataCheckerCheckConfig.LOGGING_CONFIG, logConfigPath);
+
+ String kafkaIpPort = Kafka.getInstance().getKafkaIpPort();
+ changeParams.put(DataCheckerCheckConfig.KAFKA_BOOTSTRAP_SERVERS, kafkaIpPort);
+ return changeParams;
+ }
+
+ /**
+ * get mysql incremental data check source process config map
+ *
+ * @param dto mysql migration config dto
+ * @param logConfigPath log config path
+ * @param sourceTopic incremental migration source topic
+ * @return mysql incremental data check source config
+ */
+ public static Map<String, Object> mysqlIncrementalDataCheckSourceConfig(
+ MysqlMigrationConfigDto dto, String logConfigPath, String sourceTopic) {
+ Map<String, Object> changeParams = mysqlFullDataCheckSourceConfig(dto, logConfigPath);
+ changeParams.put(DataCheckerSourceConfig.EXTRACT_DEBEZIUM_ENABLE, true);
+ changeParams.put(DataCheckerSourceConfig.EXTRACT_DEBEZIUM_TOPIC, sourceTopic);
+ return changeParams;
+ }
+
+ /**
+ * get mysql incremental data check sink process config map
+ *
+ * @param dto mysql migration config dto
+ * @param logConfigPath log config path
+ * @param sinkTopic incremental migration sink topic
+ * @return mysql incremental data check sink config
+ */
+ public static Map<String, Object> mysqlIncrementalDataCheckSinkConfig(
+ MysqlMigrationConfigDto dto, String logConfigPath, String sinkTopic) {
+ Map<String, Object> changeParams = mysqlFullDataCheckSinkConfig(dto, logConfigPath);
+ changeParams.put(DataCheckerSinkConfig.EXTRACT_DEBEZIUM_ENABLE, true);
+ changeParams.put(DataCheckerSinkConfig.EXTRACT_DEBEZIUM_TOPIC, sinkTopic);
+ return changeParams;
+ }
+
+ /**
+ * get the check process config map for mysql incremental data check
+ *
+ * @param taskWorkspace task workspace
+ * @param logConfigPath log config path
+ * @return mysql incremental data check check process config
+ */
+ public static Map<String, Object> mysqlIncrementalDataCheckCheckConfig(
+ TaskWorkspace taskWorkspace, String logConfigPath) {
+ Map<String, Object> changeParams = mysqlFullDataCheckCheckConfig(taskWorkspace, logConfigPath);
+ changeParams.put(DataCheckerCheckConfig.DATA_CHECK_DATA_PATH,
+ DataCheckerHelper.generateIncrementalDataCheckDataPath(taskWorkspace));
+ return changeParams;
+ }
+}
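A pattern worth noting in the data-check helper above: the incremental config methods reuse the full-check maps and override only the Debezium-related entries. A minimal sketch of that layering, with placeholder keys rather than the real `DataCheckerSourceConfig` constants:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigLayering {
    // Base configuration shared by full and incremental data check.
    static Map<String, Object> fullCheckConfig() {
        Map<String, Object> params = new HashMap<>();
        params.put("debezium.enable", false);
        params.put("kafka.servers", "localhost:9092");
        return params;
    }

    // Incremental check = full check + Debezium switched on + a source/sink topic,
    // mirroring mysqlIncrementalDataCheck*Config above.
    static Map<String, Object> incrementalCheckConfig(String topic) {
        Map<String, Object> params = fullCheckConfig();
        params.put("debezium.enable", true);
        params.put("debezium.topic", topic);
        return params;
    }

    public static void main(String[] args) {
        System.out.println(incrementalCheckConfig("my_topic"));
    }
}
```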
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/helper/config/DebeziumMysqlMigrationConfigHelper.java b/multidb-portal/src/main/java/org/opengauss/migration/helper/config/DebeziumMysqlMigrationConfigHelper.java
new file mode 100644
index 0000000000000000000000000000000000000000..f67f3eaef6f00aba6a46108f54b7c976a7ab258f
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/helper/config/DebeziumMysqlMigrationConfigHelper.java
@@ -0,0 +1,619 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.helper.config;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.config.ConnectAvroStandaloneConfig;
+import org.opengauss.constants.config.DebeziumConnectLog4jConfig;
+import org.opengauss.constants.config.DebeziumMysqlSinkConfig;
+import org.opengauss.constants.config.DebeziumMysqlSourceConfig;
+import org.opengauss.constants.config.DebeziumOpenGaussSinkConfig;
+import org.opengauss.constants.config.DebeziumOpenGaussSourceConfig;
+import org.opengauss.domain.dto.MysqlMigrationConfigDto;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.DebeziumProcessType;
+import org.opengauss.migration.tools.Debezium;
+import org.opengauss.migration.tools.Kafka;
+import org.opengauss.utils.FileUtils;
+import org.opengauss.utils.JdbcUtils;
+import org.opengauss.utils.OpenGaussUtils;
+import org.opengauss.utils.StringUtils;
+import org.opengauss.utils.TimeUtils;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * debezium mysql migration config helper
+ *
+ * @since 2025/5/7
+ */
+public class DebeziumMysqlMigrationConfigHelper {
+ private static final Logger LOGGER = LogManager.getLogger(DebeziumMysqlMigrationConfigHelper.class);
+
+ private DebeziumMysqlMigrationConfigHelper() {
+ }
+
+ /**
+ * get mysql incremental migration source process config
+ *
+ * @param dto mysql migration config dto
+ * @param workspace task workspace
+ * @return incremental source config
+ */
+ public static Map<String, Object> incrementalSourceConfig(MysqlMigrationConfigDto dto, TaskWorkspace workspace) {
+ HashMap<String, Object> changeParams = new HashMap<>();
+ changeParams.put(DebeziumMysqlSourceConfig.DATABASE_HOSTNAME, dto.getMysqlDatabaseIp());
+ changeParams.put(DebeziumMysqlSourceConfig.DATABASE_PORT, dto.getMysqlDatabasePort());
+ changeParams.put(DebeziumMysqlSourceConfig.DATABASE_USER, dto.getMysqlDatabaseUsername());
+ changeParams.put(DebeziumMysqlSourceConfig.DATABASE_PASSWORD, dto.getMysqlDatabasePassword());
+ changeParams.put(DebeziumMysqlSourceConfig.DATABASE_INCLUDE_LIST, dto.getMysqlDatabaseName());
+ if (!StringUtils.isNullOrBlank(dto.getMysqlDatabaseTables())) {
+ changeParams.put(DebeziumMysqlSourceConfig.TABLE_INCLUDE_LIST, dto.getMysqlDatabaseTables());
+ }
+
+ String kafkaServer = Kafka.getInstance().getKafkaIpPort();
+ changeParams.put(DebeziumMysqlSourceConfig.DATABASE_HISTORY_KAFKA_SERVERS, kafkaServer);
+ changeParams.put(DebeziumMysqlSourceConfig.KAFKA_BOOTSTRAP_SERVERS, kafkaServer);
+
+ String workspaceId = workspace.getId();
+ changeParams.put(DebeziumMysqlSourceConfig.NAME, "mysql_source_" + workspaceId);
+
+ String databaseServerName = generateIncrementalDatabaseServerName(workspace);
+ changeParams.put(DebeziumMysqlSourceConfig.DATABASE_SERVER_NAME, databaseServerName);
+ changeParams.put(DebeziumMysqlSourceConfig.DATABASE_SERVER_ID,
+ String.valueOf(TimeUtils.timestampFrom20250101()));
+ changeParams.put(DebeziumMysqlSourceConfig.DATABASE_HISTORY_KAFKA_TOPIC,
+ generateIncrementalHistoryKafkaTopic(workspace));
+ changeParams.put(DebeziumMysqlSourceConfig.TRANSFORMS_ROUTE_REGEX,
+ "^" + databaseServerName + "(.*)");
+ changeParams.put(DebeziumMysqlSourceConfig.TRANSFORMS_ROUTE_REPLACEMENT,
+ generateIncrementalKafkaTopic(workspace));
+
+ String processFilePath = generateIncrementalProcessFilePath(workspace);
+ changeParams.put(DebeziumMysqlSourceConfig.SOURCE_PROCESS_FILE_PATH, processFilePath);
+ changeParams.put(DebeziumMysqlSourceConfig.CREATE_COUNT_INFO_PATH, processFilePath);
+ return changeParams;
+ }
+
+ /**
+ * get mysql incremental migration sink process config
+ *
+ * @param dto mysql migration config dto
+ * @param workspace task workspace
+ * @return incremental sink config
+ */
+ public static Map<String, Object> incrementalSinkConfig(MysqlMigrationConfigDto dto, TaskWorkspace workspace) {
+ HashMap<String, Object> changeParams = new HashMap<>();
+
+ String opengaussUrl = String.format("jdbc:opengauss://%s:%s/%s?loggerLevel=OFF",
+ dto.getOpengaussDatabaseIp(), dto.getOpengaussDatabasePort(), dto.getOpengaussDatabaseName());
+ changeParams.put(DebeziumMysqlSinkConfig.OPENGAUSS_URL, opengaussUrl);
+ changeParams.put(DebeziumMysqlSinkConfig.OPENGAUSS_USERNAME, dto.getOpengaussDatabaseUsername());
+ changeParams.put(DebeziumMysqlSinkConfig.OPENGAUSS_PASSWORD, dto.getOpengaussDatabasePassword());
+
+ String schemaMappings = generateIncrementalSchemaMappings(dto);
+ changeParams.put(DebeziumMysqlSinkConfig.SCHEMA_MAPPINGS, schemaMappings);
+
+ if (dto.isOpenGaussClusterAvailable()) {
+ changeParams.put(DebeziumMysqlSinkConfig.OPENGAUSS_STANDBY_HOSTS, dto.getOpengaussDatabaseStandbyHosts());
+ changeParams.put(DebeziumMysqlSinkConfig.OPENGAUSS_STANDBY_PORTS, dto.getOpengaussDatabaseStandbyPorts());
+ }
+
+ String kafkaServer = Kafka.getInstance().getKafkaIpPort();
+ changeParams.put(DebeziumMysqlSinkConfig.RECORD_BREAKPOINT_KAFKA_BOOTSTRAP_SERVERS, kafkaServer);
+
+ String workspaceId = workspace.getId();
+ changeParams.put(DebeziumMysqlSinkConfig.NAME, "mysql_sink_" + workspaceId);
+ changeParams.put(DebeziumMysqlSinkConfig.TOPICS, generateIncrementalKafkaTopic(workspace));
+ changeParams.put(DebeziumMysqlSinkConfig.RECORD_BREAKPOINT_KAFKA_TOPIC,
+ generateIncrementalBreakpointKafkaTopic(workspace));
+
+ String processFilePath = generateIncrementalProcessFilePath(workspace);
+ changeParams.put(DebeziumMysqlSinkConfig.CREATE_COUNT_INFO_PATH, processFilePath);
+ changeParams.put(DebeziumMysqlSinkConfig.SINK_PROCESS_FILE_PATH, processFilePath);
+ changeParams.put(DebeziumMysqlSinkConfig.FAIL_SQL_PATH, processFilePath);
+
+ String xlogPath = generateXlogPath(workspace);
+ changeParams.put(DebeziumMysqlSinkConfig.XLOG_LOCATION, xlogPath);
+
+ return changeParams;
+ }
+
+ /**
+ * get mysql incremental migration worker source process config
+ *
+ * @param taskWorkspace task workspace
+ * @return incremental worker source config
+ */
+ public static Map<String, Object> incrementalWorkerSourceConfig(TaskWorkspace taskWorkspace) {
+ HashMap<String, Object> changeParams = new HashMap<>();
+
+ Kafka kafka = Kafka.getInstance();
+ String kafkaServer = kafka.getKafkaIpPort();
+ String schemaRegistryUrl = kafka.getSchemaRegistryUrl();
+ changeParams.put(ConnectAvroStandaloneConfig.SCHEMA_REGISTRY_URL_FOR_KEY_CONVERTER, schemaRegistryUrl);
+ changeParams.put(ConnectAvroStandaloneConfig.SCHEMA_REGISTRY_URL_FOR_VALUE_CONVERTER, schemaRegistryUrl);
+ changeParams.put(ConnectAvroStandaloneConfig.CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY, "All");
+ changeParams.put(ConnectAvroStandaloneConfig.KAFKA_SERVERS, kafkaServer);
+
+ changeParams.put(ConnectAvroStandaloneConfig.OFFSET_STORAGE_FILE_FILENAME,
+ generateIncrementalStorageOffsetFilePath(taskWorkspace));
+ String pluginPath = "share/java, " + Debezium.getInstance().getInstallDirPath();
+ changeParams.put(ConnectAvroStandaloneConfig.PLUGIN_PATH, pluginPath);
+
+ return changeParams;
+ }
+
+ /**
+ * get mysql incremental migration worker sink process config
+ *
+ * @param taskWorkspace task workspace
+ * @return incremental worker sink config
+ */
+ public static Map<String, Object> incrementalWorkerSinkConfig(TaskWorkspace taskWorkspace) {
+ return incrementalWorkerSourceConfig(taskWorkspace);
+ }
+
+ /**
+ * get mysql incremental migration log4j config map
+ *
+ * @param taskWorkspace task workspace
+ * @param processType process type
+ * @return incremental log4j config
+ */
+ public static Map<String, Object> incrementalLog4jConfig(
+ TaskWorkspace taskWorkspace, DebeziumProcessType processType) {
+ HashMap<String, Object> changeParams = new HashMap<>();
+ String logsIncrementalDirPath = taskWorkspace.getLogsIncrementalDirPath();
+ String logPath = String.format("%s/incremental-connect-%s.log", logsIncrementalDirPath, processType.getType());
+ changeParams.put(DebeziumConnectLog4jConfig.CONNECT_APPENDER_FILE, logPath);
+
+ changeParams.put(DebeziumConnectLog4jConfig.KAFKA_ERROR_LOGGER, "ERROR, kafkaErrorAppender");
+ changeParams.put(DebeziumConnectLog4jConfig.KAFKA_ERROR_APPENDER, "org.apache.log4j.FileAppender");
+ changeParams.put(DebeziumConnectLog4jConfig.KAFKA_ERROR_APPENDER_LAYOUT, "org.apache.log4j.PatternLayout");
+ changeParams.put(DebeziumConnectLog4jConfig.KAFKA_ERROR_APPENDER_LAYOUT_CONVERSION_PATTERN,
+ "%d{yyyy-MM-dd HH:mm:ss,SSS} [%t] %p %c:(%L) - %m%n");
+
+ String kafkaErrorLogPath = generateIncrementalKafkaErrorLogPath(taskWorkspace, processType);
+ changeParams.put(DebeziumConnectLog4jConfig.KAFKA_ERROR_APPENDER_FILE, kafkaErrorLogPath);
+ return changeParams;
+ }
+
+ /**
+ * get mysql reverse migration source process config
+ *
+ * @param dto mysql migration config dto
+ * @param workspace task workspace
+ * @return reverse source config
+ */
+ public static Map<String, Object> reverseSourceConfig(MysqlMigrationConfigDto dto, TaskWorkspace workspace) {
+ Map<String, Object> changeParams = new HashMap<>();
+
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_HOSTNAME, dto.getOpengaussDatabaseIp());
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_PORT, dto.getOpengaussDatabasePort());
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_USER, dto.getOpengaussDatabaseUsername());
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_PASSWORD, dto.getOpengaussDatabasePassword());
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_NAME, dto.getOpengaussDatabaseName());
+ if (!StringUtils.isNullOrBlank(dto.getMysqlDatabaseTables())) {
+ changeParams.put(DebeziumOpenGaussSourceConfig.TABLE_INCLUDE_LIST, dto.getMysqlDatabaseTables());
+ }
+
+ if (dto.isOpenGaussClusterAvailable()) {
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_IS_CLUSTER, true);
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_STANDBY_HOSTNAMES,
+ dto.getOpengaussDatabaseStandbyHosts());
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_STANDBY_PORTS,
+ dto.getOpengaussDatabaseStandbyPorts());
+ }
+
+ String workspaceId = workspace.getId();
+ changeParams.put(DebeziumOpenGaussSourceConfig.NAME, "opengauss_source_" + workspaceId);
+
+ String databaseServerName = generateReverseDatabaseServerName(workspace);
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_SERVER_NAME, databaseServerName);
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_HISTORY_KAFKA_TOPIC,
+ generateReverseHistoryKafkaTopic(workspace));
+ changeParams.put(DebeziumOpenGaussSourceConfig.TRANSFORMS_ROUTE_REGEX,
+ "^" + databaseServerName + "(.*)");
+ changeParams.put(DebeziumOpenGaussSourceConfig.TRANSFORMS_ROUTE_REPLACEMENT,
+ generateReverseKafkaTopic(workspace));
+
+ String processFilePath = generateReverseProcessFilePath(workspace);
+ changeParams.put(DebeziumOpenGaussSourceConfig.SOURCE_PROCESS_FILE_PATH, processFilePath);
+ changeParams.put(DebeziumOpenGaussSourceConfig.CREATE_COUNT_INFO_PATH,
+ processFilePath);
+
+ changeParams.put(DebeziumOpenGaussSourceConfig.SLOT_NAME, generateReverseSlotName(workspace));
+ changeParams.put(DebeziumOpenGaussSourceConfig.SLOT_DROP_ON_STOP, false);
+
+ try (Connection connection = JdbcUtils.getOpengaussConnection(dto.getOpenGaussConnectInfo())) {
+ if (!OpenGaussUtils.isSystemAdmin(dto.getOpengaussDatabaseUsername(), connection)) {
+ changeParams.put(DebeziumOpenGaussSourceConfig.PUBLICATION_AUTO_CREATE_MODE, "filtered");
+ }
+ } catch (SQLException e) {
+ LOGGER.warn("Failed to get system admin status, publication.autocreate.mode is not set to filtered."
+ + " Error: {}", e.getMessage());
+ }
+ return changeParams;
+ }
+
+ /**
+ * get mysql reverse migration sink process config
+ *
+ * @param dto mysql migration config dto
+ * @param taskWorkspace task workspace
+ * @return reverse sink config
+ */
+ public static Map<String, Object> reverseSinkConfig(MysqlMigrationConfigDto dto, TaskWorkspace taskWorkspace) {
+ Map<String, Object> changeParams = new HashMap<>();
+
+ changeParams.put(DebeziumOpenGaussSinkConfig.DATABASE_TYPE, "mysql");
+ changeParams.put(DebeziumOpenGaussSinkConfig.DATABASE_IP, dto.getMysqlDatabaseIp());
+ changeParams.put(DebeziumOpenGaussSinkConfig.DATABASE_PORT, dto.getMysqlDatabasePort());
+ changeParams.put(DebeziumOpenGaussSinkConfig.DATABASE_USERNAME, dto.getMysqlDatabaseUsername());
+ changeParams.put(DebeziumOpenGaussSinkConfig.DATABASE_PASSWORD, dto.getMysqlDatabasePassword());
+ changeParams.put(DebeziumOpenGaussSinkConfig.DATABASE_NAME, dto.getMysqlDatabaseName());
+
+ String schemaMappings = generateReverseSchemaMappings(dto);
+ changeParams.put(DebeziumOpenGaussSinkConfig.SCHEMA_MAPPINGS, schemaMappings);
+ if (!StringUtils.isNullOrBlank(dto.getMysqlDatabaseTables())) {
+ changeParams.put(DebeziumOpenGaussSinkConfig.TABLE_INCLUDE_LIST, dto.getMysqlDatabaseTables());
+ }
+
+ String workspaceId = taskWorkspace.getId();
+ changeParams.put(DebeziumOpenGaussSinkConfig.NAME, "opengauss_sink_" + workspaceId);
+ changeParams.put(DebeziumOpenGaussSinkConfig.TOPICS, generateReverseKafkaTopic(taskWorkspace));
+ changeParams.put(DebeziumOpenGaussSinkConfig.RECORD_BREAKPOINT_KAFKA_TOPIC,
+ generateReverseBreakpointKafkaTopic(taskWorkspace));
+
+ String processFilePath = generateReverseProcessFilePath(taskWorkspace);
+ changeParams.put(DebeziumOpenGaussSinkConfig.CREATE_COUNT_INFO_PATH, processFilePath);
+ changeParams.put(DebeziumOpenGaussSinkConfig.SINK_PROCESS_FILE_PATH, processFilePath);
+ changeParams.put(DebeziumOpenGaussSinkConfig.FAIL_SQL_PATH, processFilePath);
+
+ String kafkaServer = Kafka.getInstance().getKafkaIpPort();
+ changeParams.put(DebeziumOpenGaussSinkConfig.RECORD_BREAKPOINT_KAFKA_BOOTSTRAP_SERVERS, kafkaServer);
+
+ return changeParams;
+ }
+
+ /**
+ * get mysql reverse migration worker source process config
+ *
+ * @param taskWorkspace task workspace
+ * @return reverse worker source config
+ */
+ public static Map<String, Object> reverseWorkerSourceConfig(TaskWorkspace taskWorkspace) {
+ Map<String, Object> changeParams = incrementalWorkerSourceConfig(taskWorkspace);
+ changeParams.put(ConnectAvroStandaloneConfig.OFFSET_STORAGE_FILE_FILENAME,
+ generateReverseStorageOffsetFilePath(taskWorkspace));
+ return changeParams;
+ }
+
+ /**
+ * get mysql reverse migration worker sink process config
+ *
+ * @param taskWorkspace task workspace
+ * @return reverse worker sink config
+ */
+ public static Map<String, Object> reverseWorkerSinkConfig(TaskWorkspace taskWorkspace) {
+ return reverseWorkerSourceConfig(taskWorkspace);
+ }
+
+ /**
+ * get mysql reverse migration log4j config
+ *
+ * @param workspace task workspace
+ * @param processType process type
+ * @return reverse log4j config
+ */
+ public static Map<String, Object> reverseLog4jConfig(TaskWorkspace workspace, DebeziumProcessType processType) {
+ Map<String, Object> changeParams = incrementalLog4jConfig(workspace, processType);
+
+ String logsReverseDirPath = workspace.getLogsReverseDirPath();
+ String logPath = String.format("%s/reverse-connect-%s.log", logsReverseDirPath, processType.getType());
+ changeParams.put(DebeziumConnectLog4jConfig.CONNECT_APPENDER_FILE, logPath);
+
+ String kafkaErrorLogPath = generateReverseKafkaErrorLogPath(workspace, processType);
+ changeParams.put(DebeziumConnectLog4jConfig.KAFKA_ERROR_APPENDER_FILE, kafkaErrorLogPath);
+ return changeParams;
+ }
+
+ /**
+ * generate mysql reverse migration openGauss slot name
+ *
+ * @param taskWorkspace task workspace
+ * @return reverse migration slot name
+ */
+ public static String generateReverseSlotName(TaskWorkspace taskWorkspace) {
+ return "slot_" + taskWorkspace.getId();
+ }
+
+ /**
+ * generate mysql incremental migration connect kafka error log path
+ *
+ * @param workspace task workspace
+ * @param processType process type
+ * @return incremental migration connect kafka error log path
+ */
+ public static String generateIncrementalKafkaErrorLogPath(
+ TaskWorkspace workspace, DebeziumProcessType processType) {
+ String logsIncrementalDirPath = workspace.getLogsIncrementalDirPath();
+ return String.format("%s/kafka-connect/connect-%s-error.log", logsIncrementalDirPath, processType.getType());
+ }
+
+ /**
+ * generate mysql reverse migration connect kafka error log path
+ *
+ * @param workspace task workspace
+ * @param processType process type
+ * @return reverse migration connect kafka error log path
+ */
+ public static String generateReverseKafkaErrorLogPath(TaskWorkspace workspace, DebeziumProcessType processType) {
+ String logsReverseDirPath = workspace.getLogsReverseDirPath();
+ return String.format("%s/kafka-connect/connect-%s-error.log", logsReverseDirPath, processType.getType());
+ }
+
+ /**
+ * generate mysql incremental migration kafka topic
+ *
+ * @param taskWorkspace task workspace
+ * @return incremental migration kafka topic
+ */
+ public static String generateIncrementalKafkaTopic(TaskWorkspace taskWorkspace) {
+ return generateIncrementalDatabaseServerName(taskWorkspace) + "_topic";
+ }
+
+ /**
+ * generate mysql reverse migration kafka topic
+ *
+ * @param taskWorkspace task workspace
+ * @return reverse migration kafka topic
+ */
+ public static String generateReverseKafkaTopic(TaskWorkspace taskWorkspace) {
+ return generateReverseDatabaseServerName(taskWorkspace) + "_topic";
+ }
+
+ /**
+ * generate mysql incremental migration history kafka topic
+ *
+ * @param taskWorkspace task workspace
+ * @return incremental migration history kafka topic
+ */
+ public static String generateIncrementalHistoryKafkaTopic(TaskWorkspace taskWorkspace) {
+ return generateIncrementalKafkaTopic(taskWorkspace) + "_history";
+ }
+
+ /**
+ * generate mysql reverse migration history kafka topic
+ *
+ * @param taskWorkspace task workspace
+ * @return reverse migration history kafka topic
+ */
+ public static String generateReverseHistoryKafkaTopic(TaskWorkspace taskWorkspace) {
+ return generateReverseKafkaTopic(taskWorkspace) + "_history";
+ }
+
+ /**
+ * generate mysql incremental migration breakpoint kafka topic
+ *
+ * @param taskWorkspace task workspace
+ * @return incremental migration breakpoint kafka topic
+ */
+ public static String generateIncrementalBreakpointKafkaTopic(TaskWorkspace taskWorkspace) {
+ return generateIncrementalKafkaTopic(taskWorkspace) + "_bp";
+ }
+
+ /**
+ * generate mysql reverse migration breakpoint kafka topic
+ *
+ * @param taskWorkspace task workspace
+ * @return reverse migration breakpoint kafka topic
+ */
+ public static String generateReverseBreakpointKafkaTopic(TaskWorkspace taskWorkspace) {
+ return generateReverseKafkaTopic(taskWorkspace) + "_bp";
+ }
+
+ /**
+ * generate mysql incremental migration storage offset file path
+ *
+ * @param taskWorkspace task workspace
+ * @return incremental migration storage offset file path
+ */
+ public static String generateIncrementalStorageOffsetFilePath(TaskWorkspace taskWorkspace) {
+ return String.format("%s/%s", taskWorkspace.getTmpDirPath(), "incremental-connect.offsets");
+ }
+
+ /**
+ * generate mysql reverse migration storage offset file path
+ *
+ * @param taskWorkspace task workspace
+ * @return reverse migration storage offset file path
+ */
+ public static String generateReverseStorageOffsetFilePath(TaskWorkspace taskWorkspace) {
+ return String.format("%s/%s", taskWorkspace.getTmpDirPath(), "reverse-connect.offsets");
+ }
+
+ /**
+ * set mysql incremental migration snapshot offset
+ *
+ * @param changeParams change params map
+ * @param dto mysql migration config dto
+ */
+ public static void setSnapshotOffset(Map<String, Object> changeParams, MysqlMigrationConfigDto dto) {
+ String mysqlActiveCheckSql = "show variables like 'read_only';";
+ String mysqlActiveGtidSql = "show global variables like 'server_uuid';";
+ String mysqlStandbyGtidSql = "show slave status;";
+ String snapshotSchema = "sch_chameleon";
+ String oGGtidSql = "select t_binlog_name,i_binlog_position,t_gtid_set from sch_chameleon.t_replica_batch;";
+
+ try (Connection opengaussConnection = JdbcUtils.getOpengaussConnection(dto.getOpenGaussConnectInfo())) {
+ if (!OpenGaussUtils.isSchemaExists(snapshotSchema, opengaussConnection)) {
+ return;
+ }
+
+ try (Connection mysqlConnection = JdbcUtils.getMysqlConnection(dto.getMysqlConnectInfo());
+ Statement mysqlStatement1 = mysqlConnection.createStatement();
+ Statement mysqlStatement2 = mysqlConnection.createStatement();
+ Statement mysqlStatement3 = mysqlConnection.createStatement();
+ ResultSet mysqlActiveCheckResultSet = mysqlStatement1.executeQuery(mysqlActiveCheckSql);
+ ResultSet mysqlActiveGtidResultSet = mysqlStatement2.executeQuery(mysqlActiveGtidSql);
+ ResultSet mysqlStandbyGtidResultSet = mysqlStatement3.executeQuery(mysqlStandbyGtidSql);
+ Statement oGStatement = opengaussConnection.createStatement();
+ ResultSet oGGtidResultSet = oGStatement.executeQuery(oGGtidSql)) {
+ String mysqlCurrentUuid = "";
+ if (mysqlActiveCheckResultSet.next()) {
+ String mysqlActiveResult = mysqlActiveCheckResultSet.getString("Value");
+ if ("OFF".equals(mysqlActiveResult)) {
+ if (mysqlActiveGtidResultSet.next()) {
+ mysqlCurrentUuid = mysqlActiveGtidResultSet.getString("Value");
+ }
+ } else {
+ if (mysqlStandbyGtidResultSet.next()) {
+ mysqlCurrentUuid = mysqlStandbyGtidResultSet.getString("Master_UUID");
+ }
+ }
+ }
+
+ if (oGGtidResultSet.next()) {
+ String tBinlogName = oGGtidResultSet.getString("t_binlog_name");
+ String iBinlogPosition = oGGtidResultSet.getString("i_binlog_position");
+ String tGtidSet = oGGtidResultSet.getString("t_gtid_set");
+
+ if (StringUtils.isNullOrBlank(tGtidSet)) {
+ LOGGER.warn("Mysql Execute_Gtid_Set is empty");
+ return;
+ }
+
+ String preGtidSet = getPreGtidSet(tGtidSet, mysqlCurrentUuid);
+ changeParams.put(DebeziumMysqlSourceConfig.SNAPSHOT_OFFSET_BINLOG_FILENAME, tBinlogName);
+ changeParams.put(DebeziumMysqlSourceConfig.SNAPSHOT_OFFSET_BINLOG_POSITION, iBinlogPosition);
+ changeParams.put(DebeziumMysqlSourceConfig.SNAPSHOT_OFFSET_GTID_SET, preGtidSet);
+ }
+ }
+ } catch (SQLException | ClassNotFoundException e) {
+ LOGGER.warn("Failed to load Mysql Execute_Gtid_Set", e);
+ }
+ }
+
+ /**
+ * generate mysql incremental migration process file path
+ *
+ * @param taskWorkspace task workspace
+ * @return incremental migration process file path
+ */
+ public static String generateIncrementalProcessFilePath(TaskWorkspace taskWorkspace) {
+ return taskWorkspace.getStatusIncrementalDirPath();
+ }
+
+ /**
+ * generate mysql reverse migration process file path
+ *
+ * @param taskWorkspace task workspace
+ * @return reverse migration process file path
+ */
+ public static String generateReverseProcessFilePath(TaskWorkspace taskWorkspace) {
+ return taskWorkspace.getStatusReverseDirPath();
+ }
+
+ /**
+ * read xlog location
+ *
+ * @param taskWorkspace task workspace
+ * @return xlog location
+ */
+ public static String readXlogLocation(TaskWorkspace taskWorkspace) {
+ String xlogPath = generateXlogPath(taskWorkspace);
+ String xlogLocation = "";
+ try {
+ String fileContents = FileUtils.readFileContents(xlogPath);
+ String[] lines = fileContents.split("\n");
+ for (String line : lines) {
+ if (line.contains(DebeziumOpenGaussSourceConfig.XLOG_LOCATION)) {
+ int index = line.lastIndexOf("=") + 1;
+ xlogLocation = line.substring(index).trim();
+ }
+ }
+ } catch (IOException e) {
+ LOGGER.warn("Failed to read xlog location, error: {}", e.getMessage());
+ }
+ return xlogLocation;
+ }
+
+ /**
+ * generate mysql incremental migration xlog file path
+ *
+ * @param taskWorkspace task workspace
+ * @return xlog file path
+ */
+ public static String generateXlogPath(TaskWorkspace taskWorkspace) {
+ return String.format("%s/%s", taskWorkspace.getStatusIncrementalDirPath(), "xlog.txt");
+ }
+
+ /**
+ * generate mysql reverse migration database server name
+ *
+ * @param taskWorkspace task workspace
+ * @return reverse migration database server name
+ */
+ public static String generateReverseDatabaseServerName(TaskWorkspace taskWorkspace) {
+ return "opengauss_server_" + taskWorkspace.getId();
+ }
+
+ private static String generateIncrementalDatabaseServerName(TaskWorkspace taskWorkspace) {
+ return "mysql_server_" + taskWorkspace.getId();
+ }
+
+ private static String getPreGtidSet(String tGtidSet, String mysqlCurrentUuid) {
+ StringBuilder newGtidSet = new StringBuilder();
+
+ String[] gtidSetParts = tGtidSet.replaceAll(System.lineSeparator(), "").split(",");
+ for (String part : gtidSetParts) {
+ int uuidIndex = part.lastIndexOf(":");
+ String uuid = part.substring(0, uuidIndex);
+ int offsetIndex = part.lastIndexOf("-") + 1;
+
+ if (uuid.equals(mysqlCurrentUuid) && (part.contains("-")) && offsetIndex > uuidIndex) {
+ long offset = Long.parseLong(part.substring(offsetIndex));
+ offset--;
+ part = part.substring(0, offsetIndex) + offset;
+ }
+ newGtidSet.append(part).append(",");
+ }
+
+ return newGtidSet.substring(0, newGtidSet.length() - 1);
+ }
+
+ private static String generateIncrementalSchemaMappings(MysqlMigrationConfigDto migrationConfigDto) {
+ String schemaMappings;
+ if (StringUtils.isNullOrBlank(migrationConfigDto.getOpengaussDatabaseSchema())) {
+ schemaMappings = String.format("%s:%s", migrationConfigDto.getMysqlDatabaseName(),
+ migrationConfigDto.getMysqlDatabaseName());
+ } else {
+ schemaMappings = String.format("%s:%s", migrationConfigDto.getMysqlDatabaseName(),
+ migrationConfigDto.getOpengaussDatabaseSchema());
+ }
+ return schemaMappings;
+ }
+
+ private static String generateReverseSchemaMappings(MysqlMigrationConfigDto migrationConfigDto) {
+ String schemaMappings;
+ if (StringUtils.isNullOrBlank(migrationConfigDto.getOpengaussDatabaseSchema())) {
+ schemaMappings = String.format("%s:%s", migrationConfigDto.getMysqlDatabaseName(),
+ migrationConfigDto.getMysqlDatabaseName());
+ } else {
+ schemaMappings = String.format("%s:%s", migrationConfigDto.getOpengaussDatabaseSchema(),
+ migrationConfigDto.getMysqlDatabaseName());
+ }
+ return schemaMappings;
+ }
+}
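The GTID rewind in `getPreGtidSet` above steps the upper bound of the current server's GTID range back by one, so the incremental connector re-reads the boundary transaction. A standalone sketch of that logic (hypothetical harness class and sample GTID values, for illustration only):

```java
// Standalone sketch of the getPreGtidSet rewind logic (hypothetical harness;
// the real method is private to DebeziumMysqlMigrationConfigHelper).
public class GtidRewindSketch {
    // For the MySQL server's own UUID, decrement the end of the "uuid:start-end"
    // range by one; ranges for other servers pass through unchanged.
    static String preGtidSet(String gtidSet, String currentUuid) {
        StringBuilder result = new StringBuilder();
        for (String part : gtidSet.replace(System.lineSeparator(), "").split(",")) {
            int uuidIndex = part.lastIndexOf(":");
            String uuid = part.substring(0, uuidIndex);
            int offsetIndex = part.lastIndexOf("-") + 1;
            if (uuid.equals(currentUuid) && part.contains("-") && offsetIndex > uuidIndex) {
                long offset = Long.parseLong(part.substring(offsetIndex)) - 1;
                part = part.substring(0, offsetIndex) + offset;
            }
            result.append(part).append(",");
        }
        return result.substring(0, result.length() - 1);
    }

    public static void main(String[] args) {
        String in = "aaa-bbb:1-100,ccc-ddd:1-50";
        System.out.println(preGtidSet(in, "aaa-bbb")); // aaa-bbb:1-99,ccc-ddd:1-50
    }
}
```

Note that a single-transaction range like `x-y:5` has no `-` after the colon, so the `offsetIndex > uuidIndex` guard leaves it untouched.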
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/helper/config/DebeziumPgsqlMigrationConfigHelper.java b/multidb-portal/src/main/java/org/opengauss/migration/helper/config/DebeziumPgsqlMigrationConfigHelper.java
new file mode 100644
index 0000000000000000000000000000000000000000..a98970a611d15b49221df229749e37f26c2d591a
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/helper/config/DebeziumPgsqlMigrationConfigHelper.java
@@ -0,0 +1,490 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.helper.config;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.config.ConnectAvroStandaloneConfig;
+import org.opengauss.constants.config.DebeziumConnectLog4jConfig;
+import org.opengauss.constants.config.DebeziumOpenGaussSinkConfig;
+import org.opengauss.constants.config.DebeziumOpenGaussSourceConfig;
+import org.opengauss.constants.config.DebeziumPgsqlSinkConfig;
+import org.opengauss.constants.config.DebeziumPgsqlSourceConfig;
+import org.opengauss.domain.dto.PgsqlMigrationConfigDto;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.DebeziumProcessType;
+import org.opengauss.exceptions.ConfigException;
+import org.opengauss.migration.tools.Kafka;
+import org.opengauss.utils.FileUtils;
+import org.opengauss.utils.JdbcUtils;
+import org.opengauss.utils.OpenGaussUtils;
+import org.opengauss.utils.StringUtils;
+import org.opengauss.utils.ThreadUtils;
+import org.opengauss.utils.TimeUtils;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * debezium pgsql migration config helper
+ *
+ * @since 2025/6/10
+ */
+public class DebeziumPgsqlMigrationConfigHelper {
+ private static final Logger LOGGER = LogManager.getLogger(DebeziumPgsqlMigrationConfigHelper.class);
+
+ private static String slotName;
+
+ private DebeziumPgsqlMigrationConfigHelper() {
+ }
+
+ /**
+ * get pgsql incremental migration source process config map
+ *
+ * @param dto pgsql migration config dto
+ * @param taskWorkspace task workspace
+ * @return Map source process config map
+ */
+ public static Map<String, Object> incrementalSourceConfig(
+ PgsqlMigrationConfigDto dto, TaskWorkspace taskWorkspace) {
+ HashMap<String, Object> changeParams = new HashMap<>();
+ changeParams.put(DebeziumPgsqlSourceConfig.DATABASE_HOSTNAME, dto.getPgsqlDatabaseIp());
+ changeParams.put(DebeziumPgsqlSourceConfig.DATABASE_PORT, dto.getPgsqlDatabasePort());
+ changeParams.put(DebeziumPgsqlSourceConfig.DATABASE_USER, dto.getPgsqlDatabaseUsername());
+ changeParams.put(DebeziumPgsqlSourceConfig.DATABASE_PASSWORD, dto.getPgsqlDatabasePassword());
+ changeParams.put(DebeziumPgsqlSourceConfig.DATABASE_NAME, dto.getPgsqlDatabaseName());
+ changeParams.put(DebeziumPgsqlSourceConfig.SCHEMA_INCLUDE_LIST, dto.getPgsqlDatabaseSchemas());
+
+ changeParams.put(DebeziumPgsqlSourceConfig.NAME, "pgsql_source_" + taskWorkspace.getId());
+ String databaseServerName = generateIncrementalDatabaseServerName(taskWorkspace);
+ changeParams.put(DebeziumPgsqlSourceConfig.DATABASE_SERVER_NAME, databaseServerName);
+ changeParams.put(DebeziumPgsqlSourceConfig.TRANSFORMS_ROUTE_REGEX, "^" + databaseServerName + "(.*)");
+ changeParams.put(DebeziumPgsqlSourceConfig.TRANSFORMS_ROUTE_REPLACEMENT,
+ generateIncrementalKafkaTopic(taskWorkspace));
+ changeParams.put(DebeziumPgsqlSourceConfig.COMMIT_PROCESS_WHILE_RUNNING, true);
+ String processFilePath = generateIncrementalProcessFilePath(taskWorkspace);
+ changeParams.put(DebeziumPgsqlSourceConfig.SOURCE_PROCESS_FILE_PATH, processFilePath);
+ changeParams.put(DebeziumPgsqlSourceConfig.CREATE_COUNT_INFO_PATH, processFilePath);
+
+ changeParams.put(DebeziumPgsqlSourceConfig.SLOT_DROP_ON_STOP, "false");
+ changeParams.put(DebeziumPgsqlSourceConfig.MIGRATION_TYPE, "incremental");
+
+ int majorPgsqlVersion = FullMigrationToolPgsqlMigrationConfigHelper.getMajorPgsqlVersion(dto);
+ if (majorPgsqlVersion >= 11) {
+ changeParams.put(DebeziumPgsqlSourceConfig.TRUNCATE_HANDLING_MODE, "include");
+ changeParams.put(DebeziumPgsqlSourceConfig.PLUGIN_NAME, "pgoutput");
+ } else if (majorPgsqlVersion == 10) {
+ changeParams.put(DebeziumPgsqlSourceConfig.TRUNCATE_HANDLING_MODE, "skip");
+ changeParams.put(DebeziumPgsqlSourceConfig.PLUGIN_NAME, "pgoutput");
+ } else {
+ changeParams.put(DebeziumPgsqlSourceConfig.TRUNCATE_HANDLING_MODE, "skip");
+ changeParams.put(DebeziumPgsqlSourceConfig.PLUGIN_NAME, "wal2json");
+ }
+
+ return changeParams;
+ }
+
+ /**
+ * get pgsql incremental migration source process delete key set
+ *
+ * @return Set delete key set
+ */
+ public static Set<String> incrementalSourceConfigDeleteKeySet() {
+ Set<String> deleteKeySet = new HashSet<>();
+ deleteKeySet.add(DebeziumPgsqlSourceConfig.TABLE_INCLUDE_LIST);
+ deleteKeySet.add(DebeziumPgsqlSourceConfig.SCHEMA_EXCLUDE_LIST);
+ deleteKeySet.add(DebeziumPgsqlSourceConfig.TABLE_EXCLUDE_LIST);
+ return deleteKeySet;
+ }
+
+ /**
+ * get pgsql incremental migration sink process config map
+ *
+ * @param dto pgsql migration config dto
+ * @param taskWorkspace task workspace
+ * @return Map sink process config map
+ */
+ public static Map<String, Object> incrementalSinkConfig(PgsqlMigrationConfigDto dto, TaskWorkspace taskWorkspace) {
+ HashMap<String, Object> changeParams = new HashMap<>();
+
+ changeParams.put(DebeziumPgsqlSinkConfig.DATABASE_USERNAME, dto.getOpengaussDatabaseUsername());
+ changeParams.put(DebeziumPgsqlSinkConfig.DATABASE_PASSWORD, dto.getOpengaussDatabasePassword());
+ changeParams.put(DebeziumPgsqlSinkConfig.DATABASE_NAME, dto.getOpengaussDatabaseName());
+ changeParams.put(DebeziumPgsqlSinkConfig.DATABASE_PORT, dto.getOpengaussDatabasePort());
+ changeParams.put(DebeziumPgsqlSinkConfig.DATABASE_IP, dto.getOpengaussDatabaseIp());
+
+ Map<String, String> schemaMappings =
+ FullMigrationToolPgsqlMigrationConfigHelper.getMigrationSchemaMappings(dto);
+ StringBuilder mappingStrBuilder = new StringBuilder();
+ for (Map.Entry<String, String> entry : schemaMappings.entrySet()) {
+ mappingStrBuilder.append(entry.getKey()).append(":").append(entry.getValue()).append(";");
+ }
+ changeParams.put(DebeziumPgsqlSinkConfig.SCHEMA_MAPPINGS,
+ mappingStrBuilder.substring(0, mappingStrBuilder.length() - 1));
+
+ changeParams.put(DebeziumPgsqlSinkConfig.NAME, "pgsql_sink_" + taskWorkspace.getId());
+ changeParams.put(DebeziumPgsqlSinkConfig.TOPICS, generateIncrementalKafkaTopic(taskWorkspace));
+ changeParams.put(DebeziumPgsqlSinkConfig.COMMIT_PROCESS_WHILE_RUNNING, true);
+ String processFilePath = generateIncrementalProcessFilePath(taskWorkspace);
+ changeParams.put(DebeziumPgsqlSinkConfig.SINK_PROCESS_FILE_PATH, processFilePath);
+ changeParams.put(DebeziumPgsqlSinkConfig.CREATE_COUNT_INFO_PATH, processFilePath);
+ changeParams.put(DebeziumPgsqlSinkConfig.FAIL_SQL_PATH, processFilePath);
+
+ String xlogPath = generateXlogPath(taskWorkspace);
+ changeParams.put(DebeziumPgsqlSinkConfig.XLOG_LOCATION, xlogPath);
+ return changeParams;
+ }
+
+ /**
+ * get pgsql incremental migration worker source process config map
+ *
+ * @param workspace task workspace
+ * @return Map worker source process config map
+ */
+ public static Map<String, Object> incrementalWorkerSourceConfig(TaskWorkspace workspace) {
+ Map<String, Object> changeParams = DebeziumMysqlMigrationConfigHelper.incrementalWorkerSourceConfig(workspace);
+
+ changeParams.put(ConnectAvroStandaloneConfig.OFFSET_STORAGE_FILE_FILENAME,
+ generateIncrementalStorageOffsetFilePath(workspace));
+ return changeParams;
+ }
+
+ /**
+ * get pgsql incremental migration worker sink process config map
+ *
+ * @param taskWorkspace task workspace
+ * @return Map worker sink process config map
+ */
+ public static Map<String, Object> incrementalWorkerSinkConfig(TaskWorkspace taskWorkspace) {
+ return incrementalWorkerSourceConfig(taskWorkspace);
+ }
+
+ /**
+ * get pgsql incremental migration log4j config map
+ *
+ * @param workspace task workspace
+ * @param processType process type
+ * @return Map log4j config map
+ */
+ public static Map<String, Object> incrementalLog4jConfig(TaskWorkspace workspace, DebeziumProcessType processType) {
+ Map<String, Object> changeParams =
+ DebeziumMysqlMigrationConfigHelper.incrementalLog4jConfig(workspace, processType);
+ String kafkaErrorLogPath = generateIncrementalKafkaErrorLogPath(workspace, processType);
+ changeParams.put(DebeziumConnectLog4jConfig.KAFKA_ERROR_APPENDER_FILE, kafkaErrorLogPath);
+ return changeParams;
+ }
+
+ /**
+ * get pgsql reverse migration source process config map
+ *
+ * @param dto pgsql migration config dto
+ * @param taskWorkspace task workspace
+ * @return Map source process config map
+ */
+ public static Map<String, Object> reverseSourceConfig(PgsqlMigrationConfigDto dto, TaskWorkspace taskWorkspace) {
+ Map<String, Object> changeParams = new HashMap<>();
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_HOSTNAME, dto.getOpengaussDatabaseIp());
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_PORT, dto.getOpengaussDatabasePort());
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_USER, dto.getOpengaussDatabaseUsername());
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_PASSWORD, dto.getOpengaussDatabasePassword());
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_NAME, dto.getOpengaussDatabaseName());
+
+ Map<String, String> schemaMappings =
+ FullMigrationToolPgsqlMigrationConfigHelper.getMigrationSchemaMappings(dto);
+ StringBuilder includeSchemasBuilder = new StringBuilder();
+ schemaMappings.forEach((key, value) -> includeSchemasBuilder.append(value).append(","));
+ if (!StringUtils.isNullOrBlank(dto.getPgsqlDatabaseSchemas())) {
+ changeParams.put(DebeziumOpenGaussSourceConfig.SCHEMA_INCLUDE_LIST,
+ includeSchemasBuilder.substring(0, includeSchemasBuilder.length() - 1));
+ }
+
+ String workspaceId = taskWorkspace.getId();
+ changeParams.put(DebeziumOpenGaussSourceConfig.NAME, "opengauss_source_" + workspaceId);
+
+ String databaseServerName = generateReverseDatabaseServerName(taskWorkspace);
+ changeParams.put(DebeziumOpenGaussSourceConfig.DATABASE_SERVER_NAME, databaseServerName);
+ changeParams.put(DebeziumOpenGaussSourceConfig.TRANSFORMS_ROUTE_REGEX, "^" + databaseServerName + "(.*)");
+ changeParams.put(DebeziumOpenGaussSourceConfig.TRANSFORMS_ROUTE_REPLACEMENT,
+ generateReverseKafkaTopic(taskWorkspace));
+
+ String processFilePath = generateReverseProcessFilePath(taskWorkspace);
+ changeParams.put(DebeziumOpenGaussSourceConfig.SOURCE_PROCESS_FILE_PATH, processFilePath);
+ changeParams.put(DebeziumOpenGaussSourceConfig.CREATE_COUNT_INFO_PATH, processFilePath);
+
+ changeParams.put(DebeziumOpenGaussSourceConfig.SLOT_NAME, generateReverseSlotName(taskWorkspace));
+ changeParams.put(DebeziumOpenGaussSourceConfig.SLOT_DROP_ON_STOP, false);
+
+ try (Connection connection = JdbcUtils.getOpengaussConnection(dto.getOpenGaussConnectInfo())) {
+ if (!OpenGaussUtils.isSystemAdmin(dto.getOpengaussDatabaseUsername(), connection)) {
+ changeParams.put(DebeziumOpenGaussSourceConfig.PUBLICATION_AUTO_CREATE_MODE, "filtered");
+ }
+ } catch (SQLException e) {
+ LOGGER.warn("Failed to get system admin status, publication.autocreate.mode is not set to"
+ + " filtered. Error: {}", e.getMessage());
+ }
+
+ return changeParams;
+ }
+
+ /**
+ * get pgsql reverse migration sink process config map
+ *
+ * @param dto pgsql migration config dto
+ * @param taskWorkspace task workspace
+ * @return Map sink process config map
+ */
+ public static Map<String, Object> reverseSinkConfig(PgsqlMigrationConfigDto dto, TaskWorkspace taskWorkspace) {
+ Map<String, Object> changeParams = new HashMap<>();
+ changeParams.put(DebeziumOpenGaussSinkConfig.DATABASE_TYPE, "postgres");
+ changeParams.put(DebeziumOpenGaussSinkConfig.DATABASE_IP, dto.getPgsqlDatabaseIp());
+ changeParams.put(DebeziumOpenGaussSinkConfig.DATABASE_PORT, dto.getPgsqlDatabasePort());
+ changeParams.put(DebeziumOpenGaussSinkConfig.DATABASE_USERNAME, dto.getPgsqlDatabaseUsername());
+ changeParams.put(DebeziumOpenGaussSinkConfig.DATABASE_PASSWORD, dto.getPgsqlDatabasePassword());
+ changeParams.put(DebeziumOpenGaussSinkConfig.DATABASE_NAME, dto.getPgsqlDatabaseName());
+
+ Map<String, String> schemaMappings =
+ FullMigrationToolPgsqlMigrationConfigHelper.getMigrationSchemaMappings(dto);
+ StringBuilder mappingStrBuilder = new StringBuilder();
+ for (Map.Entry<String, String> entry : schemaMappings.entrySet()) {
+ mappingStrBuilder.append(entry.getValue()).append(":").append(entry.getKey()).append(";");
+ }
+ changeParams.put(DebeziumOpenGaussSinkConfig.SCHEMA_MAPPINGS,
+ mappingStrBuilder.substring(0, mappingStrBuilder.length() - 1));
+
+ String workspaceId = taskWorkspace.getId();
+ changeParams.put(DebeziumOpenGaussSinkConfig.NAME, "opengauss_sink_" + workspaceId);
+ changeParams.put(DebeziumOpenGaussSinkConfig.TOPICS, generateReverseKafkaTopic(taskWorkspace));
+ changeParams.put(DebeziumOpenGaussSinkConfig.RECORD_BREAKPOINT_KAFKA_TOPIC,
+ generateReverseBreakpointKafkaTopic(taskWorkspace));
+
+ String processFilePath = generateReverseProcessFilePath(taskWorkspace);
+ changeParams.put(DebeziumOpenGaussSinkConfig.CREATE_COUNT_INFO_PATH, processFilePath);
+ changeParams.put(DebeziumOpenGaussSinkConfig.SINK_PROCESS_FILE_PATH, processFilePath);
+ changeParams.put(DebeziumOpenGaussSinkConfig.FAIL_SQL_PATH, processFilePath);
+
+ String kafkaServer = Kafka.getInstance().getKafkaIpPort();
+ changeParams.put(DebeziumOpenGaussSinkConfig.RECORD_BREAKPOINT_KAFKA_BOOTSTRAP_SERVERS, kafkaServer);
+
+ return changeParams;
+ }
+
+ /**
+ * get pgsql reverse migration worker source process config map
+ *
+ * @param taskWorkspace task workspace
+ * @return Map worker source process config map
+ */
+ public static Map<String, Object> reverseWorkerSourceConfig(TaskWorkspace taskWorkspace) {
+ Map<String, Object> changeParams = incrementalWorkerSourceConfig(taskWorkspace);
+ changeParams.put(ConnectAvroStandaloneConfig.OFFSET_STORAGE_FILE_FILENAME,
+ generateReverseStorageOffsetFilePath(taskWorkspace));
+ return changeParams;
+ }
+
+ /**
+ * get pgsql reverse migration worker sink process config map
+ *
+ * @param taskWorkspace task workspace
+ * @return Map worker sink process config map
+ */
+ public static Map<String, Object> reverseWorkerSinkConfig(TaskWorkspace taskWorkspace) {
+ return reverseWorkerSourceConfig(taskWorkspace);
+ }
+
+ /**
+ * get pgsql reverse migration log4j config map
+ *
+ * @param workspace task workspace
+ * @param processType process type
+ * @return Map log4j config map
+ */
+ public static Map<String, Object> reverseLog4jConfig(TaskWorkspace workspace, DebeziumProcessType processType) {
+ Map<String, Object> changeParams =
+ DebeziumMysqlMigrationConfigHelper.reverseLog4jConfig(workspace, processType);
+ String kafkaErrorLogPath = generateReverseKafkaErrorLogPath(workspace, processType);
+ changeParams.put(DebeziumConnectLog4jConfig.KAFKA_ERROR_APPENDER_FILE, kafkaErrorLogPath);
+ return changeParams;
+ }
+
+ /**
+ * get pgsql incremental migration slot name
+ *
+ * @param migrationConfigDto pgsql migration config dto
+ * @param workspace task workspace
+ * @return String slot name
+ */
+ public static synchronized String generateIncrementalSlotName(
+ PgsqlMigrationConfigDto migrationConfigDto, TaskWorkspace workspace) {
+ if (slotName == null) {
+ slotName = "slot_" + workspace.getId();
+
+ String selectSlotsSql = "SELECT * FROM pg_get_replication_slots();";
+ try (Connection connection = JdbcUtils.getPgsqlConnection(migrationConfigDto.getPgsqlConnectInfo());
+ Statement statement = connection.createStatement();
+ ResultSet resultSet = statement.executeQuery(selectSlotsSql)) {
+ ArrayList<String> slotList = new ArrayList<>();
+ while (resultSet.next()) {
+ slotList.add(resultSet.getString("slot_name"));
+ }
+
+ while (slotList.contains(slotName)) {
+ slotName = slotName + "_" + TimeUtils.timestampFrom20250101();
+ ThreadUtils.sleep(10);
+ }
+ } catch (SQLException | ClassNotFoundException e) {
+ throw new ConfigException("Failed to select pgsql replication slots", e);
+ }
+ }
+ return slotName;
+ }
+
+ /**
+ * get pgsql reverse migration slot name
+ *
+ * @param taskWorkspace task workspace
+ * @return String slot name
+ */
+ public static String generateReverseSlotName(TaskWorkspace taskWorkspace) {
+ return DebeziumMysqlMigrationConfigHelper.generateReverseSlotName(taskWorkspace);
+ }
+
+ /**
+ * get pgsql incremental migration kafka topic
+ *
+ * @param taskWorkspace task workspace
+ * @return String kafka topic
+ */
+ public static String generateIncrementalKafkaTopic(TaskWorkspace taskWorkspace) {
+ return generateIncrementalDatabaseServerName(taskWorkspace) + "_topic";
+ }
+
+ /**
+ * get pgsql reverse migration kafka topic
+ *
+ * @param taskWorkspace task workspace
+ * @return String kafka topic
+ */
+ public static String generateReverseKafkaTopic(TaskWorkspace taskWorkspace) {
+ return generateReverseDatabaseServerName(taskWorkspace) + "_topic";
+ }
+
+ /**
+ * get pgsql reverse migration breakpoint kafka topic
+ *
+ * @param taskWorkspace task workspace
+ * @return String breakpoint kafka topic
+ */
+ public static String generateReverseBreakpointKafkaTopic(TaskWorkspace taskWorkspace) {
+ return generateReverseKafkaTopic(taskWorkspace) + "_bp";
+ }
+
+ /**
+ * get pgsql incremental migration connect kafka error log path
+ *
+ * @param taskWorkspace task workspace
+ * @param processType process type
+ * @return String connect kafka error log path
+ */
+ public static String generateIncrementalKafkaErrorLogPath(
+ TaskWorkspace taskWorkspace, DebeziumProcessType processType) {
+ return DebeziumMysqlMigrationConfigHelper.generateIncrementalKafkaErrorLogPath(taskWorkspace, processType);
+ }
+
+ /**
+ * get pgsql reverse migration connect kafka error log path
+ *
+ * @param taskWorkspace task workspace
+ * @param processType process type
+ * @return String connect kafka error log path
+ */
+ public static String generateReverseKafkaErrorLogPath(
+ TaskWorkspace taskWorkspace, DebeziumProcessType processType) {
+ return DebeziumMysqlMigrationConfigHelper.generateReverseKafkaErrorLogPath(taskWorkspace, processType);
+ }
+
+ /**
+ * get pgsql incremental migration storage offset file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String storage offset file path
+ */
+ public static String generateIncrementalStorageOffsetFilePath(TaskWorkspace taskWorkspace) {
+ return DebeziumMysqlMigrationConfigHelper.generateIncrementalStorageOffsetFilePath(taskWorkspace);
+ }
+
+ /**
+ * get pgsql reverse migration storage offset file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String storage offset file path
+ */
+ public static String generateReverseStorageOffsetFilePath(TaskWorkspace taskWorkspace) {
+ return DebeziumMysqlMigrationConfigHelper.generateReverseStorageOffsetFilePath(taskWorkspace);
+ }
+
+ /**
+ * get pgsql incremental migration process file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String process file path
+ */
+ public static String generateIncrementalProcessFilePath(TaskWorkspace taskWorkspace) {
+ return DebeziumMysqlMigrationConfigHelper.generateIncrementalProcessFilePath(taskWorkspace);
+ }
+
+ /**
+ * get pgsql reverse migration process file path
+ *
+ * @param taskWorkspace task workspace
+ * @return String process file path
+ */
+ public static String generateReverseProcessFilePath(TaskWorkspace taskWorkspace) {
+ return DebeziumMysqlMigrationConfigHelper.generateReverseProcessFilePath(taskWorkspace);
+ }
+
+ /**
+ * Read xlog
+ *
+ * @param taskWorkspace task workspace
+ * @return xlog
+ */
+ public static String readXlogLocation(TaskWorkspace taskWorkspace) {
+ String xlogPath = generateXlogPath(taskWorkspace);
+ String xlogLocation = "";
+ try {
+ String fileContents = FileUtils.readFileContents(xlogPath);
+ String[] lines = fileContents.split("\n");
+ for (String line : lines) {
+ if (line.contains(DebeziumOpenGaussSourceConfig.XLOG_LOCATION)) {
+ int index = line.lastIndexOf("=") + 1;
+ xlogLocation = line.substring(index).trim();
+ }
+ }
+ } catch (IOException e) {
+ LOGGER.trace("Failed to read xlog from file: {}, error: {}", xlogPath, e.getMessage());
+ }
+ return xlogLocation;
+ }
+
+ private static String generateXlogPath(TaskWorkspace taskWorkspace) {
+ return DebeziumMysqlMigrationConfigHelper.generateXlogPath(taskWorkspace);
+ }
+
+ private static String generateIncrementalDatabaseServerName(TaskWorkspace taskWorkspace) {
+ return "pgsql_server_" + taskWorkspace.getId();
+ }
+
+ private static String generateReverseDatabaseServerName(TaskWorkspace taskWorkspace) {
+ return DebeziumMysqlMigrationConfigHelper.generateReverseDatabaseServerName(taskWorkspace);
+ }
+}
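The `readXlogLocation` method above scans a progress file line by line, keeping the value after the last `=` on any line containing the xlog-location key (last match wins, empty string when absent). A standalone sketch of that parsing (hypothetical harness; the real key name comes from `DebeziumOpenGaussSourceConfig.XLOG_LOCATION`):

```java
// Standalone sketch of the xlog.txt parsing in readXlogLocation above.
public class XlogParseSketch {
    static String parseXlogLocation(String fileContents, String key) {
        String location = "";
        for (String line : fileContents.split("\n")) {
            if (line.contains(key)) {
                // Take everything after the last '=' and trim whitespace.
                location = line.substring(line.lastIndexOf("=") + 1).trim();
            }
        }
        return location; // last matching line wins; "" when the key is absent
    }

    public static void main(String[] args) {
        String contents = "# progress file\nxlog.location = 0/5B2A1D8\n";
        System.out.println(parseXlogLocation(contents, "xlog.location")); // 0/5B2A1D8
    }
}
```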
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/helper/config/FullMigrationToolPgsqlMigrationConfigHelper.java b/multidb-portal/src/main/java/org/opengauss/migration/helper/config/FullMigrationToolPgsqlMigrationConfigHelper.java
new file mode 100644
index 0000000000000000000000000000000000000000..9500b040c71bf1e4ad80f8ec14482b28659ec83a
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/helper/config/FullMigrationToolPgsqlMigrationConfigHelper.java
@@ -0,0 +1,167 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.helper.config;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.config.FullMigrationToolConfig;
+import org.opengauss.constants.config.MigrationConfig;
+import org.opengauss.domain.dto.PgsqlMigrationConfigDto;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.exceptions.ConfigException;
+import org.opengauss.utils.JdbcUtils;
+import org.opengauss.utils.PgsqlUtils;
+import org.opengauss.utils.StringUtils;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * full migration tool pgsql migration config helper
+ *
+ * @since 2025/5/29
+ */
+public class FullMigrationToolPgsqlMigrationConfigHelper {
+ private static final Logger LOGGER = LogManager.getLogger(FullMigrationToolPgsqlMigrationConfigHelper.class);
+
+ private static int pgsqlMajorVersion = 0;
+
+ private FullMigrationToolPgsqlMigrationConfigHelper() {
+ }
+
+ /**
+ * get pgsql full migration config map
+ *
+ * @param dto pgsql migration config dto
+ * @param workspace task workspace
+ * @return change params
+ */
+ public static Map<String, Object> pgsqlFullMigrationConfig(PgsqlMigrationConfigDto dto, TaskWorkspace workspace) {
+ HashMap<String, Object> changeParams = new HashMap<>();
+
+ changeParams.put(FullMigrationToolConfig.IS_DUMP_JSON, true);
+ changeParams.put(FullMigrationToolConfig.STATUS_DIR, workspace.getStatusFullDirPath());
+
+ changeParams.put(FullMigrationToolConfig.OG_CONN_HOST, dto.getOpengaussDatabaseIp());
+ changeParams.put(FullMigrationToolConfig.OG_CONN_PORT, dto.getOpengaussDatabasePort());
+ changeParams.put(FullMigrationToolConfig.OG_CONN_USER, dto.getOpengaussDatabaseUsername());
+ changeParams.put(FullMigrationToolConfig.OG_CONN_PASSWORD, dto.getOpengaussDatabasePassword());
+ changeParams.put(FullMigrationToolConfig.OG_CONN_DATABASE, dto.getOpengaussDatabaseName());
+
+ changeParams.put(FullMigrationToolConfig.SOURCE_DB_CONN_HOST, dto.getPgsqlDatabaseIp());
+ changeParams.put(FullMigrationToolConfig.SOURCE_DB_CONN_PORT, dto.getPgsqlDatabasePort());
+ changeParams.put(FullMigrationToolConfig.SOURCE_DB_CONN_USER, dto.getPgsqlDatabaseUsername());
+ changeParams.put(FullMigrationToolConfig.SOURCE_DB_CONN_PASSWORD, dto.getPgsqlDatabasePassword());
+ changeParams.put(FullMigrationToolConfig.SOURCE_DB_CONN_DATABASE, dto.getPgsqlDatabaseName());
+
+ changeParams.put(FullMigrationToolConfig.SOURCE_SCHEMA_MAPPINGS, getMigrationSchemaMappings(dto));
+ changeParams.put(FullMigrationToolConfig.IS_DELETE_CSV, false);
+ changeParams.put(FullMigrationToolConfig.SOURCE_CSV_DIR, generateCsvDirPath(workspace));
+ changeParams.put(FullMigrationToolConfig.IS_RECORD_SNAPSHOT, false);
+ return changeParams;
+ }
+
+ /**
+ * get pgsql full migration record snapshot config map
+ *
+ * @param dto pgsql migration config dto
+ * @return change params
+ */
+ public static Map<String, Object> pgsqlFullMigrationRecordSnapshotConfig(PgsqlMigrationConfigDto dto) {
+ HashMap<String, Object> changeParams = new HashMap<>();
+ changeParams.put(FullMigrationToolConfig.IS_RECORD_SNAPSHOT, true);
+ int majorPgsqlVersion = getMajorPgsqlVersion(dto);
+ if (majorPgsqlVersion >= 10) {
+ changeParams.put(FullMigrationToolConfig.PLUGIN_NAME, "pgoutput");
+ } else {
+ changeParams.put(FullMigrationToolConfig.PLUGIN_NAME, "wal2json");
+ }
+ return changeParams;
+ }
+
+ /**
+ * get major pgsql version
+ *
+ * @param dto pgsql migration config dto
+ * @return int major pgsql version
+ */
+ public static int getMajorPgsqlVersion(PgsqlMigrationConfigDto dto) {
+ if (pgsqlMajorVersion != 0) {
+ return pgsqlMajorVersion;
+ }
+
+ try (Connection connection = JdbcUtils.getPgsqlConnection(dto.getPgsqlConnectInfo())) {
+ String pgsqlVersion = PgsqlUtils.getPgsqlVersion(connection);
+ if (pgsqlVersion != null) {
+ String[] versionParts = pgsqlVersion.split("\\.");
+ if (versionParts.length >= 2) {
+ pgsqlMajorVersion = Integer.parseInt(versionParts[0]);
+ return pgsqlMajorVersion;
+ }
+ }
+ } catch (SQLException | ClassNotFoundException e) {
+ throw new ConfigException("Failed to get pgsql version", e);
+ }
+ throw new ConfigException("Failed to parse pgsql version");
+ }
+
+ /**
+ * generate csv dir path
+ *
+ * @param taskWorkspace task workspace
+ * @return String csv dir path
+ */
+ public static String generateCsvDirPath(TaskWorkspace taskWorkspace) {
+ return String.format("%s/csv", taskWorkspace.getTmpDirPath());
+ }
+
+ /**
+ * get migration schema mappings
+ *
+ * @param dto pgsql migration config dto
+ * @return Map schema mappings
+ */
+ public static Map<String, String> getMigrationSchemaMappings(PgsqlMigrationConfigDto dto) {
+ String pgsqlDatabaseSchemas = dto.getPgsqlDatabaseSchemas();
+ List<String> pgSchemas = Arrays.asList(pgsqlDatabaseSchemas.split(","));
+
+ String schemaMappings = dto.getSchemaMappings();
+ String[] configMappings = null;
+ if (!StringUtils.isNullOrBlank(schemaMappings)) {
+ configMappings = schemaMappings.split(",");
+ }
+
+ Map<String, String> resultMapping = new HashMap<>();
+ if (configMappings != null) {
+ for (String configMapping : configMappings) {
+ if (StringUtils.isNullOrBlank(configMapping)) {
+ continue;
+ }
+
+ String[] parts = configMapping.split(":");
+ if (parts.length != 2) {
+ LOGGER.error("Invalid schema mapping: {}", configMapping);
+ throw new ConfigException("The " + MigrationConfig.SCHEMA_MAPPINGS + " is not a valid value");
+ }
+
+ String sourceSchema = parts[0];
+ if (pgSchemas.contains(sourceSchema)) {
+ resultMapping.put(sourceSchema, parts[1]);
+ }
+ }
+ }
+
+ for (String configSchema : pgSchemas) {
+ if (!resultMapping.containsKey(configSchema)) {
+ resultMapping.put(configSchema, configSchema);
+ }
+ }
+ return resultMapping;
+ }
+}
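The mapping resolution in `getMigrationSchemaMappings` above gives explicit `source:target` pairs precedence, and any configured schema without a mapping falls back to mapping onto itself. A standalone sketch of that resolution (hypothetical harness, using stdlib helpers in place of `StringUtils`):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Standalone sketch of the schema-mapping resolution in
// FullMigrationToolPgsqlMigrationConfigHelper.getMigrationSchemaMappings.
public class SchemaMappingSketch {
    static Map<String, String> resolve(String schemas, String mappings) {
        List<String> pgSchemas = Arrays.asList(schemas.split(","));
        Map<String, String> result = new HashMap<>();
        if (mappings != null && !mappings.isBlank()) {
            for (String mapping : mappings.split(",")) {
                String[] parts = mapping.split(":");
                // Only well-formed pairs whose source schema is configured count.
                if (parts.length == 2 && pgSchemas.contains(parts[0])) {
                    result.put(parts[0], parts[1]);
                }
            }
        }
        // Unmapped schemas keep their own name on the openGauss side.
        for (String schema : pgSchemas) {
            result.putIfAbsent(schema, schema);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(resolve("public,sales", "public:og_public"));
        // e.g. {public=og_public, sales=sales} (HashMap order may vary)
    }
}
```

Unlike the sketch, the real method throws a `ConfigException` on a malformed pair instead of skipping it.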
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/helper/tool/ChameleonHelper.java b/multidb-portal/src/main/java/org/opengauss/migration/helper/tool/ChameleonHelper.java
new file mode 100644
index 0000000000000000000000000000000000000000..56b5f97b581dd38fefcfb124be408da955831ee6
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/helper/tool/ChameleonHelper.java
@@ -0,0 +1,140 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.helper.tool;
+
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONException;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.tool.ChameleonConstants;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.migration.progress.model.tool.ChameleonStatusEntry;
+import org.opengauss.migration.tools.Chameleon;
+import org.opengauss.utils.StringUtils;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * chameleon helper
+ *
+ * @since 2025/5/14
+ */
+public class ChameleonHelper {
+ private static final Logger LOGGER = LogManager.getLogger(ChameleonHelper.class);
+
+ private ChameleonHelper() {
+ }
+
+ /**
+ * parse chameleon status file to chameleon status entry
+ *
+ * @param statusFilePath status file path
+ * @return chameleon status entry
+ */
+ public static Optional<ChameleonStatusEntry> parseChameleonStatusFile(String statusFilePath) {
+ Path statusPath = Path.of(statusFilePath);
+ try {
+ if (!Files.exists(statusPath)) {
+ return Optional.empty();
+ }
+
+ String text = Files.readString(statusPath);
+ if (!StringUtils.isNullOrBlank(text)) {
+ return Optional.ofNullable(JSON.parseObject(text, ChameleonStatusEntry.class));
+ }
+ } catch (IOException | JSONException e) {
+ LOGGER.warn("Failed to read or parse chameleon progress, error: {}", e.getMessage());
+ }
+ return Optional.empty();
+ }
+
+ /**
+ * get all chameleon status file path list
+ *
+ * @param taskWorkspace task workspace
+ * @return all status file path list
+ */
+ public static List<String> getAllStatusFilePathList(TaskWorkspace taskWorkspace) {
+ ArrayList<String> result = new ArrayList<>();
+ result.add(generateOrderStatusFilePath(taskWorkspace, ChameleonConstants.ORDER_DROP_REPLICA_SCHEMA));
+ result.add(generateOrderStatusFilePath(taskWorkspace, ChameleonConstants.ORDER_CREATE_REPLICA_SCHEMA));
+ result.add(generateOrderStatusFilePath(taskWorkspace, ChameleonConstants.ORDER_ADD_SOURCE));
+ result.add(generateOrderStatusFilePath(taskWorkspace, ChameleonConstants.ORDER_INIT_REPLICA));
+ result.add(generateOrderStatusFilePath(taskWorkspace, ChameleonConstants.ORDER_START_TRIGGER_REPLICA));
+ result.add(generateOrderStatusFilePath(taskWorkspace, ChameleonConstants.ORDER_START_VIEW_REPLICA));
+ result.add(generateOrderStatusFilePath(taskWorkspace, ChameleonConstants.ORDER_START_FUNC_REPLICA));
+ result.add(generateOrderStatusFilePath(taskWorkspace, ChameleonConstants.ORDER_START_PROC_REPLICA));
+ result.add(generateOrderStatusFilePath(taskWorkspace, ChameleonConstants.ORDER_DETACH_REPLICA));
+ return result;
+ }
+
+ /**
+ * generate chameleon order status file path
+ *
+ * @param taskWorkspace task workspace
+ * @param chameleonOrder chameleon order
+ * @return chameleon order status file path
+ */
+ public static String generateOrderStatusFilePath(TaskWorkspace taskWorkspace, String chameleonOrder) {
+ return String.format("%s/data_default_%s_%s.json", Chameleon.getInstance().getChameleonHomeDirPath(),
+ taskWorkspace.getId(), chameleonOrder);
+ }
+
+ /**
+ * generate chameleon full migration config file name
+ *
+ * @param taskWorkspace task workspace
+ * @return chameleon full migration config file name
+ */
+ public static String generateFullMigrationConfigFileName(TaskWorkspace taskWorkspace) {
+ String fullConfigNameModel = "default_%s.yml";
+ return String.format(fullConfigNameModel, taskWorkspace.getId());
+ }
+
+ /**
+ * generate chameleon full migration log path
+ *
+ * @param taskWorkspace task workspace
+ * @return chameleon full migration log path
+ */
+ public static String generateFullMigrationLogPath(TaskWorkspace taskWorkspace) {
+ return String.format("%s/%s", taskWorkspace.getLogsFullDirPath(), "full_migration.log");
+ }
+
+ /**
+ * generate chameleon process start command
+ *
+ * @param taskWorkspace task workspace
+ * @param chameleonOrder chameleon order
+ * @return chameleon process start command
+ */
+ public static String generateProcessStartCommand(TaskWorkspace taskWorkspace, String chameleonOrder) {
+        HashMap<String, String> orderParams = generateOrderParams(taskWorkspace, chameleonOrder);
+
+ String chameleonPath = Chameleon.getInstance().getChameleonPath();
+ StringBuilder commandBuilder = new StringBuilder(chameleonPath);
+ commandBuilder.append(" ").append(chameleonOrder).append(" ");
+
+ for (String key : orderParams.keySet()) {
+ commandBuilder.append(key).append(" ").append(orderParams.get(key)).append(" ");
+ }
+ return commandBuilder.substring(0, commandBuilder.length() - 1);
+ }
+
+    private static HashMap<String, String> generateOrderParams(TaskWorkspace taskWorkspace, String chameleonOrder) {
+        HashMap<String, String> orderParams = new HashMap<>();
+ orderParams.put("--config", "default_" + taskWorkspace.getId());
+ if (ChameleonConstants.ORDER_NEED_CONFIG_SOURCE_LIST.contains(chameleonOrder)) {
+ orderParams.put("--source", "mysql");
+ }
+ return orderParams;
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/helper/tool/DataCheckerHelper.java b/multidb-portal/src/main/java/org/opengauss/migration/helper/tool/DataCheckerHelper.java
new file mode 100644
index 0000000000000000000000000000000000000000..79163dcf04f396772a5b165b0a9efa520ddf8013
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/helper/tool/DataCheckerHelper.java
@@ -0,0 +1,326 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.helper.tool;
+
+import com.alibaba.fastjson2.JSONArray;
+import com.alibaba.fastjson2.JSONException;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.tool.DataCheckerConstants;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.DataCheckerProcessType;
+import org.opengauss.migration.tools.DataChecker;
+import org.opengauss.utils.StringUtils;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Optional;
+
+/**
+ * Data-checker helper
+ *
+ * @since 2025/5/14
+ */
+public class DataCheckerHelper {
+ private static final Logger LOGGER = LogManager.getLogger(DataCheckerHelper.class);
+
+ private DataCheckerHelper() {
+ }
+
+ /**
+ * Parse data-checker status file to json array
+ *
+ * @param statusFilePath status file path
+     * @return JSONArray of data-checker status records
+ */
+    public static Optional<JSONArray> parseDataCheckerStatusFile(String statusFilePath) {
+ Path statusPath = Path.of(statusFilePath);
+
+ if (!Files.exists(statusPath)) {
+ return Optional.empty();
+ }
+
+ try {
+ String text = Files.readString(statusPath);
+            if (!StringUtils.isNullOrBlank(text)) {
+                // The status file holds comma-terminated JSON objects; drop the trailing comma
+                // and wrap the content in brackets to form a valid JSON array.
+                text = "[" + text.substring(0, text.length() - 1) + "]";
+ return Optional.ofNullable(JSONArray.parseArray(text));
+ }
+ } catch (IOException | JSONException e) {
+ LOGGER.warn("Failed to read or parse data-checker progress, error: {}", e.getMessage());
+ }
+ return Optional.empty();
+ }
+
+ /**
+ * Generate data check process start command
+ *
+ * @param processType process type
+ * @param configFilePath config file path
+ * @param jvmPrefixOptions jvm prefix options
+ * @return process start command
+ */
+ public static String generateProcessStartCommand(
+ DataCheckerProcessType processType, String configFilePath, String jvmPrefixOptions) {
+ StringBuilder builder = new StringBuilder();
+ builder.append("nohup java").append(" ")
+ .append(jvmPrefixOptions).append(" ")
+ .append("-Dloader.path=").append(DataChecker.getInstance().getLibDirPath()).append(" ")
+ .append(generateProcessCheckCommand(processType, configFilePath)).append(" ")
+ .append("> /dev/null &");
+
+ return builder.toString();
+ }
+
+ /**
+ * Generate data check process check command
+ *
+ * @param processType process type
+ * @param configFilePath config file path
+ * @return process check command
+ */
+ public static String generateProcessCheckCommand(DataCheckerProcessType processType, String configFilePath) {
+ StringBuilder builder = new StringBuilder();
+ builder.append("-Dspring.config.additional-location=").append(configFilePath).append(" ")
+ .append("-jar").append(" ");
+
+ DataChecker dataChecker = DataChecker.getInstance();
+        if (DataCheckerProcessType.SINK.equals(processType) || DataCheckerProcessType.SOURCE.equals(processType)) {
+            builder.append(dataChecker.getExtractJarPath()).append(" ")
+                    .append("--").append(processType.getType());
+        } else {
+            builder.append(dataChecker.getCheckJarPath());
+        }
+
+ return builder.toString();
+ }
+
+ /**
+ * Get data-checker full process sign file path
+ *
+ * @param taskWorkspace task workspace
+ * @return process sign file path
+ */
+ public static String getFullProcessSignFilePath(TaskWorkspace taskWorkspace) {
+ String resultDirPath = getFullCheckResultDirPath(taskWorkspace);
+ return String.format("%s/%s", resultDirPath, DataCheckerConstants.PROCESS_SIGN_FILE_NAME);
+ }
+
+ /**
+ * Get data-checker incremental process sign file path
+ *
+ * @param taskWorkspace task workspace
+ * @return process sign file path
+ */
+ public static String getIncrementalProcessSignFilePath(TaskWorkspace taskWorkspace) {
+ String resultDirPath = getIncrementalCheckResultDirPath(taskWorkspace);
+ return String.format("%s/%s", resultDirPath, DataCheckerConstants.PROCESS_SIGN_FILE_NAME);
+ }
+
+ /**
+ * Generate full data check data path
+ *
+ * @param workspace task workspace
+ * @return full data check data path
+ */
+ public static String generateFullDataCheckDataPath(TaskWorkspace workspace) {
+ return workspace.getStatusFullDataCheckDirPath();
+ }
+
+ /**
+ * Generate incremental data check data path
+ *
+ * @param workspace task workspace
+ * @return incremental data check data path
+ */
+ public static String generateIncrementalDataCheckDataPath(TaskWorkspace workspace) {
+ return workspace.getStatusIncrementalDataCheckDirPath();
+ }
+
+ /**
+ * Generate full data check logs dir path
+ *
+ * @param workspace task workspace
+     * @return full data check logs dir path
+ */
+ public static String generateFullDataCheckLogsDirPath(TaskWorkspace workspace) {
+ return workspace.getLogsFullDataCheckDirPath();
+ }
+
+ /**
+ * generate incremental data check logs dir path
+ *
+ * @param workspace task workspace
+     * @return incremental data check logs dir path
+ */
+ public static String generateIncrementalDataCheckLogsDirPath(TaskWorkspace workspace) {
+ return workspace.getLogsIncrementalDataCheckDirPath();
+ }
+
+ /**
+ * Get data-checker process start sign
+ *
+ * @param processType process type
+ * @return process start sign
+ */
+ public static String getProcessStartSign(DataCheckerProcessType processType) {
+ if (DataCheckerProcessType.SOURCE.equals(processType)) {
+ return DataCheckerConstants.SOURCE_PROCESS_START_SIGN;
+ } else if (DataCheckerProcessType.SINK.equals(processType)) {
+ return DataCheckerConstants.SINK_PROCESS_START_SIGN;
+ } else {
+ return DataCheckerConstants.CHECK_PROCESS_START_SIGN;
+ }
+ }
+
+ /**
+ * Get data-checker process stop sign
+ *
+ * @param processType process type
+ * @return process stop sign
+ */
+ public static String getProcessStopSign(DataCheckerProcessType processType) {
+ if (DataCheckerProcessType.SOURCE.equals(processType)) {
+ return DataCheckerConstants.SOURCE_PROCESS_STOP_SIGN;
+ } else if (DataCheckerProcessType.SINK.equals(processType)) {
+ return DataCheckerConstants.SINK_PROCESS_STOP_SIGN;
+ } else {
+ return DataCheckerConstants.CHECK_PROCESS_STOP_SIGN;
+ }
+ }
+
+ /**
+ * Get data-checker full check result dir path
+ *
+ * @param taskWorkspace task workspace
+ * @return check result dir path
+ */
+ public static String getFullCheckResultDirPath(TaskWorkspace taskWorkspace) {
+ String statusPath = taskWorkspace.getStatusFullDataCheckDirPath();
+ return String.format("%s/result", statusPath);
+ }
+
+ /**
+ * Get data-checker incremental check result dir path
+ *
+ * @param taskWorkspace task workspace
+ * @return check result dir path
+ */
+ public static String getIncrementalCheckResultDirPath(TaskWorkspace taskWorkspace) {
+ String statusPath = taskWorkspace.getStatusIncrementalDataCheckDirPath();
+ return String.format("%s/result", statusPath);
+ }
+
+ /**
+ * Get data-checker full check success result file path
+ *
+ * @param taskWorkspace task workspace
+ * @return full check success result file path
+ */
+ public static String getFullCheckResultSuccessFilePath(TaskWorkspace taskWorkspace) {
+ String resultDirPath = getFullCheckResultDirPath(taskWorkspace);
+ return String.format("%s/%s", resultDirPath, DataCheckerConstants.CHECK_RESULT_SUCCESS_FILE_NAME);
+ }
+
+ /**
+ * get data-checker incremental check success result file path
+ *
+ * @param taskWorkspace task workspace
+ * @return incremental check success result file path
+ */
+ public static String getIncrementalCheckResultSuccessFilePath(TaskWorkspace taskWorkspace) {
+ String resultDirPath = getIncrementalCheckResultDirPath(taskWorkspace);
+ return String.format("%s/%s", resultDirPath, DataCheckerConstants.CHECK_RESULT_SUCCESS_FILE_NAME);
+ }
+
+ /**
+ * Get data-checker full check failed result file path
+ *
+ * @param taskWorkspace task workspace
+ * @return full check failed result file path
+ */
+ public static String getFullCheckResultFailedFilePath(TaskWorkspace taskWorkspace) {
+ String resultDirPath = getFullCheckResultDirPath(taskWorkspace);
+ return String.format("%s/%s", resultDirPath, DataCheckerConstants.CHECK_RESULT_FAILED_FILE_NAME);
+ }
+
+ /**
+ * get data-checker incremental check failed result file path
+ *
+ * @param taskWorkspace task workspace
+ * @return incremental check failed result file path
+ */
+ public static String getIncrementalCheckResultFailedFilePath(TaskWorkspace taskWorkspace) {
+ String resultDirPath = getIncrementalCheckResultDirPath(taskWorkspace);
+ return String.format("%s/%s", resultDirPath, DataCheckerConstants.CHECK_RESULT_FAILED_FILE_NAME);
+ }
+
+ /**
+ * generate data-checker full check result repair file path
+ *
+ * @param taskWorkspace task workspace
+ * @param schemaName schema name
+ * @param tableName table name
+ * @return full check result repair file path
+ */
+ public static String generateFullCheckResultRepairFilePath(TaskWorkspace taskWorkspace, String schemaName,
+ String tableName) {
+ String resultDirPath = getFullCheckResultDirPath(taskWorkspace);
+ String repairFileName = String.format(DataCheckerConstants.CHECK_RESULT_REPAIR_FILE_NAME_MODEL,
+ schemaName, tableName);
+ return String.format("%s/%s", resultDirPath, repairFileName);
+ }
+
+ /**
+ * Generate data-checker incremental check result repair file path
+ *
+ * @param taskWorkspace task workspace
+ * @param schemaName schema name
+ * @param tableName table name
+ * @return incremental check result repair file path
+ */
+ public static String generateIncrementalCheckResultRepairFilePath(TaskWorkspace taskWorkspace, String schemaName,
+ String tableName) {
+ String resultDirPath = getIncrementalCheckResultDirPath(taskWorkspace);
+ String repairFileName = String.format(DataCheckerConstants.CHECK_RESULT_REPAIR_FILE_NAME_MODEL,
+ schemaName, tableName);
+ return String.format("%s/%s", resultDirPath, repairFileName);
+ }
+
+ /**
+ * Get full check log4j2 config map
+ *
+ * @param taskWorkspace task workspace
+ * @return full check log4j2 config
+ */
+    public static Map<String, String> getFullCheckLog4j2Config(TaskWorkspace taskWorkspace) {
+ return getLog4j2Config(generateFullDataCheckLogsDirPath(taskWorkspace));
+ }
+
+ /**
+ * Get incremental check log4j2 config map
+ *
+ * @param taskWorkspace task workspace
+ * @return incremental check log4j2 config
+ */
+    public static Map<String, String> getIncrementalCheckLog4j2Config(TaskWorkspace taskWorkspace) {
+ return getLog4j2Config(generateIncrementalDataCheckLogsDirPath(taskWorkspace));
+ }
+
+    private static Map<String, String> getLog4j2Config(String logDirPath) {
+        // Replace the default relative "logs" directory in the log4j2 config with the target log directory
+        Map<String, String> changeParams = new HashMap<>();
+        changeParams.put("logs", logDirPath);
+        return changeParams;
+    }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/helper/tool/DebeziumHelper.java b/multidb-portal/src/main/java/org/opengauss/migration/helper/tool/DebeziumHelper.java
new file mode 100644
index 0000000000000000000000000000000000000000..8b7a7151ecc018159a198abdcb8e799637f5f126
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/helper/tool/DebeziumHelper.java
@@ -0,0 +1,221 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.helper.tool;
+
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONException;
+import com.alibaba.fastjson2.JSONReader;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.tool.DebeziumConstants;
+import org.opengauss.domain.model.ConfigFile;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.migration.progress.model.tool.DebeziumSinkStatusEntry;
+import org.opengauss.migration.progress.model.tool.DebeziumSourceStatusEntry;
+import org.opengauss.migration.tools.Kafka;
+import org.opengauss.utils.StringUtils;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.Optional;
+
+/**
+ * debezium helper
+ *
+ * @since 2025/5/17
+ */
+public class DebeziumHelper {
+ private static final Logger LOGGER = LogManager.getLogger(DebeziumHelper.class);
+
+ private DebeziumHelper() {
+ }
+
+ /**
+ * generate debezium process start command
+ *
+ * @param connectorConfig connector config
+ * @param workerConfig worker config
+ * @param log4jConfig log4j config
+ * @param commandPrefix command prefix
+ * @return process start command
+ */
+ public static String generateProcessStartCommand(
+ ConfigFile connectorConfig, ConfigFile workerConfig, ConfigFile log4jConfig, String commandPrefix) {
+ StringBuilder commandBuilder = new StringBuilder();
+ commandBuilder.append(commandPrefix).append(" && ");
+ commandBuilder.append("export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:");
+ commandBuilder.append(log4jConfig.getFilePath()).append("\" && ");
+ commandBuilder.append(Kafka.getInstance().getConnectStandalonePath()).append(" -daemon ");
+ commandBuilder.append(workerConfig.getFilePath()).append(" ").append(connectorConfig.getFilePath());
+ return commandBuilder.toString();
+ }
+
+ /**
+ * generate debezium process check command
+ *
+ * @param connectorConfig connector config
+ * @param workerConfig worker config
+ * @return process check command
+ */
+ public static String generateProcessCheckCommand(ConfigFile connectorConfig, ConfigFile workerConfig) {
+ return String.format("ConnectStandalone %s %s", workerConfig.getFilePath(), connectorConfig.getFilePath());
+ }
+
+ /**
+ * get incremental source status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return incremental source status file path
+ */
+ public static String getIncrementalSourceStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusDirPath = taskWorkspace.getStatusIncrementalDirPath();
+ return getDebeziumLatestStatusFilePath(statusDirPath, DebeziumConstants.INCREMENTAL_SOURCE_STATUS_FILE_PREFIX);
+ }
+
+ /**
+ * get incremental sink status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return incremental sink status file path
+ */
+ public static String getIncrementalSinkStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusDirPath = taskWorkspace.getStatusIncrementalDirPath();
+ return getDebeziumLatestStatusFilePath(statusDirPath, DebeziumConstants.INCREMENTAL_SINK_STATUS_FILE_PREFIX);
+ }
+
+ /**
+ * get reverse source status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return reverse source status file path
+ */
+ public static String getReverseSourceStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusDirPath = taskWorkspace.getStatusReverseDirPath();
+ return getDebeziumLatestStatusFilePath(statusDirPath, DebeziumConstants.REVERSE_SOURCE_STATUS_FILE_PREFIX);
+ }
+
+ /**
+ * get reverse sink status file path
+ *
+ * @param taskWorkspace task workspace
+ * @return reverse sink status file path
+ */
+ public static String getReverseSinkStatusFilePath(TaskWorkspace taskWorkspace) {
+ String statusDirPath = taskWorkspace.getStatusReverseDirPath();
+ return getDebeziumLatestStatusFilePath(statusDirPath, DebeziumConstants.REVERSE_SINK_STATUS_FILE_PREFIX);
+ }
+
+ /**
+ * get debezium incremental fail sql file path
+ *
+ * @param taskWorkspace task workspace
+ * @return debezium incremental fail sql file path
+ */
+ public static String getDebeziumIncrementalFailSqlFilePath(TaskWorkspace taskWorkspace) {
+ String statusDirPath = taskWorkspace.getStatusIncrementalDirPath();
+ return String.format("%s/%s", statusDirPath, DebeziumConstants.FAIL_SQL_FILE_NAME);
+ }
+
+ /**
+ * get debezium reverse fail sql file path
+ *
+ * @param taskWorkspace task workspace
+ * @return debezium reverse fail sql file path
+ */
+ public static String getDebeziumReverseFailSqlFilePath(TaskWorkspace taskWorkspace) {
+ String statusDirPath = taskWorkspace.getStatusReverseDirPath();
+ return String.format("%s/%s", statusDirPath, DebeziumConstants.FAIL_SQL_FILE_NAME);
+ }
+
+ /**
+ * parse debezium sink status file to debezium sink status entry
+ *
+ * @param filePath status file path
+ * @return debezium sink status entry
+ */
+    public static Optional<DebeziumSinkStatusEntry> parseDebeziumSinkStatusFile(String filePath) {
+ Path statusPath = Path.of(filePath);
+ if (!Files.exists(statusPath)) {
+ return Optional.empty();
+ }
+
+ try {
+ String text = Files.readString(statusPath);
+ if (!StringUtils.isNullOrBlank(text)) {
+ return Optional.ofNullable(JSON.parseObject(text, DebeziumSinkStatusEntry.class,
+ JSONReader.Feature.IgnoreAutoTypeNotMatch));
+ }
+ } catch (IOException | JSONException e) {
+ LOGGER.warn("Failed to read or parse debezium sink progress, error: {}", e.getMessage());
+ }
+ return Optional.empty();
+ }
+
+ /**
+ * parse debezium source status file to debezium source status entry
+ *
+ * @param filePath status file path
+ * @return debezium source status entry
+ */
+    public static Optional<DebeziumSourceStatusEntry> parseDebeziumSourceStatusFile(String filePath) {
+ Path statusPath = Path.of(filePath);
+ if (!Files.exists(statusPath)) {
+ return Optional.empty();
+ }
+
+ try {
+ String text = Files.readString(statusPath);
+ if (!StringUtils.isNullOrBlank(text)) {
+ return Optional.ofNullable(JSON.parseObject(
+ text, DebeziumSourceStatusEntry.class, JSONReader.Feature.IgnoreAutoTypeNotMatch));
+ }
+ } catch (IOException | JSONException e) {
+ LOGGER.warn("Failed to read or parse debezium source progress, error: {}", e.getMessage());
+ }
+ return Optional.empty();
+ }
+
+    private static String getDebeziumLatestStatusFilePath(String fileParentDir, String statusFilePrefix) {
+        String result = "";
+
+        File directory = new File(fileParentDir);
+        if (directory.exists() && directory.isDirectory()) {
+            File[] dirListFiles = directory.listFiles();
+            result = Optional.ofNullable(dirListFiles)
+                    .map(files -> getLatestFileName(files, statusFilePrefix))
+                    .orElse("");
+        }
+        return result;
+    }
+
+    private static String getLatestFileName(File[] dirListFiles, String target) {
+ File targetFile = null;
+ for (File dirListFile : dirListFiles) {
+ if (!dirListFile.getName().contains(target)) {
+ continue;
+ }
+
+ if (targetFile == null) {
+ targetFile = dirListFile;
+ continue;
+ }
+
+ if (dirListFile.lastModified() > targetFile.lastModified()) {
+ targetFile = dirListFile;
+ }
+ }
+
+ try {
+ if (targetFile != null) {
+ return targetFile.getCanonicalPath();
+ }
+ } catch (IOException e) {
+ LOGGER.trace("Failed to get latest file path, error: {}", e.getMessage());
+ }
+ return "";
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/helper/tool/FullMigrationToolHelper.java b/multidb-portal/src/main/java/org/opengauss/migration/helper/tool/FullMigrationToolHelper.java
new file mode 100644
index 0000000000000000000000000000000000000000..3c187a98f2ef04c53f98bb78c9b93ab63a170fd3
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/helper/tool/FullMigrationToolHelper.java
@@ -0,0 +1,132 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.helper.tool;
+
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONException;
+import com.alibaba.fastjson2.JSONReader;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.tool.FullMigrationToolConstants;
+import org.opengauss.domain.model.ConfigFile;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.migration.progress.model.tool.FullMigrationToolStatusEntry;
+import org.opengauss.migration.tools.FullMigrationTool;
+import org.opengauss.utils.StringUtils;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.Optional;
+
+/**
+ * full migration tool helper
+ *
+ * @since 2025/5/29
+ */
+public class FullMigrationToolHelper {
+ private static final Logger LOGGER = LogManager.getLogger(FullMigrationToolHelper.class);
+
+ private FullMigrationToolHelper() {
+ }
+
+ /**
+ * generate full migration tool process start command
+ *
+ * @param fullConfig full config file
+ * @param sourceDbType source db type
+ * @param fullMigrationToolOrder full migration tool order
+ * @param jvmPrefixOptions jvm prefix options
+ * @return process start command
+ */
+ public static String generateProcessStartCommand(
+ ConfigFile fullConfig, String sourceDbType, String fullMigrationToolOrder, String jvmPrefixOptions) {
+ StringBuilder commandBuilder = new StringBuilder();
+
+ String jarPath = FullMigrationTool.getInstance().getJarPath();
+ commandBuilder.append("java").append(" ")
+ .append(jvmPrefixOptions).append(" ")
+ .append("-jar").append(" ").append(jarPath).append(" ")
+ .append("--start").append(" ").append(fullMigrationToolOrder).append(" ")
+ .append("--source").append(" ").append(sourceDbType).append(" ")
+ .append("--config").append(" ").append(fullConfig.getFilePath());
+
+ return commandBuilder.toString();
+ }
+
+ /**
+ * generate full migration tool process check command
+ *
+ * @param fullConfig full config file
+ * @param sourceDbType source db type
+ * @param fullMigrationToolOrder full migration tool order
+ * @param jvmPrefixOptions jvm prefix options
+ * @return process check command
+ */
+ public static String generateProcessCheckCommand(
+ ConfigFile fullConfig, String sourceDbType, String fullMigrationToolOrder, String jvmPrefixOptions) {
+ return generateProcessStartCommand(fullConfig, sourceDbType, fullMigrationToolOrder, jvmPrefixOptions);
+ }
+
+ /**
+ * generate full migration log path
+ *
+ * @param taskWorkspace task workspace
+ * @return log path
+ */
+ public static String generateFullMigrationLogPath(TaskWorkspace taskWorkspace) {
+ return String.format("%s/%s", taskWorkspace.getLogsFullDirPath(), "full_migration.log");
+ }
+
+ /**
+ * get full migration tool process stop sign
+ *
+ * @param fullMigrationToolOrder full migration tool order
+ * @return process stop sign
+ */
+ public static String getProcessStopSign(String fullMigrationToolOrder) {
+ if (FullMigrationToolConstants.ORDER_DROP_REPLICA_SCHEMA.equals(fullMigrationToolOrder)) {
+ return "drop replica schema(sch_debezium) success.";
+ }
+ return fullMigrationToolOrder + " migration complete. full report thread is close.";
+ }
+
+ /**
+ * generate full migration tool order status file path
+ *
+ * @param taskWorkspace task workspace
+ * @param fullMigrationToolOrder full migration tool order
+ * @return order status file path
+ */
+ public static String generateOrderStatusFilePath(TaskWorkspace taskWorkspace, String fullMigrationToolOrder) {
+ return String.format("%s/%s.json", taskWorkspace.getStatusFullDirPath(), fullMigrationToolOrder);
+ }
+
+ /**
+ * parse full migration tool status file to full migration tool status entry
+ *
+ * @param statusFilePath status file path
+ * @return full migration tool status entry
+ */
+    public static Optional<FullMigrationToolStatusEntry> parseToolStatusFile(String statusFilePath) {
+ Path statusPath = Path.of(statusFilePath);
+ try {
+ if (!Files.exists(statusPath)) {
+ return Optional.empty();
+ }
+
+ String text = Files.readString(statusPath);
+ if (!StringUtils.isNullOrBlank(text)) {
+ return Optional.ofNullable(JSON.parseObject(text, FullMigrationToolStatusEntry.class,
+ JSONReader.Feature.IgnoreAutoTypeNotMatch));
+ }
+ } catch (IOException | JSONException e) {
+ LOGGER.warn("Failed to read or parse full migration tool progress, error: {}", e.getMessage());
+ }
+ return Optional.empty();
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/job/AbstractMigrationJob.java b/multidb-portal/src/main/java/org/opengauss/migration/job/AbstractMigrationJob.java
new file mode 100644
index 0000000000000000000000000000000000000000..8b11afd853749abe47d87c62159a69041eebf720
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/job/AbstractMigrationJob.java
@@ -0,0 +1,181 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.job;
+
+import org.opengauss.domain.dto.AbstractMigrationConfigDto;
+import org.opengauss.domain.model.MigrationStopIndicator;
+import org.opengauss.migration.process.ProcessMonitor;
+import org.opengauss.migration.status.StatusMonitor;
+import org.opengauss.migration.tasks.phase.FullDataCheckTask;
+import org.opengauss.migration.tasks.phase.FullMigrationTask;
+import org.opengauss.migration.tasks.phase.IncrementalDataCheckTask;
+import org.opengauss.migration.tasks.phase.IncrementalMigrationTask;
+import org.opengauss.migration.tasks.phase.ReverseMigrationTask;
+import org.opengauss.utils.JdbcUtils;
+import org.opengauss.utils.OpenGaussUtils;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+
+/**
+ * Abstract migration job
+ *
+ * @since 2025/7/2
+ */
+public abstract class AbstractMigrationJob {
+ /**
+ * Has full migration
+ */
+ protected boolean hasFullMigration;
+
+ /**
+ * Has full data check
+ */
+ protected boolean hasFullDataCheck;
+
+ /**
+ * Has incremental migration
+ */
+ protected boolean hasIncrementalMigration;
+
+ /**
+ * Has incremental data check
+ */
+ protected boolean hasIncrementalDataCheck;
+
+ /**
+ * Has reverse migration
+ */
+ protected boolean hasReverseMigration;
+
+ /**
+ * Full migration task
+ */
+ protected FullMigrationTask fullMigrationTask;
+
+ /**
+ * Full data check task
+ */
+ protected FullDataCheckTask fullDataCheckTask;
+
+ /**
+ * Incremental migration task
+ */
+ protected IncrementalMigrationTask incrementalMigrationTask;
+
+ /**
+ * Incremental data check task
+ */
+ protected IncrementalDataCheckTask incrementalDataCheckTask;
+
+ /**
+ * Reverse migration task
+ */
+ protected ReverseMigrationTask reverseMigrationTask;
+
+ /**
+ * Pre migration verify
+ *
+ * @return true if pre-migration verify success, false otherwise
+ */
+ public abstract boolean preMigrationVerify();
+
+ /**
+ * Before migration
+ */
+ public abstract void beforeTask();
+
+ /**
+ * Start migration
+ *
+ * @param migrationStopIndicator migration stop indicator
+ * @param processMonitor process monitor
+ * @param statusMonitor status manager
+ */
+ public abstract void startTask(MigrationStopIndicator migrationStopIndicator, ProcessMonitor processMonitor,
+ StatusMonitor statusMonitor);
+
+ /**
+ * Stop incremental migration
+ *
+ * @param migrationStopIndicator migration stop indicator
+ * @param statusMonitor status manager
+ */
+ public abstract void stopIncremental(MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor);
+
+ /**
+ * Resume incremental migration
+ *
+ * @param statusMonitor status manager
+ */
+ public abstract void resumeIncremental(StatusMonitor statusMonitor);
+
+ /**
+ * Restart incremental migration
+ *
+ * @param migrationStopIndicator migration stop indicator
+ * @param statusMonitor status manager
+ */
+ public abstract void restartIncremental(MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor);
+
+ /**
+ * Start reverse migration
+ *
+ * @param migrationStopIndicator migration stop indicator
+ * @param statusMonitor status manager
+ */
+ public abstract void startReverse(MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor);
+
+ /**
+ * Stop reverse migration
+ *
+ * @param statusMonitor status manager
+ */
+ public abstract void stopReverse(StatusMonitor statusMonitor);
+
+ /**
+ * Resume reverse migration
+ *
+ * @param statusMonitor status manager
+ */
+ public abstract void resumeReverse(StatusMonitor statusMonitor);
+
+ /**
+ * Restart reverse migration
+ *
+ * @param migrationStopIndicator migration stop indicator
+ * @param statusMonitor status manager
+ */
+ public abstract void restartReverse(MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor);
+
+ /**
+ * Stop migration
+ */
+ public abstract void stopTask();
+
+ /**
+ * Adjust kernel fsync param
+ *
+ * @param isOn whether fsync is on
+ * @param migrationConfigDto migration config dto
+ * @throws SQLException sql exception
+ */
+ protected void adjustKernelFsyncParam(boolean isOn, AbstractMigrationConfigDto migrationConfigDto)
+ throws SQLException {
+        if (!"true".equalsIgnoreCase(migrationConfigDto.getIsAdjustKernelParam())) {
+ return;
+ }
+
+ String fsyncParam = "fsync";
+ String fsyncValue = isOn ? "on" : "off";
+ try (Connection connection = JdbcUtils.getOpengaussConnection(migrationConfigDto.getOpenGaussConnectInfo())) {
+ OpenGaussUtils.alterSystemSet(fsyncParam, fsyncValue, connection);
+ }
+ }
+
+ abstract void generateTasks(MigrationStopIndicator migrationStopIndicator, ProcessMonitor processMonitor);
+
+ abstract void afterTask();
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/job/MysqlMigrationJob.java b/multidb-portal/src/main/java/org/opengauss/migration/job/MysqlMigrationJob.java
new file mode 100644
index 0000000000000000000000000000000000000000..b295b9c7838996d70bcff2a8c22776a3e299028a
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/job/MysqlMigrationJob.java
@@ -0,0 +1,474 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.job;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.domain.dto.MysqlMigrationConfigDto;
+import org.opengauss.domain.model.MigrationStopIndicator;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.MigrationStatusEnum;
+import org.opengauss.exceptions.MigrationException;
+import org.opengauss.migration.config.MysqlMigrationJobConfig;
+import org.opengauss.migration.executor.TaskAssistantExecutor;
+import org.opengauss.migration.helper.TaskHelper;
+import org.opengauss.migration.process.ProcessMonitor;
+import org.opengauss.migration.status.StatusMonitor;
+import org.opengauss.migration.tasks.impl.ChameleonMysqlFullMigrationTask;
+import org.opengauss.migration.tasks.impl.DataCheckerMysqlFullDataCheckTask;
+import org.opengauss.migration.tasks.impl.DataCheckerMysqlIncrementalDataCheckTask;
+import org.opengauss.migration.tasks.impl.DebeziumMysqlIncrementalMigrationTask;
+import org.opengauss.migration.tasks.impl.DebeziumMysqlReverseMigrationTask;
+import org.opengauss.migration.verify.VerifyManager;
+
+import java.sql.SQLException;
+
+/**
+ * MySQL migration job
+ *
+ * @since 2025/7/2
+ */
+public class MysqlMigrationJob extends AbstractMigrationJob {
+ private static final Logger LOGGER = LogManager.getLogger(MysqlMigrationJob.class);
+
+ private final MysqlMigrationJobConfig migrationJobConfig;
+
+ private boolean hasDoBeforeReverse = false;
+ private boolean hasAdjustKernelParam = false;
+
+    /**
+     * Constructor
+     *
+     * @param migrationJobConfig mysql migration job config
+     */
+ public MysqlMigrationJob(MysqlMigrationJobConfig migrationJobConfig) {
+ this.migrationJobConfig = migrationJobConfig;
+ this.hasFullMigration = migrationJobConfig.hasFullMigration();
+ this.hasFullDataCheck = migrationJobConfig.hasFullDataCheck();
+ this.hasIncrementalMigration = migrationJobConfig.hasIncrementalMigration();
+ this.hasIncrementalDataCheck = migrationJobConfig.hasIncrementalDataCheck();
+ this.hasReverseMigration = migrationJobConfig.hasReverseMigration();
+ }
+
+ @Override
+ public boolean preMigrationVerify() {
+ return VerifyManager.mysqlMigrationVerify(migrationJobConfig.getMigrationPhaseList(),
+ migrationJobConfig.getMigrationConfigDto(), migrationJobConfig.getTaskWorkspace());
+ }
+
+ @Override
+ public void beforeTask() {
+ try {
+ adjustKernelFsyncParam(false, migrationJobConfig.getMigrationConfigDto());
+ hasAdjustKernelParam = true;
+ } catch (SQLException e) {
+ throw new MigrationException("Adjust kernel parameter fsync failed", e);
+ }
+ }
+
+ @Override
+ public void startTask(MigrationStopIndicator migrationStopIndicator, ProcessMonitor processMonitor,
+ StatusMonitor statusMonitor) {
+ TaskHelper.changePhasesConfig(migrationJobConfig);
+ generateTasks(migrationStopIndicator, processMonitor);
+ TaskAssistantExecutor executor = getTaskExecutor(migrationStopIndicator, statusMonitor);
+ executor.execute();
+ }
+
+ @Override
+ public synchronized void stopIncremental(
+ MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor) {
+ if (!hasIncrementalMigration) {
+ LOGGER.warn("No incremental migration phase, unable to stop incremental migration");
+ return;
+ }
+
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+        if (!MigrationStatusEnum.INCREMENTAL_MIGRATION_RUNNING.equals(currentStatus)
+                && !MigrationStatusEnum.INCREMENTAL_MIGRATION_INTERRUPTED.equals(currentStatus)) {
+ LOGGER.warn("Can not stop incremental migration, incremental migration is not running or interrupted");
+ return;
+ }
+
+ if (hasIncrementalDataCheck) {
+ incrementalDataCheckTask.stopTask();
+ LOGGER.info("Stop incremental data check successfully");
+ }
+ incrementalMigrationTask.stopTask();
+
+ if (hasFullMigration && fullMigrationTask.isForeignKeyMigrated()) {
+ LOGGER.info("Migrate foreign key");
+ fullMigrationTask.migrateForeignKey();
+ }
+
+ if (!migrationStopIndicator.isStopped() && hasReverseMigration && !hasDoBeforeReverse) {
+ reverseMigrationTask.beforeTask();
+ hasDoBeforeReverse = true;
+ }
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_FINISHED);
+ LOGGER.info("Stop incremental migration successfully");
+ }
+
+ @Override
+ public synchronized void resumeIncremental(StatusMonitor statusMonitor) {
+ if (!hasIncrementalMigration) {
+ LOGGER.warn("No incremental migration phase, unable to resume incremental migration");
+ return;
+ }
+
+        MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+        if (!MigrationStatusEnum.INCREMENTAL_MIGRATION_INTERRUPTED.equals(currentStatus)) {
+ LOGGER.warn("Can not resume incremental migration, incremental migration is not interrupted");
+ return;
+ }
+
+ incrementalMigrationTask.resumeTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_RUNNING);
+ LOGGER.info("Resume incremental migration successfully");
+ }
+
+ @Override
+ public synchronized void restartIncremental(
+ MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor) {
+ if (!hasIncrementalMigration) {
+ LOGGER.warn("No incremental migration phase, unable to restart incremental migration");
+ return;
+ }
+
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (MigrationStatusEnum.INCREMENTAL_MIGRATION_FINISHED.equals(currentStatus)) {
+ if (!migrationStopIndicator.isStopped()) {
+ clearBeforeReverse();
+
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.START_INCREMENTAL_MIGRATION);
+ incrementalMigrationTask.startSource();
+ incrementalMigrationTask.startSink();
+
+ if (hasIncrementalDataCheck) {
+ incrementalDataCheckTask.startTask();
+ }
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_RUNNING);
+ }
+ } else if (MigrationStatusEnum.INCREMENTAL_MIGRATION_INTERRUPTED.equals(currentStatus)
+ || MigrationStatusEnum.INCREMENTAL_MIGRATION_RUNNING.equals(currentStatus)) {
+ if (!migrationStopIndicator.isStopped()) {
+ if (hasIncrementalDataCheck) {
+ incrementalDataCheckTask.stopTask();
+ }
+ incrementalMigrationTask.stopTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_FINISHED);
+
+ clearBeforeReverse();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.START_INCREMENTAL_MIGRATION);
+ incrementalMigrationTask.startSource();
+ incrementalMigrationTask.startSink();
+ if (hasIncrementalDataCheck) {
+ incrementalDataCheckTask.startTask();
+ }
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_RUNNING);
+ }
+ } else {
+            LOGGER.warn("Can not restart incremental migration,"
+                + " incremental migration is not finished, interrupted, or running");
+ return;
+ }
+ LOGGER.info("Restart incremental migration successfully");
+ }
+
+ @Override
+ public synchronized void startReverse(MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor) {
+ if (!hasReverseMigration) {
+ LOGGER.warn("No reverse migration phase, unable to start reverse migration");
+ return;
+ }
+
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (MigrationStatusEnum.START_REVERSE_MIGRATION.equals(currentStatus)
+ || MigrationStatusEnum.REVERSE_MIGRATION_RUNNING.equals(currentStatus)
+ || MigrationStatusEnum.REVERSE_MIGRATION_INTERRUPTED.equals(currentStatus)) {
+            LOGGER.warn("Reverse migration is already running or interrupted, "
+                + "unable to start reverse migration again");
+ return;
+ }
+
+ if (!isPreReversePhaseFinished(statusMonitor)) {
+ LOGGER.warn("Can not start reverse migration, the previous phase task is not completed");
+ return;
+ }
+
+ if (migrationStopIndicator.isStopped()) {
+ return;
+ }
+
+ if (VerifyManager.mysqlReversePhaseVerify(migrationJobConfig.getMigrationConfigDto(),
+ migrationJobConfig.getTaskWorkspace())) {
+ if (!hasDoBeforeReverse) {
+ reverseMigrationTask.beforeTask();
+ }
+ executeReverseTask(statusMonitor);
+ LOGGER.info("Start reverse migration successfully");
+ } else {
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.PRE_REVERSE_PHASE_VERIFY_FAILED);
+ LOGGER.info("Reverse migration verify failed, skip reverse migration");
+ }
+ }
+
+ @Override
+ public synchronized void stopReverse(StatusMonitor statusMonitor) {
+ if (!hasReverseMigration) {
+ LOGGER.warn("No reverse migration phase, unable to stop reverse migration");
+ return;
+ }
+
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (!MigrationStatusEnum.REVERSE_MIGRATION_RUNNING.equals(currentStatus)
+ && !MigrationStatusEnum.REVERSE_MIGRATION_INTERRUPTED.equals(currentStatus)) {
+ LOGGER.warn("Can not stop reverse migration, reverse migration is not running or interrupted");
+ return;
+ }
+
+ reverseMigrationTask.stopTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.REVERSE_MIGRATION_FINISHED);
+ LOGGER.info("Stop reverse migration successfully");
+ }
+
+ @Override
+ public synchronized void resumeReverse(StatusMonitor statusMonitor) {
+ if (!hasReverseMigration) {
+ LOGGER.warn("No reverse migration phase, unable to resume reverse migration");
+ return;
+ }
+
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (!MigrationStatusEnum.REVERSE_MIGRATION_INTERRUPTED.equals(currentStatus)) {
+ LOGGER.warn("Can not resume reverse migration, reverse migration is not interrupted");
+ return;
+ }
+
+ reverseMigrationTask.resumeTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.REVERSE_MIGRATION_RUNNING);
+ LOGGER.info("Resume reverse migration successfully");
+ }
+
+ @Override
+ public synchronized void restartReverse(
+ MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor) {
+ if (!hasReverseMigration) {
+ LOGGER.warn("No reverse migration phase, unable to restart reverse migration");
+ return;
+ }
+
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (MigrationStatusEnum.REVERSE_MIGRATION_FINISHED.equals(currentStatus)) {
+ if (!migrationStopIndicator.isStopped()) {
+ executeReverseTask(statusMonitor);
+ }
+ } else if (MigrationStatusEnum.REVERSE_MIGRATION_INTERRUPTED.equals(currentStatus)
+ || MigrationStatusEnum.REVERSE_MIGRATION_RUNNING.equals(currentStatus)) {
+ if (!migrationStopIndicator.isStopped()) {
+ reverseMigrationTask.stopTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.REVERSE_MIGRATION_FINISHED);
+
+ executeReverseTask(statusMonitor);
+ }
+ } else {
+            LOGGER.warn("Can not restart reverse migration,"
+                + " reverse migration is not finished, interrupted, or running");
+ return;
+ }
+ LOGGER.info("Restart reverse migration successfully");
+ }
+
+ @Override
+ public synchronized void stopTask() {
+ if (hasFullMigration && fullMigrationTask != null) {
+ fullMigrationTask.stopTask();
+ }
+ if (hasFullDataCheck && fullDataCheckTask != null) {
+ fullDataCheckTask.stopTask();
+ }
+ if (hasIncrementalMigration && incrementalMigrationTask != null) {
+ if (hasIncrementalDataCheck && incrementalDataCheckTask != null) {
+ incrementalDataCheckTask.stopTask();
+ }
+ incrementalMigrationTask.stopTask();
+
+            if (hasFullMigration && fullMigrationTask != null && fullMigrationTask.isForeignKeyMigrated()) {
+ LOGGER.info("Migrate foreign key");
+ fullMigrationTask.migrateForeignKey();
+ }
+ }
+ if (hasReverseMigration && reverseMigrationTask != null) {
+ reverseMigrationTask.stopTask();
+ }
+
+ afterTask();
+ }
+
+ @Override
+ void generateTasks(MigrationStopIndicator migrationStopIndicator, ProcessMonitor processMonitor) {
+ TaskWorkspace taskWorkspace = migrationJobConfig.getTaskWorkspace();
+ MysqlMigrationConfigDto migrationConfigDto = migrationJobConfig.getMigrationConfigDto();
+ if (hasFullMigration) {
+ fullMigrationTask = new ChameleonMysqlFullMigrationTask(taskWorkspace, migrationStopIndicator,
+ migrationJobConfig.getFullConfigBundle());
+ }
+
+ if (hasFullDataCheck) {
+ fullDataCheckTask = new DataCheckerMysqlFullDataCheckTask(processMonitor, migrationStopIndicator,
+ taskWorkspace, migrationConfigDto, migrationJobConfig.getFullDataCheckConfigBundle());
+ }
+
+ if (hasIncrementalMigration) {
+ incrementalMigrationTask = new DebeziumMysqlIncrementalMigrationTask(processMonitor, migrationStopIndicator,
+ taskWorkspace, migrationConfigDto, migrationJobConfig.getIncrementalConfigBundle());
+
+ if (hasIncrementalDataCheck) {
+ incrementalDataCheckTask = new DataCheckerMysqlIncrementalDataCheckTask(processMonitor,
+ migrationStopIndicator, taskWorkspace, migrationConfigDto,
+ migrationJobConfig.getIncrementalDataCheckConfigBundle());
+ }
+ }
+
+ if (hasReverseMigration) {
+ reverseMigrationTask = new DebeziumMysqlReverseMigrationTask(processMonitor, migrationStopIndicator,
+ taskWorkspace, migrationConfigDto, migrationJobConfig.getReverseConfigBundle());
+ }
+ }
+
+ @Override
+ void afterTask() {
+ if (hasAdjustKernelParam) {
+ try {
+ adjustKernelFsyncParam(true, migrationJobConfig.getMigrationConfigDto());
+ } catch (SQLException e) {
+ LOGGER.error("Adjust kernel parameter fsync failed, please manually restore it to on", e);
+ }
+ }
+
+ if (hasFullMigration && fullMigrationTask != null) {
+ fullMigrationTask.afterTask();
+ }
+
+ if (hasFullDataCheck && fullDataCheckTask != null) {
+ fullDataCheckTask.afterTask();
+ }
+
+ if (hasIncrementalMigration && incrementalMigrationTask != null) {
+ incrementalMigrationTask.afterTask();
+ if (hasIncrementalDataCheck && incrementalDataCheckTask != null) {
+ incrementalDataCheckTask.afterTask();
+ }
+ }
+
+ if (hasReverseMigration && reverseMigrationTask != null) {
+ reverseMigrationTask.afterTask();
+ }
+ }
+
+ private TaskAssistantExecutor getTaskExecutor(
+ MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor) {
+ TaskAssistantExecutor executor = new TaskAssistantExecutor(migrationStopIndicator);
+ if (hasFullMigration) {
+ executor.addStep(() -> {
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.START_FULL_MIGRATION);
+ fullMigrationTask.beforeTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.FULL_MIGRATION_RUNNING);
+ fullMigrationTask.migrateTable();
+ });
+ }
+
+ if (hasIncrementalMigration) {
+ executor.addStep(() -> {
+ incrementalMigrationTask.beforeTask();
+ incrementalMigrationTask.startSource();
+ });
+ }
+
+ if (hasFullMigration) {
+ if ("true".equals(migrationJobConfig.getMigrationConfigDto().getIsMigrationObject())) {
+ executor.addStep(() -> fullMigrationTask.migrateObject());
+ } else {
+ executor.addStep(() -> {
+ if (!(fullMigrationTask instanceof ChameleonMysqlFullMigrationTask)) {
+ throw new IllegalArgumentException("Full migration task is not "
+ + "ChameleonMysqlFullMigrationTask");
+ }
+ ChameleonMysqlFullMigrationTask chameleonTask = (ChameleonMysqlFullMigrationTask) fullMigrationTask;
+ chameleonTask.waitTableMigrationExit();
+ });
+ }
+
+            if (!hasIncrementalMigration) {
+                executor.addStep(() -> fullMigrationTask.migrateForeignKey());
+            }
+ executor.addStep(() -> statusMonitor.setCurrentStatus(MigrationStatusEnum.FULL_MIGRATION_FINISHED));
+ }
+
+ if (hasFullDataCheck) {
+ executor.addStep(() -> executeFullDataCheckTask(statusMonitor));
+ }
+ addIncrementalAndReversePhase(executor, statusMonitor);
+ return executor;
+ }
+
+ private void addIncrementalAndReversePhase(TaskAssistantExecutor executor, StatusMonitor statusMonitor) {
+ if (hasIncrementalMigration) {
+ executor.addStep(() -> {
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.START_INCREMENTAL_MIGRATION);
+ incrementalMigrationTask.startSource();
+ incrementalMigrationTask.startSink();
+ });
+ if (hasIncrementalDataCheck) {
+ executor.addStep(() -> {
+ incrementalDataCheckTask.beforeTask();
+ incrementalDataCheckTask.startTask();
+ });
+ }
+ executor.addStep(() -> statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_RUNNING));
+ }
+
+ if (!hasFullMigration && !hasFullDataCheck && !hasIncrementalMigration && hasReverseMigration) {
+ executor.addStep(() -> {
+ reverseMigrationTask.beforeTask();
+ executeReverseTask(statusMonitor);
+ });
+ }
+ }
+
+ private boolean isPreReversePhaseFinished(StatusMonitor statusMonitor) {
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (hasIncrementalMigration) {
+ return MigrationStatusEnum.INCREMENTAL_MIGRATION_FINISHED.equals(currentStatus);
+ }
+
+ if (hasFullDataCheck) {
+ return MigrationStatusEnum.FULL_DATA_CHECK_FINISHED.equals(currentStatus);
+ }
+
+ if (hasFullMigration) {
+ return MigrationStatusEnum.FULL_MIGRATION_FINISHED.equals(currentStatus);
+ }
+ return true;
+ }
+
+ private void executeFullDataCheckTask(StatusMonitor statusMonitor) {
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.START_FULL_DATA_CHECK);
+ fullDataCheckTask.beforeTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.FULL_DATA_CHECK_RUNNING);
+ fullDataCheckTask.startTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.FULL_DATA_CHECK_FINISHED);
+ }
+
+ private void executeReverseTask(StatusMonitor statusMonitor) {
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.START_REVERSE_MIGRATION);
+ reverseMigrationTask.startSource();
+ reverseMigrationTask.startSink();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.REVERSE_MIGRATION_RUNNING);
+ }
+
+ private void clearBeforeReverse() {
+ if (hasDoBeforeReverse) {
+ reverseMigrationTask.afterTask();
+ hasDoBeforeReverse = false;
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/job/PgsqlMigrationJob.java b/multidb-portal/src/main/java/org/opengauss/migration/job/PgsqlMigrationJob.java
new file mode 100644
index 0000000000000000000000000000000000000000..a1a663d34bc9e953cb3d0c0ddf7d7f0aa207b307
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/job/PgsqlMigrationJob.java
@@ -0,0 +1,420 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.job;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.domain.dto.PgsqlMigrationConfigDto;
+import org.opengauss.domain.model.MigrationStopIndicator;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.MigrationStatusEnum;
+import org.opengauss.exceptions.MigrationException;
+import org.opengauss.migration.config.PgsqlMigrationJobConfig;
+import org.opengauss.migration.executor.TaskAssistantExecutor;
+import org.opengauss.migration.helper.TaskHelper;
+import org.opengauss.migration.process.ProcessMonitor;
+import org.opengauss.migration.status.StatusMonitor;
+import org.opengauss.migration.tasks.impl.DebeziumPgsqlIncrementalMigrationTask;
+import org.opengauss.migration.tasks.impl.DebeziumPgsqlReverseMigrationTask;
+import org.opengauss.migration.tasks.impl.FullMigrationToolPgsqlFullMigrationTask;
+import org.opengauss.migration.verify.VerifyManager;
+
+import java.sql.SQLException;
+
+/**
+ * PostgreSQL migration job
+ *
+ * @since 2025/7/3
+ */
+public class PgsqlMigrationJob extends AbstractMigrationJob {
+ private static final Logger LOGGER = LogManager.getLogger(PgsqlMigrationJob.class);
+
+ private final PgsqlMigrationJobConfig migrationJobConfig;
+
+ private boolean hasDoBeforeReverse = false;
+ private boolean hasAdjustKernelParam = false;
+
+    /**
+     * Constructor
+     *
+     * @param migrationJobConfig pgsql migration job config
+     */
+ public PgsqlMigrationJob(PgsqlMigrationJobConfig migrationJobConfig) {
+ this.migrationJobConfig = migrationJobConfig;
+ this.hasFullMigration = migrationJobConfig.hasFullMigration();
+ this.hasIncrementalMigration = migrationJobConfig.hasIncrementalMigration();
+ this.hasReverseMigration = migrationJobConfig.hasReverseMigration();
+ }
+
+ @Override
+ public boolean preMigrationVerify() {
+ return VerifyManager.pgsqlMigrationVerify(migrationJobConfig.getMigrationPhaseList(),
+ migrationJobConfig.getMigrationConfigDto(), migrationJobConfig.getTaskWorkspace());
+ }
+
+ @Override
+ public void beforeTask() {
+ try {
+ adjustKernelFsyncParam(false, migrationJobConfig.getMigrationConfigDto());
+ hasAdjustKernelParam = true;
+ } catch (SQLException e) {
+ throw new MigrationException("Adjust kernel parameter fsync failed", e);
+ }
+ }
+
+ @Override
+ public void startTask(MigrationStopIndicator migrationStopIndicator, ProcessMonitor processMonitor,
+ StatusMonitor statusMonitor) {
+ TaskHelper.changePhasesConfig(migrationJobConfig);
+ generateTasks(migrationStopIndicator, processMonitor);
+ TaskAssistantExecutor executor = getTaskExecutor(migrationStopIndicator, statusMonitor);
+ executor.execute();
+ }
+
+ @Override
+ public synchronized void stopIncremental(
+ MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor) {
+ if (!hasIncrementalMigration) {
+ LOGGER.warn("No incremental migration phase, unable to stop incremental migration");
+ return;
+ }
+
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (!MigrationStatusEnum.INCREMENTAL_MIGRATION_RUNNING.equals(currentStatus)
+ && !MigrationStatusEnum.INCREMENTAL_MIGRATION_INTERRUPTED.equals(currentStatus)) {
+ LOGGER.warn("Can not stop incremental migration, incremental migration is not running or interrupted");
+ return;
+ }
+
+ incrementalMigrationTask.stopTask();
+
+ if (hasFullMigration && fullMigrationTask.isForeignKeyMigrated()) {
+ LOGGER.info("Migrate foreign key");
+ fullMigrationTask.migrateForeignKey();
+ }
+
+ if (!migrationStopIndicator.isStopped() && hasReverseMigration && !hasDoBeforeReverse) {
+ reverseMigrationTask.beforeTask();
+ hasDoBeforeReverse = true;
+ }
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_FINISHED);
+ LOGGER.info("Stop incremental migration successfully");
+ }
+
+ @Override
+ public synchronized void resumeIncremental(StatusMonitor statusMonitor) {
+ if (!hasIncrementalMigration) {
+ LOGGER.warn("No incremental migration phase, unable to resume incremental migration");
+ return;
+ }
+
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (!MigrationStatusEnum.INCREMENTAL_MIGRATION_INTERRUPTED.equals(currentStatus)) {
+ LOGGER.warn("Can not resume incremental migration, incremental migration is not interrupted");
+ return;
+ }
+
+ incrementalMigrationTask.resumeTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_RUNNING);
+ LOGGER.info("Resume incremental migration successfully");
+ }
+
+ @Override
+ public synchronized void restartIncremental(
+ MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor) {
+ if (!hasIncrementalMigration) {
+ LOGGER.warn("No incremental migration phase, unable to restart incremental migration");
+ return;
+ }
+
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (MigrationStatusEnum.INCREMENTAL_MIGRATION_FINISHED.equals(currentStatus)) {
+ if (!migrationStopIndicator.isStopped()) {
+ clearBeforeReverse();
+
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.START_INCREMENTAL_MIGRATION);
+ incrementalMigrationTask.startSource();
+ incrementalMigrationTask.startSink();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_RUNNING);
+ }
+ } else if (MigrationStatusEnum.INCREMENTAL_MIGRATION_INTERRUPTED.equals(currentStatus)
+ || MigrationStatusEnum.INCREMENTAL_MIGRATION_RUNNING.equals(currentStatus)) {
+ if (!migrationStopIndicator.isStopped()) {
+ incrementalMigrationTask.stopTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_FINISHED);
+
+ clearBeforeReverse();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.START_INCREMENTAL_MIGRATION);
+ incrementalMigrationTask.startSource();
+ incrementalMigrationTask.startSink();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_RUNNING);
+ }
+ } else {
+            LOGGER.warn("Can not restart incremental migration,"
+                + " incremental migration is not finished, interrupted, or running");
+ return;
+ }
+ LOGGER.info("Restart incremental migration successfully");
+ }
+
+ @Override
+ public synchronized void startReverse(MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor) {
+ if (!hasReverseMigration) {
+ LOGGER.warn("No reverse migration phase, unable to start reverse migration");
+ return;
+ }
+
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (MigrationStatusEnum.START_REVERSE_MIGRATION.equals(currentStatus)
+ || MigrationStatusEnum.REVERSE_MIGRATION_RUNNING.equals(currentStatus)
+ || MigrationStatusEnum.REVERSE_MIGRATION_INTERRUPTED.equals(currentStatus)
+ || MigrationStatusEnum.REVERSE_MIGRATION_FINISHED.equals(currentStatus)) {
+ LOGGER.warn("Reverse migration is already running or interrupted or finished, "
+ + "unable to start reverse migration again");
+ return;
+ }
+
+ if (!isPreReversePhaseFinished(statusMonitor)) {
+ LOGGER.warn("Can not start reverse migration, the previous phase task is not completed");
+ return;
+ }
+
+ if (hasIncrementalMigration) {
+ incrementalMigrationTask.afterTask();
+ }
+
+ if (migrationStopIndicator.isStopped()) {
+ return;
+ }
+
+ if (VerifyManager.pgsqlReversePhaseVerify(migrationJobConfig.getMigrationConfigDto(),
+ migrationJobConfig.getTaskWorkspace())) {
+ if (!hasDoBeforeReverse) {
+ reverseMigrationTask.beforeTask();
+ }
+ executeReverseTask(statusMonitor);
+ LOGGER.info("Start reverse migration successfully");
+ } else {
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.PRE_REVERSE_PHASE_VERIFY_FAILED);
+ LOGGER.info("Reverse migration verify failed, skip reverse migration");
+ }
+ }
+
+ @Override
+ public synchronized void stopReverse(StatusMonitor statusMonitor) {
+ if (!hasReverseMigration) {
+ LOGGER.warn("No reverse migration phase, unable to stop reverse migration");
+ return;
+ }
+
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (!MigrationStatusEnum.REVERSE_MIGRATION_RUNNING.equals(currentStatus)
+ && !MigrationStatusEnum.REVERSE_MIGRATION_INTERRUPTED.equals(currentStatus)) {
+ LOGGER.warn("Can not stop reverse migration, reverse migration is not running or interrupted");
+ return;
+ }
+
+ reverseMigrationTask.stopTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.REVERSE_MIGRATION_FINISHED);
+ LOGGER.info("Stop reverse migration successfully");
+ }
+
+ @Override
+ public synchronized void resumeReverse(StatusMonitor statusMonitor) {
+ if (!hasReverseMigration) {
+ LOGGER.warn("No reverse migration phase, unable to resume reverse migration");
+ return;
+ }
+
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (!MigrationStatusEnum.REVERSE_MIGRATION_INTERRUPTED.equals(currentStatus)) {
+ LOGGER.warn("Can not resume reverse migration, reverse migration is not interrupted");
+ return;
+ }
+
+ reverseMigrationTask.resumeTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.REVERSE_MIGRATION_RUNNING);
+ LOGGER.info("Resume reverse migration successfully");
+ }
+
+ @Override
+ public synchronized void restartReverse(
+ MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor) {
+ if (!hasReverseMigration) {
+ LOGGER.warn("No reverse migration phase, unable to restart reverse migration");
+ return;
+ }
+
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (MigrationStatusEnum.REVERSE_MIGRATION_FINISHED.equals(currentStatus)) {
+ if (!migrationStopIndicator.isStopped()) {
+ executeReverseTask(statusMonitor);
+ }
+ } else if (MigrationStatusEnum.REVERSE_MIGRATION_INTERRUPTED.equals(currentStatus)
+ || MigrationStatusEnum.REVERSE_MIGRATION_RUNNING.equals(currentStatus)) {
+ if (!migrationStopIndicator.isStopped()) {
+ reverseMigrationTask.stopTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.REVERSE_MIGRATION_FINISHED);
+
+ executeReverseTask(statusMonitor);
+ }
+ } else {
+            LOGGER.warn("Can not restart reverse migration,"
+                + " reverse migration is not finished, interrupted, or running");
+ return;
+ }
+ LOGGER.info("Restart reverse migration successfully");
+ }
+
+ @Override
+ public synchronized void stopTask() {
+        if (hasFullMigration && fullMigrationTask != null) {
+            fullMigrationTask.stopTask();
+        }
+
+        if (hasIncrementalMigration && incrementalMigrationTask != null) {
+            incrementalMigrationTask.stopTask();
+
+            if (hasFullMigration && fullMigrationTask != null && fullMigrationTask.isForeignKeyMigrated()) {
+                LOGGER.info("Migrate foreign key");
+                fullMigrationTask.migrateForeignKey();
+            }
+        }
+
+        if (hasReverseMigration && reverseMigrationTask != null) {
+            reverseMigrationTask.stopTask();
+        }
+
+ afterTask();
+ }
+
+ @Override
+ void generateTasks(MigrationStopIndicator migrationStopIndicator, ProcessMonitor processMonitor) {
+ TaskWorkspace taskWorkspace = migrationJobConfig.getTaskWorkspace();
+ PgsqlMigrationConfigDto migrationConfigDto = migrationJobConfig.getMigrationConfigDto();
+ if (hasFullMigration) {
+ fullMigrationTask = new FullMigrationToolPgsqlFullMigrationTask(taskWorkspace, migrationStopIndicator,
+ migrationConfigDto, migrationJobConfig.getFullConfigBundle());
+ }
+
+ if (hasIncrementalMigration) {
+ incrementalMigrationTask = new DebeziumPgsqlIncrementalMigrationTask(processMonitor, migrationStopIndicator,
+ taskWorkspace, migrationConfigDto, migrationJobConfig.getIncrementalConfigBundle());
+ }
+
+ if (hasReverseMigration) {
+ reverseMigrationTask = new DebeziumPgsqlReverseMigrationTask(processMonitor, migrationStopIndicator,
+ taskWorkspace, migrationConfigDto, migrationJobConfig.getReverseConfigBundle());
+ }
+ }
+
+ @Override
+ void afterTask() {
+ if (hasAdjustKernelParam) {
+ try {
+ adjustKernelFsyncParam(true, migrationJobConfig.getMigrationConfigDto());
+ } catch (SQLException e) {
+ LOGGER.error("Adjust kernel parameter fsync failed, please manually restore it to on", e);
+ }
+ }
+
+        if (hasFullMigration && fullMigrationTask != null) {
+            fullMigrationTask.afterTask();
+        }
+
+        if (hasIncrementalMigration && incrementalMigrationTask != null) {
+            incrementalMigrationTask.afterTask();
+        }
+
+        if (hasReverseMigration && reverseMigrationTask != null) {
+            reverseMigrationTask.afterTask();
+        }
+ }
+
+ private TaskAssistantExecutor getTaskExecutor(
+ MigrationStopIndicator migrationStopIndicator, StatusMonitor statusMonitor) {
+ TaskAssistantExecutor executor = new TaskAssistantExecutor(migrationStopIndicator);
+ if (hasFullMigration) {
+ executor.addStep(() -> {
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.START_FULL_MIGRATION);
+ fullMigrationTask.beforeTask();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.FULL_MIGRATION_RUNNING);
+ fullMigrationTask.migrateTable();
+ });
+ }
+
+ if (hasIncrementalMigration) {
+ executor.addStep(() -> {
+ incrementalMigrationTask.beforeTask();
+ incrementalMigrationTask.startSource();
+ });
+ }
+
+ if (hasFullMigration) {
+ if ("true".equals(migrationJobConfig.getMigrationConfigDto().getIsMigrationObject())) {
+ executor.addStep(() -> fullMigrationTask.migrateObject());
+ } else {
+ executor.addStep(() -> {
+ if (!(fullMigrationTask instanceof FullMigrationToolPgsqlFullMigrationTask)) {
+ throw new IllegalArgumentException("Full migration task is not instance of "
+ + "FullMigrationToolPgsqlFullMigrationTask");
+ }
+
+ FullMigrationToolPgsqlFullMigrationTask fullMigrationToolTask =
+ (FullMigrationToolPgsqlFullMigrationTask) fullMigrationTask;
+ fullMigrationToolTask.waitTableMigrationExit();
+ });
+ }
+
+            if (!hasIncrementalMigration) {
+                executor.addStep(() -> fullMigrationTask.migrateForeignKey());
+            }
+ executor.addStep(() -> statusMonitor.setCurrentStatus(MigrationStatusEnum.FULL_MIGRATION_FINISHED));
+ }
+ addIncrementalAndReversePhase(executor, statusMonitor);
+ return executor;
+ }
+
+ private void addIncrementalAndReversePhase(TaskAssistantExecutor executor, StatusMonitor statusMonitor) {
+ if (hasIncrementalMigration) {
+ executor.addStep(() -> {
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.START_INCREMENTAL_MIGRATION);
+ incrementalMigrationTask.startSource();
+ incrementalMigrationTask.startSink();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_RUNNING);
+ });
+ }
+
+ if (!hasFullMigration && !hasIncrementalMigration && hasReverseMigration) {
+ executor.addStep(() -> {
+ reverseMigrationTask.beforeTask();
+ executeReverseTask(statusMonitor);
+ });
+ }
+ }
+
+ private void executeReverseTask(StatusMonitor statusMonitor) {
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.START_REVERSE_MIGRATION);
+ reverseMigrationTask.startSource();
+ reverseMigrationTask.startSink();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.REVERSE_MIGRATION_RUNNING);
+ }
+
+ private void clearBeforeReverse() {
+ if (hasDoBeforeReverse) {
+ reverseMigrationTask.afterTask();
+ hasDoBeforeReverse = false;
+ }
+ }
+
+ private boolean isPreReversePhaseFinished(StatusMonitor statusMonitor) {
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (hasIncrementalMigration) {
+ return MigrationStatusEnum.INCREMENTAL_MIGRATION_FINISHED.equals(currentStatus);
+ }
+
+ if (hasFullMigration) {
+ return MigrationStatusEnum.FULL_MIGRATION_FINISHED.equals(currentStatus);
+ }
+ return true;
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/mode/MigrationMode.java b/multidb-portal/src/main/java/org/opengauss/migration/mode/MigrationMode.java
new file mode 100644
index 0000000000000000000000000000000000000000..e8c7f8a08c3e0d3431583e74328cacdb0b1441eb
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/mode/MigrationMode.java
@@ -0,0 +1,36 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.mode;
+
+import lombok.AllArgsConstructor;
+import lombok.Data;
+import lombok.NoArgsConstructor;
+import org.opengauss.enums.MigrationPhase;
+
+import java.util.HashSet;
+import java.util.List;
+
+/**
+ * Migration mode
+ *
+ * @since 2025/2/27
+ */
+@Data
+@NoArgsConstructor
+@AllArgsConstructor
+public class MigrationMode {
+ private String modeName;
+    private List<MigrationPhase> migrationPhaseList;
+
+ /**
+ * Check if the migration mode contains the specified phase
+ *
+ * @param phase migration phase
+ * @return true if the migration mode contains the specified phase, false otherwise
+ */
+ public boolean hasPhase(MigrationPhase phase) {
+ return new HashSet<>(migrationPhaseList).contains(phase);
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/mode/ModeManager.java b/multidb-portal/src/main/java/org/opengauss/migration/mode/ModeManager.java
new file mode 100644
index 0000000000000000000000000000000000000000..49ac7d69f971192715d55a0ee8c4a0fa10e42929
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/mode/ModeManager.java
@@ -0,0 +1,334 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.mode;
+
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONException;
+import com.alibaba.fastjson2.JSONWriter;
+import lombok.Getter;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.MigrationModeConstants;
+import org.opengauss.enums.MigrationPhase;
+import org.opengauss.exceptions.MigrationModeException;
+import org.opengauss.config.ApplicationConfig;
+import org.opengauss.utils.FileUtils;
+import org.opengauss.utils.PropertiesUtils;
+import org.opengauss.utils.StringUtils;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Properties;
+
+/**
+ * Migration mode manager
+ *
+ * @since 2025/2/27
+ */
+@Getter
+public class ModeManager {
+ private static final Logger LOGGER = LogManager.getLogger(ModeManager.class);
+
+ private final String modeJsonPath;
+
+ public ModeManager() {
+ modeJsonPath = String.format("%s/%s", ApplicationConfig.getInstance().getPortalDataDirPath(),
+ MigrationModeConstants.CUSTOM_MODE_STORAGE_FILE_NAME);
+ }
+
+ /**
+ * Get mode by name
+ *
+ * @param modeName mode name
+ * @return migration mode
+ */
+ public MigrationMode getModeByName(String modeName) {
+ LOGGER.info("Getting migration mode by name: {}", modeName);
+ for (MigrationMode migrationMode : list()) {
+ if (migrationMode.getModeName().equals(modeName)) {
+ return migrationMode;
+ }
+ }
+ throw new MigrationModeException("Migration mode " + modeName + " does not exist");
+ }
+
+ /**
+ * List all migration modes
+ *
+ * @return List of migration modes
+ */
+    public List<MigrationMode> list() {
+        LOGGER.info("List all migration modes");
+        List<MigrationMode> customModeList = loadCustomModeList();
+        List<MigrationMode> defaultModeList = MigrationModeConstants.DEFALUT_MODE_LIST;
+
+        List<MigrationMode> mergedModeList = new ArrayList<>(customModeList);
+        mergedModeList.addAll(defaultModeList);
+ return mergedModeList;
+ }
+
+ /**
+ * Add migration mode
+ *
+ * @param modeFilePath migration mode define file path
+ */
+ public void add(String modeFilePath) {
+ LOGGER.info("Start to add migration mode");
+ Path filePath = Paths.get(modeFilePath).toAbsolutePath().normalize();
+ checkModeFileExists(filePath.toString());
+
+ try {
+ Properties config = loadModeFile(filePath.toString());
+ String addModeName = config.getProperty(MigrationModeConstants.TEMPLATE_KEY_MODE_NAME).trim();
+ String addPhasesStr = config.getProperty(MigrationModeConstants.TEMPLATE_KEY_MIGRATION_PHASE_LIST).trim();
+ checkModeName(addModeName);
+
+            List<MigrationMode> migrationModeList = list();
+ if (isModeNameExists(addModeName, migrationModeList)) {
+ throw new MigrationModeException("Migration mode " + addModeName + " already exists, "
+ + "please use a different name");
+ }
+
+            List<MigrationPhase> addPhaseList = parseMigrationPhasesStr(addPhasesStr);
+
+ checkPhaseListExists(addPhaseList, migrationModeList);
+
+ MigrationMode addMigrationMode = new MigrationMode(addModeName, addPhaseList);
+ writeModeToJsonFile(addMigrationMode);
+ LOGGER.info("Migration mode {} added successfully", addModeName);
+ } catch (IOException e) {
+ throw new MigrationModeException("Failed to add migration mode", e);
+ }
+ }
+
+ /**
+ * Delete migration mode.
+ *
+ * @param modeName migration mode name
+ */
+ public void delete(String modeName) {
+ LOGGER.info("Start to delete migration mode");
+ if (isModeNameExists(modeName, MigrationModeConstants.DEFALUT_MODE_LIST)) {
+ throw new MigrationModeException("Default migration mode " + modeName + " cannot be deleted or modified");
+ }
+
+        List<MigrationMode> customModeList = loadCustomModeList();
+ if (customModeList.isEmpty() || !isModeNameExists(modeName, customModeList)) {
+ throw new MigrationModeException("Migration mode " + modeName + " does not exist");
+ }
+
+ customModeList.removeIf(migrationMode -> migrationMode.getModeName().equals(modeName));
+ try {
+ writeModeListToJsonFile(customModeList);
+ LOGGER.info("Migration mode {} deleted successfully", modeName);
+ } catch (IOException e) {
+ throw new MigrationModeException("Failed to delete migration mode", e);
+ }
+ }
+
+ /**
+ * Update migration mode
+ *
+ * @param modeFilePath migration mode define file path
+ */
+ public void update(String modeFilePath) {
+ LOGGER.info("Start to update migration mode");
+ Path filePath = Paths.get(modeFilePath).toAbsolutePath().normalize();
+ checkModeFileExists(filePath.toString());
+
+ Properties config = loadModeFile(filePath.toString());
+ String updateModeName = config.getProperty(MigrationModeConstants.TEMPLATE_KEY_MODE_NAME).trim();
+ String updatePhasesStr = config.getProperty(MigrationModeConstants.TEMPLATE_KEY_MIGRATION_PHASE_LIST).trim();
+ checkModeName(updateModeName);
+
+ if (isModeNameExists(updateModeName, MigrationModeConstants.DEFALUT_MODE_LIST)) {
+ throw new MigrationModeException("Default migration mode " + updateModeName
+ + " cannot be modified or deleted");
+ }
+
+        List<MigrationMode> customModeList = loadCustomModeList();
+ if (customModeList.isEmpty() || !isModeNameExists(updateModeName, customModeList)) {
+ throw new MigrationModeException("Migration mode " + updateModeName + " does not exist");
+ }
+
+        List<MigrationPhase> updatePhaseList = parseMigrationPhasesStr(updatePhasesStr);
+ customModeList.removeIf(migrationMode -> migrationMode.getModeName().equals(updateModeName));
+ checkPhaseListExists(updatePhaseList, customModeList);
+ checkPhaseListExists(updatePhaseList, MigrationModeConstants.DEFALUT_MODE_LIST);
+
+        MigrationMode updatedMigrationMode = new MigrationMode(updateModeName, updatePhaseList);
+        customModeList.add(updatedMigrationMode);
+ try {
+ writeModeListToJsonFile(customModeList);
+ LOGGER.info("Migration mode {} updated successfully", updateModeName);
+ } catch (IOException e) {
+ throw new MigrationModeException("Failed to update migration mode", e);
+ }
+ }
+
+ /**
+ * Export migration mode template file
+ */
+ public void template() {
+ try {
+ String targetFilePath = String.format("%s/%s", ApplicationConfig.getInstance().getPortalTmpDirPath(),
+ MigrationModeConstants.DEFINE_MODE_TEMPLATE_NAME);
+ FileUtils.exportResource(MigrationModeConstants.DEFINE_MODE_TEMPLATE_RESOURCES_PATH, targetFilePath);
+ LOGGER.info("Template file exported successfully");
+ LOGGER.info("Template file path: {}", targetFilePath);
+ } catch (IOException e) {
+ throw new MigrationModeException("Failed to export template file", e);
+ }
+ }
+
+ private void checkModeName(String modeName) {
+ if (modeName.length() > MigrationModeConstants.MODE_NAME_MAX_LENGTH) {
+ throw new MigrationModeException("The length of the mode name cannot exceed "
+ + MigrationModeConstants.MODE_NAME_MAX_LENGTH + " characters");
+ }
+
+ if (!modeName.matches(MigrationModeConstants.MODE_NAME_PATTERN)) {
+ throw new MigrationModeException("Invalid mode name: " + modeName + ". "
+ + "Only letters(a-z A-Z), numbers(0-9), underscores(_), and hyphens(-) are allowed");
+ }
+ }
+
+ private void checkModeFileExists(String modeFilePath) {
+ if (!FileUtils.checkFileExists(modeFilePath)) {
+ throw new MigrationModeException("File does not exist or is a directory: " + modeFilePath);
+ }
+ }
+
+    private boolean isModeNameExists(String modeName, List<MigrationMode> migrationModeList) {
+ return migrationModeList.stream().anyMatch(
+ migrationMode -> migrationMode.getModeName().equals(modeName));
+ }
+
+    private void checkPhaseListExists(List<MigrationPhase> phaseList, List<MigrationMode> migrationModeList) {
+        for (MigrationMode migrationMode : migrationModeList) {
+            List<MigrationPhase> oldPhaseList = migrationMode.getMigrationPhaseList();
+ if (new HashSet<>(oldPhaseList).equals(new HashSet<>(phaseList))) {
+ throw new MigrationModeException("The same migration phase list already exists in the migration mode "
+ + migrationMode.getModeName());
+ }
+ }
+ }
+
+ private Properties loadModeFile(String modeFilePath) {
+ try {
+ Properties properties = PropertiesUtils.readProperties(modeFilePath);
+            String modeName = properties.getProperty(MigrationModeConstants.TEMPLATE_KEY_MODE_NAME);
+            String phasesStr = properties.getProperty(MigrationModeConstants.TEMPLATE_KEY_MIGRATION_PHASE_LIST);
+ if (StringUtils.isNullOrBlank(modeName) || StringUtils.isNullOrBlank(phasesStr)) {
+ String errorMsg = String.format("Invalid mode file, %s or %s cannot be null or empty",
+ MigrationModeConstants.TEMPLATE_KEY_MODE_NAME,
+ MigrationModeConstants.TEMPLATE_KEY_MIGRATION_PHASE_LIST);
+ throw new MigrationModeException(errorMsg);
+ }
+ return properties;
+ } catch (IOException e) {
+ throw new MigrationModeException("Failed to load mode file", e);
+ }
+ }
+
+    private List<MigrationPhase> parseMigrationPhasesStr(String phasesStr) {
+        List<MigrationPhase> migrationPhaseList = new ArrayList<>();
+        List<String> phaseStrs = Arrays.asList(phasesStr.split(","));
+
+ if (phaseStrs.contains(MigrationPhase.FULL_MIGRATION.getPhaseName())) {
+ migrationPhaseList.add(MigrationPhase.FULL_MIGRATION);
+ }
+ if (phaseStrs.contains(MigrationPhase.FULL_DATA_CHECK.getPhaseName())) {
+ migrationPhaseList.add(MigrationPhase.FULL_DATA_CHECK);
+ }
+
+ boolean hasIncremental = phaseStrs.contains(MigrationPhase.INCREMENTAL_MIGRATION.getPhaseName());
+ if (hasIncremental) {
+ migrationPhaseList.add(MigrationPhase.INCREMENTAL_MIGRATION);
+ }
+ if (phaseStrs.contains(MigrationPhase.INCREMENTAL_DATA_CHECK.getPhaseName())) {
+ if (!hasIncremental) {
+ throw new MigrationModeException("Invalid migration phase list: " + phasesStr
+ + ", please add incremental migration phase before incremental data check phase");
+ }
+ migrationPhaseList.add(MigrationPhase.INCREMENTAL_DATA_CHECK);
+ }
+ if (phaseStrs.contains(MigrationPhase.REVERSE_MIGRATION.getPhaseName())) {
+ migrationPhaseList.add(MigrationPhase.REVERSE_MIGRATION);
+ }
+
+ if (migrationPhaseList.isEmpty()) {
+ throw new MigrationModeException("Invalid migration phase list: " + phasesStr
+ + ", please use the correct migration phase");
+ }
+
+ return Collections.unmodifiableList(migrationPhaseList);
+ }
+
+    private List<MigrationMode> loadCustomModeList() {
+ try {
+ createJsonFileIfNotExists();
+
+ String modeJsonStr = FileUtils.readFileContents(modeJsonPath);
+ if (StringUtils.isNullOrBlank(modeJsonStr)) {
+ return Collections.emptyList();
+ }
+
+            List<MigrationMode> migrationModeList = new ArrayList<>();
+ String[] modeJsonStrs = modeJsonStr.split(MigrationModeConstants.OBJECT_SEPARATOR);
+ for (String modeJson : modeJsonStrs) {
+ if (!modeJson.isBlank()) {
+ try {
+ migrationModeList.add(JSON.parseObject(modeJson.trim(), MigrationMode.class));
+ } catch (JSONException e) {
+                        LOGGER.error("Failed to parse custom migration mode JSON: {}, "
+                                + "all custom migration modes have been cleared", modeJson);
+ FileUtils.writeToFile(modeJsonPath, "", false);
+ return Collections.emptyList();
+ }
+ }
+ }
+ return migrationModeList;
+ } catch (IOException e) {
+ LOGGER.error("Failed to load custom migration mode list", e);
+ return Collections.emptyList();
+ }
+ }
+
+ private void writeModeToJsonFile(MigrationMode migrationMode) throws IOException {
+ createJsonFileIfNotExists();
+
+ String objectJson = JSON.toJSONString(migrationMode, JSONWriter.Feature.PrettyFormat);
+ String writeStr = String.format("%s%s%s", objectJson, MigrationModeConstants.OBJECT_SEPARATOR,
+ System.lineSeparator());
+ FileUtils.writeToFile(modeJsonPath, writeStr, true);
+ }
+
+    private void writeModeListToJsonFile(List<MigrationMode> modeList) throws IOException {
+ createJsonFileIfNotExists();
+
+ StringBuilder jsonBuilder = new StringBuilder();
+ for (MigrationMode mode : modeList) {
+ String objectJson = JSON.toJSONString(mode);
+ jsonBuilder.append(objectJson)
+ .append(MigrationModeConstants.OBJECT_SEPARATOR)
+ .append(System.lineSeparator());
+ }
+
+ FileUtils.writeToFile(modeJsonPath, jsonBuilder.toString(), false);
+ }
+
+ private void createJsonFileIfNotExists() throws IOException {
+ if (!FileUtils.checkFileExists(modeJsonPath)) {
+ FileUtils.createFile(modeJsonPath);
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/monitor/MigrationAliveMonitor.java b/multidb-portal/src/main/java/org/opengauss/migration/monitor/MigrationAliveMonitor.java
new file mode 100644
index 0000000000000000000000000000000000000000..05995eec1b91a707c0328894e7487e541fc2f0a2
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/monitor/MigrationAliveMonitor.java
@@ -0,0 +1,92 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.monitor;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.TaskConstants;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.handler.PortalExceptionHandler;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Migration alive monitor
+ *
+ * @since 2025/7/2
+ */
+public class MigrationAliveMonitor {
+ private static final Logger LOGGER = LogManager.getLogger(MigrationAliveMonitor.class);
+ private static final long HEARTBEAT_INTERVAL = 1L;
+
+ private ScheduledExecutorService executor;
+ private TaskWorkspace workspace;
+
+ public MigrationAliveMonitor(TaskWorkspace workspace) {
+ this.workspace = workspace;
+ }
+
+ /**
+ * Start heartbeat service
+ */
+ public void start() {
+ if (executor != null && !executor.isShutdown()) {
+ return;
+ }
+
+ String heartbeatFilePath = getHeartbeatFilePath(workspace);
+ executor = Executors.newSingleThreadScheduledExecutor();
+ executor.scheduleAtFixedRate(() -> {
+ Thread.currentThread().setUncaughtExceptionHandler(new PortalExceptionHandler());
+ try {
+ updateHeartbeat(heartbeatFilePath);
+ } catch (IOException e) {
+                LOGGER.warn("Failed to update heartbeat, error message: {}", e.getMessage());
+ }
+ }, 0, HEARTBEAT_INTERVAL, TimeUnit.SECONDS);
+ }
+
+ /**
+ * Stop heartbeat service
+ */
+ public void stop() {
+ if (executor != null) {
+ executor.shutdownNow();
+ cleanup();
+ executor = null;
+ workspace = null;
+ }
+ }
+
+ /**
+ * Get heartbeat file path
+ *
+ * @param workspace task workspace
+ * @return heartbeat file path
+ */
+ public static String getHeartbeatFilePath(TaskWorkspace workspace) {
+ return String.format("%s/%s", workspace.getStatusDirPath(), TaskConstants.HEARTBEAT_FILE);
+ }
+
+ private void updateHeartbeat(String filePath) throws IOException {
+ File heartbeatFile = new File(filePath);
+ if (!heartbeatFile.exists()) {
+ heartbeatFile.createNewFile();
+ } else {
+ heartbeatFile.setLastModified(System.currentTimeMillis());
+ }
+ }
+
+ private void cleanup() {
+ File heartbeatFile = new File(getHeartbeatFilePath(workspace));
+ if (heartbeatFile.exists()) {
+ heartbeatFile.delete();
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/process/ConfluentProcess.java b/multidb-portal/src/main/java/org/opengauss/migration/process/ConfluentProcess.java
new file mode 100644
index 0000000000000000000000000000000000000000..ac1971aa1a8796dbbab5c648db360d5fbae53288
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/process/ConfluentProcess.java
@@ -0,0 +1,122 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.process;
+
+import lombok.Getter;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.exceptions.KafkaException;
+import org.opengauss.config.ApplicationConfig;
+import org.opengauss.utils.ProcessUtils;
+import org.opengauss.utils.ThreadUtils;
+
+import java.io.IOException;
+
+/**
+ * Confluent process
+ *
+ * @since 2025/4/18
+ */
+@Getter
+public class ConfluentProcess implements Process {
+ private static final Logger LOGGER = LogManager.getLogger(ConfluentProcess.class);
+
+ private final String logPath;
+ private final long startWaitTime;
+ private final String processName;
+ private final String startCommand;
+ private final String checkCommand;
+
+ private int pid;
+
+ public ConfluentProcess(String processName, String startCommand, String checkCommand,
+ String logPath, long startWaitTime) {
+ this.processName = processName;
+ this.startCommand = startCommand;
+ this.checkCommand = checkCommand;
+ this.startWaitTime = startWaitTime;
+ this.logPath = logPath;
+ }
+
+ @Override
+ public void start() {
+ try {
+ if (!isAlive()) {
+ String workDirPath = ApplicationConfig.getInstance().getPortalTmpDirPath();
+ ProcessUtils.executeCommand(startCommand, workDirPath, logPath, startWaitTime);
+ } else {
+ LOGGER.info("Process {} is already started.", processName);
+ }
+ } catch (IOException | InterruptedException e) {
+ throw new KafkaException("Failed to start process " + processName, e);
+ }
+ }
+
+ @Override
+ public void stop() {
+ if (isAlive()) {
+ try {
+ ProcessUtils.killProcessByCommandSnippet(checkCommand, false);
+ } catch (IOException | InterruptedException e) {
+ LOGGER.warn("Kill {} with error: {}", processName, e.getMessage());
+ }
+
+ waitProcessExit();
+ }
+ }
+
+ @Override
+ public boolean checkStatus() {
+ if (isAlive()) {
+ return true;
+ } else {
+            LOGGER.error("Process {} exited abnormally.", processName);
+ return false;
+ }
+ }
+
+ @Override
+ public boolean isAlive() {
+ try {
+            int commandPid = ProcessUtils.getCommandPid(checkCommand);
+            if (commandPid == -1) {
+                pid = ProcessUtils.getCommandPid(checkCommand); // retry once: the process may still be starting
+            } else {
+                pid = commandPid;
+            }
+
+ return pid != -1;
+ } catch (IOException | InterruptedException e) {
+ LOGGER.warn("Check {} status with error: {}", processName, e.getMessage());
+ return false;
+ }
+ }
+
+ private void waitProcessExit() {
+ int oneSecond = 1000;
+ int processStopTime = 5000;
+ while (processStopTime > 0) {
+ ThreadUtils.sleep(oneSecond);
+ processStopTime -= oneSecond;
+
+ if (!isAlive()) {
+ LOGGER.info("{} stopped", processName);
+ return;
+ }
+ }
+
+ try {
+ ProcessUtils.killProcessByCommandSnippet(checkCommand, true);
+ } catch (IOException | InterruptedException e) {
+ LOGGER.warn("Kill {} with error: {}", processName, e.getMessage());
+ }
+
+ if (isAlive()) {
+ LOGGER.error("Failed to stop {}, please kill it manually, pid: {}", processName, pid);
+ } else {
+ LOGGER.info("{} stopped", processName);
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/process/Process.java b/multidb-portal/src/main/java/org/opengauss/migration/process/Process.java
new file mode 100644
index 0000000000000000000000000000000000000000..e9b9ed5b93455ffcea0fb4cbce047847e59d211e
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/process/Process.java
@@ -0,0 +1,43 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.process;
+
+/**
+ * process interface
+ *
+ * @since 2025/5/12
+ */
+public interface Process {
+ /**
+ * Get process name
+ *
+ * @return process name
+ */
+ String getProcessName();
+
+ /**
+ * Start process
+ */
+ void start();
+
+ /**
+ * Stop process
+ */
+ void stop();
+
+ /**
+ * Check process status
+ *
+     * @return whether the process is running normally
+ */
+ boolean checkStatus();
+
+ /**
+ * Is process alive
+ *
+ * @return whether process is alive
+ */
+ boolean isAlive();
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/process/ProcessErrorHandler.java b/multidb-portal/src/main/java/org/opengauss/migration/process/ProcessErrorHandler.java
new file mode 100644
index 0000000000000000000000000000000000000000..be026204c0d610c2e2d68019818031119c5b2433
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/process/ProcessErrorHandler.java
@@ -0,0 +1,98 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.process;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.enums.MigrationStatusEnum;
+import org.opengauss.exceptions.KafkaException;
+import org.opengauss.exceptions.MigrationException;
+import org.opengauss.migration.MigrationManager;
+import org.opengauss.migration.process.task.DataCheckerProcess;
+import org.opengauss.migration.process.task.DebeziumProcess;
+import org.opengauss.migration.process.task.TaskProcess;
+import org.opengauss.migration.status.StatusMonitor;
+import org.opengauss.migration.tools.Kafka;
+
+/**
+ * process error handler
+ *
+ * @since 2025/6/6
+ */
+public class ProcessErrorHandler {
+ private static final Logger LOGGER = LogManager.getLogger(ProcessErrorHandler.class);
+
+ private final MigrationManager migrationManager;
+ private final StatusMonitor statusMonitor;
+
+ public ProcessErrorHandler(MigrationManager migrationManager, StatusMonitor statusMonitor) {
+ this.migrationManager = migrationManager;
+ this.statusMonitor = statusMonitor;
+ }
+
+ /**
+ * handle task process error
+ *
+ * @param process task process
+ */
+ public void handleTaskProcessError(TaskProcess process) {
+ if (process instanceof DataCheckerProcess) {
+            throw new MigrationException("Data checker process has exited abnormally, stopping migration");
+ }
+
+ if (process instanceof DebeziumProcess) {
+ if (statusMonitor.isIncrementalMigrationStatus()) {
+ LOGGER.error("Debezium process is abnormal, interrupt incremental migration");
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_INTERRUPTED);
+ }
+
+ if (statusMonitor.isReverseMigrationStatus()) {
+ LOGGER.error("Debezium process is abnormal, interrupt reverse migration");
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.REVERSE_MIGRATION_INTERRUPTED);
+ }
+ }
+ }
+
+ /**
+ * handle confluent process error
+ */
+ public void handleConfluentError() {
+ if (statusMonitor.isFullMigrationStatus()) {
+ return;
+ }
+
+ if (statusMonitor.isFullDataCheckStatus()) {
+            throw new KafkaException("Kafka process has exited abnormally");
+ }
+
+ boolean isRestarted = Kafka.getInstance().restart();
+ if (statusMonitor.isIncrementalMigrationStatus()) {
+ if (isRestarted) {
+ if (!statusMonitor.isIncrementalMigrationStopped()) {
+ LOGGER.info("Restarted Kafka process successfully, restarting incremental migration...");
+ migrationManager.restartIncremental();
+ }
+ } else {
+                LOGGER.error("Stopping incremental migration because the Kafka process exited abnormally");
+ migrationManager.stopIncremental();
+ statusMonitor.setCurrentStatus(MigrationStatusEnum.INCREMENTAL_MIGRATION_INTERRUPTED);
+ }
+ return;
+ }
+
+ if (statusMonitor.isReverseMigrationStatus()) {
+ if (isRestarted) {
+ if (!statusMonitor.isReverseMigrationStopped()) {
+ LOGGER.info("Restarted Kafka process successfully, restarting reverse migration...");
+ migrationManager.restartReverse();
+ }
+ } else {
+                LOGGER.error("Stopping reverse migration because the Kafka process exited abnormally");
+                migrationManager.stopReverse();
+                statusMonitor.setCurrentStatus(MigrationStatusEnum.REVERSE_MIGRATION_INTERRUPTED);
+ }
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/process/ProcessMonitor.java b/multidb-portal/src/main/java/org/opengauss/migration/process/ProcessMonitor.java
new file mode 100644
index 0000000000000000000000000000000000000000..4fb3418256a2ff3ffa325fd8f2761e5879dbb6cf
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/process/ProcessMonitor.java
@@ -0,0 +1,175 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.process;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.ProcessNameConstants;
+import org.opengauss.migration.helper.tool.DebeziumHelper;
+import org.opengauss.migration.handler.ThreadExceptionHandler;
+import org.opengauss.migration.MigrationManager;
+import org.opengauss.migration.process.task.DebeziumProcess;
+import org.opengauss.migration.process.task.TaskProcess;
+import org.opengauss.migration.status.StatusMonitor;
+import org.opengauss.migration.tools.Kafka;
+import org.opengauss.utils.ThreadUtils;
+
+import java.io.File;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.concurrent.CopyOnWriteArrayList;
+
+/**
+ * process monitor
+ *
+ * @since 2025/3/1
+ */
+public class ProcessMonitor extends Thread {
+ private static final Logger LOGGER = LogManager.getLogger(ProcessMonitor.class);
+ private static final int INTERVAL_TIME = 500;
+ private static final int MAX_NOT_MODIFIED_COUNT = 60;
+    private static final HashMap<String, Long> fileLastModifiedCache = new HashMap<>();
+    private static final HashMap<String, Integer> fileNotModifiedCountCache = new HashMap<>();
+
+    private final List<TaskProcess> taskProcessList = new CopyOnWriteArrayList<>();
+    private final List<ConfluentProcess> confluentProcessList = new ArrayList<>();
+
+ private volatile boolean isRunning = true;
+ private StatusMonitor statusMonitor;
+ private ProcessErrorHandler processErrorHandler;
+
+ public ProcessMonitor() {
+ super("Process-Monitor-Thread");
+ }
+
+ /**
+ * Start monitoring
+ *
+ * @param migrationManager migration manager
+     * @param statusMonitor status monitor
+ */
+ public void startMonitoring(MigrationManager migrationManager, StatusMonitor statusMonitor) {
+ this.statusMonitor = statusMonitor;
+ this.processErrorHandler = new ProcessErrorHandler(migrationManager, statusMonitor);
+ setDaemon(true);
+ start();
+ }
+
+ @Override
+ public void run() {
+ Thread.currentThread().setUncaughtExceptionHandler(new ThreadExceptionHandler());
+ confluentProcessList.addAll(Kafka.getInstance().getConfluentProcessList());
+ while (isRunning) {
+            ThreadUtils.sleep(INTERVAL_TIME);
+
+ for (TaskProcess taskProcess : taskProcessList) {
+ if (!taskProcess.checkStatus()) {
+ taskProcessList.remove(taskProcess);
+ processErrorHandler.handleTaskProcessError(taskProcess);
+ break;
+ }
+
+ if (taskProcess.isStopped()) {
+ taskProcessList.remove(taskProcess);
+ }
+
+ if (!isProcessFunctional(taskProcess)) {
+ taskProcessList.remove(taskProcess);
+ taskProcess.stop();
+ processErrorHandler.handleTaskProcessError(taskProcess);
+ break;
+ }
+ }
+
+ if (statusMonitor.isFullMigrationStatus()) {
+ continue;
+ }
+
+ for (ConfluentProcess confluentProcess : confluentProcessList) {
+ if (!confluentProcess.checkStatus()) {
+ processErrorHandler.handleConfluentError();
+ break;
+ }
+ }
+ }
+ LOGGER.info("Process monitor has stopped.");
+ }
+
+ /**
+ * Stop monitoring
+ */
+ public void stopMonitoring() {
+ this.isRunning = false;
+ }
+
+ /**
+ * Add process
+ *
+ * @param process task process
+ */
+ public void addProcess(TaskProcess process) {
+ taskProcessList.add(process);
+ }
+
+ private boolean isProcessFunctional(TaskProcess process) {
+ if (!(process instanceof DebeziumProcess)) {
+ return true;
+ }
+
+ String processName = process.getProcessName();
+ if (ProcessNameConstants.DEBEZIUM_INCREMENTAL_CONNECT_SOURCE.equals(processName)) {
+ String statusFilePath = DebeziumHelper.getIncrementalSourceStatusFilePath(process.getTaskWorkspace());
+ return isProcessStatusFileFunctional(processName, statusFilePath);
+ }
+
+ if (ProcessNameConstants.DEBEZIUM_INCREMENTAL_CONNECT_SINK.equals(processName)) {
+ String statusFilePath = DebeziumHelper.getIncrementalSinkStatusFilePath(process.getTaskWorkspace());
+ return isProcessStatusFileFunctional(processName, statusFilePath);
+ }
+
+ if (ProcessNameConstants.DEBEZIUM_REVERSE_CONNECT_SOURCE.equals(processName)) {
+ String statusFilePath = DebeziumHelper.getReverseSourceStatusFilePath(process.getTaskWorkspace());
+ return isProcessStatusFileFunctional(processName, statusFilePath);
+ }
+
+ if (ProcessNameConstants.DEBEZIUM_REVERSE_CONNECT_SINK.equals(processName)) {
+ String statusFilePath = DebeziumHelper.getReverseSinkStatusFilePath(process.getTaskWorkspace());
+ return isProcessStatusFileFunctional(processName, statusFilePath);
+ }
+ return true;
+ }
+
+ private boolean isProcessStatusFileFunctional(String processName, String statusFilePath) {
+ if (isFileModified(statusFilePath)) {
+ fileNotModifiedCountCache.put(statusFilePath, 0);
+ } else {
+ Integer cacheCount = fileNotModifiedCountCache.getOrDefault(statusFilePath, 0);
+ if (cacheCount >= MAX_NOT_MODIFIED_COUNT) {
+ LOGGER.error("Process '{}' status file is not modified for {} millis", processName,
+ INTERVAL_TIME * MAX_NOT_MODIFIED_COUNT);
+ fileNotModifiedCountCache.put(statusFilePath, 0);
+ return false;
+ }
+ fileNotModifiedCountCache.put(statusFilePath, cacheCount + 1);
+ }
+ return true;
+ }
+
+ private boolean isFileModified(String filePath) {
+ File file = new File(filePath);
+ if (!file.exists() || !file.isFile()) {
+ return true;
+ }
+
+ long lastModified = file.lastModified();
+ Long cacheModified = fileLastModifiedCache.get(filePath);
+ if (cacheModified == null || lastModified != cacheModified) {
+ fileLastModifiedCache.put(filePath, lastModified);
+ return true;
+ }
+ return false;
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/process/task/ChameleonProcess.java b/multidb-portal/src/main/java/org/opengauss/migration/process/task/ChameleonProcess.java
new file mode 100644
index 0000000000000000000000000000000000000000..5a44363cba4a05984e7e0b21752dbd9c474b2809
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/process/task/ChameleonProcess.java
@@ -0,0 +1,99 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.process.task;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.tool.ChameleonConstants;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.exceptions.MigrationException;
+import org.opengauss.migration.helper.tool.ChameleonHelper;
+import org.opengauss.migration.tools.Chameleon;
+import org.opengauss.utils.FileUtils;
+import org.opengauss.utils.ProcessUtils;
+import org.opengauss.utils.ThreadUtils;
+
+import java.io.IOException;
+
+/**
+ * chameleon process
+ *
+ * @since 2025/3/1
+ */
+public class ChameleonProcess extends TaskProcess {
+ private static final Logger LOGGER = LogManager.getLogger(ChameleonProcess.class);
+
+ private final String chameleonOrder;
+
+ public ChameleonProcess(String processName, TaskWorkspace taskWorkspace, String chameleonOrder) {
+ super(processName, taskWorkspace, ChameleonHelper.generateProcessStartCommand(taskWorkspace, chameleonOrder),
+ ChameleonHelper.generateProcessStartCommand(taskWorkspace, chameleonOrder));
+ this.chameleonOrder = chameleonOrder;
+ }
+
+ @Override
+ public void start() {
+ if (isStarted) {
+ return;
+ }
+
+ String workDirPath = Chameleon.getInstance().getChameleonHomeDirPath();
+ String logPath = ChameleonHelper.generateFullMigrationLogPath(taskWorkspace);
+
+ try {
+ if (ChameleonConstants.ORDER_DETACH_REPLICA.equals(chameleonOrder)) {
+ String[] interactArgs = new String[]{"YES"};
+ ProcessUtils.executeInteractiveCommand(startCommand, workDirPath, logPath,
+ ChameleonConstants.WAIT_PROCESS_START_MILLIS, interactArgs);
+ } else {
+ ProcessUtils.executeCommand(startCommand, workDirPath, logPath,
+ ChameleonConstants.WAIT_PROCESS_START_MILLIS);
+ }
+ LOGGER.info("{} started", processName);
+ LOGGER.info("{} is running", processName);
+ } catch (IOException | InterruptedException e) {
+ throw new MigrationException("Failed to start chameleon process " + processName, e);
+ }
+
+ isStarted = true;
+ isStopped = false;
+ isNormal = true;
+ }
+
+ @Override
+ public boolean checkStatus() {
+ if (!isStarted || isStopped) {
+ return isNormal;
+ }
+
+ try {
+ if (!isAlive() && !isStopped) {
+ String logPath = ChameleonHelper.generateFullMigrationLogPath(taskWorkspace);
+ String lastLine = FileUtils.readFileLastLine(logPath);
+ String endFlag = chameleonOrder + " finished";
+
+ isStopped = true;
+ if (lastLine.contains(endFlag)) {
+ LOGGER.info("{} has finished", processName);
+ } else {
+ isNormal = false;
+ LOGGER.error("{} exit abnormally", processName);
+ }
+ }
+ } catch (IOException e) {
+            LOGGER.warn("Failed to read chameleon process log, error: {}", e.getMessage());
+ }
+
+ return isNormal;
+ }
+
+ @Override
+ public void waitExit() {
+ while (isStarted && !isStopped) {
+ ThreadUtils.sleep(1000);
+ checkStatus();
+ }
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/process/task/DataCheckerProcess.java b/multidb-portal/src/main/java/org/opengauss/migration/process/task/DataCheckerProcess.java
new file mode 100644
index 0000000000000000000000000000000000000000..2f4c7380f4ba572e20c96cd45102fa639e3f76f7
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/process/task/DataCheckerProcess.java
@@ -0,0 +1,108 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.process.task;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.tool.DataCheckerConstants;
+import org.opengauss.domain.model.ConfigFile;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.exceptions.MigrationException;
+import org.opengauss.enums.DataCheckerProcessType;
+import org.opengauss.migration.helper.tool.DataCheckerHelper;
+import org.opengauss.utils.FileUtils;
+import org.opengauss.utils.ProcessUtils;
+import org.opengauss.utils.ThreadUtils;
+
+import java.io.IOException;
+
+/**
+ * data checker process
+ *
+ * @since 2025/3/1
+ */
+public class DataCheckerProcess extends TaskProcess {
+ private static final Logger LOGGER = LogManager.getLogger(DataCheckerProcess.class);
+
+ private final DataCheckerProcessType processType;
+ private final ConfigFile processConfig;
+ private final boolean isFullMigration;
+
+ public DataCheckerProcess(String processName, TaskWorkspace taskWorkspace, ConfigFile processConfig,
+ DataCheckerProcessType processType, String jvmPrefixOptions, boolean isFullMigration) {
+ super(processName, taskWorkspace,
+ DataCheckerHelper.generateProcessStartCommand(processType, processConfig.getFilePath(),
+ jvmPrefixOptions),
+ DataCheckerHelper.generateProcessCheckCommand(processType, processConfig.getFilePath()));
+
+ this.processType = processType;
+ this.processConfig = processConfig;
+ this.isFullMigration = isFullMigration;
+ }
+
+ @Override
+ public void start() {
+ if (!isStarted) {
+ String workDirPath = taskWorkspace.getHomeDir();
+ try {
+ ProcessUtils.executeCommand(startCommand, workDirPath, DataCheckerConstants.WAIT_PROCESS_START_MILLIS);
+ LOGGER.info("{} started", processName);
+ LOGGER.info("{} is running", processName);
+ } catch (IOException | InterruptedException e) {
+ throw new MigrationException("Failed to start DataChecker process: " + processName, e);
+ }
+
+ isStarted = true;
+ isStopped = false;
+ isNormal = true;
+ }
+ }
+
+ @Override
+ public boolean checkStatus() {
+ if (!isStarted || isStopped) {
+ return isNormal;
+ }
+
+ if (!isAlive() && !isStopped) {
+ if (isFullMigration && checkExitSign()) {
+ LOGGER.info("{} has finished", processName);
+ } else {
+ isNormal = false;
+ LOGGER.error("{} exit abnormally", processName);
+ }
+ isStopped = true;
+ }
+ return isNormal;
+ }
+
+ @Override
+ public void waitExit() {
+ if (!isFullMigration) {
+ return;
+ }
+
+ while (isStarted && !isStopped) {
+ ThreadUtils.sleep(1000);
+ checkStatus();
+ }
+ }
+
+ private boolean checkExitSign() {
+ String signFilePath = isFullMigration ? DataCheckerHelper.getFullProcessSignFilePath(taskWorkspace)
+ : DataCheckerHelper.getIncrementalProcessSignFilePath(taskWorkspace);
+ try {
+ String fileContents = FileUtils.readFileContents(signFilePath);
+ String stopSign = DataCheckerHelper.getProcessStopSign(processType);
+ if (fileContents.contains(stopSign)) {
+ return true;
+ }
+ } catch (IOException e) {
+ LOGGER.error("Failed to check data check process exit sign, error: {}", e.getMessage());
+ return false;
+ }
+ return false;
+ }
+}
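The clean-exit check in `DataCheckerProcess` relies on the checker writing a stop sign into a sign file. A minimal sketch of that file-based handshake (hypothetical names; the real paths and sign strings come from `DataCheckerHelper`):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the sign-file handshake: the tool appends a stop sign to a
// file when it exits cleanly; the supervisor scans the file for it.
final class ExitSignChecker {
    private ExitSignChecker() {
    }

    static boolean hasStopSign(Path signFile, String stopSign) {
        try {
            return Files.exists(signFile) && Files.readString(signFile).contains(stopSign);
        } catch (IOException e) {
            return false; // an unreadable sign file counts as an abnormal exit
        }
    }
}
```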
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/process/task/DebeziumProcess.java b/multidb-portal/src/main/java/org/opengauss/migration/process/task/DebeziumProcess.java
new file mode 100644
index 0000000000000000000000000000000000000000..0658a65d1a0828d17ee488b9281a09873a34c81a
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/process/task/DebeziumProcess.java
@@ -0,0 +1,75 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.process.task;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.migration.helper.tool.DebeziumHelper;
+import org.opengauss.constants.tool.DebeziumConstants;
+import org.opengauss.domain.model.ConfigFile;
+import org.opengauss.exceptions.MigrationException;
+import org.opengauss.utils.ProcessUtils;
+
+import java.io.IOException;
+
+/**
+ * debezium process
+ *
+ * @since 2025/3/1
+ */
+public class DebeziumProcess extends TaskProcess {
+ private static final Logger LOGGER = LogManager.getLogger(DebeziumProcess.class);
+
+ private final ConfigFile connectorConfig;
+ private final ConfigFile workerConfig;
+ private final ConfigFile log4jConfig;
+
+ public DebeziumProcess(String processName, TaskWorkspace taskWorkspace, ConfigFile connectorConfig,
+ ConfigFile workerConfig, ConfigFile log4jConfig, String commandPrefix) {
+ super(processName, taskWorkspace,
+ DebeziumHelper.generateProcessStartCommand(connectorConfig, workerConfig, log4jConfig, commandPrefix),
+ DebeziumHelper.generateProcessCheckCommand(connectorConfig, workerConfig));
+ this.connectorConfig = connectorConfig;
+ this.workerConfig = workerConfig;
+ this.log4jConfig = log4jConfig;
+ }
+
+ @Override
+ public void start() {
+ if (!isStarted) {
+ try {
+ String workDirPath = taskWorkspace.getHomeDir();
+ ProcessUtils.executeCommand(startCommand, workDirPath, DebeziumConstants.WAIT_PROCESS_START_MILLIS);
+ LOGGER.info("{} started", processName);
+ LOGGER.info("{} is running", processName);
+ } catch (IOException | InterruptedException e) {
+ throw new MigrationException("Failed to start Debezium process " + processName, e);
+ }
+ isStarted = true;
+ isStopped = false;
+ isNormal = true;
+ }
+ }
+
+ @Override
+ public boolean checkStatus() {
+ if (!isStarted || isStopped) {
+ return isNormal;
+ }
+
+ if (!isAlive() && !isStopped) {
+ this.isNormal = false;
+ this.isStopped = true;
+ LOGGER.error("{} exit abnormally", processName);
+ }
+ return isNormal;
+ }
+
+ @Override
+ public void waitExit() {
+ throw new UnsupportedOperationException("Debezium process does not support waitExit");
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/process/task/FullMigrationToolProcess.java b/multidb-portal/src/main/java/org/opengauss/migration/process/task/FullMigrationToolProcess.java
new file mode 100644
index 0000000000000000000000000000000000000000..d768c45d6fa136eaad9c965620c48c580481b819
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/process/task/FullMigrationToolProcess.java
@@ -0,0 +1,100 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.process.task;
+
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.tool.FullMigrationToolConstants;
+import org.opengauss.domain.model.ConfigFile;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.exceptions.MigrationException;
+import org.opengauss.migration.helper.tool.FullMigrationToolHelper;
+import org.opengauss.utils.FileUtils;
+import org.opengauss.utils.ProcessUtils;
+import org.opengauss.utils.ThreadUtils;
+
+import java.io.IOException;
+
+/**
+ * full migration tool process
+ *
+ * @since 2025/5/29
+ */
+public class FullMigrationToolProcess extends TaskProcess {
+ private static final Logger LOGGER = LogManager.getLogger(FullMigrationToolProcess.class);
+
+ private final ConfigFile fullConfig;
+ private final String sourceDbType;
+ private final String order;
+
+ public FullMigrationToolProcess(String processName, TaskWorkspace taskWorkspace, ConfigFile fullConfig,
+ String sourceDbType, String order, String jvmPrefixOptions) {
+ super(processName, taskWorkspace,
+ FullMigrationToolHelper.generateProcessStartCommand(fullConfig, sourceDbType, order, jvmPrefixOptions),
+ FullMigrationToolHelper.generateProcessCheckCommand(fullConfig, sourceDbType, order, jvmPrefixOptions));
+
+ this.fullConfig = fullConfig;
+ this.sourceDbType = sourceDbType;
+ this.order = order;
+ }
+
+ @Override
+ public void waitExit() {
+ while (isStarted && !isStopped) {
+ ThreadUtils.sleep(1000);
+ checkStatus();
+ }
+ }
+
+ @Override
+ public void start() {
+ if (isStarted) {
+ return;
+ }
+
+ String workDirPath = taskWorkspace.getStatusFullDirPath();
+ String logPath = FullMigrationToolHelper.generateFullMigrationLogPath(taskWorkspace);
+
+ try {
+ ProcessUtils.executeCommand(startCommand, workDirPath, logPath,
+ FullMigrationToolConstants.WAIT_PROCESS_START_MILLIS);
+ LOGGER.info("{} started", processName);
+ LOGGER.info("{} is running", processName);
+ } catch (IOException | InterruptedException e) {
+ throw new MigrationException("Failed to start full migration process: " + processName, e);
+ }
+
+ isStarted = true;
+ isStopped = false;
+ isNormal = true;
+ }
+
+ @Override
+ public boolean checkStatus() {
+ if (!isStarted || isStopped) {
+ return isNormal;
+ }
+
+ try {
+ if (!isAlive() && !isStopped) {
+ String logPath = FullMigrationToolHelper.generateFullMigrationLogPath(taskWorkspace);
+ String endFlag = FullMigrationToolHelper.getProcessStopSign(order);
+ String lastLine = FileUtils.readFileLastLine(logPath);
+
+ if (lastLine.contains(endFlag)) {
+ LOGGER.info("{} has finished", processName);
+ } else {
+ isNormal = false;
+ LOGGER.error("{} exit abnormally", processName);
+ }
+ isStopped = true;
+ }
+ } catch (IOException e) {
+            LOGGER.warn("Failed to read full migration tool process log, error: {}", e.getMessage());
+ }
+
+ return isNormal;
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/process/task/TaskProcess.java b/multidb-portal/src/main/java/org/opengauss/migration/process/task/TaskProcess.java
new file mode 100644
index 0000000000000000000000000000000000000000..ac2f5156d469945da2bd46f3c5976c73f79fd59d
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/process/task/TaskProcess.java
@@ -0,0 +1,131 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.process.task;
+
+import lombok.Getter;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.migration.process.Process;
+import org.opengauss.utils.ProcessUtils;
+import org.opengauss.utils.ThreadUtils;
+
+import java.io.IOException;
+
+/**
+ * task process
+ *
+ * @since 2025/3/1
+ */
+@Getter
+public abstract class TaskProcess implements Process {
+ private static final Logger LOGGER = LogManager.getLogger(TaskProcess.class);
+
+ /**
+ * Process name
+ */
+ protected final String processName;
+
+ /**
+ * Task workspace
+ */
+ protected final TaskWorkspace taskWorkspace;
+
+ /**
+ * Start command
+ */
+ protected final String startCommand;
+
+ /**
+ * Check command
+ */
+ protected final String checkCommand;
+
+ /**
+ * Is process started
+ */
+ protected volatile boolean isStarted = false;
+
+ /**
+ * Is process stopped
+ */
+ protected volatile boolean isStopped = false;
+
+    /**
+     * Whether the process exited normally
+     */
+    protected volatile boolean isNormal = true;
+
+ private int pid;
+
+ protected TaskProcess(String processName, TaskWorkspace taskWorkspace, String startCommand, String checkCommand) {
+ this.taskWorkspace = taskWorkspace;
+ this.processName = processName;
+ this.startCommand = startCommand;
+ this.checkCommand = checkCommand;
+ }
+
+ /**
+ * Wait process exit
+ */
+ public abstract void waitExit();
+
+ @Override
+ public void stop() {
+ if (!isStopped || isAlive()) {
+ isStopped = true;
+ try {
+ ProcessUtils.killProcessByCommandSnippet(checkCommand, false);
+ } catch (IOException | InterruptedException e) {
+                LOGGER.warn("Failed to kill {}, error: {}", processName, e.getMessage());
+ }
+
+ waitProcessExit();
+ }
+ }
+
+    @Override
+    public boolean isAlive() {
+        try {
+            int commandPid = ProcessUtils.getCommandPid(checkCommand);
+            if (commandPid == -1) {
+                // retry once: the pid lookup can transiently miss a process that is just starting
+                commandPid = ProcessUtils.getCommandPid(checkCommand);
+            }
+            pid = commandPid;
+            return pid != -1;
+        } catch (IOException | InterruptedException e) {
+            LOGGER.warn("Failed to check {} status, error: {}", processName, e.getMessage());
+            return false;
+        }
+    }
+
+ private void waitProcessExit() {
+ int oneSecond = 1000;
+ int processStopTime = 5000;
+ while (processStopTime > 0) {
+ ThreadUtils.sleep(oneSecond);
+ processStopTime -= oneSecond;
+
+ if (!isAlive()) {
+ LOGGER.info("{} stopped", processName);
+ return;
+ }
+ }
+
+ try {
+ ProcessUtils.killProcessByCommandSnippet(checkCommand, true);
+ } catch (IOException | InterruptedException e) {
+            LOGGER.warn("Failed to kill {}, error: {}", processName, e.getMessage());
+ }
+
+ if (isAlive()) {
+ LOGGER.error("Failed to stop {}, please kill it manually, pid: {}", processName, pid);
+ } else {
+ LOGGER.info("{} stopped", processName);
+ }
+ }
+}
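`TaskProcess.stop()` implements a soft-kill, bounded poll, then force-kill escalation. The same pattern can be sketched independently of `ProcessUtils` (hypothetical helper; the kill actions and liveness check are injected so the timing logic stands alone):

```java
import java.util.function.BooleanSupplier;

// Sketch of the stop escalation in TaskProcess: soft-kill, poll liveness
// for a bounded time, then force-kill as a last resort.
final class GracefulStopper {
    private GracefulStopper() {
    }

    // Returns true if the process is gone by the end of the sequence.
    static boolean stop(Runnable softKill, Runnable forceKill, BooleanSupplier isAlive,
            int timeoutMillis, int intervalMillis) throws InterruptedException {
        softKill.run();
        for (int waited = 0; waited < timeoutMillis; waited += intervalMillis) {
            Thread.sleep(intervalMillis);
            if (!isAlive.getAsBoolean()) {
                return true; // soft kill worked
            }
        }
        forceKill.run(); // escalate, as the diff does with the force flag set
        return !isAlive.getAsBoolean();
    }
}
```

Injecting the liveness check also sidesteps the diff's caveat that a pid lookup can transiently miss a process: the caller decides how "alive" is determined.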
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/progress/MysqlProgressMonitor.java b/multidb-portal/src/main/java/org/opengauss/migration/progress/MysqlProgressMonitor.java
new file mode 100644
index 0000000000000000000000000000000000000000..d980732f996e4b508d724fdf69241466396fd7cd
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/progress/MysqlProgressMonitor.java
@@ -0,0 +1,238 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.progress;
+
+import com.alibaba.fastjson2.JSON;
+import com.alibaba.fastjson2.JSONArray;
+import com.alibaba.fastjson2.JSONObject;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.tool.ChameleonConstants;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.migration.helper.MigrationStatusHelper;
+import org.opengauss.migration.helper.tool.ChameleonHelper;
+import org.opengauss.migration.helper.tool.DataCheckerHelper;
+import org.opengauss.migration.progress.model.CheckEntry;
+import org.opengauss.migration.progress.model.CheckFailEntry;
+import org.opengauss.migration.progress.model.FullEntry;
+import org.opengauss.migration.progress.model.FullTotalInfo;
+import org.opengauss.migration.progress.model.tool.ChameleonStatusEntry;
+import org.opengauss.migration.status.StatusMonitor;
+import org.opengauss.utils.FileUtils;
+import org.opengauss.utils.StringUtils;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * mysql progress monitor
+ *
+ * @since 2025/4/1
+ */
+public class MysqlProgressMonitor extends ProgressMonitor {
+ private static final Logger LOGGER = LogManager.getLogger(MysqlProgressMonitor.class);
+
+ MysqlProgressMonitor(StatusMonitor statusMonitor, TaskWorkspace taskWorkspace) {
+ super(statusMonitor, taskWorkspace);
+ }
+
+ @Override
+ void readFullMigrationProgress() {
+ String tableJsonPath = ChameleonHelper.generateOrderStatusFilePath(taskWorkspace,
+ ChameleonConstants.ORDER_INIT_REPLICA);
+ if (isFileModified(tableJsonPath)) {
+ readTableProgress(tableJsonPath);
+ }
+
+ String viewJsonPath = ChameleonHelper.generateOrderStatusFilePath(taskWorkspace,
+ ChameleonConstants.ORDER_START_VIEW_REPLICA);
+ if (isFileModified(viewJsonPath)) {
+ readViewProgress(viewJsonPath);
+ }
+
+ String funcJsonPath = ChameleonHelper.generateOrderStatusFilePath(taskWorkspace,
+ ChameleonConstants.ORDER_START_FUNC_REPLICA);
+ if (isFileModified(funcJsonPath)) {
+ readFuncProgress(funcJsonPath);
+ }
+
+ String triggerJsonPath = ChameleonHelper.generateOrderStatusFilePath(taskWorkspace,
+ ChameleonConstants.ORDER_START_TRIGGER_REPLICA);
+ if (isFileModified(triggerJsonPath)) {
+ readTriggerProgress(triggerJsonPath);
+ }
+
+ String procJsonPath = ChameleonHelper.generateOrderStatusFilePath(taskWorkspace,
+ ChameleonConstants.ORDER_START_PROC_REPLICA);
+ if (isFileModified(procJsonPath)) {
+ readProcProgress(procJsonPath);
+ }
+ }
+
+ @Override
+ void readFullDataCheckProgress() {
+ String checkResultSuccessFilePath = DataCheckerHelper.getFullCheckResultSuccessFilePath(taskWorkspace);
+ if (isFileModified(checkResultSuccessFilePath)) {
+ readFullCheckSuccessProgress(checkResultSuccessFilePath);
+ }
+
+ String checkResultFailedFilePath = DataCheckerHelper.getFullCheckResultFailedFilePath(taskWorkspace);
+ if (isFileModified(checkResultFailedFilePath)) {
+ readFullCheckFailedProgress(checkResultFailedFilePath);
+ }
+ }
+
+ @Override
+ void readIncrementalMigrationProgress() {
+ super.readDebeziumIncrementalMigrationProgress();
+ }
+
+ @Override
+ void readReverseMigrationProgress() {
+ super.readDebeziumReverseMigrationProgress();
+ }
+
+ private void readFullCheckSuccessProgress(String filePath) {
+        Optional<JSONArray> successArrayOptional = DataCheckerHelper.parseDataCheckerStatusFile(filePath);
+        if (successArrayOptional.isEmpty()) {
+            return;
+        }
+
+        List<CheckEntry> checkEntryList = new ArrayList<>();
+ JSONArray successArray = successArrayOptional.get();
+ for (int i = 0; i < successArray.size(); i++) {
+ JSONObject jsonObj = successArray.getJSONObject(i);
+ CheckEntry checkEntry = new CheckEntry();
+ checkEntry.setSchema(jsonObj.getString("schema"));
+ checkEntry.setName(jsonObj.getString("table"));
+ checkEntryList.add(checkEntry);
+ }
+
+ try {
+ String statusPath = MigrationStatusHelper.generateFullCheckSuccessObjectStatusFilePath(taskWorkspace);
+ FileUtils.writeToFile(statusPath, JSON.toJSONString(checkEntryList), false);
+ } catch (IOException e) {
+ LOGGER.warn("Failed to write full data check success status, error: {}", e.getMessage());
+ }
+ }
+
+ private void readFullCheckFailedProgress(String filePath) {
+        Optional<JSONArray> failedArrayOptional = DataCheckerHelper.parseDataCheckerStatusFile(filePath);
+        if (failedArrayOptional.isEmpty()) {
+            return;
+        }
+
+        List<CheckFailEntry> checkFailEntryList = new ArrayList<>();
+ JSONArray failedArray = failedArrayOptional.get();
+ for (int i = 0; i < failedArray.size(); i++) {
+ JSONObject jsonObj = failedArray.getJSONObject(i);
+ CheckFailEntry checkFailEntry = new CheckFailEntry();
+ String schema = jsonObj.getString("schema");
+ String table = jsonObj.getString("table");
+ String repairPath = DataCheckerHelper.generateFullCheckResultRepairFilePath(taskWorkspace, schema, table);
+
+ checkFailEntry.setSchema(schema);
+ checkFailEntry.setName(table);
+ checkFailEntry.setError(jsonObj.getString("message"));
+ checkFailEntry.setRepairFilePath(repairPath);
+ checkFailEntryList.add(checkFailEntry);
+ }
+
+ try {
+ String failedStatusPath = MigrationStatusHelper.generateFullCheckFailedObjectStatusFilePath(taskWorkspace);
+ FileUtils.writeToFile(failedStatusPath, JSON.toJSONString(checkFailEntryList), false);
+ } catch (IOException e) {
+ LOGGER.warn("Failed to write full data check failed status, error: {}", e.getMessage());
+ }
+ }
+
+ private void readTableProgress(String filePath) {
+        Optional<ChameleonStatusEntry> statusEntryOptional = ChameleonHelper.parseChameleonStatusFile(filePath);
+ if (statusEntryOptional.isEmpty()) {
+ return;
+ }
+
+ ChameleonStatusEntry statusEntry = statusEntryOptional.get();
+ FullTotalInfo total = statusEntry.getTotal();
+ if (total != null) {
+ String totalJsonString = JSON.toJSONString(total);
+ String totalStatusFilePath = MigrationStatusHelper.generateFullTotalInfoStatusFilePath(taskWorkspace);
+
+ try {
+ FileUtils.writeToFile(totalStatusFilePath, totalJsonString, false);
+ } catch (IOException e) {
+ LOGGER.warn("Failed to write full migration total status, error: {}", e.getMessage());
+ }
+ }
+
+        List<FullEntry> tableList = statusEntry.getTable();
+ if (isEntryIntegrity(tableList)) {
+ writeObjectEntryList(tableList, MigrationStatusHelper.generateFullTableStatusFilePath(taskWorkspace));
+ }
+ }
+
+ private void readTriggerProgress(String filePath) {
+        Optional<ChameleonStatusEntry> statusEntryOptional = ChameleonHelper.parseChameleonStatusFile(filePath);
+        if (statusEntryOptional.isEmpty()) {
+            return;
+        }
+        List<FullEntry> entryList = statusEntryOptional.get().getTrigger();
+ if (isEntryIntegrity(entryList)) {
+ writeObjectEntryList(entryList, MigrationStatusHelper.generateFullTriggerStatusFilePath(taskWorkspace));
+ }
+ }
+
+ private void readViewProgress(String filePath) {
+        Optional<ChameleonStatusEntry> statusEntryOptional = ChameleonHelper.parseChameleonStatusFile(filePath);
+        if (statusEntryOptional.isEmpty()) {
+            return;
+        }
+        List<FullEntry> entryList = statusEntryOptional.get().getView();
+ if (isEntryIntegrity(entryList)) {
+ writeObjectEntryList(entryList, MigrationStatusHelper.generateFullViewStatusFilePath(taskWorkspace));
+ }
+ }
+
+ private void readFuncProgress(String filePath) {
+        Optional<ChameleonStatusEntry> statusEntryOptional = ChameleonHelper.parseChameleonStatusFile(filePath);
+        if (statusEntryOptional.isEmpty()) {
+            return;
+        }
+        List<FullEntry> entryList = statusEntryOptional.get().getFunction();
+ if (isEntryIntegrity(entryList)) {
+ writeObjectEntryList(entryList, MigrationStatusHelper.generateFullFuncStatusFilePath(taskWorkspace));
+ }
+ }
+
+ private void readProcProgress(String filePath) {
+        Optional<ChameleonStatusEntry> statusEntryOptional = ChameleonHelper.parseChameleonStatusFile(filePath);
+        if (statusEntryOptional.isEmpty()) {
+            return;
+        }
+        List<FullEntry> entryList = statusEntryOptional.get().getProcedure();
+ if (isEntryIntegrity(entryList)) {
+ writeObjectEntryList(entryList, MigrationStatusHelper.generateFullProcStatusFilePath(taskWorkspace));
+ }
+ }
+
+    private boolean isEntryIntegrity(List<FullEntry> entryList) {
+ if (entryList == null || entryList.isEmpty()) {
+ return true;
+ }
+
+ for (FullEntry entry : entryList) {
+ if (entry.getStatus() == 0) {
+ return false;
+ }
+
+ if (StringUtils.isNullOrBlank(entry.getName())) {
+ return false;
+ }
+ }
+ return true;
+ }
+}
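Both progress monitors persist a snapshot only when `isEntryIntegrity` passes, which guards against reading a status file the tool is still writing. A standalone sketch of that rule (hypothetical `Entry` record standing in for `FullEntry`):

```java
import java.util.List;

// Sketch of the isEntryIntegrity rule shared by both monitors: reject a
// snapshot if any entry has an unset status or a blank name.
final class IntegrityCheck {
    record Entry(String name, int status) {
    }

    static boolean isEntryIntegrity(List<Entry> entries) {
        if (entries == null || entries.isEmpty()) {
            return true; // an empty snapshot is trivially consistent
        }
        for (Entry entry : entries) {
            if (entry.status() == 0 || entry.name() == null || entry.name().isBlank()) {
                return false; // the tool is likely still writing this file
            }
        }
        return true;
    }
}
```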
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/progress/PgsqlProgressMonitor.java b/multidb-portal/src/main/java/org/opengauss/migration/progress/PgsqlProgressMonitor.java
new file mode 100644
index 0000000000000000000000000000000000000000..992d44a72d7b06359e839aae67085e3ee02fee97
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/progress/PgsqlProgressMonitor.java
@@ -0,0 +1,174 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.progress;
+
+import com.alibaba.fastjson2.JSON;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.constants.tool.FullMigrationToolConstants;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.migration.helper.MigrationStatusHelper;
+import org.opengauss.migration.helper.tool.FullMigrationToolHelper;
+import org.opengauss.migration.progress.model.FullEntry;
+import org.opengauss.migration.progress.model.FullTotalInfo;
+import org.opengauss.migration.progress.model.tool.FullMigrationToolStatusEntry;
+import org.opengauss.migration.status.StatusMonitor;
+import org.opengauss.utils.FileUtils;
+import org.opengauss.utils.StringUtils;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * pgsql progress monitor
+ *
+ * @since 2025/4/1
+ */
+public class PgsqlProgressMonitor extends ProgressMonitor {
+ private static final Logger LOGGER = LogManager.getLogger(PgsqlProgressMonitor.class);
+
+ PgsqlProgressMonitor(StatusMonitor statusMonitor, TaskWorkspace taskWorkspace) {
+ super(statusMonitor, taskWorkspace);
+ }
+
+ @Override
+ void readFullMigrationProgress() {
+ String tableJsonPath = FullMigrationToolHelper.generateOrderStatusFilePath(taskWorkspace,
+ FullMigrationToolConstants.ORDER_TABLE);
+ if (isFileModified(tableJsonPath)) {
+ readTableProgress(tableJsonPath);
+ }
+
+ String viewJsonPath = FullMigrationToolHelper.generateOrderStatusFilePath(taskWorkspace,
+ FullMigrationToolConstants.ORDER_VIEW);
+ if (isFileModified(viewJsonPath)) {
+ readViewProgress(viewJsonPath);
+ }
+
+ String funcJsonPath = FullMigrationToolHelper.generateOrderStatusFilePath(taskWorkspace,
+ FullMigrationToolConstants.ORDER_FUNCTION);
+ if (isFileModified(funcJsonPath)) {
+ readFuncProgress(funcJsonPath);
+ }
+
+ String triggerJsonPath = FullMigrationToolHelper.generateOrderStatusFilePath(taskWorkspace,
+ FullMigrationToolConstants.ORDER_TRIGGER);
+ if (isFileModified(triggerJsonPath)) {
+ readTriggerProgress(triggerJsonPath);
+ }
+
+ String procJsonPath = FullMigrationToolHelper.generateOrderStatusFilePath(taskWorkspace,
+ FullMigrationToolConstants.ORDER_PROCEDURE);
+ if (isFileModified(procJsonPath)) {
+ readProcProgress(procJsonPath);
+ }
+ }
+
+ @Override
+    void readFullDataCheckProgress() {
+        // no-op: full data check progress is not read in this scenario
+    }
+
+ @Override
+ void readIncrementalMigrationProgress() {
+ super.readDebeziumIncrementalMigrationProgress();
+ }
+
+ @Override
+ void readReverseMigrationProgress() {
+ super.readDebeziumReverseMigrationProgress();
+ }
+
+ private void readTableProgress(String filePath) {
+        Optional<FullMigrationToolStatusEntry> entryOptional = FullMigrationToolHelper.parseToolStatusFile(filePath);
+ if (entryOptional.isEmpty()) {
+ return;
+ }
+
+ FullMigrationToolStatusEntry statusEntry = entryOptional.get();
+ FullTotalInfo total = statusEntry.getTotal();
+ if (total != null) {
+ String totalJsonString = JSON.toJSONString(total);
+ String totalStatusFilePath = MigrationStatusHelper.generateFullTotalInfoStatusFilePath(taskWorkspace);
+
+ try {
+ FileUtils.writeToFile(totalStatusFilePath, totalJsonString, false);
+ } catch (IOException e) {
+ LOGGER.warn("Failed to write full migration total status, error: {}", e.getMessage());
+ }
+ }
+
+        List<FullEntry> tableList = statusEntry.getTable();
+ if (isEntryIntegrity(tableList)) {
+ writeObjectEntryList(tableList, MigrationStatusHelper.generateFullTableStatusFilePath(taskWorkspace));
+ }
+ }
+
+ private void readViewProgress(String jsonPath) {
+        Optional<FullMigrationToolStatusEntry> entryOptional = FullMigrationToolHelper.parseToolStatusFile(jsonPath);
+        if (entryOptional.isEmpty()) {
+            return;
+        }
+
+        List<FullEntry> viewList = entryOptional.get().getView();
+ if (isEntryIntegrity(viewList)) {
+ writeObjectEntryList(viewList, MigrationStatusHelper.generateFullViewStatusFilePath(taskWorkspace));
+ }
+ }
+
+ private void readFuncProgress(String jsonPath) {
+        Optional<FullMigrationToolStatusEntry> entryOptional = FullMigrationToolHelper.parseToolStatusFile(jsonPath);
+        if (entryOptional.isEmpty()) {
+            return;
+        }
+
+        List<FullEntry> funcList = entryOptional.get().getFunction();
+ if (isEntryIntegrity(funcList)) {
+ writeObjectEntryList(funcList, MigrationStatusHelper.generateFullFuncStatusFilePath(taskWorkspace));
+ }
+ }
+
+ private void readTriggerProgress(String jsonPath) {
+        Optional<FullMigrationToolStatusEntry> entryOptional = FullMigrationToolHelper.parseToolStatusFile(jsonPath);
+        if (entryOptional.isEmpty()) {
+            return;
+        }
+
+        List<FullEntry> triggerList = entryOptional.get().getTrigger();
+ if (isEntryIntegrity(triggerList)) {
+ writeObjectEntryList(triggerList, MigrationStatusHelper.generateFullTriggerStatusFilePath(taskWorkspace));
+ }
+ }
+
+ private void readProcProgress(String jsonPath) {
+        Optional<FullMigrationToolStatusEntry> entryOptional = FullMigrationToolHelper.parseToolStatusFile(jsonPath);
+        if (entryOptional.isEmpty()) {
+            return;
+        }
+
+        List<FullEntry> procList = entryOptional.get().getProcedure();
+ if (isEntryIntegrity(procList)) {
+ writeObjectEntryList(procList, MigrationStatusHelper.generateFullProcStatusFilePath(taskWorkspace));
+ }
+ }
+
+    private boolean isEntryIntegrity(List<FullEntry> entryList) {
+ if (entryList == null || entryList.isEmpty()) {
+ return true;
+ }
+
+ for (FullEntry entry : entryList) {
+ if (entry.getStatus() == 0) {
+ return false;
+ }
+
+ if (StringUtils.isNullOrBlank(entry.getName())) {
+ return false;
+ }
+ }
+ return true;
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/progress/ProgressMonitor.java b/multidb-portal/src/main/java/org/opengauss/migration/progress/ProgressMonitor.java
new file mode 100644
index 0000000000000000000000000000000000000000..6de87b1491920c7b584087a97512ef13d0d2427e
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/progress/ProgressMonitor.java
@@ -0,0 +1,272 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.progress;
+
+import com.alibaba.fastjson2.JSON;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.MigrationPhase;
+import org.opengauss.enums.MigrationStatusEnum;
+import org.opengauss.migration.helper.MigrationStatusHelper;
+import org.opengauss.migration.helper.tool.DebeziumHelper;
+import org.opengauss.migration.handler.ThreadExceptionHandler;
+import org.opengauss.migration.progress.model.FullEntry;
+import org.opengauss.migration.progress.model.IncrementalAndReverseEntry;
+import org.opengauss.migration.progress.model.tool.DebeziumSinkStatusEntry;
+import org.opengauss.migration.progress.model.tool.DebeziumSourceStatusEntry;
+import org.opengauss.migration.status.StatusMonitor;
+import org.opengauss.utils.FileUtils;
+import org.opengauss.utils.StringUtils;
+import org.opengauss.utils.ThreadUtils;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.List;
+import java.util.Optional;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * progress monitor
+ *
+ * @since 2025/3/21
+ */
+public abstract class ProgressMonitor extends Thread {
+ private static final Logger LOGGER = LogManager.getLogger(ProgressMonitor.class);
+ private static final int INTERVAL_TIME = 1000;
+
+ /**
+ * Status manager
+ */
+ protected final StatusMonitor statusMonitor;
+
+ /**
+ * Task workspace
+ */
+ protected final TaskWorkspace taskWorkspace;
+
+ private final ConcurrentHashMap<String, Long> fileLastModifiedCache = new ConcurrentHashMap<>();
+ private volatile boolean isRunning = true;
+ private MigrationStatusEnum latestStatus = MigrationStatusEnum.NOT_START;
+
+ ProgressMonitor(StatusMonitor statusMonitor, TaskWorkspace taskWorkspace) {
+ super("Progress-Monitor-Thread");
+ this.statusMonitor = statusMonitor;
+ this.taskWorkspace = taskWorkspace;
+ }
+
+ abstract void readFullMigrationProgress();
+
+ abstract void readFullDataCheckProgress();
+
+ abstract void readIncrementalMigrationProgress();
+
+ abstract void readReverseMigrationProgress();
+
+ @Override
+ public void run() {
+ Thread.currentThread().setUncaughtExceptionHandler(new ThreadExceptionHandler());
+ while (isRunning) {
+ ThreadUtils.sleep(INTERVAL_TIME);
+ MigrationStatusEnum currentStatus = statusMonitor.getCurrentStatus().getStatus();
+ if (MigrationStatusEnum.NOT_START.equals(currentStatus)) {
+ continue;
+ }
+ if (MigrationStatusEnum.MIGRATION_FAILED.equals(currentStatus)
+ || MigrationStatusEnum.MIGRATION_FINISHED.equals(currentStatus)) {
+ stopMonitoring();
+ continue;
+ }
+
+ MigrationPhase currentPhase = getPhaseByStatus(currentStatus);
+ readPhaseProgress(currentPhase);
+
+ MigrationPhase latestPhase = getPhaseByStatus(latestStatus);
+ if (!latestPhase.equals(currentPhase)) {
+ readPhaseProgress(latestPhase);
+ }
+
+ latestStatus = currentStatus;
+ }
+ }
+
+ /**
+ * Stop monitoring
+ */
+ public void stopMonitoring() {
+ this.isRunning = false;
+ }
+
+ /**
+ * Read debezium incremental migration progress
+ */
+ protected void readDebeziumIncrementalMigrationProgress() {
+ Optional<IncrementalAndReverseEntry> incrementalEntryOptional = readDebeziumStatusFileToEntry(false);
+ if (incrementalEntryOptional.isEmpty()) {
+ return;
+ }
+
+ try {
+ String statusFilePath = MigrationStatusHelper.generateIncrementalStatusFilePath(taskWorkspace);
+ FileUtils.writeToFile(statusFilePath, JSON.toJSONString(incrementalEntryOptional.get()), false);
+ } catch (IOException e) {
+ LOGGER.warn("Failed to write incremental migration progress, error: {}", e.getMessage());
+ }
+ }
+
+ /**
+ * Read debezium reverse migration progress.
+ */
+ protected void readDebeziumReverseMigrationProgress() {
+ Optional<IncrementalAndReverseEntry> reverseEntryOptional = readDebeziumStatusFileToEntry(true);
+ if (reverseEntryOptional.isEmpty()) {
+ return;
+ }
+
+ try {
+ String statusFilePath = MigrationStatusHelper.generateReverseStatusFilePath(taskWorkspace);
+ FileUtils.writeToFile(statusFilePath, JSON.toJSONString(reverseEntryOptional.get()), false);
+ } catch (IOException e) {
+ LOGGER.warn("Failed to write reverse migration progress, error: {}", e.getMessage());
+ }
+ }
+
+ /**
+ * Is file modified
+ *
+ * @param filePath file path
+ * @return boolean
+ */
+ protected boolean isFileModified(String filePath) {
+ File file = new File(filePath);
+ if (!file.exists() || !file.isFile()) {
+ return false;
+ }
+
+ long lastModified = file.lastModified();
+ Long cacheModified = fileLastModifiedCache.get(filePath);
+ if (cacheModified == null || lastModified != cacheModified) {
+ fileLastModifiedCache.put(filePath, lastModified);
+ return true;
+ }
+
+ return false;
+ }
+
+ /**
+ * write object entry list
+ *
+ * @param entryList entry list
+ * @param filePath file path
+ */
+ protected void writeObjectEntryList(List<?> entryList, String filePath) {
+ try {
+ if (entryList != null && !entryList.isEmpty()) {
+ String jsonString = JSON.toJSONString(entryList);
+ FileUtils.writeToFile(filePath, jsonString, false);
+ }
+ } catch (IOException e) {
+ LOGGER.warn("Failed to write full migration progress, error: {}", e.getMessage());
+ }
+ }
+
+ private Optional<IncrementalAndReverseEntry> readDebeziumStatusFileToEntry(boolean isReverse) {
+ String sourceStatusFilePath;
+ String sinkStatusFilePath;
+ if (isReverse) {
+ sourceStatusFilePath = DebeziumHelper.getReverseSourceStatusFilePath(taskWorkspace);
+ sinkStatusFilePath = DebeziumHelper.getReverseSinkStatusFilePath(taskWorkspace);
+ } else {
+ sourceStatusFilePath = DebeziumHelper.getIncrementalSourceStatusFilePath(taskWorkspace);
+ sinkStatusFilePath = DebeziumHelper.getIncrementalSinkStatusFilePath(taskWorkspace);
+ }
+
+ if (StringUtils.isNullOrBlank(sinkStatusFilePath) || StringUtils.isNullOrBlank(sourceStatusFilePath)
+ || (!isFileModified(sourceStatusFilePath) && !isFileModified(sinkStatusFilePath))) {
+ return Optional.empty();
+ }
+ Optional<DebeziumSourceStatusEntry> sourceStatusEntry =
+ DebeziumHelper.parseDebeziumSourceStatusFile(sourceStatusFilePath);
+ Optional<DebeziumSinkStatusEntry> sinkStatusEntry =
+ DebeziumHelper.parseDebeziumSinkStatusFile(sinkStatusFilePath);
+ if (sourceStatusEntry.isEmpty() || sinkStatusEntry.isEmpty()) {
+ return Optional.empty();
+ }
+
+ DebeziumSourceStatusEntry sourceStatus = sourceStatusEntry.get();
+ DebeziumSinkStatusEntry sinkStatus = sinkStatusEntry.get();
+ IncrementalAndReverseEntry entry = new IncrementalAndReverseEntry();
+ entry.setCount(sinkStatus.getReplayedCount() + sinkStatus.getOverallPipe());
+ entry.setSourceSpeed(sourceStatus.getSpeed());
+ entry.setSinkSpeed(sinkStatus.getSpeed());
+ entry.setRest(sinkStatus.getOverallPipe());
+ entry.setFailCount(sinkStatus.getFailCount());
+ entry.setSuccessCount(sinkStatus.getSuccessCount());
+ entry.setReplayedCount(sinkStatus.getReplayedCount());
+
+ String failSqlFilePath;
+ if (isReverse) {
+ entry.setSkippedCount(sourceStatus.getSkippedExcludeCount());
+ failSqlFilePath = DebeziumHelper.getDebeziumReverseFailSqlFilePath(taskWorkspace);
+ } else {
+ entry.setSkippedCount(sinkStatus.getSkippedCount() + sinkStatus.getSkippedExcludeEventCount());
+ failSqlFilePath = DebeziumHelper.getDebeziumIncrementalFailSqlFilePath(taskWorkspace);
+ }
+
+ Path path = Path.of(failSqlFilePath);
+ if (Files.exists(path)) {
+ try {
+ if (!StringUtils.isNullOrBlank(Files.readString(path))) {
+ entry.setHasFailSql(true);
+ }
+ } catch (IOException e) {
+ LOGGER.trace("Failed to read fail sql file, error: {}", e.getMessage());
+ }
+ }
+ return Optional.of(entry);
+ }
+
+ // The phase is derived from the statusMonitor's current state; the status
+ // argument is only used to build the exception message below.
+ private MigrationPhase getPhaseByStatus(MigrationStatusEnum currentStatus) {
+ if (statusMonitor.isFullMigrationStatus()) {
+ return MigrationPhase.FULL_MIGRATION;
+ }
+
+ if (statusMonitor.isFullDataCheckStatus()) {
+ return MigrationPhase.FULL_DATA_CHECK;
+ }
+
+ if (statusMonitor.isIncrementalMigrationStatus()) {
+ return MigrationPhase.INCREMENTAL_MIGRATION;
+ }
+
+ if (statusMonitor.isReverseMigrationStatus()) {
+ return MigrationPhase.REVERSE_MIGRATION;
+ }
+ throw new IllegalArgumentException("Invalid status: " + currentStatus);
+ }
+
+ private void readPhaseProgress(MigrationPhase phase) {
+ if (MigrationPhase.FULL_MIGRATION.equals(phase)) {
+ readFullMigrationProgress();
+ return;
+ }
+
+ if (MigrationPhase.FULL_DATA_CHECK.equals(phase)) {
+ readFullDataCheckProgress();
+ return;
+ }
+
+ if (MigrationPhase.INCREMENTAL_MIGRATION.equals(phase)) {
+ readIncrementalMigrationProgress();
+ return;
+ }
+
+ if (MigrationPhase.REVERSE_MIGRATION.equals(phase)) {
+ readReverseMigrationProgress();
+ }
+ }
+}
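The `isFileModified` caching above — a `ConcurrentHashMap` from file path to `lastModified` timestamp — is the core polling trick this monitor relies on: a status file is only re-parsed when its timestamp changes. A minimal, self-contained sketch of the same pattern (class and method names here are illustrative, not taken from the patch):

```java
import java.io.File;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the lastModified-based change cache used by
// ProgressMonitor.isFileModified. Thread-safe via ConcurrentHashMap.
public class FileChangeTracker {
    private final ConcurrentHashMap<String, Long> cache = new ConcurrentHashMap<>();

    // Returns true the first time a path is seen, and whenever its
    // lastModified timestamp differs from the cached value; false for
    // missing paths and unchanged files.
    public boolean isModified(String filePath) {
        File file = new File(filePath);
        if (!file.exists() || !file.isFile()) {
            return false;
        }
        long lastModified = file.lastModified();
        Long cached = cache.put(filePath, lastModified); // returns previous value
        return cached == null || cached != lastModified;
    }
}
```

Because the monitor loop wakes once per second, this check keeps each tick cheap when the underlying tools are idle: no JSON parsing or file reads happen unless a timestamp actually moved.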
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/progress/ProgressMonitorFactory.java b/multidb-portal/src/main/java/org/opengauss/migration/progress/ProgressMonitorFactory.java
new file mode 100644
index 0000000000000000000000000000000000000000..072462196f3e929b691c0c1e613c3031e13b3964
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/progress/ProgressMonitorFactory.java
@@ -0,0 +1,39 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.progress;
+
+import org.opengauss.domain.model.TaskWorkspace;
+import org.opengauss.enums.DatabaseType;
+import org.opengauss.exceptions.ConfigException;
+import org.opengauss.migration.status.StatusMonitor;
+
+/**
+ * progress monitor factory
+ *
+ * @since 2025/4/1
+ */
+public class ProgressMonitorFactory {
+ private ProgressMonitorFactory() {
+ }
+
+ /**
+ * create progress monitor
+ *
+ * @param sourceDbType source database type
+ * @param statusMonitor status manager
+ * @param taskWorkspace task workspace
+ * @return progress monitor
+ */
+ public static ProgressMonitor createProgressMonitor(
+ DatabaseType sourceDbType, StatusMonitor statusMonitor, TaskWorkspace taskWorkspace) {
+ if (sourceDbType.equals(DatabaseType.MYSQL)) {
+ return new MysqlProgressMonitor(statusMonitor, taskWorkspace);
+ }
+ if (sourceDbType.equals(DatabaseType.POSTGRESQL)) {
+ return new PgsqlProgressMonitor(statusMonitor, taskWorkspace);
+ }
+ throw new ConfigException("Unsupported database type");
+ }
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/progress/model/CheckEntry.java b/multidb-portal/src/main/java/org/opengauss/migration/progress/model/CheckEntry.java
new file mode 100644
index 0000000000000000000000000000000000000000..ff50c1835e8e1b108c041b7202be761e862f12fd
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/progress/model/CheckEntry.java
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.progress.model;
+
+import lombok.Data;
+
+/**
+ * check entry
+ *
+ * @since 2025/6/4
+ */
+@Data
+public class CheckEntry {
+ /**
+ * schema name
+ */
+ protected String schema;
+
+ /**
+ * table name
+ */
+ protected String name;
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/progress/model/CheckFailEntry.java b/multidb-portal/src/main/java/org/opengauss/migration/progress/model/CheckFailEntry.java
new file mode 100644
index 0000000000000000000000000000000000000000..9838a13b1e9befd75f45ef2fc91664acc68538c1
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/progress/model/CheckFailEntry.java
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.progress.model;
+
+import lombok.Data;
+
+/**
+ * Check fail entry
+ *
+ * @since 2025/6/4
+ */
+@Data
+public class CheckFailEntry extends CheckEntry {
+ /**
+ * error message, default is ""
+ */
+ private String error;
+
+ /**
+ * repair file path, default is ""
+ */
+ private String repairFilePath;
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/progress/model/FullEntry.java b/multidb-portal/src/main/java/org/opengauss/migration/progress/model/FullEntry.java
new file mode 100644
index 0000000000000000000000000000000000000000..a814ec3c01309ae992e918a90ef24360308824a7
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/progress/model/FullEntry.java
@@ -0,0 +1,68 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.progress.model;
+
+import com.alibaba.fastjson2.annotation.JSONField;
+import lombok.Data;
+
+/**
+ * full entry
+ *
+ * @since 2025/6/3
+ */
+@Data
+public class FullEntry {
+ /**
+ * schema name
+ */
+ @JSONField(defaultValue = "")
+ private String schema;
+
+ /**
+ * object name
+ */
+ private String name;
+
+ /**
+ * status: 1 - pending, 2 - migrating, 3,4,5 - completed, 6,7 - failed
+ */
+ private int status;
+
+ /**
+ * migrated percentage; normally no more than 1, but may exceed 1 when status is 6
+ */
+ private double percent;
+
+ /**
+ * error message recorded when the object migration fails, default is ""
+ */
+ private String error;
+
+ /**
+ * compare full entry
+ *
+ * @param o1 full entry 1
+ * @param o2 full entry 2
+ * @return int compare result
+ */
+ public static int compare(FullEntry o1, FullEntry o2) {
+ if (o1.getSchema().equals(o2.getSchema())) {
+ return o1.getName().compareTo(o2.getName());
+ } else {
+ return o1.getSchema().compareTo(o2.getSchema());
+ }
+ }
+
+ /**
+ * compare full entry by name
+ *
+ * @param o1 full entry 1
+ * @param o2 full entry 2
+ * @return int compare result
+ */
+ public static int compareByName(FullEntry o1, FullEntry o2) {
+ return o1.getName().compareTo(o2.getName());
+ }
+}
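`FullEntry.compare` orders entries by schema first and by object name second. The same two-level ordering can be expressed as a `Comparator` chain; the `Entry` record below is an illustrative stand-in for `FullEntry` (the real class uses Lombok `@Data` and carries status fields):

```java
import java.util.Comparator;
import java.util.List;

// Illustrative stand-in for FullEntry, reduced to the two sort keys.
record Entry(String schema, String name) {}

public class EntryOrdering {
    // Same ordering as FullEntry.compare: schema first, then object name.
    static final Comparator<Entry> BY_SCHEMA_THEN_NAME =
            Comparator.comparing(Entry::schema).thenComparing(Entry::name);

    public static List<Entry> sorted(List<Entry> entries) {
        return entries.stream().sorted(BY_SCHEMA_THEN_NAME).toList();
    }
}
```

The chained form avoids the hand-written `equals`/branch logic and is the idiomatic way to build compound orderings since Java 8.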
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/progress/model/FullTotalInfo.java b/multidb-portal/src/main/java/org/opengauss/migration/progress/model/FullTotalInfo.java
new file mode 100644
index 0000000000000000000000000000000000000000..7cf61c0b74b73dfe1cb97ba6910f35e8123035b0
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/progress/model/FullTotalInfo.java
@@ -0,0 +1,35 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.progress.model;
+
+import lombok.Data;
+
+/**
+ * full total info
+ *
+ * @since 2025/6/3
+ */
+@Data
+public class FullTotalInfo {
+ /**
+ * estimated total record count across all tables
+ */
+ private int record;
+
+ /**
+ * estimated total data size across all tables
+ */
+ private String data;
+
+ /**
+ * migration total time, unit: seconds
+ */
+ private int time;
+
+ /**
+ * migration speed, unit: MB/s
+ */
+ private String speed;
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/progress/model/IncrementalAndReverseEntry.java b/multidb-portal/src/main/java/org/opengauss/migration/progress/model/IncrementalAndReverseEntry.java
new file mode 100644
index 0000000000000000000000000000000000000000..134dbcdb272227822dceb50e35a85335d5662aca
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/progress/model/IncrementalAndReverseEntry.java
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.progress.model;
+
+import lombok.Data;
+
+/**
+ * incremental and reverse entry
+ *
+ * @since 2025/6/5
+ */
+@Data
+public class IncrementalAndReverseEntry {
+ private Integer count;
+ private Integer replayedCount;
+ private Integer skippedCount;
+ private Integer successCount;
+ private Integer failCount;
+ private Integer rest;
+ private Integer sourceSpeed;
+ private Integer sinkSpeed;
+ private Boolean hasFailSql;
+}
diff --git a/multidb-portal/src/main/java/org/opengauss/migration/progress/model/tool/ChameleonStatusEntry.java b/multidb-portal/src/main/java/org/opengauss/migration/progress/model/tool/ChameleonStatusEntry.java
new file mode 100644
index 0000000000000000000000000000000000000000..576c7bb68ba4e2dba0ec42b520d7cbfd9fd33ce6
--- /dev/null
+++ b/multidb-portal/src/main/java/org/opengauss/migration/progress/model/tool/ChameleonStatusEntry.java
@@ -0,0 +1,26 @@
+/*
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025-2025. All rights reserved.
+ */
+
+package org.opengauss.migration.progress.model.tool;
+
+import lombok.Data;
+import org.opengauss.migration.progress.model.FullEntry;
+import org.opengauss.migration.progress.model.FullTotalInfo;
+
+import java.util.List;
+
+/**
+ * chameleon status entry
+ *
+ * @since 2025/6/3
+ */
+@Data
+public class ChameleonStatusEntry {
+ private FullTotalInfo total;
+ private List<FullEntry> table;
+ private List