diff --git a/README.md b/README.md
index c02117f31d9f49daa43438cab7a55fb7ffe7d480..29b9f18f3986cb352c53832bac5689e01d02ae0e 100644
--- a/README.md
+++ b/README.md
@@ -12,7 +12,7 @@
- **Data migration**.
-Reads source data in batches via the JDBC fetch-size parameter, and writes it to the target database in batches using insert/copy.
+Reads source data in batches via JDBC, and writes it to the target database in batches using insert/copy.
Supports **incremental change synchronization** (Change Data Calculate) for tables with primary keys (use with caution above tens of millions of rows).
@@ -102,32 +102,34 @@ sh ./docker-maven-build.sh
| Parameter | Description | Example | Notes |
| :------| :------ | :------ | :------ |
-| source.datasource.url | JDBC URL of the source database | jdbc:oracle:thin:@10.17.1.158:1521:ORCL | Can be oracle/mysql/mariadb/sqlserver/postgresql/db2 |
-| source.datasource.driver-class-name | Driver class name of the source database | oracle.jdbc.driver.OracleDriver | Driver class of the corresponding database |
-| source.datasource.username | Source account name | tangyibo | None |
-| source.datasource.password | Source account password | tangyibo | None |
-| target.datasource.url | JDBC URL of the target database | jdbc:postgresql://10.17.1.90:5432/study | Can be oracle/sqlserver/postgresql/greenplum; mysql/mariadb/db2 also work, but with many field type compatibility issues |
-| target.datasource.driver-class-name | Driver class name of the target database | org.postgresql.Driver | Driver class of the corresponding database |
-| target.datasource.username | Target account name | study | None |
-| target.datasource.password | Target account password | 123456 | None |
-| source.datasource-fetch.size | fetch_size for source database queries | 10000 | Effective only when greater than 100 |
-| source.datasource-source.schema | Source schema name(s) | dbo,test | Separate multiple values with commas |
-| source.datasource-source.includes | Tables to include from the source schema(s) | users1,orgs1 | Separate multiple values with commas |
-| source.datasource-source.excludes | Tables to exclude from the source schema(s) | users,orgs | Tables not to migrate; separate multiple values with commas |
-| target.datasource-target.schema | Target schema name | public | Exactly one target schema is allowed |
-| target.datasource-target.drop | Whether to drop the table first and then create it | true | Allowed values: true, false |
-| target.create-table.auto-increment | Whether to enable auto-increment primary keys when creating tables | true | Allowed values: true, false |
-| target.writer-engine.insert | Whether to write data via insert | false | true for insert, false for copy; only effective when the target is PostgreSQL/Greenplum |
-| target.change-data-synch | Whether to enable incremental change synchronization; effective when target.datasource-target.drop is false and the table has a primary key; recommended false above tens of millions of rows | false | Allowed values: true, false |
+| dbswitch.source[i].url | JDBC URL of the source database | jdbc:oracle:thin:@10.17.1.158:1521:ORCL | Can be oracle/mysql/mariadb/sqlserver/postgresql/db2 |
+| dbswitch.source[i].driver-class-name | Driver class name of the source database | oracle.jdbc.driver.OracleDriver | Driver class of the corresponding database |
+| dbswitch.source[i].username | Source account name | tangyibo | None |
+| dbswitch.source[i].password | Source account password | tangyibo | None |
+| dbswitch.source[i].fetch-size | fetch_size for source database queries | 10000 | Effective only when greater than 100 |
+| dbswitch.source[i].source-schema | Source schema name(s) | dbo,test | Separate multiple values with commas |
+| dbswitch.source[i].source-includes | Tables to include from the source schema(s) | users1,orgs1 | Separate multiple values with commas |
+| dbswitch.source[i].source-excludes | Tables to exclude from the source schema(s) | users,orgs | Tables not to migrate; separate multiple values with commas |
+| dbswitch.target.url | JDBC URL of the target database | jdbc:postgresql://10.17.1.90:5432/study | Can be oracle/sqlserver/postgresql/greenplum; mysql/mariadb/db2 also work, but with many field type compatibility issues |
+| dbswitch.target.driver-class-name | Driver class name of the target database | org.postgresql.Driver | Driver class of the corresponding database |
+| dbswitch.target.username | Target account name | study | None |
+| dbswitch.target.password | Target account password | 123456 | None |
+| dbswitch.target.target-schema | Target schema name | public | Exactly one target schema is allowed |
+| dbswitch.target.target-drop | Whether to drop the table first and then create it | true | Allowed values: true, false |
+| dbswitch.target.create-table-auto-increment | Whether to enable auto-increment primary keys when creating tables | true | Allowed values: true, false |
+| dbswitch.target.writer-engine-insert | Whether to write data via insert | false | true for insert, false for copy; only effective when the target is PostgreSQL/Greenplum |
+| dbswitch.target.change-data-synch | Whether to enable incremental change synchronization; effective when dbswitch.target.target-drop is false and the table has a primary key; recommended false above tens of millions of rows | false | Allowed values: true, false |
**Note:**
-- *(1) If source.datasource-source.includes is not empty, execution follows the include-table mode;*
+- *(1) Multiple source database types are supported: dbswitch.source[i] is an array, where i is an integer index starting at 0;*
+
+- *(2) If dbswitch.source[i].source-includes is not empty, execution follows the include-table mode;*
-- *(2) If source.datasource-source.includes is empty, execution follows the exclude-table mode via source.datasource-source.excludes.*
+- *(3) If dbswitch.source[i].source-includes is empty, execution follows the exclude-table mode via dbswitch.source[i].source-excludes.*
-- *(3) When target.datasource-target.drop=false and target.change-data-synch=true, tables with primary keys are synchronized via incremental changes*
+- *(4) When dbswitch.target.target-drop=false and dbswitch.target.change-data-synch=true, tables with primary keys are synchronized via incremental changes*
- mysql/mariadb driver configuration sample
@@ -144,28 +146,28 @@ jdbc driver class: org.mariadb.jdbc.Driver
- oracle driver configuration sample
```
-jdbc URL: jdbc:oracle:thin:@172.17.2.58:1521:ORCL
+jdbc URL: jdbc:oracle:thin:@172.17.20.58:1521:ORCL
jdbc driver class: oracle.jdbc.driver.OracleDriver
```
- SqlServer (>=2005) driver configuration sample
```
-jdbc URL: jdbc:sqlserver://172.16.2.66:1433;DatabaseName=hqtest
+jdbc URL: jdbc:sqlserver://172.16.20.66:1433;DatabaseName=hqtest
jdbc driver class: com.microsoft.sqlserver.jdbc.SQLServerDriver
```
- PostgreSQL driver configuration sample
```
-jdbc URL: jdbc:postgresql://172.17.2.10:5432/study
+jdbc URL: jdbc:postgresql://172.17.20.10:5432/study
jdbc driver class: org.postgresql.Driver
```
- DB2 driver configuration sample
```
-jdbc URL: jdbc:db2://172.17.203.91:50000/testdb:driverType=4;fullyMaterializeLobData=true;fullyMaterializeInputStreams=true;progressiveStreaming=2;progresssiveLocators=2;
+jdbc URL: jdbc:db2://172.17.20.91:50000/testdb:driverType=4;fullyMaterializeLobData=true;fullyMaterializeInputStreams=true;progressiveStreaming=2;progressiveLocators=2;
jdbc driver class: com.ibm.db2.jcc.DB2Driver
```
@@ -193,7 +195,7 @@ under windows:
**(b)** To write data via insert, set the following parameter to true in the config.properties file:
```
-target.writer-engine.insert=true
+dbswitch.target.writer-engine-insert=true
```
- 2. The dbswitch offline synchronization tool also provides RESTful APIs for converting table structures between databases; start the service as follows:
@@ -211,9 +213,9 @@ bin/startup.sh
- 5. Usage notes for incremental change synchronization
-> Step A: first set target.datasource-target.drop=true and target.change-data-synch=false, then start the program to fully synchronize table structures and data;
+> Step A: first set dbswitch.target.target-drop=true and dbswitch.target.change-data-synch=false, then start the program to fully synchronize table structures and data;
-> Step B: then set target.datasource-target.drop=false and target.change-data-synch=true, and start the program again to incrementally synchronize changes for tables with primary keys.
+> Step B: then set dbswitch.target.target-drop=false and dbswitch.target.change-data-synch=true, and start the program again to incrementally synchronize changes for tables with primary keys.
> Note: if the two tables already have identical structures, or the source fields are a subset of the target fields, you can run with the Step B configuration directly.
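The parameter table and the two-step procedure above can be sketched as a hypothetical config.properties; the values are the examples from the table, and hosts, credentials, and table names are placeholders, not values from any real deployment:

```properties
# Source no. 0 (add more sources as dbswitch.source[1].*, dbswitch.source[2].*, ...)
dbswitch.source[0].url=jdbc:oracle:thin:@10.17.1.158:1521:ORCL
dbswitch.source[0].driver-class-name=oracle.jdbc.driver.OracleDriver
dbswitch.source[0].username=tangyibo
dbswitch.source[0].password=tangyibo
dbswitch.source[0].fetch-size=10000
dbswitch.source[0].source-schema=dbo,test
dbswitch.source[0].source-includes=users1,orgs1
dbswitch.source[0].source-excludes=

# Target
dbswitch.target.url=jdbc:postgresql://10.17.1.90:5432/study
dbswitch.target.driver-class-name=org.postgresql.Driver
dbswitch.target.username=study
dbswitch.target.password=123456
dbswitch.target.target-schema=public

# Step A: full synchronization of structure and data
dbswitch.target.target-drop=true
dbswitch.target.change-data-synch=false
# Step B (second run): incremental change sync for tables with primary keys
# dbswitch.target.target-drop=false
# dbswitch.target.change-data-synch=true
```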
diff --git a/dbswitch-common/pom.xml b/dbswitch-common/pom.xml
index 58e758d2dcb201694c37f6a63b611f806b1c37f5..f0ceb46705a1a322e053284c2c90f464659ecd33 100644
--- a/dbswitch-common/pom.xml
+++ b/dbswitch-common/pom.xml
@@ -5,7 +5,7 @@
  <groupId>com.gitee</groupId>
  <artifactId>dbswitch</artifactId>
- <version>1.5.0</version>
+ <version>1.5.1</version>
  <artifactId>dbswitch-common</artifactId>
diff --git a/dbswitch-core/pom.xml b/dbswitch-core/pom.xml
index 49196f94d939355087533d2ed9afe4735d9bb997..02deaedbab9fbf772b565a57d6df8987532bd194 100644
--- a/dbswitch-core/pom.xml
+++ b/dbswitch-core/pom.xml
@@ -5,7 +5,7 @@
  <groupId>com.gitee</groupId>
  <artifactId>dbswitch</artifactId>
- <version>1.5.0</version>
+ <version>1.5.1</version>
  <artifactId>dbswitch-core</artifactId>
diff --git a/dbswitch-data/pom.xml b/dbswitch-data/pom.xml
index f9f50b236fe88736c13da49a9b44116aee9d43f5..36dd9348b16ed4aa2ae60f7db33996abdaf88dbe 100644
--- a/dbswitch-data/pom.xml
+++ b/dbswitch-data/pom.xml
@@ -5,7 +5,7 @@
  <groupId>com.gitee</groupId>
  <artifactId>dbswitch</artifactId>
- <version>1.5.0</version>
+ <version>1.5.1</version>
  <artifactId>dbswitch-data</artifactId>
@@ -62,22 +62,24 @@
      <artifactId>postgresql</artifactId>
-    <dependency>
-      <groupId>org.apache.commons</groupId>
-      <artifactId>commons-dbcp2</artifactId>
-    </dependency>
-
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-configuration-processor</artifactId>
      <optional>true</optional>
    </dependency>
-
+
    <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
      <scope>provided</scope>
    </dependency>
+    <dependency>
+      <groupId>org.springframework.boot</groupId>
+      <artifactId>spring-boot-starter-test</artifactId>
+      <scope>test</scope>
+    </dependency>
+  </dependencies>
+</project>
\ No newline at end of file
diff --git a/dbswitch-data/src/main/java/com/gitee/dbswitch/data/config/DbswichProperties.java b/dbswitch-data/src/main/java/com/gitee/dbswitch/data/config/DbswichProperties.java
new file mode 100644
index 0000000000000000000000000000000000000000..821b20f32d52ba7af787fa8f35c4d7352c2bbc2b
--- /dev/null
+++ b/dbswitch-data/src/main/java/com/gitee/dbswitch/data/config/DbswichProperties.java
@@ -0,0 +1,64 @@
+// Copyright tang. All rights reserved.
+// https://gitee.com/inrgihc/dbswitch
+//
+// Use of this source code is governed by a BSD-style license
+//
+// Author: tang (inrgihc@126.com)
+// Date : 2020/1/2
+// Location: beijing , china
+/////////////////////////////////////////////////////////////
+package com.gitee.dbswitch.data.config;
+
+import java.util.ArrayList;
+import java.util.List;
+import org.springframework.boot.context.properties.ConfigurationProperties;
+import org.springframework.context.annotation.Configuration;
+import org.springframework.context.annotation.PropertySource;
+import lombok.Data;
+import lombok.NoArgsConstructor;
+
+/**
+ * Properties mapping configuration class
+ *
+ * @author tang
+ *
+ */
+@Configuration
+@Data
+@ConfigurationProperties(prefix = "dbswitch", ignoreInvalidFields=false, ignoreUnknownFields = false)
+@PropertySource("classpath:config.properties")
+public class DbswichProperties {
+
+ private List<SourceDataSourceProperties> source = new ArrayList<>();
+
+ private TargetDataSourceProperties target = new TargetDataSourceProperties();
+
+ @Data
+ @NoArgsConstructor
+ public static class SourceDataSourceProperties {
+ private String url;
+ private String driverClassName;
+ private String username;
+ private String password;
+
+ private Integer fetchSize=5000;
+ private String sourceSchema="";
+ private String sourceIncludes="";
+ private String sourceExcludes="";
+ }
+
+ @Data
+ @NoArgsConstructor
+ public static class TargetDataSourceProperties {
+ private String url;
+ private String driverClassName;
+ private String username;
+ private String password;
+
+ private String targetSchema="";
+ private Boolean targetDrop=Boolean.TRUE;
+ private Boolean createTableAutoIncrement=Boolean.FALSE;
+ private Boolean writerEngineInsert=Boolean.FALSE;
+ private Boolean changeDataSynch=Boolean.FALSE;
+ }
+}
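The `dbswitch.source[i].*` keys bind onto the `List` field above through Spring Boot's relaxed binding of indexed properties. As a minimal stdlib-only sketch of that idea (this is an illustration, not Spring's actual implementation; the class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of how indexed keys such as dbswitch.source[0].url
// become entries of a list; Spring Boot's Binder does the real work.
class IndexedKeyBinder {

  private static final Pattern KEY = Pattern.compile("^dbswitch\\.source\\[(\\d+)\\]\\.(.+)$");

  // Groups flat property keys by their [i] index, ordered by index.
  static List<Map<String, String>> bind(Map<String, String> flat) {
    TreeMap<Integer, Map<String, String>> byIndex = new TreeMap<>();
    for (Map.Entry<String, String> e : flat.entrySet()) {
      Matcher m = KEY.matcher(e.getKey());
      if (m.matches()) {
        byIndex.computeIfAbsent(Integer.parseInt(m.group(1)), k -> new TreeMap<>())
            .put(m.group(2), e.getValue());
      }
    }
    return new ArrayList<>(byIndex.values());
  }

  public static void main(String[] args) {
    Map<String, String> flat = new TreeMap<>();
    flat.put("dbswitch.source[0].url", "jdbc:h2:mem:a");
    flat.put("dbswitch.source[0].fetch-size", "10000");
    flat.put("dbswitch.source[1].url", "jdbc:h2:mem:b");
    System.out.println(bind(flat));
  }
}
```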
diff --git a/dbswitch-data/src/main/java/com/gitee/dbswitch/data/config/PropertiesConfig.java b/dbswitch-data/src/main/java/com/gitee/dbswitch/data/config/PropertiesConfig.java
index ff0236771378ad75bbc5b6cb62e5126ee64383ef..6b5947bb7cb6807949f587f70700fdab96b4ccd0 100644
--- a/dbswitch-data/src/main/java/com/gitee/dbswitch/data/config/PropertiesConfig.java
+++ b/dbswitch-data/src/main/java/com/gitee/dbswitch/data/config/PropertiesConfig.java
@@ -9,149 +9,22 @@
/////////////////////////////////////////////////////////////
package com.gitee.dbswitch.data.config;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-import org.apache.commons.dbcp2.BasicDataSource;
-import org.apache.logging.log4j.util.Strings;
-import org.springframework.beans.factory.annotation.Qualifier;
-import org.springframework.beans.factory.annotation.Value;
-import org.springframework.boot.context.properties.ConfigurationProperties;
-import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
-import org.springframework.context.annotation.PropertySource;
-import org.springframework.jdbc.core.JdbcTemplate;
import com.gitee.dbswitch.core.service.IMetaDataService;
import com.gitee.dbswitch.core.service.impl.MigrationMetaDataServiceImpl;
/**
- * Configuration class for data source properties
+ * Registers all mapped property classes; separate multiple classes with commas inside { }
*
* @author tang
*
*/
@Configuration
-@PropertySource("classpath:config.properties")
public class PropertiesConfig {
-
- @Value("${source.datasource.url}")
- public String dbSourceJdbcUrl;
-
- @Value("${source.datasource.driver-class-name}")
- public String dbSourceClassName;
-
- @Value("${source.datasource.username}")
- public String dbSourceUserName;
-
- @Value("${source.datasource.password}")
- public String dbSourcePassword;
-
- @Value("${target.datasource.url}")
- public String dbTargetJdbcUrl;
-
- @Value("${target.datasource.driver-class-name}")
- public String dbTargetClassName;
-
- @Value("${target.datasource.username}")
- public String dbTargetUserName;
-
- @Value("${target.datasource.password}")
- public String dbTargetPassword;
-
- /////////////////////////////////////////////
-
- @Value("${source.datasource-fetch.size}")
- public int fetchSizeSource;
-
- @Value("${source.datasource-source.schema}")
- private String schemaNameSource;
-
- public List<String> getSourceSchemaNames() {
- if (!Strings.isEmpty(schemaNameSource)) {
- String[] strs = schemaNameSource.split(",");
- if (strs.length > 0) {
- return new ArrayList<>(Arrays.asList(strs));
- }
- }
-
- return new ArrayList<>();
- }
-
- @Value("${source.datasource-source.includes}")
- private String tableNameIncludesSource;
-
- @Value("${source.datasource-source.excludes}")
- private String tableNameExcludesSource;
-
- public List<String> getSourceTableNameIncludes() {
- if (!Strings.isEmpty(tableNameIncludesSource)) {
- String[] strs = tableNameIncludesSource.split(",");
- if (strs.length > 0) {
- return new ArrayList<>(Arrays.asList(strs));
- }
- }
-
- return new ArrayList<>();
- }
-
- public List<String> getSourceTableNameExcludes() {
- if (!Strings.isEmpty(tableNameExcludesSource)) {
- String[] strs = tableNameExcludesSource.split(",");
- if (strs.length > 0) {
- return new ArrayList<>(Arrays.asList(strs));
- }
- }
-
- return new ArrayList<>();
- }
-
- ////////////////////////////////////////////
-
- @Value("${target.datasource-target.schema}")
- public String dbTargetSchema;
-
- @Value("${target.datasource-target.drop}")
- public Boolean dropTargetTable;
-
- @Value("${target.create-table.auto-increment}")
- public Boolean createSupportAutoIncr;
-
- @Value("${target.writer-engine.insert}")
- public Boolean engineInsert;
-
- @Value("${target.change-data-synch}")
- public Boolean changeSynch;
-
- ////////////////////////////////////////////
-
- @Bean(name="sourceDataSource")
- @Qualifier("sourceDataSource")
- @ConfigurationProperties(prefix="source.datasource")
- public BasicDataSource sourceDataSource() {
- return DataSourceBuilder.create().type(BasicDataSource.class).build();
- }
- @Bean(name="targetDataSource")
- @Qualifier("targetDataSource")
- @ConfigurationProperties(prefix="target.datasource")
- public BasicDataSource targetDataSource() {
- return DataSourceBuilder.create().type(BasicDataSource.class).build();
- }
-
- @Bean(name = "sourceJdbcTemplate")
- public JdbcTemplate sourceJdbcTemplate(@Qualifier("sourceDataSource") BasicDataSource dataSource) {
- return new JdbcTemplate(dataSource);
- }
-
- @Bean(name = "targetJdbcTemplate")
- public JdbcTemplate target(@Qualifier("targetDataSource") BasicDataSource dataSource) {
- return new JdbcTemplate(dataSource);
- }
-
@Bean
public IMetaDataService getMetaDataService() {
return new MigrationMetaDataServiceImpl();
}
-
}
diff --git a/dbswitch-data/src/main/java/com/gitee/dbswitch/data/service/MainService.java b/dbswitch-data/src/main/java/com/gitee/dbswitch/data/service/MainService.java
index 35d18aee4ac14fcf8f5991a668f26525f4b39c33..e1ce0d87e7c643bd9628e3eb9c0687a0f0ec57ee 100644
--- a/dbswitch-data/src/main/java/com/gitee/dbswitch/data/service/MainService.java
+++ b/dbswitch-data/src/main/java/com/gitee/dbswitch/data/service/MainService.java
@@ -11,12 +11,14 @@ package com.gitee.dbswitch.data.service;
import java.sql.ResultSet;
import java.util.ArrayList;
+import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
-import org.apache.commons.dbcp2.BasicDataSource;
+import java.util.Objects;
+import javax.sql.DataSource;
+import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.util.StopWatch;
@@ -26,7 +28,7 @@ import com.gitee.dbswitch.common.util.CommonUtils;
import com.gitee.dbswitch.core.model.ColumnDescription;
import com.gitee.dbswitch.core.model.TableDescription;
import com.gitee.dbswitch.core.service.IMetaDataService;
-import com.gitee.dbswitch.data.config.PropertiesConfig;
+import com.gitee.dbswitch.data.config.DbswichProperties;
import com.gitee.dbswitch.data.util.JdbcTemplateUtils;
import com.gitee.dbswitch.dbchange.ChangeCaculatorService;
import com.gitee.dbswitch.dbchange.IDatabaseChangeCaculator;
@@ -41,11 +43,11 @@ import com.gitee.dbswitch.dbsynch.DatabaseSynchronizeFactory;
import com.gitee.dbswitch.dbsynch.IDatabaseSynchronize;
import com.gitee.dbswitch.dbwriter.DatabaseWriterFactory;
import com.gitee.dbswitch.dbwriter.IDatabaseWriter;
-
+import com.zaxxer.hikari.HikariDataSource;
import lombok.extern.slf4j.Slf4j;
/**
- * Data migration service class
+ * Main data migration logic class
*
* @author tang
*
@@ -57,15 +59,7 @@ public class MainService {
private ObjectMapper jackson = new ObjectMapper();
@Autowired
- @Qualifier("sourceDataSource")
- private BasicDataSource sourceDataSource;
-
- @Autowired
- @Qualifier("targetDataSource")
- private BasicDataSource targetDataSource;
-
- @Autowired
- private PropertiesConfig properties;
+ private DbswichProperties properties;
@Autowired
private IMetaDataService metaDataService;
@@ -74,58 +68,75 @@ public class MainService {
 * Execute the main logic
*/
public void run() {
- DatabaseTypeEnum sourceDatabaseType = JdbcTemplateUtils.getDatabaseProduceName(this.sourceDataSource);
- metaDataService.setDatabaseConnection(sourceDatabaseType);
-
- IDatabaseWriter writer = DatabaseWriterFactory.createDatabaseWriter(this.targetDataSource,
- properties.engineInsert);
-
StopWatch watch = new StopWatch();
watch.start();
try {
- log.info("service is running....");
-
-      // Decide the filtering strategy: exclude or include
-      List<String> includes = properties.getSourceTableNameIncludes();
-      log.info("Includes tables is :{}", jackson.writeValueAsString(includes));
-      List<String> filters = properties.getSourceTableNameExcludes();
- log.info("Filter tables is :{}", jackson.writeValueAsString(filters));
-
- boolean useExcludeTables = includes.isEmpty();
- if (useExcludeTables) {
- log.info("!!!! Use source.datasource-source.excludes to filter tables");
- } else {
- log.info("!!!! Use source.datasource-source.includes to filter tables");
- }
+ HikariDataSource targetDataSource = this.createTargetDataSource(properties.getTarget());
-      List<String> schemas = properties.getSourceSchemaNames();
- log.info("Source schema names is :{}", jackson.writeValueAsString(schemas));
- for (String schema : schemas) {
-        // Read all tables in the given schema of the source database
-        List<TableDescription> tableList = metaDataService.queryTableList(properties.dbSourceJdbcUrl,
- properties.dbSourceUserName, properties.dbSourcePassword, schema);
- if (tableList.isEmpty()) {
- log.warn("### Find table list empty for shema={}", schema);
+ IDatabaseWriter writer = DatabaseWriterFactory.createDatabaseWriter(targetDataSource,
+ properties.getTarget().getWriterEngineInsert().booleanValue());
+
+ log.info("service is running....");
+ //log.info("Application properties configuration :{}", jackson.writeValueAsString(this.properties));
+
+      List<DbswichProperties.SourceDataSourceProperties> sources = properties.getSource();
+
+ for (DbswichProperties.SourceDataSourceProperties source : sources) {
+ HikariDataSource sourceDataSource = this.createSourceDataSource(source);
+
+ DatabaseTypeEnum sourceDatabaseType = JdbcTemplateUtils.getDatabaseProduceName(sourceDataSource);
+ metaDataService.setDatabaseConnection(sourceDatabaseType);
+
+        // Decide the filtering strategy: exclude or include
+        List<String> includes = stringToList(source.getSourceIncludes());
+        log.info("Includes tables is :{}", jackson.writeValueAsString(includes));
+        List<String> filters = stringToList(source.getSourceExcludes());
+ log.info("Filter tables is :{}", jackson.writeValueAsString(filters));
+
+ boolean useExcludeTables = includes.isEmpty();
+ if (useExcludeTables) {
+ log.info("!!!! Use dbswitch.source[i].source-excludes to filter tables");
} else {
- int finished = 0;
- for (TableDescription td : tableList) {
- String tableName = td.getTableName();
- if (useExcludeTables) {
- if (!filters.contains(tableName)) {
- this.doDataMigration(td, writer);
- }
- } else {
- if (includes.contains(tableName)) {
- this.doDataMigration(td, writer);
+ log.info("!!!! Use dbswitch.source[i].source-includes to filter tables");
+ }
+
+        List<String> schemas = stringToList(source.getSourceSchema());
+ log.info("Source schema names is :{}", jackson.writeValueAsString(schemas));
+ for (String schema : schemas) {
+          // Read all tables in the given schema of the source database
+          List<TableDescription> tableList = metaDataService.queryTableList(source.getUrl(),
+ source.getUsername(), source.getPassword(), schema);
+ if (tableList.isEmpty()) {
+            log.warn("### Table list is empty for schema={}", schema);
+ } else {
+ int finished = 0;
+ for (TableDescription td : tableList) {
+ String tableName = td.getTableName();
+ if (useExcludeTables) {
+ if (!filters.contains(tableName)) {
+ this.doDataMigration(td, source, sourceDataSource, writer);
+ }
+ } else {
+ if (includes.contains(tableName)) {
+ this.doDataMigration(td, source, sourceDataSource, writer);
+ }
}
- }
- log.info("#### Complete data migration for schema [ {} ] count is {},total is {}, process is {}%",
- schema, ++finished, tableList.size(), (float) (finished * 100.0 / tableList.size()));
+              log.info(
+                  "#### Completed data migration for schema [ {} ]: count is {}, total is {}, progress is {}%",
+                  schema, ++finished, tableList.size(),
+                  (float) (finished * 100.0 / tableList.size()));
+ }
}
- }
+ }
+
+ try {
+ sourceDataSource.close();
+ } catch (Exception e) {
+        log.warn("Close data source error:", e);
+ }
}
log.info("service run success!");
} catch (Exception e) {
@@ -142,63 +153,70 @@ public class MainService {
* @param tableDescription
* @param writer
*/
- private void doDataMigration(TableDescription tableDescription, IDatabaseWriter writer) {
+ private void doDataMigration(TableDescription tableDescription,
+ DbswichProperties.SourceDataSourceProperties sourceProperties, HikariDataSource sourceDataSource,
+ IDatabaseWriter writer) {
log.info("migration table for {} ", tableDescription.getTableName());
- JdbcTemplate targetJdbcTemplate = new JdbcTemplate(this.targetDataSource);
- DatabaseTypeEnum targetDatabaseType = JdbcTemplateUtils.getDatabaseProduceName(this.targetDataSource);
+ JdbcTemplate targetJdbcTemplate = new JdbcTemplate(writer.getDataSource());
+ DatabaseTypeEnum targetDatabaseType = JdbcTemplateUtils.getDatabaseProduceName(writer.getDataSource());
- if (properties.dropTargetTable.booleanValue()) {
+ if (properties.getTarget().getTargetDrop().booleanValue()) {
/**
-     * When target.datasource-target.drop=true, execute a drop table statement first, then a create table statement
+     * When dbswitch.target.target-drop=true, execute a drop table statement
+     * first, then a create table statement
*/
      // Drop the table first
try {
IDatabaseOperator targetOperator = DatabaseOperatorFactory
- .createDatabaseOperator(this.targetDataSource);
- targetOperator.dropTable(properties.dbTargetSchema, tableDescription.getTableName());
+ .createDatabaseOperator(writer.getDataSource());
+ targetOperator.dropTable(properties.getTarget().getTargetSchema(), tableDescription.getTableName());
} catch (Exception e) {
- log.info("Target Table {}.{} is not exits!", properties.dbTargetSchema, tableDescription.getTableName());
+      log.info("Target table {}.{} does not exist!", properties.getTarget().getTargetSchema(),
+          tableDescription.getTableName());
}
      // Then create the table
-      List<ColumnDescription> columnDescs = metaDataService.queryTableColumnMeta(properties.dbSourceJdbcUrl,
- properties.dbSourceUserName, properties.dbSourcePassword, tableDescription.getSchemaName(),
+      List<ColumnDescription> columnDescs = metaDataService.queryTableColumnMeta(sourceProperties.getUrl(),
+ sourceProperties.getUsername(), sourceProperties.getPassword(), tableDescription.getSchemaName(),
tableDescription.getTableName());
-      List<String> primaryKeys = metaDataService.queryTablePrimaryKeys(properties.dbSourceJdbcUrl,
- properties.dbSourceUserName, properties.dbSourcePassword, tableDescription.getSchemaName(),
+      List<String> primaryKeys = metaDataService.queryTablePrimaryKeys(sourceProperties.getUrl(),
+ sourceProperties.getUsername(), sourceProperties.getPassword(), tableDescription.getSchemaName(),
tableDescription.getTableName());
String sqlCreateTable = metaDataService.getDDLCreateTableSQL(targetDatabaseType, columnDescs, primaryKeys,
- properties.dbTargetSchema, tableDescription.getTableName(), properties.createSupportAutoIncr);
+ properties.getTarget().getTargetSchema(), tableDescription.getTableName(),
+ properties.getTarget().getCreateTableAutoIncrement().booleanValue());
targetJdbcTemplate.execute(sqlCreateTable);
log.info("Execute SQL: \n{}", sqlCreateTable);
- this.doFullCoverSynchronize(tableDescription, writer);
+ this.doFullCoverSynchronize(tableDescription, sourceProperties, sourceDataSource, writer);
} else {
      // Conditions for change-based sync: (1) both ends have identical table structures with the same primary key fields; (2) MySQL uses the InnoDB engine;
- if (properties.changeSynch.booleanValue()) {
+ if (properties.getTarget().getChangeDataSynch().booleanValue()) {
        // Choose the push mode based on the primary key situation
- JdbcMetaDataUtils mds = new JdbcMetaDataUtils(this.sourceDataSource);
- JdbcMetaDataUtils mdt = new JdbcMetaDataUtils(this.targetDataSource);
+ JdbcMetaDataUtils mds = new JdbcMetaDataUtils(sourceDataSource);
+ JdbcMetaDataUtils mdt = new JdbcMetaDataUtils(writer.getDataSource());
        List<String> pks1 = mds.queryTablePrimaryKeys(tableDescription.getSchemaName(),
tableDescription.getTableName());
-        List<String> pks2 = mdt.queryTablePrimaryKeys(properties.dbTargetSchema,
+        List<String> pks2 = mdt.queryTablePrimaryKeys(properties.getTarget().getTargetSchema(),
tableDescription.getTableName());
if (!pks1.isEmpty() && !pks2.isEmpty() && pks1.containsAll(pks2) && pks2.containsAll(pks1)) {
if (targetDatabaseType == DatabaseTypeEnum.MYSQL
- && !isMysqlInodbStorageEngine(properties.dbTargetSchema, tableDescription.getTableName())) {
- this.doFullCoverSynchronize(tableDescription, writer);
+ && !isMysqlInodbStorageEngine(properties.getTarget().getTargetSchema(),
+ tableDescription.getTableName(), writer.getDataSource())) {
+ this.doFullCoverSynchronize(tableDescription, sourceProperties, sourceDataSource, writer);
} else {
            List<String> fields = mds.queryTableColumnName(tableDescription.getSchemaName(),
tableDescription.getTableName());
- this.doIncreaseSynchronize(tableDescription, writer, pks1, fields);
+ this.doIncreaseSynchronize(tableDescription, sourceProperties, sourceDataSource, writer, pks1,
+ fields);
}
} else {
- this.doFullCoverSynchronize(tableDescription, writer);
+ this.doFullCoverSynchronize(tableDescription, sourceProperties, sourceDataSource, writer);
}
} else {
- this.doFullCoverSynchronize(tableDescription, writer);
+ this.doFullCoverSynchronize(tableDescription, sourceProperties, sourceDataSource, writer);
}
}
}
@@ -209,29 +227,31 @@ public class MainService {
 * @param tableDescription table description; may be a view or a physical table
 * @param writer target-side writer
*/
- private void doFullCoverSynchronize(TableDescription tableDescription, IDatabaseWriter writer) {
+ private void doFullCoverSynchronize(TableDescription tableDescription,
+ DbswichProperties.SourceDataSourceProperties sourceProperties, HikariDataSource sourceDataSource,
+ IDatabaseWriter writer) {
int fetchSize = 100;
- if (properties.fetchSizeSource >= fetchSize) {
- fetchSize = properties.fetchSizeSource;
+ if (sourceProperties.getFetchSize() >= fetchSize) {
+ fetchSize = sourceProperties.getFetchSize();
}
final int BATCH_SIZE = fetchSize;
    // Prepare the write operation on the target side
- writer.prepareWrite(properties.dbTargetSchema, tableDescription.getTableName());
+ writer.prepareWrite(properties.getTarget().getTargetSchema(), tableDescription.getTableName());
    // Truncate the existing data in the target table
- IDatabaseOperator targetOperator = DatabaseOperatorFactory.createDatabaseOperator(this.targetDataSource);
- targetOperator.truncateTableData(properties.dbTargetSchema, tableDescription.getTableName());
+ IDatabaseOperator targetOperator = DatabaseOperatorFactory.createDatabaseOperator(writer.getDataSource());
+ targetOperator.truncateTableData(properties.getTarget().getTargetSchema(), tableDescription.getTableName());
    // Query source data and write it to the target
- IDatabaseOperator sourceOperator = DatabaseOperatorFactory.createDatabaseOperator(this.sourceDataSource);
+ IDatabaseOperator sourceOperator = DatabaseOperatorFactory.createDatabaseOperator(sourceDataSource);
sourceOperator.setFetchSize(BATCH_SIZE);
- DatabaseTypeEnum sourceDatabaseType = JdbcTemplateUtils.getDatabaseProduceName(this.sourceDataSource);
+ DatabaseTypeEnum sourceDatabaseType = JdbcTemplateUtils.getDatabaseProduceName(sourceDataSource);
String fullTableName = CommonUtils.getTableFullNameByDatabase(sourceDatabaseType,
tableDescription.getSchemaName(), tableDescription.getTableName());
-    Map<String, Integer> columnMetaData = JdbcTemplateUtils
- .getColumnMetaData(new JdbcTemplate(this.sourceDataSource), fullTableName);
+    Map<String, Integer> columnMetaData = JdbcTemplateUtils.getColumnMetaData(new JdbcTemplate(sourceDataSource),
+ fullTableName);
    List<String> fields = new ArrayList<>(columnMetaData.keySet());
StatementResultSet srs = sourceOperator.queryTableData(tableDescription.getSchemaName(),
@@ -282,30 +302,31 @@ public class MainService {
 * @param tableDescription table description; must be a physical table here
 * @param writer target-side writer
*/
-  private void doIncreaseSynchronize(TableDescription tableDescription, IDatabaseWriter writer, List<String> pks,
-      List<String> fields) {
+  private void doIncreaseSynchronize(TableDescription tableDescription,
+      DbswichProperties.SourceDataSourceProperties sourceProperties, HikariDataSource sourceDataSource,
+      IDatabaseWriter writer, List<String> pks, List<String> fields) {
int fetchSize = 100;
- if (properties.fetchSizeSource >= fetchSize) {
- fetchSize = properties.fetchSizeSource;
+ if (sourceProperties.getFetchSize() >= fetchSize) {
+ fetchSize = sourceProperties.getFetchSize();
}
final int BATCH_SIZE = fetchSize;
- DatabaseTypeEnum sourceDatabaseType = JdbcTemplateUtils.getDatabaseProduceName(this.sourceDataSource);
+ DatabaseTypeEnum sourceDatabaseType = JdbcTemplateUtils.getDatabaseProduceName(sourceDataSource);
String fullTableName = CommonUtils.getTableFullNameByDatabase(sourceDatabaseType,
tableDescription.getSchemaName(), tableDescription.getTableName());
TaskParamBean.TaskParamBeanBuilder taskBuilder = TaskParamBean.builder();
- taskBuilder.oldDataSource(this.targetDataSource);
- taskBuilder.oldSchemaName(properties.dbTargetSchema);
+ taskBuilder.oldDataSource(writer.getDataSource());
+ taskBuilder.oldSchemaName(properties.getTarget().getTargetSchema());
taskBuilder.oldTableName(tableDescription.getTableName());
- taskBuilder.newDataSource(this.sourceDataSource);
+ taskBuilder.newDataSource(sourceDataSource);
taskBuilder.newSchemaName(tableDescription.getSchemaName());
taskBuilder.newTableName(tableDescription.getTableName());
taskBuilder.fieldColumns(fields);
TaskParamBean param = taskBuilder.build();
- IDatabaseSynchronize synch = DatabaseSynchronizeFactory.createDatabaseWriter(this.targetDataSource);
+ IDatabaseSynchronize synch = DatabaseSynchronizeFactory.createDatabaseWriter(writer.getDataSource());
synch.prepare(param.getOldSchemaName(), param.getOldTableName(), fields, pks);
IDatabaseChangeCaculator changeCaculator = new ChangeCaculatorService();
@@ -402,6 +423,73 @@ public class MainService {
});
}
+ /**
+   * Create a connection pool from the given database connection descriptor
+   *
+   * @param description database connection descriptor
+   * @return HikariDataSource connection pool
+ */
+ private HikariDataSource createSourceDataSource(DbswichProperties.SourceDataSourceProperties description) {
+ HikariDataSource ds = new HikariDataSource();
+ ds.setPoolName("The_Source_DB_Connection");
+ ds.setJdbcUrl(description.getUrl());
+ ds.setDriverClassName(description.getDriverClassName());
+ ds.setUsername(description.getUsername());
+ ds.setPassword(description.getPassword());
+ if (description.getDriverClassName().contains("oracle")) {
+ ds.setConnectionTestQuery("SELECT 'Hello' from DUAL");
+ } else {
+ ds.setConnectionTestQuery("SELECT 1");
+ }
+ ds.setMaximumPoolSize(5);
+ ds.setMinimumIdle(2);
+ ds.setConnectionTimeout(30000);
+ ds.setIdleTimeout(60000);
+
+ return ds;
+ }
+
+ /**
+   * Create a connection pool from the given database connection descriptor
+   *
+   * @param description database connection descriptor
+   * @return HikariDataSource connection pool
+ */
+ private HikariDataSource createTargetDataSource(DbswichProperties.TargetDataSourceProperties description) {
+ HikariDataSource ds = new HikariDataSource();
+ ds.setPoolName("The_Target_DB_Connection");
+ ds.setJdbcUrl(description.getUrl());
+ ds.setDriverClassName(description.getDriverClassName());
+ ds.setUsername(description.getUsername());
+ ds.setPassword(description.getPassword());
+ if (description.getDriverClassName().contains("oracle")) {
+ ds.setConnectionTestQuery("SELECT 'Hello' from DUAL");
+ } else {
+ ds.setConnectionTestQuery("SELECT 1");
+ }
+ ds.setMaximumPoolSize(5);
+ ds.setMinimumIdle(2);
+ ds.setConnectionTimeout(30000);
+ ds.setIdleTimeout(60000);
+
+ // For a Greenplum target, disable the session-level query optimizer
+ if (description.getDriverClassName().contains("postgresql")) {
+ org.springframework.jdbc.datasource.DriverManagerDataSource dataSource = new org.springframework.jdbc.datasource.DriverManagerDataSource();
+ dataSource.setDriverClassName(description.getDriverClassName());
+ dataSource.setUrl(description.getUrl());
+ dataSource.setUsername(description.getUsername());
+ dataSource.setPassword(description.getPassword());
+ JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
+ String versionString = jdbcTemplate.queryForObject("SELECT version()", String.class);
+ if (Objects.nonNull(versionString) && versionString.contains("Greenplum")) {
+ log.info("#### Target database is a Greenplum cluster, disabling optimizer now: set optimizer to 'off'");
+ ds.setConnectionInitSql("set optimizer to 'off'");
+ }
+ }
+
+ return ds;
+ }
+
/**
* Check whether the storage engine of a MySQL table is InnoDB
*
@@ -409,9 +497,26 @@ public class MainService {
* @param task task entity
* @return true when the storage engine is InnoDB, otherwise false
*/
- private boolean isMysqlInodbStorageEngine(String shemaName, String tableName) {
+ private boolean isMysqlInodbStorageEngine(String shemaName, String tableName, DataSource dataSource) {
String sql = "SELECT count(*) as total FROM information_schema.tables WHERE table_schema=? AND table_name=? AND ENGINE='InnoDB'";
- JdbcTemplate jdbcTemplate = new JdbcTemplate(this.targetDataSource);
+ JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
return jdbcTemplate.queryForObject(sql, new Object[] { shemaName, tableName }, Integer.class) > 0;
}
+
+ /**
+ * Split a comma-separated string into a list
+ *
+ * @param s the string to be split
+ * @return list of the split values
+ */
+ private List<String> stringToList(String s) {
+ if (!StringUtils.isEmpty(s)) {
+ String[] strs = s.split(",");
+ if (strs.length > 0) {
+ return new ArrayList<>(Arrays.asList(strs));
+ }
+ }
+
+ return new ArrayList<>();
+ }
}
diff --git a/dbswitch-data/src/main/java/com/gitee/dbswitch/data/util/JdbcTemplateUtils.java b/dbswitch-data/src/main/java/com/gitee/dbswitch/data/util/JdbcTemplateUtils.java
index 9279b449b8dc947391922d3f4883b36d0a5030c9..8b3756d7f569b3cdeb29e07f048aebf2f4ecf940 100644
--- a/dbswitch-data/src/main/java/com/gitee/dbswitch/data/util/JdbcTemplateUtils.java
+++ b/dbswitch-data/src/main/java/com/gitee/dbswitch/data/util/JdbcTemplateUtils.java
@@ -16,11 +16,13 @@ import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;
-import org.apache.commons.dbcp2.BasicDataSource;
+import javax.sql.DataSource;
+import org.springframework.boot.jdbc.DatabaseDriver;
import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.ConnectionCallback;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.support.JdbcUtils;
+import org.springframework.jdbc.support.MetaDataAccessException;
import com.gitee.dbswitch.common.constant.DatabaseTypeEnum;
/**
@@ -41,25 +43,32 @@ public final class JdbcTemplateUtils {
* @param dataSource 数据源
* @return DatabaseType 数据库类型
*/
- public static DatabaseTypeEnum getDatabaseProduceName(BasicDataSource dataSource) {
- String driverClassName = dataSource.getDriverClassName();
- if (driverClassName.contains("mysql")) {
- return DatabaseTypeEnum.MYSQL;
- } else if (driverClassName.contains("mariadb")) {
- return DatabaseTypeEnum.MYSQL;
- } else if (driverClassName.contains("oracle")) {
- return DatabaseTypeEnum.ORACLE;
- } else if (driverClassName.contains("postgresql")) {
- return DatabaseTypeEnum.POSTGRESQL;
- } else if (driverClassName.contains("Greenplum")) {
- return DatabaseTypeEnum.GREENPLUM;
- } else if (driverClassName.contains("sqlserver")) {
- return DatabaseTypeEnum.SQLSERVER;
- } else if (driverClassName.contains("db2")) {
- return DatabaseTypeEnum.DB2;
- } else {
- throw new RuntimeException(
- String.format("Unsupport database type by driver class name [%s]", driverClassName));
+ public static DatabaseTypeEnum getDatabaseProduceName(DataSource dataSource) {
+ try {
+ String productName = JdbcUtils.commonDatabaseName(
+ JdbcUtils.extractDatabaseMetaData(dataSource, "getDatabaseProductName").toString());
+ if (productName.equalsIgnoreCase("Greenplum")) {
+ return DatabaseTypeEnum.GREENPLUM;
+ } else if (productName.equalsIgnoreCase("Microsoft SQL Server")) {
+ return DatabaseTypeEnum.SQLSERVER;
+ }
+
+ DatabaseDriver databaseDriver = DatabaseDriver.fromProductName(productName);
+ if (DatabaseDriver.MARIADB == databaseDriver) {
+ return DatabaseTypeEnum.MARIADB;
+ } else if (DatabaseDriver.MYSQL == databaseDriver) {
+ return DatabaseTypeEnum.MYSQL;
+ } else if (DatabaseDriver.ORACLE == databaseDriver) {
+ return DatabaseTypeEnum.ORACLE;
+ } else if (DatabaseDriver.POSTGRESQL == databaseDriver) {
+ return DatabaseTypeEnum.POSTGRESQL;
+ } else if (DatabaseDriver.DB2 == databaseDriver) {
+ return DatabaseTypeEnum.DB2;
+ } else {
+ throw new RuntimeException(String.format("Unsupported database type for product name [%s]", productName));
+ }
+ } catch (MetaDataAccessException ex) {
+ throw new IllegalStateException("Unable to detect database type", ex);
}
}
diff --git a/dbswitch-data/src/main/resources/config.properties b/dbswitch-data/src/main/resources/config.properties
index 5b97584e4caa479f228bbc712e51a6a46282a3f9..b5f4f212ad4343dc0a00891ff37f5de5ce78e1b2 100644
--- a/dbswitch-data/src/main/resources/config.properties
+++ b/dbswitch-data/src/main/resources/config.properties
@@ -1,36 +1,34 @@
# source database connection information
-## support MySQL/Oracle/SQLServer/PostgreSQL/Greenplum
-source.datasource.url= jdbc:oracle:thin:@172.17.2.58:1521:ORCL
-source.datasource.driver-class-name= oracle.jdbc.driver.OracleDriver
-source.datasource.username= tang
-source.datasource.password= tang
-
-# target database connection information
-## support MySQL/Oracle/SQLServer/PostgreSQL/Greenplum
-target.datasource.url= jdbc:postgresql://172.17.2.10:5432/study
-target.datasource.driver-class-name= org.postgresql.Driver
-target.datasource.username= tang
-target.datasource.password= tang
-
+## support MySQL/MariaDB/DB2/Oracle/SQLServer/PostgreSQL/Greenplum
+dbswitch.source[0].url= jdbc:oracle:thin:@172.17.20.58:1521:ORCL
+dbswitch.source[0].driver-class-name= oracle.jdbc.driver.OracleDriver
+dbswitch.source[0].username= tang
+dbswitch.source[0].password= tang
# source database configuration parameters
## fetch size for query source database
-source.datasource-fetch.size=10000
+dbswitch.source[0].fetch-size=10000
## schema name for query source database
-source.datasource-source.schema=TANG
+dbswitch.source[0].source-schema=TANG
## table name include from table lists
-source.datasource-source.includes=
+dbswitch.source[0].source-includes=
## table name exclude from table lists
-source.datasource-source.excludes=
+dbswitch.source[0].source-excludes=
+# target database connection information
+## support Oracle/PostgreSQL/Greenplum
+dbswitch.target.url= jdbc:postgresql://172.17.20.44:5432/study
+dbswitch.target.driver-class-name= org.postgresql.Driver
+dbswitch.target.username= tang
+dbswitch.target.password= tang
# target database configuration parameters
## schema name for create/insert table data
-target.datasource-target.schema=public
+dbswitch.target.target-schema=public
## whether drop-create table when target table exist
-target.datasource-target.drop=true
+dbswitch.target.target-drop=true
## whether create table support auto increment for primary key field
-target.create-table.auto-increment=false
+dbswitch.target.create-table-auto-increment=false
## whether use insert engine to write data for target database
## Only useful for PostgreSQL/Greenplum databases
-target.writer-engine.insert=false
+dbswitch.target.writer-engine-insert=false
## whether use change data synchronize to target database table
-target.change-data-synch=true
+dbswitch.target.change-data-synch=true
diff --git a/dbswitch-dbchange/pom.xml b/dbswitch-dbchange/pom.xml
index d788159910869488560feb40c59b1a030ca6fd0f..2c4bc2655d8e29678a5bf174221f26a755b78f01 100644
--- a/dbswitch-dbchange/pom.xml
+++ b/dbswitch-dbchange/pom.xml
@@ -5,7 +5,7 @@
com.gitee
dbswitch
- 1.5.0
+ 1.5.1
dbswitch-dbchange
diff --git a/dbswitch-dbcommon/pom.xml b/dbswitch-dbcommon/pom.xml
index f3fa11096bf6dbf28f144ba6737c4a7ed8977cfe..3936a1da87d83a2f7271591df1a5b5656fadc54a 100644
--- a/dbswitch-dbcommon/pom.xml
+++ b/dbswitch-dbcommon/pom.xml
@@ -3,7 +3,7 @@
com.gitee
dbswitch
- 1.5.0
+ 1.5.1
dbswitch-dbcommon
diff --git a/dbswitch-dbsynch/pom.xml b/dbswitch-dbsynch/pom.xml
index 59e086002399572df4f977570842c13de1404607..66ecd11414310f1528d41ea54b8ba5dffb7209b7 100644
--- a/dbswitch-dbsynch/pom.xml
+++ b/dbswitch-dbsynch/pom.xml
@@ -3,7 +3,7 @@
com.gitee
dbswitch
- 1.5.0
+ 1.5.1
dbswitch-dbsynch
diff --git a/dbswitch-dbwriter/pom.xml b/dbswitch-dbwriter/pom.xml
index bd02811498b092958133353fa109f5c01b8c30fc..5b65d9edb14433378ac735794a878387e0701352 100644
--- a/dbswitch-dbwriter/pom.xml
+++ b/dbswitch-dbwriter/pom.xml
@@ -5,7 +5,7 @@
com.gitee
dbswitch
- 1.5.0
+ 1.5.1
dbswitch-dbwriter
diff --git a/dbswitch-pgwriter/pom.xml b/dbswitch-pgwriter/pom.xml
index b2f819dc4a29f353f2fa676e06930ab10a66e9d9..00f99a31f1c5c6f1f26bc67ca91b4ca2de33b89c 100644
--- a/dbswitch-pgwriter/pom.xml
+++ b/dbswitch-pgwriter/pom.xml
@@ -5,7 +5,7 @@
com.gitee
dbswitch
- 1.5.0
+ 1.5.1
dbswitch-pgwriter
diff --git a/dbswitch-sql/pom.xml b/dbswitch-sql/pom.xml
index d45419a22219cabb3e9089e8644f55dcdf03a6a7..9103616cafa30235d7eb2fd8571b47be8f5420f3 100644
--- a/dbswitch-sql/pom.xml
+++ b/dbswitch-sql/pom.xml
@@ -5,7 +5,7 @@
com.gitee
dbswitch
- 1.5.0
+ 1.5.1
dbswitch-sql
diff --git a/dbswitch-webapi/pom.xml b/dbswitch-webapi/pom.xml
index e8e5f74ddf906a796665394a261a60b719bd63e2..77d2c66bc437baee12a445fadbf019c8ab2d3e3f 100644
--- a/dbswitch-webapi/pom.xml
+++ b/dbswitch-webapi/pom.xml
@@ -5,7 +5,7 @@
com.gitee
dbswitch
- 1.5.0
+ 1.5.1
dbswitch-webapi
diff --git a/package-tool/pom.xml b/package-tool/pom.xml
index 19e8b441da41ef607cf908d041148e79a4f565ee..85cb8815497708bbd7e422b86c5dcd43c1e5edf2 100644
--- a/package-tool/pom.xml
+++ b/package-tool/pom.xml
@@ -5,7 +5,7 @@
com.gitee
dbswitch
- 1.5.0
+ 1.5.1
package-tool
diff --git a/pom.xml b/pom.xml
index 193e7fc71d5796aa2180de76ba4631da8609a2fc..d0ebc5009cb76d9bda4e866184dcf816161b6994 100644
--- a/pom.xml
+++ b/pom.xml
@@ -21,7 +21,7 @@
com.gitee
dbswitch
- 1.5.0
+ 1.5.1
pom
dbswitch
database switch project
diff --git a/version.cmd b/version.cmd
index a5ee4b018a70c4ce76dbb0837c728ebe3f4562c0..a3325f930bc314b7860bc79f9adebac1243aa655 100644
--- a/version.cmd
+++ b/version.cmd
@@ -1,6 +1,6 @@
@echo off
-set APP_VERSION=1.5.0
+set APP_VERSION=1.5.1
echo "Clean Project ..."
call mvn clean -f pom.xml