openGauss / debezium
Content risk flag: this task is flagged as containing sensitive information such as code security bugs or privacy leaks, and is not accessible to members outside the repository.
【Test type: tool functionality】【Test version: 7.0.0rc1】Incremental migration from a pg database to an openGauss database fails for enum and array types

Status: Accepted
Issue: #IBNOQU
Type: Defect
Reporter: lihongji
Created: 2025-02-20 17:10
【Title】: Incremental migration from a pg database to an openGauss database fails for enum and array types
【Test type: tool functionality】【Test version: 7.0.0rc1】
Problem description: during incremental migration from a pg database to an openGauss database, enum and array types fail to migrate.

【OS and hardware info】(query commands: cat /etc/system-release, uname -a):
openEuler release 20.03 (LTS)
Linux openGauss85 4.19.90-2003.4.0.0036.oe1.aarch64 #1 SMP Mon Mar 23 19:06:43 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux

【Test environment】(standalone / 1 primary x standby x cascaded standby): standalone

【Feature under test】: pg incremental migration

【Test type】: functional test

【Database version】(query command: gaussdb -V):
gaussdb (openGauss 7.0.0-RC1 build 24e89e20) compiled at 2025-02-12 18:26:05 commit 0 last mr

【Precondition】: the incremental migration environment is set up successfully.

【Steps to reproduce】(please list detailed steps):

1. Create the tables in both pg and openGauss:

```
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');

CREATE TABLE person (
    name text,
    current_mood mood
);

CREATE TYPE happiness AS ENUM ('happy', 'very happy', 'ecstatic');

CREATE TABLE holidays (
    num_weeks integer,
    happiness happiness
);

CREATE TABLE array_test (
    id SERIAL,
    t1 INT[],
    t2 TEXT[],
    t3 VARCHAR(10)[],
    t4 JSONB[],
    t5 INT[][],
    t6 integer[],
    t7 text[][],
    t8 integer[3][3]
);
```

2. Insert data on the pg side:

```
INSERT INTO holidays(num_weeks,happiness) VALUES (4, 'happy');
INSERT INTO holidays(num_weeks,happiness) VALUES (6, 'very happy');
INSERT INTO holidays(num_weeks,happiness) VALUES (8, 'ecstatic');
INSERT INTO holidays(num_weeks,happiness) VALUES (2, 'sad');

INSERT INTO person VALUES ('Moe', 'happy');
INSERT INTO person VALUES ('Larry', 'sad');
INSERT INTO person VALUES ('Curly', 'ok');

INSERT INTO array_test (t1, t2, t3, t4, t5, t6, t7, t8)
VALUES (
    ARRAY[-2147483648, 2147483647],
    ARRAY['', 'max_length_string'],
    ARRAY['1234567890', ''],
    ARRAY['null'::jsonb, '{"key":[]}'],
    ARRAY[[1],[2]],
    '{}',
    ARRAY[['']],
    '{{1,2,3},{4,5,6},{7,8,9}}'
);

INSERT INTO array_test (t1, t2, t6)
VALUES (
    '{1, 2, 3}',
    ARRAY['特殊"字符', 'line\nbreak'],
    ARRAY[1, 2, 3]::integer[]
);

INSERT INTO array_test (t4)
VALUES (
    ARRAY[
        '{"id":1, "tags":["A","B"]}'::jsonb,
        '[true, {"score":95.5}]'::jsonb
    ]
);

INSERT INTO array_test (t8)
VALUES
    ('{{1,2,3}, {4,5,6}, {7,8,9}}'),
    (ARRAY[[1,2,3], [4,5,6], [7,8,9]]);

INSERT INTO array_test (t8)
VALUES
    (ARRAY[1, 2, 3]),
    (ARRAY[ARRAY[NULL::INTEGER, NULL]]),
    (ARRAY[NULL, 3, 5]::INTEGER[]);
```

3. Check on the openGauss side that the incremental migration succeeded.

【Expected output】:
1. Tables created successfully.
2. Data written successfully.
3. Incremental migration verified successful on the openGauss side.

【Actual output】:
1. Tables created successfully.
2. Data written successfully.
3. Incremental migration failed:

```
:706)
[2025-02-20 19:56:01,788] INFO create statement info success (io.debezium.sink.worker.ReplayWorkThread:82)
[2025-02-20 19:56:01,803] ERROR convert occurred exception, columnName: current_mood, columnType: USER-DEFINED, value: sad (io.debezium.connector.postgresql.sink.utils.DebeziumValueConverters:140)
org.apache.kafka.connect.errors.DataException: Field 'current_mood' is not of type BYTES
    at org.apache.kafka.connect.data.Struct.getCheckType(Struct.java:263)
    at org.apache.kafka.connect.data.Struct.getBytes(Struct.java:165)
    at io.debezium.connector.postgresql.sink.utils.DebeziumValueConverters.convertByte(DebeziumValueConverters.java:337)
    at io.debezium.connector.postgresql.sink.utils.DebeziumValueConverters.access$000(DebeziumValueConverters.java:64)
    at io.debezium.connector.postgresql.sink.utils.DebeziumValueConverters$1.lambda$new$30(DebeziumValueConverters.java:112)
    at io.debezium.connector.postgresql.sink.utils.DebeziumValueConverters.getValue(DebeziumValueConverters.java:128)
    at io.debezium.connector.postgresql.sink.utils.SqlTools.getValueList(SqlTools.java:332)
    at io.debezium.connector.postgresql.sink.utils.SqlTools.getInsertSql(SqlTools.java:259)
    at io.debezium.connector.postgresql.sink.worker.PostgresDataReplayWorkThread.constructSql(PostgresDataReplayWorkThread.java:213)
    at io.debezium.connector.postgresql.sink.worker.PostgresDataReplayWorkThread.run(PostgresDataReplayWorkThread.java:129)
[2025-02-20 19:56:01,805] ERROR DataException occurred because of invalid field, possible reason is tables of OpenGauss and Postgresql have same table name but different table structure. (io.debezium.connector.postgresql.sink.worker.PostgresDataReplayWorkThread:154)
org.apache.kafka.connect.errors.DataException: org.apache.kafka.connect.errors.DataException: Field 'current_mood' is not of type BYTES
    at io.debezium.connector.postgresql.sink.utils.DebeziumValueConverters.getValue(DebeziumValueConverters.java:142)
    at io.debezium.connector.postgresql.sink.utils.SqlTools.getValueList(SqlTools.java:332)
    at io.debezium.connector.postgresql.sink.utils.SqlTools.getInsertSql(SqlTools.java:259)
    at io.debezium.connector.postgresql.sink.worker.PostgresDataReplayWorkThread.constructSql(PostgresDataReplayWorkThread.java:213)
    at io.debezium.connector.postgresql.sink.worker.PostgresDataReplayWorkThread.run(PostgresDataReplayWorkThread.java:129)
Caused by: org.apache.kafka.connect.errors.DataException: Field 'current_mood' is not of type BYTES
    at org.apache.kafka.connect.data.Struct.getCheckType(Struct.java:263)
    at org.apache.kafka.connect.data.Struct.getBytes(Struct.java:165)
    at io.debezium.connector.postgresql.sink.utils.DebeziumValueConverters.convertByte(DebeziumValueConverters.java:337)
    at io.debezium.connector.postgresql.sink.utils.DebeziumValueConverters.access$000(DebeziumValueConverters.java:64)
    at io.debezium.connector.postgresql.sink.utils.DebeziumValueConverters$1.lambda$new$30(DebeziumValueConverters.java:112)
    at io.debezium.connector.postgresql.sink.utils.DebeziumValueConverters.getValue(DebeziumValueConverters.java:128)
    ... 4 more
[2025-02-20 19:56:02,705] INFO incremental migration have replayed 4 data, and current time is 2025-02-20 19:56:02.705, and current speed is 1 (io.debezium.co
```

【Root cause analysis】:
1. Root cause of the problem
2. How the problem was diagnosed
3. What other causes could produce a similar symptom
4. Whether a temporary workaround exists
5. Proposed fix
6. Estimated time to fix

【Log information】(please attach log files, screenshots, coredump info):

【Test code】:
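The stack trace shows the sink's value converter routing the USER-DEFINED enum column through `DebeziumValueConverters.convertByte`, which calls `Struct.getBytes` on a field whose Connect schema carries the enum label as a string, so the type check in `Struct.getCheckType` throws. The shape of that failure can be illustrated with a minimal stand-in for the Connect struct type check (a simplified hypothetical model for illustration, not the actual Kafka Connect or Debezium code):

```java
import java.util.Map;

// Simplified stand-in for the Kafka Connect Struct type check: a field
// declared as STRING holds the enum label "sad"; reading it through the
// BYTES accessor fails, matching the shape of the reported DataException.
public class EnumFieldDemo {
    enum SchemaType { STRING, BYTES }

    static class MiniStruct {
        private final Map<String, SchemaType> schema;
        private final Map<String, Object> values;

        MiniStruct(Map<String, SchemaType> schema, Map<String, Object> values) {
            this.schema = schema;
            this.values = values;
        }

        // Mirrors the idea of Struct.getCheckType: the declared schema type
        // must match the accessor, otherwise an exception is thrown.
        byte[] getBytes(String field) {
            if (schema.get(field) != SchemaType.BYTES) {
                throw new IllegalStateException(
                        "Field '" + field + "' is not of type BYTES");
            }
            return (byte[]) values.get(field);
        }
    }

    public static void main(String[] args) {
        // The pg enum value reaches the sink as a STRING-typed field.
        MiniStruct row = new MiniStruct(
                Map.of("current_mood", SchemaType.STRING),
                Map.of("current_mood", "sad"));
        try {
            row.getBytes("current_mood"); // the converter's byte path
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This suggests the converter is selecting the byte path based on the column's reported type (USER-DEFINED) rather than the actual Connect schema of the field, though the root cause above is left for the developers to confirm.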
Comments (4)
Assignees/collaborators: 田宾 (tianbin815), 申正 (shenzheng4)
Labels: sig/tools
Participants (4)

Repository: https://gitee.com/opengauss/debezium.git (git@gitee.com:opengauss/debezium.git)