4.8 How to Use OBDUMPER / OBLOADER to Export and Import OceanBase Data

Exporting an OceanBase Database with OBDUMPER

As with exporting a MySQL database, it is recommended to export the schema and the data separately when exporting an OceanBase database.

The command usage is shown below; you must supply the username and password of the business tenant. One special point: to export table schemas you must also supply a sys tenant account, either the root user and its password or the proxyro user and its password.

For security reasons, ODP forbids the proxyro user from logging in to the OceanBase cluster by default, so test the connectivity first. Because the root user's privileges are very broad, the proxyro user is recommended for retrieving table metadata.

bin/obdumper -h <host IP> -P <port> -u <user> -p <password> --sys-user <root or proxyro user under the sys tenant> --sys-password <password of that sys tenant user> -c <cluster> -t <tenant> -D <schema/database name> [--ddl] [--csv|--sql] [--all|--table 'table name'] -f <data file or directory>

Test proxyro connectivity

The proxyro password is specified in the OBD cluster deployment configuration file.
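
If you do not remember that password, a quick way to find it is to search the deployment configuration. The sketch below assumes an OBD-managed deployment; the config path (typically ~/.obd/cluster/<deploy name>/config.yaml) and the exact key name are assumptions and may differ in your environment.

# Minimal sketch: locate the proxyro password in the OBD cluster config.
# The path and the key name ("proxyro_password") are assumptions; adjust as needed.
grep -n "proxyro" ~/.obd/cluster/obdemo/config.yaml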

[root@obce00 ~]# mysql -h 172.xx.xxx.52 -P 2883 -u proxyro@sys#obdemo -puY****zx -c -A
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading authorization packet', system error: 11

The password above is not wrong; the login fails because ODP forbids the proxyro user from logging in by default. Change the ODP parameter so that the proxyro login check is skipped.

mysql -h 172.xx.xxx.52 -u root@proxysys -P 2883 -p0M****tm
alter proxyconfig set skip_proxyro_check=true;
MySQL [(none)]> show proxyconfig like '%skip_proxyro_check%';
+--------------------+-------+--------------------------------------------------+-------------+---------------+
| name               | value | info                                             | need_reboot | visible_level |
+--------------------+-------+--------------------------------------------------+-------------+---------------+
| skip_proxyro_check | True  | used for proxro@sys, if set false, access denied | false       | SYS           |
+--------------------+-------+--------------------------------------------------+-------------+---------------+
1 row in set (0.010 sec)

Test proxyro connectivity again.

[root@obce00 ~]# mysql -h 172.xx.xxx.52 -P 2883 -u proxyro@sys#obdemo -puY****zx -c -A -Ns -e "show databases;"
oceanbase
information_schema
[root@obce00 ~]#

Export table schemas only

The following exports the tables of database test under the business tenant obmysql.

[root@obce00 ob-loader-dumper-2.1.13-SNAPSHOT]# bin/obdumper -h 172.xx.xxx.52 -P 2883 -u dumper -p ***** --sys-user=proxyro --sys-password=uY****zx -c obdemo -t obmysql -D test --ddl --all -f /tmp/obdumper
2021-09-29 21:14:55 [INFO] Parsed args:
-h[--host] 172.xx.xxx.52
-P[--port] 2883
-u[--user] dumper
-p[--password] ******
[--sys-user] proxyro
[--sys-password] ******
-c[--cluster] obdemo
-t[--tenant] obmysql
-D[--database] test
[--ddl]
[--all]
-f[--file-path] /tmp/obdumper

obproxy Druid LogFactory, userDefinedLogType=null, logInfo=public com.alipay.oceanbase.obproxy.druid.support.logging.Log4j2Impl(java.lang.String)
2021-09-29 21:14:55 [INFO] {dataSource-1} inited
2021-09-29 21:14:55 [INFO] {dataSource-2} inited
2021-09-29 21:14:55 [INFO] The manifest file: "/tmp/obdumper/data/MANIFEST.bin" has been saved
2021-09-29 21:14:57 [WARN] No views are exist in the schema: "test"
2021-09-29 21:14:57 [INFO] Generate 1 dump tasks finished
2021-09-29 21:14:57 [INFO] Start 1 schema dump threads finished
2021-09-29 21:14:57 [INFO] Build direct com.alibaba.druid.pool.DruidDataSource finished
2021-09-29 21:14:57 [INFO] Build proxyro com.alibaba.druid.pool.DruidDataSource finished
2021-09-29 21:14:57 [INFO] Return the latest compatible version: 3.1.0 -> 2.2.71
2021-09-29 21:14:57 [INFO] DbType: OBMYSQL Version: 3.1.0
2021-09-29 21:14:57 [INFO] ObMySql(3.1.0) is older than 2.2.71 ? false
2021-09-29 21:14:57 [INFO] Load meta/mysql/mysql56.xml, meta/ob/obmysql14x.xml, meta/ob/obmysql22x.xml, meta/ob/obmysql2271.xml successed
2021-09-29 21:14:58 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquireDependencies()
2021-09-29 21:14:58 [INFO] Query 0 dependencies elapsed 846.9 ms
2021-09-29 21:15:01 [INFO]
Finished Tasks: 0       Running Tasks: 1        Progress: 0.00%
2021-09-29 21:15:04 [INFO] Query table: "t1" attr finished. Remain: 0
2021-09-29 21:15:04 [INFO] Query 1 tables elapsed 5.775 s
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquireTablespaceMapping()
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquireSequenceMapping()
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquireMaterializedViewMapping()
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquireAliasMapping()
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquireSynonymMapping()
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquireTypeMapping()
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquireTypeBodyMapping()
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquirePackageMapping()
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquirePackageBodyMapping()
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquireTriggerMapping()
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquireProcedureMapping()
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquireFunctionMapping()
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquireDatabaseLinkMapping()
2021-09-29 21:15:04 [WARN] c.o.o.d.m.o.ObMySql14xDatabase does't implement acquireDependencies()
2021-09-29 21:15:04 [INFO] Dump [TABLE] t1 to "/tmp/obdumper/data/test/TABLE/t1-schema.sql" finished
2021-09-29 21:15:04 [INFO] No.1 It has dumped 1 tables finished. Remain: 0
2021-09-29 21:15:04 [INFO] Total dumped 1 tables finished
2021-09-29 21:15:04 [INFO] Dump the ddl of schema: "test" finished
2021-09-29 21:15:04 [INFO] {dataSource-1} closing ...
2021-09-29 21:15:04 [INFO] {dataSource-1} closed
2021-09-29 21:15:04 [INFO] Close count: 7
2021-09-29 21:15:04 [INFO] {dataSource-2} closing ...
2021-09-29 21:15:04 [INFO] {dataSource-2} closed
2021-09-29 21:15:04 [INFO] Close count: 6
2021-09-29 21:15:04 [INFO] Shutdown task context finished
2021-09-29 21:15:04 [INFO]
Finished Tasks: 1       Running Tasks: 0        Progress: 100.00%
2021-09-29 21:15:04 [INFO]

All Dump Tasks Finished:

----------------------------------------------------------------------------------------------------------------------------
No.#        |        Type        |             Name             |            Count             |       Status
----------------------------------------------------------------------------------------------------------------------------
1          |       TABLE        |              t1              |              1               |      SUCCESS
----------------------------------------------------------------------------------------------------------------------------

Total Count: 1          End Time: 2021-09-29 21:15:04


2021-09-29 21:15:04 [INFO] Dump schema finished. Total Elapsed: 7.051 s
2021-09-29 21:15:04 [INFO] System exit 0
[root@obce00 ob-loader-dumper-2.1.13-SNAPSHOT]#

Get familiar with the directory structure that OBDUMPER produces so you can quickly locate the exported scripts or the logs:

[root@obce00 ob-loader-dumper-2.1.13-SNAPSHOT]# tree /tmp/obdumper/
/tmp/obdumper/
├── CHECKPOINT.bin
├── data
│   ├── MANIFEST.bin
│   └── test
│       └── TABLE
│           └── t1-schema.sql
└── logs
    ├── ob-loader-dumper.error
    ├── ob-loader-dumper.info
    └── ob-loader-dumper.warn

4 directories, 6 files
[root@obce00 ob-loader-dumper-2.1.13-SNAPSHOT]#

If the export fails, check the logs under the logs/ directory.
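
For example, after a failed run you can inspect the most recent entries of the error and warning logs listed in the tree above:

# Show the latest entries of the OBDUMPER/OBLOADER error and warning logs
tail -n 50 /tmp/obdumper/logs/ob-loader-dumper.error
tail -n 50 /tmp/obdumper/logs/ob-loader-dumper.warn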

Export data only

[root@obce00 ob-loader-dumper-2.1.13-SNAPSHOT]# bin/obdumper -h 172.xx.xxx.52 -P 2883 -u dumper -p ****** --sys-user=proxyro --sys-password=uY****zx -c obdemo -t obmysql -D test --csv --all -f /tmp/obdumper
2021-09-29 21:16:54 [INFO] Parsed args:
-h[--host] 172.xx.xxx.52
-P[--port] 2883
-u[--user] dumper
-p[--password] ******
[--sys-user] proxyro
[--sys-password] ******
-c[--cluster] obdemo
-t[--tenant] obmysql
-D[--database] test
[--csv]
[--all]
-f[--file-path] /tmp/obdumper

obproxy Druid LogFactory, userDefinedLogType=null, logInfo=public com.alipay.oceanbase.obproxy.druid.support.logging.Log4j2Impl(java.lang.String)
2021-09-29 21:16:54 [INFO] {dataSource-1} inited
2021-09-29 21:16:54 [INFO] {dataSource-2} inited
2021-09-29 21:16:54 [INFO] The manifest file: "/tmp/obdumper/data/MANIFEST.bin" has been saved
2021-09-29 21:16:55 [INFO] Generate 1 csv dump tasks for non-partitioned table(without macro): t1. Remain: 0
2021-09-29 21:16:55 [INFO] Generate 1 dump tasks finished
2021-09-29 21:16:55 [INFO] Start 16 record dump threads finished
2021-09-29 21:16:55 [INFO] Dump 8 rows test.t1 to "/tmp/obdumper/data/test/TABLE/t1.csv" finished
2021-09-29 21:16:56 [INFO] {dataSource-1} closing ...
2021-09-29 21:16:56 [INFO] {dataSource-1} closed
2021-09-29 21:16:56 [INFO] Close count: 1
2021-09-29 21:16:56 [INFO] {dataSource-2} closing ...
2021-09-29 21:16:56 [INFO] {dataSource-2} closed
2021-09-29 21:16:56 [INFO] Close count: 1
2021-09-29 21:16:56 [INFO] Shutdown task context finished
2021-09-29 21:16:56 [INFO]
Finished Tasks: 1       Running Tasks: 0        Progress: 100.00%
2021-09-29 21:16:56 [INFO]

All Dump Tasks Finished:

----------------------------------------------------------------------------------------------------------------------------
        No.#        |        Type        |             Name             |            Count             |       Status
----------------------------------------------------------------------------------------------------------------------------
         1          |       TABLE        |              t1              |              8               |      SUCCESS
----------------------------------------------------------------------------------------------------------------------------

Total Count: 8          End Time: 2021-09-29 21:16:56


2021-09-29 21:16:56 [INFO] You can't merge the data files for 1 tables. --file-name is missing
2021-09-29 21:16:56 [INFO] Dump record finished. Total Elapsed: 1.548 s
2021-09-29 21:16:56 [INFO] System exit 0
[root@obce00 ob-loader-dumper-2.1.13-SNAPSHOT]#

The directory structure is as follows:

[root@obce00 ob-loader-dumper-2.1.13-SNAPSHOT]# tree /tmp/obdumper
/tmp/obdumper
├── CHECKPOINT.bin
├── data
│   ├── MANIFEST.bin
│   └── test
│       └── TABLE
│           ├── t1.0.csv
│           └── t1-schema.sql
└── logs
    ├── ob-loader-dumper.error
    ├── ob-loader-dumper.info
    └── ob-loader-dumper.warn

4 directories, 7 files
[root@obce00 ob-loader-dumper-2.1.13-SNAPSHOT]#

The exported data files (CSV format) are placed in the same directory as the schema files.

View the exported files:

[root@obce00 ob-loader-dumper-2.1.13-SNAPSHOT]# cat /tmp/obdumper/data/test/TABLE/t1-schema.sql
create table if not exists `t1` (
        `id` bigint(20) not null auto_increment,
        `c1` timestamp not null default CURRENT_TIMESTAMP,
        primary key (`id`)
)
default charset=utf8mb4
default collate=utf8mb4_general_ci;
[root@obce00 ob-loader-dumper-2.1.13-SNAPSHOT]# cat /tmp/obdumper/data/test/TABLE/t1.0.csv
'id','c1'
1,'2021-09-29 21:16:03'
2,'2021-09-29 21:16:05'
3,'2021-09-29 21:16:05'
4,'2021-09-29 21:16:06'
5,'2021-09-29 21:16:06'
6,'2021-09-29 21:16:18'
7,'2021-09-29 21:16:18'
8,'2021-09-29 21:16:19'

Importing into an OceanBase Database with OBLOADER

Likewise, when importing into an OceanBase database it is recommended to import the table schemas and the data separately.

OBLOADER command usage:

bin/obloader -h <host IP> -P <port> -u <user> -p <password> --sys-user <root or proxyro user under the sys tenant> --sys-password <password of that sys tenant user> -c <cluster> -t <tenant> -D <schema/database name> [--ddl] [--csv|--sql] [--all|--table 'table name'] -f <data file or directory>

As above, OBLOADER needs to retrieve table schema metadata during import, so it also needs to connect to the sys tenant. The proxyro user is recommended.

Import table schemas

The following imports the export files produced above into database test2 under tenant obmysql.

[root@obce00 ob-loader-dumper-2.1.13-SNAPSHOT]# bin/obloader -h 172.xx.xxx.52 -P 2883 -u loader -p ****** --sys-user=proxyro --sys-password=uY****zx -c obdemo -t obmysql -D test2 --ddl --all -f /tmp/obdumper
2021-09-29 21:26:05 [INFO] Parsed args:
-h[--host] 172.xx.xxx.52
-P[--port] 2883
-u[--user] loader
-p[--password] ******
[--sys-user] proxyro
[--sys-password] ******
-c[--cluster] obdemo
-t[--tenant] obmysql
-D[--database] test2
[--ddl]
[--all]
-f[--file-path] /tmp/obdumper

2021-09-29 21:26:05 [INFO] No control files were defined in the path: "/tmp/obdumper"
2021-09-29 21:26:05 [INFO] No control files were defined in the path: "/tmp/obdumper"
2021-09-29 21:26:05 [INFO] No mapping files were defined in the path: "/tmp/obdumper"
obproxy Druid LogFactory, userDefinedLogType=null, logInfo=public com.alipay.oceanbase.obproxy.druid.support.logging.Log4j2Impl(java.lang.String)
2021-09-29 21:26:05 [INFO] {dataSource-1} inited
2021-09-29 21:26:05 [INFO] {dataSource-2} inited
2021-09-29 21:26:05 [INFO] The manifest file: "/tmp/obdumper/data/MANIFEST.bin" has been saved
2021-09-29 21:26:05 [INFO] Init writer thread pool finished
2021-09-29 21:26:05 [INFO] No.1 sql of the file: "t1-schema.sql" exec success. Elapsed: 114.4 ms
2021-09-29 21:26:05 [INFO] Load file: "t1-schema.sql" finished
2021-09-29 21:26:06 [INFO] {dataSource-2} closing ...
2021-09-29 21:26:06 [INFO] {dataSource-2} closed
2021-09-29 21:26:06 [INFO] Close count: 0
2021-09-29 21:26:06 [INFO] {dataSource-1} closing ...
2021-09-29 21:26:06 [INFO] {dataSource-1} closed
2021-09-29 21:26:06 [INFO] Close count: 1
2021-09-29 21:26:06 [INFO] Shutdown task context finished
2021-09-29 21:26:06 [INFO]
Finished Tasks: 1       Running Tasks: 0        Progress: 100.00%
2021-09-29 21:26:06 [INFO]

All Load Tasks Finished:

----------------------------------------------------------------------------------------------------------------------------
        No.#        |        Type        |             Name             |            Count             |       Status
----------------------------------------------------------------------------------------------------------------------------
         1          |       TABLE        |              t1              |              1               |      SUCCESS
----------------------------------------------------------------------------------------------------------------------------

Total Count: 1          End Time: 2021-09-29 21:26:06


2021-09-29 21:26:06 [INFO] Load schema finished. Total Elapsed: 1.070 s
2021-09-29 21:26:06 [INFO] System exit 0
[root@obce00 ob-loader-dumper-2.1.13-SNAPSHOT]#

Import data

[root@obce00 ob-loader-dumper-2.1.13-SNAPSHOT]# bin/obloader -h 172.xx.xxx.52 -P 2883 -u loader -p ****** --sys-user=proxyro --sys-password=uY****zx -c obdemo -t obmysql -D test2 --csv --all -f /tmp/obdumper
2021-09-29 21:27:53 [INFO] Parsed args:
-h[--host] 172.xx.xxx.52
-P[--port] 2883
-u[--user] loader
-p[--password] ******
[--sys-user] proxyro
[--sys-password] ******
-c[--cluster] obdemo
-t[--tenant] obmysql
-D[--database] test2
[--csv]
[--all]
-f[--file-path] /tmp/obdumper

2021-09-29 21:27:53 [INFO] No control files were defined in the path: "/tmp/obdumper"
2021-09-29 21:27:53 [INFO] No control files were defined in the path: "/tmp/obdumper"
2021-09-29 21:27:53 [INFO] No mapping files were defined in the path: "/tmp/obdumper"
obproxy Druid LogFactory, userDefinedLogType=null, logInfo=public com.alipay.oceanbase.obproxy.druid.support.logging.Log4j2Impl(java.lang.String)
2021-09-29 21:27:53 [INFO] {dataSource-1} inited
2021-09-29 21:27:53 [INFO] {dataSource-2} inited
2021-09-29 21:27:53 [INFO] The manifest file: "/tmp/obdumper/data/MANIFEST.bin" has been saved
2021-09-29 21:27:54 [WARN] File: "/tmp/obdumper/data/MANIFEST.bin" is unmatched on the suffix[.csv], ignore it
2021-09-29 21:27:54 [WARN] File: "/tmp/obdumper/data/test/TABLE/t1-schema.sql" is unmatched on the suffix[.csv], ignore it
2021-09-29 21:27:54 [INFO] Binding table: "t1" to the file: "/tmp/obdumper/data/test/TABLE/t1.0.csv" finished
2021-09-29 21:27:54 [INFO] File: "/tmp/obdumper/data/test/TABLE/t1.0.csv" has not been splitted. 202 < 67108864
2021-09-29 21:27:54 [INFO] Splitted 1 csv subfiles by 64.0 MB. Elapsed: 10.49 ms
2021-09-29 21:27:54 [INFO] Generate 1 subfiles finished
2021-09-29 21:27:55 [INFO] Query table entry and primary key for table: t1 finished. Remain: 0
2021-09-29 21:27:55 [INFO] Query the leader location for the table: "t1". Remain: 0
2021-09-29 21:27:55 [INFO] Calculate leader: 172.xx.xxx.52:2881 of table: "t1", part: 0. Remain: 0
2021-09-29 21:27:55 [INFO] Waiting to refresh observer load status ......
2021-09-29 21:27:55 [INFO] Refresh the observer load status success. Table: "t1". Remain: 0
2021-09-29 21:27:55 [INFO] Refresh observer load status finished. Elapsed: 94.30 ms
2021-09-29 21:27:55 [INFO] Create 16384 slots for ring buffer finished. [172.xx.xxx.52:2881]
2021-09-29 21:27:55 [INFO] Start 11 database writer threads finished. [172.xx.xxx.52:2881]
2021-09-29 21:27:55 [INFO] Start 3 csv file reader threads successed
2021-09-29 21:27:55 [INFO] File: "/tmp/obdumper/data/test/TABLE/t1.0.csv" has been parsed finished
2021-09-29 21:27:56 [INFO] Wait for the all the workers to drain of published events then halt the workers
2021-09-29 21:27:56 [INFO] {dataSource-2} closing ...
2021-09-29 21:27:56 [INFO] {dataSource-2} closed
2021-09-29 21:27:56 [INFO] Close count: 5
2021-09-29 21:27:56 [INFO] {dataSource-1} closing ...
2021-09-29 21:27:56 [INFO] {dataSource-1} closed
2021-09-29 21:27:56 [INFO] Close count: 2
2021-09-29 21:27:56 [INFO] Shutdown task context finished
2021-09-29 21:27:56 [INFO]
Finished Tasks: 1       Running Tasks: 0        Progress: 100.00%
2021-09-29 21:27:56 [INFO]

All Load Tasks Finished:

----------------------------------------------------------------------------------------------------------------------------
        No.#        |        Type        |             Name             |            Count             |       Status
----------------------------------------------------------------------------------------------------------------------------
         1          |       TABLE        |              t1              |            8 -> 8            |      SUCCESS
----------------------------------------------------------------------------------------------------------------------------

Total Count: 8          End Time: 2021-09-29 21:27:56


2021-09-29 21:27:56 [INFO] Load record finished. Total Elapsed: 2.140 s
2021-09-29 21:27:56 [INFO] System exit 0

Check the imported table schema and data:

MySQL [test]> show create table test2.t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `c1` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) AUTO_INCREMENT = 1 DEFAULT CHARSET = utf8mb4 ROW_FORMAT = COMPACT COMPRESSION = 'zstd_1.3.8' REPLICA_NUM = 3 BLOCK_SIZE = 16384 USE_BLOOM_FILTER = FALSE TABLET_SIZE = 134217728 PCTFREE = 0
1 row in set (0.012 sec)

MySQL [test]> select * from test2.t1;
+----+---------------------+
| id | c1                  |
+----+---------------------+
|  1 | 2021-09-29 21:16:03 |
|  2 | 2021-09-29 21:16:05 |
|  3 | 2021-09-29 21:16:05 |
|  4 | 2021-09-29 21:16:06 |
|  5 | 2021-09-29 21:16:06 |
|  6 | 2021-09-29 21:16:18 |
|  7 | 2021-09-29 21:16:18 |
|  8 | 2021-09-29 21:16:19 |
+----+---------------------+
8 rows in set (0.015 sec)

Common export and import issues

Reusing the same output directory across multiple exports causes unexpected results when importing

OBDUMPER requires an output directory for each export. If you export different databases or tables several times into the same directory, the export files get mixed together and files with the same name are overwritten. Because OBLOADER determines which tables to import by parsing this directory structure, the subsequent import may then load data you did not intend to import.

Therefore, if you need to export multiple times, use a different output directory for each export, or empty the output directory before exporting.
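
One simple way to enforce this is to generate a fresh, timestamped output directory for every run. The sketch below reuses the flags from the earlier examples; the host, credentials, and directory prefix are placeholders.

# Minimal sketch: a dedicated output directory per export run
EXPORT_DIR=/tmp/obdumper_$(date +%Y%m%d_%H%M%S)
mkdir -p "$EXPORT_DIR"
bin/obdumper -h <host IP> -P 2883 -u dumper -p '<password>' \
    --sys-user=proxyro --sys-password='<proxyro password>' \
    -c obdemo -t obmysql -D test --csv --all -f "$EXPORT_DIR"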

Import fails because external files do not meet the requirements

This issue needs to be considered case by case:

  • The external file is a SQL file.

    • If the SQL content is DDL, the SQL must not contain comments, SET (switch) statements, or the like.

    • If the SQL content is DML, the SQL must contain only INSERT statements, and each statement must stay on a single line.

Note

These file formats are troublesome to handle. It is recommended not to run them through OBLOADER; execute them directly from the MySQL command line instead (see the sketch after this list).

  • The external file is a CSV file.

    The CSV file must conform to the standard format. It must use a column separator and a row separator; columns should be enclosed with a specific string (usually double quotes), and any double quotes appearing inside an enclosed column must be escaped (see the sketch after this list).
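
The sketches below illustrate both cases. The file names are hypothetical, and the mysql connection string follows the tenant login format used earlier in this section.

# Case 1: a SQL file that contains comments, SET statements, or multi-line DML
# is easier to execute directly from the MySQL command line than via OBLOADER.
mysql -h <host IP> -P 2883 -u loader@obmysql#obdemo -p -D test2 < /tmp/dump.sql

# Case 2: an illustrative CSV layout that meets the requirements above --
# comma as column separator, newline as row separator, columns enclosed in
# double quotes, and embedded double quotes escaped (shown here doubled,
# following the common CSV convention).
cat > /tmp/sample.csv <<'EOF'
"id","c1"
"1","2021-09-29 21:16:03"
"2","value with ""embedded"" quotes"
EOF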

Export performance issues

When exporting table schemas, if there are many tables, do not use too much concurrency; keep --threads at 4 or below. Higher concurrency brings no benefit and only increases the load of the sys tenant's internal-view queries, which can then cause export timeout errors.

When exporting table data, if there are many tables, you can enable concurrency. By default --threads is adjusted dynamically based on the CPU; you can also set it manually to limit the impact of the export on the host.
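
For example (a hedged sketch: the thread counts are illustrative, and the remaining flags are taken from the earlier examples):

# Schema export: keep --threads at 4 or below to avoid overloading the sys tenant
bin/obdumper -h <host IP> -P 2883 -u dumper -p '<password>' \
    --sys-user=proxyro --sys-password='<proxyro password>' \
    -c obdemo -t obmysql -D test --ddl --all --threads 4 -f /tmp/obdumper

# Data export: an explicitly chosen thread count to bound the impact on the host
bin/obdumper -h <host IP> -P 2883 -u dumper -p '<password>' \
    --sys-user=proxyro --sys-password='<proxyro password>' \
    -c obdemo -t obmysql -D test --csv --all --threads 8 -f /tmp/obdumper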

When exporting or importing a large amount of data, the bottleneck may be OBLOADER or OBDUMPER itself. In that case, edit the two launcher scripts and increase the values of the Java options Xms and Xmx, which set the JVM's initial heap size and maximum heap size respectively.

vim bin/obdumper  or  vim bin/obloader

JAVA_OPTS="$JAVA_OPTS -server -Xms4G -Xmx4G -XX:MetaspaceSize=512M -XX:MaxMetaspaceSize=512M -Xss352K"

Import performance issues

When importing table schemas, do not set --threads higher than 2. Running DDL concurrently does not necessarily speed things up: in OceanBase Database, DDL statements are executed serially, and each DDL takes about 1 second on average.

When importing a large amount of data, you can enable concurrency. By default --threads is adjusted dynamically based on the CPU; you can also set it manually to limit the impact of the import on the host.
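
For example (a hedged sketch; the thread count is illustrative and the other flags follow the earlier OBLOADER examples):

# Data import with a manually chosen thread count
# (for schema import with --ddl, keep --threads at 2 or below)
bin/obloader -h <host IP> -P 2883 -u loader -p '<password>' \
    --sys-user=proxyro --sys-password='<proxyro password>' \
    -c obdemo -t obmysql -D test2 --csv --all --threads 8 -f /tmp/obdumper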

Import performance is also affected by the indexes on the table. For very large tables, it is recommended to create only the primary key when creating the table, and to defer all other unique and secondary indexes until after the data import completes.
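
For instance, for the t1 table imported above, a secondary index would be created only after the load completes (the index name below is made up for illustration):

# Create secondary indexes after the bulk load finishes; idx_t1_c1 is hypothetical
mysql -h <host IP> -P 2883 -u loader@obmysql#obdemo -p -D test2 \
      -e "CREATE INDEX idx_t1_c1 ON t1(c1);"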

Import performance is also limited by how fast the tenant can write to incremental memory (MemTable). When incremental memory runs low, a major compaction (merge) or a minor compaction (dump) is triggered. Major compactions are expensive, so avoid triggering them as much as possible: enable minor compactions instead and allow 100 or more of them (usually enough to last 24 hours). When incremental memory usage reaches the tenant's write-throttling threshold, import performance also drops, and if incremental memory fills up completely the import fails with errors, which can throw away all the progress made so far.

Therefore, it is recommended to keep the tenant write-throttling threshold no higher than 90 (%). The minor-compaction parameters depend on the tenant's memory size and write rate, and need to be tuned for the actual workload.
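
As a sketch of the kind of tuning this implies (the parameter names below come from OceanBase 3.x and may differ across versions; treat them as assumptions and verify them in the product documentation first):

# Run in the sys tenant: allow more minor compactions before a major compaction,
# and cap the tenant write-throttling trigger at 90%.
# minor_freeze_times and writing_throttling_trigger_percentage are assumed
# OceanBase 3.x configuration items.
mysql -h <host IP> -P 2883 -u root@sys#obdemo -p -e "
  ALTER SYSTEM SET minor_freeze_times = 100;
  ALTER SYSTEM SET writing_throttling_trigger_percentage = 90 TENANT = 'obmysql';"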

For more usage questions, refer to the official product documentation.
