[openEuler-22.03-LTS-SP2-round-2] Lustre client test
Hi xin3liang, welcome to the openEuler Community.
I'm the Bot here serving you. You can find instructions on how to interact with me here.
If you have any questions, please contact the SIG: sig-SDS, and any of the maintainers: @chixinze , @liuzhiqiang , @liuqinfei , @luorixin , @Xinliang Liu , @openeuler-ci-bot
Test cluster configuration: 5 machines
[openeuler@oe2203-test-01 ~]$ cat multinode.sh
RCLIENTS="oe2203-test-01"
MDSCOUNT=4
mds_HOST="oe2203-test-02"
MDSDEV1="/dev/vdb"
mds2_HOST="oe2203-test-03"
MDSDEV2="/dev/vdb"
mds3_HOST="oe2203-test-02"
MDSDEV3="/dev/vdc"
mds4_HOST="oe2203-test-03"
MDSDEV4="/dev/vdc"
OSTCOUNT=4
ost_HOST="oe2203-test-04"
OSTDEV1="/dev/vdb"
ost2_HOST="oe2203-test-05"
OSTDEV2="/dev/vdb"
ost3_HOST="oe2203-test-04"
OSTDEV3="/dev/vdc"
ost4_HOST="oe2203-test-05"
OSTDEV4="/dev/vdc"
PDSH="pdsh -S -Rssh -w"
SHARED_DIRECTORY=${SHARED_DIRECTORY:-/opt/testing/shared}
LUSTRE_BRANCH=master
LOAD_MODULES_REMOTE=true
MDSSIZE=0
OSTSIZE=0
MGSSIZE=0
RUNAS_ID="1000"
. $LUSTRE/tests/cfg/ncli.sh
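For reference, the target layout this config encodes (four MDTs split across two MDS nodes, four OSTs split across two OSS nodes, two block devices per server) can be printed with a small sketch; the host/device pairs below are copied verbatim from the config and are specific to this cluster:

```shell
# Summarize the MDT/OST -> host/device layout from multinode.sh above.
layout() {
  for spec in \
      "MDT1 oe2203-test-02 /dev/vdb" \
      "MDT2 oe2203-test-03 /dev/vdb" \
      "MDT3 oe2203-test-02 /dev/vdc" \
      "MDT4 oe2203-test-03 /dev/vdc" \
      "OST1 oe2203-test-04 /dev/vdb" \
      "OST2 oe2203-test-05 /dev/vdb" \
      "OST3 oe2203-test-04 /dev/vdc" \
      "OST4 oe2203-test-05 /dev/vdc"; do
    set -- $spec
    printf '%-4s on %-15s %s\n' "$1" "$2" "$3"
  done
}
layout
```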
Installation and deployment: client 2.15.2, server 2.15.3-RC1
sudo dnf install -y lustre-client lustre-client-tests # install the client RPMs
sudo dnf install -y e2fsprogs kmod-lustre lustre # install the server RPMs built from source: https://github.com/xin3liang/lustre-release/tree/b2_15-openeuler-22.03
sudo /lib64/lustre/tests/llmount.sh -f ~/multinode.sh
[openeuler@oe2203-test-01 ~]$ uname -r
5.10.0-152.0.0.78.oe2203sp2.aarch64
[openeuler@oe2203-test-01 ~]$ sudo dnf list --installed |grep lustre-client
kmod-lustre-client.aarch64 2.15.2-2.oe2203sp2 @EPOL
kmod-lustre-client-tests.aarch64 2.15.2-2.oe2203sp2 @EPOL
lustre-client.aarch64 2.15.2-2.oe2203sp2 @EPOL
lustre-client-devel.aarch64 2.15.2-2.oe2203sp2 @EPOL
lustre-client-tests.aarch64 2.15.2-2.oe2203sp2 @EPOL
[openeuler@oe2203-test-01 ~]$ lfs df -h
UUID bytes Used Available Use% Mounted on
lustre-MDT0000_UUID 27.7G 3.8M 25.2G 1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID 27.7G 3.5M 25.2G 1% /mnt/lustre[MDT:1]
lustre-MDT0002_UUID 27.7G 3.5M 25.2G 1% /mnt/lustre[MDT:2]
lustre-MDT0003_UUID 27.7G 3.5M 25.2G 1% /mnt/lustre[MDT:3]
lustre-OST0000_UUID 48.2G 1.6M 45.7G 1% /mnt/lustre[OST:0]
lustre-OST0001_UUID 48.2G 1.6M 45.7G 1% /mnt/lustre[OST:1]
lustre-OST0002_UUID 48.2G 1.6M 45.7G 1% /mnt/lustre[OST:2]
lustre-OST0003_UUID 97.4G 1.6M 92.4G 1% /mnt/lustre[OST:3]
filesystem_summary: 242.0G 6.5M 229.5G 1% /mnt/lustre
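As a quick consistency check, the `filesystem_summary` capacity that `lfs df -h` reports should equal the sum of the OST sizes (MDT capacity holds metadata only and is not counted in the data total). A small awk sketch over the OST lines above:

```shell
# Sum the "bytes" column of the OST lines from `lfs df -h` output;
# awk's numeric coercion drops the trailing "G" suffix.
ost_total() {
  awk '/\[OST:/ {sum += $2} END {printf "%.1fG\n", sum}'
}
ost_total <<'EOF'
lustre-OST0000_UUID        48.2G        1.6M       45.7G   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID        48.2G        1.6M       45.7G   1% /mnt/lustre[OST:1]
lustre-OST0002_UUID        48.2G        1.6M       45.7G   1% /mnt/lustre[OST:2]
lustre-OST0003_UUID        97.4G        1.6M       92.4G   1% /mnt/lustre[OST:3]
EOF
# → 242.0G, matching the filesystem_summary line
```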
Run the sanity test suite
sudo /lib64/lustre/tests/llmount.sh -f ~/multinode.sh
...
== sanity test complete, duration 11627 sec ============== 07:41:57 (1685518917)
sanity: FAIL: test_39r atime on client 1685509010 != ost 0x00000000
sanity: FAIL: test_101j expected 4096 got 8192
sanity: FAIL: test_130a filefrag /mnt/lustre/f130a.sanity failed
sanity: FAIL: test_130b filefrag /mnt/lustre/f130b.sanity failed
sanity: FAIL: test_130c filefrag /mnt/lustre/f130c.sanity failed
sanity: FAIL: test_130d filefrag /mnt/lustre/f130d.sanity failed
sanity: FAIL: test_130e filefrag /mnt/lustre/f130e.sanity failed
sanity: FAIL: test_130g filefrag printed 0 < 400 extents
sanity: FAIL: test_155e dd of=/tmp/f155e.sanity bs=0 count=1k failed
sanity: FAIL: test_155f dd of=/tmp/f155f.sanity bs=0 count=1k failed
sanity: FAIL: test_155g dd of=/tmp/f155g.sanity bs=0 count=1k failed
sanity: FAIL: test_155h dd of=/tmp/f155h.sanity bs=0 count=1k failed
sanity: FAIL: test_230t set d230t.sanity project id failed
sanity: FAIL: test_256 Empty plain llog was not deleted from changelog catalog
sanity: FAIL: test_398m parallel dio write with failure on first stripe succeeded
sanity: FAIL: test_432 mgs and active mismatch, 10 attempts
sanity: FAIL: test_904 set /mnt/lustre/d904.sanity/f904.sanity project id failed
debug=super ioctl neterror warning dlmtrace error emerg ha rpctrace vfstrace config console lfsck
sanity returned 1
Finished at Wed May 31 07:43:12 AM UTC 2023 in 11719s
/lib64/lustre/tests/auster: completed with rc 0
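The failed test numbers can be pulled out of an auster log mechanically, since every failure is reported as `sanity: FAIL: test_<name> <reason>`; a grep/sed sketch, shown here against a trimmed sample of the log above:

```shell
# Extract failed test names from a sanity/auster log and join them with commas.
failed_tests() {
  grep -o 'FAIL: test_[0-9a-z]*' | sed 's/FAIL: test_//' | paste -sd, -
}
failed_tests <<'EOF'
sanity: FAIL: test_39r atime on client 1685509010 != ost 0x00000000
sanity: FAIL: test_101j expected 4096 got 8192
sanity: FAIL: test_256 Empty plain llog was not deleted from changelog catalog
EOF
# → 39r,101j,256
```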
The run took about 3 hours and executed 890+ tests. Failed tests: 39r, 101j, 130a, 130b, 130c, 130d, 130e, 130g, 155e, 155f, 155g, 155h, 230t, 256, 398m, 432, 904
After analysis:
155e, 155f, 155g, 155h are test-suite issues and can be fixed by backporting patch1.
All of the remaining failures are server-side issues.
Among them, 130a, 130b, 130c, 130d, 130e, 130g can be fixed by backporting patch2.
patch1: https://review.whamcloud.com/48030
patch2: https://review.whamcloud.com/49190
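To backport the two patches, one way is to fetch them from Whamcloud's Gerrit and cherry-pick. Gerrit exposes each change under `refs/changes/<last-two-digits>/<change>/<patchset>`; the helper below only builds that ref string. The patchset number (shown as 1) is a placeholder to look up on review.whamcloud.com, and the `fs/lustre-release` project path in the comment should likewise be verified there:

```shell
# Build the Gerrit ref string for a given change number and patchset.
gerrit_ref() {
  change=$1; patchset=$2
  printf 'refs/changes/%02d/%d/%d\n' "$((change % 100))" "$change" "$patchset"
}
gerrit_ref 48030 1   # → refs/changes/30/48030/1
gerrit_ref 49190 1   # → refs/changes/90/49190/1
# Usage sketch (verify project path and patchset number on the Gerrit page):
# git fetch https://review.whamcloud.com/fs/lustre-release "$(gerrit_ref 48030 1)" &&
#   git cherry-pick FETCH_HEAD
```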
After applying the two patches above, the test results are:
== server 2.15.3 rc 1 sanity test complete, duration 240 sec ================ 07:45:36 (1685605536)
sanity: FAIL: test_39r atime on client 1685605331 != ost 0x00000000
sanity: FAIL: test_101j expected 4096 got 8192
sanity: FAIL: test_230t set d230t.sanity project id failed
sanity: FAIL: test_256 Empty plain llog was not deleted from changelog catalog
sanity: FAIL: test_398m parallel dio write with failure on first stripe succeeded
sanity: FAIL: test_432 mgs and active mismatch, 10 attempts
sanity: FAIL: test_904 set /mnt/lustre/d904.sanity/f904.sanity project id failed
debug=super ioctl neterror warning dlmtrace error emerg ha rpctrace vfstrace config console lfsck
sanity returned 1
Finished at Thu Jun 1 07:45:37 AM UTC 2023 in 258s
/lib64/lustre/tests/auster: completed with rc 0
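Diffing the FAIL lists of the two runs confirms which tests the two patches fixed; a small shell sketch, with the two lists copied from the logs above:

```shell
# Compare the before/after FAIL lists: anything in the first run that is
# absent from the second was fixed by the two backported patches.
before="39r 101j 130a 130b 130c 130d 130e 130g 155e 155f 155g 155h 230t 256 398m 432 904"
after="39r 101j 230t 256 398m 432 904"
fixed=$(for t in $before; do
  case " $after " in *" $t "*) ;; *) printf '%s\n' "$t" ;; esac
done | paste -sd, -)
echo "Fixed by patch1+patch2: $fixed"
# → Fixed by patch1+patch2: 130a,130b,130c,130d,130e,130g,155e,155f,155g,155h
```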
Filed an issue for the client-side failed tests 155e, 155f, 155g, 155h:
#I7ABEY: [22.03-LTS-SP2 round2] sanity test 155 failed
And filed a ticket for the remaining server-side failed tests:
https://linaro.atlassian.net/browse/STOR-209