
openEuler / community


After logging in as root over SSH to the Peng Cheng Lab openEuler test machine, the system keeps reporting: Message from syslogd…… kernel:[278281.523785] watchdog: BUG: soft lockup - CPU#3 stuck for 33s! [kworker/3:0:12999]

Completed
Bug
Created on 2020-04-20 20:05

This is a bug:

  • bug

What happened:
After logging in as root over SSH to the Peng Cheng Lab openEuler test machine, the system keeps reporting: Message from syslogd…… kernel:[278281.523785] watchdog: BUG: soft lockup - CPU#3 stuck for 33s! [kworker/3:0:12999]

Observed result: (see attached screenshot)

Expected result:
After logging in as root over SSH to the Peng Cheng Lab openEuler test machine, this Message is no longer reported.

Additional notes:
From noon on 2020-04-17 until 7 p.m. on 2020-04-21 I tested many times, and I have already emailed both your community and the Peng Cheng Lab platform to report the bug; the error still occurs: Message from syslogd@pc-openeuler-1 at Apr 20 19:11:25 ...
kernel:[278281.523785] watchdog: BUG: soft lockup - CPU#3 stuck for 33s! [kworker/3:0:12999]

When I press Ctrl+C, the session returns to a normal terminal, but after a while the Message appears again. In addition, the network latency is very high, the user experience is poor, and the SSH session sometimes drops unexpectedly.

Environment:

  • Version:
    [root@pc-openeuler-1 ~]# hostnamectl
    Static hostname: pc-openeuler-1
    Icon name: computer-vm
    Chassis: vm
    Machine ID: 9e1ff7da223f495288b145eea478b74f
    Boot ID: ad48c274749348eb82a3ec0783a45770
    Virtualization: kvm
    Operating System: openEuler 1.0 ()
    Kernel: Linux 4.19.90-vhulk2001.1.0.0026.aarch64
    Architecture: arm64

  • OS version:
    [root@pc-openeuler-1 ~]# cat /etc/os-release
    NAME="openEuler"
    VERSION="1.0 ()"
    ID="openEuler"
    VERSION_ID="1.0"
    PRETTY_NAME="openEuler 1.0 ()"
    ANSI_COLOR="0;31"

  • Kernel version:
    [root@pc-openeuler-1 ~]# uname -a
    Linux pc-openeuler-1 4.19.90-vhulk2001.1.0.0026.aarch64 #1 SMP Fri Feb 7 04:09:58 UTC 2020 aarch64 GNU/Linux

  • Other:
    Details of the test machine I am currently using:
    openEuler OS, Kunpeng 920 architecture, 4 cores, 8 GB RAM, 80 GB disk, 1 NIC, no cloud disk.

Comments (133)

coding created the issue
coding set the associated repository to openEuler/community

Hey @coding, Welcome to openEuler Community.
All of the projects in openEuler Community are maintained by @openeuler-ci-bot.
That means the developers can comment below every pull request or issue to trigger Bot Commands.
Please follow instructions at https://gitee.com/openeuler/community/blob/master/en/sig-infrastructure/command.md to find the details.

Thanks for the feedback.
Asking colleagues on the kernel team to take a look.

@coding Thanks for your feedback. Please attach the messages file you sent me to this issue; progress on the problem will be tracked in this issue.

Asking the virtualization colleagues to take a look as well; we may also need the Peng Cheng Lab to check the host side. First priority is to find out the host version.

Excerpts of several soft lockup traces from the log:
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.712488] dm_region_hash dm_log dm_mod
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.714635] CPU: 3 PID: 11867 Comm: kworker/3:0 Kdump: loaded Tainted: G C L 4.19.90-vhulk2001.1.0.0026.aarch64 #1
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.718953] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.721133] Workqueue: events virtio_gpu_fb_dirty_work [virtio_gpu]
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.722885] pstate: 00000005 (nzcv daif -PAN -UAO)
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.724391] pc : vp_notify+0x28/0x38 [virtio_pci]
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.726326] lr : virtqueue_kick+0x3c/0x78 [virtio_ring]
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.728013] sp : ffff0000110cfae0
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.729224] x29: ffff0000110cfae0 x28: 0000000000000000
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.730811] x27: ffff0000110cfc20 x26: 0000000000000001
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.733028] x25: 0000000000480020 x24: 0000000000000001
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.735203] x23: ffff800161c61140 x22: ffff80016cafa448
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.737428] x21: 0000000000000000 x20: ffff80016c6f6000
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.741074] x19: ffff80016c6f6000 x18: 0000000000000000
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.743481] x17: 0000000000000000 x16: 0000000000000000
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.744935] x15: 0000000000000000 x14: 0000000000000000
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.746376] x13: 0000000000000000 x12: 0000000000000000
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.747828] x11: 0000000000000000 x10: 0000000000000b80
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.749280] x9 : ffff0000110cfd40 x8 : ffff0000110cfbf8
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.751135] x7 : 00000000000011c0 x6 : ffff7fe000587180
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.752596] x5 : 000000000034e2bb x4 : ffff8001ff720ba0
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.754056] x3 : 0000000000000001 x2 : 0000000000000040
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.756162] x1 : ffff00000fda3000 x0 : 0000000000000000
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.758529] Call trace:
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.761243] vp_notify+0x28/0x38 [virtio_pci]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.763495] virtqueue_kick+0x3c/0x78 [virtio_ring]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.766302] virtio_gpu_queue_ctrl_buffer_locked+0x180/0x248 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.768560] virtio_gpu_queue_fenced_ctrl_buffer+0xdc/0x160 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.775386] virtio_gpu_cmd_transfer_to_host_2d+0xa4/0xd0 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.777164] virtio_gpu_dirty_update+0x194/0x218 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.778933] virtio_gpu_fb_dirty_work+0x3c/0x48 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.796026] process_one_work+0x1b0/0x448
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.800202] worker_thread+0x54/0x468
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.804346] kthread+0x134/0x138
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.808260] ret_from_fork+0x10/0x18

Apr 20 04:55:30 pc-openeuler-1 kernel: [226946.264203] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [dnf:12072]
Apr 20 04:55:30 pc-openeuler-1 kernel: [226947.285148] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.223435] dm_region_hash dm_log dm_mod
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.225542] CPU: 0 PID: 12072 Comm: dnf Kdump: loaded Tainted: G C L 4.19.90-vhulk2001.1.0.0026.aarch64 #1
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.228853] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.231627] pstate: 60000005 (nZCv daif -PAN -UAO)
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.233654] pc : run_timer_softirq+0x1a8/0x1f0
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.235679] lr : run_timer_softirq+0x154/0x1f0
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.237129] sp : ffff00000800fe60
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.238743] x29: ffff00000800fe60 x28: 0000000000000282
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.241127] x27: 0000000000000002 x26: ffff0000092700c8
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.243156] x25: ffff000008f40018 x24: ffff0000092700c0
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.245564] x23: 0000000000000001 x22: ffff000009273000
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.247614] x21: ffff000009271000 x20: ffff8001ff6776c0
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.249988] x19: 00000000ffffffff x18: ffff000009271000
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.252308] x17: 0000000000000000 x16: 0000000000000000
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.254611] x15: 0000000000000000 x14: 0000000000000400
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.256974] x13: 0000000000000040 x12: 0000000000000228
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.259227] x11: 0000000000000000 x10: 0000000000000040
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.261346] x9 : ffff000009296320 x8 : ffff800140004900
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.263484] x7 : ffff800140004928 x6 : 0000000000000000
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.265839] x5 : ffff800140004900 x4 : 00000001436077a5
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.268065] x3 : 0000000000000000 x2 : 008c36560de8ef00
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.475510] x1 : ffffffffffffffff x0 : 00000000000000e0
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.479356] Call trace:
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.481259] run_timer_softirq+0x1a8/0x1f0
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.483024] __do_softirq+0x11c/0x31c
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.485434] irq_exit+0x11c/0x128
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.487340] __handle_domain_irq+0x6c/0xc0
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.489226] gic_handle_irq+0x6c/0x170
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.491132] el1_irq+0xb8/0x140
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.493253] __arch_copy_to_user+0x180/0x21c
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.495568] copy_page_to_iter+0xd0/0x320
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.497488] generic_file_buffered_read+0x254/0x740
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.499294] generic_file_read_iter+0x114/0x190
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.501420] ext4_file_read_iter+0x5c/0x140 [ext4]
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.503734] __vfs_read+0x11c/0x188
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.505560] vfs_read+0x94/0x150
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.506989] ksys_read+0x74/0xf0
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.508206] __arm64_sys_read+0x24/0x30
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.509489] el0_svc_common+0x78/0x130
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.510871] el0_svc_handler+0x38/0x78
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.512155] el0_svc+0x8/0xc

Apr 20 05:02:11 pc-openeuler-1 kernel: [227329.042977] watchdog: BUG: soft lockup - CPU#0 stuck for 28s! [swapper/0:0]
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.081876] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.102522] dm_region_hash dm_log dm_mod
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.104628] CPU: 0 PID: 0 Comm: swapper/0 Kdump: loaded Tainted: G C L 4.19.90-vhulk2001.1.0.0026.aarch64 #1
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.107273] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.109014] pstate: 40000005 (nZcv daif -PAN -UAO)
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.110487] pc : __do_softirq+0xa0/0x31c
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.111826] lr : __do_softirq+0x64/0x31c
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.113156] sp : ffff00000800fee0
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.114398] x29: ffff00000800fee0 x28: 0000000000000082
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.115866] x27: ffff000008f5cd80 x26: ffff000008010000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.117358] x25: ffff000008000000 x24: ffff800160039800
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.118892] x23: ffff00000924fd50 x22: 0000000000000000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.122482] x21: 0000000000000000 x20: 0000000000000003
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.124803] x19: ffff00000927f080 x18: ffff000009271000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.127733] x17: 0000000000000000 x16: 0000000000000000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.129320] x15: 0000000000000000 x14: 0000000000000000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.131120] x13: 0000000000000000 x12: 0000000000000000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.132542] x11: ffff000008a6e0c0 x10: 0000000000000040
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.133985] x9 : ffff000008a6e0c8 x8 : 0000000000000000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.135404] x7 : 0000000000000004 x6 : 00000aa3a4c2f21a
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.136825] x5 : 00ffffffffffffff x4 : 0000000000000015
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.138268] x3 : 0000cbffd54d76f4 x2 : 00008001f6730000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.139701] x1 : 00000000000000e0 x0 : ffff000008f5cd80
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.141187] Call trace:
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.142296] __do_softirq+0xa0/0x31c
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.144318] irq_exit+0x11c/0x128
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.145815] __handle_domain_irq+0x6c/0xc0
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.147258] gic_handle_irq+0x6c/0x170
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.148501] el1_irq+0xb8/0x140
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.149697] arch_cpu_idle+0x38/0x1c0
Apr 20 05:02:12 pc-openeuler-1 kernel: [227338.150920] default_idle_call+0x24/0x40
Apr 20 05:02:12 pc-openeuler-1 kernel: [227338.152169] do_idle+0x1d4/0x2b0
Apr 20 05:02:12 pc-openeuler-1 kernel: [227338.153338] cpu_startup_entry+0x2c/0x30
Apr 20 05:02:12 pc-openeuler-1 kernel: [227338.154583] rest_init+0xb8/0xc8
Apr 20 05:02:12 pc-openeuler-1 kernel: [227338.155747] start_kernel+0x4d0/0x4fc

solarhu added collaborator zhanghailiang
solarhu set the planned deadline to 2020-04-24
solarhu set the planned start date to 2020-04-24
solarhu changed the planned start date from 2020-04-24 to 2020-04-21
solarhu added the kind/bug label
solarhu set the priority to Major
Xie XiuQi added collaborator Xie XiuQi
Xie XiuQi changed the assignee from Xie XiuQi to wangxiongfeng
Xie XiuQi changed the assignee from wangxiongfeng to unset
Xie XiuQi added collaborator wangxiongfeng

Since I could not find a way to upload the messages file under this issue,

I created my own repository; open the link below to see the messages file:

pc-openEuler-messages.

@solarhu @木得感情的openEuler机器人 @Xie XiuQi @openeuler-ci-bot @coding

The Message is still being reported in real time:

[root@pc-openeuler-1 log]# 
Message from syslogd@pc-openeuler-1 at Apr 21 13:30:35 ...
 kernel:[344259.358524] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [dbus-daemon:1296]

2020-04-21 14:05 real-time status update:

[root@pc-openeuler-1 log]# 
Message from syslogd@pc-openeuler-1 at Apr 21 13:30:35 ...
 kernel:[344259.358524] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [dbus-daemon:1296]

Message from syslogd@pc-openeuler-1 at Apr 21 14:00:58 ...
 kernel:[346062.993562] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [kworker/0:0:14989]

Message from syslogd@pc-openeuler-1 at Apr 21 14:01:42 ...
 kernel:[346109.818760] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:2:15437]

Message from syslogd@pc-openeuler-1 at Apr 21 14:01:42 ...
 kernel:[346121.510088] watchdog: BUG: soft lockup - CPU#3 stuck for 37s! [kworker/3:0:15454]

@solarhu @木得感情的openEuler机器人 @openeuler-ci-bot @myeuler @coding

I see that every time the error occurs, there are also multipathd[1236]: sda: unusable path errors being reported one after another.
And wherever a soft lockup is reported, virtio always shows up first in the trace (see the log excerpt below; a small correlation sketch follows it).

Apr 20 04:00:40 pc-openeuler-1 chronyd[1307]: Can't synchronise: no selectable sources
Apr 20 04:00:44 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:00:49 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:00:55 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:01:01 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:01:09 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:02:04 pc-openeuler-1 kernel: [223721.182851] watchdog: BUG: soft lockup - CPU#3 stuck for 21s! [kworker/3:0:11867]
Apr 20 04:02:05 pc-openeuler-1 kernel: [223724.162697] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.712488]  dm_region_hash dm_log dm_mod
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.714635] CPU: 3 PID: 11867 Comm: kworker/3:0 Kdump: loaded Tainted: G         C   L    4.19.90-vhulk2001.1.0.0026.aarch64 #1
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.718953] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.721133] Workqueue: events virtio_gpu_fb_dirty_work [virtio_gpu]
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.722885] pstate: 00000005 (nzcv daif -PAN -UAO)
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.724391] pc : vp_notify+0x28/0x38 [virtio_pci]
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.726326] lr : virtqueue_kick+0x3c/0x78 [virtio_ring]
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.728013] sp : ffff0000110cfae0
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.729224] x29: ffff0000110cfae0 x28: 0000000000000000 
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.730811] x27: ffff0000110cfc20 x26: 0000000000000001 
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.733028] x25: 0000000000480020 x24: 0000000000000001 
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.735203] x23: ffff800161c61140 x22: ffff80016cafa448 
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.737428] x21: 0000000000000000 x20: ffff80016c6f6000 
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.741074] x19: ffff80016c6f6000 x18: 0000000000000000 
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.743481] x17: 0000000000000000 x16: 0000000000000000 
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.744935] x15: 0000000000000000 x14: 0000000000000000 
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.746376] x13: 0000000000000000 x12: 0000000000000000 
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.747828] x11: 0000000000000000 x10: 0000000000000b80 
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.749280] x9 : ffff0000110cfd40 x8 : ffff0000110cfbf8 
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.751135] x7 : 00000000000011c0 x6 : ffff7fe000587180 
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.752596] x5 : 000000000034e2bb x4 : ffff8001ff720ba0 
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.754056] x3 : 0000000000000001 x2 : 0000000000000040 
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.756162] x1 : ffff00000fda3000 x0 : 0000000000000000 
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.758529] Call trace:
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.761243]  vp_notify+0x28/0x38 [virtio_pci]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.763495]  virtqueue_kick+0x3c/0x78 [virtio_ring]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.766302]  virtio_gpu_queue_ctrl_buffer_locked+0x180/0x248 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.768560]  virtio_gpu_queue_fenced_ctrl_buffer+0xdc/0x160 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.775386]  virtio_gpu_cmd_transfer_to_host_2d+0xa4/0xd0 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.777164]  virtio_gpu_dirty_update+0x194/0x218 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.778933]  virtio_gpu_fb_dirty_work+0x3c/0x48 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.796026]  process_one_work+0x1b0/0x448
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.800202]  worker_thread+0x54/0x468
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.804346]  kthread+0x134/0x138
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.808260]  ret_from_fork+0x10/0x18
Apr 20 04:02:09 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:02:14 pc-openeuler-1 multipathd[1236]: sda: unusable path
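
Not from the thread, just a sketch of how one could check how tightly the multipathd noise and the soft lockup reports line up in time, using the log path and PID 1236 shown above:

# count the multipathd complaints about sda in the persisted syslog
grep -c "multipathd\[1236\]: sda: unusable path" /var/log/messages
# show the lines just before each soft lockup report, keeping only the two patterns of interest
grep -B 10 "watchdog: BUG: soft lockup" /var/log/messages | grep -E "multipathd|soft lockup"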

I noticed this too. At 11:45 this morning I discussed this multipathd service with @solarhu over the phone, and he said to turn the multipathd service off.

@Xie XiuQi

Listing all processes:

[root@pc-openeuler-1 ~]# ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  3 Apr17 ?        03:27:55 /usr/lib/systemd/systemd --switched-root --system --deserialize 16
root           2       0  0 Apr17 ?        00:00:30 [kthreadd]
root           3       2  0 Apr17 ?        00:00:00 [rcu_gp]
root           4       2  0 Apr17 ?        00:00:00 [rcu_par_gp]
root           6       2  0 Apr17 ?        00:00:00 [kworker/0:0H-kblockd]
root           8       2  0 Apr17 ?        00:00:00 [mm_percpu_wq]
root           9       2  0 Apr17 ?        00:04:05 [ksoftirqd/0]
root          10       2  0 Apr17 ?        00:07:17 [rcu_sched]
root          11       2  0 Apr17 ?        00:00:00 [rcu_bh]
root          12       2  0 Apr17 ?        00:06:48 [migration/0]
root          13       2  0 Apr17 ?        00:00:00 [cpuhp/0]
root          14       2  0 Apr17 ?        00:00:00 [cpuhp/1]
root          15       2  0 Apr17 ?        00:03:22 [migration/1]
root          16       2  0 Apr17 ?        00:04:44 [ksoftirqd/1]
root          18       2  0 Apr17 ?        00:00:00 [kworker/1:0H-kblockd]
root          19       2  0 Apr17 ?        00:00:00 [cpuhp/2]
root          20       2  0 Apr17 ?        00:05:38 [migration/2]
root          21       2  0 Apr17 ?        00:35:55 [ksoftirqd/2]
root          23       2  0 Apr17 ?        00:00:00 [kworker/2:0H]
root          24       2  0 Apr17 ?        00:00:00 [cpuhp/3]
root          25       2  0 Apr17 ?        00:05:36 [migration/3]
root          26       2  0 Apr17 ?        00:03:48 [ksoftirqd/3]
root          28       2  0 Apr17 ?        00:00:00 [kworker/3:0H-kblockd]
root          29       2  0 Apr17 ?        00:00:00 [kdevtmpfs]
root          30       2  0 Apr17 ?        00:00:00 [netns]
root          31       2  0 Apr17 ?        00:00:01 [kauditd]
root          34       2  0 Apr17 ?        00:09:32 [khungtaskd]
root          35       2  0 Apr17 ?        00:00:00 [oom_reaper]
root          36       2  0 Apr17 ?        00:00:00 [writeback]
root          37       2  0 Apr17 ?        00:00:00 [kcompactd0]
root          38       2  0 Apr17 ?        00:00:00 [ksmd]
root          39       2  0 Apr17 ?        00:00:00 [khugepaged]
root          41       2  0 Apr17 ?        00:00:00 [crypto]
root          42       2  0 Apr17 ?        00:00:00 [kintegrityd]
root          43       2  0 Apr17 ?        00:00:00 [kblockd]
root          44       2  0 Apr17 ?        00:00:00 [md]
root          45       2  0 Apr17 ?        00:00:00 [edac-poller]
root          46       2  0 Apr17 ?        00:00:00 [watchdogd]
root          49       2  0 Apr17 ?        00:00:00 [kswapd0]
root         100       2  0 Apr17 ?        00:00:00 [kthrotld]
root         101       2  0 Apr17 ?        00:00:00 [irq/41-pciehp]
root         102       2  0 Apr17 ?        00:00:00 [irq/42-pciehp]
root         103       2  0 Apr17 ?        00:00:00 [irq/43-pciehp]
root         104       2  0 Apr17 ?        00:00:00 [irq/44-pciehp]
root         105       2  0 Apr17 ?        00:00:00 [irq/45-pciehp]
root         106       2  0 Apr17 ?        00:00:00 [irq/47-pciehp]
root         107       2  0 Apr17 ?        00:00:00 [irq/50-pciehp]
root         108       2  0 Apr17 ?        00:00:00 [irq/53-pciehp]
root         109       2  0 Apr17 ?        00:00:00 [irq/55-pciehp]
root         110       2  0 Apr17 ?        00:00:00 [irq/57-pciehp]
root         111       2  0 Apr17 ?        00:00:00 [irq/59-pciehp]
root         112       2  0 Apr17 ?        00:00:00 [irq/61-pciehp]
root         113       2  0 Apr17 ?        00:00:00 [irq/63-pciehp]
root         115       2  0 Apr17 ?        00:00:00 [kmpath_rdacd]
root         116       2  0 Apr17 ?        00:00:00 [kaluad]
root         117       2  0 Apr17 ?        00:00:00 [ipv6_addrconf]
root         339       2  0 Apr17 ?        00:00:00 [scsi_eh_0]
root         340       2  0 Apr17 ?        00:00:00 [scsi_tmf_0]
root         341       2  0 Apr17 ?        00:00:00 [ttm_swap]
root         348       2  0 Apr17 ?        00:06:28 [kworker/0:1H-kblockd]
root         403       2  0 Apr17 ?        00:00:00 [kdmflush]
root         415       2  0 Apr17 ?        00:00:56 [kworker/1:1H-kblockd]
root         421       2  2 Apr17 ?        02:33:23 [jbd2/dm-0-8]
root         422       2  0 Apr17 ?        00:00:00 [ext4-rsv-conver]
root         486       2  0 Apr17 ?        00:04:48 [kworker/3:1H-kblockd]
root         498       2  0 Apr17 ?        00:00:45 [kworker/2:1H-kblockd]
root        1222       2  0 Apr17 ?        00:00:00 [kdmflush]
root        1232       2  0 Apr17 ?        00:00:00 [kmpathd]
root        1233       2  0 Apr17 ?        00:00:00 [kmpath_handlerd]
root        1236       1 13 Apr17 ?        13:21:54 /sbin/multipathd -d -s
root        1259       2  0 Apr17 ?        00:00:00 [jbd2/sda2-8]
root        1260       2  0 Apr17 ?        00:00:00 [ext4-rsv-conver]
root        1269       2  0 Apr17 ?        00:00:01 [jbd2/dm-1-8]
root        1270       2  0 Apr17 ?        00:00:00 [ext4-rsv-conver]
root        1277       1  0 Apr17 ?        00:57:29 /sbin/auditd
avahi       1294       1  0 Apr17 ?        00:20:00 avahi-daemon: running [pc-openeuler-1.local]
dbus        1296       1  0 Apr17 ?        00:13:37 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
polkitd     1299       1  0 Apr17 ?        00:15:28 /usr/lib/polkit-1/polkitd --no-debug
root        1303       1  9 Apr17 ?        08:49:34 /usr/sbin/rsyslogd -n -iNONE
root        1306       1  6 Apr17 ?        06:23:31 /usr/sbin/irqbalance --pid=/var/run/irqbalance.pid
chrony      1307       1  1 Apr17 ?        01:42:08 /usr/sbin/chronyd
root        1315       1  0 Apr17 ?        00:00:04 /usr/sbin/restorecond
avahi       1338    1294  0 Apr17 ?        00:00:00 avahi-daemon: chroot helper
root        1374       1  0 Apr17 ?        00:00:09 /usr/bin/python3 /usr/sbin/firewalld --nofork --nopid
root        1376       1  5 Apr17 ?        05:25:28 /usr/sbin/NetworkManager --no-daemon
systemd+    1377       1  1 Apr17 ?        01:42:42 /usr/lib/systemd/systemd-networkd
root        1396       1  0 Apr17 ?        00:00:36 /usr/sbin/sshd -D
root        1400       1  8 Apr17 ?        08:25:36 /usr/bin/python3 -Es /usr/sbin/tuned -l -P
root        1424       1  2 Apr17 ?        02:02:19 /usr/sbin/crond -n
root        1735       1  0 Apr17 ?        00:00:12 dhclient
root        1752       1  0 Apr17 tty1     00:00:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
root        1763       1  0 Apr17 ttyAMA0  00:00:00 /sbin/agetty -o -p -- \u --keep-baud 115200,38400,9600 ttyAMA0 vt220
root       14981       1  0 07:57 ?        00:02:31 /usr/lib/systemd/systemd-udevd
root       14989       2  1 07:57 ?        00:06:09 [kworker/0:0-events_power_efficient]
root       15150       1  0 09:16 ?        00:02:04 /usr/lib/systemd/systemd-logind
root       15157       1  5 09:21 ?        00:16:26 /usr/lib/systemd/systemd-journald
root       15400       2  0 11:11 ?        00:01:11 [kworker/u8:1-events_unbound]
root       15437       2  0 11:17 ?        00:01:13 [kworker/1:2-events]
root       15452       2  0 11:27 ?        00:00:03 [kworker/2:0-mm_percpu_wq]
root       15454       2  6 11:27 ?        00:12:31 [kworker/3:0-events]
root       15460       2  0 11:27 ?        00:00:27 [kworker/u8:2-events_unbound]
root       15544    1396  0 12:16 ?        00:00:02 sshd: root [priv]
root       15551       1  0 12:16 ?        00:00:00 /usr/lib/systemd/systemd --user
root       15553   15551  0 12:16 ?        00:00:00 (sd-pam)
root       15559   15544  0 12:16 ?        00:00:41 sshd: root@pts/0
root       15560   15559  0 12:16 pts/0    00:00:03 -bash
root       15831       2  0 13:18 ?        00:00:00 [kworker/0:2-events_power_efficient]
root       15878       2  0 14:10 ?        00:00:00 [kworker/2:2]
root       15891       2  0 14:18 ?        00:00:00 [kworker/3:2]
root       15893       2  0 14:18 ?        00:00:00 [kworker/1:3-cgroup_destroy]
root       15968   15560  0 14:37 pts/0    00:00:00 ps -ef
[root@pc-openeuler-1 ~]# 

@Xie XiuQi @solarhu @木得感情的openEuler机器人 @openeuler-ci-bot @coding

Checking the watchdogd process:

[root@pc-openeuler-1 ~]# ps -aux | grep watchdogd                                                                                                                                                                  
root          46  0.0  0.0      0     0 ?        S    Apr17   0:00 [watchdogd]
root       15995 23.6  0.0 214016  1536 pts/0    S+   14:39   0:01 grep watchdogd
[root@pc-openeuler-1 ~]# 

@Xie XiuQi @solarhu @木得感情的openEuler机器人 @openeuler-ci-bot @coding

Checking the multipathd process:

[root@pc-openeuler-1 ~]# ps -aux | grep multipathd
root        1236 13.8  0.2 350144 20800 ?        SLsl Apr17 802:24 /sbin/multipathd -d -s
root       16033 47.0  0.0 214016  1536 pts/0    S+   14:43   0:00 grep multipathd
[root@pc-openeuler-1 ~]#

@Xie XiuQi @solarhu @木得感情的openEuler机器人 @openeuler-ci-bot @coding

multipathd should not be related; just disabling that service is fine. Multipath is currently not supported, and this service has since been disabled by default.

I think the key is to look at the virtio call trace for the likely cause.
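
A minimal sketch of the suggested workaround (assuming multipathd is managed by systemd on this image; this silences the "unusable path" noise but is not necessarily a fix for the lockups):

systemctl stop multipathd        # stop the running daemon (/sbin/multipathd -d -s seen in ps above)
systemctl disable multipathd     # keep it from starting again on the next boot
systemctl status multipathd      # confirm it is now inactive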

@freesky-edward Please help check what permissions are needed to be able to upload an attachment.

@solarhu Which attachment?

I mean, how can I upload the messages file to this issue?

@freesky-edward

multipathd should not be related; just disabling that service is fine. Multipath is currently not supported, and this service has since been disabled by default.
I think the key is to look at the virtio call trace for the likely cause.

Then how can I see the virtio call trace? What do I need to do?
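
(A sketch, not maintainer guidance:) the virtio call trace is already captured in the kernel log whenever the lockup fires; it is what was excerpted earlier in this thread. Assuming the default log locations on this box, it can be pulled out with:

# from the kernel ring buffer of the running system (-T prints readable timestamps; drop it if unsupported)
dmesg -T | grep -A 40 "soft lockup"
# or from the persisted syslog, keeping the stack that follows each report
grep -A 40 "watchdog: BUG: soft lockup" /var/log/messages | less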

@solarhu

The real-time situation is still stuck reporting the Message...

[root@pc-openeuler-1 ~]# 
Message from syslogd@pc-openeuler-1 at Apr 21 16:00:31 ...
 kernel:[353237.984311] watchdog: BUG: soft lockup - CPU#1 stuck for 11s! [kworker/1:2:15437]

Message from syslogd@pc-openeuler-1 at Apr 21 16:00:31 ...
 kernel:[353254.729207] watchdog: BUG: soft lockup - CPU#3 stuck for 27s! [kworker/3:0:15454]

Message from syslogd@pc-openeuler-1 at Apr 21 16:01:31 ...
 kernel:[353309.845852] watchdog: BUG: soft lockup - CPU#0 stuck for 14s! [kworker/0:1:18880]

I just took a look at the watchdog settings:

[root@pc-openeuler-1 ~]# cat /proc/sys/kernel/watchdog
1
[root@pc-openeuler-1 ~]# cat /proc/sys/kernel/watchdog_cpumask 
0-3
[root@pc-openeuler-1 ~]# cat /proc/sys/kernel/watchdog_thresh  
5
[root@pc-openeuler-1 ~]#
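
For reference (my note, not from the thread): the soft lockup detector fires when a CPU keeps the watchdog task from running for roughly 2 × watchdog_thresh seconds, so with watchdog_thresh = 5 the window here is about 10 s, which is consistent with the shortest reports above (11 s, 13 s). A sketch of checking and, purely as a noise-reduction experiment, widening the threshold (the value 10 is an assumption, not a maintainer recommendation):

cat /proc/sys/kernel/watchdog_thresh     # 5 on this box, i.e. a ~10 s soft lockup window
sysctl -w kernel.watchdog_thresh=10      # hypothetical: widen the window to ~20 s for this boot
# add "kernel.watchdog_thresh = 10" to /etc/sysctl.conf to persist it, if desired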

Reference link: link.

Analysis of the cause of the kernel soft lockup bug

A soft lockup does not take the whole system down, but a number of processes (or kernel threads) get stuck in a certain state (usually in kernel space); in many cases this is due to problems with the use of kernel locks.

Lockups come in two kinds: soft lockup and hard lockup.

A soft lockup means a kernel bug keeps a CPU looping in kernel mode for more than 10 s (the exact value depends on implementation and configuration), so other processes get no chance to run.
A hard lockup means the kernel has hung completely; detailed information can be obtained through a mechanism such as the watchdog.

------------------------------------------------------------------

Engineers responsible for the kernel part, please help investigate.

The messages file is in my personal repository.

Below is sysctl.conf:

[root@pc-openeuler-1 ~]# cat /etc/sysctl.conf 
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
kernel.sysrq=0
net.ipv4.ip_forward=0
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.default.send_redirects=0
net.ipv4.conf.all.accept_source_route=0
net.ipv4.conf.default.accept_source_route=0
net.ipv4.conf.all.accept_redirects=0
net.ipv4.conf.default.accept_redirects=0
net.ipv4.conf.all.secure_redirects=0
net.ipv4.conf.default.secure_redirects=0
net.ipv4.icmp_echo_ignore_broadcasts=1
net.ipv4.icmp_ignore_bogus_error_responses=1
net.ipv4.conf.all.rp_filter=1
net.ipv4.conf.default.rp_filter=1
net.ipv4.tcp_syncookies=1
kernel.dmesg_restrict=1
net.ipv6.conf.all.accept_redirects=0
net.ipv6.conf.default.accept_redirects=0
[root@pc-openeuler-1 ~]#
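
One thing worth noting (my observation, not raised in the thread): kernel.sysrq=0 above disables the magic SysRq interface, which is useful for soft lockup debugging because SysRq-l dumps backtraces of all active CPUs to the kernel log. A minimal sketch of enabling it temporarily on this test box, assuming that is acceptable here:

echo 1 > /proc/sys/kernel/sysrq     # enable all SysRq functions for the current boot only
echo l > /proc/sysrq-trigger        # dump backtraces of currently active CPUs
dmesg | tail -n 100                 # inspect the dumped backtraces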

I checked disk I/O with iostat:

[root@pc-openeuler-1 ~]# iostat -x 1 10
Linux 4.19.90-vhulk2001.1.0.0026.aarch64 (pc-openeuler-1) 	04/21/2020 	_aarch64_	(4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.59    0.01   10.14    2.05    0.00   87.21

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda              0.05    0.81      0.87     46.64     0.00     0.43   1.82  34.86   13.78 1294.49   0.92    16.17    57.56 157.02  13.57
dm-0             0.05    1.24      0.74     47.15     0.00     0.00   0.00   0.00   12.89 1052.62   1.31    13.79    37.91 139.69  18.12
dm-1             0.00    0.00      0.01      0.00     0.00     0.00   0.00   0.00    9.46  254.78   0.00    52.75     5.14  95.45   0.00


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.24    0.00    0.24    0.00    0.00   99.52

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda              0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00
dm-0             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00
dm-1             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.31    0.00    2.21    0.00    0.00   94.48

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda              0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00
dm-0             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00
dm-1             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.79    0.00   48.66    0.00    0.00   49.55

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda              0.00    0.41      0.00      0.00     0.00     0.00   0.00   0.00    0.00  104.00   0.00     0.00     0.00   0.00   0.00
dm-0             0.00    0.83      0.00      3.32     0.00     0.00   0.00   0.00    0.00   72.00   0.06     0.00     4.00  72.00   5.98
dm-1             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.28    0.00    1.42    0.57    0.00   97.73

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda              0.00    1.05      0.00      4.21     0.00     0.00   0.00   0.00    0.00    7.00   0.00     0.00     4.00   0.00   0.00
dm-0             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.42
dm-1             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.25    0.00    0.00   99.75

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda              0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00
dm-0             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00
dm-1             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda              0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00
dm-0             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00
dm-1             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda              0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00
dm-0             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00
dm-1             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.51    0.00    0.00   99.49

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda              0.00   11.00      0.00    644.00     0.00     0.00   0.00   0.00    0.00   61.27   1.02     0.00    58.55  10.18  11.20
dm-0             0.00   17.00      0.00   1156.00     0.00     0.00   0.00   0.00    0.00   38.35   1.33     0.00    68.00   8.24  14.00
dm-1             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00    0.00    0.00  100.00

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda              0.00    6.00      0.00    512.00     0.00     0.00   0.00   0.00    0.00  175.67   0.37     0.00    85.33  16.00   9.60
dm-0             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.37     0.00     0.00   0.00   9.60
dm-1             0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00


[root@pc-openeuler-1 ~]#

@solarhu @Xie XiuQi @木得感情的openEuler机器人 @openeuler-ci-bot

Checking top:

[root@pc-openeuler-1 ~]# top
top - 17:07:03 up 4 days,  3:14,  3 users,  load average: 0.96, 0.77, 0.67
Tasks: 117 total,   1 running, 116 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  0.1 sy,  0.0 ni, 99.6 id,  0.0 wa,  0.0 hi,  0.1 si,  0.0 st
MiB Mem :   6813.8 total,   5251.5 free,    426.6 used,   1135.8 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5390.0 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                                                     
   1424 root      20   0  215872   4864   2688 S  19.3   0.1 123:40.06 crond                                                                                                                                       
  21156 root      20   0  218432   6144   3392 R   0.3   0.1   0:00.77 top                                                                                                                                         
      1 root      20   0  185600  20288   9024 S   0.0   0.3 210:32.94 systemd                                                                                                                                     
      2 root      20   0       0      0      0 S   0.0   0.0   0:34.45 kthreadd                                                                                                                                    
      3 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_gp                                                                                                                                      
      4 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_par_gp                                                                                                                                  
      6 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/0:0H-kblockd                                                                                                                        
      8 root       0 -20       0      0      0 I   0.0   0.0   0:00.06 mm_percpu_wq                                                                                                                                
      9 root      20   0       0      0      0 S   0.0   0.0   4:14.29 ksoftirqd/0                                                                                                                                 
     10 root      20   0       0      0      0 I   0.0   0.0   7:20.88 rcu_sched                                                                                                                                   
     11 root      20   0       0      0      0 I   0.0   0.0   0:00.00 rcu_bh                                                                                                                                      
     12 root      rt   0       0      0      0 S   0.0   0.0   6:54.58 migration/0                                                                                                                                 
     13 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/0                                                                                                                                     
     14 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/1                                                                                                                                     
     15 root      rt   0       0      0      0 S   0.0   0.0   3:23.77 migration/1                                                                                                                                 
     16 root      20   0       0      0      0 S   0.0   0.0   4:48.34 ksoftirqd/1                                                                                                                                 
     18 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/1:0H-kblockd                                                                                                                        
     19 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/2                                                                                                                                     
     20 root      rt   0       0      0      0 S   0.0   0.0   5:44.07 migration/2                                                                                                                                 
     21 root      20   0       0      0      0 S   0.0   0.0  36:36.95 ksoftirqd/2                                                                                                                                 
     23 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/2:0H                                                                                                                                
     24 root      20   0       0      0      0 S   0.0   0.0   0:00.00 cpuhp/3                                                                                                                                     
     25 root      rt   0       0      0      0 S   0.0   0.0   5:44.36 migration/3                                                                                                                                 
     26 root      20   0       0      0      0 S   0.0   0.0   4:00.66 ksoftirqd/3                                                                                                                                 
     28 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/3:0H-kblockd                                                                                                                        
     29 root      20   0       0      0      0 S   0.0   0.0   0:00.23 kdevtmpfs                                                                                                                                   
     30 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 netns                                                                                                                                       
     31 root      20   0       0      0      0 S   0.0   0.0   0:01.36 kauditd                                                                                                                                     
     34 root      20   0       0      0      0 S   0.0   0.0   9:38.33 khungtaskd                                                                                                                                  
     35 root      20   0       0      0      0 S   0.0   0.0   0:00.00 oom_reaper                                                                                                                                  
     36 root       0 -20       0      0      0 I   0.0   0.0   0:00.65 writeback                                                                                                                                   
     37 root      20   0       0      0      0 S   0.0   0.0   0:00.00 kcompactd0                                                                                                                                  
     38 root      25   5       0      0      0 S   0.0   0.0   0:00.00 ksmd                                                                                                                                        
     39 root      39  19       0      0      0 S   0.0   0.0   0:00.00 khugepaged                                                                                                                                  
     41 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 crypto                                                                                                                                      
     42 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kintegrityd                                                                                                                                 
     43 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kblockd                                                                                                                                     
     44 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 md                                                                                                                                          
     45 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 edac-poller                                                                                                                                 
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 watchdogd                                                                                                                                   
     49 root      20   0       0      0      0 S   0.0   0.0   0:00.00 kswapd0                                                                                                                                     
    100 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kthrotld                                                                                                                                    
    101 root     -51   0       0      0      0 S   0.0   0.0   0:00.00 irq/41-pciehp                                                                                                                               
    102 root     -51   0       0      0      0 S   0.0   0.0   0:00.00 irq/42-pciehp                                                                                                                               
    103 root     -51   0       0      0      0 S   0.0   0.0   0:00.00 irq/43-pciehp                                                                                                                               
    104 root     -51   0       0      0      0 S   0.0   0.0   0:00.00 irq/44-pciehp                                                                                                                               
    105 root     -51   0       0      0      0 S   0.0   0.0   0:00.00 irq/45-pciehp                                                                                                                               
    106 root     -51   0       0      0      0 S   0.0   0.0   0:00.00 irq/47-pciehp                                                                                                                               
    107 root     -51   0       0      0      0 S   0.0   0.0   0:00.00 irq/50-pciehp                                                                                                                               

Checking iotop:

[root@pc-openeuler-1 ~]# iotop

Total DISK READ :	0.00 B/s | Total DISK WRITE :       0.00 B/s
Actual DISK READ:	0.00 B/s | Actual DISK WRITE:       0.00 B/s
    TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND                                                                                                                                            
      1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % systemd --switched-root --system --deserialize 16
      2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
      3 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_gp]
      4 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_par_gp]
      6 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/0:0H-kblockd]
      8 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [mm_percpu_wq]
      9 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
     10 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_sched]
     11 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_bh]
     12 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/0]
     13 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [cpuhp/0]
     14 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [cpuhp/1]
     15 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/1]
     16 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/1]
     18 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/1:0H-kblockd]
     19 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [cpuhp/2]
     20 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/2]
     21 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/2]
     23 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/2:0H]
     24 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [cpuhp/3]
     25 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/3]
     26 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/3]
     28 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/3:0H-kblockd]
     29 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kdevtmpfs]
     30 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [netns]
     31 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kauditd]
     34 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [khungtaskd]
     35 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [oom_reaper]
     36 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [writeback]
     37 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kcompactd0]
     38 be/5 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksmd]
     39 be/7 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [khugepaged]
     41 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [crypto]
     42 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kintegrityd]
     43 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kblockd]
     44 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [md]
     45 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [edac-poller]
     46 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdogd]
     49 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kswapd0]
    100 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthrotld]
    101 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [irq/41-pciehp]
    102 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [irq/42-pciehp]
    103 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [irq/43-pciehp]
    104 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [irq/44-pciehp]
    105 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [irq/45-pciehp]
    106 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [irq/47-pciehp]
    107 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [irq/50-pciehp]
    108 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [irq/53-pciehp]
    109 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [irq/55-pciehp]
    110 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [irq/57-pciehp]
    111 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [irq/59-pciehp]
    112 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [irq/61-pciehp]
    113 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [irq/63-pciehp]

The Message was reported again in real time:

[root@pc-openeuler-1 ~]# uname -a
Linux pc-openeuler-1 4.19.90-vhulk2001.1.0.0026.aarch64 #1 SMP Fri Feb 7 04:09:58 UTC 2020 aarch64 GNU/Linux
[root@pc-openeuler-1 ~]# ps -ef | grep find
root       21305   15560  0 17:14 pts/0    00:00:00 grep find
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# ps -ef | grep find
Message from syslogd@pc-openeuler-1 at Apr 21 17:15:21 ...
 kernel:[357748.172879] watchdog: BUG: soft lockup - CPU#3 stuck for 13s! [kworker/3:2:21034]

I looked at /proc:

[root@pc-openeuler-1 ~]# ls -l /proc/
total 0
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 1
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 10
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 100
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 101
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 102
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 103
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 104
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 105
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 106
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 107
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 108
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 109
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 11
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 110
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 111
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 112
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 113
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 115
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 116
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 117
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 12
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:57 1222
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:57 1232
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:57 1233
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:57 1259
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:57 1260
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:57 1269
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:57 1270
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 1277
dr-xr-xr-x.  9 avahi           avahi                         0 Apr 17 13:53 1294
dr-xr-xr-x.  9 dbus            dbus                          0 Apr 17 13:53 1296
dr-xr-xr-x.  9 polkitd         polkitd                       0 Apr 17 13:53 1299
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 13
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 1303
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 1306
dr-xr-xr-x.  9 chrony          chrony                        0 Apr 17 13:53 1307
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 1315
dr-xr-xr-x.  9 avahi           avahi                         0 Apr 17 13:57 1338
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 1374
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 1376
dr-xr-xr-x.  9 systemd-network systemd-network               0 Apr 17 13:53 1377
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 1396
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 14
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 1400
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 1424
dr-xr-xr-x.  9 root            root                          0 Apr 21 07:57 14981
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 15
dr-xr-xr-x.  9 root            root                          0 Apr 21 09:16 15150
dr-xr-xr-x.  9 root            root                          0 Apr 21 09:21 15157
dr-xr-xr-x.  9 root            root                          0 Apr 21 12:16 15437
dr-xr-xr-x.  9 root            root                          0 Apr 21 12:16 15544
dr-xr-xr-x.  9 root            root                          0 Apr 21 12:16 15551
dr-xr-xr-x.  9 root            root                          0 Apr 21 12:16 15553
dr-xr-xr-x.  9 root            root                          0 Apr 21 12:16 15559
dr-xr-xr-x.  9 root            root                          0 Apr 21 12:16 15560
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 16
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:57 1735
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 1763
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 18
dr-xr-xr-x.  9 root            root                          0 Apr 21 15:56 18880
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 19
dr-xr-xr-x.  9 root            root                          0 Apr 21 16:18 19866
dr-xr-xr-x.  9 root            root                          0 Apr 21 16:16 19877
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 2
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 20
dr-xr-xr-x.  9 root            root                          0 Apr 21 16:25 20200
dr-xr-xr-x.  9 root            root                          0 Apr 21 16:26 20204
dr-xr-xr-x.  9 root            root                          0 Apr 21 16:26 20205
dr-xr-xr-x.  9 root            root                          0 Apr 21 16:36 20448
dr-xr-xr-x.  9 root            root                          0 Apr 21 16:36 20452
dr-xr-xr-x.  9 root            root                          0 Apr 21 16:36 20453
dr-xr-xr-x.  9 root            root                          0 Apr 21 16:43 20795
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 21
dr-xr-xr-x.  9 root            root                          0 Apr 21 17:06 21031
dr-xr-xr-x.  9 root            root                          0 Apr 21 17:06 21034
dr-xr-xr-x.  9 root            root                          0 Apr 21 17:06 21035
dr-xr-xr-x.  9 root            root                          0 Apr 21 17:06 21036
dr-xr-xr-x.  9 root            root                          0 Apr 21 17:06 21082
dr-xr-xr-x.  9 root            root                          0 Apr 21 17:10 21209
dr-xr-xr-x.  9 root            root                          0 Apr 21 17:14 21303
dr-xr-xr-x.  9 root            root                          0 Apr 21 17:18 21448
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 23
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 24
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 25
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 26
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 28
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 29
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 3
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 30
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 31
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 339
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 34
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 340
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 341
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 348
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 35
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 36
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 37
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 38
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 39
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 4
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 403
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 41
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 415
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 42
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 421
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 422
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 43
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 44
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 45
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 46
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:53 486
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 49
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:57 498
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 6
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 8
dr-xr-xr-x.  9 root            root                          0 Apr 17 13:52 9
-r--r--r--.  1 root            root                          0 Apr 21 17:17 buddyinfo
dr-xr-xr-x.  4 root            root                          0 Apr 17 13:53 bus
-r--r--r--.  1 root            root                          0 Apr 17 13:52 cgroups
-r--r--r--.  1 root            root                          0 Apr 17 13:52 cmdline
-r--r--r--.  1 root            root                          0 Apr 21 17:17 consoles
-r--r--r--.  1 root            root                          0 Apr 17 13:53 cpuinfo
-r--r--r--.  1 root            root                          0 Apr 21 17:17 crypto
-r--r--r--.  1 root            root                          0 Apr 17 13:52 devices
lrwxrwxrwx.  1 root            root                         29 Apr 18 00:31 device-tree -> /sys/firmware/devicetree/base
-r--r--r--.  1 root            root                          0 Apr 21 16:03 diskstats
dr-xr-xr-x.  2 root            root                          0 Apr 18 14:25 driver
-r--r--r--.  1 root            root                          0 Apr 21 17:17 execdomains
-r--r--r--.  1 root            root                          0 Apr 21 17:17 fb
-r--r--r--.  1 root            root                          0 Apr 17 13:52 filesystems
dr-xr-xr-x.  5 root            root                          0 Apr 17 13:53 fs
-r--r--r--.  1 root            root                          0 Apr 17 13:53 interrupts
-r--r--r--.  1 root            root                          0 Apr 17 13:53 iomem
-r--r--r--.  1 root            root                          0 Apr 21 17:17 ioports
dr-xr-xr-x. 52 root            root                          0 Apr 17 13:53 irq
-r--r--r--.  1 root            root                          0 Apr 17 13:53 kallsyms
-r--------.  1 root            root            140746078355456 Apr 17 13:52 kcore
-r--r--r--.  1 root            root                          0 Apr 21 17:17 keys
-r--r--r--.  1 root            root                          0 Apr 21 17:17 key-users
-r--------.  1 root            root                          0 Apr 21 17:17 kmsg
-r--------.  1 root            root                          0 Apr 21 17:17 kpagecgroup
-r--------.  1 root            root                          0 Apr 21 17:17 kpagecount
-r--------.  1 root            root                          0 Apr 21 17:17 kpageflags
dr-xr-xr-x.  2 root            root                          0 Apr 18 14:25 livepatch
-r--r--r--.  1 root            root                          0 Apr 17 13:53 loadavg
-r--r--r--.  1 root            root                          0 Apr 21 17:17 locks
-r--r--r--.  1 root            root                          0 Apr 21 17:17 mdstat
-r--r--r--.  1 root            root                          0 Apr 17 13:52 meminfo
-r--r--r--.  1 root            root                          0 Apr 17 13:52 misc
-r--r--r--.  1 root            root                          0 Apr 17 13:53 modules
lrwxrwxrwx.  1 root            root                         11 Apr 17 13:52 mounts -> self/mounts
-r--r--r--.  1 root            root                          0 Apr 21 17:17 mtd
lrwxrwxrwx.  1 root            root                          8 Apr 17 13:52 net -> self/net
-r--------.  1 root            root                          0 Apr 21 17:17 pagetypeinfo
-r--r--r--.  1 root            root                          0 Apr 21 17:17 partitions
-r--r--r--.  1 root            root                          0 Apr 21 15:43 sched_debug
-r--r--r--.  1 root            root                          0 Apr 21 17:17 schedstat
dr-xr-xr-x.  3 root            root                          0 Apr 17 13:53 scsi
lrwxrwxrwx.  1 root            root                          0 Jan  1  1970 self -> 21448
-r--------.  1 root            root                          0 Apr 21 17:17 slabinfo
-r--r--r--.  1 root            root                          0 Apr 21 17:17 softirqs
-r--r--r--.  1 root            root                          0 Apr 17 13:53 stat
-r--r--r--.  1 root            root                          0 Apr 17 13:52 swaps
dr-xr-xr-x.  1 root            root                          0 Apr 17 13:52 sys
--w-------.  1 root            root                          0 Apr 17 13:53 sysrq-trigger
dr-xr-xr-x.  2 root            root                          0 Apr 18 14:25 sysvipc
lrwxrwxrwx.  1 root            root                          0 Jan  1  1970 thread-self -> 21448/task/21448
-r--------.  1 root            root                          0 Apr 21 17:17 timer_list
dr-xr-xr-x.  4 root            root                          0 Apr 17 13:57 tty
-r--r--r--.  1 root            root                          0 Apr 17 13:57 uptime
-r--r--r--.  1 root            root                          0 Apr 21 17:17 version
-r--------.  1 root            root                          0 Apr 21 17:17 vmallocinfo
-r--r--r--.  1 root            root                          0 Apr 21 17:10 vmstat
-r--r--r--.  1 root            root                          0 Apr 21 17:17 zoneinfo
[root@pc-openeuler-1 ~]#
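
For a soft-lockup investigation, a few specific /proc entries are more telling than the directory listing itself. A minimal sampling sketch (PID 21034 is the kworker named in the lockup message above; everything else is standard procfs):

cat /proc/loadavg
cat /proc/interrupts        # run it twice a few seconds apart and compare which counters jump
cat /proc/softirqs          # same idea for the softirq counters
cat /proc/21034/stack       # kernel stack of the stuck kworker at the moment of sampling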

Traced kernel function calls with ftrace

[root@pc-openeuler-1 ~]# cd /sys/kernel/debug/tracing/
[root@pc-openeuler-1 tracing]# ls
available_events	    hwlat_detector   saved_cmdlines_size  stack_max_size      trace_pipe
available_filter_functions  instances	     saved_tgids	  stack_trace	      tracing_cpumask
available_tracers	    kprobe_events    set_event		  stack_trace_filter  tracing_max_latency
buffer_size_kb		    kprobe_profile   set_event_pid	  synthetic_events    tracing_on
buffer_total_size_kb	    max_graph_depth  set_ftrace_filter	  timestamp_mode      tracing_thresh
current_tracer		    options	     set_ftrace_notrace   trace		      uprobe_events
dyn_ftrace_total_info	    per_cpu	     set_ftrace_pid	  trace_clock	      uprobe_profile
enabled_functions	    printk_formats   set_graph_function   trace_marker
events			    README	     set_graph_notrace	  trace_marker_raw
free_buffer		    saved_cmdlines   snapshot		  trace_options
[root@pc-openeuler-1 tracing]# 
[root@pc-openeuler-1 tracing]# cat available_tracers 
hwlat blk function_graph wakeup_dl wakeup_rt wakeup function nop
[root@pc-openeuler-1 tracing]# 
[root@pc-openeuler-1 tracing]# cat current_tracer 
nop
[root@pc-openeuler-1 tracing]# echo function > current_tracer 
[root@pc-openeuler-1 tracing]# cat current_tracer             
function
[root@pc-openeuler-1 tracing]# cat trace | head -n 20
# tracer: function
#
# entries-in-buffer/entries-written: 210704/1172169   #P:4
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
      irqbalance-1306  [002] .... 358490.143816: show_ipi_list <-arch_show_interrupts
      irqbalance-1306  [002] .... 358490.143816: seq_printf <-show_ipi_list
      irqbalance-1306  [002] .... 358490.143816: seq_vprintf <-seq_printf
      irqbalance-1306  [002] .... 358490.143817: seq_printf <-show_ipi_list
      irqbalance-1306  [002] .... 358490.143817: seq_vprintf <-seq_printf
      irqbalance-1306  [002] .... 358490.143818: seq_printf <-show_ipi_list
      irqbalance-1306  [002] .... 358490.143818: seq_vprintf <-seq_printf
      irqbalance-1306  [002] .... 358490.143819: seq_printf <-show_ipi_list
      irqbalance-1306  [002] .... 358490.143819: seq_vprintf <-seq_printf
[root@pc-openeuler-1 tracing]# 
[root@pc-openeuler-1 tracing]# echo function_graph > current_tracer 
[root@pc-openeuler-1 tracing]# cat trace | head -n 20               
# tracer: function_graph
#
# CPU  DURATION                  FUNCTION CALLS
# |     |   |                     |   |   |   |
 2)               |    tick_nohz_idle_stop_tick() {
 2)   0.360 us    |      can_stop_idle_tick.isra.1();
 2)               |      tick_nohz_next_event() {
 2)   0.380 us    |        rcu_needs_cpu();
 2)               |        get_next_timer_interrupt() {
 2)   0.480 us    |          __next_timer_interrupt();
 2)   0.600 us    |          hrtimer_get_next_event();
 2)   2.800 us    |        }
 2)   0.340 us    |        timekeeping_max_deferment();
 2)   5.600 us    |      }
 2)               |      tick_nohz_stop_tick() {
 2)   0.340 us    |        calc_load_nohz_start();
 2)   0.340 us    |        cpu_load_update_nohz_start();
 2)   0.340 us    |        quiet_vmstat();
 2)               |        hrtimer_start_range_ns() {
 2)   0.360 us    |          lock_hrtimer_base.isra.1();
[root@pc-openeuler-1 tracing]#
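
Leaving the function or function_graph tracer enabled for every kernel function adds a lot of overhead on an already struggling guest. The trace can instead be narrowed to the notify path that keeps showing up in the lockup backtraces; a minimal sketch, run from the same /sys/kernel/debug/tracing directory and assuming the symbols are listed in available_filter_functions:

echo nop > current_tracer                      # stop the previous trace
echo > trace                                   # clear the ring buffer
echo vp_notify > set_ftrace_filter             # only trace the suspect function
echo ':mod:virtio_ring' >> set_ftrace_filter   # plus everything in the virtio_ring module
echo function > current_tracer
cat trace_pipe | head -n 50                    # stream a short sample, then Ctrl-C
echo nop > current_tracer                      # restore the nop tracer when done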

Messages reported in real time

[root@pc-openeuler-1 tracing]#    
Message from syslogd@pc-openeuler-1 at Apr 21 18:00:39 ...
 kernel:[360450.162283] watchdog: BUG: soft lockup - CPU#0 stuck for 11s! [systemd:1]

Message from syslogd@pc-openeuler-1 at Apr 21 18:30:37 ...
 kernel:[362252.300305] watchdog: BUG: soft lockup - CPU#3 stuck for 15s! [kworker/3:2:21034]

The Messages persist

[root@pc-openeuler-1 tracing]# 
Message from syslogd@pc-openeuler-1 at Apr 21 19:00:25 ...
 kernel:[364043.641048] watchdog: BUG: soft lockup - CPU#0 stuck for 11s! [in:imjournal:1327]

Message from syslogd@pc-openeuler-1 at Apr 21 19:00:33 ...
 kernel:[364044.161561] watchdog: BUG: soft lockup - CPU#1 stuck for 11s! [crond:22012]

Message from syslogd@pc-openeuler-1 at Apr 21 19:01:47 ...
 kernel:[364107.280553] watchdog: BUG: soft lockup - CPU#0 stuck for 14s! [kworker/u8:0:22015]

Message from syslogd@pc-openeuler-1 at Apr 21 19:01:48 ...
 kernel:[364113.221898] watchdog: BUG: soft lockup - CPU#2 stuck for 19s! [crond:22021]

Real-time update: Messages are still being reported

[root@pc-openeuler-1 tracing]# 
Message from syslogd@pc-openeuler-1 at Apr 21 19:20:25 ...
 kernel:[365244.886399] watchdog: BUG: soft lockup - CPU#0 stuck for 11s! [kworker/0:0:21036]

Message from syslogd@pc-openeuler-1 at Apr 21 19:50:56 ...
 kernel:[367073.585774] watchdog: BUG: soft lockup - CPU#3 stuck for 12s! [kworker/3:2:21034]

Message from syslogd@pc-openeuler-1 at Apr 21 19:51:02 ...
 kernel:[367073.585780] watchdog: BUG: soft lockup - CPU#0 stuck for 17s! [kworker/0:0:21036]

Message from syslogd@pc-openeuler-1 at Apr 21 19:51:04 ...
 kernel:[367083.948643] watchdog: BUG: soft lockup - CPU#1 stuck for 11s! [kworker/1:2:15437]

Message from syslogd@pc-openeuler-1 at Apr 21 20:01:31 ...
 kernel:[367708.053723] watchdog: BUG: soft lockup - CPU#2 stuck for 11s! [swapper/2:0]

Checked the watchdogd and multipathd processes

[root@pc-openeuler-1 tracing]# ps -aux | grep watchdogd
root          46  0.0  0.0      0     0 ?        S    Apr17   0:00 [watchdogd]
root       22113 35.0  0.0 214016  1600 pts/0    S+   20:07   0:00 grep watchdogd
[root@pc-openeuler-1 tracing]# ps -aux | grep multipathd
root       22127 12.0  0.0 214016  1536 pts/0    S+   20:07   0:00 grep multipathd
[root@pc-openeuler-1 tracing]#
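
For what it is worth, the "watchdog" in these log lines is the kernel's built-in soft-lockup detector, not the [watchdogd] kthread and not a userspace watchdog or multipathd daemon. Its settings are plain sysctls; a quick check (a sketch using the standard upstream names):

sysctl kernel.watchdog kernel.soft_watchdog kernel.watchdog_thresh
cat /proc/sys/kernel/watchdog_thresh   # the reporting threshold, in seconds, is derived from this value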

New finding

[root@pc-openeuler-1 ~]# ps -aux | grep watchdogd 
root          46  0.0  0.0      0     0 ?        S    Apr17   0:00 [watchdogd]
root       22187 84.0  0.0 214016  1536 pts/0    S+   20:36   0:00 grep watchdogd
[root@pc-openeuler-1 ~]# ps -aux | grep multipathd
root       22201  0.0  0.0 214016  1536 pts/0    S+   20:36   0:00 grep multipathd
[root@pc-openeuler-1 ~]# top -cbp 46  
top - 20:37:52 up 4 days,  6:45,  1 user,  load average: 0.66, 0.54, 0.49
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.3 us,  4.5 sy,  0.0 ni, 93.2 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:37:55 up 4 days,  6:45,  1 user,  load average: 0.66, 0.54, 0.49
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  0.2 sy,  0.0 ni, 99.4 id,  0.0 wa,  0.1 hi,  0.1 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:37:58 up 4 days,  6:45,  1 user,  load average: 0.61, 0.53, 0.48
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.1 us,  0.2 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.1 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:01 up 4 days,  6:45,  1 user,  load average: 0.61, 0.53, 0.48
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.2 us,  0.1 sy,  0.0 ni, 99.6 id,  0.0 wa,  0.1 hi,  0.1 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:04 up 4 days,  6:45,  1 user,  load average: 0.56, 0.52, 0.48
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.1 sy,  0.0 ni, 99.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:07 up 4 days,  6:45,  1 user,  load average: 0.52, 0.51, 0.48
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.2 us,  0.2 sy,  0.0 ni, 99.6 id,  0.0 wa,  0.1 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:10 up 4 days,  6:45,  1 user,  load average: 0.52, 0.51, 0.48
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.2 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.0 hi,  0.1 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:13 up 4 days,  6:45,  1 user,  load average: 0.47, 0.50, 0.47
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.2 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.1 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:21 up 4 days,  6:45,  1 user,  load average: 0.60, 0.53, 0.48
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.4 us,  3.3 sy,  0.0 ni, 59.6 id,  0.0 wa, 19.1 hi, 15.6 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:25 up 4 days,  6:45,  1 user,  load average: 0.55, 0.52, 0.48
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.5 us,  1.0 sy,  0.0 ni, 93.7 id,  0.0 wa,  3.4 hi,  0.4 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:28 up 4 days,  6:45,  1 user,  load average: 0.50, 0.51, 0.48
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.2 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.1 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:28 up 4 days,  6:45,  1 user,  load average: 0.50, 0.51, 0.48
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  1.6 sy,  0.0 ni, 96.8 id,  0.0 wa,  0.0 hi,  1.6 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:28 up 4 days,  6:45,  1 user,  load average: 0.50, 0.51, 0.48
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us, 50.0 sy,  0.0 ni, 50.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:29 up 4 days,  6:45,  1 user,  load average: 0.50, 0.51, 0.48
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:29 up 4 days,  6:45,  1 user,  load average: 0.50, 0.51, 0.48
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  4.8 sy,  0.0 ni, 95.2 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:32 up 4 days,  6:45,  1 user,  load average: 0.50, 0.51, 0.48
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.1 us,  0.2 sy,  0.0 ni, 99.6 id,  0.0 wa,  0.1 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:35 up 4 days,  6:45,  1 user,  load average: 0.46, 0.50, 0.47
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.2 sy,  0.0 ni, 95.1 id,  0.0 wa,  4.6 hi,  0.1 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]


top - 20:38:38 up 4 days,  6:45,  1 user,  load average: 0.43, 0.49, 0.47
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.7 us,  0.3 sy,  0.0 ni, 95.4 id,  0.0 wa,  3.6 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:46 up 4 days,  6:45,  1 user,  load average: 1.19, 0.65, 0.52
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s): 10.4 us,  7.5 sy,  0.0 ni,  0.7 id,  0.0 wa, 61.6 hi, 19.8 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:50 up 4 days,  6:45,  1 user,  load average: 1.50, 0.72, 0.55
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.9 us,  0.4 sy,  0.0 ni, 97.0 id,  0.0 wa,  0.5 hi,  0.1 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:53 up 4 days,  6:46,  1 user,  load average: 1.38, 0.71, 0.54
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.1 sy,  0.0 ni, 99.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:56 up 4 days,  6:46,  1 user,  load average: 1.38, 0.71, 0.54
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.2 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.1 hi,  0.1 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:38:59 up 4 days,  6:46,  1 user,  load average: 1.27, 0.70, 0.54
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.2 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:39:02 up 4 days,  6:46,  1 user,  load average: 1.17, 0.69, 0.54
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.2 us,  0.2 sy,  0.0 ni, 99.5 id,  0.0 wa,  0.1 hi,  0.1 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]

top - 20:39:05 up 4 days,  6:46,  1 user,  load average: 1.17, 0.69, 0.54
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.1 sy,  0.0 ni, 99.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5260.9 free,    409.9 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5406.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]^C
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# 

[root@pc-openeuler-1 ~]# ps -ef | grep watchdogd
root          46       2  0 Apr17 ?        00:00:00 [watchdogd]
root       22254   15560 46 20:41 pts/0    00:00:07 grep watchdogd
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# top -cbp 46
top - 20:42:34 up 4 days,  6:49,  1 user,  load average: 2.06, 1.50, 0.89
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  3.1 sy,  0.0 ni, 96.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5262.8 free,    408.1 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5408.7 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]^C
^C
[root@pc-openeuler-1 ~]# strace -cp 46
strace: attach: ptrace(PTRACE_SEIZE, 46): Operation not permitted
[root@pc-openeuler-1 ~]#

New finding

[root@pc-openeuler-1 ~]# ps -aux | grep watchdogd 
root          46  0.0  0.0      0     0 ?        S    Apr17   0:00 [watchdogd]
root       22187 84.0  0.0 214016  1536 pts/0    S+   20:36   0:00 grep watchdogd
[root@pc-openeuler-1 ~]# ps -aux | grep multipathd
root       22201  0.0  0.0 214016  1536 pts/0    S+   20:36   0:00 grep multipathd
[root@pc-openeuler-1 ~]# top -cbp 46  
top - 20:37:52 up 4 days,  6:45,  1 user,  load average: 0.66, 0.54, 0.49

。。。

@coding

Did you find any problem here?

[root@pc-openeuler-1 ~]# ps -ef | grep watchdogd
root          46       2  0 Apr17 ?        00:00:00 [watchdogd]
root       22254   15560 46 20:41 pts/0    00:00:07 grep watchdogd
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# top -cbp 46
top - 20:42:34 up 4 days,  6:49,  1 user,  load average: 2.06, 1.50, 0.89
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  3.1 sy,  0.0 ni, 96.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5262.8 free,    408.1 used,   1143.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5408.7 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 [watchdogd]^C
^C
[root@pc-openeuler-1 ~]# strace -cp 46
strace: attach: ptrace(PTRACE_SEIZE, 46): Operation not permitted
[root@pc-openeuler-1 ~]#

@coding

watchdogd is a kernel thread, so strace cannot attach to it.
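
Since ptrace/strace cannot attach, one way to see what every CPU was actually executing around a lockup is the magic SysRq backtrace trigger (a sketch; assumes SysRq has not been disabled in this image):

echo 1 > /proc/sys/kernel/sysrq   # enable all SysRq functions if they are restricted
echo l > /proc/sysrq-trigger      # dump backtraces of all active CPUs into the kernel log
dmesg | tail -n 120               # read the backtraces back out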

[root@pc-openeuler-1 ~]# pstack 46          
[root@pc-openeuler-1 ~]# 

Message from syslogd@pc-openeuler-1 at Apr 21 21:01:27 ...
 kernel:[371308.844919] watchdog: BUG: soft lockup - CPU#0 stuck for 14s! [run-parts:22456]
[root@pc-openeuler-1 ~]#                                                                                      
[root@pc-openeuler-1 ~]# ps -flp 46
F S UID          PID    PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
1 S root          46       2  0 -40   - -     0 kthrea Apr17 ?        00:00:00 [watchdogd]
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# pstack 46 
[root@pc-openeuler-1 ~]# ^C
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# ps -flp 46
F S UID          PID    PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
1 S root          46       2  0 -40   - -     0 kthrea Apr17 ?        00:00:00 [watchdogd]
[root@pc-openeuler-1 ~]# 

[root@pc-openeuler-1 ~]# cat /proc/46/wchan 
kthread_worker_fn[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]#

@Xie XiuQi

[root@pc-openeuler-1 ~]# cat /proc/46/wchan    
kthread_worker_fn
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# cat /proc/46/status 
Name:	watchdogd
Umask:	0000
State:	S (sleeping)
Tgid:	46
Ngid:	0
Pid:	46
PPid:	2
TracerPid:	0
Uid:	0	0	0	0
Gid:	0	0	0	0
FDSize:	64
Groups:	 
NStgid:	46
NSpid:	46
NSpgid:	0
NSsid:	0
Threads:	1
SigQ:	0/6276
SigPnd:	0000000000000000
ShdPnd:	0000000000000000
SigBlk:	0000000000000000
SigIgn:	ffffffffffffffff
SigCgt:	0000000000000000
CapInh:	0000000000000000
CapPrm:	0000003fffffffff
CapEff:	0000003fffffffff
CapBnd:	0000003fffffffff
CapAmb:	0000000000000000
NoNewPrivs:	0
Seccomp:	0
Speculation_Store_Bypass:	unknown
Cpus_allowed:	f
Cpus_allowed_list:	0-3
Mems_allowed:	0001
Mems_allowed_list:	0
voluntary_ctxt_switches:	2
nonvoluntary_ctxt_switches:	0
[root@pc-openeuler-1 ~]#

@Xie XiuQi

[root@pc-openeuler-1 ~]# cat /proc/46/sched  
watchdogd (46, #threads: 1)
-------------------------------------------------------------------
se.exec_start                                :           690.323500
se.vruntime                                  :             0.000000
se.sum_exec_runtime                          :             0.049360
se.nr_migrations                             :                    1
nr_switches                                  :                    2
nr_voluntary_switches                        :                    2
nr_involuntary_switches                      :                    0
se.load.weight                               :              1048576
se.runnable_weight                           :              1048576
se.avg.load_sum                              :                47058
se.avg.runnable_load_sum                     :                47058
se.avg.util_sum                              :                    0
se.avg.load_avg                              :                 1024
se.avg.runnable_load_avg                     :                 1024
se.avg.util_avg                              :                    0
se.avg.last_update_time                      :            690311168
se.avg.util_est.ewma                         :                    0
se.avg.util_est.enqueued                     :                    0
policy                                       :                    1
prio                                         :                    0
clock-delta                                  :                   40
numa_pages_migrated                          :                    0
numa_preferred_nid                           :                   -1
total_numa_faults                            :                    0
current_node=0, numa_group_id=0
numa_faults node=0 task_private=0 task_shared=0 group_private=0 group_shared=0
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# cat /proc/46/schedstat 
49360 17260 2
[root@pc-openeuler-1 ~]#

@Xie XiuQi

[root@pc-openeuler-1 ~]# cat /proc/46/syscall   
0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# ls /usr/include/asm/             
auxvec.h	  fcntl.h   kvm.h	perf_regs.h    sembuf.h      signal.h	swab.h	    unistd.h
bitsperlong.h	  hwcap.h   kvm_para.h	poll.h	       setup.h	     socket.h	termbits.h
bpf_perf_event.h  ioctl.h   mman.h	posix_types.h  shmbuf.h      sockios.h	termios.h
byteorder.h	  ioctls.h  msgbuf.h	ptrace.h       sigcontext.h  statfs.h	types.h
errno.h		  ipcbuf.h  param.h	resource.h     siginfo.h     stat.h	ucontext.h
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# grep 0 /usr/include/asm/unistd.h 
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
 * Copyright (C) 2012 ARM Ltd.
[root@pc-openeuler-1 ~]# 

[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# cat /proc/46/stack               
[<0>] __switch_to+0xe4/0x148
[<0>] kthread_worker_fn+0x208/0x240
[<0>] kthread+0x134/0x138
[<0>] ret_from_fork+0x10/0x18
[<0>] 0xffffffffffffffff
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# ps -fp 46
UID          PID    PPID  C STIME TTY          TIME CMD
root          46       2  0 Apr17 ?        00:00:00 [watchdogd]
[root@pc-openeuler-1 ~]#

@Xie XiuQi

[root@pc-openeuler-1 ~]# kill -9 46
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# ps -fp 46              
UID          PID    PPID  C STIME TTY          TIME CMD
root          46       2  0 Apr17 ?        00:00:00 [watchdogd]
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# ps -fp 46
UID          PID    PPID  C STIME TTY          TIME CMD
root          46       2  0 Apr17 ?        00:00:00 [watchdogd]
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# kill -9 46
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# ps -fp 46 
UID          PID    PPID  C STIME TTY          TIME CMD
root          46       2  0 Apr17 ?        00:00:00 [watchdogd]
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]#

It cannot be killed: kernel threads ignore signals, including SIGKILL (note SigIgn is ffffffffffffffff in /proc/46/status above).

@Xie XiuQi

[root@pc-openeuler-1 ~]# cat /proc/46/stack     

Message from syslogd@pc-openeuler-1 at Apr 21 21:30:32 ...
 kernel:[373045.191273] watchdog: BUG: soft lockup - CPU#2 stuck for 12s! [crond:23098]

Message from syslogd@pc-openeuler-1 at Apr 21 21:30:49 ...
 kernel:[373045.843894] watchdog: BUG: soft lockup - CPU#0 stuck for 13s! [sshd:15559]

Message from syslogd@pc-openeuler-1 at Apr 21 21:30:53 ...
 kernel:[373046.632763] watchdog: BUG: soft lockup - CPU#1 stuck for 13s! [systemd-network:1377]

The Messages appeared again

[root@pc-openeuler-1 ~]# ps -ef | grep syslogd  
root        1303       1  8 Apr17 ?        09:16:58 /usr/sbin/rsyslogd -n -iNONE
root       23133   15560  0 21:33 pts/0    00:00:00 grep syslogd
[root@pc-openeuler-1 ~]# ps -fp 1303
UID          PID    PPID  C STIME TTY          TIME CMD
root        1303       1  8 Apr17 ?        09:17:07 /usr/sbin/rsyslogd -n -iNONE
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]# cat /proc/46/stack
[<0>] __switch_to+0xe4/0x148
[<0>] kthread_worker_fn+0x208/0x240
[<0>] kthread+0x134/0x138
[<0>] ret_from_fork+0x10/0x18
[<0>] 0xffffffffffffffff
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]# cat /proc/1303/stack
[<0>] __switch_to+0xe4/0x148
[<0>] do_select+0x440/0x658
[<0>] core_sys_select+0x200/0x438
[<0>] __arm64_sys_pselect6+0x290/0x2e8
[<0>] el0_svc_common+0x78/0x130
[<0>] el0_svc_handler+0x38/0x78
[<0>] el0_svc+0x8/0xc
[<0>] 0xffffffffffffffff
[root@pc-openeuler-1 ~]# cat /proc/1303/syscall 
72 0x1 0x0 0x0 0x0 0xffffcb39a5d8 0x0 0xffffcb39a580 0xfffc2fb521c0
[root@pc-openeuler-1 ~]# grep 72 /usr/include/asm/unistd.h 
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]# grep 72 /usr/include/asm/unistd.h 
[root@pc-openeuler-1 ~]# 
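
The grep comes back empty because on aarch64 /usr/include/asm/unistd.h only pulls in the generic syscall table; the numeric IDs are normally defined in asm-generic/unistd.h. A sketch, assuming the kernel headers package lays the files out in the usual way:

grep -w 72 /usr/include/asm-generic/unistd.h
# expect a line like: #define __NR_pselect6 72
# which matches the __arm64_sys_pselect6 frame in /proc/1303/stack above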

Message from syslogd@pc-openeuler-1 at Apr 21 21:41:08 ...
 kernel:[373679.994219] watchdog: BUG: soft lockup - CPU#3 stuck for 11s! [kworker/3:2:21034]

Message from syslogd@pc-openeuler-1 at Apr 21 21:41:13 ...
 kernel:[373680.317773] watchdog: BUG: soft lockup - CPU#0 stuck for 19s! [kworker/0:0:21036]
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]#

The Messages keep coming

[root@pc-openeuler-1 ~]# ps -ef | grep watchdogd
root          46       2  0 Apr17 ?        00:00:00 [watchdogd]
root       23585   15560 70 21:50 pts/0    00:00:02 grep watchdogd
[root@pc-openeuler-1 ~]# top -H -p 46
top - 21:51:35 up 4 days,  7:58,  1 user,  load average: 0.88, 1.04, 1.45
Threads:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.2 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.1 hi,  0.0 si,  0.0 st
MiB Mem :   6813.8 total,   5235.6 free,    410.4 used,   1167.9 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5404.7 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                              
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 watchdogd    

top - 21:53:01 up 4 days,  8:00,  1 user,  load average: 0.19, 0.76, 1.31
Threads:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.1 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.0 hi,  0.1 si,  0.0 st
MiB Mem :   6813.8 total,   5235.6 free,    410.4 used,   1167.9 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5404.7 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                              
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 watchdogd
[root@pc-openeuler-1 ~]# ps -ef | grep watchdogd
root          46       2  0 Apr17 ?        00:00:00 [watchdogd]
root       23585   15560 70 21:50 pts/0    00:00:02 grep watchdogd
[root@pc-openeuler-1 ~]# top -H -p 46
top - 21:52:24 up 4 days,  7:59,  1 user,  load average: 0.38, 0.87, 1.37
Threads:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.1 hi,  0.1 si,  0.0 st
MiB Mem :   6813.8 total,   5235.6 free,    410.4 used,   1167.9 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5404.7 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                              
     46 root      rt   0       0      0      0 S   0.0   0.0   0:00.00 watchdogd                            

Listed all loaded modules

[root@pc-openeuler-1 ~]# lsmod
Module                  Size  Used by
binfmt_misc           262144  1
ip6t_rpfilter         262144  1
ip6t_REJECT           262144  2
nf_reject_ipv6        262144  1 ip6t_REJECT
ipt_REJECT            262144  2
nf_reject_ipv4        262144  1 ipt_REJECT
xt_conntrack          262144  13
ebtable_nat           262144  1
ip6table_nat          262144  1
nf_nat_ipv6           262144  1 ip6table_nat
ip6table_mangle       262144  1
ip6table_raw          262144  1
ip6table_security     262144  1
iptable_nat           262144  1
nf_nat_ipv4           262144  1 iptable_nat
nf_nat                262144  2 nf_nat_ipv6,nf_nat_ipv4
iptable_mangle        262144  1
iptable_raw           262144  1
iptable_security      262144  1
nf_conntrack          327680  4 xt_conntrack,nf_nat,nf_nat_ipv6,nf_nat_ipv4
nf_defrag_ipv6        262144  1 nf_conntrack
nf_defrag_ipv4        262144  1 nf_conntrack
libcrc32c             262144  2 nf_conntrack,nf_nat
ip_set                262144  0
nfnetlink             262144  1 ip_set
ebtable_filter        262144  1
ebtables              262144  2 ebtable_nat,ebtable_filter
ip6table_filter       262144  1
ip6_tables            262144  7 ip6table_filter,ip6table_raw,ip6table_nat,ip6table_mangle,ip6table_security
iptable_filter        262144  1
vfat                  262144  1
fat                   262144  1 vfat
dm_multipath          262144  0
aes_ce_blk            262144  0
crypto_simd           262144  1 aes_ce_blk
cryptd                262144  1 crypto_simd
aes_ce_cipher         262144  1 aes_ce_blk
ghash_ce              262144  0
sha2_ce               262144  0
sha256_arm64          262144  1 sha2_ce
sha1_ce               262144  0
ofpart                262144  0
cmdlinepart           262144  0
sg                    262144  0
virtio_input          262144  0
cfi_cmdset_0001       262144  2
virtio_balloon        262144  0
cfi_probe             262144  0
cfi_util              262144  2 cfi_probe,cfi_cmdset_0001
gen_probe             262144  1 cfi_probe
physmap_of            262144  0
chipreg               262144  2 physmap_of,cfi_probe
uio_pdrv_genirq       262144  0
mtd                   262144  4 cmdlinepart,physmap_of,ofpart
uio                   262144  1 uio_pdrv_genirq
sch_fq_codel          262144  2
ip_tables             262144  5 iptable_filter,iptable_security,iptable_raw,iptable_nat,iptable_mangle
ext4                  851968  3
mbcache               262144  1 ext4
jbd2                  262144  1 ext4
sd_mod                262144  4
virtio_net            262144  0
net_failover          262144  1 virtio_net
virtio_scsi           262144  3
virtio_gpu            262144  1
failover              262144  1 net_failover
virtio_mmio           262144  0
virtio_pci            262144  0
virtio_ring           262144  7 virtio_mmio,virtio_balloon,virtio_scsi,virtio_input,virtio_gpu,virtio_pci,virtio_net
virtio                262144  7 virtio_mmio,virtio_balloon,virtio_scsi,virtio_input,virtio_gpu,virtio_pci,virtio_net
dm_mirror             262144  0
dm_region_hash        262144  1 dm_mirror
dm_log                262144  2 dm_region_hash,dm_mirror
dm_mod                327680  9 dm_multipath,dm_log,dm_mirror
[root@pc-openeuler-1 ~]#

Checked the printk output with the dmesg command
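
For reference, the same soft-lockup records can be pulled with human-readable timestamps and pre-filtered (a sketch; journalctl is assumed to be usable here, since in:imjournal shows up in the lockup messages):

dmesg -T | grep -i 'soft lockup'
journalctl -k | grep -i 'soft lockup'

Excerpts captured from the buffer: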

[346062.993562] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [kworker/0:0:14989]
[346063.090304] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
[346066.664181]  dm_region_hash dm_log dm_mod
[346067.055975] CPU: 0 PID: 14989 Comm: kworker/0:0 Kdump: loaded Tainted: G         C   L    4.19.90-vhulk2001.1.0.0026.aarch64 #1
[346067.076928] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
[346067.090846] Workqueue: events_freezable update_balloon_stats_func [virtio_balloon]
[346067.103875] pstate: 00000005 (nzcv daif -PAN -UAO)
[346067.120772] pc : vp_notify+0x28/0x38 [virtio_pci]
[346067.135242] lr : virtqueue_kick+0x3c/0x78 [virtio_ring]
[346067.145279] sp : ffff00000e2cfd00

[373060.164367] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
[373063.760164] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
[373063.834630]  dm_region_hash dm_log dm_mod
[373063.920678]  dm_region_hash dm_log dm_mod
[373063.923125] CPU: 2 PID: 23098 Comm: crond Kdump: loaded Tainted: G         C   L    4.19.90-vhulk2001.1.0.0026.aarch64 #1
[373063.944445] CPU: 0 PID: 15559 Comm: sshd Kdump: loaded Tainted: G         C   L    4.19.90-vhulk2001.1.0.0026.aarch64 #1
[373063.950905] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
[373063.963750] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
[373063.967359] pstate: 00000005 (nzcv daif -PAN -UAO)
[373063.978235] pstate: 80000005 (Nzcv daif -PAN -UAO)
[373063.980572] pc : sched_clock+0x40/0x88
[373063.991197] pc : flush_work+0x0/0x38
[373063.993862] lr : sched_clock+0x40/0x88
[373064.004405] lr : tty_buffer_flush_work+0x20/0x30
[373064.006914] sp : ffff00000804f600
[373064.017473] sp : ffff00000c44f800
[373064.019953] x29: ffff00000804f600 x28: 000000000000fa80 
[373064.030509] x29: ffff00000c44f800 x28: 0000000000000000 
[373064.033074] x27: ffff80016d88e800 x26: 0000000000000048 
[373064.043441] x27: 0000000000000001 x26: 0000000000000000 
[373064.045961] x25: 0000000000000000 x24: 0000000000000028 
[373064.056242] x25: 0000000000002000 x24: 0000000000000000 
[373064.058715] x23: ffff0000092b70c8 x22: 00000000000000ac 
[373064.068988] x23: ffff800162e8f000 x22: ffff00000c44f9a8 
[373064.071389] x21: 0000000000000000 x20: ffff0000092b70c0 
[373064.081727] x21: ffff800162e8f000 x20: ffff00000c44f9a8 
[373064.104275] x19: ffff0000092b70c8 x18: 0000000000000000 
[373064.184870] x19: ffff8001668efe00 x18: 0000000000000000 
[373064.187345] x17: 0000000000000000 x16: 0000000000000000 
[373064.197661] x17: 0000000000000000 x16: 0000000000000000 
[373064.201186] x15: 0000000000000000 x14: cb73e42f88b7a687 
[373064.212928] x15: 0000000000000000 x14: 0000000000000000 
[373064.215816] x13: b3c5100000000000 x12: 3dcf040218507684 
[373064.226698] x13: 0000000000000000 x12: 0000000000000000 
[373064.229237] x11: ffff000002520600 x10: 0000000000005b68 
[373064.239772] x11: 0000000000000000 x10: 00000000000072c0 
[373064.242083] x9 : ffff800160047c00 x8 : 0000000000000018 
[373064.252136] x9 : ffff800160040e00 x8 : 0000000000000018 
[373064.254427] x7 : ffff800166625b78 x6 : ffff000009273708 
[373064.264612] x7 : ffff8001708a72d0 x6 : 0000000000000001 
[373064.266787] x5 : 0000000000000000 x4 : ffff000009273708 
[373064.276421] x5 : 00008001f6730000 x4 : 00000000000072c0 
[373064.278858] x3 : 0000000000000000 x2 : 0000000000000023 
[373064.288841] x3 : 00000000000072d8 x2 : 008c36560de8ef00 
[373064.291273] x1 : ffff00000804f5f0 x0 : 0000116722f5099f 
[373064.301321] x1 : 0000000000000000 x0 : ffff8001668efe08 
[373064.303749] Call trace:
[373064.313780] Call trace:
[373064.316218]  sched_clock+0x40/0x88
[373064.326262]  flush_work+0x0/0x38
[373064.328685]  trace_clock_local+0xc/0x18
[373064.338715]  n_tty_poll+0x13c/0x1d0
[373064.341119]  function_graph_enter+0x80/0x190
[373064.351100]  tty_poll+0x90/0xd0
[373064.353516]  prepare_ftrace_return+0x28/0x58
[373064.363449]  do_select+0x2b0/0x658
[373064.365809]  ftrace_graph_caller+0x1c/0x24
[373064.375689]  core_sys_select+0x200/0x438
[373064.378079]  iptable_mangle_hook+0x20/0x128 [iptable_mangle]
[373064.387938]  __arm64_sys_pselect6+0x290/0x2e8
[373064.390338]  nf_hook_slow+0x50/0x100
[373064.400205]  el0_svc_common+0x78/0x130
[373064.402556]  ip_output+0x11c/0x130
[373064.412261]  el0_svc_handler+0x38/0x78
[373064.414551]  ip_local_out+0x58/0x68
[373064.424202]  el0_svc+0x8/0xc
[373064.426448]  __ip_queue_xmit+0x12c/0x368
[373064.480996]  ip_queue_xmit+0x10/0x18
[373064.482692]  __tcp_transmit_skb+0x4f0/0xa28
[373064.484490]  __tcp_send_ack.part.14+0xb4/0x130
[373064.486321]  tcp_send_ack+0x34/0x40
[373064.488015]  __tcp_ack_snd_check+0x50/0x1d0
[373064.489793]  tcp_rcv_established+0x2f4/0x698
[373064.491597]  tcp_v4_do_rcv+0x178/0x228
[373064.493281]  tcp_v4_rcv+0xc8c/0xde0
[373064.494935]  ip_local_deliver_finish+0x84/0x288
[373064.496730]  ip_local_deliver+0x68/0x118
[373064.498473]  ip_rcv_finish+0x90/0xb0
[373064.500215]  ip_rcv+0x64/0xd8
[373064.501820]  __netif_receive_skb_one_core+0x68/0x90
[373064.503772]  __netif_receive_skb+0x28/0x80
[373064.505543]  netif_receive_skb_internal+0x54/0x100
[373064.507438]  napi_gro_receive+0xf8/0x170
[373064.509187]  receive_buf+0x15c/0x500 [virtio_net]
[373064.511051]  virtnet_poll+0x170/0x338 [virtio_net]
[373064.512923]  net_rx_action+0x178/0x400
[373064.514641]  __do_softirq+0x11c/0x31c
[373064.516332]  irq_exit+0x11c/0x128
[373064.517956]  __handle_domain_irq+0x6c/0xc0
[373064.519708]  gic_handle_irq+0x6c/0x170
[373064.521386]  el1_irq+0xb8/0x140
[373064.522972]  __arch_copy_to_user+0x180/0x21c
[373064.524728]  copy_page_to_iter+0xd0/0x320
[373064.526443]  generic_file_buffered_read+0x254/0x740
[373064.528296]  generic_file_read_iter+0x114/0x190
[373064.530118]  ext4_file_read_iter+0x5c/0x140 [ext4]
[373064.531972]  __vfs_read+0x11c/0x188
[373064.533614]  vfs_read+0x94/0x150
[373064.535221]  ksys_read+0x74/0xf0
[373064.536813]  __arm64_sys_read+0x24/0x30
[373064.538498]  el0_svc_common+0x78/0x130
[373064.540173]  el0_svc_handler+0x38/0x78
[373064.541840]  el0_svc+0x8/0xc
[373679.994219] watchdog: BUG: soft lockup - CPU#3 stuck for 11s! [kworker/3:2:21034]
[373680.317773] watchdog: BUG: soft lockup - CPU#0 stuck for 19s! [kworker/0:0:21036]
[373686.483476] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
[373687.655553] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
[373687.851687]  dm_region_hash dm_log dm_mod
[373688.245240]  dm_region_hash dm_log dm_mod
[373688.254562] CPU: 3 PID: 21034 Comm: kworker/3:2 Kdump: loaded Tainted: G         C   L    4.19.90-vhulk2001.1.0.0026.aarch64 #1
[373688.263482] CPU: 0 PID: 21036 Comm: kworker/0:0 Kdump: loaded Tainted: G         C   L    4.19.90-vhulk2001.1.0.0026.aarch64 #1
[373688.282701] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
[373688.302453] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
[373688.312720] Workqueue: events virtio_gpu_fb_dirty_work [virtio_gpu]
[373688.323287] Workqueue: events_freezable update_balloon_stats_func [virtio_balloon]
[373688.333455] pstate: 80000005 (Nzcv daif -PAN -UAO)
[373688.343978] pstate: 80000005 (Nzcv daif -PAN -UAO)
[373688.353791] pc : vp_notify+0x28/0x38 [virtio_pci]
[373688.363714] pc : vp_notify+0x28/0x38 [virtio_pci]
[373688.373441] lr : vp_notify+0x18/0x38 [virtio_pci]
[373688.383294] lr : vp_notify+0x18/0x38 [virtio_pci]
[373688.393043] sp : ffff000013a6fb50
[373688.403165] sp : ffff000010b8fd00
[373688.412829] x29: ffff000013a6fb50 x28: 0000000000000000 
[373688.422477] x29: ffff000010b8fd00 x28: 0000000000000000 
[373688.432407] x27: ffff000013a6fc90 x26: 0000000000000001 
[373688.442585] x27: 0000000000000000 x26: ffff000009296300 
[373688.533883] x25: 0000000000480020 x24: 0000000000000001 
[373688.544419] x25: 0000000000000000 x24: ffff80016deca028 
[373688.554674] x23: ffff800161c68a00 x22: 0000000000000010 
[373688.565065] x23: ffff80016deca000 x22: 0000000000000032 
[373688.575399] x21: 0000000000000000 x20: ffff80016c6f6000 
[373688.586122] x21: ffff80016ddd5000 x20: ffff80016deca020 
[373688.596567] x19: ffff80016c6f6000 x18: 0000000000000000 
[373688.607270] x19: ffff80016ddd5000 x18: 0000000000000000 
[373688.617882] x17: 0000000000000000 x16: 0000000000000000 
[373688.628566] x17: 0000000000000000 x16: 0000000000000000 
[373688.639147] x15: 0000000000000000 x14: 00000000000007ff 
[373688.649748] x15: ffff800169216f00 x14: 0000000000000000 
[373688.660042] x13: 0000000000000000 x12: 0000000000000000 
[373688.670628] x13: ffff000008a6e0e8 x12: 0000000000000000 
[373688.680740] x11: ffff000008a6e0c0 x10: 000000000000d508 
[373688.690962] x11: ffff000008a6e0c0 x10: 0000000000000ac8 
[373688.700985] x9 : ffff800160048200 x8 : 0000000000000018 
[373688.711121] x9 : ffff800160040e00 x8 : 0000000000000018 
[373688.721080] x7 : ffff800162e1d518 x6 : 0000000000000001 
[373688.731093] x7 : ffff800166a60ad8 x6 : 0000000000000001 
[373688.740815] x5 : 00008001f67c0000 x4 : 000000000000d508 
[373688.750576] x5 : 00008001f6730000 x4 : 0000000000000ac8 
[373688.760251] x3 : 000000000000d520 x2 : 008c36560de8ef00 
[373688.770207] x3 : 0000000000000ae0 x2 : 008c36560de8ef00 
[373688.779837] x1 : ffff00000fda3000 x0 : 0000000000000000 
[373688.789628] x1 : ffff00000fd03008 x0 : 0000000000000002 
[373688.799182] Call trace:
[373688.809100] Call trace:
[373688.818187]  vp_notify+0x28/0x38 [virtio_pci]
[373688.827224]  vp_notify+0x28/0x38 [virtio_pci]
[373688.836343]  virtqueue_kick+0x3c/0x78 [virtio_ring]
[373688.845569]  virtqueue_kick+0x3c/0x78 [virtio_ring]
[373688.854498]  virtio_gpu_queue_ctrl_buffer_locked+0x180/0x248 [virtio_gpu]
[373688.863407]  update_balloon_stats_func+0x90/0xc0 [virtio_balloon]
[373688.872522]  virtio_gpu_queue_ctrl_buffer+0x50/0x78 [virtio_gpu]
[373688.881825]  process_one_work+0x1b0/0x448
[373688.890876]  virtio_gpu_cmd_resource_flush+0x8c/0xb0 [virtio_gpu]
[373688.899759]  worker_thread+0x54/0x468
[373688.946220]  virtio_gpu_dirty_update+0x1b0/0x218 [virtio_gpu]
[373688.955306]  kthread+0x134/0x138
[373688.964455]  virtio_gpu_fb_dirty_work+0x3c/0x48 [virtio_gpu]
[373688.973295]  ret_from_fork+0x10/0x18
[373688.982442]  process_one_work+0x1b0/0x448
[373689.001666]  worker_thread+0x54/0x468
[373689.010761]  kthread+0x134/0x138
[373689.019700]  ret_from_fork+0x10/0x18
[root@pc-openeuler-1 ~]# strace -fp 46
strace: attach: ptrace(PTRACE_SEIZE, 46): Operation not permitted
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# fdisk -l
GPT PMBR size mismatch (125829119 != 167772159) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.
Disk /dev/sda: 80 GiB, 85899345920 bytes, 167772160 sectors
Disk model: QEMU HARDDISK   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 47AE6BAF-E4DA-48D8-AF88-9800C5702C9C

Device       Start       End   Sectors  Size Type
/dev/sda1     2048    411647    409600  200M EFI System
/dev/sda2   411648   2508799   2097152    1G Linux filesystem
/dev/sda3  2508800 125827071 123318272 58.8G Linux LVM


Disk /dev/mapper/openeuler-root: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/openeuler-home: 38.82 GiB, 41662021632 bytes, 81371136 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@pc-openeuler-1 ~]# 
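
A side note on the fdisk warnings above: "GPT PMBR size mismatch (125829119 != 167772159)" usually just means the virtual disk was grown after the GPT was written (125829120 sectors is 60 GiB, the device is now 80 GiB), so the backup GPT header is no longer at the end of the device. This is unrelated to the soft lockups, but if gdisk is installed it can be repaired non-destructively; a minimal sketch, assuming /dev/sda as shown above:

# relocate the backup GPT structures to the real end of the disk
sgdisk -e /dev/sda
# re-read the partition table and verify the warning is gone
partprobe /dev/sda && fdisk -l /dev/sda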
[root@pc-openeuler-1 ~]# ifconfig       
enp1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 15.0.0.12  netmask 255.255.255.0  broadcast 15.0.0.255
        ether fa:16:3e:e1:28:5c  txqueuelen 1000  (Ethernet)
        RX packets 224540  bytes 48437128 (46.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 77928  bytes 47811934 (45.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 6  bytes 504 (504.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 504 (504.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@pc-openeuler-1 ~]# ethtool -i enp1s0
driver: virtio_net
version: 1.0.0
firmware-version: 
expansion-rom-version: 
bus-info: 0000:01:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
[root@pc-openeuler-1 ~]#
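
The NIC is a plain virtio_net device, so the "large network latency / ssh drops out" symptom is more likely host-side contention than a guest driver problem. To put numbers on it, something like the following could be run from the guest (15.0.0.1 is only a guess at the gateway for the 15.0.0.0/24 subnet shown above; substitute the real next hop):

# sample round-trip time and packet loss towards the first hop
ping -c 100 -i 0.2 15.0.0.1
# check whether the interface itself is accumulating drops/errors
ip -s link show enp1s0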

Take a look at the kernel log since boot:

[root@pc-openeuler-1 ~]# dmesg |grep stuck
[128870.694299] watchdog: BUG: soft lockup - CPU#3 stuck for 27s! [kworker/3:3:8429]
[128870.694305] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [in:imjournal:1327]
[130067.792545] watchdog: BUG: soft lockup - CPU#1 stuck for 24s! [kworker/1:1:9346]
[130076.289237] watchdog: BUG: soft lockup - CPU#3 stuck for 42s! [kworker/3:3:8429]
[133473.581850] watchdog: BUG: soft lockup - CPU#3 stuck for 25s! [kworker/3:3:8429]
[133473.582078] watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [crond:1424]
[135359.002369] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [systemd-journal:9255]
[139790.355979] watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:2:9415]
[139802.211050] watchdog: BUG: soft lockup - CPU#3 stuck for 35s! [kworker/3:2:10258]
[148299.325471] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [in:imjournal:1327]
[148299.325659] watchdog: BUG: soft lockup - CPU#3 stuck for 27s! [kworker/3:2:10258]
[151125.634696] watchdog: BUG: soft lockup - CPU#3 stuck for 40s! [kworker/3:2:10258]
[151667.052727] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [gdbus:1345]
[153793.452831] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [kworker/0:2:9535]
[155857.012259] watchdog: BUG: soft lockup - CPU#0 stuck for 30s! [kworker/0:2:9535]
[156808.255160] watchdog: BUG: soft lockup - CPU#3 stuck for 24s! [kworker/3:2:10258]
[156859.640637] watchdog: BUG: soft lockup - CPU#0 stuck for 34s! [systemd-journal:9255]
[157443.036563] watchdog: BUG: soft lockup - CPU#3 stuck for 48s! [kworker/3:2:10258]
[158857.885466] watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [kworker/0:2:9535]
[159169.649188] watchdog: BUG: soft lockup - CPU#3 stuck for 37s! [kworker/3:2:10258]
[159579.779217] watchdog: BUG: soft lockup - CPU#3 stuck for 35s! [kworker/3:2:10258]
[162514.502111] watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [irqbalance:1306]
[162519.488658] watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [gmain:1387]
[162567.495478] watchdog: BUG: soft lockup - CPU#0 stuck for 30s! [irqbalance:1306]
[163783.836495] watchdog: BUG: soft lockup - CPU#3 stuck for 27s! [kworker/3:2:10258]
[164286.582186] watchdog: BUG: soft lockup - CPU#3 stuck for 25s! [kworker/3:2:10258]
[167886.034996] watchdog: BUG: soft lockup - CPU#3 stuck for 30s! [kworker/3:2:10258]
[168243.650181] watchdog: BUG: soft lockup - CPU#3 stuck for 31s! [kworker/3:2:10258]
[168491.682829] watchdog: BUG: soft lockup - CPU#3 stuck for 27s! [kworker/3:0:10716]
[168495.335455] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [gmain:1387]
[169097.929006] watchdog: BUG: soft lockup - CPU#3 stuck for 21s! [kworker/3:0:10716]
[172058.327343] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [jbd2/dm-0-8:421]
[172380.784948] watchdog: BUG: soft lockup - CPU#3 stuck for 24s! [kworker/3:0:10716]
[174746.710358] watchdog: BUG: soft lockup - CPU#0 stuck for 34s! [kworker/0:2:9535]
[174746.710410] watchdog: BUG: soft lockup - CPU#3 stuck for 30s! [kworker/3:0:10716]
[175318.907551] watchdog: BUG: soft lockup - CPU#3 stuck for 27s! [kworker/3:0:10716]
[175477.460468] watchdog: BUG: soft lockup - CPU#0 stuck for 32s! [tuned:1856]
[176050.228361] watchdog: BUG: soft lockup - CPU#3 stuck for 24s! [kworker/3:0:10716]
[176082.402569] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kworker/3:0:10716]
[176514.627397] watchdog: BUG: soft lockup - CPU#3 stuck for 42s! [kworker/3:0:10716]
[177278.778754] watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [kworker/3:0:10716]
[177278.779115] watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [in:imjournal:1327]
[178315.482262] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [kworker/0:2:9535]
[179157.323750] watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [systemd-journal:9255]
[179426.257803] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [kworker/0:2:9535]
[180631.651655] watchdog: BUG: soft lockup - CPU#3 stuck for 63s! [kworker/3:0:10716]
[181829.343981] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [kworker/0:0:10935]
[182781.566385] watchdog: BUG: soft lockup - CPU#3 stuck for 36s! [kworker/3:0:10716]
[182981.314382] watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [multipathd:1239]
[183011.080375] watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [kworker/3:0:10716]
[183018.243739] watchdog: BUG: soft lockup - CPU#1 stuck for 24s! [kworker/1:0:10862]
[183033.956106] watchdog: BUG: soft lockup - CPU#0 stuck for 29s! [multipathd:1239]
[184624.937194] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [systemd-journal:9255]
[185454.118628] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [kworker/0:0:10935]
[185759.707002] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [gmain:1387]
[185862.303287] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:0:10862]
[185872.473180] watchdog: BUG: soft lockup - CPU#3 stuck for 35s! [kworker/3:0:10716]
[185873.590524] watchdog: BUG: soft lockup - CPU#0 stuck for 37s! [kworker/0:0:10935]
[186144.235752] watchdog: BUG: soft lockup - CPU#3 stuck for 24s! [kworker/3:0:10716]
[186334.951796] watchdog: BUG: soft lockup - CPU#3 stuck for 30s! [kworker/3:0:10716]
[186567.624788] watchdog: BUG: soft lockup - CPU#3 stuck for 25s! [kworker/3:0:10716]
[187160.984935] watchdog: BUG: soft lockup - CPU#0 stuck for 37s! [kworker/0:0:10935]
[187162.059751] watchdog: BUG: soft lockup - CPU#3 stuck for 30s! [kworker/3:0:10716]
[187814.231070] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:0:10862]
[187819.887867] watchdog: BUG: soft lockup - CPU#0 stuck for 31s! [kworker/0:0:10935]
[187819.897111] watchdog: BUG: soft lockup - CPU#3 stuck for 31s! [kworker/3:0:10716]
[187874.230950] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:0:10862]
[187883.970423] watchdog: BUG: soft lockup - CPU#3 stuck for 34s! [kworker/3:0:10716]
[188454.072544] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kworker/3:1:11103]
[188566.618771] watchdog: BUG: soft lockup - CPU#3 stuck for 41s! [kworker/3:1:11103]
[188566.619047] watchdog: BUG: soft lockup - CPU#0 stuck for 56s! [kworker/0:2:11097]
[189029.661174] watchdog: BUG: soft lockup - CPU#3 stuck for 29s! [kworker/3:1:11103]
[189381.451840] watchdog: BUG: soft lockup - CPU#3 stuck for 33s! [kworker/3:1:11103]
[189617.022590] watchdog: BUG: soft lockup - CPU#3 stuck for 54s! [kworker/3:1:11103]
[189617.022609] watchdog: BUG: soft lockup - CPU#0 stuck for 40s! [systemd-journal:11102]
[189850.692760] watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [kworker/3:1:11103]
[190210.798148] watchdog: BUG: soft lockup - CPU#0 stuck for 37s! [kworker/0:2:11097]
[190306.246783] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:0]
[190435.146785] watchdog: BUG: soft lockup - CPU#3 stuck for 60s! [kworker/3:1:11103]
[190642.376525] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kworker/3:1:11103]
[191346.853680] watchdog: BUG: soft lockup - CPU#3 stuck for 34s! [kworker/3:1:11103]
[192376.198777] watchdog: BUG: soft lockup - CPU#0 stuck for 28s! [systemd:1]
[192462.217530] watchdog: BUG: soft lockup - CPU#1 stuck for 24s! [kworker/1:0:10862]
[192475.332515] watchdog: BUG: soft lockup - CPU#3 stuck for 37s! [kworker/3:1:11103]
[194138.214367] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:0:10862]
[194140.432350] watchdog: BUG: soft lockup - CPU#3 stuck for 24s! [kworker/3:1:11103]
[194625.750293] watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [multipathd:1239]
[194878.897237] watchdog: BUG: soft lockup - CPU#0 stuck for 30s! [jbd2/dm-0-8:421]
[199888.592950] watchdog: BUG: soft lockup - CPU#3 stuck for 28s! [kworker/3:1:11103]
[199888.593060] watchdog: BUG: soft lockup - CPU#0 stuck for 32s! [kworker/0:0:11255]
[201475.037098] watchdog: BUG: soft lockup - CPU#3 stuck for 30s! [kworker/3:1:11103]
[202493.337472] watchdog: BUG: soft lockup - CPU#3 stuck for 29s! [kworker/3:1:11103]
[203005.156440] watchdog: BUG: soft lockup - CPU#3 stuck for 25s! [kworker/3:1:11103]
[204917.086088] watchdog: BUG: soft lockup - CPU#3 stuck for 21s! [kworker/3:1:11103]
[207458.402133] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [crond:1424]
[208284.072103] watchdog: BUG: soft lockup - CPU#3 stuck for 24s! [kworker/3:2:11417]
[210602.171495] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:2:11419]
[212152.703107] watchdog: BUG: soft lockup - CPU#3 stuck for 36s! [kworker/3:2:11417]
[212950.237931] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [bash:11790]
[212954.660038] watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [kworker/0:0:11255]
[214738.160242] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:11779]
[214749.389826] watchdog: BUG: soft lockup - CPU#3 stuck for 34s! [kworker/3:2:11417]
[215316.624207] watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [multipathd:1239]
[215431.254970] watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [kworker/3:2:11417]
[217486.155822] watchdog: BUG: soft lockup - CPU#3 stuck for 28s! [kworker/3:0:11867]
[217579.023600] watchdog: BUG: soft lockup - CPU#3 stuck for 27s! [kworker/3:0:11867]
[218251.541736] watchdog: BUG: soft lockup - CPU#3 stuck for 38s! [kworker/3:0:11867]
[218259.413299] watchdog: BUG: soft lockup - CPU#0 stuck for 42s! [kworker/0:0:11255]
[218540.328702] watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [kworker/0:0:11255]
[220436.716664] watchdog: BUG: soft lockup - CPU#3 stuck for 28s! [kworker/3:0:11867]
[220436.716727] watchdog: BUG: soft lockup - CPU#0 stuck for 36s! [kworker/0:0:11255]
[220685.086569] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [jbd2/dm-0-8:421]
[220794.661758] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kworker/3:0:11867]
[221051.822463] watchdog: BUG: soft lockup - CPU#3 stuck for 24s! [kworker/3:0:11867]
[223721.182851] watchdog: BUG: soft lockup - CPU#3 stuck for 21s! [kworker/3:0:11867]
[226946.264203] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [dnf:12072]
[227283.923013] watchdog: BUG: soft lockup - CPU#3 stuck for 24s! [kworker/3:0:11867]
[227329.042977] watchdog: BUG: soft lockup - CPU#0 stuck for 28s! [swapper/0:0]
[229502.572246] watchdog: BUG: soft lockup - CPU#0 stuck for 33s! [jbd2/dm-0-8:421]
[229514.535201] watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [migration/2:20]
[229821.789426] watchdog: BUG: soft lockup - CPU#0 stuck for 73s! [gmain:1387]
[229821.968490] watchdog: BUG: soft lockup - CPU#3 stuck for 59s! [kworker/3:0:11867]
[234531.199273] watchdog: BUG: soft lockup - CPU#3 stuck for 35s! [kworker/3:0:11867]
[235064.122732] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [systemd-journal:11102]
[235097.398378] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [systemd-journal:11102]
[236592.312114] watchdog: BUG: soft lockup - CPU#3 stuck for 27s! [kworker/3:0:11867]
[238289.253720] watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [jbd2/dm-0-8:421]
[238756.029303] watchdog: BUG: soft lockup - CPU#3 stuck for 26s! [kworker/3:3:12015]
[239970.094922] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:11779]
[239971.426464] watchdog: BUG: soft lockup - CPU#0 stuck for 38s! [kworker/0:0:11255]
[239971.426552] watchdog: BUG: soft lockup - CPU#3 stuck for 34s! [kworker/3:3:12015]
[241705.377841] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [swapper/0:0]
[244002.189839] watchdog: BUG: soft lockup - CPU#3 stuck for 51s! [kworker/3:3:12015]
[244050.083863] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:11779]
[245257.248524] watchdog: BUG: soft lockup - CPU#3 stuck for 29s! [kworker/3:3:12015]
[245257.265565] watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [kworker/0:0:11255]
[252458.061986] watchdog: BUG: soft lockup - CPU#1 stuck for 21s! [kworker/1:2:12465]
[252474.310382] watchdog: BUG: soft lockup - CPU#3 stuck for 37s! [kworker/3:3:12015]
[258484.584783] watchdog: BUG: soft lockup - CPU#0 stuck for 43s! [kworker/0:1:12464]
[258484.643666] watchdog: BUG: soft lockup - CPU#3 stuck for 39s! [kworker/3:0:12608]
[259677.110212] watchdog: BUG: soft lockup - CPU#3 stuck for 36s! [kworker/3:0:12608]
[260125.177085] watchdog: BUG: soft lockup - CPU#0 stuck for 36s! [kworker/0:1:12464]
[260185.312565] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [kworker/0:1:12464]
[260245.571511] watchdog: BUG: soft lockup - CPU#3 stuck for 55s! [kworker/3:0:12608]
[260319.282342] watchdog: BUG: soft lockup - CPU#0 stuck for 31s! [kworker/0:1:12464]
[260423.132452] watchdog: BUG: soft lockup - CPU#3 stuck for 30s! [kworker/3:0:12608]
[260423.132538] watchdog: BUG: soft lockup - CPU#0 stuck for 36s! [kworker/0:1:12464]
[260605.996061] watchdog: BUG: soft lockup - CPU#0 stuck for 52s! [kworker/0:1:12464]
[260606.045055] watchdog: BUG: soft lockup - CPU#3 stuck for 49s! [kworker/3:0:12608]
[260675.490227] watchdog: BUG: soft lockup - CPU#3 stuck for 38s! [kworker/3:0:12608]
[261326.037859] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:2:12465]
[261336.241710] watchdog: BUG: soft lockup - CPU#3 stuck for 32s! [kworker/3:0:12608]
[261466.468141] watchdog: BUG: soft lockup - CPU#3 stuck for 26s! [kworker/3:0:12608]
[261710.038310] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:2:12465]
[261717.045323] watchdog: BUG: soft lockup - CPU#3 stuck for 28s! [kworker/3:0:12608]
[261717.045327] watchdog: BUG: soft lockup - CPU#0 stuck for 32s! [irqbalance:1306]
[262378.657488] watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [kworker/3:0:12608]
[263092.577248] watchdog: BUG: soft lockup - CPU#0 stuck for 29s! [kworker/0:1H:348]
[264170.122250] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [jbd2/dm-0-8:421]
[267501.682759] watchdog: BUG: soft lockup - CPU#3 stuck for 48s! [kworker/3:0:12608]
[267505.075842] watchdog: BUG: soft lockup - CPU#2 stuck for 21s! [systemd-network:1377]
[267508.627874] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [kworker/0:1:12464]
[268060.520928] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [systemd:1]
[268060.678706] watchdog: BUG: soft lockup - CPU#2 stuck for 21s! [chronyd:1307]
[268060.832732] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [systemd:12711]
[268076.817436] watchdog: BUG: soft lockup - CPU#3 stuck for 47s! [kworker/3:0:12608]
[276582.647298] watchdog: BUG: soft lockup - CPU#0 stuck for 29s! [kworker/0:3:12897]
[278281.523785] watchdog: BUG: soft lockup - CPU#3 stuck for 33s! [kworker/3:0:12999]
[281367.153803] watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [ksoftirqd/2:21]
[281881.619504] watchdog: BUG: soft lockup - CPU#0 stuck for 44s! [irqbalance:1306]
[282618.082931] watchdog: BUG: soft lockup - CPU#3 stuck for 26s! [kworker/3:0:12999]
[282649.804723] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [in:imjournal:1327]
[282650.015663] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [kworker/0:3:12897]
[282650.294629] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [migration/3:25]
[282802.062919] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [swapper/2:0]
[283080.710904] watchdog: BUG: soft lockup - CPU#0 stuck for 46s! [crond:13510]
[283164.615539] watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [migration/0:12]
[283218.537473] watchdog: BUG: soft lockup - CPU#3 stuck for 26s! [kworker/3:0:12999]
[283425.141632] watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [auditd:1277]
[284914.567248] watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [multipathd:1239]
[284977.507167] watchdog: BUG: soft lockup - CPU#2 stuck for 21s! [irqbalance:1306]
[284980.253554] watchdog: BUG: soft lockup - CPU#3 stuck for 28s! [kworker/3:0:12999]
[284980.255455] watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [sh:13585]
[285447.056229] watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [sshd:13334]
[285680.684896] watchdog: BUG: soft lockup - CPU#0 stuck for 28s! [kworker/0:0:13498]
[285695.898677] watchdog: BUG: soft lockup - CPU#2 stuck for 28s! [NetworkManager:1376]
[285696.792136] watchdog: BUG: soft lockup - CPU#1 stuck for 25s! [in:imjournal:1327]
[285755.478715] watchdog: BUG: soft lockup - CPU#1 stuck for 27s! [kworker/1:1:13501]
[285756.051611] watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [jbd2/dm-0-8:421]
[285758.119680] watchdog: BUG: soft lockup - CPU#2 stuck for 25s! [kworker/2:0:13520]
[285758.489484] watchdog: BUG: soft lockup - CPU#3 stuck for 30s! [kworker/3:0:12999]
[286149.594963] watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [multipathd:1239]
[286728.360225] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [kworker/0:0:13498]
[286782.695193] watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [kworker/3:1:13738]
[286842.796319] watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [kworker/0:0:13498]
[287877.309063] watchdog: BUG: soft lockup - CPU#2 stuck for 21s! [systemd-journal:13810]
[288551.160744] watchdog: BUG: soft lockup - CPU#0 stuck for 54s! [swapper/0:0]
[289681.063337] watchdog: BUG: soft lockup - CPU#0 stuck for 36s! [kworker/0:1:14135]
[290779.739085] watchdog: BUG: soft lockup - CPU#0 stuck for 31s! [kworker/0:1:14135]
[290824.256353] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [kworker/0:1:14135]
[290990.264792] watchdog: BUG: soft lockup - CPU#0 stuck for 28s! [kworker/0:1:14135]
[291133.536777] watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [swapper/0:0]
[291508.593883] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [tuned:1856]
[292106.788463] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [multipathd:1239]
[292166.486342] watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [systemd-udevd:13327]
[292170.103246] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [multipathd:1240]
[292173.013452] watchdog: BUG: soft lockup - CPU#1 stuck for 25s! [in:imjournal:1327]
[292194.153504] watchdog: BUG: soft lockup - CPU#0 stuck for 46s! [irqbalance:1306]
[292664.932470] watchdog: BUG: soft lockup - CPU#0 stuck for 36s! [chronyd:1307]
[292844.076609] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [tuned:1856]
[295118.304311] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [in:imjournal:1327]
[295119.286204] watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [gmain:1387]
[295120.572802] watchdog: BUG: soft lockup - CPU#3 stuck for 24s! [kworker/3:3:14233]
[298055.608630] watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [multipathd:1239]
[298106.997405] watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [auditd:1277]
[298270.427633] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [systemd-journal:14378]
[298684.420041] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [multipathd:1239]
[298847.587060] watchdog: BUG: soft lockup - CPU#0 stuck for 46s! [in:imjournal:1327]
[299333.468769] watchdog: BUG: soft lockup - CPU#1 stuck for 21s! [multipathd:1236]
[299334.425976] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [migration/3:25]
[299334.460984] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [crond:1424]
[299349.593055] watchdog: BUG: soft lockup - CPU#0 stuck for 56s! [kworker/0:0:14287]
[301057.171637] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kworker/3:3:14233]
[301057.200076] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [in:imjournal:1327]
[301506.303404] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kworker/3:3:14233]
[301564.402609] watchdog: BUG: soft lockup - CPU#0 stuck for 41s! [jbd2/dm-0-8:421]
[301688.371032] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kworker/3:3:14233]
[301735.545321] watchdog: BUG: soft lockup - CPU#0 stuck for 28s! [irqbalance:1306]
[301842.572314] watchdog: BUG: soft lockup - CPU#3 stuck for 34s! [kworker/3:3:14233]
[302649.197038] watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [irqbalance:1306]
[302649.695680] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [in:imjournal:1327]
[302654.281508] watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [gmain:1387]
[303210.363763] watchdog: BUG: soft lockup - CPU#0 stuck for 36s! [kworker/u8:2:13855]
[303453.760247] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kworker/3:1:14496]
[303837.260184] watchdog: BUG: soft lockup - CPU#0 stuck for 55s! [swapper/0:0]
[304810.669507] watchdog: BUG: soft lockup - CPU#0 stuck for 33s! [crond:1424]
[305181.193161] watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [kworker/0:0:14287]
[305330.128073] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [multipathd:1239]
[306470.927763] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [kworker/0:0:14287]
[306716.557214] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kworker/3:1:14496]
[306745.312876] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [migration/0:12]
[311321.640447] watchdog: BUG: soft lockup - CPU#0 stuck for 32s! [kworker/0:1:14491]
[313683.065199] watchdog: BUG: soft lockup - CPU#3 stuck for 46s! [kworker/3:1:14496]
[313708.535288] watchdog: BUG: soft lockup - CPU#2 stuck for 24s! [migration/2:20]
[313711.627287] watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [gmain:1387]
[313803.402766] watchdog: BUG: soft lockup - CPU#1 stuck for 24s! [kworker/1:1:14377]
[313806.334229] watchdog: BUG: soft lockup - CPU#3 stuck for 26s! [kworker/3:1:14496]
[313820.351686] watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [systemd:1]
[313888.811532] watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [auditd:1277]
[316123.944561] watchdog: BUG: soft lockup - CPU#0 stuck for 38s! [jbd2/dm-0-8:421]
[317266.108915] watchdog: BUG: soft lockup - CPU#1 stuck for 24s! [kworker/1:1:14377]
[317398.795688] watchdog: BUG: soft lockup - CPU#1 stuck for 34s! [crond:14825]
[317407.764676] watchdog: BUG: soft lockup - CPU#0 stuck for 30s! [kworker/0:1H:348]
[317511.682669] watchdog: BUG: soft lockup - CPU#0 stuck for 38s! [crond:1424]
[317854.560515] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [migration/0:12]
[319566.438018] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [tuned:1856]
[320278.442703] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [jbd2/dm-0-8:421]
[320870.860909] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [kworker/0:2:14861]
[320972.886780] watchdog: BUG: soft lockup - CPU#0 stuck for 32s! [gmain:1387]
[321078.766553] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [irqbalance:1306]
[321084.108837] watchdog: BUG: soft lockup - CPU#0 stuck for 28s! [in:imjournal:1327]
[321092.008514] watchdog: BUG: soft lockup - CPU#3 stuck for 36s! [kworker/3:0:14768]
[321438.678316] watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [kworker/3:0:14768]
[323021.513176] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [systemd-journal:14919]
[324011.994033] watchdog: BUG: soft lockup - CPU#0 stuck for 31s! [irqbalance:1306]
[324063.390582] watchdog: BUG: soft lockup - CPU#2 stuck for 26s! [systemd-journal:14919]
[324118.410748] watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [kworker/0:2:14861]
[324203.901426] watchdog: BUG: soft lockup - CPU#0 stuck for 50s! [systemd-journal:14919]
[324482.761460] watchdog: BUG: soft lockup - CPU#0 stuck for 34s! [kworker/0:2:14861]
[325741.996911] watchdog: BUG: soft lockup - CPU#0 stuck for 88s! [kworker/0:2:14861]
[325797.482123] watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [kworker/0:2:14861]
[325800.111634] watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [gmain:1387]
[325800.572637] watchdog: BUG: soft lockup - CPU#2 stuck for 25s! [systemd-journal:14919]
[326377.870443] watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:0:14915]
[326377.912311] watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [kworker/3:0:14768]
[326461.102135] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [systemd-network:1377]
[326525.275584] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [kworker/0:2:14861]
[327224.736393] watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [kworker/0:2:14861]
[327907.358616] watchdog: BUG: soft lockup - CPU#0 stuck for 35s! [kworker/0:2:14861]
[328110.449018] watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [kworker/3:0:14768]
[328137.966011] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [irqbalance:1306]
[328138.114008] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [tuned:1856]
[328138.602006] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [in:imjournal:1327]
[328184.472259] watchdog: BUG: soft lockup - CPU#3 stuck for 32s! [kworker/3:0:14768]
[328188.377630] watchdog: BUG: soft lockup - CPU#1 stuck for 32s! [kworker/1:0:14915]
[328192.675804] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [multipathd:1239]
[328198.502586] watchdog: BUG: soft lockup - CPU#2 stuck for 26s! [irqbalance:1306]
[328369.864290] watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:15078]
[328374.194376] watchdog: BUG: soft lockup - CPU#3 stuck for 27s! [kworker/3:0:14768]
[328430.558425] watchdog: BUG: soft lockup - CPU#0 stuck for 34s! [kworker/0:0:14989]
[328453.445212] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [in:imjournal:1327]
[328458.095589] watchdog: BUG: soft lockup - CPU#2 stuck for 25s! [gmain:1387]
[328592.256335] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [dbus-daemon:1296]
[328658.016372] watchdog: BUG: soft lockup - CPU#3 stuck for 24s! [multipathd:1240]
[328668.024007] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [migration/2:20]
[328668.313987] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/u8:2:15095]
[328710.435741] watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [kworker/0:0:14989]
[328875.225318] watchdog: BUG: soft lockup - CPU#1 stuck for 21s! [systemd-logind:14506]
[328876.000812] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [kworker/0:0:14989]
[329034.452844] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [(coredump):15121]
[329127.988885] watchdog: BUG: soft lockup - CPU#0 stuck for 35s! [kworker/0:0:14989]
[329260.050923] watchdog: BUG: soft lockup - CPU#0 stuck for 61s! [migration/0:12]
[329279.192063] watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [chronyd:1307]
[330357.707226] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [jbd2/dm-0-8:421]
[340090.834176] watchdog: BUG: soft lockup - CPU#3 stuck for 31s! [kworker/3:0:15454]
[340091.129561] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [systemd-network:1377]
[344259.358524] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [dbus-daemon:1296]
[346062.993562] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [kworker/0:0:14989]
[346109.818760] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:2:15437]
[346121.510088] watchdog: BUG: soft lockup - CPU#3 stuck for 37s! [kworker/3:0:15454]
[353237.984311] watchdog: BUG: soft lockup - CPU#1 stuck for 11s! [kworker/1:2:15437]
[353254.729207] watchdog: BUG: soft lockup - CPU#3 stuck for 27s! [kworker/3:0:15454]
[353309.845852] watchdog: BUG: soft lockup - CPU#0 stuck for 14s! [kworker/0:1:18880]
[355678.501557] watchdog: BUG: soft lockup - CPU#0 stuck for 12s! [swapper/0:0]
[357748.172879] watchdog: BUG: soft lockup - CPU#3 stuck for 13s! [kworker/3:2:21034]
[360450.162283] watchdog: BUG: soft lockup - CPU#0 stuck for 11s! [systemd:1]
[362252.300305] watchdog: BUG: soft lockup - CPU#3 stuck for 15s! [kworker/3:2:21034]
[364043.641048] watchdog: BUG: soft lockup - CPU#0 stuck for 11s! [in:imjournal:1327]
[364044.161561] watchdog: BUG: soft lockup - CPU#1 stuck for 11s! [crond:22012]
[364107.280553] watchdog: BUG: soft lockup - CPU#0 stuck for 14s! [kworker/u8:0:22015]
[364113.221898] watchdog: BUG: soft lockup - CPU#2 stuck for 19s! [crond:22021]
[365244.886399] watchdog: BUG: soft lockup - CPU#0 stuck for 11s! [kworker/0:0:21036]
[367073.585774] watchdog: BUG: soft lockup - CPU#3 stuck for 12s! [kworker/3:2:21034]
[367073.585780] watchdog: BUG: soft lockup - CPU#0 stuck for 17s! [kworker/0:0:21036]
[367083.948643] watchdog: BUG: soft lockup - CPU#1 stuck for 11s! [kworker/1:2:15437]
[367708.053723] watchdog: BUG: soft lockup - CPU#2 stuck for 11s! [swapper/2:0]
[370645.938502] watchdog: BUG: soft lockup - CPU#1 stuck for 11s! [kworker/1:2:15437]
[370646.990510] watchdog: BUG: soft lockup - CPU#3 stuck for 12s! [kworker/3:2:21034]
[371308.844919] watchdog: BUG: soft lockup - CPU#0 stuck for 14s! [run-parts:22456]
[373045.191273] watchdog: BUG: soft lockup - CPU#2 stuck for 12s! [crond:23098]
[373045.843894] watchdog: BUG: soft lockup - CPU#0 stuck for 13s! [sshd:15559]
[373046.632763] watchdog: BUG: soft lockup - CPU#1 stuck for 13s! [systemd-network:1377]
[373679.994219] watchdog: BUG: soft lockup - CPU#3 stuck for 11s! [kworker/3:2:21034]
[373680.317773] watchdog: BUG: soft lockup - CPU#0 stuck for 19s! [kworker/0:0:21036]
[root@pc-openeuler-1 ~]#
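
The lockups above hit every CPU and all kinds of unrelated tasks (kworkers, crond, sshd, systemd, even swapper), which often points to the whole vCPU being preempted on the host rather than to any single guest task spinning. One quick check from inside the guest is steal time; a sketch:

# the "st" column is CPU time stolen from this VM by the hypervisor
vmstat 1 10
# or read the raw counters (the 8th value on each "cpu" line is steal, in ticks)
grep '^cpu' /proc/stat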
[root@pc-openeuler-1 ~]# cat /proc/cpuinfo |grep processor
processor	: 0
processor	: 1
processor	: 2
processor	: 3
[root@pc-openeuler-1 ~]# ps -eo ppid,pid,user,args |grep watchdog
      2      46 root     [watchdogd]
  15560   24185 root     grep watchdog

Message from syslogd@pc-openeuler-1 at Apr 21 22:30:22 ...
 kernel:[376640.304555] watchdog: BUG: soft lockup - CPU#3 stuck for 11s! [kworker/3:1:23736]
[root@pc-openeuler-1 ~]# 
[root@pc-openeuler-1 ~]# ps -eo ppid,pid,user,args |grep watchdog
      2      46 root     [watchdogd]
  15560   24219 root     grep watchdog
[root@pc-openeuler-1 ~]#
Apr 20 03:19:13 pc-openeuler-1 rsyslogd[1303]: [origin software="rsyslogd" swVersion="8.1907.0" x-pid="1303" x-info="https://www.rsyslog.com"] rsyslogd was HUPed
Apr 20 03:19:14 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:19:20 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:19:22 pc-openeuler-1 rhsmd[11976]: In order for Subscription Manager to provide your system with updates, your system must be registered with the Customer Portal. Please enter your Red Hat login to ensure your system is up-to-date.
Apr 20 03:20:03 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:20:05 pc-openeuler-1 chronyd[1307]: Forward time jump detected!
Apr 20 03:20:07 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:20:27 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:20:35 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:20:40 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:20:45 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:20:50 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:20:55 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:21:00 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:21:05 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:21:11 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:21:16 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:21:21 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:21:27 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:21:32 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:21:37 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:21:42 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:21:47 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:21:52 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:21:57 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:22:11 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:22:16 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:22:21 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:22:27 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:22:32 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:22:37 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:22:42 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:22:47 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:22:52 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:22:57 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:02 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:07 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:12 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:14 pc-openeuler-1 chronyd[1307]: Selected source 199.182.204.197
Apr 20 03:23:17 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:22 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:27 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:33 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:38 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:43 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:48 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:55 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:24:03 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:24:08 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:24:13 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:24:18 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:24:18 pc-openeuler-1 chronyd[1307]: Selected source 203.107.6.88
Apr 20 03:24:24 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:24:29 pc-openeuler-1 multipathd[1236]: sda: unusable path
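
The "sda: unusable path" flood is a separate annoyance: multipathd keeps probing the single QEMU disk as if it were a multipath device. If multipathing is not actually needed on this VM (an assumption about the intended setup, not a fix for the lockups), one common way to quiet it is to blacklist the device in /etc/multipath.conf, roughly:

# /etc/multipath.conf -- exclude the local virtio disk from multipath handling
blacklist {
    devnode "^sda$"
}
# then tell the daemon to re-read its configuration
multipathd reconfigure

(Or simply stop and disable the multipathd service, as tried further below.)
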

[root@pc-openeuler-1 ~]# ps -ef | grep multipathd
root       24271   15560 41 22:36 pts/0    00:00:00 grep multipathd
[root@pc-openeuler-1 ~]# ps -fp 24271
UID          PID    PPID  C STIME TTY          TIME CMD
[root@pc-openeuler-1 ~]#

2020-04-21 23:43: rebooted the openEuler test machine

[root@pc-openeuler-1 ~]# reboot

Then I connected over ssh again to test, and the session fell into the Message flood once more

Message from syslogd@pc-openeuler-1 at Apr 18 23:58:30 ...
 kernel:[122698.773329] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [systemd:1]

Message from syslogd@pc-openeuler-1 at Apr 18 23:59:55 ...
 kernel:[122823.508244] watchdog: BUG: soft lockup - CPU#3 stuck for 38s! [kworker/3:0:9130]

Message from syslogd@pc-openeuler-1 at Apr 19 00:07:50 ...
 kernel:[123195.779527] watchdog: BUG: soft lockup - CPU#2 stuck for 24s! [sshd:7789]

Message from syslogd@pc-openeuler-1 at Apr 19 00:13:59 ...
 kernel:[123655.217188] watchdog: BUG: soft lockup - CPU#0 stuck for 37s! [kworker/0:1:8930]

Message from syslogd@pc-openeuler-1 at Apr 19 00:35:29 ...
 kernel:[124937.671514] watchdog: BUG: soft lockup - CPU#3 stuck for 74s! [kworker/3:3:8429]

Message from syslogd@pc-openeuler-1 at Apr 19 00:35:29 ...
 kernel:[124937.671720] watchdog: BUG: soft lockup - CPU#0 stuck for 55s! [in:imjournal:1327]

Message from syslogd@pc-openeuler-1 at Apr 19 01:40:44 ...
 kernel:[128870.694305] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [in:imjournal:1327]

Message from syslogd@pc-openeuler-1 at Apr 21 17:15:21 ...
 kernel:[357748.172879] watchdog: BUG: soft lockup - CPU#3 stuck for 13s! [kworker/3:2:21034]

Message from syslogd@pc-openeuler-1 at Apr 21 18:00:39 ...
 kernel:[360450.162283] watchdog: BUG: soft lockup - CPU#0 stuck for 11s! [systemd:1]

Message from syslogd@pc-openeuler-1 at Apr 21 18:30:37 ...
 kernel:[362252.300305] watchdog: BUG: soft lockup - CPU#3 stuck for 15s! [kworker/3:2:21034]

Message from syslogd@pc-openeuler-1 at Apr 21 19:00:25 ...
 kernel:[364043.641048] watchdog: BUG: soft lockup - CPU#0 stuck for 11s! [in:imjournal:1327]

Message from syslogd@pc-openeuler-1 at Apr 21 19:00:33 ...
 kernel:[364044.161561] watchdog: BUG: soft lockup - CPU#1 stuck for 11s! [crond:22012]

Message from syslogd@pc-openeuler-1 at Apr 21 19:01:47 ...
 kernel:[364107.280553] watchdog: BUG: soft lockup - CPU#0 stuck for 14s! [kworker/u8:0:22015]

Message from syslogd@pc-openeuler-1 at Apr 21 19:01:48 ...

Message from syslogd@pc-openeuler-1 at Apr 21 23:02:50 ...
 kernel:[378557.385893] watchdog: BUG: soft lockup - CPU#2 stuck for 13s! [crond:1424]

Message from syslogd@pc-openeuler-1 at Apr 21 23:30:18 ...
 kernel:[380239.974362] watc

When I pressed Ctrl-C, I got back to the normal terminal prompt

Message from syslogd@pc-openeuler-1 at Apr 21 23:30:18 ...
kernel:[380239.974362] watchdog: BUG: soft lockup - CPU#3 stuck for 12s! [kworker/3:1:23736]
^C^C
^C
^C
^C
^C
^C
^C
^C
^C
[root@pc-openeuler-1 ~]# ^C
[root@pc-openeuler-1 ~]# ^C
[root@pc-openeuler-1 ~]# ^C
[root@pc-openeuler-1 ~]# ^C
[root@pc-openeuler-1 ~]# ^C
[root@pc-openeuler-1 ~]# ^C
[root@pc-openeuler-1 ~]# ^C
[root@pc-openeuler-1 ~]# ^C
[root@pc-openeuler-1 ~]# ^C
[root@pc-openeuler-1 ~]# ^C
[root@pc-openeuler-1 ~]#
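
For what it is worth, Ctrl-C does not stop anything here: the "Message from syslogd ..." lines are wall-style broadcasts that rsyslog writes to every logged-in terminal for emergency-priority kernel messages, and Ctrl-C only redraws the prompt on top of them. If the broadcasts get in the way of testing, the rule usually responsible is the *.emerg line in rsyslog's configuration; a sketch (this only mutes the terminal messages, the lockups still land in /var/log/messages and dmesg):

# /etc/rsyslog.conf -- comment out the emergency broadcast rule
#*.emerg    :omusrmsg:*
# then restart rsyslog
systemctl restart rsyslog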

Looking closely at the Messages printed right after the ssh session comes up, they happen to start on April 18 and run through to the 21st, i.e. today. I actually applied for this openEuler test machine on the Peng Cheng Lab platform at 2020-04-17 11:12:29, and I first noticed the problem at 11:47 on April 17: after logging in to the test machine over ssh, the normal terminal suddenly started printing these Messages.

watchdogd cannot be killed

[root@pc-openeuler-1 ~]# ps -ef | grep watchdogd
root 45 2 0 23:45 ? 00:00:00 [watchdogd]
root 2254 1946 48 23:58 pts/0 00:00:00 grep watchdogd
[root@pc-openeuler-1 ~]# ps -fp 45
UID PID PPID C STIME TTY TIME CMD
root 45 2 0 Apr21 ? 00:00:00 [watchdogd]
[root@pc-openeuler-1 ~]# kill -9 45
[root@pc-openeuler-1 ~]# ps -fp 45
UID PID PPID C STIME TTY TIME CMD
root 45 2 0 Apr21 ? 00:00:00 [watchdogd]
[root@pc-openeuler-1 ~]# kill -s 9 45
[root@pc-openeuler-1 ~]# ps -fp 45
UID PID PPID C STIME TTY TIME CMD
root 45 2 0 Apr21 ? 00:00:00 [watchdogd]
[root@pc-openeuler-1 ~]# pgrep watchdogd
45
[root@pc-openeuler-1 ~]# pkill -9 45
[root@pc-openeuler-1 ~]# ps -ef | grep watchdogd
root 45 2 0 Apr21 ? 00:00:00 [watchdogd]
root 2385 1946 76 00:06 pts/0 00:00:00 grep watchdogd
[root@pc-openeuler-1 ~]# pkill -9 watchdogd
[root@pc-openeuler-1 ~]# ps -ef | grep watchdogd
root 45 2 0 Apr21 ? 00:00:00 [watchdogd]
root 2412 1946 0 00:07 pts/0 00:00:00 grep watchdogd

[root@pc-openeuler-1 ~]# killall -9 watchdogd
[root@pc-openeuler-1 ~]# ps -ef | grep watchdogd
root 45 2 0 Apr21 ? 00:00:00 [watchdogd]
root 2539 1946 0 00:09 pts/0 00:00:00 grep watchdogd
[root@pc-openeuler-1 ~]#
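
The kill attempts above cannot work: [watchdogd] is listed in square brackets with PPID 2 (kthreadd), i.e. it is a kernel thread rather than a user-space daemon, and kernel threads ignore signals sent from user space. A quick way to confirm this, using the PID 45 from the transcript above:

# kernel threads have no command line at all
cat /proc/45/cmdline | wc -c      # prints 0 for a kernel thread
# and their parent is kthreadd (PID 2)
ps -o ppid=,comm= -p 45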

[root@pc-openeuler-1 ~]# find / -name watchdog
/proc/sys/kernel/watchdog
/var/lib/selinux/targeted/active/modules/100/watchdog
/sys/class/watchdog
/sys/module/watchdog
/usr/share/selinux/targeted/default/active/modules/100/watchdog
/usr/lib/modules/4.19.90-vhulk2001.1.0.0026.aarch64/kernel/drivers/watchdog
/usr/src/kernels/4.19.90-vhulk2001.1.0.0026.aarch64/include/config/watchdog
/usr/src/kernels/4.19.90-vhulk2001.1.0.0026.aarch64/drivers/watchdog
/usr/src/kernels/4.19.90-vhulk2001.1.0.0026.aarch64/samples/watchdog
/usr/src/kernels/4.19.90-vhulk2001.1.0.0026.aarch64/tools/testing/selftests/watchdog
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]# cat /proc/sys/kernel/watchdog
1
[root@pc-openeuler-1 ~]#
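
kernel.watchdog=1 only means the soft/hard lockup detector is enabled, which is the default. If the repeated reports make the machine unusable for testing, the detector can be tuned or switched off at runtime; note that this merely silences the reporting and does not make the stalls go away, so it is a workaround rather than a fix:

# raise the reporting threshold (value in seconds; the default is 10)
sysctl -w kernel.watchdog_thresh=30
# or disable the lockup detectors entirely
sysctl -w kernel.watchdog=0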

Reference links

Searching online, I found that there are many possible causes of a CPU soft lockup:

[root@pc-openeuler-1 ~]# ps -ef | grep watchdogd
root 45 2 0 Apr21 ? 00:00:00 [watchdogd]
root 2866 2810 22 00:42 pts/0 00:00:00 grep watchdogd
[root@pc-openeuler-1 ~]# ps -ef | grep mulyipathd
root 2880 2810 0 00:42 pts/0 00:00:00 grep mulyipathd
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]# ps -ef | grep multipathd
root 1234 1 8 Apr21 ? 00:04:59 /sbin/multipathd -d -s
root 2905 2810 0 00:43 pts/0 00:00:00 grep multipathd
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]# ps -ef | grep watchdogd
root 45 2 0 Apr21 ? 00:00:00 [watchdogd]
root 2941 2810 19 00:43 pts/0 00:00:00 grep watchdogd
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]# ps -ef | grep multipathd
root 1234 1 8 Apr21 ? 00:04:59 /sbin/multipathd -d -s
root 2966 2810 0 00:43 pts/0 00:00:00 grep multipathd
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]# pkill -9 1234
[root@pc-openeuler-1 ~]# pkill -9 45
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]# ps -ef | grep watchdogd
root 45 2 0 Apr21 ? 00:00:00 [watchdogd]
root 3028 2810 0 00:43 pts/0 00:00:00 grep watchdogd
[root@pc-openeuler-1 ~]# ps -ef | grep multipathd
root 1234 1 8 Apr21 ? 00:05:02 /sbin/multipathd -d -s
root 3042 2810 14 00:44 pts/0 00:00:00 grep multipathd
[root@pc-openeuler-1 ~]# kill -9 45
[root@pc-openeuler-1 ~]# kill -9 1234
[root@pc-openeuler-1 ~]# ps -ef | grep watchdogd
root 45 2 0 Apr21 ? 00:00:00 [watchdogd]
root 3082 2810 0 00:44 pts/0 00:00:00 grep watchdogd
[root@pc-openeuler-1 ~]# ps -ef | grep multipathd
root 3096 2810 0 00:44 pts/0 00:00:00 grep multipathd
[root@pc-openeuler-1 ~]#
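
Two details in the transcript above: pkill matches on process name, so "pkill -9 1234" and "pkill -9 45" match nothing, whereas "kill -9 1234" did terminate /sbin/multipathd (it no longer shows up afterwards), and [watchdogd], being a kernel thread, survives every signal as expected. If multipathd should stay down across reboots, stopping it through systemd is cleaner than killing the PID; a sketch, assuming the stock unit names shipped with device-mapper-multipath:

systemctl stop multipathd.service multipathd.socket
systemctl disable multipathd.service multipathd.socket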

[root@pc-openeuler-1 ~]# more /var/log/messages
Apr 20 03:19:13 pc-openeuler-1 rsyslogd[1303]: [origin software="rsyslogd" swVersion="8.1907.0" x-pid="1303" x-info="https://www.rsyslog.com"] rsyslogd was HUPed
Apr 20 03:19:14 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:19:20 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:19:22 pc-openeuler-1 rhsmd[11976]: In order for Subscription Manager to provide your system with updates, your system must be registered with the Customer Portal. Please enter your Red Hat login to ensure your system is up-to-date.
Apr 20 03:20:03 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:20:05 pc-openeuler-1 chronyd[1307]: Forward time jump detected!
Apr 20 03:20:07 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:07 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:12 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:23:14 pc-openeuler-1 chronyd[1307]: Selected source 199.182.204.197
Apr 20 03:23:17 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:24:18 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:24:18 pc-openeuler-1 chronyd[1307]: Selected source 203.107.6.88
Apr 20 03:24:24 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:24:29 pc-openeuler-1 multipathd[1236]: sda: unusable path

Apr 20 03:53:37 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:53:39 pc-openeuler-1 systemd[1]: Starting dnf makecache...
Apr 20 03:53:42 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 03:53:44 pc-openeuler-1 dnf[12011]: EulerOS-2.0SP8 base 3.9 kB/s | 3.8 kB 00:00
Apr 20 03:53:44 pc-openeuler-1 dnf[12011]: Metadata cache created.
Apr 20 03:53:45 pc-openeuler-1 systemd[1]: dnf-makecache.service: Succeeded.
Apr 20 03:53:45 pc-openeuler-1 systemd[1]: Started dnf makecache.
Apr 20 03:53:45 pc-openeuler-1 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dnf-makecache comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 03:53:45 pc-openeuler-1 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dnf-makecache comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 03:53:47 pc-openeuler-1 multipathd[1236]: sda: unusable path

Apr 20 04:01:09 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:02:04 pc-openeuler-1 kernel: [223721.182851] watchdog: BUG: soft lockup - CPU#3 stuck for 21s! [kworker/3:0:11867]
Apr 20 04:02:05 pc-openeuler-1 kernel: [223724.162697] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.712488] dm_region_hash dm_log dm_mod
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.714635] CPU: 3 PID: 11867 Comm: kworker/3:0 Kdump: loaded Tainted: G C L 4.19.90-vhulk2001.1.0.0026.aarch64 #1
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.718953] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.721133] Workqueue: events virtio_gpu_fb_dirty_work [virtio_gpu]
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.722885] pstate: 00000005 (nzcv daif -PAN -UAO)
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.724391] pc : vp_notify+0x28/0x38 [virtio_pci]
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.726326] lr : virtqueue_kick+0x3c/0x78 [virtio_ring]
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.728013] sp : ffff0000110cfae0
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.729224] x29: ffff0000110cfae0 x28: 0000000000000000
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.730811] x27: ffff0000110cfc20 x26: 0000000000000001
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.733028] x25: 0000000000480020 x24: 0000000000000001
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.735203] x23: ffff800161c61140 x22: ffff80016cafa448
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.737428] x21: 0000000000000000 x20: ffff80016c6f6000
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.741074] x19: ffff80016c6f6000 x18: 0000000000000000
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.743481] x17: 0000000000000000 x16: 0000000000000000
Apr 20 04:02:06 pc-openeuler-1 kernel: [223724.744935] x15: 0000000000000000 x14: 0000000000000000
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.746376] x13: 0000000000000000 x12: 0000000000000000
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.747828] x11: 0000000000000000 x10: 0000000000000b80
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.749280] x9 : ffff0000110cfd40 x8 : ffff0000110cfbf8
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.751135] x7 : 00000000000011c0 x6 : ffff7fe000587180
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.752596] x5 : 000000000034e2bb x4 : ffff8001ff720ba0
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.754056] x3 : 0000000000000001 x2 : 0000000000000040
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.756162] x1 : ffff00000fda3000 x0 : 0000000000000000
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.758529] Call trace:
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.761243] vp_notify+0x28/0x38 [virtio_pci]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.763495] virtqueue_kick+0x3c/0x78 [virtio_ring]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.766302] virtio_gpu_queue_ctrl_buffer_locked+0x180/0x248 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.768560] virtio_gpu_queue_fenced_ctrl_buffer+0xdc/0x160 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.775386] virtio_gpu_cmd_transfer_to_host_2d+0xa4/0xd0 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.777164] virtio_gpu_dirty_update+0x194/0x218 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.778933] virtio_gpu_fb_dirty_work+0x3c/0x48 [virtio_gpu]
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.796026] process_one_work+0x1b0/0x448
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.800202] worker_thread+0x54/0x468
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.804346] kthread+0x134/0x138
Apr 20 04:02:09 pc-openeuler-1 kernel: [223724.808260] ret_from_fork+0x10/0x18
Apr 20 04:02:09 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:02:14 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:02:19 pc-openeuler-1 multipathd[1236]: sda: unusable path

Apr 20 04:48:08 pc-openeuler-1 chronyd[1307]: Selected source 111.230.189.174
Apr 20 04:48:09 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:48:15 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:48:20 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:48:33 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:48:36 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:48:36 pc-openeuler-1 chronyd[1307]: Selected source 84.16.73.33
Apr 20 04:48:41 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:48:46 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:48:51 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:48:56 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:49:01 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:49:02 pc-openeuler-1 chronyd[1307]: Selected source 111.230.189.174
Apr 20 04:49:04 pc-openeuler-1 chronyd[1307]: Source 111.230.189.174 replaced with 144.76.76.107
Apr 20 04:49:06 pc-openeuler-1 multipathd[1236]: sda: unusable path

Apr 20 04:54:32 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:54:37 pc-openeuler-1 systemd[1]: Starting dnf makecache...
Apr 20 04:54:38 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:54:43 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:55:11 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:55:30 pc-openeuler-1 kernel: [226946.264203] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [dnf:12072]
Apr 20 04:55:30 pc-openeuler-1 kernel: [226947.285148] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.223435] dm_region_hash dm_log dm_mod
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.225542] CPU: 0 PID: 12072 Comm: dnf Kdump: loaded Tainted: G C L 4.19.90-vhulk2001.1.0.0026.aarch64 #1
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.228853] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.231627] pstate: 60000005 (nZCv daif -PAN -UAO)
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.233654] pc : run_timer_softirq+0x1a8/0x1f0
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.235679] lr : run_timer_softirq+0x154/0x1f0
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.237129] sp : ffff00000800fe60
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.238743] x29: ffff00000800fe60 x28: 0000000000000282
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.241127] x27: 0000000000000002 x26: ffff0000092700c8
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.243156] x25: ffff000008f40018 x24: ffff0000092700c0
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.245564] x23: 0000000000000001 x22: ffff000009273000
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.247614] x21: ffff000009271000 x20: ffff8001ff6776c0
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.249988] x19: 00000000ffffffff x18: ffff000009271000
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.252308] x17: 0000000000000000 x16: 0000000000000000
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.254611] x15: 0000000000000000 x14: 0000000000000400
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.256974] x13: 0000000000000040 x12: 0000000000000228
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.259227] x11: 0000000000000000 x10: 0000000000000040
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.261346] x9 : ffff000009296320 x8 : ffff800140004900
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.263484] x7 : ffff800140004928 x6 : 0000000000000000
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.265839] x5 : ffff800140004900 x4 : 00000001436077a5
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.268065] x3 : 0000000000000000 x2 : 008c36560de8ef00
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.475510] x1 : ffffffffffffffff x0 : 00000000000000e0
Apr 20 04:55:30 pc-openeuler-1 kernel: [226954.479356] Call trace:
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.481259] run_timer_softirq+0x1a8/0x1f0
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.483024] __do_softirq+0x11c/0x31c
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.485434] irq_exit+0x11c/0x128
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.487340] __handle_domain_irq+0x6c/0xc0
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.489226] gic_handle_irq+0x6c/0x170
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.491132] el1_irq+0xb8/0x140
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.493253] __arch_copy_to_user+0x180/0x21c
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.495568] copy_page_to_iter+0xd0/0x320
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.497488] generic_file_buffered_read+0x254/0x740
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.499294] generic_file_read_iter+0x114/0x190
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.501420] ext4_file_read_iter+0x5c/0x140 [ext4]
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.503734] __vfs_read+0x11c/0x188
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.505560] vfs_read+0x94/0x150
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.506989] ksys_read+0x74/0xf0
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.508206] __arm64_sys_read+0x24/0x30
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.509489] el0_svc_common+0x78/0x130
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.510871] el0_svc_handler+0x38/0x78
Apr 20 04:55:31 pc-openeuler-1 kernel: [226954.512155] el0_svc+0x8/0xc
Apr 20 04:55:31 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:55:32 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:55:33 pc-openeuler-1 dnf[12072]: Metadata cache refreshed recently.
Apr 20 04:55:34 pc-openeuler-1 systemd[1]: dnf-makecache.service: Succeeded.
Apr 20 04:55:34 pc-openeuler-1 systemd[1]: Started dnf makecache.
Apr 20 04:55:34 pc-openeuler-1 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dnf-makecache comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 04:55:34 pc-openeuler-1 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dnf-makecache comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 04:55:37 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 04:55:42 pc-openeuler-1 multipathd[1236]: sda: unusable path

Apr 20 05:00:47 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 05:01:08 pc-openeuler-1 kernel: [227283.923013] watchdog: BUG: soft lockup - CPU#3 stuck for 24s! [kworker/3:0:11867]
Apr 20 05:01:12 pc-openeuler-1 kernel: [227283.955199] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
Apr 20 05:02:10 pc-openeuler-1 kernel: [227283.973027] dm_region_hash dm_log dm_mod
Apr 20 05:02:10 pc-openeuler-1 kernel: [227283.975387] CPU: 3 PID: 11867 Comm: kworker/3:0 Kdump: loaded Tainted: G C L 4.19.90-vhulk2001.1.0.0026.aarch64 #1
Apr 20 05:02:10 pc-openeuler-1 kernel: [227283.979483] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 20 05:02:10 pc-openeuler-1 kernel: [227283.981199] Workqueue: events virtio_gpu_fb_dirty_work [virtio_gpu]
Apr 20 05:02:10 pc-openeuler-1 kernel: [227283.983350] pstate: 00000005 (nzcv daif -PAN -UAO)
Apr 20 05:02:10 pc-openeuler-1 kernel: [227283.986138] pc : vp_notify+0x28/0x38 [virtio_pci]
Apr 20 05:02:10 pc-openeuler-1 kernel: [227283.989090] lr : virtqueue_kick+0x3c/0x78 [virtio_ring]
Apr 20 05:02:10 pc-openeuler-1 kernel: [227283.991140] sp : ffff0000110cfae0
Apr 20 05:02:10 pc-openeuler-1 kernel: [227283.992766] x29: ffff0000110cfae0 x28: 0000000000000000
Apr 20 05:02:10 pc-openeuler-1 kernel: [227283.994864] x27: ffff0000110cfc20 x26: 0000000000000001
Apr 20 05:02:10 pc-openeuler-1 kernel: [227283.996953] x25: 0000000000480020 x24: 0000000000000001
Apr 20 05:02:10 pc-openeuler-1 kernel: [227283.999003] x23: ffff800161c6d8c0 x22: ffff80016cafa448
Apr 20 05:02:10 pc-openeuler-1 kernel: [227284.001094] x21: 0000000000000000 x20: ffff80016c6f6000
Apr 20 05:02:10 pc-openeuler-1 kernel: [227284.006353] x19: ffff80016c6f6000 x18: 0000000000000000
Apr 20 05:02:10 pc-openeuler-1 kernel: [227284.010833] x17: 0000000000000000 x16: 0000000000000000
Apr 20 05:02:10 pc-openeuler-1 kernel: [227284.015454] x15: 0000000000000000 x14: 0000000000000000
Apr 20 05:02:10 pc-openeuler-1 kernel: [227284.017677] x13: 0000000000000000 x12: 0000000000000000
Apr 20 05:02:10 pc-openeuler-1 kernel: [227284.019742] x11: 0000000000000000 x10: 0000000000000b80
Apr 20 05:02:10 pc-openeuler-1 kernel: [227284.021755] x9 : ffff0000110cfce0 x8 : ffff0000110cfbf8
Apr 20 05:02:10 pc-openeuler-1 kernel: [227284.023837] x7 : 000000000000d940 x6 : ffff7fe000587180
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.025721] x5 : 0000000000358a39 x4 : ffff8001ff720ba0
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.029584] x3 : 0000000000000001 x2 : 0000000000000040
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.034892] x1 : ffff00000fda3000 x0 : 0000000000000000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.040246] Call trace:
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.041482] vp_notify+0x28/0x38 [virtio_pci]
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.043030] virtqueue_kick+0x3c/0x78 [virtio_ring]
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.044999] virtio_gpu_queue_ctrl_buffer_locked+0x180/0x248 [virtio_gpu]
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.046908] virtio_gpu_queue_fenced_ctrl_buffer+0xdc/0x160 [virtio_gpu]
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.049263] virtio_gpu_cmd_transfer_to_host_2d+0xa4/0xd0 [virtio_gpu]
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.055439] virtio_gpu_dirty_update+0x194/0x218 [virtio_gpu]
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.059419] virtio_gpu_fb_dirty_work+0x3c/0x48 [virtio_gpu]
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.063097] process_one_work+0x1b0/0x448
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.065644] worker_thread+0x54/0x468
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.066985] kthread+0x134/0x138
Apr 20 05:02:11 pc-openeuler-1 kernel: [227284.068589] ret_from_fork+0x10/0x18
Apr 20 05:02:11 pc-openeuler-1 kernel: [227329.042977] watchdog: BUG: soft lockup - CPU#0 stuck for 28s! [swapper/0:0]
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.081876] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.102522] dm_region_hash dm_log dm_mod
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.104628] CPU: 0 PID: 0 Comm: swapper/0 Kdump: loaded Tainted: G C L 4.19.90-vhulk2001.1.0.0026.aarch64 #1
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.107273] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.109014] pstate: 40000005 (nZcv daif -PAN -UAO)
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.110487] pc : __do_softirq+0xa0/0x31c
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.111826] lr : __do_softirq+0x64/0x31c
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.113156] sp : ffff00000800fee0
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.114398] x29: ffff00000800fee0 x28: 0000000000000082
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.115866] x27: ffff000008f5cd80 x26: ffff000008010000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.117358] x25: ffff000008000000 x24: ffff800160039800
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.118892] x23: ffff00000924fd50 x22: 0000000000000000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.122482] x21: 0000000000000000 x20: 0000000000000003
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.124803] x19: ffff00000927f080 x18: ffff000009271000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.127733] x17: 0000000000000000 x16: 0000000000000000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.129320] x15: 0000000000000000 x14: 0000000000000000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.131120] x13: 0000000000000000 x12: 0000000000000000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.132542] x11: ffff000008a6e0c0 x10: 0000000000000040
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.133985] x9 : ffff000008a6e0c8 x8 : 0000000000000000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.135404] x7 : 0000000000000004 x6 : 00000aa3a4c2f21a
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.136825] x5 : 00ffffffffffffff x4 : 0000000000000015
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.138268] x3 : 0000cbffd54d76f4 x2 : 00008001f6730000
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.139701] x1 : 00000000000000e0 x0 : ffff000008f5cd80
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.141187] Call trace:
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.142296] __do_softirq+0xa0/0x31c
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.144318] irq_exit+0x11c/0x128
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.145815] __handle_domain_irq+0x6c/0xc0
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.147258] gic_handle_irq+0x6c/0x170
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.148501] el1_irq+0xb8/0x140
Apr 20 05:02:11 pc-openeuler-1 kernel: [227338.149697] arch_cpu_idle+0x38/0x1c0
Apr 20 05:02:12 pc-openeuler-1 kernel: [227338.150920] default_idle_call+0x24/0x40
Apr 20 05:02:12 pc-openeuler-1 kernel: [227338.152169] do_idle+0x1d4/0x2b0
Apr 20 05:02:12 pc-openeuler-1 kernel: [227338.153338] cpu_startup_entry+0x2c/0x30
Apr 20 05:02:12 pc-openeuler-1 kernel: [227338.154583] rest_init+0xb8/0xc8
Apr 20 05:02:12 pc-openeuler-1 kernel: [227338.155747] start_kernel+0x4d0/0x4fc
Apr 20 05:02:12 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 05:02:12 pc-openeuler-1 multipathd[1236]: path checkers took longer than 29 seconds, consider increasing max_polling_interval
Apr 20 05:02:12 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 05:02:12 pc-openeuler-1 chronyd[1307]: Forward time jump detected!
Apr 20 05:02:12 pc-openeuler-1 chronyd[1307]: Can't synchronise: no selectable sources
Apr 20 05:02:15 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 05:02:20 pc-openeuler-1 multipathd[1236]: sda: unusable path

Apr 20 05:37:12 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 05:38:28 pc-openeuler-1 kernel: [229502.572246] watchdog: BUG: soft lockup - CPU#0 stuck for 33s! [jbd2/dm-0-8:421]
Apr 20 05:38:36 pc-openeuler-1 kernel: [229508.523710] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring
Apr 20 05:38:38 pc-openeuler-1 kernel: [229514.535201] watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [migration/2:20]
Apr 20 05:38:38 pc-openeuler-1 kernel: [229514.800294] virtio dm_mirror dm_region_hash dm_log dm_mod
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.296570] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.299785] CPU: 0 PID: 421 Comm: jbd2/dm-0-8 Kdump: loaded Tainted: G C L 4.19.90-vhulk2001.1.0.0026.aarch64 #1
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.302271] dm_region_hash dm_log dm_mod
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.350416] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.356026] CPU: 2 PID: 20 Comm: migration/2 Kdump: loaded Tainted: G C L 4.19.90-vhulk2001.1.0.0026.aarch64 #1
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.360232] pstate: 60000005 (nZCv daif -PAN -UAO)
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.366090] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.377289] pc : __srcu_read_lock+0x18/0x58
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.383635] pstate: 00000005 (nzcv daif -PAN -UAO)
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.392651] lr : dm_make_request+0x30/0xb0 [dm_mod]
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.582736] pc : ipt_do_table+0x108/0x6f0 [ip_tables]
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.582759] lr : iptable_filter_hook+0x30/0x40 [iptable_filter]
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.588032] sp : ffff00000f26fa80
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.590549] sp : ffff00000804f960
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.593447] x29: ffff00000f26fa80 x28: ffff800161739024
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.595079] x29: ffff00000804f980 x28: ffff800161731008
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.600257] x27: 00000000ffffffff x26: ffff80016dddb020
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.602260] x27: ffff800160203000 x26: ffff8001661b4c50
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.608353] x25: 0000000000000000 x24: 00000000ffffffff
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.610362] x25: ffff000002520600 x24: ffff800160203000
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.618631] x23: 000000000000000c x22: 000000000000000a
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.623403] x23: ffff00000804fb48 x22: ffff800167924380
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.635978] x21: ffff800166a7ba00 x20: ffff80016202c6e8
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.640264] x21: ffff800162024000 x20: ffff80016edf6500
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.652341] x19: ffff80016202c6e8 x18: ffff000009271000
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656559] x19: ffff00000804fb48 x18: ffff000009271000
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656561] x17: 0000000000000000 x16: 0000000000000000
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656562] x15: 0000000000000000 x14: 0000000000000000
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656564] x13: 0000000000000000 x12: 585f3dd07644acd8
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656566] x11: ffff000002520600 x10: ffff800160203000
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656567] x9 : ffff00000bd2a260 x8 : ffff800164f743a8
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656569] x7 : 0000000000000001 x6 : 0000000000000000
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656571] x5 : 000000000000000e x4 : 0000000000000001
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656573] x3 : 00008001f6790000 x2 : ffff00000804f960
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656574] x1 : ffff800162024040 x0 : ffff000008f40018
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656577] Call trace:
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656588] ipt_do_table+0x108/0x6f0 [ip_tables]
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656591] iptable_filter_hook+0x30/0x40 [iptable_filter]
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656598] nf_hook_slow+0x50/0x100
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656600] ip_local_deliver+0x104/0x118
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656602] ip_rcv_finish+0x90/0xb0
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656604] ip_rcv+0x64/0xd8
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656607] __netif_receive_skb_one_core+0x68/0x90
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656608] __netif_receive_skb+0x28/0x80
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656610] netif_receive_skb_internal+0x54/0x100
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656612] napi_gro_receive+0xf8/0x170
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656616] receive_buf+0x15c/0x500 [virtio_net]
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656619] virtnet_poll+0x170/0x338 [virtio_net]
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656621] net_rx_action+0x178/0x400
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656624] __do_softirq+0x11c/0x31c
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656627] irq_exit+0x11c/0x128
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656630] __handle_domain_irq+0x6c/0xc0
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656631] gic_handle_irq+0x6c/0x170
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656633] el1_irq+0xb8/0x140
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656636] finish_task_switch+0x74/0x240
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656639] __schedule+0x290/0x938
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656641] schedule+0x2c/0x88
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656643] smpboot_thread_fn+0x1dc/0x1e8
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656645] kthread+0x134/0x138
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.656647] ret_from_fork+0x10/0x18
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.789048] x17: 0000000000000000 x16: 0000000000000000
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.790260] x15: 0000000000000000 x14: 0000000000000c00
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.791457] x13: 0000000000000000 x12: 0000000000000000
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.792669] x11: ffff000008a6e0c0 x10: 0000000000000b80
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.793865] x9 : 0000000000007000 x8 : 0000000000000001
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.795060] x7 : 0000000000000000 x6 : ffff800166a79600
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.796265] x5 : ffff800166a7bab0 x4 : 000000000000d90d
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.797466] x3 : 0000000000000000 x2 : ffff000000c533d0
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.798658] x1 : ffff800166a7ba00 x0 : ffff000000c53400
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.799850] Call trace:
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.800675] __srcu_read_lock+0x18/0x58
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.801687] dm_make_request+0x30/0xb0 [dm_mod]
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.802800] generic_make_request+0x174/0x350
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.803880] submit_bio+0x5c/0x198
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.804888] submit_bh_wbc+0x198/0x210
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.805889] submit_bh+0x3c/0x50
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.806824] jbd2_journal_commit_transaction+0x6a4/0x1950 [jbd2]
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.808144] kjournald2+0xe4/0x2e8 [jbd2]
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.809203] kthread+0x134/0x138
Apr 20 05:38:39 pc-openeuler-1 kernel: [229515.811146] ret_from_fork+0x10/0x18
Apr 20 05:38:39 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 05:38:39 pc-openeuler-1 chronyd[1307]: Forward time jump detected!
Apr 20 05:38:39 pc-openeuler-1 chronyd[1307]: Can't synchronise: no selectable sources
Apr 20 05:38:39 pc-openeuler-1 chronyd[1307]: Forward time jump detected!
Apr 20 05:38:43 pc-openeuler-1 multipathd[1236]: sda: unusable path

Apr 20 05:42:19 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 05:42:19 pc-openeuler-1 multipathd[1236]: path checkers took longer than 21 seconds, consider increasing max_polling_interval
Apr 20 05:43:23 pc-openeuler-1 kernel: [229821.789426] watchdog: BUG: soft lockup - CPU#0 stuck for 73s! [gmain:1387]
Apr 20 05:43:24 pc-openeuler-1 kernel: [229821.916066] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
Apr 20 05:43:24 pc-openeuler-1 kernel: [229821.968490] watchdog: BUG: soft lockup - CPU#3 stuck for 59s! [kworker/3:0:11867]
Apr 20 05:43:24 pc-openeuler-1 kernel: [229821.968491] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968539] dm_region_hash dm_log dm_mod
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968546] CPU: 3 PID: 11867 Comm: kworker/3:0 Kdump: loaded Tainted: G C L 4.19.90-vhulk2001.1.0.0026.aarch64 #1
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968547] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968563] Workqueue: events drm_fb_helper_dirty_work
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968565] pstate: 00000005 (nzcv daif -PAN -UAO)
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968573] pc : vp_notify+0x28/0x38 [virtio_pci]
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968577] lr : virtqueue_kick+0x3c/0x78 [virtio_ring]
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968578] sp : ffff0000110cfb30
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968579] x29: ffff0000110cfb30 x28: 0000000000000000
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968581] x27: ffff0000110cfc70 x26: 0000000000000001
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968583] x25: 0000000000480020 x24: 0000000000000001
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968584] x23: ffff800161c6b100 x22: 0000000000000010
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968585] x21: 0000000000000000 x20: ffff80016c6f6000
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968587] x19: ffff80016c6f6000 x18: 0000000000000000
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968588] x17: 0000000000000000 x16: 0000000000000000
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968590] x15: 0000000000000001 x14: ffff000008ade210
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968591] x13: 0000000000000000 x12: 0000000000000000
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968593] x11: ffff80016de06800 x10: 0000000000000040
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968595] x9 : ffff000009296320 x8 : ffff0000110cfc48
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968597] x7 : 000000000000b178 x6 : ffff7fe000587180
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968598] x5 : 000000000035fd66 x4 : ffff8001ff720ba0
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968600] x3 : 0000000000000001 x2 : 0000000000000040
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968602] x1 : ffff00000fda3000 x0 : 0000000000000000
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968605] Call trace:
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968609] vp_notify+0x28/0x38 [virtio_pci]
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968612] virtqueue_kick+0x3c/0x78 [virtio_ring]
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968620] virtio_gpu_queue_ctrl_buffer_locked+0x180/0x248 [virtio_gpu]
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968625] virtio_gpu_queue_ctrl_buffer+0x50/0x78 [virtio_gpu]
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968630] virtio_gpu_cmd_resource_flush+0x8c/0xb0 [virtio_gpu]
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968635] virtio_gpu_surface_dirty+0x60/0x110 [virtio_gpu]
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968640] virtio_gpu_framebuffer_surface_dirty+0x34/0x48 [virtio_gpu]
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968642] drm_fb_helper_dirty_work+0x174/0x1c0
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968646] process_one_work+0x1b0/0x448
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968649] worker_thread+0x54/0x468
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968651] kthread+0x134/0x138
Apr 20 05:43:25 pc-openeuler-1 kernel: [229821.968654] ret_from_fork+0x10/0x18
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.024965] Modules linked in: binfmt_misc ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ebtable_nat ip6table_nat nf_nat_ipv6 ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat_ipv4 nf_nat iptable_mangle iptable_raw iptable_security nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c ip_set nfnetlink ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter vfat fat dm_multipath aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce sha2_ce sha256_arm64 sha1_ce ofpart cmdlinepart sg virtio_input cfi_cmdset_0001 virtio_balloon cfi_probe cfi_util gen_probe physmap_of chipreg uio_pdrv_genirq mtd uio sch_fq_codel ip_tables ext4 mbcache jbd2 sd_mod virtio_net net_failover virtio_scsi virtio_gpu failover virtio_mmio virtio_pci virtio_ring virtio dm_mirror
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.026328] rcu: #0110-...!: (14 ticks this GP) idle=93e/1/0x4000000000000004 softirq=565772/565772 fqs=13
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.028036] dm_region_hash dm_log dm_mod
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.042274] rcu: #011(detected by 2, t=17163 jiffies, g=1595741, q=160)
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.043605] CPU: 0 PID: 1387 Comm: gmain Kdump: loaded Tainted: G C L 4.19.90-vhulk2001.1.0.0026.aarch64 #1
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.046619] Sending NMI from CPU 2 to CPUs 0:
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.048422] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.128003] pstate: 40000005 (nZcv daif -PAN -UAO)
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.129565] pc : __do_softirq+0xa0/0x31c
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.130982] lr : __do_softirq+0x64/0x31c
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.132468] sp : ffff00000800fee0
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.133816] x29: ffff00000800fee0 x28: 0000000000000082
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.135457] x27: ffff000008f5cd80 x26: ffff000008010000
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.137076] x25: ffff000008000000 x24: ffff800160039800
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.138703] x23: ffff0000129efec0 x22: 0000000000000000
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.140304] x21: 0000000000000000 x20: 0000000000000003
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.141930] x19: ffff80016863ba00 x18: 0000000000000000
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.143665] x17: 0000000000000000 x16: 0000000000000000
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.145291] x15: 0000000000000000 x14: 0000000000000c00
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.146909] x13: 0000000000000000 x12: 0000000000000000
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.148537] x11: ffff000008a6e0c0 x10: 0000000000000040
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.150151] x9 : 0000000000000000 x8 : 0000000000000000
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.151768] x7 : ffff80016033b8a8 x6 : 00000ad6d8991bb0
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.153388] x5 : 00ffffffffffffff x4 : 0000000000000015
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.155010] x3 : 0000cfffe208b6ac x2 : 00008001f6730000
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.156633] x1 : 00000000000000e0 x0 : ffff000008f5cd80
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.158236] Call trace:
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.159436] __do_softirq+0xa0/0x31c
Apr 20 05:43:25 pc-openeuler-1 kernel: [229822.160796] irq_exit+0x11c/0x128
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.162120] __handle_domain_irq+0x6c/0xc0
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.163543] gic_handle_irq+0x6c/0x170
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.164910] el0_irq_naked+0x50/0x58
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630514] NMI backtrace for cpu 0
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630516] CPU: 0 PID: 1387 Comm: gmain Kdump: loaded Tainted: G C L 4.19.90-vhulk2001.1.0.0026.aarch64 #1
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630517] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630518] pstate: 40000005 (nZcv daif -PAN -UAO)
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630518] pc : __do_softirq+0xa0/0x31c
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630519] lr : __do_softirq+0x64/0x31c
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630520] sp : ffff00000800fee0
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630521] x29: ffff00000800fee0 x28: 0000000000000082
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630523] x27: ffff000008f5cd80 x26: ffff000008010000
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630524] x25: ffff000008000000 x24: ffff800160039800
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630526] x23: ffff0000129efec0 x22: 0000000000000000
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630528] x21: 0000000000000000 x20: 0000000000000003
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630529] x19: ffff80016863ba00 x18: 0000000000000000
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630530] x17: 0000000000000000 x16: 0000000000000000
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630532] x15: 0000000000000000 x14: 0000000000000c00
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630533] x13: 0000000000000000 x12: 0000000000000000
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630535] x11: ffff000008a6e0c0 x10: 0000000000000040
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630536] x9 : 0000000000000000 x8 : 0000000000000000
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630537] x7 : ffff80016033b8a8 x6 : 00000ad6d8991bb0
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630539] x5 : 00ffffffffffffff x4 : 0000000000000015
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630540] x3 : 0000cfffe208b6ac x2 : 00008001f6730000
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630541] x1 : 00000000000000e0 x0 : ffff000008f5cd80
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630543] Call trace:
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630543] __do_softirq+0xa0/0x31c
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630544] irq_exit+0x11c/0x128
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630545] __handle_domain_irq+0x6c/0xc0
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630546] gic_handle_irq+0x6c/0x170
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.630546] el0_irq_naked+0x50/0x58
Apr 20 05:43:26 pc-openeuler-1 kernel: [229822.631403] rcu: rcu_sched kthread starved for 15508 jiffies! g1595741 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=2
Apr 20 05:43:26 pc-openeuler-1 kernel: [229827.024028] rcu: RCU grace-period kthread stack dump:
Apr 20 05:43:26 pc-openeuler-1 kernel: [229827.025419] rcu_sched I 0 10 2 0x00000028
Apr 20 05:43:26 pc-openeuler-1 kernel: [229827.026882] Call trace:
Apr 20 05:43:26 pc-openeuler-1 kernel: [229827.027877] __switch_to+0xe4/0x148
Apr 20 05:43:26 pc-openeuler-1 kernel: [229827.029023] __schedule+0x28c/0x938
Apr 20 05:43:26 pc-openeuler-1 kernel: [229827.030191] schedule+0x2c/0x88
Apr 20 05:43:26 pc-openeuler-1 kernel: [229827.031373] schedule_timeout+0xa0/0x468
Apr 20 05:43:26 pc-openeuler-1 kernel: [229827.032594] rcu_gp_kthread+0x1e8/0x320
Apr 20 05:43:26 pc-openeuler-1 kernel: [229827.033743] kthread+0x134/0x138
Apr 20 05:43:26 pc-openeuler-1 kernel: [229827.034840] ret_from_fork+0x10/0x18
Apr 20 05:43:26 pc-openeuler-1 multipathd[1236]: sda: unusable path

Apr 20 05:55:36 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 05:55:40 pc-openeuler-1 systemd[1]: Starting dnf makecache...
Apr 20 05:55:41 pc-openeuler-1 multipathd[1236]: sda: unusable path
Apr 20 05:55:43 pc-openeuler-1 dnf[12134]: Metadata cache refreshed recently.
Apr 20 05:55:43 pc-openeuler-1 systemd[1]: dnf-makecache.service: Succeeded.
Apr 20 05:55:43 pc-openeuler-1 systemd[1]: Started dnf makecache.
Apr 20 05:55:43 pc-openeuler-1 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dnf-makecache comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 05:55:43 pc-openeuler-1 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dnf-makecache comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 05:55:46 pc-openeuler-1 multipathd[1236]: sda: unusable path

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 22 02:31:21 ...
kernel:[ 9968.868449] watchdog: BUG: soft lockup - CPU#3 stuck for 39s! [kworker/3:1:47]

Does this VM even use a GPU? The errors always come from the virtio-gpu module. Try running without loading the following ko and see whether that module is what is causing the problem:
drivers/gpu/drm/virtio/virtio-gpu.ko
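
For reference, instead of deleting the file one could also keep the module from loading via a modprobe blacklist — a minimal sketch, assuming the module is registered as virtio_gpu and the config file name below is just an example:

echo "blacklist virtio_gpu"          >  /etc/modprobe.d/virtio-gpu-blacklist.conf
echo "install virtio_gpu /bin/false" >> /etc/modprobe.d/virtio-gpu-blacklist.conf
dracut -f                # rebuild the initramfs in case the module is packed into it
reboot
lsmod | grep virtio_gpu  # after reboot this should print nothing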

Thanks for the tip above. I tried it and located the virtio directory, but it only contains two files, Kconfig and Makefile — there is no virtio-gpu.ko file like you mentioned.

[root@pc-openeuler-1 virtio]# pwd
/usr/src/kernels/4.19.90-vhulk2001.1.0.0026.aarch64/drivers/gpu/drm/virtio
[root@pc-openeuler-1 virtio]# 
[root@pc-openeuler-1 virtio]# ls -l
total 8
-rw-r--r--. 1 root root 259 Feb  7 12:48 Kconfig
-rw-r--r--. 1 root root 471 Feb  7 12:48 Makefile
[root@pc-openeuler-1 virtio]# cat Kconfig 
config DRM_VIRTIO_GPU
	tristate "Virtio GPU driver"
	depends on DRM && VIRTIO && MMU
	select DRM_KMS_HELPER
	select DRM_TTM
	help
	   This is the virtual GPU driver for virtio.  It can be used with
	   QEMU based VMMs (like KVM or Xen).

	   If unsure say M.
[root@pc-openeuler-1 virtio]# 
[root@pc-openeuler-1 virtio]# cat Makefile 
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for the drm device driver.  This driver provides support for the
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.

virtio-gpu-y := virtgpu_drv.o virtgpu_kms.o virtgpu_drm_bus.o virtgpu_gem.o \
	virtgpu_fb.o virtgpu_display.o virtgpu_vq.o virtgpu_ttm.o \
	virtgpu_fence.o virtgpu_object.o virtgpu_debugfs.o virtgpu_plane.o \
	virtgpu_ioctl.o virtgpu_prime.o

obj-$(CONFIG_DRM_VIRTIO_GPU) += virtio-gpu.o
[root@pc-openeuler-1 virtio]# 
[root@pc-openeuler-1 virtio]#
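
Side note: /usr/src/kernels/... is the kernel headers/build tree, which only carries Kconfig and Makefile; the compiled module is installed under /lib/modules. A quick way to locate it (a sketch, assuming the driver was built as a module):

find /lib/modules/$(uname -r) -name 'virtio-gpu.ko'
modinfo virtio_gpu        # the "filename:" field shows where the installed .ko lives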

@Xie XiuQi @solarhu @木得感情的openEuler机器人 @openeuler-ci-bot

I've found the virtio we discussed earlier — is this the one?

The Message appeared again, but from 3 p.m. this afternoon until now it has only happened this once.

[root@pc-openeuler-1 log]# 
Message from syslogd@pc-openeuler-1 at Apr 22 20:01:57 ...
 kernel:[72937.931100] watchdog: BUG: soft lockup - CPU#0 stuck for 34s! [kworker/0:0:6057]

@coding

We still need to investigate on the back end, so please don't tear down your environment for now; keep it as it is if possible. Thanks!

OK, nothing has been touched! The connection just dropped on its own again while I wasn't doing anything — this thing is really putting up a fight.

@Xie XiuQi

[root@pc-openeuler-1 log]# 
Message from syslogd@pc-openeuler-1 at Apr 22 22:02:17 ...
 kernel:[80173.367799] watchdog: BUG: soft lockup - CPU#1 stuck for 29s! [tuned:1810]

Message from syslogd@pc-openeuler-1 at Apr 22 22:02:18 ...
 kernel:[80177.551759] watchdog: BUG: soft lockup - CPU#2 stuck for 34s! [systemd-journal:4081]

Message from syslogd@pc-openeuler-1 at Apr 22 22:02:28 ...
 kernel:[80179.130809] watchdog: BUG: soft lockup - CPU#3 stuck for 34s! [irqbalance:1309]

Message from syslogd@pc-openeuler-1 at Apr 22 22:02:29 ...
 kernel:[80185.486994] watchdog: BUG: soft lockup - CPU#0 stuck for 45s! [sshd:4570]

Socket error Event: 32 Error: 10053.
Connection closing...Socket close.

Connection closed by foreign host.
[root@pc-openeuler-1 ~]# 
Socket error Event: 32 Error: 10053.
Connection closing...Socket close.

Connection closed by foreign host.

Disconnected from remote host at 23:30:29.

Two suspicious IPs were found in there:

IP: 185.209.85.222

Location: St. Petersburg, Russia

IP: 162.159.200.123

Location: United States

[root@pc-openeuler-1 log]# tail -n 20 messages-20200422 
Apr 22 03:01:59 pc-openeuler-1 kernel: [11809.107186]  kthread+0x134/0x138
Apr 22 03:01:59 pc-openeuler-1 kernel: [11809.108846]  ret_from_fork+0x10/0x18
Apr 22 03:01:59 pc-openeuler-1 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=sysstat-collect comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 03:01:59 pc-openeuler-1 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=sysstat-collect comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 03:01:59 pc-openeuler-1 systemd[1]: sysstat-collect.service: Succeeded.
Apr 22 03:01:59 pc-openeuler-1 systemd[1]: Started system activity accounting tool.
Apr 22 03:01:59 pc-openeuler-1 chronyd[1306]: Forward time jump detected!
Apr 22 03:01:59 pc-openeuler-1 chronyd[1306]: Can't synchronise: no selectable sources
Apr 22 03:05:13 pc-openeuler-1 chronyd[1306]: Selected source 185.209.85.222
Apr 22 03:09:33 pc-openeuler-1 chronyd[1306]: Selected source 162.159.200.123
Apr 22 03:10:36 pc-openeuler-1 systemd[1]: Starting system activity accounting tool...
Apr 22 03:10:36 pc-openeuler-1 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=sysstat-collect comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 03:10:36 pc-openeuler-1 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=sysstat-collect comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 03:10:36 pc-openeuler-1 systemd[1]: sysstat-collect.service: Succeeded.
Apr 22 03:10:37 pc-openeuler-1 systemd[1]: Started system activity accounting tool.
Apr 22 03:20:32 pc-openeuler-1 systemd[1]: Starting system activity accounting tool...
Apr 22 03:20:42 pc-openeuler-1 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=sysstat-collect comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 03:20:42 pc-openeuler-1 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=sysstat-collect comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 22 03:20:42 pc-openeuler-1 systemd[1]: sysstat-collect.service: Succeeded.
Apr 22 03:20:42 pc-openeuler-1 systemd[1]: Started system activity accounting tool.
[root@pc-openeuler-1 log]#
[root@pc-openeuler-1 log]# 
Message from syslogd@pc-openeuler-1 at Apr 23 00:10:57 ...
 kernel:[87932.939881] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kworker/3:0:7246]

@coding

chronyd is the NTP time-synchronization service, so those two addresses are simply the NTP sources it selected.

OK, got it!
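
For reference, one way to confirm that those two addresses are just chrony's NTP sources and nothing suspicious (a minimal check, assuming the standard chrony tools are installed):

chronyc sources -v                          # lists the NTP servers chronyd is using
grep -E '^(server|pool)' /etc/chrony.conf   # the configured pools/servers they come from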

@Xie XiuQi


@Xie XiuQi @solarhu @木得感情的openEuler机器人 @openeuler-ci-bot

I've found the virtio we discussed earlier — is this the one?

@coding
Move the following file on the machine to another location and then reboot, so that the system cannot load the virtio-gpu.ko module at startup, and see whether the problem still occurs:
/lib/modules/<your kernel version>/kernel/drivers/gpu/drm/virtio/virtio-gpu.ko

OK, I'll give that a try right away!

@wangxiongfeng

[root@pc-openeuler-1 ~]# uname -r
Message from syslogd@pc-openeuler-1 at Apr 23 20:33:38 ...
kernel:[161292.938664] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [systemd-logind:1342]

Just checking the version already triggered the Message again... no luck.

@wangxiongfeng

[root@pc-openeuler-1 ~]# /lib/modules/4.19.90-vhulk2001.1.0.0026.aarch64/kernel/drivers/gpu/drm/virtio/virtio-gpu.ko
-bash: /lib/modules/4.19.90-vhulk2001.1.0.0026.aarch64/kernel/drivers/gpu/drm/virtio/virtio-gpu.ko: Permission denied
[root@pc-openeuler-1 ~]#
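
The "Permission denied" above is just bash refusing to execute the .ko path as a command. The intended step was to move the file out of the module tree and reboot, roughly (a sketch of what is done further below):

mv /lib/modules/$(uname -r)/kernel/drivers/gpu/drm/virtio/virtio-gpu.ko /root/
depmod -a        # optional: refresh the module dependency maps after removing the file
reboot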

@wangxiongfeng

The machine has been rebooted.

[root@pc-openeuler-1 ~]# reboot

Socket error Event: 32 Error: 10053.
Connection closing...Socket close.

Connection closed by foreign host.

Disconnected from remote host at 20:59:05.

Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.

Authorized users only. All activities may be monitored and reported.

WARNING! The remote SSH server rejected X11 forwarding request.

Authorized users only. All activities may be monitored and reported.
Last login: Thu Apr 23 20:25:57 2020 from
awk: cmd. line:1: (FILENAME=- FNR=3) fatal: division by zero attempted

Welcome to 4.19.90-vhulk2001.1.0.0026.aarch64

System information as of time: Thu Apr 23 21:08:26 CST 2020

System load: 5.16
Processes: 125
Memory used: 7.1%
Swap used:
Usage On: 14%
IP address: 15.0.0.12
Users online: 1

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 23 21:11:15 ...
kernel:[ 479.381096] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [in:imjournal:1332]

@wangxiongfeng

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 23 21:11:15 ...
kernel:[ 479.381096] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [in:imjournal:1332]
^C
[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 23 22:00:37 ...
kernel:[ 3445.168882] watchdog: BUG: soft lockup - CPU#0 stuck for 28s! [crond:2179]

Message from syslogd@pc-openeuler-1 at Apr 23 22:02:17 ...
kernel:[ 3517.796115] watchdog: BUG: soft lockup - CPU#3 stuck for 25s! [kworker/3:0:2139]

Message from syslogd@pc-openeuler-1 at Apr 23 22:02:19 ...
kernel:[ 3520.299109] watchdog: BUG: soft lockup - CPU#0 stuck for 27s! [migration/0:12]

Message from syslogd@pc-openeuler-1 at Apr 23 22:12:28 ...
kernel:[ 4049.826671] watchdog: BUG: soft lockup - CPU#0 stuck for 32s! [in:imjournal:1332]

Message from syslogd@pc-openeuler-1 at Apr 23 22:12:30 ...
kernel:[ 4093.061935] watchdog: BUG: soft lockup - CPU#3 stuck for 37s! [kworker/3:0:2139]

Message from syslogd@pc-openeuler-1 at Apr 23 22:12:30 ...
kernel:[ 4096.234141] watchdog: BUG: soft lockup - CPU#2 stuck for 41s! [sadc:2206]

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 23 23:10:33 ...
kernel:[ 7629.456907] watchdog: BUG: soft lockup - CPU#3 stuck for 21s! [kworker/3:2:2229]

Message from syslogd@pc-openeuler-1 at Apr 23 23:21:03 ...
kernel:[ 8241.821174] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [kworker/0:3:2212]

[root@pc-openeuler-1 virtio]# ls -l
Message from syslogd@pc-openeuler-1 at Apr 24 00:02:18 ...
kernel:[10706.901660] watchdog: BUG: soft lockup - CPU#3 stuck for 52s! [kworker/3:0:2139]

@wangxiongfeng

[root@pc-openeuler-1 virtio]# pwd
/lib/modules/4.19.90-vhulk2001.1.0.0026.aarch64/kernel/drivers/gpu/drm/virtio
[root@pc-openeuler-1 virtio]# ls -l
total 120
-rw-r--r--. 1 root root 122426 Feb 7 12:50 virtio-gpu.ko
[root@pc-openeuler-1 virtio]# mv virtio-gpu.ko /root/
[root@pc-openeuler-1 virtio]# ls
[root@pc-openeuler-1 virtio]#

[root@pc-openeuler-1 ~]# reboot

Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.

Authorized users only. All activities may be monitored and reported.

WARNING! The remote SSH server rejected X11 forwarding request.

Authorized users only. All activities may be monitored and reported.
Last login: Thu Apr 23 21:08:26 2020 from
awk: cmd. line:1: (FILENAME=- FNR=3) fatal: division by zero attempted

Welcome to 4.19.90-vhulk2001.1.0.0026.aarch64

System information as of time: Fri Apr 24 00:28:59 CST 2020

System load: 5.39
Processes: 120
Memory used: 6.4%
Swap used:
Usage On: 14%
IP address: 15.0.0.12
Users online: 1

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 24 00:41:08 ...
kernel:[ 1362.851796] watchdog: BUG: soft lockup - CPU#3 stuck for 33s! [kworker/3:2:159]

Is the reported call trace still inside virtio-gpu.ko?

Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.

Authorized users only. All activities may be monitored and reported.

WARNING! The remote SSH server rejected X11 forwarding request.

Authorized users only. All activities may be monitored and reported.
Last login: Fri Apr 24 00:28:58 2020 from
awk: cmd. line:1: (FILENAME=- FNR=3) fatal: division by zero attempted

Welcome to 4.19.90-vhulk2001.1.0.0026.aarch64

System information as of time: Fri Apr 24 15:43:06 CST 2020

System load: 1.61
Processes: 118
Memory used: 6.0%
Swap used:
Usage On: 14%
IP address: 15.0.0.12
Users online: 1

[root@pc-openeuler-1 ~]#
Socket error Event: 32 Error: 10053.
Connection closing...Socket close.

Connection closed by foreign host.

Disconnected from remote host() at 16:01:16.

Is the reported call stack still inside virtio-gpu.ko?

virtio-gpu.ko has already been moved to the /root/ directory.
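
Side note, not something done in this thread: instead of moving virtio-gpu.ko out of the module tree, the module could be blacklisted through modprobe.d, which survives kernel package updates and is easy to revert. A minimal sketch, assuming the stock dracut-built initramfs on this openEuler image (the file name below is arbitrary):

echo "blacklist virtio_gpu"          >  /etc/modprobe.d/disable-virtio-gpu.conf
echo "install virtio_gpu /bin/false" >> /etc/modprobe.d/disable-virtio-gpu.conf   # also block explicit modprobe
dracut -f     # rebuild the initramfs so the blacklist applies at early boot as well
reboot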

@wangxiongfeng

[root@pc-openeuler-1 ~]# tail -n 20 /var/log/messages
Apr 24 16:33:59 pc-openeuler-1 sshd[3638]: User child is on pid 3655
Apr 24 16:33:59 pc-openeuler-1 audit[3655]: CRYPTO_KEY_USER pid=3655 uid=0 auid=0 ses=5 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=server fp=SHA256:a3:18:bf:56:e1:cf:2d:a4:25:1d:6d:67:bc:8a:29:d3:7a:6f:c0:b3:69:b8:70:a5:ef:23:28:7d:25:68:a0:65 direction=? spid=3655 suid=0 exe="/usr/sbin/sshd" hostname=? addr=? terminal=? res=success'
Apr 24 16:33:59 pc-openeuler-1 audit[3655]: CRYPTO_KEY_USER pid=3655 uid=0 auid=0 ses=5 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=server fp=SHA256:2c:0f:31:bf:ff:bb:5a:9d:67:7d:23:12:3e:ac:c3:a4:da:75:5e:74:31:d9:18:69:59:74:b5:7f:6f:9a:42:f3 direction=? spid=3655 suid=0 exe="/usr/sbin/sshd" hostname=? addr=? terminal=? res=success'
Apr 24 16:33:59 pc-openeuler-1 audit[3655]: CRYPTO_KEY_USER pid=3655 uid=0 auid=0 ses=5 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=server fp=SHA256:42:10:b9:7a:3e:c0:d7:1c:08:d7:84:a7:8f:e0:8b:7f:a2:87:de:d5:95:ea:4e:95:a5:28:2b:b5:bc:5a:ea:22 direction=? spid=3655 suid=0 exe="/usr/sbin/sshd" hostname=? addr=? terminal=? res=success'
Apr 24 16:33:59 pc-openeuler-1 audit[3655]: CRED_ACQ pid=3655 uid=0 auid=0 ses=5 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_faillock,pam_unix acct="root" exe="/usr/sbin/sshd" hostname= addr= terminal=ssh res=success'
Apr 24 16:33:59 pc-openeuler-1 audit[3638]: USER_LOGIN pid=3638 uid=0 auid=0 ses=5 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login id=0 exe="/usr/sbin/sshd" hostname=? addr= terminal=/dev/pts/0 res=success'
Apr 24 16:33:59 pc-openeuler-1 audit[3638]: USER_START pid=3638 uid=0 auid=0 ses=5 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login id=0 exe="/usr/sbin/sshd" hostname=? addr= terminal=/dev/pts/0 res=success'
Apr 24 16:33:59 pc-openeuler-1 sshd[3655]: Starting session: shell on pts/0 for root from id 0
Apr 24 16:33:59 pc-openeuler-1 audit[3638]: CRYPTO_KEY_USER pid=3638 uid=0 auid=0 ses=5 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=server fp=SHA256:42:10:b9:7a:3e:c0:d7:1c:08:d7:84:a7:8f:e0:8b:7f:a2:87:de:d5:95:ea:4e:95:a5:28:2b:b5:bc:5a:ea:22 direction=? spid=3656 suid=0 exe="/usr/sbin/sshd" hostname=? addr=? terminal=? res=success'
Apr 24 16:34:00 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 16:34:05 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 16:34:10 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 16:34:15 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 16:34:20 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 16:34:25 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 16:34:30 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 16:34:37 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 16:34:41 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 16:34:46 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 16:34:51 pc-openeuler-1 multipathd[1228]: sda: unusable path
[root@pc-openeuler-1 ~]#

This time it's multipathd.
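
Given that this guest has only a single 80 GB disk and no cloud disks, multipathd arguably has nothing to manage here and is just flooding the log with "sda: unusable path". A rough sketch of how it could be silenced (the blacklist stanza is an assumption about the local /etc/multipath.conf, not taken from this machine):

systemctl disable --now multipathd    # simplest: stop the daemon on this single-disk guest
# or keep the daemon but blacklist the local disk by appending to /etc/multipath.conf:
#   blacklist {
#       devnode "^sda$"
#   }
# and then restart the service:
systemctl restart multipathd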

[root@pc-openeuler-1 ~]# ps -ef | grep watchdogd
root 46 2 0 00:18 ? 00:00:00 [watchdogd]
root 3861 3656 0 16:41 pts/0 00:00:00 grep watchdogd
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]# ps -ef | grep multipathd
root 1228 1 11 00:18 ? 01:51:39 /sbin/multipathd -d -s
root 3886 3656 20 16:42 pts/0 00:00:00 grep multipathd
[root@pc-openeuler-1 ~]#

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 24 18:31:00 ...
kernel:[65543.079046] watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [bash:4051]

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 24 18:31:00 ...
kernel:[65543.079046] watchdog: BUG: soft lockup - CPU#0 stuck for 24s! [bash:4051]
[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 24 19:52:11 ...
kernel:[70424.599990] watchdog: BUG: soft lockup - CPU#3 stuck for 27s! [kworker/3:2:3793]

Message from syslogd@pc-openeuler-1 at Apr 24 20:23:21 ...
kernel:[72267.448814] watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [tuned:1760]

Message from syslogd@pc-openeuler-1 at Apr 24 20:23:36 ...
kernel:[72279.307354] watchdog: BUG: soft lockup - CPU#0 stuck for 34s! [kworker/0:0:3637]
[root@pc-openeuler-1 ~]#

I waited quite a while, and the Message… still keeps appearing.
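
A note on the console spam itself: these "Message from syslogd" lines are rsyslog broadcasting kernel emergency-level messages to every logged-in terminal. If they get in the way while debugging, the broadcast rule can be commented out; this is only a sketch, the exact line depends on the local /etc/rsyslog.conf (on EL-style systems it usually looks like the one below), and it hides the messages without fixing the lockups:

# in /etc/rsyslog.conf, comment out the emergency broadcast rule, e.g.:
#   *.emerg    :omusrmsg:*
systemctl restart rsyslog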

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 24 21:41:38 ...
kernel:[76946.074309] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [systemd-cgroups:4351]

@Xie XiuQi @solarhu @木得感情的openEuler机器人 @openeuler-ci-bot @wangxiongfeng

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 24 21:41:38 ...
kernel:[76946.074309] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [systemd-cgroups:4351]
^C
[root@pc-openeuler-1 ~]# tail -n 20 /var/log/messages
Apr 24 21:44:37 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:44:42 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:44:47 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:44:58 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:45:05 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:45:10 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:45:15 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:45:20 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:45:25 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:45:30 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:45:35 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:45:40 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:45:45 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:45:51 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:45:56 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:46:04 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:46:07 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:46:14 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:46:18 pc-openeuler-1 multipathd[1228]: sda: unusable path
Apr 24 21:46:23 pc-openeuler-1 multipathd[1228]: sda: unusable path
[root@pc-openeuler-1 ~]# ps -ef | grep multipathd
root 1228 1 10 00:18 ? 02:17:17 /sbin/multipathd -d -s
root 4393 3656 0 21:46 pts/0 00:00:00 grep multipathd
[root@pc-openeuler-1 ~]# ps -fp 1228
UID PID PPID C STIME TTY TIME CMD
root 1228 1 10 00:18 ? 02:17:20 /sbin/multipathd -d -s
[root@pc-openeuler-1 ~]# pkill -9 1228
[root@pc-openeuler-1 ~]# ps -fp 1228
UID PID PPID C STIME TTY TIME CMD
root 1228 1 10 00:18 ? 02:17:20 /sbin/multipathd -d -s
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]# kill -9 1228
[root@pc-openeuler-1 ~]# ps -fp 1228
UID PID PPID C STIME TTY TIME CMD
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]# ps -ef | grep multipathd
root 4497 3656 0 21:48 pts/0 00:00:00 grep multipathd
[root@pc-openeuler-1 ~]#
[root@pc-openeuler-1 ~]#
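For the record, the first kill attempt above did nothing because pkill matches a process name pattern (or the full command line with -f), not a PID, so "pkill -9 1228" matched no process; kill operates on PIDs directly:

pkill -9 multipathd   # pkill/pgrep match by process name pattern
kill -9 1228          # kill takes a PID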

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 24 23:11:41 ...
kernel:[82373.304330] watchdog: BUG: soft lockup - CPU#3 stuck for 25s! [kworker/3:2:3793]

Message from syslogd@pc-openeuler-1 at Apr 24 23:32:27 ...
kernel:[83603.747884] watchdog: BUG: soft lockup - CPU#0 stuck for 25s! [kworker/0:1:4562]

Message from syslogd@pc-openeuler-1 at Apr 25 00:01:45 ...
kernel:[85384.344771] watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [kworker/3:0:4646]

Message from syslogd@pc-openeuler-1 at Apr 25 00:02:10 ...
kernel:[85386.883128] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:2:4605]

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 25 00:31:27 ...
kernel:[87179.239314] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kworker/3:0:4646]

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 25 00:31:27 ...
kernel:[87179.239314] watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kworker/3:0:4646]

Message from syslogd@pc-openeuler-1 at Apr 25 00:41:20 ...
kernel:[87765.829549] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [swapper/0:0]

Message from syslogd@pc-openeuler-1 at Apr 25 01:00:39 ...
kernel:[88941.816637] watchdog: BUG: soft lockup - CPU#0 stuck for 21s! [systemd-logind:3036]

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 25 19:30:39 ...
kernel:[155544.026690] watchdog: BUG: soft lockup - CPU#3 stuck for 30s! [kworker/3:0:6377]

Message from syslogd@pc-openeuler-1 at Apr 25 19:30:39 ...
kernel:[155544.036940] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [sadc:6393]

Socket error Event: 32 Error: 10053.
Connection closing...Socket close.

Connection closed by foreign host.

Disconnected from remote host() at 20:01:22.

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 25 23:30:57 ...
kernel:[169944.810557] watchdog: BUG: soft lockup - CPU#2 stuck for 24s! [gdbus:1349]

Message from syslogd@pc-openeuler-1 at Apr 25 23:31:40 ...
kernel:[169950.297758] watchdog: BUG: soft lockup - CPU#3 stuck for 29s! [kworker/3:0:6647]

Message from syslogd@pc-openeuler-1 at Apr 25 23:32:06 ...
kernel:[169950.718210] watchdog: BUG: soft lockup - CPU#0 stuck for 28s! [systemd:1]

[root@pc-openeuler-1 ~]#
Message from syslogd@pc-openeuler-1 at Apr 25 23:30:57 ...
kernel:[169944.810557] watchdog: BUG: soft lockup - CPU#2 stuck for 24s! [gdbus:1349]

Message from syslogd@pc-openeuler-1 at Apr 25 23:31:40 ...
kernel:[169950.297758] watchdog: BUG: soft lockup - CPU#3 stuck for 29s! [kworker/3:0:6647]

Message from syslogd@pc-openeuler-1 at Apr 25 23:32:06 ...
kernel:[169950.718210] watchdog: BUG: soft lockup - CPU#0 stuck for 28s! [systemd:1]

Message from syslogd@pc-openeuler-1 at Apr 25 23:51:01 ...
kernel:[171162.764170] watchdog: BUG: soft lockup - CPU#2 stuck for 21s! [irqbalance:1309]

Message from syslogd@pc-openeuler-1 at Apr 26 00:01:01 ...
kernel:[171758.866407] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [kworker/0:1:6745]

If you want to upload an attachment, there are two ways:
1) Share it on Google Docs or a similar platform and paste the link here.

2) Edit the body of the issue (the original post, not the comments); it should let you add attachments.

Thanks.

The soft lockup one-liners alone don't tell us much; please post the call stack information.

dmesg should contain the call stack for each soft lockup.
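
In case it helps, the full back-trace that goes with each one-line soft lockup message can usually be pulled straight from the kernel ring buffer or from /var/log/messages, e.g. (a sketch; the output file name is arbitrary):

dmesg -T | grep -B 2 -A 40 "soft lockup" > /root/softlockup-stacks.txt   # grab the stack dumps around each lockup
# or, since rsyslog already stores kernel messages:
grep -A 40 "soft lockup" /var/log/messages | less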

We are currently investigating the host. So far we have found that the CPU is a Kunpeng 916, and our internal testing on 916 environments has not shown a similar problem. What we are doing now:
1. Find out which host virtualization stack Peng Cheng Laboratory is using (our internal testing did not hit this problem) and compare the two setups.
2. Coordinate with the Peng Cheng colleagues to create a new VM on the same host and check whether the problem reproduces.
3. If it reproduces, shorten the hardware watchdog timeout so that a core file is generated, then continue the analysis.

@coding It would be best to configure the watchdog so that a soft lockup triggers a coredump; then we can see exactly where it is stuck. From that very first virtio-gpu stack, all the code tells us is the path virtio_gpu_queue_fenced_ctrl_buffer -> spin_lock(&vgdev->ctrlq.qlock) -> virtio_gpu_queue_ctrl_buffer_locked -> virtqueue_kick -> vp_notify -> iowrite16(vq->index, (void __iomem *)vq->priv); is it this iowrite16 that never returns? That access has to be emulated by the virtualization backend.
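
A minimal sketch of that watchdog/coredump setup inside the guest, assuming kexec-tools/kdump is installed on this image (service name and crash directory are the usual defaults, not verified here):

sysctl -w kernel.softlockup_panic=1                      # panic the kernel when the soft lockup watchdog fires
echo "kernel.softlockup_panic = 1" >> /etc/sysctl.conf   # keep it across reboots
systemctl enable --now kdump                             # kdump then captures a vmcore (by default under /var/crash) for offline analysis
sysctl -w kernel.watchdog_thresh=5                       # optionally catch shorter stalls (soft lockup fires at 2x this value; default is 10)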

We are currently investigating the host. So far we have found that the CPU is a Kunpeng 916, and our internal testing on 916 environments has not shown a similar problem. What we are doing now:
1. Find out which host virtualization stack Peng Cheng Laboratory is using (our internal testing did not hit this problem) and compare the two setups.
2. Coordinate with the Peng Cheng colleagues to create a new VM on the same host and check whether the problem reproduces.
3. If it reproduces, shorten the hardware watchdog timeout so that a core file is generated, then continue the analysis.

The CPU architecture in use is Kunpeng 920.

@solarhu

If you want to upload an attachment, there are two ways:
1) Share it on Google Docs or a similar platform and paste the link here.
2) Edit the body of the issue (the original post, not the comments); it should let you add attachments.
Thanks.

Thank you for the suggestion. I don't have access to Google, so instead I created a repository of my own:
https://gitee.com/striver619/pc-openEuler-messages
Open it and you can see the messages file clearly.

@Fred_Li

[screenshot]

At 11:54 this morning, a friend running Docker on a CentOS 7 VM he requested from the Peng Cheng Laboratory platform also suddenly hit this kernel soft lockup bug.

I'm wondering whether the problem lies in the virtualization layer of the Peng Cheng Laboratory platform?

I'm wondering whether the problem lies in the virtualization layer of the Peng Cheng Laboratory platform?

@coding Judging from the earlier openEuler analysis, that is possible. Have you modified anything related to the big lock in the qemu component? Could there be a deadlock inside QEMU?

@coding Judging from the earlier openEuler analysis, that is possible. Have you modified anything related to the big lock in the qemu component? Could there be a deadlock inside QEMU?

Now it's not just openEuler hitting this bug; it has shown up on CentOS as well.

@zhanghailiang

@coding Judging from the earlier openEuler analysis, that is possible. Have you modified anything related to the big lock in the qemu component? Could there be a deadlock inside QEMU?

How can I get in touch with the engineers at Peng Cheng Laboratory?

@zhanghailiang

Now it's not just openEuler hitting this bug; it has shown up on CentOS as well.

@zhanghailiang _lucky

@coding OK, I've seen it. You could look into the modifications made to the qemu component in the HOST OS.

@coding OK, I've seen it. You could look into the modifications made to the qemu component in the HOST OS.

On my side I only have access to the virtual machine, i.e. the guest;
there is no way for me to investigate the host.

@zhanghailiang

The bug has now also shown up on CentOS 7 with the arm64 kernel on the Kunpeng 920 architecture.

The bug has now also shown up on CentOS 7 with the arm64 kernel on the Kunpeng 920 architecture.

The bug has appeared again on CentOS 7 with the arm64 kernel on the Kunpeng 920 architecture.

The bug has appeared again on CentOS 7 with the arm64 kernel on the Kunpeng 920 architecture.

[screenshot]

@coding Did this happen while the virtual machine was running a heavy workload, or while it was idle?

Welcome to 4.19.90-vhulk2001.1.0.0026.aarch64

System information as of time: Fri May 8 15:16:17 CST 2020

System load: 0.93
Processes: 119
Memory used: 6.1%
Swap used:
Usage On: 17%
IP address: 15.0.0.12
Users online: 1

[root@pc-openeuler-1 ~]# cat /etc/os-release
NAME="openEuler"
VERSION="1.0 ()"
ID="openEuler"
VERSION_ID="1.0"
PRETTY_NAME="openEuler 1.0 ()"
ANSI_COLOR="0;31"

[root@pc-openeuler-1 ~]# lscpu
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: ARM
Model: 2
Model name: Cortex-A72
Stepping: r0p2
BogoMIPS: 100.00
NUMA node0 CPU(s): 0-3
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Vulnerable
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
[root@pc-openeuler-1 ~]# hostnamectl
Static hostname: pc-openeuler-1
Icon name: computer-vm
Chassis: vm
Machine ID: 9e1ff7da223f495288b145eea478b74f
Boot ID: e4f88181c5174a5db90b1640e9a9f4a0
Virtualization: kvm
Operating System: openEuler 1.0 ()
Kernel: Linux 4.19.90-vhulk2001.1.0.0026.aarch64
Architecture: arm64
[root@pc-openeuler-1 ~]# uname -a
Linux pc-openeuler-1 4.19.90-vhulk2001.1.0.0026.aarch64 #1 SMP Fri Feb 7 04:09:58 UTC 2020 aarch64 GNU/Linux
[root@pc-openeuler-1 ~]#

'abrt-cli status' timed out
[root@pc-centos-vm-1 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (AltArch)
[root@pc-centos-vm-1 ~]# lscpu
Architecture: aarch64
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Model: 2
BogoMIPS: 100.00
NUMA node0 CPU(s): 0-3
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
[root@pc-centos-vm-1 ~]# hostnamectl
[root@pc-centos-vm-1 ~]# hostnamectl
Static hostname: pc-centos-vm-1
Icon name: computer-vm
Chassis: vm
Machine ID: 65a539173aca43bca524c9829d298907
Boot ID: 8692b15488eb4c18bffbd866582d255a
Virtualization: kvm
Operating System: CentOS Linux 7 (AltArch)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 4.14.0-115.el7a.0.1.aarch64
Architecture: arm64
[root@pc-centos-vm-1 ~]# uname -a
Linux pc-centos-vm-1 4.14.0-115.el7a.0.1.aarch64 #1 SMP Sun Nov 25 20:54:21 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
[root@pc-centos-vm-1 ~]#
[root@pc-centos-vm-1 ~]#

@coding Did this happen while the virtual machine was running a heavy workload, or while it was idle?

The openEuler VM was idle.

@zhanghailiang

This is CentOS 7.6 on a Kunpeng 916:

[screenshot]
[screenshot]

It has been confirmed with Peng Cheng Laboratory that this is not an openEuler problem but a problem with the host OS. The host OS is NeoKylin, a fairly old version; Peng Cheng Laboratory has already forwarded it to the Kylin side for root-causing. Closing this issue.

zhanghailiang changed the task status from "To do" to "Done"
