src-openEuler / kernel (Watch 128 · Star 73 · Fork 330)
CVE-2023-52587

#I96G8W · Status: Done · Issue type: CVE and security issues
openeuler-ci-bot (owner) opened this issue on 2024-03-06 21:49
I. Vulnerability Information

Vulnerability ID: [CVE-2023-52587](https://nvd.nist.gov/vuln/detail/CVE-2023-52587)
Affected component: [kernel](https://gitee.com/src-openeuler/kernel)
Affected versions: 4.19.140, 4.19.194, 4.19.90, 5.10.0, 6.1.0, 6.1.14, 6.1.19, 6.1.5, 6.1.6, 6.1.8, 6.4.0
CVSS V2.0 score: BaseScore: 0.0 Low Vector: CVSS:2.0/

Description (upstream commit message):

In the Linux kernel, the following vulnerability has been resolved:

IB/ipoib: Fix mcast list locking

Releasing `priv->lock` while iterating `priv->multicast_list` in `ipoib_mcast_join_task()` opens a window for `ipoib_mcast_dev_flush()` to remove items while in the middle of the iteration. If the mcast is removed while the lock was dropped, the for loop spins forever, resulting in a hard lockup (as was reported on the RHEL 4.18.0-372.75.1.el8_6 kernel):

```
Task A (kworker/u72:2 below)          | Task B (kworker/u72:0 below)
--------------------------------------+--------------------------------------
ipoib_mcast_join_task(work)           | ipoib_ib_dev_flush_light(work)
  spin_lock_irq(&priv->lock)          |   __ipoib_ib_dev_flush(priv, ...)
  list_for_each_entry(mcast,          |     ipoib_mcast_dev_flush(dev = priv->dev)
      &priv->multicast_list, list)    |
    ipoib_mcast_join(dev, mcast)      |
      spin_unlock_irq(&priv->lock)    |
                                      |       spin_lock_irqsave(&priv->lock, flags)
                                      |       list_for_each_entry_safe(mcast, tmcast,
                                      |           &priv->multicast_list, list)
                                      |         list_del(&mcast->list);
                                      |         list_add_tail(&mcast->list, &remove_list)
                                      |       spin_unlock_irqrestore(&priv->lock, flags)
      spin_lock_irq(&priv->lock)      |
                                      |       ipoib_mcast_remove_list(&remove_list)
  (Here, `mcast` is no longer on the  |         list_for_each_entry_safe(mcast, tmcast,
  `priv->multicast_list` and we keep  |             remove_list, list)
  spinning on the `remove_list` of    |   >>>     wait_for_completion(&mcast->done)
  the other thread, which is blocked, |
  and the list is still valid on its  |
  stack.)                             |
```

Fix this by keeping the lock held and changing to GFP_ATOMIC to prevent eventual sleeps. Unfortunately we could not reproduce the lockup and confirm this fix, but based on the code review I think this fix should address such lockups.

```
crash> bc 31
PID: 747    TASK: ff1c6a1a007e8000    CPU: 31    COMMAND: kworker/u72:2
--
[exception RIP: ipoib_mcast_join_task+0x1b1]
RIP: ffffffffc0944ac1  RSP: ff646f199a8c7e00  RFLAGS: 00000002
RAX: 0000000000000000  RBX: ff1c6a1a04dc82f8  RCX: 0000000000000000
                            work (&priv->mcast_task{,.work})
RDX: ff1c6a192d60ac68  RSI: 0000000000000286  RDI: ff1c6a1a04dc8000
                                                   &mcast->list
RBP: ff646f199a8c7e90   R8: ff1c699980019420   R9: ff1c6a1920c9a000
R10: ff646f199a8c7e00  R11: ff1c6a191a7d9800  R12: ff1c6a192d60ac00
                                                   mcast
R13: ff1c6a1d82200000  R14: ff1c6a1a04dc8000  R15: ff1c6a1a04dc82d8
     dev                    priv (&priv->lock)     &priv->multicast_list (aka head)
ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- <NMI exception stack> ---
 #5 [ff646f199a8c7e00] ipoib_mcast_join_task+0x1b1 at ffffffffc0944ac1 [ib_ipoib]
 #6 [ff646f199a8c7e98] process_one_work+0x1a7 at ffffffff9bf10967

crash> rx ff646f199a8c7e68
ff646f199a8c7e68:  ff1c6a1a04dc82f8    <<< work = &priv->mcast_task.work

crash> list -hO ipoib_dev_priv.multicast_list ff1c6a1a04dc8000
(empty)

crash> ipoib_dev_priv.mcast_task.work.func,mcast_mutex.owner.counter ff1c6a1a04dc8000
  mcast_task.work.func = 0xffffffffc0944910 <ipoib_mcast_join_task>,
  mcast_mutex.owner.counter = 0xff1c69998efec000

crash> b 8
PID: 8    TASK: ff1c69998efec000    CPU: 33    COMMAND: kworker/u72:0
--
 #3 [ff646f1980153d50] wait_for_completion+0x96 at ffffffff9c7d7646
 #4 [ff646f1980153d90] ipoib_mcast_remove_list+0x56 at ffffffffc0944dc6 [ib_ipoib]
 #5 [ff646f1980153de8] ipoib_mcast_dev_flush+0x1a7 at ffffffffc09455a7 [ib_ipoib]
 #6 [ff646f1980153e58] __ipoib_ib_dev_flush+0x1a4 at ffffffffc09431a4 [ib_ipoib]
 #7 [ff---truncated---
```

Disclosure time: 2024-03-06 15:15:07
Issue creation time: 2024-03-06 21:49:58
Vulnerability details: https://nvd.nist.gov/vuln/detail/CVE-2023-52587

<details>
<summary>More references (click to expand)</summary>

| Source | Reference link | Origin link |
| ------ | -------------- | ----------- |
| 416baaa9-dc9f-4396-8d5f-8c081fb06d67 | https://git.kernel.org/stable/c/342258fb46d66c1b4c7e2c3717ac01e10c03cf18 | |
| 416baaa9-dc9f-4396-8d5f-8c081fb06d67 | https://git.kernel.org/stable/c/4c8922ae8eb8dcc1e4b7d1059d97a8334288d825 | |
| 416baaa9-dc9f-4396-8d5f-8c081fb06d67 | https://git.kernel.org/stable/c/4f973e211b3b1c6d36f7c6a19239d258856749f9 | |
| 416baaa9-dc9f-4396-8d5f-8c081fb06d67 | https://git.kernel.org/stable/c/5108a2dc2db5630fb6cd58b8be80a0c134bc310a | |
| 416baaa9-dc9f-4396-8d5f-8c081fb06d67 | https://git.kernel.org/stable/c/615e3adc2042b7be4ad122a043fc9135e6342c90 | |
| 416baaa9-dc9f-4396-8d5f-8c081fb06d67 | https://git.kernel.org/stable/c/7c7bd4d561e9dc6f5b7df9e184974915f6701a89 | |
| 416baaa9-dc9f-4396-8d5f-8c081fb06d67 | https://git.kernel.org/stable/c/ac2630fd3c90ffec34a0bfc4d413668538b0e8f2 | |
| 416baaa9-dc9f-4396-8d5f-8c081fb06d67 | https://git.kernel.org/stable/c/ed790bd0903ed3352ebf7f650d910f49b7319b34 | |
| suse_bugzilla | http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2023-52587 | https://bugzilla.suse.com/show_bug.cgi?id=1221082 |
| suse_bugzilla | https://www.cve.org/CVERecord?id=CVE-2023-52587 | https://bugzilla.suse.com/show_bug.cgi?id=1221082 |
| suse_bugzilla | https://lore.kernel.org/linux-cve-announce/2024030644-CVE-2023-52587-5479@gregkh/ | https://bugzilla.suse.com/show_bug.cgi?id=1221082 |
| suse_bugzilla | https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=4f973e211b3b | https://bugzilla.suse.com/show_bug.cgi?id=1221082 |
| redhat_bugzilla | https://lore.kernel.org/linux-cve-announce/2024030644-CVE-2023-52587-5479@gregkh/T | https://bugzilla.redhat.com/show_bug.cgi?id=2268331 |
| ubuntu | https://git.kernel.org/linus/4f973e211b3b1c6d36f7c6a19239d258856749f9 (6.8-rc1) | https://ubuntu.com/security/CVE-2023-52587 |
| ubuntu | https://git.kernel.org/stable/c/4c8922ae8eb8dcc1e4b7d1059d97a8334288d825 | https://ubuntu.com/security/CVE-2023-52587 |
| ubuntu | https://git.kernel.org/stable/c/615e3adc2042b7be4ad122a043fc9135e6342c90 | https://ubuntu.com/security/CVE-2023-52587 |
| ubuntu | https://git.kernel.org/stable/c/ac2630fd3c90ffec34a0bfc4d413668538b0e8f2 | https://ubuntu.com/security/CVE-2023-52587 |
| ubuntu | https://git.kernel.org/stable/c/ed790bd0903ed3352ebf7f650d910f49b7319b34 | https://ubuntu.com/security/CVE-2023-52587 |
| ubuntu | https://git.kernel.org/stable/c/5108a2dc2db5630fb6cd58b8be80a0c134bc310a | https://ubuntu.com/security/CVE-2023-52587 |
| ubuntu | https://git.kernel.org/stable/c/342258fb46d66c1b4c7e2c3717ac01e10c03cf18 | https://ubuntu.com/security/CVE-2023-52587 |
| ubuntu | https://git.kernel.org/stable/c/7c7bd4d561e9dc6f5b7df9e184974915f6701a89 | https://ubuntu.com/security/CVE-2023-52587 |
| ubuntu | https://git.kernel.org/stable/c/4f973e211b3b1c6d36f7c6a19239d258856749f9 | https://ubuntu.com/security/CVE-2023-52587 |
| ubuntu | https://ubuntu.com/security/notices/USN-6688-1 | https://ubuntu.com/security/CVE-2023-52587 |
| ubuntu | https://www.cve.org/CVERecord?id=CVE-2023-52587 | https://ubuntu.com/security/CVE-2023-52587 |
| ubuntu | https://nvd.nist.gov/vuln/detail/CVE-2023-52587 | https://ubuntu.com/security/CVE-2023-52587 |
| ubuntu | https://launchpad.net/bugs/cve/CVE-2023-52587 | https://ubuntu.com/security/CVE-2023-52587 |
| ubuntu | https://security-tracker.debian.org/tracker/CVE-2023-52587 | https://ubuntu.com/security/CVE-2023-52587 |
| debian | | https://security-tracker.debian.org/tracker/CVE-2023-52587 |
| cve_search | | https://git.kernel.org/stable/c/4c8922ae8eb8dcc1e4b7d1059d97a8334288d825 |
| cve_search | | https://git.kernel.org/stable/c/615e3adc2042b7be4ad122a043fc9135e6342c90 |
| cve_search | | https://git.kernel.org/stable/c/ac2630fd3c90ffec34a0bfc4d413668538b0e8f2 |
| cve_search | | https://git.kernel.org/stable/c/ed790bd0903ed3352ebf7f650d910f49b7319b34 |
| cve_search | | https://git.kernel.org/stable/c/5108a2dc2db5630fb6cd58b8be80a0c134bc310a |
| cve_search | | https://git.kernel.org/stable/c/342258fb46d66c1b4c7e2c3717ac01e10c03cf18 |
| cve_search | | https://git.kernel.org/stable/c/7c7bd4d561e9dc6f5b7df9e184974915f6701a89 |
| cve_search | | https://git.kernel.org/stable/c/4f973e211b3b1c6d36f7c6a19239d258856749f9 |
| ubuntu | https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-52587 | https://ubuntu.com/security/CVE-2023-52587 |

</details>

Vulnerability analysis guide: https://gitee.com/openeuler/cve-manager/blob/master/cve-vulner-manager/doc/md/manual.md
Vulnerability data source: openBrain open-source vulnerability awareness system

Patch information:

<details>
<summary>Details (click to expand)</summary>

| Affected package | Fixed version | Fix patch | Introducing patch | Source |
| ---------------- | ------------- | --------- | ----------------- | ------ |
| | | https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=4f973e211b3b | | suse_bugzilla |
| linux | | https://git.kernel.org/linus/4f973e211b3b1c6d36f7c6a19239d258856749f9 | https://git.kernel.org/linus/1da177e4c3f41524e886b7f1b8a0c1fc7321cac2 | ubuntu |

</details>

II. Vulnerability Analysis Feedback

Impact analysis: identical to the upstream commit message quoted under "Description" in Section I ("IB/ipoib: Fix mcast list locking").

openEuler score: 4.7
Vector: CVSS:2.0/AV:L/AC:H/PR:L/UI:N/S:U/C:N/I:N/A:H

Affected-version triage (affected / not affected):
1. openEuler-20.03-LTS-SP1 (4.19.90): affected
2. openEuler-20.03-LTS-SP4 (4.19.90): affected
3. openEuler-22.03-LTS (5.10.0): affected
4. openEuler-22.03-LTS-SP1 (5.10.0): affected
5. openEuler-22.03-LTS-SP2 (5.10.0): affected
6. openEuler-22.03-LTS-SP3 (5.10.0): affected
7. master (6.1.0): not affected
8. openEuler-22.03-LTS-Next (5.10.0): not affected
9. openEuler-24.03-LTS: not affected
10. openEuler-24.03-LTS-Next: not affected

Does the fix involve an ABI change (yes / no):
1. openEuler-20.03-LTS-SP1 (4.19.90): no
2. openEuler-20.03-LTS-SP4 (4.19.90): no
3. openEuler-22.03-LTS (5.10.0): no
4. openEuler-22.03-LTS-SP1 (5.10.0): no
5. openEuler-22.03-LTS-SP2 (5.10.0): no
6. openEuler-22.03-LTS-SP3 (5.10.0): no
7. master (6.1.0): no
8. openEuler-22.03-LTS-Next (5.10.0): no
9. openEuler-24.03-LTS: no
10. openEuler-24.03-LTS-Next: no

III. Vulnerability Fix

Security advisory: https://www.openeuler.org/zh/security/safety-bulletin/detail/?id=openEuler-SA-2024-1487
Comments (12)
Status: Done
Assignee: sanglipeng
Labels: CVE/FIXED, sig/Kernel
Participants (2)

Repository: https://gitee.com/src-openeuler/kernel.git (SSH: git@gitee.com:src-openeuler/kernel.git)