From 4382392c36304d3a7b28dfeccff81d1af0dc5fdb Mon Sep 17 00:00:00 2001
From: Wenyu Huang
Date: Fri, 29 Nov 2024 12:03:37 +0800
Subject: [PATCH] Fix UAF in __update_blocked_fair

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IB7B5F

--------------------------------

After running the long-term stability test for a period of time, the
following use-after-free (UAF) is triggered:

[ 9533.667062] dump_stack_lvl+0x47/0x80
[ 9533.667158] print_address_description.constprop.0+0x66/0x300
[ 9533.667346] print_report+0x3e/0x70
[ 9533.667436] kasan_report+0xb4/0xf0
[ 9533.667619] __update_blocked_fair+0x421/0x15c0
[ 9533.667804] update_blocked_averages+0x14d/0x360
[ 9533.668176] run_rebalance_domains+0x66/0xa0
[ 9533.668271] handle_softirqs+0x10e/0x4c0
[ 9533.668370] irq_exit_rcu+0xea/0x120
[ 9533.668458] sysvec_apic_timer_interrupt+0x72/0x90

unthrottle_qos_sched_group() adds the cfs_rq back to the rq's
leaf_cfs_rq list and sets its on_list flag to 1. When it is called
from free_fair_sched_group(), the cfs_rq is therefore re-inserted
into the list and then immediately freed, leaving a dangling node
behind. A later walk of the list in __update_blocked_fair()
dereferences the freed cfs_rq, causing the UAF.

Fix it by moving the unthrottle from free_fair_sched_group() to
unregister_fair_sched_group(), before the cfs_rq is unlinked from the
list, so the cfs_rq is guaranteed to be off the list by the time it
is freed.

Fixes: 926b9b0cd97e ("sched: Throttle qos cfs_rq when current cpu is running online task")
Signed-off-by: Wenyu Huang
Signed-off-by: Liu Kai
---
 kernel/sched/fair.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2ef32e806f54..fecef2dc0bab 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -14718,10 +14718,6 @@ void free_fair_sched_group(struct task_group *tg)
 	int i;
 
 	for_each_possible_cpu(i) {
-#ifdef CONFIG_QOS_SCHED
-		if (tg->cfs_rq && tg->cfs_rq[i])
-			unthrottle_qos_sched_group(tg->cfs_rq[i]);
-#endif
 		if (tg->cfs_rq)
 			kfree(tg->cfs_rq[i]);
 		if (tg->se)
@@ -14808,6 +14804,11 @@ void unregister_fair_sched_group(struct task_group *tg)
 		if (tg->se[cpu])
 			remove_entity_load_avg(tg->se[cpu]);
 
+		#ifdef CONFIG_QOS_SCHED
+		if (tg->cfs_rq && tg->cfs_rq[cpu])
+			unthrottle_qos_sched_group(tg->cfs_rq[cpu]);
+		#endif
+
 		/*
 		 * Only empty task groups can be destroyed; so we can speculatively
 		 * check on_list without danger of it being re-added.
--
Gitee
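
Note for reviewers: for anyone unfamiliar with the failure mode, the
minimal user-space sketch below reproduces the same pattern under
ASan/KASAN semantics: a node is re-inserted into a list after being
unlinked, then freed while still linked, so a later list walk touches
freed memory. The toy_list_* helpers, the node layout, and main() are
illustrative stand-ins only, not the kernel's list API or the actual
cfs_rq code paths.

#include <stdio.h>
#include <stdlib.h>

/* Toy doubly linked list, standing in for rq->leaf_cfs_rq_list. */
struct node {
	struct node *prev, *next;
	int on_list;
};

static struct node head = { &head, &head, 0 };

static void toy_list_add(struct node *n)
{
	n->next = head.next;
	n->prev = &head;
	head.next->prev = n;
	head.next = n;
	n->on_list = 1;
}

static void toy_list_del(struct node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	n->on_list = 0;
}

int main(void)
{
	struct node *cfs_rq = calloc(1, sizeof(*cfs_rq));

	toy_list_add(cfs_rq);	/* group becomes active */
	toy_list_del(cfs_rq);	/* unregister: unlinked, on_list = 0 */

	/*
	 * The bug pattern: an "unthrottle" during free re-adds the node
	 * after it was unlinked, and the node is then freed while still
	 * reachable from the list head.
	 */
	toy_list_add(cfs_rq);	/* re-added: on_list = 1 again */
	free(cfs_rq);		/* freed while linked: dangling entry */

	/* A later walk (like __update_blocked_fair) reads freed memory. */
	for (struct node *p = head.next; p != &head; p = p->next)
		printf("walking %p (use-after-free)\n", (void *)p);

	return 0;
}

Built with gcc -fsanitize=address, the final walk should be flagged
as a heap-use-after-free, mirroring the kasan_report in the trace
above. Moving the unthrottle before the final unlink, as this patch
does, corresponds to doing the second toy_list_add before
toy_list_del rather than after it, so the node ends up off the list
before free().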