diff mbox series

mm: fix kmap_high deadlock V2

Message ID 4f69441a.4a3d.19071d26866.Coremail.shaohaojize@126.com (mailing list archive)
State New
Headers show
Series mm: fix kmap_high deadlock V2 | expand

Commit Message

thington July 2, 2024, 5:02 a.m. UTC
From: "zhang.chun" <zhang.chuna@h3c.com>

Working with zhangzhansheng (from H3C), I found that kmap_high can deadlock against kmap_XXX/kunmap_XXX when they run concurrently on different cores, via kmap_high->map_new_virtual->flush_all_zero_pkmaps->flush_tlb_kernel_range->on_each_cpu. In this path kmap_high holds kmap_lock while waiting for the other cores to acknowledge the IPI. If another core has disabled interrupts and is itself spinning on kmap_lock, neither side can make progress and the system deadlocks. I think it is necessary to drop kmap_lock around the call to flush_tlb_kernel_range.
Like this:
spin_unlock(&kmap_lock);
flush_tlb_kernel_range(xxx);
spin_lock(&kmap_lock);

CPU 0:                  CPU 1:
                        kmap_xxx() {
xxx                         irq_disable();
kmap_high();                spin_lock(&kmap_lock);
xxx                         yyyyyyy
                            spin_unlock(&kmap_lock);
                            irq_enable();
                        }
kmap_high detail:
kmap_high() {
        zzz
        spin_lock(&kmap_lock)
        map_new_virtual->
                flush_all_zero_pkmaps->
                        flush_tlb_kernel_range->
                                on_each_cpu
        /*
         * If CPU 1 has interrupts disabled, it cannot ack
         * the IPI, so CPU 0 and CPU 1 both hang.
         */
        spin_unlock(&kmap_lock)
        zzz
}
Signed-off-by: zhangchun <zhang.chuna@h3c.com>
Reviewed-by: zhangzhengming <zhang.zhengming@h3c.com>
---
 mm/highmem.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--
2.34.1

Patch

diff --git a/mm/highmem.c b/mm/highmem.c
index bd48ba4..841b370 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -220,8 +220,11 @@  static void flush_all_zero_pkmaps(void)
                set_page_address(page, NULL);
                need_flush = 1;
        }
-       if (need_flush)
+       if (need_flush) {
+               spin_unlock(&kmap_lock);
                flush_tlb_kernel_range(PKMAP_ADDR(0), PKMAP_ADDR(LAST_PKMAP));
+               spin_lock(&kmap_lock);
+       }
 }

 void __kmap_flush_unused(void)