Message ID | 20221117112520.3942618-1-qiang1.zhang@intel.com |
---|---|
State | New, archived |
Series | [v2] mm: Make vmalloc_dump_obj() call in a preemptible context |
On Thu, 17 Nov 2022 19:25:20 +0800 Zqiang <qiang1.zhang@intel.com> wrote:

> Currently, mem_dump_obj() is invoked from call_rcu(), and call_rcu()
> may be invoked in a non-preemptible code segment.  For an object
> allocated from vmalloc(), the following scenario may occur:
>
> CPU 0
> tasks context
>      spin_lock(&vmap_area_lock)
>           Interrupt context
>                call_rcu()
>                     mem_dump_obj
>                          vmalloc_dump_obj
>                               spin_lock(&vmap_area_lock) <--deadlock
>
> Additionally, on a PREEMPT_RT kernel the spinlock is converted to a
> sleepable lock, so the vmap_area_lock spinlock must not be acquired
> in a non-preemptible code segment.  Therefore, this commit makes
> vmalloc_dump_obj() bail out unless it is called from a preemptible
> context.
>
> ...
>
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -1128,7 +1128,9 @@ void mem_dump_obj(void *object)
>  		return;
>  
>  	if (virt_addr_valid(object))
> -		type = "non-slab/vmalloc memory";
> +		type = "non-slab memory";
> +	else if (is_vmalloc_addr(object))
> +		type = "vmalloc memory";
>  	else if (object == NULL)
>  		type = "NULL pointer";
>  	else if (object == ZERO_SIZE_PTR)
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ccaa461998f3..018e417b12d6 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4034,6 +4034,10 @@ bool vmalloc_dump_obj(void *object)
>  	struct vm_struct *vm;
>  	void *objp = (void *)PAGE_ALIGN((unsigned long)object);
>  
> +	if (!is_vmalloc_addr(objp) || ((IS_ENABLED(CONFIG_PREEMPT_RT) &&
> +	    !preemptible()) || in_interrupt()))
> +		return false;
> +
>  	vm = find_vm_area(objp);
>  	if (!vm)
>  		return false;

I suggest this be restructured so we can comment each test:

	/* comment goes here */
	if (!is_vmalloc_addr(objp))
		return false;

	/* comment goes here */
	if (IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible())
		return false;

	/* comment goes here */
	if (in_interrupt())
		return false;

Where each comment carefully explains why we're performing the test.
It will generate the same code.
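For illustration, the restructured tests might be commented along the
following lines, drawing on the rationale in the changelog; the comment
wording below is a sketch, not text from the thread:

	/* find_vm_area() only makes sense for vmalloc memory. */
	if (!is_vmalloc_addr(objp))
		return false;

	/*
	 * On PREEMPT_RT, vmap_area_lock is a sleepable lock and must not
	 * be acquired from a non-preemptible code segment.
	 */
	if (IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible())
		return false;

	/*
	 * Task context may already hold vmap_area_lock when an interrupt
	 * arrives; acquiring the lock again from here would deadlock.
	 */
	if (in_interrupt())
		return false;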
Currently, mem_dump_obj() is invoked from call_rcu(), and call_rcu() may
be invoked in a non-preemptible code segment.  For an object allocated
from vmalloc(), the following scenario may occur:

CPU 0
tasks context
     spin_lock(&vmap_area_lock)
          Interrupt context
               call_rcu()
                    mem_dump_obj
                         vmalloc_dump_obj
                              spin_lock(&vmap_area_lock) <--deadlock

Additionally, on a PREEMPT_RT kernel the spinlock is converted to a
sleepable lock, so the vmap_area_lock spinlock must not be acquired in a
non-preemptible code segment.  Therefore, this commit makes
vmalloc_dump_obj() bail out unless it is called from a preemptible
context.

Signed-off-by: Zqiang <qiang1.zhang@intel.com>
---
v1->v2:
 Add IS_ENABLED(CONFIG_PREEMPT_RT) check.

 mm/util.c    | 4 +++-
 mm/vmalloc.c | 4 ++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/util.c b/mm/util.c
index 12984e76767e..2b0222a728cc 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1128,7 +1128,9 @@ void mem_dump_obj(void *object)
 		return;
 
 	if (virt_addr_valid(object))
-		type = "non-slab/vmalloc memory";
+		type = "non-slab memory";
+	else if (is_vmalloc_addr(object))
+		type = "vmalloc memory";
 	else if (object == NULL)
 		type = "NULL pointer";
 	else if (object == ZERO_SIZE_PTR)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ccaa461998f3..018e417b12d6 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4034,6 +4034,10 @@ bool vmalloc_dump_obj(void *object)
 	struct vm_struct *vm;
 	void *objp = (void *)PAGE_ALIGN((unsigned long)object);
 
+	if (!is_vmalloc_addr(objp) || ((IS_ENABLED(CONFIG_PREEMPT_RT) &&
+	    !preemptible()) || in_interrupt()))
+		return false;
+
 	vm = find_vm_area(objp);
 	if (!vm)
 		return false;
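A note on the mm/util.c hunk: with this change, vmalloc_dump_obj() can
return false even for a vmalloc address (when called from a
non-preemptible context), so such an address can now fall through to
mem_dump_obj()'s generic classification, hence the new is_vmalloc_addr()
branch.  A condensed sketch of that classification (illustrative only:
the helper name is hypothetical, and the last two branches follow
mainline mm/util.c rather than the hunk above):

	static const char *mem_obj_type(const void *object)
	{
		if (virt_addr_valid(object))
			return "non-slab memory";  /* linear-map address */
		if (is_vmalloc_addr(object))
			return "vmalloc memory";   /* vmalloc_dump_obj() bailed out */
		if (object == NULL)
			return "NULL pointer";
		if (object == ZERO_SIZE_PTR)
			return "zero-size pointer";
		return "non-paged memory";
	}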