
mm: vmalloc: annotate find_vmap_area_exceed_addr_lock() for lockdep

Message ID eaad5cd8-5d70-4890-a290-c04b07558c33@kernel.dk (mailing list archive)
State New
Series mm: vmalloc: annotate find_vmap_area_exceed_addr_lock() for lockdep

Commit Message

Jens Axboe March 26, 2024, 9:25 p.m. UTC
lockdep gets confused with the nested locking:

============================================
WARNING: possible recursive locking detected
6.9.0-rc1-00060-ged3ccc57b108-dirty #6140 Not tainted
--------------------------------------------
drgn/455 is trying to acquire lock:
ffff0000c00131d0 (&vn->busy.lock/1){+.+.}-{2:2}, at: find_vmap_area_exceed_addr_lock+0x64/0x124

but task is already holding lock:
ffff0000c0011878 (&vn->busy.lock/1){+.+.}-{2:2}, at: find_vmap_area_exceed_addr_lock+0x64/0x124

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&vn->busy.lock/1);
  lock(&vn->busy.lock/1);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

3 locks held by drgn/455:
 #0: ffff800081ecbba8 (kclist_lock){++++}-{3:3}, at: read_kcore_iter+0x5c/0xa24
 #1: ffff800081ea7688 (page_offline_rwsem){.+.+}-{3:3}, at: page_offline_freeze+0x14/0x1c
 #2: ffff0000c0011878 (&vn->busy.lock/1){+.+.}-{2:2}, at: find_vmap_area_exceed_addr_lock+0x64/0x124

stack backtrace:
CPU: 5 PID: 455 Comm: drgn Not tainted 6.9.0-rc1-00060-ged3ccc57b108-dirty #6140
Hardware name: linux,dummy-virt (DT)
Call trace:
 dump_backtrace+0x90/0xe4
 show_stack+0x14/0x1c
 dump_stack_lvl+0x84/0xc0
 dump_stack+0x14/0x1c
 print_deadlock_bug+0x24c/0x334
 __lock_acquire+0xdf4/0x20e0
 lock_acquire+0x204/0x330
 _raw_spin_lock_nested+0x40/0x54
 find_vmap_area_exceed_addr_lock+0x64/0x124
 vread_iter+0x44/0x428
 read_kcore_iter+0x170/0xa24
 proc_reg_read_iter+0x7c/0xcc
 vfs_read+0x220/0x2c4
 ksys_pread64+0x74/0xb4
 __arm64_sys_pread64+0x1c/0x24
 invoke_syscall+0x44/0x104
 el0_svc_common.constprop.0+0xb4/0xd4
 do_el0_svc+0x18/0x20
 el0_svc+0x44/0x108
 el0t_64_sync_handler+0x118/0x124
 el0t_64_sync+0x168/0x16c

which seems to be because the nested lock acquisition is missing the proper
annotation. Add the subclass annotation to make lockdep happy with this use
case.

Fixes: 53becf32aec1 ("mm: vmalloc: support multiple nodes in vread_iter")
Signed-off-by: Jens Axboe <axboe@kernel.dk>

---
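
For background, spin_lock_nested() does not change the locking behaviour at
runtime; it only files the acquisition under a separate lockdep subclass, so
that holding two locks of the same class is not reported as recursion. A
minimal sketch of the usual pattern (illustrative only; lock_two_nodes() is a
made-up helper, not part of this patch):

	static void lock_two_nodes(struct vmap_node *a, struct vmap_node *b)
	{
		spin_lock(&a->busy.lock);
		/* Same lock class: annotate the second acquisition as nested. */
		spin_lock_nested(&b->busy.lock, SINGLE_DEPTH_NESTING);

		/* ... operate on both nodes ... */

		spin_unlock(&b->busy.lock);
		spin_unlock(&a->busy.lock);
	}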

Comments

Jens Axboe March 26, 2024, 10:24 p.m. UTC | #1
On 3/26/24 3:25 PM, Jens Axboe wrote:
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 22aa63f4ef63..26a69fa6809c 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1032,7 +1032,7 @@ find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)
>  	for (i = 0; i < nr_vmap_nodes; i++) {
>  		vn = &vmap_nodes[i];
>  
> -		spin_lock(&vn->busy.lock);
> +		spin_lock_nested(&vn->busy.lock, i);
>  		va_lowest = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
>  		if (va_lowest) {
>  			if (!va_node || va_lowest->va_start < (*va)->va_start) {

Omar said he tested this and ran into lockdep complaining as it only
supports 8 subclasses. So this patch can't work, but that still leaves
the current kernel code buggy...
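
For context, lockdep tracks only a small fixed number of subclasses per lock
class (MAX_LOCKDEP_SUBCLASSES, which is 8 in mainline). Passing the node index
as the subclass therefore stops working once there are more than eight vmap
nodes: instead of validating the nesting, lockdep disables itself and reports
that MAX_LOCKDEP_SUBCLASSES is too low. A sketch of the pattern that hits the
limit (condensed from the patch above, for illustration only):

	for (i = 0; i < nr_vmap_nodes; i++) {
		vn = &vmap_nodes[i];

		/*
		 * Valid subclasses are 0..MAX_LOCKDEP_SUBCLASSES - 1,
		 * i.e. 0..7. nr_vmap_nodes scales with the CPU count,
		 * so i can easily exceed 7, at which point lockdep
		 * gives up on the annotation.
		 */
		spin_lock_nested(&vn->busy.lock, i);
		/* ... rest of the lookup as in the patch above ... */
	}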
Uladzislau Rezki March 27, 2024, 9:57 a.m. UTC | #2
On Tue, Mar 26, 2024 at 04:24:01PM -0600, Jens Axboe wrote:
> On 3/26/24 3:25 PM, Jens Axboe wrote:
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 22aa63f4ef63..26a69fa6809c 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -1032,7 +1032,7 @@ find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)
> >  	for (i = 0; i < nr_vmap_nodes; i++) {
> >  		vn = &vmap_nodes[i];
> >  
> > -		spin_lock(&vn->busy.lock);
> > +		spin_lock_nested(&vn->busy.lock, i);
> >  		va_lowest = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
> >  		if (va_lowest) {
> >  			if (!va_node || va_lowest->va_start < (*va)->va_start) {
> 
> Omar said he tested this and ran into lockdep complaining as it only
> supports 8 subclasses. So this patch can't work, but that still leaves
> the current kernel code buggy...
> 	
It is a bit tricky. Let me rewrite it so that lockdep does not complain.

Thank you for your report.

--
Uladzislau Rezki
Uladzislau Rezki March 27, 2024, 5:04 p.m. UTC | #3
Hello, Jens, Omar!

> On Tue, Mar 26, 2024 at 04:24:01PM -0600, Jens Axboe wrote:
> > On 3/26/24 3:25 PM, Jens Axboe wrote:
> > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > index 22aa63f4ef63..26a69fa6809c 100644
> > > --- a/mm/vmalloc.c
> > > +++ b/mm/vmalloc.c
> > > @@ -1032,7 +1032,7 @@ find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)
> > >  	for (i = 0; i < nr_vmap_nodes; i++) {
> > >  		vn = &vmap_nodes[i];
> > >  
> > > -		spin_lock(&vn->busy.lock);
> > > +		spin_lock_nested(&vn->busy.lock, i);
> > >  		va_lowest = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
> > >  		if (va_lowest) {
> > >  			if (!va_node || va_lowest->va_start < (*va)->va_start) {
> > 
> > Omar said he tested this and ran into lockdep complaining as it only
> > supports 8 subclasses. So this patch can't work, but that still leaves
> > the current kernel code buggy...
> > 	
> It is a bit tricky. Let me rewrite it so that lockdep does not complain.
> 
> Thank you for your report.
> 

Could you please check and test the patch below? It is based on the latest
6.9-rc1 tip. I have reworked it a bit and now it does not hold two locks at
once, so lockdep should not complain.

<snip>
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 22aa63f4ef63..9b1a41e12d70 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -989,6 +989,27 @@ unsigned long vmalloc_nr_pages(void)
 	return atomic_long_read(&nr_vmalloc_pages);
 }
 
+static struct vmap_area *__find_vmap_area(unsigned long addr, struct rb_root *root)
+{
+	struct rb_node *n = root->rb_node;
+
+	addr = (unsigned long)kasan_reset_tag((void *)addr);
+
+	while (n) {
+		struct vmap_area *va;
+
+		va = rb_entry(n, struct vmap_area, rb_node);
+		if (addr < va->va_start)
+			n = n->rb_left;
+		else if (addr >= va->va_end)
+			n = n->rb_right;
+		else
+			return va;
+	}
+
+	return NULL;
+}
+
 /* Look up the first VA which satisfies addr < va_end, NULL if none. */
 static struct vmap_area *
 __find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root)
@@ -1025,47 +1046,40 @@ __find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root)
 static struct vmap_node *
 find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)
 {
-	struct vmap_node *vn, *va_node = NULL;
-	struct vmap_area *va_lowest;
+	unsigned long va_start_lowest;
+	struct vmap_node *vn;
 	int i;
 
-	for (i = 0; i < nr_vmap_nodes; i++) {
+repeat:
+	for (i = 0, va_start_lowest = 0; i < nr_vmap_nodes; i++) {
 		vn = &vmap_nodes[i];
 
 		spin_lock(&vn->busy.lock);
-		va_lowest = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
-		if (va_lowest) {
-			if (!va_node || va_lowest->va_start < (*va)->va_start) {
-				if (va_node)
-					spin_unlock(&va_node->busy.lock);
-
-				*va = va_lowest;
-				va_node = vn;
-				continue;
-			}
-		}
+		*va = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
+
+		if (*va)
+			if (!va_start_lowest || (*va)->va_start < va_start_lowest)
+				va_start_lowest = (*va)->va_start;
 		spin_unlock(&vn->busy.lock);
 	}
 
-	return va_node;
-}
-
-static struct vmap_area *__find_vmap_area(unsigned long addr, struct rb_root *root)
-{
-	struct rb_node *n = root->rb_node;
+	/*
+	 * Check that the found VA still exists; it might have gone
+	 * away in the meantime. In that case we repeat the search,
+	 * because the VA has been removed concurrently and we need
+	 * to proceed with the next one, which is a rare case.
+	 */
+	if (va_start_lowest) {
+		vn = addr_to_node(va_start_lowest);
 
-	addr = (unsigned long)kasan_reset_tag((void *)addr);
+		spin_lock(&vn->busy.lock);
+		*va = __find_vmap_area(va_start_lowest, &vn->busy.root);
 
-	while (n) {
-		struct vmap_area *va;
+		if (*va)
+			return vn;
 
-		va = rb_entry(n, struct vmap_area, rb_node);
-		if (addr < va->va_start)
-			n = n->rb_left;
-		else if (addr >= va->va_end)
-			n = n->rb_right;
-		else
-			return va;
+		spin_unlock(&vn->busy.lock);
+		goto repeat;
 	}
 
 	return NULL;
<snip>
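
In other words, leaving aside the move of __find_vmap_area(), the reworked
lookup never holds more than one busy.lock at a time: a first pass records the
lowest qualifying va_start while locking each node only for its own search,
then the winning node is locked again and the VA is re-validated, retrying
from scratch if it has disappeared in between. Condensed from the diff above
(a sketch, not a verbatim copy):

	static struct vmap_node *
	find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)
	{
		unsigned long va_start_lowest;
		struct vmap_node *vn;
		int i;

	repeat:
		/* Pass 1: take one busy.lock at a time, remember the lowest start. */
		for (i = 0, va_start_lowest = 0; i < nr_vmap_nodes; i++) {
			vn = &vmap_nodes[i];

			spin_lock(&vn->busy.lock);
			*va = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
			if (*va && (!va_start_lowest || (*va)->va_start < va_start_lowest))
				va_start_lowest = (*va)->va_start;
			spin_unlock(&vn->busy.lock);
		}

		/* Pass 2: re-validate the winner under its node's lock; retry if gone. */
		if (va_start_lowest) {
			vn = addr_to_node(va_start_lowest);

			spin_lock(&vn->busy.lock);
			*va = __find_vmap_area(va_start_lowest, &vn->busy.root);
			if (*va)
				return vn;	/* vn->busy.lock is held on return */

			spin_unlock(&vn->busy.lock);
			goto repeat;
		}

		return NULL;
	}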

Thank you!

--
Uladzislau Rezki
Jens Axboe March 27, 2024, 5:21 p.m. UTC | #4
On 3/27/24 11:04 AM, Uladzislau Rezki wrote:
> Hello, Jens, Omar!
> 
>> On Tue, Mar 26, 2024 at 04:24:01PM -0600, Jens Axboe wrote:
>>> On 3/26/24 3:25 PM, Jens Axboe wrote:
>>>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>>>> index 22aa63f4ef63..26a69fa6809c 100644
>>>> --- a/mm/vmalloc.c
>>>> +++ b/mm/vmalloc.c
>>>> @@ -1032,7 +1032,7 @@ find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)
>>>>  	for (i = 0; i < nr_vmap_nodes; i++) {
>>>>  		vn = &vmap_nodes[i];
>>>>  
>>>> -		spin_lock(&vn->busy.lock);
>>>> +		spin_lock_nested(&vn->busy.lock, i);
>>>>  		va_lowest = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
>>>>  		if (va_lowest) {
>>>>  			if (!va_node || va_lowest->va_start < (*va)->va_start) {
>>>
>>> Omar said he tested this and ran into lockdep complaining as it only
>>> supports 8 subclasses. So this patch can't work, but that still leaves
>>> the current kernel code buggy...
>>> 	
>> It is a bit tricky. Let me rewrite it so that lockdep does not complain.
>>
>> Thank you for your report.
>>
> 
> Could you please check and test the patch below? It is based on the latest
> 6.9-rc1 tip. I have reworked it a bit and now it does not hold two locks at
> once, so lockdep should not complain.

Works for me:

Tested-by: Jens Axboe <axboe@kernel.dk>
Omar Sandoval March 27, 2024, 5:22 p.m. UTC | #5
On Wed, Mar 27, 2024 at 06:04:59PM +0100, Uladzislau Rezki wrote:
> Hello, Jens, Omar!
> 
> > On Tue, Mar 26, 2024 at 04:24:01PM -0600, Jens Axboe wrote:
> > > On 3/26/24 3:25 PM, Jens Axboe wrote:
> > > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > > index 22aa63f4ef63..26a69fa6809c 100644
> > > > --- a/mm/vmalloc.c
> > > > +++ b/mm/vmalloc.c
> > > > @@ -1032,7 +1032,7 @@ find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)
> > > >  	for (i = 0; i < nr_vmap_nodes; i++) {
> > > >  		vn = &vmap_nodes[i];
> > > >  
> > > > -		spin_lock(&vn->busy.lock);
> > > > +		spin_lock_nested(&vn->busy.lock, i);
> > > >  		va_lowest = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
> > > >  		if (va_lowest) {
> > > >  			if (!va_node || va_lowest->va_start < (*va)->va_start) {
> > > 
> > > Omar said he tested this and ran into lockdep complaining as it only
> > > supports 8 subclasses. So this patch can't work, but that still leaves
> > > the current kernel code buggy...
> > > 	
> > It is a bit tricky. Let me rewrite it so that lockdep does not complain.
> > 
> > Thank you for your report.
> > 
> 
> Could you please check and test the patch below? It is based on the latest
> 6.9-rc1 tip. I have reworked it a bit and now it does not hold two locks at
> once, so lockdep should not complain.

Works here, too.

Tested-by: Omar Sandoval <osandov@fb.com>
Uladzislau Rezki March 27, 2024, 5:40 p.m. UTC | #6
On Wed, Mar 27, 2024 at 11:21:59AM -0600, Jens Axboe wrote:
> On 3/27/24 11:04 AM, Uladzislau Rezki wrote:
> > Hello, Jens, Omar!
> > 
> >> On Tue, Mar 26, 2024 at 04:24:01PM -0600, Jens Axboe wrote:
> >>> On 3/26/24 3:25 PM, Jens Axboe wrote:
> >>>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> >>>> index 22aa63f4ef63..26a69fa6809c 100644
> >>>> --- a/mm/vmalloc.c
> >>>> +++ b/mm/vmalloc.c
> >>>> @@ -1032,7 +1032,7 @@ find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)
> >>>>  	for (i = 0; i < nr_vmap_nodes; i++) {
> >>>>  		vn = &vmap_nodes[i];
> >>>>  
> >>>> -		spin_lock(&vn->busy.lock);
> >>>> +		spin_lock_nested(&vn->busy.lock, i);
> >>>>  		va_lowest = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
> >>>>  		if (va_lowest) {
> >>>>  			if (!va_node || va_lowest->va_start < (*va)->va_start) {
> >>>
> >>> Omar said he tested this and ran into lockdep complaining as it only
> >>> supports 8 subclasses. So this patch can't work, but that still leaves
> >>> the current kernel code buggy...
> >>> 	
> >> It is a bit tricky. Let me rewrite it so that lockdep does not complain.
> >>
> >> Thank you for your report.
> >>
> > 
> > Could you please check and test the patch below? It is based on the latest
> > 6.9-rc1 tip. I have reworked it a bit and now it does not hold two locks at
> > once, so lockdep should not complain.
> 
> Works for me:
> 
> Tested-by: Jens Axboe <axboe@kernel.dk>
> 
Thanks!

I will add tags and send out the patch.

--
Uladzislau Rezki
Uladzislau Rezki March 27, 2024, 5:41 p.m. UTC | #7
On Wed, Mar 27, 2024 at 10:22:38AM -0700, Omar Sandoval wrote:
> On Wed, Mar 27, 2024 at 06:04:59PM +0100, Uladzislau Rezki wrote:
> > Hello, Jens, Omar!
> > 
> > > On Tue, Mar 26, 2024 at 04:24:01PM -0600, Jens Axboe wrote:
> > > > On 3/26/24 3:25 PM, Jens Axboe wrote:
> > > > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > > > index 22aa63f4ef63..26a69fa6809c 100644
> > > > > --- a/mm/vmalloc.c
> > > > > +++ b/mm/vmalloc.c
> > > > > @@ -1032,7 +1032,7 @@ find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)
> > > > >  	for (i = 0; i < nr_vmap_nodes; i++) {
> > > > >  		vn = &vmap_nodes[i];
> > > > >  
> > > > > -		spin_lock(&vn->busy.lock);
> > > > > +		spin_lock_nested(&vn->busy.lock, i);
> > > > >  		va_lowest = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
> > > > >  		if (va_lowest) {
> > > > >  			if (!va_node || va_lowest->va_start < (*va)->va_start) {
> > > > 
> > > > Omar said he tested this and ran into lockdep complaining as it only
> > > > supports 8 subclasses. So this patch can't work, but that still leaves
> > > > the current kernel code buggy...
> > > > 	
> > > It is a bit tricky. Let me rewrite it so that lockdep does not complain.
> > > 
> > > Thank you for your report.
> > > 
> > 
> > Could you please check and test the patch below? It is based on the latest
> > 6.9-rc1 tip. I have reworked it a bit and now it does not hold two locks at
> > once, so lockdep should not complain.
> 
> Works here, too.
> 
> Tested-by: Omar Sandoval <osandov@fb.com>
>
Good!

I will send out the fix.

Thank you.

--
Uladzislau Rezki

Patch

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 22aa63f4ef63..26a69fa6809c 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1032,7 +1032,7 @@  find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)
 	for (i = 0; i < nr_vmap_nodes; i++) {
 		vn = &vmap_nodes[i];
 
-		spin_lock(&vn->busy.lock);
+		spin_lock_nested(&vn->busy.lock, i);
 		va_lowest = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
 		if (va_lowest) {
 			if (!va_node || va_lowest->va_start < (*va)->va_start) {