net: xfrm: fix a race condition during allocating spi

Message ID 20201020081800.29454-1-zhuoliang.zhang@mediatek.com (mailing list archive)
State New, archived
Series net: xfrm: fix a race condition during allocating spi

Commit Message

Zhuoliang Zhang Oct. 20, 2020, 8:18 a.m. UTC
From: zhuoliang zhang <zhuoliang.zhang@mediatek.com>

We found that the following race condition exists in the
xfrm_alloc_userspi() flow:

user thread                                    state_hash_work thread
----                                           ----
xfrm_alloc_userspi()
 __find_acq_core()
   /*alloc new xfrm_state:x*/
   xfrm_state_alloc()
   /*schedule state_hash_work thread*/
   xfrm_hash_grow_check()   	               xfrm_hash_resize()
 xfrm_alloc_spi                                  /*hold lock*/
      x->id.spi = htonl(spi)                     spin_lock_bh(&net->xfrm.xfrm_state_lock)
      /*waiting lock release*/                     xfrm_hash_transfer()
      spin_lock_bh(&net->xfrm.xfrm_state_lock)      /*add x into hlist:net->xfrm.state_byspi*/
	                                                hlist_add_head_rcu(&x->byspi)
                                                 spin_unlock_bh(&net->xfrm.xfrm_state_lock)

    /*add x into hlist:net->xfrm.state_byspi 2 times*/
    hlist_add_head_rcu(&x->byspi)

So the same xfrm_state (x) is added to the same hash list
(net->xfrm.state_byspi) twice, which turns the list into a cycle and
makes any later traversal of it loop forever.
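
For illustration only, here is a minimal user-space sketch (a simplified
stand-in for the kernel's hlist, all names are illustrative, not kernel code)
of how the duplicate head insertion corrupts a bucket:

#include <stdio.h>

struct node {
	struct node *next;              /* simplified stand-in for hlist_node */
};

static void add_head(struct node **head, struct node *n)
{
	n->next = *head;                /* roughly what hlist_add_head_rcu() does */
	*head = n;
}

int main(void)
{
	struct node x = { .next = NULL };
	struct node *bucket = NULL;     /* one hash bucket, e.g. state_byspi + h */
	int steps = 0;

	add_head(&bucket, &x);          /* first insert, e.g. by the resize worker */
	add_head(&bucket, &x);          /* duplicate insert: now x.next == &x */

	for (struct node *p = bucket; p != NULL; p = p->next) {
		if (++steps > 5) {      /* without this guard the walk never ends */
			printf("cycle: still walking after %d nodes\n", steps);
			break;
		}
	}
	return 0;
}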

To fix the race, the assignment x->id.spi = htonl(spi) in xfrm_alloc_spi()
is moved after spin_lock_bh(), so that the state_hash_work thread never
adds an x whose id.spi is still zero into the byspi hash list.
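
With the assignment deferred, the resize worker leaves x out of the byspi
table: xfrm_hash_transfer() only links a state into the new byspi table when
its SPI is already non-zero. Roughly (paraphrased from net/xfrm/xfrm_state.c,
not a verbatim copy):

	/* xfrm_hash_transfer(), paraphrased: while rehashing the bydst chain,
	 * a state is added to the new byspi table only if it already owns an
	 * SPI; a state whose id.spi is still zero is skipped, and its only
	 * byspi insertion happens later in xfrm_alloc_spi() under the lock.
	 */
	hlist_for_each_entry_safe(x, tmp, list, bydst) {
		/* ... rehash x into the new bydst and bysrc tables ... */
		if (x->id.spi) {
			h = __xfrm_spi_hash(&x->id.daddr, x->id.spi,
					    x->id.proto, x->props.family,
					    nhashmask);
			hlist_add_head_rcu(&x->byspi, nspitable + h);
		}
	}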

Signed-off-by: zhuoliang zhang <zhuoliang.zhang@mediatek.com>
---
 net/xfrm/xfrm_state.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

Comments

Herbert Xu Oct. 21, 2020, 7:12 a.m. UTC | #1
On Tue, Oct 20, 2020 at 04:18:00PM +0800, Zhuoliang Zhang wrote:
> From: zhuoliang zhang <zhuoliang.zhang@mediatek.com>
> 
> We found that the following race condition exists in the
> xfrm_alloc_userspi() flow:
> 
> user thread                                    state_hash_work thread
> ----                                           ----
> xfrm_alloc_userspi()
>  __find_acq_core()
>    /*alloc new xfrm_state:x*/
>    xfrm_state_alloc()
>    /*schedule state_hash_work thread*/
>    xfrm_hash_grow_check()   	               xfrm_hash_resize()
>  xfrm_alloc_spi                                  /*hold lock*/
>       x->id.spi = htonl(spi)                     spin_lock_bh(&net->xfrm.xfrm_state_lock)
>       /*waiting lock release*/                     xfrm_hash_transfer()
>       spin_lock_bh(&net->xfrm.xfrm_state_lock)      /*add x into hlist:net->xfrm.state_byspi*/
> 	                                                hlist_add_head_rcu(&x->byspi)
>                                                  spin_unlock_bh(&net->xfrm.xfrm_state_lock)
> 
>     /*add x into hlist:net->xfrm.state_byspi 2 times*/
>     hlist_add_head_rcu(&x->byspi)
> 
> So the same xfrm_state (x) is added to the same hash list
> (net->xfrm.state_byspi) twice, which turns the list into a cycle and
> makes any later traversal of it loop forever.

Your explanation makes no sense.  Prior to obtaining the spin lock
on the LHS, the state x hasn't been added to the table yet.  So
how can the RHS be transferring it?

Cheers,
Zhuoliang Zhang Oct. 22, 2020, 5:53 a.m. UTC | #2
hi maintainer:

there are 2 related hash lists: net->xfrm.state_bydst and
net->xfrm.state_byspi:

1. a new state x is allocated in xfrm_state_alloc() and added into the
bydst hlist in __find_acq_core() on the LHS (see the paraphrased snippet
after this list);
2. on the RHS, the state_hash_work thread traverses the old bydst hlist and
transfers every xfrm_state (including x) to the new bydst hlist and the new
byspi hlist;
3. the user thread on the LHS then takes the lock and adds x to the new
byspi hlist again.
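
The snippet referenced in point 1, roughly (paraphrased from
__find_acq_core() in net/xfrm/xfrm_state.c, not a verbatim copy): the freshly
allocated acquire state is hashed by destination and source only, because
x->id.spi is still zero here, so it is already reachable by the resize worker
through bydst long before it gains an SPI:

	/* __find_acq_core(), paraphrased: the new state is linked into the
	 * bydst and bysrc tables under xfrm_state_lock; nothing is added to
	 * byspi because x->id.spi is still 0 at this point.
	 */
	h = xfrm_dst_hash(net, daddr, saddr, reqid, family);
	hlist_add_head_rcu(&x->bydst, net->xfrm.state_bydst + h);
	h = xfrm_src_hash(net, daddr, saddr, family);
	hlist_add_head_rcu(&x->bysrc, net->xfrm.state_bysrc + h);
	...
	xfrm_hash_grow_check(net, x->bydst.next != NULL);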

thanks




On Wed, 2020-10-21 at 18:12 +1100, Herbert Xu wrote:
> On Tue, Oct 20, 2020 at 04:18:00PM +0800, Zhuoliang Zhang wrote:
> > [original commit message and race diagram snipped]
> 
> Your explanation makes no sense.  Prior to obtaining the spin lock
> on the LHS, the state x hasn't been added to the table yet.  So
> how can the RHS be transferring it?
> 
> Cheers,
Herbert Xu Oct. 22, 2020, 6:48 a.m. UTC | #3
On Thu, Oct 22, 2020 at 01:53:15PM +0800, zhuoliang.zhang wrote:
> 
> there are 2 related hash lists: net->xfrm.state_bydst and
> net->xfrm.state_byspi:
> 
> 1. a new state x is allocated in xfrm_state_alloc() and added into the
> bydst hlist in __find_acq_core() on the LHS;
> 2. on the RHS, the state_hash_work thread traverses the old bydst hlist and
> transfers every xfrm_state (including x) to the new bydst hlist and the new
> byspi hlist;
> 3. the user thread on the LHS then takes the lock and adds x to the new
> byspi hlist again.

Good catch.  Please add a Fixes header.  I think this was introduced
with the dynamic resizing in f034b5d4efdfe0fb9e2a1ce1d95fa7914f24de49.
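
For reference, the tag conventionally carries the first 12 characters of the
hash plus the quoted subject line, e.g. as produced by
git log -1 --abbrev=12 --format='Fixes: %h ("%s")' <sha>; the result has the
form below (subject left as a placeholder here, take it from the git history):

Fixes: f034b5d4efdf ("<subject of that commit>")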

Thanks,

Patch

diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
index bbd4643d7e82..a77da7aae6fe 100644
--- a/net/xfrm/xfrm_state.c
+++ b/net/xfrm/xfrm_state.c
@@ -2004,6 +2004,7 @@  int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high)
 	int err = -ENOENT;
 	__be32 minspi = htonl(low);
 	__be32 maxspi = htonl(high);
+	__be32 newspi = 0;
 	u32 mark = x->mark.v & x->mark.m;
 
 	spin_lock_bh(&x->lock);
@@ -2022,21 +2023,22 @@  int xfrm_alloc_spi(struct xfrm_state *x, u32 low, u32 high)
 			xfrm_state_put(x0);
 			goto unlock;
 		}
-		x->id.spi = minspi;
+		newspi = minspi;
 	} else {
 		u32 spi = 0;
 		for (h = 0; h < high-low+1; h++) {
 			spi = low + prandom_u32()%(high-low+1);
 			x0 = xfrm_state_lookup(net, mark, &x->id.daddr, htonl(spi), x->id.proto, x->props.family);
 			if (x0 == NULL) {
-				x->id.spi = htonl(spi);
+				newspi = htonl(spi);
 				break;
 			}
 			xfrm_state_put(x0);
 		}
 	}
-	if (x->id.spi) {
+	if (newspi) {
 		spin_lock_bh(&net->xfrm.xfrm_state_lock);
+		x->id.spi = newspi;
 		h = xfrm_spi_hash(net, &x->id.daddr, x->id.spi, x->id.proto, x->props.family);
 		hlist_add_head_rcu(&x->byspi, net->xfrm.state_byspi + h);
 		spin_unlock_bh(&net->xfrm.xfrm_state_lock);