[3/5] sbitmap: fix improper use of smp_mb__before_atomic()

Message ID 1556568902-12464-4-git-send-email-andrea.parri@amarulasolutions.com (mailing list archive)
State New, archived

Commit Message

Andrea Parri April 29, 2019, 8:14 p.m. UTC
This barrier only applies to the read-modify-write operations; in
particular, it does not apply to the atomic_set() primitive.

Replace the barrier with an smp_mb().

Fixes: 6c0ca7ae292ad ("sbitmap: fix wakeup hang after sbq resize")
Cc: stable@vger.kernel.org
Reported-by: "Paul E. McKenney" <paulmck@linux.ibm.com>
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Omar Sandoval <osandov@fb.com>
Cc: linux-block@vger.kernel.org
---
 lib/sbitmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
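
For background: smp_mb__before_atomic() is only defined to order code against a
following non-value-returning read-modify-write atomic (atomic_inc(),
atomic_dec(), and friends); atomic_set() is a plain store, not an RMW, so
ordering a prior store against it requires a full smp_mb(). Below is a minimal
sketch of the two cases; the 'flag' and 'count' variables are illustrative
only, not taken from sbitmap.

#include <linux/atomic.h>
#include <linux/compiler.h>

static unsigned int flag;
static atomic_t count = ATOMIC_INIT(0);

static void rmw_case(void)
{
	WRITE_ONCE(flag, 1);
	/*
	 * Correct use: the barrier makes the following RMW atomic
	 * (atomic_inc()) fully ordered, so the store to 'flag' is
	 * ordered before the increment.
	 */
	smp_mb__before_atomic();
	atomic_inc(&count);
}

static void plain_store_case(void)
{
	WRITE_ONCE(flag, 1);
	/*
	 * atomic_set() is not an RMW operation, so
	 * smp_mb__before_atomic() gives no guarantee here; a full
	 * smp_mb() is needed to order the store to 'flag' before the
	 * store performed by atomic_set(). This is the case addressed
	 * by this patch.
	 */
	smp_mb();
	atomic_set(&count, 1);
}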

Comments

Andrea Parri May 9, 2019, 8:26 p.m. UTC | #1
On Mon, Apr 29, 2019 at 10:14:59PM +0200, Andrea Parri wrote:
> This barrier only applies to the read-modify-write operations; in
> particular, it does not apply to the atomic_set() primitive.
> 
> Replace the barrier with an smp_mb().
> 
> Fixes: 6c0ca7ae292ad ("sbitmap: fix wakeup hang after sbq resize")
> Cc: stable@vger.kernel.org
> Reported-by: "Paul E. McKenney" <paulmck@linux.ibm.com>
> Reported-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: Omar Sandoval <osandov@fb.com>
> Cc: linux-block@vger.kernel.org

Jens, Omar: any suggestions to move this patch forward?

Thanx,
  Andrea


> ---
>  lib/sbitmap.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> index 155fe38756ecf..4a7fc4915dfc6 100644
> --- a/lib/sbitmap.c
> +++ b/lib/sbitmap.c
> @@ -435,7 +435,7 @@ static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
>  		 * to ensure that the batch size is updated before the wait
>  		 * counts.
>  		 */
> -		smp_mb__before_atomic();
> +		smp_mb();
>  		for (i = 0; i < SBQ_WAIT_QUEUES; i++)
>  			atomic_set(&sbq->ws[i].wait_cnt, 1);
>  	}
> -- 
> 2.7.4
>
Ming Lei May 10, 2019, 3:41 a.m. UTC | #2
On Mon, Apr 29, 2019 at 10:14:59PM +0200, Andrea Parri wrote:
> This barrier only applies to the read-modify-write operations; in
> particular, it does not apply to the atomic_set() primitive.
> 
> Replace the barrier with an smp_mb().
> 
> Fixes: 6c0ca7ae292ad ("sbitmap: fix wakeup hang after sbq resize")
> Cc: stable@vger.kernel.org
> Reported-by: "Paul E. McKenney" <paulmck@linux.ibm.com>
> Reported-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: Omar Sandoval <osandov@fb.com>
> Cc: linux-block@vger.kernel.org
> ---
>  lib/sbitmap.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> index 155fe38756ecf..4a7fc4915dfc6 100644
> --- a/lib/sbitmap.c
> +++ b/lib/sbitmap.c
> @@ -435,7 +435,7 @@ static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
>  		 * to ensure that the batch size is updated before the wait
>  		 * counts.
>  		 */
> -		smp_mb__before_atomic();
> +		smp_mb();
>  		for (i = 0; i < SBQ_WAIT_QUEUES; i++)
>  			atomic_set(&sbq->ws[i].wait_cnt, 1);
>  	}
> -- 
> 2.7.4
> 

sbitmap_queue_update_wake_batch() won't be called in the fast path, and
the fix is correct as well, so:

Reviewed-by: Ming Lei <ming.lei@redhat.com>

thanks,
Ming
Andrea Parri May 10, 2019, 6:27 a.m. UTC | #3
Hi Ming,

On Fri, May 10, 2019 at 11:41:02AM +0800, Ming Lei wrote:
> On Mon, Apr 29, 2019 at 10:14:59PM +0200, Andrea Parri wrote:
> > This barrier only applies to the read-modify-write operations; in
> > particular, it does not apply to the atomic_set() primitive.
> > 
> > Replace the barrier with an smp_mb().
> > 
> > Fixes: 6c0ca7ae292ad ("sbitmap: fix wakeup hang after sbq resize")
> > Cc: stable@vger.kernel.org
> > Reported-by: "Paul E. McKenney" <paulmck@linux.ibm.com>
> > Reported-by: Peter Zijlstra <peterz@infradead.org>
> > Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
> > Cc: Jens Axboe <axboe@kernel.dk>
> > Cc: Omar Sandoval <osandov@fb.com>
> > Cc: linux-block@vger.kernel.org
> > ---
> >  lib/sbitmap.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> > index 155fe38756ecf..4a7fc4915dfc6 100644
> > --- a/lib/sbitmap.c
> > +++ b/lib/sbitmap.c
> > @@ -435,7 +435,7 @@ static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
> >  		 * to ensure that the batch size is updated before the wait
> >  		 * counts.
> >  		 */
> > -		smp_mb__before_atomic();
> > +		smp_mb();
> >  		for (i = 0; i < SBQ_WAIT_QUEUES; i++)
> >  			atomic_set(&sbq->ws[i].wait_cnt, 1);
> >  	}
> > -- 
> > 2.7.4
> > 
> 
> sbitmap_queue_update_wake_batch() won't be called in the fast path, and
> the fix is correct as well, so:
> 
> Reviewed-by: Ming Lei <ming.lei@redhat.com>

Thank you for the review(s),

  Andrea


> thanks,
> Ming

Patch

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 155fe38756ecf..4a7fc4915dfc6 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -435,7 +435,7 @@ static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
 		 * to ensure that the batch size is updated before the wait
 		 * counts.
 		 */
-		smp_mb__before_atomic();
+		smp_mb();
 		for (i = 0; i < SBQ_WAIT_QUEUES; i++)
 			atomic_set(&sbq->ws[i].wait_cnt, 1);
 	}
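
For reference, a simplified two-CPU sketch of the ordering requirement that the
comment in the hunk above describes: the resize path must make the new batch
size visible before it resets the wait counts, so that a waker observing a
reset count also observes the new batch. The writer side mirrors the patched
hunk; the reader side is a hypothetical pairing for illustration, not the
actual sbitmap wake-up code.

#include <linux/atomic.h>
#include <linux/compiler.h>

static unsigned int wake_batch;			/* stands in for sbq->wake_batch */
static atomic_t wait_cnt = ATOMIC_INIT(8);	/* stands in for ws->wait_cnt */

/* CPU0: resize path, mirroring the patched hunk */
static void update_batch(unsigned int batch)
{
	WRITE_ONCE(wake_batch, batch);
	smp_mb();		/* order the batch store before the count store */
	atomic_set(&wait_cnt, 1);
}

/* CPU1: hypothetical observer pairing with the smp_mb() above */
static void observe_batch(void)
{
	int cnt = atomic_read(&wait_cnt);

	smp_rmb();		/* pairs with smp_mb() in update_batch() */
	if (cnt == 1) {
		/*
		 * If the read of wait_cnt above returned the value stored
		 * by update_batch(), this read is guaranteed to return the
		 * new batch size.
		 */
		unsigned int batch = READ_ONCE(wake_batch);

		(void)batch;
	}
}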