Message ID | 20200602071205.22057-1-jack@suse.cz (mailing list archive)
---|---
State | New, archived
Series | blktrace: Avoid sparse warnings when assigning q->blk_trace
On Tue, Jun 02, 2020 at 09:12:05AM +0200, Jan Kara wrote:
> Here is version of my patch rebased on top of Luis' blktrace fixes. Luis, if
> the patch looks fine, can you perhaps include it in your series since it seems
> you'll do another revision of your series due to discussion over patch 5/7?
> Thanks!

Sure thing, will throw in the pile.

  Luis
On Tue, Jun 02, 2020 at 02:17:34PM +0000, Luis Chamberlain wrote:
> On Tue, Jun 02, 2020 at 09:12:05AM +0200, Jan Kara wrote:
> > Here is version of my patch rebased on top of Luis' blktrace fixes. Luis, if
> > the patch looks fine, can you perhaps include it in your series since it seems
> > you'll do another revision of your series due to discussion over patch 5/7?
> > Thanks!
>
> Sure thing, will throw in the pile.

I've updated the commit log as follows as well, as I think its important
to annotate that the check for processing of the blktrace only makes
sense if it was not set. Let me know if this is fine. The commit log
is below.

From: Jan Kara <jack@suse.cz>
Date: Tue, 2 Jun 2020 09:12:05 +0200
Subject: [PATCH 1/8] blktrace: Avoid sparse warnings when assigning
 q->blk_trace

Mostly for historical reasons, q->blk_trace is assigned through xchg()
and cmpxchg() atomic operations. Although this is correct, sparse
complains about this because it violates rcu annotations since commit
c780e86dd48e ("blktrace: Protect q->blk_trace with RCU") which started
to use rcu for accessing q->blk_trace. Furthermore there's no real need
for atomic operations anymore since all changes to q->blk_trace happen
under q->blk_trace_mutex *and* since it also makes more sense to check
if q->blk_trace is set with the mutex held *earlier* and this is now
done through the patch titled "blktrace: break out on concurrent calls"
and was already before on blk_trace_setup_queue().

So let's just replace xchg() with rcu_replace_pointer() and cmpxchg()
with explicit check and rcu_assign_pointer(). This makes the code more
efficient and sparse happy.

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
On Tue 02-06-20 15:10:33, Luis Chamberlain wrote:
> On Tue, Jun 02, 2020 at 02:17:34PM +0000, Luis Chamberlain wrote:
> > On Tue, Jun 02, 2020 at 09:12:05AM +0200, Jan Kara wrote:
> > > Here is version of my patch rebased on top of Luis' blktrace fixes. Luis, if
> > > the patch looks fine, can you perhaps include it in your series since it seems
> > > you'll do another revision of your series due to discussion over patch 5/7?
> > > Thanks!
> >
> > Sure thing, will throw in the pile.
>
> I've updated the commit log as follows as well, as I think its important
> to annotate that the check for processing of the blktrace only makes
> sense if it was not set. Let me know if this is fine. The commit log
> is below.

Thanks! The changelog looks good to me.

								Honza

>
> From: Jan Kara <jack@suse.cz>
> Date: Tue, 2 Jun 2020 09:12:05 +0200
> Subject: [PATCH 1/8] blktrace: Avoid sparse warnings when assigning
>  q->blk_trace
>
> Mostly for historical reasons, q->blk_trace is assigned through xchg()
> and cmpxchg() atomic operations. Although this is correct, sparse
> complains about this because it violates rcu annotations since commit
> c780e86dd48e ("blktrace: Protect q->blk_trace with RCU") which started
> to use rcu for accessing q->blk_trace. Furthermore there's no real need
> for atomic operations anymore since all changes to q->blk_trace happen
> under q->blk_trace_mutex *and* since it also makes more sense to check
> if q->blk_trace is set with the mutex held *earlier* and this is now
> done through the patch titled "blktrace: break out on concurrent calls"
> and was already before on blk_trace_setup_queue().
>
> So let's just replace xchg() with rcu_replace_pointer() and cmpxchg()
> with explicit check and rcu_assign_pointer(). This makes the code more
> efficient and sparse happy.
>
> Reported-by: kbuild test robot <lkp@intel.com>
> Signed-off-by: Jan Kara <jack@suse.cz>
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
```diff
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index ac6650828d49..13bc09e4594c 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -346,7 +346,8 @@ static int __blk_trace_remove(struct request_queue *q)
 {
 	struct blk_trace *bt;
 
-	bt = xchg(&q->blk_trace, NULL);
+	bt = rcu_replace_pointer(q->blk_trace, NULL,
+				 lockdep_is_held(&q->blk_trace_mutex));
 	if (!bt)
 		return -EINVAL;
 
@@ -500,7 +501,8 @@ static int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
 	 * bdev can be NULL, as with scsi-generic, this is a helpful as
 	 * we can be.
 	 */
-	if (q->blk_trace) {
+	if (rcu_dereference_protected(q->blk_trace,
+				      lockdep_is_held(&q->blk_trace_mutex))) {
 		pr_warn("Concurrent blktraces are not allowed on %s\n",
 			buts->name);
 		return -EBUSY;
@@ -570,10 +572,7 @@ static int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
 	bt->pid = buts->pid;
 	bt->trace_state = Blktrace_setup;
 
-	ret = -EBUSY;
-	if (cmpxchg(&q->blk_trace, NULL, bt))
-		goto err;
-
+	rcu_assign_pointer(q->blk_trace, bt);
 	get_probe_ref();
 
 	ret = 0;
@@ -1662,7 +1661,8 @@ static int blk_trace_remove_queue(struct request_queue *q)
 {
 	struct blk_trace *bt;
 
-	bt = xchg(&q->blk_trace, NULL);
+	bt = rcu_replace_pointer(q->blk_trace, NULL,
+				 lockdep_is_held(&q->blk_trace_mutex));
 	if (bt == NULL)
 		return -EINVAL;
 
@@ -1694,10 +1694,7 @@ static int blk_trace_setup_queue(struct request_queue *q,
 
 	blk_trace_setup_lba(bt, bdev);
 
-	ret = -EBUSY;
-	if (cmpxchg(&q->blk_trace, NULL, bt))
-		goto free_bt;
-
+	rcu_assign_pointer(q->blk_trace, bt);
 	get_probe_ref();
 	return 0;
```
Mostly for historical reasons, q->blk_trace is assigned through xchg()
and cmpxchg() atomic operations. Although this is correct, sparse
complains about this because it violates rcu annotations since commit
c780e86dd48e ("blktrace: Protect q->blk_trace with RCU") which started
to use rcu for accessing q->blk_trace. Furthermore there's no real need
for atomic operations anymore since all changes to q->blk_trace happen
under q->blk_trace_mutex.

So let's just replace xchg() with rcu_replace_pointer() and cmpxchg()
with explicit check and rcu_assign_pointer(). This makes the code more
efficient and sparse happy.

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 kernel/trace/blktrace.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

Here is version of my patch rebased on top of Luis' blktrace fixes. Luis, if
the patch looks fine, can you perhaps include it in your series since it seems
you'll do another revision of your series due to discussion over patch 5/7?
Thanks!