
[v2,2/5] blk-mq: Make it safe to use RCU to iterate over blk_mq_tag_set.tag_list

Message ID 20170403232228.11208-3-bart.vanassche@sandisk.com
State New, archived

Commit Message

Bart Van Assche April 3, 2017, 11:22 p.m. UTC
A later patch in this series will use RCU to iterate over
this list. Additionally, document the locking assumptions of the
functions that iterate over this list by adding lockdep_assert_held()
calls.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
---
 block/blk-mq.c | 6 ++++++
 1 file changed, 6 insertions(+)

Comments

Christoph Hellwig April 4, 2017, 6:40 a.m. UTC | #1
On Mon, Apr 03, 2017 at 04:22:25PM -0700, Bart Van Assche wrote:
> A later patch in this series will namely use RCU to iterate over
> this list.

It also adds a couple lockdep_assert_held calls, which might be worth
mentioning.

Otherwise looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
Bart Van Assche April 4, 2017, 3:55 p.m. UTC | #2
On 04/03/2017 11:40 PM, Christoph Hellwig wrote:
> On Mon, Apr 03, 2017 at 04:22:25PM -0700, Bart Van Assche wrote:
>> A later patch in this series will namely use RCU to iterate over
>> this list.
> 
> It also adds a couple lockdep_assert_held calls, which might be worth
> mentioning.
> 
> Otherwise looks good:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>

Thanks for the review. I will update the patch description.

Bart.
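
[Editor's note] To make the thread above concrete: the patch below keeps
list modification under tag_list_lock (now documented with
lockdep_assert_held()) and adds a synchronize_rcu() after a queue is
unlinked, so that lockless readers drain before the queue can be freed.
The reader side that the later patch in the series enables would then
look roughly like the sketch below. This is a hypothetical illustration,
not the actual later patch; blk_mq_example_reader is an invented name,
and the kernel APIs used (rcu_read_lock(), list_for_each_entry_rcu())
are real.

```c
/*
 * Hypothetical reader-side sketch (not taken from this series): walk
 * tag_list without taking tag_list_lock, under the RCU read lock.
 * This is safe only because blk_mq_del_queue_tag_set() calls
 * synchronize_rcu() after unlinking a queue, so a queue cannot be
 * freed while a reader inside this critical section still sees it.
 */
static void blk_mq_example_reader(struct blk_mq_tag_set *set)
{
	struct request_queue *q;

	rcu_read_lock();
	list_for_each_entry_rcu(q, &set->tag_list, tag_set_list) {
		/* Inspect q here; sleeping is not allowed inside an
		 * RCU read-side critical section. */
	}
	rcu_read_unlock();
}
```

Note that for list_for_each_entry_rcu() to be fully safe, the writer
side would also need to switch list_add_tail()/list_del() over to their
_rcu variants; the hunks shown below do not include that change, so it
presumably lands in the later patch the commit message refers to.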

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 061fc2cc88d3..8fb983e6e2e4 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2093,6 +2093,8 @@ static void blk_mq_update_tag_set_depth(struct blk_mq_tag_set *set, bool shared)
 {
 	struct request_queue *q;
 
+	lockdep_assert_held(&set->tag_list_lock);
+
 	list_for_each_entry(q, &set->tag_list, tag_set_list) {
 		blk_mq_freeze_queue(q);
 		queue_set_hctx_shared(q, shared);
@@ -2113,6 +2115,8 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q)
 		blk_mq_update_tag_set_depth(set, false);
 	}
 	mutex_unlock(&set->tag_list_lock);
+
+	synchronize_rcu();
 }
 
 static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
@@ -2618,6 +2622,8 @@ void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues)
 {
 	struct request_queue *q;
 
+	lockdep_assert_held(&set->tag_list_lock);
+
 	if (nr_hw_queues > nr_cpu_ids)
 		nr_hw_queues = nr_cpu_ids;
 	if (nr_hw_queues < 1 || nr_hw_queues == set->nr_hw_queues)