
[v3,11/11] nvme: Fix a race condition

Message ID 2a29b7ec-0113-3450-9a36-b925b47b1fb0@sandisk.com (mailing list archive)
State Not Applicable

Commit Message

Bart Van Assche Oct. 18, 2016, 9:53 p.m. UTC
Ensure that nvme_queue_rq() is no longer running when nvme_stop_queues()
returns.
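
blk_mq_stop_hw_queues() only prevents new invocations of .queue_rq();
an invocation already in progress on another CPU can still be running
when it returns. blk_mq_quiesce_queue(), introduced earlier in this
series, additionally waits for in-flight invocations to finish. A
minimal sketch of the idea (not the actual block layer implementation,
which also handles BLK_MQ_F_BLOCKING drivers via SRCU):

	void blk_mq_quiesce_queue(struct request_queue *q)
	{
		/*
		 * .queue_rq() runs inside an RCU read-side critical
		 * section, so with the hw queues already stopped, one
		 * RCU grace period waits out every .queue_rq() call
		 * that is still in flight.
		 */
		synchronize_rcu();
	}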

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/core.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

Comments

Christoph Hellwig Oct. 19, 2016, 1:41 p.m. UTC | #1
Hi Bart,

this looks great!

Reviewed-by: Christoph Hellwig <hch@lst.de>

Some minor nitpicks below:

>  void nvme_requeue_req(struct request *req)
>  {
> +	blk_mq_requeue_request(req, true);
>  }
>  EXPORT_SYMBOL_GPL(nvme_requeue_req);

Please just remove the nvme_requeue_req wrapper.
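
Callers would then use blk-mq directly, i.e. something like the
following (a hypothetical hunk -- the call sites are not shown in this
patch):

	-	nvme_requeue_req(req);
	+	blk_mq_requeue_request(req, true);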

>  
> @@ -2074,11 +2068,14 @@ EXPORT_SYMBOL_GPL(nvme_kill_queues);
>  void nvme_stop_queues(struct nvme_ctrl *ctrl)
>  {
>  	struct nvme_ns *ns;
> +	struct request_queue *q;
>  
>  	mutex_lock(&ctrl->namespaces_mutex);
>  	list_for_each_entry(ns, &ctrl->namespaces, list) {
> +		q = ns->queue;
> +		blk_mq_cancel_requeue_work(q);
> +		blk_mq_stop_hw_queues(q);
> +		blk_mq_quiesce_queue(q);
>  	}

I'd keep the q declaration in the minimal scope, e.g.

	list_for_each_entry(ns, &ctrl->namespaces, list) {
		struct request_queue *q = ns->queue;

		blk_mq_cancel_requeue_work(q);
		blk_mq_stop_hw_queues(q);
		blk_mq_quiesce_queue(q);
	}

Patch

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 18a265d..96f00c7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -201,13 +201,7 @@ static struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk)
 
 void nvme_requeue_req(struct request *req)
 {
-	unsigned long flags;
-
-	blk_mq_requeue_request(req, false);
-	spin_lock_irqsave(req->q->queue_lock, flags);
-	if (!blk_mq_queue_stopped(req->q))
-		blk_mq_kick_requeue_list(req->q);
-	spin_unlock_irqrestore(req->q->queue_lock, flags);
+	blk_mq_requeue_request(req, true);
 }
 EXPORT_SYMBOL_GPL(nvme_requeue_req);
 
@@ -2074,11 +2068,14 @@ EXPORT_SYMBOL_GPL(nvme_kill_queues);
 void nvme_stop_queues(struct nvme_ctrl *ctrl)
 {
 	struct nvme_ns *ns;
+	struct request_queue *q;
 
 	mutex_lock(&ctrl->namespaces_mutex);
 	list_for_each_entry(ns, &ctrl->namespaces, list) {
-		blk_mq_cancel_requeue_work(ns->queue);
-		blk_mq_stop_hw_queues(ns->queue);
+		q = ns->queue;
+		blk_mq_cancel_requeue_work(q);
+		blk_mq_stop_hw_queues(q);
+		blk_mq_quiesce_queue(q);
 	}
 	mutex_unlock(&ctrl->namespaces_mutex);
 }
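
The ordering of the three calls in nvme_stop_queues() is what closes
the race; an annotated sketch of the loop body (the comments are
explanatory, not part of the patch):

	list_for_each_entry(ns, &ctrl->namespaces, list) {
		struct request_queue *q = ns->queue;

		/* keep the requeue work from restarting the queue */
		blk_mq_cancel_requeue_work(q);
		/* forbid new invocations of nvme_queue_rq() */
		blk_mq_stop_hw_queues(q);
		/* wait until in-flight nvme_queue_rq() calls finish */
		blk_mq_quiesce_queue(q);
	}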