Message ID | 20170922150915.GA1294@localhost.localdomain (mailing list archive)
---|---
State | New, archived
On Fri, Sep 22, 2017 at 11:09:16AM -0400, Keith Busch wrote:
> On Mon, Sep 18, 2017 at 04:14:53PM -0700, Christoph Hellwig wrote:
> > +static void nvme_failover_req(struct request *req)
> > +{
> > +	struct nvme_ns *ns = req->q->queuedata;
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&ns->head->requeue_lock, flags);
> > +	blk_steal_bios(&ns->head->requeue_list, req);
> > +	spin_unlock_irqrestore(&ns->head->requeue_lock, flags);
> > +
> > +	nvme_reset_ctrl(ns->ctrl);
> > +	kblockd_schedule_work(&ns->head->requeue_work);
> > +}
>
> Need to call blk_mq_free_req after stealing all its bios to prevent
> leaking that entered request.

I think this should be a blk_mq_end_request actually.  The difference
is that blk_mq_end_request will get the I/O accounting right, and
treats the case of having an ->end_io handler correctly as well.
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 5449c83..55620ba 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -115,6 +115,8 @@ static void nvme_failover_req(struct request *req)
 	blk_steal_bios(&ns->head->requeue_list, req);
 	spin_unlock_irqrestore(&ns->head->requeue_lock, flags);
 
+	blk_mq_free_request(req);
+
 	nvme_reset_ctrl(ns->ctrl);
 	kblockd_schedule_work(&ns->head->requeue_work);
 }
@@ -1935,6 +1937,9 @@ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
 {
 	struct nvme_subsystem *subsys, *found;
 
+	if (ctrl->identified)
+		return 0;
+
 	subsys = kzalloc(sizeof(*subsys), GFP_KERNEL);
 	if (!subsys)
 		return -ENOMEM;
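Putting Christoph's suggestion together with the quoted patch, the failover path would look roughly like the sketch below. This is a sketch only, assuming the `nvme_ns_head` fields (`requeue_lock`, `requeue_list`, `requeue_work`) from the patch under review, not the final committed form of the series; it cannot be compiled outside a kernel tree.

```c
static void nvme_failover_req(struct request *req)
{
	struct nvme_ns *ns = req->q->queuedata;
	unsigned long flags;

	/* Move all bios off the failed request onto the shared
	 * per-head requeue list, to be resubmitted on another path. */
	spin_lock_irqsave(&ns->head->requeue_lock, flags);
	blk_steal_bios(&ns->head->requeue_list, req);
	spin_unlock_irqrestore(&ns->head->requeue_lock, flags);

	/*
	 * End the now bio-less request rather than just freeing it:
	 * blk_mq_end_request() updates the I/O accounting and runs a
	 * ->end_io handler if one is set, both of which
	 * blk_mq_free_request() would skip.
	 */
	blk_mq_end_request(req, 0);

	nvme_reset_ctrl(ns->ctrl);
	kblockd_schedule_work(&ns->head->requeue_work);
}
```

The key point of the thread is the single changed line: `blk_mq_end_request(req, 0)` in place of `blk_mq_free_request(req)`, so the stolen-from request is completed with `BLK_STS_OK` (the bios carry the real outcome once requeued) instead of being silently dropped from the accounting.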