From patchwork Fri Apr 27 17:51:57 2018
X-Patchwork-Submitter: Keith Busch
X-Patchwork-Id: 10369565
Date: Fri, 27 Apr 2018 11:51:57 -0600
From: Keith Busch
To: Ming Lei
Cc: Jens Axboe, linux-block@vger.kernel.org, Jianchao Wang,
	Christoph Hellwig, Sagi Grimberg, linux-nvme@lists.infradead.org
Subject: Re: [PATCH 1/2] nvme: pci: simplify timeout handling
Message-ID: <20180427175157.GB5073@localhost.localdomain>
References: <20180426123956.26039-1-ming.lei@redhat.com>
	<20180426123956.26039-2-ming.lei@redhat.com>
In-Reply-To: <20180426123956.26039-2-ming.lei@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

On Thu, Apr 26, 2018 at 08:39:55PM +0800, Ming Lei wrote:
> +/*
> + * This one is called after queues are quiesced, and no in-flight timeout
> + * and nvme interrupt handling.
> + */
> +static void nvme_pci_cancel_request(struct request *req, void *data,
> +		bool reserved)
> +{
> +	/* make sure timed-out requests are covered too */
> +	if (req->rq_flags & RQF_MQ_TIMEOUT_EXPIRED) {
> +		req->aborted_gstate = 0;
> +		req->rq_flags &= ~RQF_MQ_TIMEOUT_EXPIRED;
> +	}
> +
> +	nvme_cancel_request(req, data, reserved);
> +}

I don't know about this. I feel like blk-mq owns these flags, and LLDs
shouldn't need that kind of knowledge of its implementation details.

I understand how the problems are happening a bit better now. It used to
be that blk-mq would lock an expired command one at a time, so when we
had a batch of IO timeouts, the driver was able to complete all of them
inside a single IO timeout handler. That's not the case anymore, so the
driver is called for every IO timeout even if it already reaped all the
commands at once. IMO, the block layer should let the driver complete
all timed-out IOs at once rather than going through ->timeout() for each
of them, and without the driver needing to know about request state
details. Most of these problems would go away in that case.
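To illustrate what "reaping all the commands at once" looks like on the
driver side, this is roughly what the pci driver already ends up doing
when a timeout escalates to a controller shutdown (a paraphrased sketch
of the tail of nvme_dev_disable(); the function name below is made up,
not the actual code):

/*
 * Illustrative sketch only: roughly what nvme_dev_disable() already does
 * at the end of a shutdown -- walk every busy tag on both tag sets and
 * cancel it, so a single pass reaps the whole batch of outstanding
 * commands.
 */
static void nvme_reap_all_outstanding_sketch(struct nvme_dev *dev)
{
	blk_mq_tagset_busy_iter(&dev->tagset, nvme_cancel_request,
			&dev->ctrl);
	blk_mq_tagset_busy_iter(&dev->admin_tagset, nvme_cancel_request,
			&dev->ctrl);
}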
But I think much of this can be fixed by just syncing the queues in the
probe work, and that may be a more reasonable bug fix for 4.17 than
rewriting the entire timeout handler. What do you think of this patch?
It's an update of one I posted a few months ago, but it uses Jianchao's
safe namespace list locking.

---
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index a3771c5729f5..198b4469c3e2 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3562,12 +3562,25 @@ void nvme_start_queues(struct nvme_ctrl *ctrl)
 	struct nvme_ns *ns;
 
 	down_read(&ctrl->namespaces_rwsem);
-	list_for_each_entry(ns, &ctrl->namespaces, list)
+	list_for_each_entry(ns, &ctrl->namespaces, list) {
 		blk_mq_unquiesce_queue(ns->queue);
+		blk_mq_kick_requeue_list(ns->queue);
+	}
 	up_read(&ctrl->namespaces_rwsem);
 }
 EXPORT_SYMBOL_GPL(nvme_start_queues);
 
+void nvme_sync_queues(struct nvme_ctrl *ctrl)
+{
+	struct nvme_ns *ns;
+
+	down_read(&ctrl->namespaces_rwsem);
+	list_for_each_entry(ns, &ctrl->namespaces, list)
+		blk_sync_queue(ns->queue);
+	up_read(&ctrl->namespaces_rwsem);
+}
+EXPORT_SYMBOL_GPL(nvme_sync_queues);
+
 int nvme_reinit_tagset(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set)
 {
 	if (!ctrl->ops->reinit_request)
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 7ded7a51c430..e62273198725 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -402,6 +402,7 @@ int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
 void nvme_complete_async_event(struct nvme_ctrl *ctrl, __le16 status,
 		union nvme_result *res);
 
+void nvme_sync_queues(struct nvme_ctrl *ctrl);
 void nvme_stop_queues(struct nvme_ctrl *ctrl);
 void nvme_start_queues(struct nvme_ctrl *ctrl);
 void nvme_kill_queues(struct nvme_ctrl *ctrl);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index fbc71fac6f1e..a2f3ad105620 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2303,6 +2303,7 @@ static void nvme_reset_work(struct work_struct *work)
 	 */
 	if (dev->ctrl.ctrl_config & NVME_CC_ENABLE)
 		nvme_dev_disable(dev, false);
+	nvme_sync_queues(&dev->ctrl);
 
 	/*
 	 * Introduce CONNECTING state from nvme-fc/rdma transports to mark the
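For reference, the ordering the sync is meant to guarantee in the reset
path looks roughly like this (illustrative sketch only; the function name
and comments are mine, not the real nvme_reset_work()):

/*
 * Illustrative sketch of the intended ordering with the patch applied.
 */
static void nvme_reset_ordering_sketch(struct nvme_dev *dev)
{
	/* Quiesce the queues and reap outstanding commands, as before. */
	if (dev->ctrl.ctrl_config & NVME_CC_ENABLE)
		nvme_dev_disable(dev, false);

	/*
	 * blk_sync_queue() on every namespace queue waits for that queue's
	 * timeout timer and timeout work to finish, so no stale timeout
	 * handling can race with the controller being brought back up.
	 */
	nvme_sync_queues(&dev->ctrl);

	/* ... then continue with enabling the controller as usual ... */
}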