From patchwork Sat May  5 13:59:04 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10382171
From: Ming Lei <ming.lei@redhat.com>
To: Keith Busch
Cc: Jens Axboe, linux-block@vger.kernel.org, Ming Lei, Jianchao Wang,
	Christoph Hellwig, Sagi Grimberg, linux-nvme@lists.infradead.org,
	Laurence Oberman
Subject: [PATCH V4 6/7] nvme: pci: prepare for supporting error recovery from
	resetting context
Date: Sat, 5 May 2018 21:59:04 +0800
Message-Id: <20180505135905.18815-7-ming.lei@redhat.com>
In-Reply-To: <20180505135905.18815-1-ming.lei@redhat.com>
References: <20180505135905.18815-1-ming.lei@redhat.com>

Either admin or normal I/O issued from the reset context may time out when
a controller error happens. When such a timeout occurs, controller recovery
may have to be started again. This patch holds the newly introduced reset
lock while running the reset, so that nested reset can be supported by the
following patches.
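For illustration only (not part of the diff below): a minimal sketch of the
locking pattern this patch introduces, assuming the reset_lock mutex added
to struct nvme_ctrl earlier in this series. The helper name reset_sketch()
is hypothetical; mutex_lock()/mutex_unlock() and nvme_wait_freeze() are the
existing kernel APIs used by the patch.

	/*
	 * Illustrative sketch, not the actual driver code: the reset
	 * path serializes on ctrl->reset_lock, but drops the lock
	 * around the potentially long nvme_wait_freeze() so that a
	 * nested reset, triggered from the timeout handler, can take
	 * the lock and run.
	 */
	static void reset_sketch(struct nvme_dev *dev)
	{
		mutex_lock(&dev->ctrl.reset_lock);

		/* ... re-enable the controller and its I/O queues ... */

		mutex_unlock(&dev->ctrl.reset_lock);
		nvme_wait_freeze(&dev->ctrl);	/* may block; lock released */
		mutex_lock(&dev->ctrl.reset_lock);

		/* ... rebuild tagset, update state, start the controller ... */

		mutex_unlock(&dev->ctrl.reset_lock);
	}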
Cc: Jianchao Wang
Cc: Christoph Hellwig
Cc: Sagi Grimberg
Cc: linux-nvme@lists.infradead.org
Cc: Laurence Oberman
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/nvme/host/pci.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 1fafe5d01355..2fbe24274ad0 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2365,14 +2365,14 @@ static void nvme_remove_dead_ctrl(struct nvme_dev *dev, int status)
 	nvme_put_ctrl(&dev->ctrl);
 }
 
-static void nvme_reset_work(struct work_struct *work)
+static void nvme_reset_dev(struct nvme_dev *dev)
 {
-	struct nvme_dev *dev =
-		container_of(work, struct nvme_dev, ctrl.reset_work);
 	bool was_suspend = !!(dev->ctrl.ctrl_config & NVME_CC_SHN_NORMAL);
 	int result = -ENODEV;
 	enum nvme_ctrl_state new_state = NVME_CTRL_LIVE;
 
+	mutex_lock(&dev->ctrl.reset_lock);
+
 	if (WARN_ON(dev->ctrl.state != NVME_CTRL_RESETTING))
 		goto out;
 
@@ -2448,7 +2448,11 @@ static void nvme_reset_work(struct work_struct *work)
 			new_state = NVME_CTRL_ADMIN_ONLY;
 		} else {
 			nvme_start_queues(&dev->ctrl);
+			mutex_unlock(&dev->ctrl.reset_lock);
+
 			nvme_wait_freeze(&dev->ctrl);
+
+			mutex_lock(&dev->ctrl.reset_lock);
 			/* hit this only when allocate tagset fails */
 			if (nvme_dev_add(dev))
 				new_state = NVME_CTRL_ADMIN_ONLY;
@@ -2466,10 +2470,20 @@ static void nvme_reset_work(struct work_struct *work)
 	}
 
 	nvme_start_ctrl(&dev->ctrl);
+	mutex_unlock(&dev->ctrl.reset_lock);
 	return;
 
  out:
 	nvme_remove_dead_ctrl(dev, result);
+	mutex_unlock(&dev->ctrl.reset_lock);
+}
+
+static void nvme_reset_work(struct work_struct *work)
+{
+	struct nvme_dev *dev =
+		container_of(work, struct nvme_dev, ctrl.reset_work);
+
+	nvme_reset_dev(dev);
+}
 
 static void nvme_remove_dead_ctrl_work(struct work_struct *work)