From patchwork Mon Aug 3 04:15:36 2020
X-Patchwork-Submitter: Stanley Chu
X-Patchwork-Id: 11697099
From: Stanley Chu <stanley.chu@mediatek.com>
Subject: [PATCH v5] scsi: ufs: Quiesce all scsi devices before shutdown
Date: Mon, 3 Aug 2020 12:15:36 +0800
Message-ID: <20200803041536.6575-1-stanley.chu@mediatek.com>
X-Mailer: git-send-email 2.18.0
X-Mailing-List: linux-scsi@vger.kernel.org

Currently, I/O requests can still be submitted to the UFS device while UFS
is executing its shutdown flow. This can lead to races such as the scenario
below, and the system may eventually crash due to unclocked register
accesses.

To fix this class of issues, in ufshcd_shutdown():

1. Use pm_runtime_get_sync() instead of resuming the UFS device
   "internally" via ufshcd_runtime_resume(), so that the runtime PM
   framework manages the device state and prevents runtime operations
   triggered by incoming I/O requests from running concurrently with the
   shutdown flow (a generic sketch of this idiom follows the example
   scenario below).

2. Quiesce all SCSI devices to block all I/O requests once the device has
   been resumed.

Example of a racing scenario, while the UFS device is runtime-suspended:

  Thread #1: executing the UFS shutdown flow,
             e.g., ufshcd_suspend(UFS_SHUTDOWN_PM)
  Thread #2: executing the runtime resume flow triggered by an I/O request,
             e.g., ufshcd_resume(UFS_RUNTIME_PM)

This breaks the assumption that UFS PM flows cannot run concurrently, and
unexpected racing behavior may result.
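For illustration, a minimal sketch (not taken from the patch itself) of the
runtime-PM idiom referenced in item 1 above: pm_runtime_get_sync() resumes
the device if needed and takes a usage-count reference, so the runtime-PM
core will not start a concurrent runtime suspend/resume; pm_runtime_put_noidle()
drops the reference without scheduling an idle transition, which is acceptable
on a one-way shutdown path. The helper name below is hypothetical.

#include <linux/device.h>
#include <linux/pm_runtime.h>

/* Hypothetical helper illustrating the idiom; not from ufshcd.c. */
static void example_shutdown_pin_device(struct device *dev)
{
	int ret;

	/* Resume the device if runtime-suspended and pin it with a reference */
	ret = pm_runtime_get_sync(dev);
	if (ret < 0)
		/* Balance the usage count even if the resume failed */
		pm_runtime_put_noidle(dev);

	/*
	 * From here on, the runtime-PM core will not initiate a concurrent
	 * suspend/resume, so shutdown-time register accesses are performed
	 * on a powered and clocked device.
	 */
}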
Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
---
 drivers/scsi/ufs/ufshcd.c | 40 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 35 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 307622284239..e5b99f1b826a 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -159,6 +159,12 @@ struct ufs_pm_lvl_states ufs_pm_lvl_states[] = {
 	{UFS_POWERDOWN_PWR_MODE, UIC_LINK_OFF_STATE},
 };
 
+#define ufshcd_scsi_for_each_sdev(fn) \
+	list_for_each_entry(starget, &hba->host->__targets, siblings) { \
+		__starget_for_each_device(starget, NULL, \
+					  fn); \
+	}
+
 static inline enum ufs_dev_pwr_mode
 ufs_get_pm_lvl_to_dev_pwr_mode(enum ufs_pm_level lvl)
 {
@@ -8629,6 +8635,13 @@ int ufshcd_runtime_idle(struct ufs_hba *hba)
 }
 EXPORT_SYMBOL(ufshcd_runtime_idle);
 
+static void ufshcd_quiesce_sdev(struct scsi_device *sdev, void *data)
+{
+	/* Suspended devices are already quiesced so can be skipped */
+	if (!pm_runtime_suspended(&sdev->sdev_gendev))
+		scsi_device_quiesce(sdev);
+}
+
 /**
  * ufshcd_shutdown - shutdown routine
  * @hba: per adapter instance
@@ -8640,6 +8653,7 @@ EXPORT_SYMBOL(ufshcd_runtime_idle);
 int ufshcd_shutdown(struct ufs_hba *hba)
 {
 	int ret = 0;
+	struct scsi_target *starget;
 
 	if (!hba->is_powered)
 		goto out;
@@ -8647,11 +8661,27 @@ int ufshcd_shutdown(struct ufs_hba *hba)
 	if (ufshcd_is_ufs_dev_poweroff(hba) && ufshcd_is_link_off(hba))
 		goto out;
 
-	if (pm_runtime_suspended(hba->dev)) {
-		ret = ufshcd_runtime_resume(hba);
-		if (ret)
-			goto out;
-	}
+	/*
+	 * Let the runtime PM framework manage the device and prevent
+	 * concurrent runtime operations from racing with the shutdown flow.
+	 */
+	if (pm_runtime_get_sync(hba->dev))
+		pm_runtime_put_noidle(hba->dev);
+
+	/*
+	 * Quiesce all SCSI devices to prevent any non-PM requests from being
+	 * sent by the block layer during and after shutdown.
+	 *
+	 * blk_cleanup_queue() cannot be used here since PM requests
+	 * (with the BLK_MQ_REQ_PREEMPT flag) still need to be sent
+	 * through the block layer. Therefore, SCSI commands queued after the
+	 * scsi_target_quiesce() call returns will block until
+	 * blk_cleanup_queue() is called.
+	 *
+	 * Besides, scsi_target_"un"quiesce (e.g., scsi_target_resume) can
+	 * be skipped since shutdown is a one-way flow.
+	 */
+	ufshcd_scsi_for_each_sdev(ufshcd_quiesce_sdev);
 
 	ret = ufshcd_suspend(hba, UFS_SHUTDOWN_PM);
 out:
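For context, a minimal sketch of how ufshcd_shutdown() is typically reached:
UFS host controller glue drivers wire their driver-model .shutdown callback
to it (the in-tree platform glue does this through the ufshcd_pltfrm_shutdown()
helper). The driver and function names below are illustrative, not part of
this patch.

#include <linux/platform_device.h>
#include "ufshcd.h"

/* Illustrative platform glue showing where ufshcd_shutdown() is called from. */
static void example_ufs_pltfrm_shutdown(struct platform_device *pdev)
{
	struct ufs_hba *hba = platform_get_drvdata(pdev);

	/* Resumes the device, quiesces the SCSI devices, then suspends for power-off */
	ufshcd_shutdown(hba);
}

static struct platform_driver example_ufs_pltfrm_driver = {
	.shutdown	= example_ufs_pltfrm_shutdown,
	.driver = {
		.name	= "example-ufshcd",
	},
};

With the patch applied, any I/O that races with system power-off is either
completed before the quiesce or blocked at the block layer, so
ufshcd_suspend(UFS_SHUTDOWN_PM) runs without competing register accesses.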