From patchwork Sun Sep 5 09:51:51 2021
X-Patchwork-Submitter: Adrian Hunter
X-Patchwork-Id: 12476055
From: Adrian Hunter
To: "Martin K. Petersen"
Cc: "James E. J. Bottomley", Bean Huo, Avri Altman, Alim Akhtar, Can Guo,
    Asutosh Das, Bart Van Assche, Manivannan Sadhasivam, Wei Li,
    linux-scsi@vger.kernel.org
Subject: [PATCH V3 1/3] scsi: ufs: Fix error handler clear ua deadlock
Date: Sun, 5 Sep 2021 12:51:51 +0300
Message-Id: <20210905095153.6217-2-adrian.hunter@intel.com>
In-Reply-To: <20210905095153.6217-1-adrian.hunter@intel.com>
References: <20210905095153.6217-1-adrian.hunter@intel.com>
X-Mailing-List: linux-scsi@vger.kernel.org

There is no guarantee of being able to enter the request queue while
requests are blocked: freezing the queue blocks entry to the queue, but
freezing also waits for outstanding requests, which can make no progress
while the queue is blocked. That situation can happen when the error
handler issues requests to clear the unit attention condition. Requests
can be blocked if the ufshcd_state is UFSHCD_STATE_EH_SCHEDULED_FATAL,
which can happen either as a result of error handler activity or,
theoretically, as a result of a request issued after the error handler
unblocks the queue but before it clears the unit attention condition.

The deadlock is very unlikely, and the error handler can be expected to
clear the unit attention condition at some point anyway, so the simple
solution is not to wait to enter the queue. Note also that the RPMB queue
might not be entered because it is runtime suspended, but in that case
the unit attention condition will be cleared at RPMB runtime resume.
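[Editor's note, for illustration only and not part of the patch: a minimal
sketch of the non-blocking allocation pattern the fix relies on, assuming
the v5.14-era block layer API. The helper name and error handling are
hypothetical; blk_get_request(), BLK_MQ_REQ_PM and BLK_MQ_REQ_NOWAIT are
the real interfaces used in the diff below.]

#include <linux/blk-mq.h>
#include <scsi/scsi_device.h>

/*
 * With BLK_MQ_REQ_NOWAIT, blk_get_request() returns an ERR_PTR instead of
 * sleeping until the (possibly blocked or frozen) queue can be entered,
 * which is what avoids the deadlock described above.
 */
static int example_get_pm_request(struct scsi_device *sdev, bool nowait,
				  struct request **out)
{
	blk_mq_req_flags_t flags = BLK_MQ_REQ_PM |
				   (nowait ? BLK_MQ_REQ_NOWAIT : 0);
	struct request *req;

	req = blk_get_request(sdev->request_queue, REQ_OP_DRV_IN, flags);
	if (IS_ERR(req))
		return PTR_ERR(req); /* e.g. queue blocked and nowait set */

	*out = req;
	return 0;
}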
Cc: stable@vger.kernel.org # 5.14+ ac1bc2ba060f: scsi: ufs: Request sense data asynchronously
Cc: stable@vger.kernel.org # 5.14+ 9b5ac8ab4e8b: scsi: ufs: Fix ufshcd_request_sense_async() for Samsung KLUFG8RHDA-B2D1
Signed-off-by: Adrian Hunter
Signed-off-by: Bart Van Assche
---

Changes in V3:
	Correct commit message
	Amend stable tags to add dependent cherry picks

 drivers/scsi/ufs/ufshcd.c | 33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 67889d74761c..52fb059efa77 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -224,7 +224,7 @@ static int ufshcd_reset_and_restore(struct ufs_hba *hba);
 static int ufshcd_eh_host_reset_handler(struct scsi_cmnd *cmd);
 static int ufshcd_clear_tm_cmd(struct ufs_hba *hba, int tag);
 static void ufshcd_hba_exit(struct ufs_hba *hba);
-static int ufshcd_clear_ua_wluns(struct ufs_hba *hba);
+static int ufshcd_clear_ua_wluns(struct ufs_hba *hba, bool nowait);
 static int ufshcd_probe_hba(struct ufs_hba *hba, bool async);
 static int ufshcd_setup_clocks(struct ufs_hba *hba, bool on);
 static int ufshcd_uic_hibern8_enter(struct ufs_hba *hba);
@@ -4110,7 +4110,7 @@ int ufshcd_link_recovery(struct ufs_hba *hba)
 		dev_err(hba->dev, "%s: link recovery failed, err %d",
 			__func__, ret);
 	else
-		ufshcd_clear_ua_wluns(hba);
+		ufshcd_clear_ua_wluns(hba, false);
 
 	return ret;
 }
@@ -5974,7 +5974,7 @@ static void ufshcd_err_handling_unprepare(struct ufs_hba *hba)
 	ufshcd_release(hba);
 	if (ufshcd_is_clkscaling_supported(hba))
 		ufshcd_clk_scaling_suspend(hba, false);
-	ufshcd_clear_ua_wluns(hba);
+	ufshcd_clear_ua_wluns(hba, true);
 	ufshcd_rpm_put(hba);
 }
 
@@ -7907,7 +7907,7 @@ static int ufshcd_add_lus(struct ufs_hba *hba)
 	if (ret)
 		goto out;
 
-	ufshcd_clear_ua_wluns(hba);
+	ufshcd_clear_ua_wluns(hba, false);
 
 	/* Initialize devfreq after UFS device is detected */
 	if (ufshcd_is_clkscaling_supported(hba)) {
@@ -7943,7 +7943,8 @@ static void ufshcd_request_sense_done(struct request *rq, blk_status_t error)
 }
 
 static int
-ufshcd_request_sense_async(struct ufs_hba *hba, struct scsi_device *sdev)
+ufshcd_request_sense_async(struct ufs_hba *hba, struct scsi_device *sdev,
+			   bool nowait)
 {
 	/*
 	 * Some UFS devices clear unit attention condition only if the sense
@@ -7951,6 +7952,7 @@ ufshcd_request_sense_async(struct ufs_hba *hba, struct scsi_device *sdev)
 	 */
 	static const u8 cmd[6] = {REQUEST_SENSE, 0, 0, 0, UFS_SENSE_SIZE, 0};
 	struct scsi_request *rq;
+	blk_mq_req_flags_t flags;
 	struct request *req;
 	char *buffer;
 	int ret;
@@ -7959,8 +7961,8 @@ ufshcd_request_sense_async(struct ufs_hba *hba, struct scsi_device *sdev)
 	if (!buffer)
 		return -ENOMEM;
 
-	req = blk_get_request(sdev->request_queue, REQ_OP_DRV_IN,
-			      /*flags=*/BLK_MQ_REQ_PM);
+	flags = BLK_MQ_REQ_PM | (nowait ? BLK_MQ_REQ_NOWAIT : 0);
+	req = blk_get_request(sdev->request_queue, REQ_OP_DRV_IN, flags);
 	if (IS_ERR(req)) {
 		ret = PTR_ERR(req);
 		goto out_free;
@@ -7990,7 +7992,7 @@ ufshcd_request_sense_async(struct ufs_hba *hba, struct scsi_device *sdev)
 	return ret;
 }
 
-static int ufshcd_clear_ua_wlun(struct ufs_hba *hba, u8 wlun)
+static int ufshcd_clear_ua_wlun(struct ufs_hba *hba, u8 wlun, bool nowait)
 {
 	struct scsi_device *sdp;
 	unsigned long flags;
@@ -8016,7 +8018,10 @@ static int ufshcd_clear_ua_wlun(struct ufs_hba *hba, u8 wlun)
 	if (ret)
 		goto out_err;
 
-	ret = ufshcd_request_sense_async(hba, sdp);
+	ret = ufshcd_request_sense_async(hba, sdp, nowait);
+	if (nowait && ret && wlun == UFS_UPIU_RPMB_WLUN &&
+	    pm_runtime_suspended(&sdp->sdev_gendev))
+		ret = 0; /* RPMB runtime resume will clear UAC */
 	scsi_device_put(sdp);
 out_err:
 	if (ret)
@@ -8025,16 +8030,16 @@ static int ufshcd_clear_ua_wlun(struct ufs_hba *hba, u8 wlun)
 	return ret;
 }
 
-static int ufshcd_clear_ua_wluns(struct ufs_hba *hba)
+static int ufshcd_clear_ua_wluns(struct ufs_hba *hba, bool nowait)
 {
 	int ret = 0;
 
 	if (!hba->wlun_dev_clr_ua)
 		goto out;
 
-	ret = ufshcd_clear_ua_wlun(hba, UFS_UPIU_UFS_DEVICE_WLUN);
+	ret = ufshcd_clear_ua_wlun(hba, UFS_UPIU_UFS_DEVICE_WLUN, nowait);
 	if (!ret)
-		ret = ufshcd_clear_ua_wlun(hba, UFS_UPIU_RPMB_WLUN);
+		ret = ufshcd_clear_ua_wlun(hba, UFS_UPIU_RPMB_WLUN, nowait);
 	if (!ret)
 		hba->wlun_dev_clr_ua = false;
 out:
@@ -8656,7 +8661,7 @@ static int ufshcd_set_dev_pwr_mode(struct ufs_hba *hba,
 	 */
 	hba->host->eh_noresume = 1;
 	if (hba->wlun_dev_clr_ua)
-		ufshcd_clear_ua_wlun(hba, UFS_UPIU_UFS_DEVICE_WLUN);
+		ufshcd_clear_ua_wlun(hba, UFS_UPIU_UFS_DEVICE_WLUN, false);
 
 	cmd[4] = pwr_mode << 4;
 
@@ -9825,7 +9830,7 @@ static inline int ufshcd_clear_rpmb_uac(struct ufs_hba *hba)
 	if (!hba->wlun_rpmb_clr_ua)
 		return 0;
 
-	ret = ufshcd_clear_ua_wlun(hba, UFS_UPIU_RPMB_WLUN);
+	ret = ufshcd_clear_ua_wlun(hba, UFS_UPIU_RPMB_WLUN, false);
 	if (!ret)
 		hba->wlun_rpmb_clr_ua = 0;
 	return ret;

From patchwork Sun Sep 5 09:51:52 2021
X-Patchwork-Submitter: Adrian Hunter
X-Patchwork-Id: 12476057
From: Adrian Hunter
To: "Martin K. Petersen"
Cc: "James E. J. Bottomley", Bean Huo, Avri Altman, Alim Akhtar, Can Guo,
    Asutosh Das, Bart Van Assche, Manivannan Sadhasivam, Wei Li,
    linux-scsi@vger.kernel.org
Subject: [PATCH V3 2/3] scsi: ufs: Fix runtime PM dependencies getting broken
Date: Sun, 5 Sep 2021 12:51:52 +0300
Message-Id: <20210905095153.6217-3-adrian.hunter@intel.com>
In-Reply-To: <20210905095153.6217-1-adrian.hunter@intel.com>
References: <20210905095153.6217-1-adrian.hunter@intel.com>
X-Mailing-List: linux-scsi@vger.kernel.org

UFS SCSI devices make use of device links to establish PM dependencies.
However, SCSI PM will force devices' runtime PM state to be active during
system resume, which can break those runtime PM dependencies for UFS
devices. Fix by adding a flag 'preserve_rpm' that lets UFS SCSI devices
opt out of the unwanted behaviour.
Fixes: b294ff3e34490f ("scsi: ufs: core: Enable power management for wlun")
Cc: stable@vger.kernel.org
Signed-off-by: Adrian Hunter
---
 drivers/scsi/scsi_pm.c     | 16 +++++++++++-----
 drivers/scsi/ufs/ufshcd.c  |  1 +
 include/scsi/scsi_device.h |  1 +
 3 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/scsi_pm.c b/drivers/scsi/scsi_pm.c
index 3717eea37ecb..0557c1ad304d 100644
--- a/drivers/scsi/scsi_pm.c
+++ b/drivers/scsi/scsi_pm.c
@@ -73,13 +73,22 @@ static int scsi_dev_type_resume(struct device *dev,
 	int (*cb)(struct device *, const struct dev_pm_ops *))
 {
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+	struct scsi_device *sdev = NULL;
+	bool preserve_rpm = false;
 	int err = 0;
 
+	if (scsi_is_sdev_device(dev)) {
+		sdev = to_scsi_device(dev);
+		preserve_rpm = sdev->preserve_rpm;
+		if (preserve_rpm && pm_runtime_suspended(dev))
+			return 0;
+	}
+
 	err = cb(dev, pm);
 	scsi_device_resume(to_scsi_device(dev));
 	dev_dbg(dev, "scsi resume: %d\n", err);
 
-	if (err == 0) {
+	if (err == 0 && !preserve_rpm) {
 		pm_runtime_disable(dev);
 		err = pm_runtime_set_active(dev);
 		pm_runtime_enable(dev);
@@ -91,11 +100,8 @@ static int scsi_dev_type_resume(struct device *dev,
 		 *
 		 * The resume hook will correct runtime PM status of the disk.
 		 */
-		if (!err && scsi_is_sdev_device(dev)) {
-			struct scsi_device *sdev = to_scsi_device(dev);
-
+		if (!err && sdev)
 			blk_set_runtime_active(sdev->request_queue);
-		}
 	}
 
 	return err;
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 52fb059efa77..57ed4b93b949 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -5016,6 +5016,7 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
 		pm_runtime_get_noresume(&sdev->sdev_gendev);
 	else if (ufshcd_is_rpm_autosuspend_allowed(hba))
 		sdev->rpm_autosuspend = 1;
+	sdev->preserve_rpm = 1;
 
 	ufshcd_crypto_setup_rq_keyslot_manager(hba, q);
 
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index 09a17f6e93a7..47eb30a6b7b2 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -197,6 +197,7 @@ struct scsi_device {
 	unsigned no_read_disc_info:1;	/* Avoid READ_DISC_INFO cmds */
 	unsigned no_read_capacity_16:1; /* Avoid READ_CAPACITY_16 cmds */
 	unsigned try_rc_10_first:1;	/* Try READ_CAPACACITY_10 first */
+	unsigned preserve_rpm:1;	/* Preserve runtime PM */
 	unsigned security_supported:1;	/* Supports Security Protocols */
 	unsigned is_visible:1;	/* is the device visible in sysfs */
 	unsigned wce_default_on:1;	/* Cache is ON by default */

From patchwork Sun Sep 5 09:51:53 2021
X-Patchwork-Submitter: Adrian Hunter
X-Patchwork-Id: 12476059
From: Adrian Hunter
To: "Martin K. Petersen"
Cc: "James E. J. Bottomley", Bean Huo, Avri Altman, Alim Akhtar, Can Guo,
    Asutosh Das, Bart Van Assche, Manivannan Sadhasivam, Wei Li,
    linux-scsi@vger.kernel.org
Subject: [PATCH V3 3/3] scsi: ufs: Let devices remain runtime suspended during system suspend
Date: Sun, 5 Sep 2021 12:51:53 +0300
Message-Id: <20210905095153.6217-4-adrian.hunter@intel.com>
In-Reply-To: <20210905095153.6217-1-adrian.hunter@intel.com>
References: <20210905095153.6217-1-adrian.hunter@intel.com>
X-Mailing-List: linux-scsi@vger.kernel.org

If the UFS Device WLUN is runtime suspended and is already in the same
power mode, link state and b_rpm_dev_flush_capable (BKOP or WB buffer
flush, etc.) state that system suspend would use, then it can remain
runtime suspended instead of being runtime resumed and then system
suspended. The following patches have cleared the way for that to happen:

  scsi: ufs: Fix runtime PM dependencies getting broken
  scsi: ufs: Fix error handler clear ua deadlock

So amend the logic accordingly. Note that the ufs-hisi driver uses
different RPM and SPM, which is made explicit by a new parameter to
suspend prepare.
Signed-off-by: Adrian Hunter
---

Changes in V3: None.

Changes in V2:
	The ufs-hisi driver uses different RPM and SPM, but it is made
	explicit by a new parameter to suspend prepare.

 drivers/scsi/ufs/ufs-hisi.c |  8 +++++-
 drivers/scsi/ufs/ufshcd.c   | 53 ++++++++++++++++++++++++++++---------
 drivers/scsi/ufs/ufshcd.h   | 12 ++++++++-
 3 files changed, 58 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/ufs/ufs-hisi.c b/drivers/scsi/ufs/ufs-hisi.c
index 6b706de8354b..4a08fb35642c 100644
--- a/drivers/scsi/ufs/ufs-hisi.c
+++ b/drivers/scsi/ufs/ufs-hisi.c
@@ -396,6 +396,12 @@ static int ufs_hisi_pwr_change_notify(struct ufs_hba *hba,
 	return ret;
 }
 
+static int ufs_hisi_suspend_prepare(struct device *dev)
+{
+	/* RPM and SPM are different. Refer ufs_hisi_suspend() */
+	return __ufshcd_suspend_prepare(dev, false);
+}
+
 static int ufs_hisi_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
 {
 	struct ufs_hisi_host *host = ufshcd_get_variant(hba);
@@ -574,7 +580,7 @@ static int ufs_hisi_remove(struct platform_device *pdev)
 static const struct dev_pm_ops ufs_hisi_pm_ops = {
 	SET_SYSTEM_SLEEP_PM_OPS(ufshcd_system_suspend, ufshcd_system_resume)
 	SET_RUNTIME_PM_OPS(ufshcd_runtime_suspend, ufshcd_runtime_resume, NULL)
-	.prepare	 = ufshcd_suspend_prepare,
+	.prepare	 = ufs_hisi_suspend_prepare,
 	.complete	 = ufshcd_resume_complete,
 };
 
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 57ed4b93b949..453fbb8753e2 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -9722,14 +9722,30 @@ void ufshcd_resume_complete(struct device *dev)
 		ufshcd_rpm_put(hba);
 		hba->complete_put = false;
 	}
-	if (hba->rpmb_complete_put) {
-		ufshcd_rpmb_rpm_put(hba);
-		hba->rpmb_complete_put = false;
-	}
 }
 EXPORT_SYMBOL_GPL(ufshcd_resume_complete);
 
-int ufshcd_suspend_prepare(struct device *dev)
+static bool ufshcd_rpm_ok_for_spm(struct ufs_hba *hba)
+{
+	struct device *dev = &hba->sdev_ufs_device->sdev_gendev;
+	enum ufs_dev_pwr_mode dev_pwr_mode;
+	enum uic_link_state link_state;
+	unsigned long flags;
+	bool res;
+
+	spin_lock_irqsave(&dev->power.lock, flags);
+	dev_pwr_mode = ufs_get_pm_lvl_to_dev_pwr_mode(hba->spm_lvl);
+	link_state = ufs_get_pm_lvl_to_link_pwr_state(hba->spm_lvl);
+	res = pm_runtime_suspended(dev) &&
+	      hba->curr_dev_pwr_mode == dev_pwr_mode &&
+	      hba->uic_link_state == link_state &&
+	      !hba->dev_info.b_rpm_dev_flush_capable;
+	spin_unlock_irqrestore(&dev->power.lock, flags);
+
+	return res;
+}
+
+int __ufshcd_suspend_prepare(struct device *dev, bool rpm_ok_for_spm)
 {
 	struct ufs_hba *hba = dev_get_drvdata(dev);
 	int ret;
@@ -9741,19 +9757,30 @@ int ufshcd_suspend_prepare(struct device *dev)
 	 * Refer ufshcd_resume_complete()
 	 */
 	if (hba->sdev_ufs_device) {
-		ret = ufshcd_rpm_get_sync(hba);
-		if (ret < 0 && ret != -EACCES) {
-			ufshcd_rpm_put(hba);
-			return ret;
+		/* Prevent runtime suspend */
+		ufshcd_rpm_get_noresume(hba);
+		/*
+		 * Check if already runtime suspended in same state as system
+		 * suspend would be.
+		 */
+		if (!rpm_ok_for_spm || !ufshcd_rpm_ok_for_spm(hba)) {
+			/* RPM state is not ok for SPM, so runtime resume */
+			ret = ufshcd_rpm_resume(hba);
+			if (ret < 0 && ret != -EACCES) {
+				ufshcd_rpm_put(hba);
+				return ret;
+			}
 		}
 		hba->complete_put = true;
 	}
-	if (hba->sdev_rpmb) {
-		ufshcd_rpmb_rpm_get_sync(hba);
-		hba->rpmb_complete_put = true;
-	}
 	return 0;
 }
+EXPORT_SYMBOL_GPL(__ufshcd_suspend_prepare);
+
+int ufshcd_suspend_prepare(struct device *dev)
+{
+	return __ufshcd_suspend_prepare(dev, true);
+}
 EXPORT_SYMBOL_GPL(ufshcd_suspend_prepare);
 
 #ifdef CONFIG_PM_SLEEP
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index 4723f27a55d1..1dc8024d5211 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -915,7 +915,6 @@ struct ufs_hba {
 #endif
 	u32 luns_avail;
 	bool complete_put;
-	bool rpmb_complete_put;
 };
 
 /* Returns true if clocks can be gated. Otherwise false */
@@ -1175,6 +1174,7 @@ int ufshcd_exec_raw_upiu_cmd(struct ufs_hba *hba,
 int ufshcd_wb_toggle(struct ufs_hba *hba, bool enable);
 
 int ufshcd_suspend_prepare(struct device *dev);
+int __ufshcd_suspend_prepare(struct device *dev, bool rpm_ok_for_spm);
 void ufshcd_resume_complete(struct device *dev);
 
 /* Wrapper functions for safely calling variant operations */
@@ -1383,6 +1383,16 @@ static inline int ufshcd_rpm_put_sync(struct ufs_hba *hba)
 	return pm_runtime_put_sync(&hba->sdev_ufs_device->sdev_gendev);
 }
 
+static inline void ufshcd_rpm_get_noresume(struct ufs_hba *hba)
+{
+	pm_runtime_get_noresume(&hba->sdev_ufs_device->sdev_gendev);
+}
+
+static inline int ufshcd_rpm_resume(struct ufs_hba *hba)
+{
+	return pm_runtime_resume(&hba->sdev_ufs_device->sdev_gendev);
+}
+
 static inline int ufshcd_rpm_put(struct ufs_hba *hba)
 {
 	return pm_runtime_put(&hba->sdev_ufs_device->sdev_gendev);