From patchwork Sat May 16 17:46:14 2020
X-Patchwork-Submitter: Stanley Chu
X-Patchwork-Id: 11553623
From: Stanley Chu
Subject: [PATCH v3 4/5] scsi: ufs: Fix WriteBooster flush during runtime suspend
Date: Sun, 17 May 2020 01:46:14 +0800
Message-ID: <20200516174615.15445-5-stanley.chu@mediatek.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20200516174615.15445-1-stanley.chu@mediatek.com>
References: <20200516174615.15445-1-stanley.chu@mediatek.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Currently the UFS host driver only promises to keep VCC supplied if the
UFS device needs to do a WriteBooster flush during runtime suspend.
However, the UFS specification states:

"While the flushing operation is in progress, the device is in Active
power mode."

Therefore the host driver needs to promise more: keep the UFS device in
Active power mode, because the device shall not do any flush if it
enters the Sleep or PowerDown power mode.

Fix this by not changing the device power mode in ufshcd_suspend() if a
WriteBooster flush is required.
Signed-off-by: Stanley Chu
---
 drivers/scsi/ufs/ufs.h    |  1 -
 drivers/scsi/ufs/ufshcd.c | 42 ++++++++++++++++++++-------------------
 2 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/drivers/scsi/ufs/ufs.h b/drivers/scsi/ufs/ufs.h
index fadba3a3bbcd..db07eedfed96 100644
--- a/drivers/scsi/ufs/ufs.h
+++ b/drivers/scsi/ufs/ufs.h
@@ -574,7 +574,6 @@ struct ufs_dev_info {
 	u32 d_ext_ufs_feature_sup;
 	u8 b_wb_buffer_type;
 	u32 d_wb_alloc_units;
-	bool keep_vcc_on;
 	u8 b_presrv_uspc_en;
 };
 
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index c7c2bd7860b8..f4f2c7b5ab0a 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -8094,8 +8094,7 @@ static void ufshcd_vreg_set_lpm(struct ufs_hba *hba)
 	    !hba->dev_info.is_lu_power_on_wp) {
 		ufshcd_setup_vreg(hba, false);
 	} else if (!ufshcd_is_ufs_dev_active(hba)) {
-		if (!hba->dev_info.keep_vcc_on)
-			ufshcd_toggle_vreg(hba->dev, hba->vreg_info.vcc, false);
+		ufshcd_toggle_vreg(hba->dev, hba->vreg_info.vcc, false);
 		if (!ufshcd_is_link_active(hba)) {
 			ufshcd_config_vreg_lpm(hba, hba->vreg_info.vccq);
 			ufshcd_config_vreg_lpm(hba, hba->vreg_info.vccq2);
@@ -8165,6 +8164,7 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
 	enum ufs_pm_level pm_lvl;
 	enum ufs_dev_pwr_mode req_dev_pwr_mode;
 	enum uic_link_state req_link_state;
+	bool keep_curr_dev_pwr_mode = false;
 
 	hba->pm_op_in_progress = 1;
 	if (!ufshcd_is_shutdown_pm(pm_op)) {
@@ -8220,27 +8220,29 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
 			ufshcd_disable_auto_bkops(hba);
 		}
 		/*
-		 * With wb enabled, if the bkops is enabled or if the
-		 * configured WB type is 70% full, keep vcc ON
-		 * for the device to flush the wb buffer
+		 * If device needs to do BKOP or WB buffer flush during
+		 * Hibern8, keep device power mode as "active power mode"
+		 * and VCC supply.
 		 */
-		if ((hba->auto_bkops_enabled && ufshcd_is_wb_allowed(hba)) ||
-		    ufshcd_wb_keep_vcc_on(hba))
-			hba->dev_info.keep_vcc_on = true;
-		else
-			hba->dev_info.keep_vcc_on = false;
-	} else {
-		hba->dev_info.keep_vcc_on = false;
+		keep_curr_dev_pwr_mode = hba->auto_bkops_enabled ||
+			(((req_link_state == UIC_LINK_HIBERN8_STATE) ||
+			  ((req_link_state == UIC_LINK_ACTIVE_STATE) &&
+			   ufshcd_is_auto_hibern8_enabled(hba))) &&
+			 ufshcd_wb_keep_vcc_on(hba));
 	}
 
-	if ((req_dev_pwr_mode != hba->curr_dev_pwr_mode) &&
-	    ((ufshcd_is_runtime_pm(pm_op) && !hba->auto_bkops_enabled) ||
-	     !ufshcd_is_runtime_pm(pm_op))) {
-		/* ensure that bkops is disabled */
-		ufshcd_disable_auto_bkops(hba);
-		ret = ufshcd_set_dev_pwr_mode(hba, req_dev_pwr_mode);
-		if (ret)
-			goto enable_gating;
+	if (req_dev_pwr_mode != hba->curr_dev_pwr_mode) {
+		if ((ufshcd_is_runtime_pm(pm_op) && !hba->auto_bkops_enabled) ||
+		    !ufshcd_is_runtime_pm(pm_op)) {
+			/* ensure that bkops is disabled */
+			ufshcd_disable_auto_bkops(hba);
+		}
+
+		if (!keep_curr_dev_pwr_mode) {
+			ret = ufshcd_set_dev_pwr_mode(hba, req_dev_pwr_mode);
+			if (ret)
+				goto enable_gating;
+		}
 	}
 
 	flush_work(&hba->eeh_work);
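
For readers skimming the thread, below is a stand-alone sketch (not part
of the patch) of the keep_curr_dev_pwr_mode decision the hunk above adds
to ufshcd_suspend(). The example struct, its fields, and the
wb_flush_required flag standing in for ufshcd_wb_keep_vcc_on() are
simplified assumptions for illustration only.

/*
 * Illustrative sketch: models the decision of whether to keep the UFS
 * device in its current (Active) power mode during runtime suspend so
 * a WriteBooster flush or BKOPS can run. Names here are stand-ins for
 * the real driver state, not the driver's API.
 */
#include <stdbool.h>
#include <stdio.h>

enum uic_link_state {
	UIC_LINK_OFF_STATE,
	UIC_LINK_ACTIVE_STATE,
	UIC_LINK_HIBERN8_STATE,
};

struct example_hba {
	bool auto_bkops_enabled;   /* device needs BKOPS during suspend */
	bool auto_hibern8_enabled; /* host auto-hibern8 is configured */
	bool wb_flush_required;    /* stand-in for ufshcd_wb_keep_vcc_on() */
};

/*
 * Keep the current device power mode when BKOPS is needed, or when a
 * WriteBooster flush is needed and the link will be in (auto-)Hibern8
 * rather than off, mirroring the condition added by this patch.
 */
static bool keep_curr_dev_pwr_mode(const struct example_hba *hba,
				   enum uic_link_state req_link_state)
{
	bool link_allows_flush =
		req_link_state == UIC_LINK_HIBERN8_STATE ||
		(req_link_state == UIC_LINK_ACTIVE_STATE &&
		 hba->auto_hibern8_enabled);

	return hba->auto_bkops_enabled ||
	       (link_allows_flush && hba->wb_flush_required);
}

int main(void)
{
	struct example_hba hba = {
		.auto_bkops_enabled = false,
		.auto_hibern8_enabled = false,
		.wb_flush_required = true,
	};

	/* Flush needed and link stays in Hibern8: stay in Active mode. */
	printf("hibern8 link: keep active = %d\n",
	       keep_curr_dev_pwr_mode(&hba, UIC_LINK_HIBERN8_STATE));

	/* Flush needed but link goes off: device power mode may change. */
	printf("link off:     keep active = %d\n",
	       keep_curr_dev_pwr_mode(&hba, UIC_LINK_OFF_STATE));

	return 0;
}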