From patchwork Thu Aug 31 13:08:24 2023
X-Patchwork-Submitter: Peter Wang (王信友)
X-Patchwork-Id: 13371569
Subject: [PATCH v2 1/3] ufs: core: only suspend clock scaling if scale down
Date: Thu, 31 Aug 2023 21:08:24 +0800
Message-ID: <20230831130826.5592-2-peter.wang@mediatek.com>
In-Reply-To: <20230831130826.5592-1-peter.wang@mediatek.com>
References: <20230831130826.5592-1-peter.wang@mediatek.com>
X-Mailer: git-send-email 2.18.0

From: Peter Wang

If the clocks are scaled up and clock scaling is then suspended, UFS stays
in the high performance/power mode even though no read/write requests are
in flight. That is logically wrong and wastes power, so only queue the
suspend work when the clocks were scaled down.

Signed-off-by: Peter Wang
Reviewed-by: Bart Van Assche
---
 drivers/ufs/core/ufshcd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index 129446775796..e3672e55efae 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -1458,7 +1458,7 @@ static int ufshcd_devfreq_target(struct device *dev,
 		ktime_to_us(ktime_sub(ktime_get(), start)), ret);
 
 out:
-	if (sched_clk_scaling_suspend_work)
+	if (sched_clk_scaling_suspend_work && !scale_up)
 		queue_work(hba->clk_scaling.workq,
 			   &hba->clk_scaling.suspend_work);
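For reviewers following along, the tail of ufshcd_devfreq_target() would read
roughly as below with this change applied. This is only an illustrative
sketch reconstructed from the hunk above; the lines outside the diff
(including the final return) are paraphrased, not quoted from the tree.

	out:
		/*
		 * Park clock scaling only after a scale down: the host is
		 * idle, so suspending devfreq in the low gear saves power.
		 * After a scale up the suspend work must not be queued,
		 * otherwise scaling would be frozen in the high-power gear
		 * once the outstanding requests complete.
		 */
		if (sched_clk_scaling_suspend_work && !scale_up)
			queue_work(hba->clk_scaling.workq,
				   &hba->clk_scaling.suspend_work);

		return ret;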
From patchwork Thu Aug 31 13:08:25 2023
X-Patchwork-Submitter: Peter Wang (王信友)
X-Patchwork-Id: 13371570
Subject: [PATCH v2 2/3] ufs: core: fix abnormal scale up after last cmd finish
Date: Thu, 31 Aug 2023 21:08:25 +0800
Message-ID: <20230831130826.5592-3-peter.wang@mediatek.com>
In-Reply-To: <20230831130826.5592-1-peter.wang@mediatek.com>
References: <20230831130826.5592-1-peter.wang@mediatek.com>
X-Mailer: git-send-email 2.18.0

From: Peter Wang

While ufshcd_clk_scaling_suspend_work() (Thread A) is running and a new
command arrives, ufshcd_clk_scaling_start_busy() (Thread B) can take
host_lock right after Thread A releases it for the first time. When
Thread A takes host_lock a second time it sets
clk_scaling.window_start_t = 0, which causes an abnormal clock scale up
in the next polling_ms window. Fix this by resetting window_start_t in
the same critical section that sets is_suspended, and inline the
remaining __ufshcd_suspend_clkscaling() call sites.

The racing steps (the numbers give the global order):

1 hba->clk_scaling.suspend_work (Thread A)
    ufshcd_clk_scaling_suspend_work
2     spin_lock_irqsave(hba->host->host_lock, irq_flags);
3     hba->clk_scaling.is_suspended = true;
4     spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
      __ufshcd_suspend_clkscaling
7       spin_lock_irqsave(hba->host->host_lock, flags);
8       hba->clk_scaling.window_start_t = 0;
9       spin_unlock_irqrestore(hba->host->host_lock, flags);

  ufshcd_send_command (Thread B)
    ufshcd_clk_scaling_start_busy
5     spin_lock_irqsave(hba->host->host_lock, flags);
      ....
6     spin_unlock_irqrestore(hba->host->host_lock, flags);

Signed-off-by: Peter Wang
Reviewed-by: Bart Van Assche
---
 drivers/ufs/core/ufshcd.c | 17 ++++-------------
 1 file changed, 4 insertions(+), 13 deletions(-)

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index e3672e55efae..057549b0e586 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -275,7 +275,6 @@ static inline void ufshcd_add_delay_before_dme_cmd(struct ufs_hba *hba);
 static int ufshcd_host_reset_and_restore(struct ufs_hba *hba);
 static void ufshcd_resume_clkscaling(struct ufs_hba *hba);
 static void ufshcd_suspend_clkscaling(struct ufs_hba *hba);
-static void __ufshcd_suspend_clkscaling(struct ufs_hba *hba);
 static int ufshcd_scale_clks(struct ufs_hba *hba, bool scale_up);
 static irqreturn_t ufshcd_intr(int irq, void *__hba);
 static int ufshcd_change_power_mode(struct ufs_hba *hba,
@@ -1385,9 +1384,10 @@ static void ufshcd_clk_scaling_suspend_work(struct work_struct *work)
 		return;
 	}
 	hba->clk_scaling.is_suspended = true;
+	hba->clk_scaling.window_start_t = 0;
 	spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
 
-	__ufshcd_suspend_clkscaling(hba);
+	devfreq_suspend_device(hba->devfreq);
 }
 
 static void ufshcd_clk_scaling_resume_work(struct work_struct *work)
@@ -1564,16 +1564,6 @@ static void ufshcd_devfreq_remove(struct ufs_hba *hba)
 			dev_pm_opp_remove(hba->dev, clki->max_freq);
 }
 
-static void __ufshcd_suspend_clkscaling(struct ufs_hba *hba)
-{
-	unsigned long flags;
-
-	devfreq_suspend_device(hba->devfreq);
-	spin_lock_irqsave(hba->host->host_lock, flags);
-	hba->clk_scaling.window_start_t = 0;
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
-}
-
 static void ufshcd_suspend_clkscaling(struct ufs_hba *hba)
 {
 	unsigned long flags;
@@ -1586,11 +1576,12 @@ static void ufshcd_suspend_clkscaling(struct ufs_hba *hba)
 	if (!hba->clk_scaling.is_suspended) {
 		suspend = true;
 		hba->clk_scaling.is_suspended = true;
+		hba->clk_scaling.window_start_t = 0;
 	}
 	spin_unlock_irqrestore(hba->host->host_lock, flags);
 
 	if (suspend)
-		__ufshcd_suspend_clkscaling(hba);
+		devfreq_suspend_device(hba->devfreq);
 }
 
 static void ufshcd_resume_clkscaling(struct ufs_hba *hba)
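With this patch applied, the suspend work would read roughly as follows. Only
the lines appearing in the hunks above are taken from the diff; the opening of
the function (the container_of() and the early-return check) is paraphrased
from the hunk context and should be treated as an approximation.

	static void ufshcd_clk_scaling_suspend_work(struct work_struct *work)
	{
		struct ufs_hba *hba = container_of(work, struct ufs_hba,
						   clk_scaling.suspend_work);
		unsigned long irq_flags;

		spin_lock_irqsave(hba->host->host_lock, irq_flags);
		if (hba->clk_scaling.active_reqs || hba->clk_scaling.is_suspended) {
			spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
			return;
		}
		/*
		 * window_start_t is now cleared in the same critical section
		 * that marks scaling as suspended, so a command that slips in
		 * between (ufshcd_clk_scaling_start_busy(), steps 5/6 above)
		 * can no longer have its freshly started window zeroed behind
		 * its back.
		 */
		hba->clk_scaling.is_suspended = true;
		hba->clk_scaling.window_start_t = 0;
		spin_unlock_irqrestore(hba->host->host_lock, irq_flags);

		devfreq_suspend_device(hba->devfreq);
	}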
From patchwork Thu Aug 31 13:08:26 2023
X-Patchwork-Submitter: Peter Wang (王信友)
X-Patchwork-Id: 13371594
Subject: [PATCH v2 3/3] ufs: core: fix abnormal scale up after scale down
Date: Thu, 31 Aug 2023 21:08:26 +0800
Message-ID: <20230831130826.5592-4-peter.wang@mediatek.com>
In-Reply-To: <20230831130826.5592-1-peter.wang@mediatek.com>
References: <20230831130826.5592-1-peter.wang@mediatek.com>
X-Mailer: git-send-email 2.18.0

From: Peter Wang

When there are no active_reqs, devfreq_monitor (Thread A) decides to suspend
clock scaling. This can race with clk_scaling.suspend_work (Thread B), so
clock scaling is not actually suspended: the monitor work is requeued after
the suspend. After the next polling_ms interval devfreq_monitor then reads
clk_scaling.window_start_t == 0 and scales the clock up abnormally.

The racing steps (the numbers give the global order):

devfreq->work (Thread A)
  devfreq_monitor
    update_devfreq
      .....
        ufshcd_devfreq_target
          queue_work(hba->clk_scaling.workq,
1                    &hba->clk_scaling.suspend_work)
      .....
5   queue_delayed_work(devfreq_wq, &devfreq->work,
                       msecs_to_jiffies(devfreq->profile->polling_ms));

2 hba->clk_scaling.suspend_work (Thread B)
    ufshcd_clk_scaling_suspend_work
      __ufshcd_suspend_clkscaling
        devfreq_suspend_device(hba->devfreq);
3         cancel_delayed_work_sync(&devfreq->work);
4       hba->clk_scaling.window_start_t = 0;
  .....

Signed-off-by: Peter Wang
Reviewed-by: Bart Van Assche
---
 drivers/ufs/core/ufshcd.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index 057549b0e586..d72fa2c1e316 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -1430,6 +1430,13 @@ static int ufshcd_devfreq_target(struct device *dev,
 		return 0;
 	}
 
+	/* Skip scaling clock when clock scaling is suspended */
+	if (hba->clk_scaling.is_suspended) {
+		spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+		dev_warn(hba->dev, "clock scaling is suspended, skip");
+		return 0;
+	}
+
 	if (!hba->clk_scaling.active_reqs)
 		sched_clk_scaling_suspend_work = true;
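The effect is that a devfreq_monitor run that was requeued behind a completed
suspend (step 5 above) now bails out early, because the suspend work has
already set is_suspended, instead of acting on window_start_t == 0. A rough
sketch of the top of ufshcd_devfreq_target() with this patch applied is shown
below; apart from the newly added block, the surrounding lines are paraphrased
from the hunk context and may not match the tree exactly.

	static int ufshcd_devfreq_target(struct device *dev,
					 unsigned long *freq, u32 flags)
	{
		struct ufs_hba *hba = dev_get_drvdata(dev);
		bool sched_clk_scaling_suspend_work = false;
		unsigned long irq_flags;
		...
		spin_lock_irqsave(hba->host->host_lock, irq_flags);
		if (ufshcd_eh_in_progress(hba)) {
			spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
			return 0;
		}

		/*
		 * A monitor run that raced with the suspend work sees
		 * is_suspended here and returns without touching the clocks,
		 * so the stale window_start_t == 0 can no longer trigger a
		 * bogus scale up.
		 */
		if (hba->clk_scaling.is_suspended) {
			spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
			dev_warn(hba->dev, "clock scaling is suspended, skip");
			return 0;
		}

		if (!hba->clk_scaling.active_reqs)
			sched_clk_scaling_suspend_work = true;
		...
	}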