From patchwork Wed Sep 27 03:35:57 2023
From: Peter Wang <peter.wang@mediatek.com>
Subject: [PATCH v5] ufs: core: wlun send SSU timeout recovery
Date: Wed, 27 Sep 2023 11:35:57 +0800
Message-ID: <20230927033557.13801-1-peter.wang@mediatek.com>

When a runtime PM initiated SSU (START STOP UNIT) command times out, the
SCSI core invokes eh_host_reset_handler. Its hook,
ufshcd_eh_host_reset_handler(), schedules eh_work and then blocks in
flush_work(&hba->eh_work), while ufshcd_err_handler() (eh_work) in turn
blocks waiting for runtime PM to resume, so the two wait on each other
forever. Break the cycle by doing link recovery directly in this case
instead of scheduling eh_work.
Below is the IO hang stack dump on kernel-6.1:

kworker/4:0     D
 __switch_to+0x180/0x344
 __schedule+0x5ec/0xa14
 schedule+0x78/0xe0
 schedule_timeout+0xb0/0x15c
 io_schedule_timeout+0x48/0x70
 do_wait_for_common+0x108/0x19c
 wait_for_completion_io_timeout+0x50/0x78
 blk_execute_rq+0x1b8/0x218
 scsi_execute_cmd+0x148/0x238
 ufshcd_set_dev_pwr_mode+0xe8/0x244
 __ufshcd_wl_resume+0x1e0/0x45c
 ufshcd_wl_runtime_resume+0x3c/0x174
 scsi_runtime_resume+0x7c/0xc8
 __rpm_callback+0xa0/0x410
 rpm_resume+0x43c/0x67c
 __rpm_callback+0x1f0/0x410
 rpm_resume+0x460/0x67c
 pm_runtime_work+0xa4/0xac
 process_one_work+0x208/0x598
 worker_thread+0x228/0x438
 kthread+0x104/0x1d4
 ret_from_fork+0x10/0x20

scsi_eh_0       D
 __switch_to+0x180/0x344
 __schedule+0x5ec/0xa14
 schedule+0x78/0xe0
 schedule_timeout+0x44/0x15c
 do_wait_for_common+0x108/0x19c
 wait_for_completion+0x48/0x64
 __flush_work+0x260/0x2d0
 flush_work+0x10/0x20
 ufshcd_eh_host_reset_handler+0x88/0xcc
 scsi_try_host_reset+0x48/0xe0
 scsi_eh_ready_devs+0x934/0xa40
 scsi_error_handler+0x168/0x374
 kthread+0x104/0x1d4
 ret_from_fork+0x10/0x20

kworker/u16:5   D
 __switch_to+0x180/0x344
 __schedule+0x5ec/0xa14
 schedule+0x78/0xe0
 rpm_resume+0x114/0x67c
 __pm_runtime_resume+0x70/0xb4
 ufshcd_err_handler+0x1a0/0xe68
 process_one_work+0x208/0x598
 worker_thread+0x228/0x438
 kthread+0x104/0x1d4
 ret_from_fork+0x10/0x20

Signed-off-by: Peter Wang
Reviewed-by: Bart Van Assche
Reviewed-by: Stanley Chu
---
 drivers/ufs/core/ufshcd.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index c2df07545f96..0619cefa092e 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -7716,6 +7716,20 @@ static int ufshcd_eh_host_reset_handler(struct scsi_cmnd *cmd)
 
 	hba = shost_priv(cmd->device->host);
 
+	/*
+	 * If a runtime PM initiated SSU command times out, scsi_error_handler
+	 * is stuck in this function waiting for flush_work(&hba->eh_work),
+	 * while ufshcd_err_handler() (eh_work) is stuck waiting for runtime
+	 * PM to become active. Doing ufshcd_link_recovery() here instead of
+	 * scheduling eh_work prevents that deadlock.
+	 */
+	if (hba->pm_op_in_progress) {
+		if (ufshcd_link_recovery(hba))
+			err = FAILED;
+
+		return err;
+	}
+
 	spin_lock_irqsave(hba->host->host_lock, flags);
 	hba->force_reset = true;
 	ufshcd_schedule_eh_work(hba);