From patchwork Sat Jul 9 05:59:29 2022
X-Patchwork-Submitter: Akhil P Oommen
X-Patchwork-Id: 12912096
From: Akhil P Oommen
To: freedreno, Rob Clark, Bjorn Andersson
Cc: Jonathan Marek, Jordan Crouse, Matthias Kaehlcke, Douglas Anderson,
    Akhil P Oommen, Abhinav Kumar, Daniel Vetter, David Airlie,
    Dmitry Baryshkov, Sean Paul
Subject: [PATCH v2 1/7] drm/msm: Remove unnecessary pm_runtime_get/put
Date: Sat, 9 Jul 2022 11:29:29 +0530
Message-ID: <20220709112837.v2.1.Icf1e8f0c9b3e7e9933c3b48c70477d0582f3243f@changeid>
In-Reply-To: <1657346375-1461-1-git-send-email-quic_akhilpo@quicinc.com>

We already enable gpu power from msm_gpu_submit(), so avoid a duplicate
pm_runtime_get/put from msm_job_run().

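For context, pm_runtime_get/put calls are reference counted, so a nested
get/put pair taken while the submit path already holds a reference only
moves the usage count up and down without any power transition. Below is a
minimal sketch of that pattern; it is illustrative only and not the msm
driver code (the function name and the placement of the outer get/put are
assumptions based on the commit text above).

#include <linux/pm_runtime.h>

/* Illustrative sketch: nested pm_runtime references are just counts. */
static void submit_path_sketch(struct device *dev)
{
	pm_runtime_get_sync(dev);	/* taken once for the whole submit */

	pm_runtime_get_sync(dev);	/* redundant: count 1 -> 2, no resume */
	/* ... write commands into the ringbuffer ... */
	pm_runtime_put(dev);		/* count 2 -> 1, no suspend */

	pm_runtime_put(dev);		/* count reaches 0, device may suspend */
}
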
Signed-off-by: Akhil P Oommen
---

(no changes since v1)

 drivers/gpu/drm/msm/msm_ringbuffer.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
index 56eecb4..cad4c35 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -29,8 +29,6 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
 		msm_gem_unlock(obj);
 	}
 
-	pm_runtime_get_sync(&gpu->pdev->dev);
-
 	/* TODO move submit path over to using a per-ring lock.. */
 	mutex_lock(&gpu->lock);
 
@@ -38,8 +36,6 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
 
 	mutex_unlock(&gpu->lock);
 
-	pm_runtime_put(&gpu->pdev->dev);
-
 	return dma_fence_get(submit->hw_fence);
 }

From patchwork Sat Jul 9 05:59:30 2022
X-Patchwork-Submitter: Akhil P Oommen
X-Patchwork-Id: 12912097
From: Akhil P Oommen
To: freedreno, Rob Clark, Bjorn Andersson
Cc: Jonathan Marek, Jordan Crouse, Matthias Kaehlcke, Douglas Anderson,
    Akhil P Oommen, Abhinav Kumar, Daniel Vetter, David Airlie,
    Dmitry Baryshkov, Sean Paul
Subject: [PATCH v2 2/7] drm/msm: Correct pm_runtime votes in recover worker
Date: Sat, 9 Jul 2022 11:29:30 +0530
Message-ID: <20220709112837.v2.2.Ib07ecec3d5c17cb0e1efa6fcddaaa019ec2fb556@changeid>
In-Reply-To: <1657346375-1461-1-git-send-email-quic_akhilpo@quicinc.com>

In the scenario where there is only a single hung submit, the gpu is
power collapsed when it is retired. Because of this, by the time we call
recover(), the gpu state would already be cleared. Fix this by correctly
managing the pm runtime votes.

Signed-off-by: Akhil P Oommen
---

(no changes since v1)

 drivers/gpu/drm/msm/msm_gpu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index c2bfcf3f..18c1544 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -394,7 +394,6 @@ static void recover_worker(struct kthread_work *work)
 	/* Record the crash state */
 	pm_runtime_get_sync(&gpu->pdev->dev);
 	msm_gpu_crashstate_capture(gpu, submit, comm, cmd);
-	pm_runtime_put_sync(&gpu->pdev->dev);
 
 	kfree(cmd);
 	kfree(comm);
@@ -442,6 +441,8 @@ static void recover_worker(struct kthread_work *work)
 		}
 	}
 
+	pm_runtime_put_sync(&gpu->pdev->dev);
+
 	mutex_unlock(&gpu->lock);
 
 	msm_gpu_retire(gpu);

From patchwork Sat Jul 9 05:59:31 2022
X-Patchwork-Submitter: Akhil P Oommen
X-Patchwork-Id: 12912098
From: Akhil P Oommen
To: freedreno, Rob Clark, Bjorn Andersson
Cc: Jonathan Marek, Jordan Crouse, Matthias Kaehlcke, Douglas Anderson,
    Akhil P Oommen, Abhinav Kumar, Chia-I Wu, Daniel Vetter, David Airlie,
    Dmitry Baryshkov, Konrad Dybcio, Sean Paul
Subject: [PATCH v2 3/7] drm/msm: Fix cx collapse issue during recovery
Date: Sat, 9 Jul 2022 11:29:31 +0530
Message-ID: <20220709112837.v2.3.I4ac27a0b34ea796ce0f938bb509e257516bc6f57@changeid>
In-Reply-To: <1657346375-1461-1-git-send-email-quic_akhilpo@quicinc.com>

There is some hardware logic under the CX domain. For a successful
recovery, we should ensure that the cx headswitch collapses so that all
the stale states are cleared out. This is especially true for the a6xx
family, where we have a GMU co-processor. Currently, cx doesn't collapse
due to a devlink between the gpu and its smmu, so the gpu's struct device
needs to be runtime suspended to ensure that the iommu driver removes its
vote on the cx gdsc.

Signed-off-by: Akhil P Oommen
---

(no changes since v1)

 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 16 ++++++++++++++--
 drivers/gpu/drm/msm/msm_gpu.c         |  2 --
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 4d50110..7ed347c 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1278,8 +1278,20 @@ static void a6xx_recover(struct msm_gpu *gpu)
 	 */
 	gmu_write(&a6xx_gpu->gmu, REG_A6XX_GMU_GMU_PWR_COL_KEEPALIVE, 0);
 
-	gpu->funcs->pm_suspend(gpu);
-	gpu->funcs->pm_resume(gpu);
+	/*
+	 * Now drop all the pm_runtime usage count to allow cx gdsc to collapse.
+	 * First drop the usage count from all active submits
+	 */
+	for (i = gpu->active_submits; i > 0; i--)
+		pm_runtime_put(&gpu->pdev->dev);
+
+	/* And the final one from recover worker */
+	pm_runtime_put_sync(&gpu->pdev->dev);
+
+	for (i = gpu->active_submits; i > 0; i--)
+		pm_runtime_get(&gpu->pdev->dev);
+
+	pm_runtime_get_sync(&gpu->pdev->dev);
 
 	msm_gpu_hw_init(gpu);
 }
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 18c1544..aa6f34f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -422,9 +422,7 @@ static void recover_worker(struct kthread_work *work)
 	/* retire completed submits, plus the one that hung: */
 	retire_submits(gpu);
 
-	pm_runtime_get_sync(&gpu->pdev->dev);
 	gpu->funcs->recover(gpu);
-	pm_runtime_put_sync(&gpu->pdev->dev);
 
 	/*
 	 * Replay all remaining submits starting with highest priority
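
The key point in the hunk above is that runtime PM will only call the
driver's runtime-suspend hook, and thereby let the devlink supplier (the
smmu/iommu) drop its cx gdsc vote, once every held reference is put back.
A rough sketch of that balancing pattern follows; it is illustrative only,
not the driver code, and 'held_refs' is an assumed parameter standing in
for gpu->active_submits plus the recover worker's own reference.

#include <linux/pm_runtime.h>

/* Illustrative sketch: drop every held reference so the device can
 * runtime suspend, then take them back after the domain has collapsed. */
static void force_runtime_suspend_cycle(struct device *dev, int held_refs)
{
	int i;

	for (i = 0; i < held_refs; i++)
		pm_runtime_put(dev);
	pm_runtime_put_sync(dev);	/* usage count hits zero, device suspends */

	/* cx gdsc is expected to collapse at this point */

	pm_runtime_get_sync(dev);	/* power the device back up */
	for (i = 0; i < held_refs; i++)
		pm_runtime_get(dev);
}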

From patchwork Sat Jul 9 05:59:32 2022
X-Patchwork-Submitter: Akhil P Oommen
X-Patchwork-Id: 12912099
From: Akhil P Oommen
To: freedreno, Rob Clark, Bjorn Andersson
Cc: Jonathan Marek, Jordan Crouse, Matthias Kaehlcke, Douglas Anderson,
    Akhil P Oommen, Abhinav Kumar, Chia-I Wu, Daniel Vetter, David Airlie,
    Dmitry Baryshkov, Konrad Dybcio, Sean Paul, Stephen Boyd
Subject: [PATCH v2 4/7] drm/msm: Ensure cx gdsc collapse during recovery
Date: Sat, 9 Jul 2022 11:29:32 +0530
Message-ID: <20220709112837.v2.4.I510084ecc82b2efe42dd904fea595cdec99058b2@changeid>
In-Reply-To: <1657346375-1461-1-git-send-email-quic_akhilpo@quicinc.com>

To improve our chances of a successful recovery, we should ensure that
the cx headswitch collapses. The cx headswitch might be kept enabled
through a vote from another driver, like the iommu, or even another
hardware subsystem. So, poll the cx gdscr register to ensure that it
collapses during recovery.

Signed-off-by: Akhil P Oommen
---

(no changes since v1)

 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 13 ++++++++++++-
 drivers/gpu/drm/msm/msm_gpu.c         |  4 ++++
 drivers/gpu/drm/msm/msm_gpu.h         |  1 +
 3 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 7ed347c..9aaa469 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1257,11 +1257,15 @@ static void a6xx_dump(struct msm_gpu *gpu)
 #define VBIF_RESET_ACK_TIMEOUT	100
 #define VBIF_RESET_ACK_MASK	0x00f0
 
+#define CX_GDSCR_OFFSET		0x106c
+#define CX_GDSC_ON_MASK		BIT(31)
+
 static void a6xx_recover(struct msm_gpu *gpu)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
-	int i;
+	int i, ret;
+	u32 val;
 
 	adreno_dump_info(gpu);
 
@@ -1288,6 +1292,13 @@ static void a6xx_recover(struct msm_gpu *gpu)
 	/* And the final one from recover worker */
 	pm_runtime_put_sync(&gpu->pdev->dev);
 
+	if (gpu->gpucc_io) {
+		ret = readl_poll_timeout(gpu->gpucc_io + CX_GDSCR_OFFSET, val,
+				!(val & CX_GDSC_ON_MASK), 100, 500000);
+		if (ret)
+			DRM_DEV_INFO(&gpu->pdev->dev, "cx gdsc didn't collapse\n");
+	}
+
 	for (i = gpu->active_submits; i > 0; i--)
 		pm_runtime_get(&gpu->pdev->dev);
 
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index aa6f34f..7ada0785 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -865,6 +865,10 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 		goto fail;
 	}
 
+	gpu->gpucc_io = msm_ioremap(pdev, "gpucc");
+	if (IS_ERR(gpu->gpucc_io))
+		gpu->gpucc_io = NULL;
+
 	/* Get Interrupt: */
 	gpu->irq = platform_get_irq(pdev, 0);
 	if (gpu->irq < 0) {
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 4d935fe..1fe498f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -226,6 +226,7 @@ struct msm_gpu {
 	int global_faults;
 
 	void __iomem *mmio;
+	void __iomem *gpucc_io;
 	int irq;
 
 	struct msm_gem_address_space *aspace;

From patchwork Sat Jul 9 05:59:33 2022
X-Patchwork-Submitter: Akhil P Oommen
X-Patchwork-Id: 12912100
From: Akhil P Oommen
To: freedreno, Rob Clark, Bjorn Andersson
Cc: Jonathan Marek, Jordan Crouse, Matthias Kaehlcke, Douglas Anderson,
    Akhil P Oommen, Andy Gross, Krzysztof Kozlowski, Rob Herring
Subject: [PATCH v2 5/7] arm64: dts: qcom: sc7280: Update gpu register list
Date: Sat, 9 Jul 2022 11:29:33 +0530
Message-ID: <20220709112837.v2.5.I7291c830ace04fce07e6bd95a11de4ba91410f7b@changeid>
In-Reply-To: <1657346375-1461-1-git-send-email-quic_akhilpo@quicinc.com>

Update gpu register array with gpucc memory region.
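
Once "gpucc" is listed in reg-names, a driver can look the region up by
name rather than by index. The snippet below is a generic platform-driver
sketch of that lookup, not the msm code (which uses its own msm_ioremap()
helper, as seen in patch 4); the function name is a placeholder.

#include <linux/platform_device.h>
#include <linux/io.h>

/* Illustrative sketch: map an MMIO region that the DT exposes via reg-names. */
static void __iomem *map_named_region(struct platform_device *pdev,
				      const char *name)
{
	struct resource *res;

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, name);
	if (!res)
		return NULL;

	return devm_ioremap(&pdev->dev, res->start, resource_size(res));
}

/* e.g.: gpucc_base = map_named_region(pdev, "gpucc"); */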

Signed-off-by: Akhil P Oommen
---

(no changes since v1)

 arch/arm64/boot/dts/qcom/sc7280.dtsi | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
index e66fc67..defdb25 100644
--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
+++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
@@ -2228,10 +2228,12 @@
 			compatible = "qcom,adreno-635.0", "qcom,adreno";
 			reg = <0 0x03d00000 0 0x40000>,
 			      <0 0x03d9e000 0 0x1000>,
-			      <0 0x03d61000 0 0x800>;
+			      <0 0x03d61000 0 0x800>,
+			      <0 0x03d90000 0 0x2000>;
 			reg-names = "kgsl_3d0_reg_memory",
 				    "cx_mem",
-				    "cx_dbgc";
+				    "cx_dbgc",
+				    "gpucc";
 			interrupts = ;
 			iommus = <&adreno_smmu 0 0x401>;
 			operating-points-v2 = <&gpu_opp_table>;

From patchwork Sat Jul 9 05:59:34 2022
X-Patchwork-Submitter: Akhil P Oommen
X-Patchwork-Id: 12912101
From: Akhil P Oommen
To: freedreno, Rob Clark, Bjorn Andersson
Cc: Jonathan Marek, Jordan Crouse, Matthias Kaehlcke, Douglas Anderson,
    Akhil P Oommen, Abhinav Kumar, Chia-I Wu, Christian König,
    Daniel Vetter, David Airlie, Dmitry Baryshkov, Geert Uytterhoeven,
    Konrad Dybcio, Sean Paul, Wang Qing
Subject: [PATCH v2 6/7] drm/msm/a6xx: Improve gpu recovery sequence
Date: Sat, 9 Jul 2022 11:29:34 +0530
Message-ID: <20220709112837.v2.6.Idf2ba51078e87ae7ceb75cc77a5bd4ff2bd31eab@changeid>
In-Reply-To: <1657346375-1461-1-git-send-email-quic_akhilpo@quicinc.com>

We can do a few more things to improve our chance at a successful gpu
recovery, especially during a hangcheck timeout:
 1. Halt CP and GMU core
 2. Do RBBM GBIF HALT sequence
 3. Do a soft reset of GPU core

Signed-off-by: Akhil P Oommen
---

(no changes since v1)

 drivers/gpu/drm/msm/adreno/a6xx.xml.h |  4 ++
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 77 +++++++++++++++++++++--------------
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c |  7 ++++
 3 files changed, 58 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx.xml.h b/drivers/gpu/drm/msm/adreno/a6xx.xml.h
index b03e2c4..beea4a7 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx.xml.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx.xml.h
@@ -1413,6 +1413,10 @@ static inline uint32_t REG_A6XX_RBBM_PERFCTR_RBBM_SEL(uint32_t i0) { return 0x00
 
 #define REG_A6XX_RBBM_GBIF_CLIENT_QOS_CNTL			0x00000011
 
+#define REG_A6XX_RBBM_GBIF_HALT					0x00000016
+
+#define REG_A6XX_RBBM_GBIF_HALT_ACK				0x00000017
+
 #define REG_A6XX_RBBM_WAIT_FOR_GPU_IDLE_CMD			0x0000001c
 #define A6XX_RBBM_WAIT_FOR_GPU_IDLE_CMD_WAIT_GPU_IDLE		0x00000001
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 310a317..ec25f19 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -873,9 +873,47 @@ static void a6xx_gmu_rpmh_off(struct a6xx_gmu *gmu)
 		(val & 1), 100, 1000);
 }
 
+#define GBIF_CLIENT_HALT_MASK	BIT(0)
+#define GBIF_ARB_HALT_MASK	BIT(1)
+
+static void a6xx_bus_clear_pending_transactions(struct adreno_gpu *adreno_gpu)
+{
+	struct msm_gpu *gpu = &adreno_gpu->base;
+
+	if (!a6xx_has_gbif(adreno_gpu)) {
+		gpu_write(gpu, REG_A6XX_VBIF_XIN_HALT_CTRL0, 0xf);
+		spin_until((gpu_read(gpu, REG_A6XX_VBIF_XIN_HALT_CTRL1) &
+								0xf) == 0xf);
+		gpu_write(gpu, REG_A6XX_VBIF_XIN_HALT_CTRL0, 0);
+
+		return;
+	}
+
+	/* Halt the gx side of GBIF */
+	gpu_write(gpu, REG_A6XX_RBBM_GBIF_HALT, 1);
+	spin_until(gpu_read(gpu, REG_A6XX_RBBM_GBIF_HALT_ACK) & 1);
+
+	/* Halt new client requests on GBIF */
+	gpu_write(gpu, REG_A6XX_GBIF_HALT, GBIF_CLIENT_HALT_MASK);
+	spin_until((gpu_read(gpu, REG_A6XX_GBIF_HALT_ACK) &
+			(GBIF_CLIENT_HALT_MASK)) == GBIF_CLIENT_HALT_MASK);
+
+	/* Halt all AXI requests on GBIF */
+	gpu_write(gpu, REG_A6XX_GBIF_HALT, GBIF_ARB_HALT_MASK);
+	spin_until((gpu_read(gpu, REG_A6XX_GBIF_HALT_ACK) &
+			(GBIF_ARB_HALT_MASK)) == GBIF_ARB_HALT_MASK);
+
+	/* The GBIF halt needs to be explicitly cleared */
+	gpu_write(gpu, REG_A6XX_GBIF_HALT, 0x0);
+}
+
 /* Force the GMU off in case it isn't responsive */
 static void a6xx_gmu_force_off(struct a6xx_gmu *gmu)
 {
+	struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
+	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+	struct msm_gpu *gpu = &adreno_gpu->base;
+
 	/* Flush all the queues */
 	a6xx_hfi_stop(gmu);
 
@@ -887,6 +925,15 @@ static void a6xx_gmu_force_off(struct a6xx_gmu *gmu)
 
 	/* Make sure there are no outstanding RPMh votes */
 	a6xx_gmu_rpmh_off(gmu);
+
+	/* Halt the gmu cm3 core */
+	gmu_write(gmu, REG_A6XX_GMU_CM3_SYSRESET, 1);
+
+	a6xx_bus_clear_pending_transactions(adreno_gpu);
+
+	/* Reset GPU core blocks */
+	gpu_write(gpu, REG_A6XX_RBBM_SW_RESET_CMD, 1);
+	udelay(100);
 }
 
 static void a6xx_gmu_set_initial_freq(struct msm_gpu *gpu, struct a6xx_gmu *gmu)
@@ -1014,36 +1061,6 @@ bool a6xx_gmu_isidle(struct a6xx_gmu *gmu)
 	return true;
 }
 
-#define GBIF_CLIENT_HALT_MASK	BIT(0)
-#define GBIF_ARB_HALT_MASK	BIT(1)
-
-static void a6xx_bus_clear_pending_transactions(struct adreno_gpu *adreno_gpu)
-{
-	struct msm_gpu *gpu = &adreno_gpu->base;
-
-	if (!a6xx_has_gbif(adreno_gpu)) {
-		gpu_write(gpu, REG_A6XX_VBIF_XIN_HALT_CTRL0, 0xf);
-		spin_until((gpu_read(gpu, REG_A6XX_VBIF_XIN_HALT_CTRL1) &
-								0xf) == 0xf);
-		gpu_write(gpu, REG_A6XX_VBIF_XIN_HALT_CTRL0, 0);
-
-		return;
-	}
-
-	/* Halt new client requests on GBIF */
-	gpu_write(gpu, REG_A6XX_GBIF_HALT, GBIF_CLIENT_HALT_MASK);
-	spin_until((gpu_read(gpu, REG_A6XX_GBIF_HALT_ACK) &
-			(GBIF_CLIENT_HALT_MASK)) == GBIF_CLIENT_HALT_MASK);
-
-	/* Halt all AXI requests on GBIF */
-	gpu_write(gpu, REG_A6XX_GBIF_HALT, GBIF_ARB_HALT_MASK);
-	spin_until((gpu_read(gpu, REG_A6XX_GBIF_HALT_ACK) &
-			(GBIF_ARB_HALT_MASK)) == GBIF_ARB_HALT_MASK);
-
-	/* The GBIF halt needs to be explicitly cleared */
-	gpu_write(gpu, REG_A6XX_GBIF_HALT, 0x0);
-}
-
 /* Gracefully try to shut down the GMU and by extension the GPU */
 static void a6xx_gmu_shutdown(struct a6xx_gmu *gmu)
 {
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 9aaa469..60c3033 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -987,6 +987,10 @@ static int hw_init(struct msm_gpu *gpu)
 	/* Make sure the GMU keeps the GPU on while we set it up */
 	a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET);
 
+	/* Clear GBIF halt in case GX domain was not collapsed */
+	if (a6xx_has_gbif(adreno_gpu))
+		gpu_write(gpu, REG_A6XX_RBBM_GBIF_HALT, 0);
+
 	gpu_write(gpu, REG_A6XX_RBBM_SECVID_TSB_CNTL, 0);
 
 	/*
@@ -1276,6 +1280,9 @@ static void a6xx_recover(struct msm_gpu *gpu)
 	if (hang_debug)
 		a6xx_dump(gpu);
 
+	/* Halt SQE first */
+	gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 3);
+
 	/*
 	 * Turn off keep alive that might have been enabled by the hang
 	 * interrupt

From patchwork Sat Jul 9 05:59:35 2022
X-Patchwork-Submitter: Akhil P Oommen
X-Patchwork-Id: 12912102
From: Akhil P Oommen
To: freedreno, Rob Clark, Bjorn Andersson
Cc: Jonathan Marek, Jordan Crouse, Matthias Kaehlcke, Douglas Anderson,
    Akhil P Oommen, Abhinav Kumar, Daniel Vetter, David Airlie,
    Dmitry Baryshkov, Geert Uytterhoeven, Konrad Dybcio, Sean Paul
Subject: [PATCH v2 7/7] drm/msm/a6xx: Handle GMU prepare-slumber hfi failure
Date: Sat, 9 Jul 2022 11:29:35 +0530
Message-ID: <20220709112837.v2.7.I54815c7c36b80d4725cd054e536365250454452f@changeid>
In-Reply-To: <1657346375-1461-1-git-send-email-quic_akhilpo@quicinc.com>

When the prepare-slumber hfi fails, we should follow the
a6xx_gmu_force_off() sequence.

Signed-off-by: Akhil P Oommen
---

(no changes since v1)

 drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index ec25f19..e033d6a 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1086,7 +1086,11 @@ static void a6xx_gmu_shutdown(struct a6xx_gmu *gmu)
 		a6xx_bus_clear_pending_transactions(adreno_gpu);
 
 		/* tell the GMU we want to slumber */
-		a6xx_gmu_notify_slumber(gmu);
+		ret = a6xx_gmu_notify_slumber(gmu);
+		if (ret) {
+			a6xx_gmu_force_off(gmu);
+			return;
+		}
 
 		ret = gmu_poll_timeout(gmu, REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS, val,