From patchwork Thu Apr 18 15:00:19 2019
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10907487
From: Andrey Grodzovsky
Subject: [PATCH v5 1/6] drm/amd/display: wait for fence without holding
 reservation lock
Date: Thu, 18 Apr 2019 11:00:19 -0400
Message-ID: <1555599624-12285-1-git-send-email-andrey.grodzovsky@amd.com>
Cc: Nicholas.Kazlauskas@amd.com, Christian König

From: Christian König

Don't block others while waiting for the fences to finish; concurrent
submission is perfectly valid in this case, and holding the lock can
prevent killed applications from terminating.
Signed-off-by: Christian König
Reviewed-by: Nicholas Kazlauskas
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 380a7f9..ad4f0e5 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -4814,23 +4814,26 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
 			continue;
 		}

+		abo = gem_to_amdgpu_bo(fb->obj[0]);
+
+		/* Wait for all fences on this FB */
+		r = reservation_object_wait_timeout_rcu(abo->tbo.resv, true,
+							false,
+							MAX_SCHEDULE_TIMEOUT);
+		WARN_ON(r < 0);
+
 		/*
 		 * TODO This might fail and hence better not used, wait
 		 * explicitly on fences instead
 		 * and in general should be called for
 		 * blocking commit to as per framework helpers
 		 */
-		abo = gem_to_amdgpu_bo(fb->obj[0]);
 		r = amdgpu_bo_reserve(abo, true);
 		if (unlikely(r != 0)) {
 			DRM_ERROR("failed to reserve buffer before flip\n");
 			WARN_ON(1);
 		}

-		/* Wait for all fences on this FB */
-		WARN_ON(reservation_object_wait_timeout_rcu(abo->tbo.resv, true, false,
-							    MAX_SCHEDULE_TIMEOUT) < 0);
-
 		amdgpu_bo_get_tiling_flags(abo, &tiling_flags);

 		amdgpu_bo_unreserve(abo);
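The net effect of the hunk is to hoist the fence wait out of the reserved
section. Condensed, the new ordering looks like this (a sketch distilled
from the diff above, error handling trimmed; every function named is one
the hunk already touches):

	/* 1. Wait with no reservation held: other submitters, and killed
	 *    applications trying to terminate, can still make progress.
	 */
	abo = gem_to_amdgpu_bo(fb->obj[0]);
	r = reservation_object_wait_timeout_rcu(abo->tbo.resv, true, false,
						MAX_SCHEDULE_TIMEOUT);
	WARN_ON(r < 0);

	/* 2. Reserve only afterwards, and only for as long as it takes to
	 *    read the tiling flags.
	 */
	if (amdgpu_bo_reserve(abo, true) == 0) {
		amdgpu_bo_get_tiling_flags(abo, &tiling_flags);
		amdgpu_bo_unreserve(abo);
	}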
From patchwork Thu Apr 18 15:00:20 2019
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10907489
From: Andrey Grodzovsky
Subject: [PATCH v5 2/6] drm/amd/display: Use a reasonable timeout for
 framebuffer fence waits
Date: Thu, 18 Apr 2019 11:00:20 -0400
Message-ID: <1555599624-12285-2-git-send-email-andrey.grodzovsky@amd.com>
In-Reply-To: <1555599624-12285-1-git-send-email-andrey.grodzovsky@amd.com>
References: <1555599624-12285-1-git-send-email-andrey.grodzovsky@amd.com>
Cc: Nicholas.Kazlauskas@amd.com
Patch '5edb0c9b Fix deadlock with display during hanged ring recovery' was
accidentally removed during one of DAL's code merges.

v4: Update description.

Signed-off-by: Andrey Grodzovsky
Reviewed-by: Nicholas Kazlauskas
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index ad4f0e5..88e42ad 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -4816,11 +4816,16 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,

 		abo = gem_to_amdgpu_bo(fb->obj[0]);

-		/* Wait for all fences on this FB */
+		/*
+		 * Wait for all fences on this FB. Do limited wait to avoid
+		 * deadlock during GPU reset when this fence will not signal
+		 * but we hold reservation lock for the BO.
+		 */
 		r = reservation_object_wait_timeout_rcu(abo->tbo.resv, true,
 							false,
-							MAX_SCHEDULE_TIMEOUT);
-		WARN_ON(r < 0);
+							msecs_to_jiffies(5000));
+		if (unlikely(r <= 0))
+			DRM_ERROR("Waiting for fences timed out or interrupted!");

 		/*
 		 * TODO This might fail and hence better not used, wait
@@ -4829,10 +4834,8 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
 		 * blocking commit to as per framework helpers
 		 */
 		r = amdgpu_bo_reserve(abo, true);
-		if (unlikely(r != 0)) {
+		if (unlikely(r != 0))
 			DRM_ERROR("failed to reserve buffer before flip\n");
-			WARN_ON(1);
-		}

 		amdgpu_bo_get_tiling_flags(abo, &tiling_flags);
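Why the bound matters: during GPU reset a hung ring's fence may never
signal, and the code goes on to take the BO's reservation right after the
wait. A bounded wait keeps the commit path alive. A minimal sketch of the
return-value handling (the 5000 ms figure is this patch's choice, not a
framework constant):

	/*
	 * reservation_object_wait_timeout_rcu() returns > 0 on success,
	 * 0 on timeout, and < 0 (-ERESTARTSYS) if interrupted.
	 */
	long r = reservation_object_wait_timeout_rcu(abo->tbo.resv,
						     true,  /* wait_all */
						     false, /* intr */
						     msecs_to_jiffies(5000));
	if (unlikely(r <= 0))
		DRM_ERROR("Waiting for fences timed out or interrupted!");
	/* Fall through: the flip proceeds after logging, as in the hunk. */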
From patchwork Thu Apr 18 15:00:21 2019
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10907491
From: Andrey Grodzovsky
Subject: [PATCH v5 3/6] drm/scheduler: rework job destruction
Date: Thu, 18 Apr 2019 11:00:21 -0400
Message-ID: <1555599624-12285-3-git-send-email-andrey.grodzovsky@amd.com>
In-Reply-To: <1555599624-12285-1-git-send-email-andrey.grodzovsky@amd.com>
References: <1555599624-12285-1-git-send-email-andrey.grodzovsky@amd.com>
Cc: Nicholas.Kazlauskas@amd.com, Christian König
From: Christian König

We now destroy finished jobs from the worker thread to make sure that
we never destroy a job currently in timeout processing. This avoids
holding the lock around the ring mirror list in drm_sched_stop, which
should solve a deadlock reported by a user.

v2: Remove unused variable.
v4: Move guilty job free into sched code.
v5: Move the sched->hw_rq_count increment to drm_sched_start to account
for the counter decrement in drm_sched_stop, which happens even when we
don't call resubmit jobs because the guilty job did signal.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109692

Signed-off-by: Christian König
Signed-off-by: Andrey Grodzovsky
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |   9 +-
 drivers/gpu/drm/etnaviv/etnaviv_dump.c     |   4 -
 drivers/gpu/drm/etnaviv/etnaviv_sched.c    |   2 +-
 drivers/gpu/drm/lima/lima_sched.c          |   2 +-
 drivers/gpu/drm/panfrost/panfrost_job.c    |   2 +-
 drivers/gpu/drm/scheduler/sched_main.c     | 159 +++++++++++++++++------------
 drivers/gpu/drm/v3d/v3d_sched.c            |   2 +-
 include/drm/gpu_scheduler.h                |   6 +-
 8 files changed, 102 insertions(+), 84 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 7cee269..a0e165c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3334,7 +3334,7 @@ static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
 		if (!ring || !ring->sched.thread)
 			continue;

-		drm_sched_stop(&ring->sched);
+		drm_sched_stop(&ring->sched, &job->base);

 		/* after all hw jobs are reset, hw fence is meaningless, so force_completion */
 		amdgpu_fence_driver_force_completion(ring);
@@ -3343,8 +3343,6 @@ static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
 	if(job)
 		drm_sched_increase_karma(&job->base);

-
-
 	if (!amdgpu_sriov_vf(adev)) {

 		if (!need_full_reset)
@@ -3482,8 +3480,7 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,
 	return r;
 }

-static void amdgpu_device_post_asic_reset(struct amdgpu_device *adev,
-					  struct amdgpu_job *job)
+static void amdgpu_device_post_asic_reset(struct amdgpu_device *adev)
 {
 	int i;

@@ -3623,7 +3620,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,

 	/* Post ASIC reset for all devs .*/
 	list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
-		amdgpu_device_post_asic_reset(tmp_adev, tmp_adev == adev ? job : NULL);
+		amdgpu_device_post_asic_reset(tmp_adev);

 		if (r) {
 			/* bad news, how to tell it to userspace ? */
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_dump.c b/drivers/gpu/drm/etnaviv/etnaviv_dump.c
index 33854c9..5778d9c 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_dump.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_dump.c
@@ -135,13 +135,11 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
 		    mmu_size + gpu->buffer.size;

 	/* Add in the active command buffers */
-	spin_lock_irqsave(&gpu->sched.job_list_lock, flags);
 	list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) {
 		submit = to_etnaviv_submit(s_job);
 		file_size += submit->cmdbuf.size;
 		n_obj++;
 	}
-	spin_unlock_irqrestore(&gpu->sched.job_list_lock, flags);

 	/* Add in the active buffer objects */
 	list_for_each_entry(vram, &gpu->mmu->mappings, mmu_node) {
@@ -183,14 +181,12 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
 			      gpu->buffer.size,
 			      etnaviv_cmdbuf_get_va(&gpu->buffer));

-	spin_lock_irqsave(&gpu->sched.job_list_lock, flags);
 	list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) {
 		submit = to_etnaviv_submit(s_job);
 		etnaviv_core_dump_mem(&iter, ETDUMP_BUF_CMD,
 				      submit->cmdbuf.vaddr,
 				      submit->cmdbuf.size,
 				      etnaviv_cmdbuf_get_va(&submit->cmdbuf));
 	}
-	spin_unlock_irqrestore(&gpu->sched.job_list_lock, flags);

 	/* Reserve space for the bomap */
 	if (n_bomap_pages) {
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index 6d24fea..a813c82 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -109,7 +109,7 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
 	}

 	/* block scheduler */
-	drm_sched_stop(&gpu->sched);
+	drm_sched_stop(&gpu->sched, sched_job);

 	if(sched_job)
 		drm_sched_increase_karma(sched_job);
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index 97bd9c1..df98931 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -300,7 +300,7 @@ static struct dma_fence *lima_sched_run_job(struct drm_sched_job *job)
 static void lima_sched_handle_error_task(struct lima_sched_pipe *pipe,
 					 struct lima_sched_task *task)
 {
-	drm_sched_stop(&pipe->base);
+	drm_sched_stop(&pipe->base, &task->base);

 	if (task)
 		drm_sched_increase_karma(&task->base);
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 0a7ed04..c6336b7 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -385,7 +385,7 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
 		sched_job);

 	for (i = 0; i < NUM_JOB_SLOTS; i++)
-		drm_sched_stop(&pfdev->js->queue[i].sched);
+		drm_sched_stop(&pfdev->js->queue[i].sched, sched_job);

 	if (sched_job)
 		drm_sched_increase_karma(sched_job);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 19fc601..7816de7 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -265,32 +265,6 @@ void drm_sched_resume_timeout(struct drm_gpu_scheduler *sched,
 }
 EXPORT_SYMBOL(drm_sched_resume_timeout);

-/* job_finish is called after hw fence signaled
- */
-static void drm_sched_job_finish(struct work_struct *work)
-{
-	struct drm_sched_job *s_job = container_of(work, struct drm_sched_job,
-						   finish_work);
-	struct drm_gpu_scheduler *sched = s_job->sched;
-	unsigned long flags;
-
-	/*
-	 * Canceling the timeout without removing our job from the ring mirror
-	 * list is safe, as we will only end up in this worker if our jobs
-	 * finished fence has been signaled. So even if some another worker
-	 * manages to find this job as the next job in the list, the fence
-	 * signaled check below will prevent the timeout to be restarted.
-	 */
-	cancel_delayed_work_sync(&sched->work_tdr);
-
-	spin_lock_irqsave(&sched->job_list_lock, flags);
-	/* queue TDR for next job */
-	drm_sched_start_timeout(sched);
-	spin_unlock_irqrestore(&sched->job_list_lock, flags);
-
-	sched->ops->free_job(s_job);
-}
-
 static void drm_sched_job_begin(struct drm_sched_job *s_job)
 {
 	struct drm_gpu_scheduler *sched = s_job->sched;
@@ -315,6 +289,13 @@ static void drm_sched_job_timedout(struct work_struct *work)
 	if (job)
 		job->sched->ops->timedout_job(job);

+	/*
+	 * Guilty job did complete and hence needs to be manually removed
+	 * See drm_sched_stop doc.
+	 */
+	if (list_empty(&job->node))
+		job->sched->ops->free_job(job);
+
 	spin_lock_irqsave(&sched->job_list_lock, flags);
 	drm_sched_start_timeout(sched);
 	spin_unlock_irqrestore(&sched->job_list_lock, flags);
@@ -371,23 +352,26 @@ EXPORT_SYMBOL(drm_sched_increase_karma);
  * @sched: scheduler instance
  * @bad: bad scheduler job
  *
+ * Stop the scheduler and also removes and frees all completed jobs.
+ * Note: bad job will not be freed as it might be used later and so it's
+ * callers responsibility to release it manually if it's not part of the
+ * mirror list any more.
+ *
  */
-void drm_sched_stop(struct drm_gpu_scheduler *sched)
+void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 {
-	struct drm_sched_job *s_job;
+	struct drm_sched_job *s_job, *tmp;
 	unsigned long flags;
-	struct dma_fence *last_fence =  NULL;

 	kthread_park(sched->thread);

 	/*
-	 * Verify all the signaled jobs in mirror list are removed from the ring
-	 * by waiting for the latest job to enter the list. This should insure that
-	 * also all the previous jobs that were in flight also already singaled
-	 * and removed from the list.
+	 * Iterate the job list from later to earlier one and either deactive
+	 * their HW callbacks or remove them from mirror list if they already
+	 * signaled.
+	 * This iteration is thread safe as sched thread is stopped.
 	 */
-	spin_lock_irqsave(&sched->job_list_lock, flags);
-	list_for_each_entry_reverse(s_job, &sched->ring_mirror_list, node) {
+	list_for_each_entry_safe_reverse(s_job, tmp, &sched->ring_mirror_list, node) {
 		if (s_job->s_fence->parent &&
 		    dma_fence_remove_callback(s_job->s_fence->parent,
 					      &s_job->cb)) {
@@ -395,16 +379,30 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched)
 			s_job->s_fence->parent = NULL;
 			atomic_dec(&sched->hw_rq_count);
 		} else {
-			last_fence = dma_fence_get(&s_job->s_fence->finished);
-			break;
+			/*
+			 * remove job from ring_mirror_list.
+			 * Locking here is for concurrent resume timeout
+			 */
+			spin_lock_irqsave(&sched->job_list_lock, flags);
+			list_del_init(&s_job->node);
+			spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
+			/*
+			 * Wait for job's HW fence callback to finish using s_job
+			 * before releasing it.
+			 *
+			 * Job is still alive so fence refcount at least 1
+			 */
+			dma_fence_wait(&s_job->s_fence->finished, false);
+
+			/*
+			 * We must keep bad job alive for later use during
+			 * recovery by some of the drivers
+			 */
+			if (bad != s_job)
+				sched->ops->free_job(s_job);
 		}
 	}
-	spin_unlock_irqrestore(&sched->job_list_lock, flags);
-
-	if (last_fence) {
-		dma_fence_wait(last_fence, false);
-		dma_fence_put(last_fence);
-	}
 }
 EXPORT_SYMBOL(drm_sched_stop);

@@ -418,21 +416,22 @@ EXPORT_SYMBOL(drm_sched_stop);
 void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 {
 	struct drm_sched_job *s_job, *tmp;
+	unsigned long flags;
 	int r;

-	if (!full_recovery)
-		goto unpark;
-
 	/*
 	 * Locking the list is not required here as the sched thread is parked
-	 * so no new jobs are being pushed in to HW and in drm_sched_stop we
-	 * flushed all the jobs who were still in mirror list but who already
-	 * signaled and removed them self from the list. Also concurrent
+	 * so no new jobs are being inserted or removed. Also concurrent
 	 * GPU recovers can't run in parallel.
 	 */
 	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
 		struct dma_fence *fence = s_job->s_fence->parent;

+		atomic_inc(&sched->hw_rq_count);
+
+		if (!full_recovery)
+			continue;
+
 		if (fence) {
 			r = dma_fence_add_callback(fence, &s_job->cb,
 						   drm_sched_process_job);
@@ -445,9 +444,12 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 			drm_sched_process_job(NULL, &s_job->cb);
 	}

-	drm_sched_start_timeout(sched);
+	if (full_recovery) {
+		spin_lock_irqsave(&sched->job_list_lock, flags);
+		drm_sched_start_timeout(sched);
+		spin_unlock_irqrestore(&sched->job_list_lock, flags);
+	}

-unpark:
 	kthread_unpark(sched->thread);
 }
 EXPORT_SYMBOL(drm_sched_start);
@@ -464,7 +466,6 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
 	uint64_t guilty_context;
 	bool found_guilty = false;

-	/*TODO DO we need spinlock here ? */
 	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
 		struct drm_sched_fence *s_fence = s_job->s_fence;

@@ -477,7 +478,6 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
 			dma_fence_set_error(&s_fence->finished, -ECANCELED);

 		s_job->s_fence->parent = sched->ops->run_job(s_job);
-		atomic_inc(&sched->hw_rq_count);
 	}
 }
 EXPORT_SYMBOL(drm_sched_resubmit_jobs);
@@ -514,7 +514,6 @@ int drm_sched_job_init(struct drm_sched_job *job,
 		return -ENOMEM;
 	job->id = atomic64_inc_return(&sched->job_id_count);

-	INIT_WORK(&job->finish_work, drm_sched_job_finish);
 	INIT_LIST_HEAD(&job->node);

 	return 0;
@@ -597,24 +596,53 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
 	struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, cb);
 	struct drm_sched_fence *s_fence = s_job->s_fence;
 	struct drm_gpu_scheduler *sched = s_fence->sched;
-	unsigned long flags;
-
-	cancel_delayed_work(&sched->work_tdr);

 	atomic_dec(&sched->hw_rq_count);
 	atomic_dec(&sched->num_jobs);

-	spin_lock_irqsave(&sched->job_list_lock, flags);
-	/* remove job from ring_mirror_list */
-	list_del_init(&s_job->node);
-	spin_unlock_irqrestore(&sched->job_list_lock, flags);
+	trace_drm_sched_process_job(s_fence);

 	drm_sched_fence_finished(s_fence);
-
-	trace_drm_sched_process_job(s_fence);
 	wake_up_interruptible(&sched->wake_up_worker);
+}
+
+/**
+ * drm_sched_cleanup_jobs - destroy finished jobs
+ *
+ * @sched: scheduler instance
+ *
+ * Remove all finished jobs from the mirror list and destroy them.
+ */
+static void drm_sched_cleanup_jobs(struct drm_gpu_scheduler *sched)
+{
+	unsigned long flags;
+
+	/* Don't destroy jobs while the timeout worker is running */
+	if (!cancel_delayed_work(&sched->work_tdr))
+		return;
+
+
+	while (!list_empty(&sched->ring_mirror_list)) {
+		struct drm_sched_job *job;
+
+		job = list_first_entry(&sched->ring_mirror_list,
+				       struct drm_sched_job, node);
+		if (!dma_fence_is_signaled(&job->s_fence->finished))
+			break;
+
+		spin_lock_irqsave(&sched->job_list_lock, flags);
+		/* remove job from ring_mirror_list */
+		list_del_init(&job->node);
+		spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
+		sched->ops->free_job(job);
+	}
+
+	/* queue timeout for next job */
+	spin_lock_irqsave(&sched->job_list_lock, flags);
+	drm_sched_start_timeout(sched);
+	spin_unlock_irqrestore(&sched->job_list_lock, flags);
-
-	schedule_work(&s_job->finish_work);
 }

 /**
@@ -656,9 +684,10 @@ static int drm_sched_main(void *param)
 		struct dma_fence *fence;

 		wait_event_interruptible(sched->wake_up_worker,
+					 (drm_sched_cleanup_jobs(sched),
 					 (!drm_sched_blocked(sched) &&
 					  (entity = drm_sched_select_entity(sched))) ||
-					 kthread_should_stop());
+					 kthread_should_stop()));

 		if (!entity)
 			continue;
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
index e740f3b..1a4abe7 100644
--- a/drivers/gpu/drm/v3d/v3d_sched.c
+++ b/drivers/gpu/drm/v3d/v3d_sched.c
@@ -232,7 +232,7 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)

 	/* block scheduler */
 	for (q = 0; q < V3D_MAX_QUEUES; q++)
-		drm_sched_stop(&v3d->queue[q].sched);
+		drm_sched_stop(&v3d->queue[q].sched, sched_job);

 	if (sched_job)
 		drm_sched_increase_karma(sched_job);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 0daca4d..9ee0f27 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -167,9 +167,6 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
  * @sched: the scheduler instance on which this job is scheduled.
  * @s_fence: contains the fences for the scheduling of job.
  * @finish_cb: the callback for the finished fence.
- * @finish_work: schedules the function @drm_sched_job_finish once the job has
- *               finished to remove the job from the
- *               @drm_gpu_scheduler.ring_mirror_list.
  * @node: used to append this struct to the @drm_gpu_scheduler.ring_mirror_list.
  * @id: a unique id assigned to each job scheduled on the scheduler.
  * @karma: increment on every hang caused by this job. If this exceeds the hang
@@ -188,7 +185,6 @@ struct drm_sched_job {
 	struct drm_gpu_scheduler	*sched;
 	struct drm_sched_fence		*s_fence;
 	struct dma_fence_cb		finish_cb;
-	struct work_struct		finish_work;
 	struct list_head		node;
 	uint64_t			id;
 	atomic_t			karma;
@@ -296,7 +292,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
 		       void *owner);
 void drm_sched_job_cleanup(struct drm_sched_job *job);
 void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
-void drm_sched_stop(struct drm_gpu_scheduler *sched);
+void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad);
 void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery);
 void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched);
 void drm_sched_increase_karma(struct drm_sched_job *bad);
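For driver authors, the reworked recovery sequence visible across the call
sites above boils down to the following pattern (a condensed usage sketch,
not new scheduler code; every function named already appears in this
patch):

	/* In the driver's timeout handler: */
	drm_sched_stop(&sched, bad_job);	/* park thread, free finished
						 * jobs; bad_job itself is
						 * deliberately kept alive */
	if (bad_job)
		drm_sched_increase_karma(bad_job);

	/* ... driver-specific HW reset ... */

	drm_sched_resubmit_jobs(&sched);	/* re-run remaining jobs */
	drm_sched_start(&sched, true);		/* re-add fence callbacks,
						 * re-arm the timeout, unpark */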
From patchwork Thu Apr 18 15:00:22 2019
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10907493
From: Andrey Grodzovsky
Subject: [PATCH v5 4/6] drm/sched: Keep s_fence->parent pointer
Date: Thu, 18 Apr 2019 11:00:22 -0400
Message-ID: <1555599624-12285-4-git-send-email-andrey.grodzovsky@amd.com>
In-Reply-To: <1555599624-12285-1-git-send-email-andrey.grodzovsky@amd.com>
References: <1555599624-12285-1-git-send-email-andrey.grodzovsky@amd.com>
Cc: Nicholas.Kazlauskas@amd.com
Keep the parent fence pointer so that drivers can later check whether the
fence is signaled.

v2: Move parent fence put to resubmit jobs.

Signed-off-by: Andrey Grodzovsky
Reviewed-by: Christian König
---
 drivers/gpu/drm/scheduler/sched_main.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 7816de7..03e6bd8 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -375,8 +375,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 		if (s_job->s_fence->parent &&
 		    dma_fence_remove_callback(s_job->s_fence->parent,
 					      &s_job->cb)) {
-			dma_fence_put(s_job->s_fence->parent);
-			s_job->s_fence->parent = NULL;
 			atomic_dec(&sched->hw_rq_count);
 		} else {
 			/*
@@ -403,6 +401,14 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 			sched->ops->free_job(s_job);
 		}
 	}
+
+	/*
+	 * Stop pending timer in flight as we rearm it in drm_sched_start. This
+	 * avoids the pending timeout work in progress to fire right away after
+	 * this TDR finished and before the newly restarted jobs had a
+	 * chance to complete.
+	 */
+	cancel_delayed_work(&sched->work_tdr);
 }
 EXPORT_SYMBOL(drm_sched_stop);

@@ -477,6 +483,7 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
 		if (found_guilty && s_job->s_fence->scheduled.context == guilty_context)
 			dma_fence_set_error(&s_fence->finished, -ECANCELED);

+		dma_fence_put(s_job->s_fence->parent);
 		s_job->s_fence->parent = sched->ops->run_job(s_job);
 	}
 }
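The subtlety here is reference ownership: the old parent fence's reference
is now dropped in drm_sched_resubmit_jobs() rather than in drm_sched_stop(),
so between the two calls a driver can still inspect the preserved HW fence.
A sketch of what that enables (dma_fence_is_signaled() is the standard
fence helper; its use here is an illustration of intent, not part of this
patch):

	/* Between drm_sched_stop() and drm_sched_resubmit_jobs(), e.g.: */
	if (s_job->s_fence->parent &&
	    dma_fence_is_signaled(s_job->s_fence->parent)) {
		/* the job actually finished; no need to treat it as hung */
	}

	/* Later, resubmission swaps the reference while the scheduler
	 * thread is still parked:
	 */
	dma_fence_put(s_job->s_fence->parent);	     /* drop the old fence */
	s_job->s_fence->parent = sched->ops->run_job(s_job); /* take new */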
From patchwork Thu Apr 18 15:00:23 2019
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10907495
From: Andrey Grodzovsky
Subject: [PATCH v5 5/6] drm/scheduler: Add flag to hint the release of guilty job.
Date: Thu, 18 Apr 2019 11:00:23 -0400
Message-ID: <1555599624-12285-5-git-send-email-andrey.grodzovsky@amd.com>
In-Reply-To: <1555599624-12285-1-git-send-email-andrey.grodzovsky@amd.com>
References: <1555599624-12285-1-git-send-email-andrey.grodzovsky@amd.com>
Cc: Nicholas.Kazlauskas@amd.com
Problem:
The scheduler thread's cleanup function races against the timeout (TO)
handler and removes the guilty job from the mirror list, leaving no way
to tell whether the job was removed by the TO handler or by the scheduler
thread's cleanup function.

Fix:
Add a flag to the scheduler to hint to the TO handler that the guilty job
needs to be explicitly released.

Signed-off-by: Andrey Grodzovsky
---
 drivers/gpu/drm/scheduler/sched_main.c | 9 +++++++--
 include/drm/gpu_scheduler.h            | 2 ++
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 03e6bd8..f8f0e1c 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -293,8 +293,10 @@ static void drm_sched_job_timedout(struct work_struct *work)
 	 * Guilty job did complete and hence needs to be manually removed
 	 * See drm_sched_stop doc.
 	 */
-	if (list_empty(&job->node))
+	if (sched->free_guilty) {
 		job->sched->ops->free_job(job);
+		sched->free_guilty = false;
+	}

 	spin_lock_irqsave(&sched->job_list_lock, flags);
 	drm_sched_start_timeout(sched);
@@ -395,10 +397,13 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)

 			/*
 			 * We must keep bad job alive for later use during
-			 * recovery by some of the drivers
+			 * recovery by some of the drivers but leave a hint
+			 * that the guilty job must be released.
 			 */
 			if (bad != s_job)
 				sched->ops->free_job(s_job);
+			else
+				sched->free_guilty = true;
 		}
 	}

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 9ee0f27..fc0b421 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -259,6 +259,7 @@ struct drm_sched_backend_ops {
  *              guilty and it will be considered for scheduling further.
  * @num_jobs: the number of jobs in queue in the scheduler
  * @ready: marks if the underlying HW is ready to work
+ * @free_guilty: A hint to the timeout handler to free the guilty job.
  *
  * One scheduler is implemented for each hardware ring.
  */
@@ -279,6 +280,7 @@ struct drm_gpu_scheduler {
 	int				hang_limit;
 	atomic_t			num_jobs;
 	bool				ready;
+	bool				free_guilty;
 };

 int drm_sched_init(struct drm_gpu_scheduler *sched,
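The two hunks cooperate as a simple handshake; read together (a condensed
sketch of the flow, using only the code added above):

	/* drm_sched_stop(): keep the guilty job but record the debt. */
	if (bad != s_job)
		sched->ops->free_job(s_job);
	else
		sched->free_guilty = true;	/* someone must free it later */

	/* drm_sched_job_timedout(): settle the debt exactly once. */
	if (sched->free_guilty) {
		job->sched->ops->free_job(job);
		sched->free_guilty = false;
	}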
Date: Thu, 18 Apr 2019 11:00:24 -0400
Message-ID: <1555599624-12285-6-git-send-email-andrey.grodzovsky@amd.com>
In-Reply-To: <1555599624-12285-1-git-send-email-andrey.grodzovsky@amd.com>
References: <1555599624-12285-1-git-send-email-andrey.grodzovsky@amd.com>
Cc: Nicholas.Kazlauskas@amd.com
Also reject TDRs if another one is already running.

v2: Stop all schedulers across the device and the entire XGMI hive before
force signaling HW fences. Avoid passing job_signaled to helper functions
to keep all the decision making about skipping HW reset in one place.

v3: Fix SW scheduler hang after non-HW reset. sched.hw_rq_count has to be
balanced against its decrement in drm_sched_stop in the non-HW-reset case.

v4: rebase

v5: Revert v3 as we now do it in the scheduler code.

Signed-off-by: Andrey Grodzovsky
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 143 +++++++++++++++++++----------
 1 file changed, 95 insertions(+), 48 deletions(-)
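Before the diff itself, the reworked control flow in rough outline. This is
a toy model (made-up toy_* helpers; the real code operates on amdgpu rings,
the XGMI hive device list and the drm_sched_* APIs), meant only to show the
ordering the commit message describes:

	#include <stdbool.h>
	#include <stddef.h>

	struct toy_ring { bool running; bool resubmitted; };

	static bool toy_guilty_fence_signaled(void) { return false; } /* stand-in */
	static void toy_hw_reset(void) { }                            /* stand-in */

	static void toy_recover(struct toy_ring *rings, size_t n)
	{
		bool job_signaled;
		size_t i;

		/* 1) Stop every scheduler first, across the whole device/hive,
		 *    before any HW fence gets force signaled. */
		for (i = 0; i < n; i++)
			rings[i].running = false;

		/* 2) Only now is checking the guilty job's fence meaningful. */
		job_signaled = toy_guilty_fence_signaled();

		/* 3) Skip the HW reset if the guilty job already finished. */
		if (!job_signaled)
			toy_hw_reset();

		/* 4) Restart schedulers; resubmitting jobs only makes sense
		 *    if the HW was actually reset. */
		for (i = 0; i < n; i++) {
			if (!job_signaled)
				rings[i].resubmitted = true; /* drm_sched_resubmit_jobs() */
			rings[i].running = true;             /* drm_sched_start()         */
		}
	}

Keeping the job_signaled decision inside this one function is what v2 means
by not passing it down into the helpers.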
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index a0e165c..85f8792 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3334,8 +3334,6 @@ static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
 		if (!ring || !ring->sched.thread)
 			continue;
 
-		drm_sched_stop(&ring->sched, &job->base);
-
 		/* after all hw jobs are reset, hw fence is meaningless, so force_completion */
 		amdgpu_fence_driver_force_completion(ring);
 	}
@@ -3343,6 +3341,7 @@ static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
 	if(job)
 		drm_sched_increase_karma(&job->base);
 
+	/* Don't suspend on bare metal if we are not going to HW reset the ASIC */
 	if (!amdgpu_sriov_vf(adev)) {
 
 		if (!need_full_reset)
@@ -3480,37 +3479,21 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,
 	return r;
 }
 
-static void amdgpu_device_post_asic_reset(struct amdgpu_device *adev)
+static bool amdgpu_device_lock_adev(struct amdgpu_device *adev, bool trylock)
 {
-	int i;
-
-	for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
-		struct amdgpu_ring *ring = adev->rings[i];
-
-		if (!ring || !ring->sched.thread)
-			continue;
-
-		if (!adev->asic_reset_res)
-			drm_sched_resubmit_jobs(&ring->sched);
+	if (trylock) {
+		if (!mutex_trylock(&adev->lock_reset))
+			return false;
+	} else
+		mutex_lock(&adev->lock_reset);
 
-		drm_sched_start(&ring->sched, !adev->asic_reset_res);
-	}
-
-	if (!amdgpu_device_has_dc_support(adev)) {
-		drm_helper_resume_force_mode(adev->ddev);
-	}
-
-	adev->asic_reset_res = 0;
-}
-
-static void amdgpu_device_lock_adev(struct amdgpu_device *adev)
-{
-	mutex_lock(&adev->lock_reset);
 	atomic_inc(&adev->gpu_reset_counter);
 	adev->in_gpu_reset = 1;
 	/* Block kfd: SRIOV would do it separately */
 	if (!amdgpu_sriov_vf(adev))
 		amdgpu_amdkfd_pre_reset(adev);
+
+	return true;
 }
 
 static void amdgpu_device_unlock_adev(struct amdgpu_device *adev)
@@ -3538,40 +3521,42 @@ static void amdgpu_device_unlock_adev(struct amdgpu_device *adev)
 int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 			      struct amdgpu_job *job)
 {
-	int r;
+	struct list_head device_list, *device_list_handle = NULL;
+	bool need_full_reset, job_signaled;
 	struct amdgpu_hive_info *hive = NULL;
-	bool need_full_reset = false;
 	struct amdgpu_device *tmp_adev = NULL;
-	struct list_head device_list, *device_list_handle = NULL;
+	int i, r = 0;
 
+	need_full_reset = job_signaled = false;
 	INIT_LIST_HEAD(&device_list);
 
 	dev_info(adev->dev, "GPU reset begin!\n");
 
+	hive = amdgpu_get_xgmi_hive(adev, false);
+
 	/*
-	 * In case of XGMI hive disallow concurrent resets to be triggered
-	 * by different nodes. No point also since the one node already executing
-	 * reset will also reset all the other nodes in the hive.
+	 * Here we trylock to avoid a chain of resets executing, either
+	 * triggered by jobs on different adevs in the XGMI hive or by jobs on
+	 * different schedulers for the same device, while this TO handler is
+	 * running. We always reset all schedulers for a device and all devices
+	 * for the XGMI hive, so that should take care of them too.
 	 */
-	hive = amdgpu_get_xgmi_hive(adev, 0);
-	if (hive && adev->gmc.xgmi.num_physical_nodes > 1 &&
-	    !mutex_trylock(&hive->reset_lock))
+
+	if (hive && !mutex_trylock(&hive->reset_lock)) {
+		DRM_INFO("Bailing on TDR for s_job:%llx, hive: %llx as another already in progress",
+			 job->base.id, hive->hive_id);
 		return 0;
+	}
 
 	/* Start with adev pre asic reset first for soft reset check. */
-	amdgpu_device_lock_adev(adev);
-	r = amdgpu_device_pre_asic_reset(adev,
-					 job,
-					 &need_full_reset);
-	if (r) {
-		/*TODO Should we stop ?*/
-		DRM_ERROR("GPU pre asic reset failed with err, %d for drm dev, %s ",
-			  r, adev->ddev->unique);
-		adev->asic_reset_res = r;
+	if (!amdgpu_device_lock_adev(adev, !hive)) {
+		DRM_INFO("Bailing on TDR for s_job:%llx, as another already in progress",
+			 job->base.id);
+		return 0;
 	}
 
 	/* Build list of devices to reset */
-	if (need_full_reset && adev->gmc.xgmi.num_physical_nodes > 1) {
+	if (adev->gmc.xgmi.num_physical_nodes > 1) {
 		if (!hive) {
 			amdgpu_device_unlock_adev(adev);
 			return -ENODEV;
@@ -3588,13 +3573,56 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 		device_list_handle = &device_list;
 	}
 
+	/* block all schedulers and reset given job's ring */
+	list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
+		for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
+			struct amdgpu_ring *ring = tmp_adev->rings[i];
+
+			if (!ring || !ring->sched.thread)
+				continue;
+
+			drm_sched_stop(&ring->sched, &job->base);
+		}
+	}
+
+	/*
+	 * Must check guilty signal here since after this point all old
+	 * HW fences are force signaled.
+	 *
+	 * job->base holds a reference to parent fence
+	 */
+	if (job && job->base.s_fence->parent &&
+	    dma_fence_is_signaled(job->base.s_fence->parent))
+		job_signaled = true;
+
+	if (!amdgpu_device_ip_need_full_reset(adev))
+		device_list_handle = &device_list;
+
+	if (job_signaled) {
+		dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
+		goto skip_hw_reset;
+	}
+
+	/* Guilty job will be freed after this */
+	r = amdgpu_device_pre_asic_reset(adev,
+					 job,
+					 &need_full_reset);
+	if (r) {
+		/*TODO Should we stop ?*/
+		DRM_ERROR("GPU pre asic reset failed with err, %d for drm dev, %s ",
+			  r, adev->ddev->unique);
+		adev->asic_reset_res = r;
+	}
+
 retry:	/* Rest of adevs pre asic reset from XGMI hive. */
 	list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
 
 		if (tmp_adev == adev)
 			continue;
 
-		amdgpu_device_lock_adev(tmp_adev);
+		amdgpu_device_lock_adev(tmp_adev, false);
 		r = amdgpu_device_pre_asic_reset(tmp_adev,
 						 NULL,
 						 &need_full_reset);
@@ -3618,9 +3646,28 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 			goto retry;
 	}
 
+skip_hw_reset:
+
 	/* Post ASIC reset for all devs .*/
 	list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
-		amdgpu_device_post_asic_reset(tmp_adev);
+		for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
+			struct amdgpu_ring *ring = tmp_adev->rings[i];
+
+			if (!ring || !ring->sched.thread)
+				continue;
+
+			/* No point to resubmit jobs if we didn't HW reset */
+			if (!tmp_adev->asic_reset_res && !job_signaled)
+				drm_sched_resubmit_jobs(&ring->sched);
+
+			drm_sched_start(&ring->sched, !tmp_adev->asic_reset_res);
+		}
+
+		if (!amdgpu_device_has_dc_support(tmp_adev) && !job_signaled) {
+			drm_helper_resume_force_mode(tmp_adev->ddev);
+		}
+
+		tmp_adev->asic_reset_res = 0;
 
 		if (r) {
 			/* bad news, how to tell it to userspace ? */
@@ -3633,7 +3680,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 		amdgpu_device_unlock_adev(tmp_adev);
 	}
 
-	if (hive && adev->gmc.xgmi.num_physical_nodes > 1)
+	if (hive)
 		mutex_unlock(&hive->reset_lock);
 
 	if (r)