From patchwork Mon Oct 22 20:46:53 2018
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10652509
From: Andrey Grodzovsky
Subject: [PATCH v3 1/2] drm/sched: Add boolean to mark if sched is ready to work v2
Date: Mon, 22 Oct 2018 16:46:53 -0400
Message-ID: <1540241214-21077-1-git-send-email-andrey.grodzovsky@amd.com>
X-Mailer: git-send-email 2.7.4
Cc: Alexander.Deucher@amd.com, christian.koenig@amd.com

Problem: A particular scheduler may become unusable (its underlying HW has
failed) after some event (e.g. a GPU reset). If it is later chosen by the
get-free-sched policy, a command will fail to be submitted.

Fix: Add a driver-specific callback to report the sched status, so an rq with
a bad sched can be avoided in favor of a working one, or none at all, in which
case job init will fail.

v2: Switch from a driver callback to a flag in the scheduler.

(A usage sketch of the new interface follows the diff below.)

Signed-off-by: Andrey Grodzovsky
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  2 +-
 drivers/gpu/drm/etnaviv/etnaviv_sched.c   |  2 +-
 drivers/gpu/drm/scheduler/sched_entity.c  |  9 ++++++++-
 drivers/gpu/drm/scheduler/sched_main.c    | 10 +++++++++-
 drivers/gpu/drm/v3d/v3d_sched.c           |  4 ++--
 include/drm/gpu_scheduler.h               |  5 ++++-
 6 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 5448cf2..bf845b0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -450,7 +450,7 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
 	r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
 			   num_hw_submission, amdgpu_job_hang_limit,
-			   timeout, ring->name);
+			   timeout, ring->name, false);
 	if (r) {
 		DRM_ERROR("Failed to create scheduler on ring %s.\n",
 			  ring->name);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index f8c5f1e..9dca347 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -178,7 +178,7 @@ int etnaviv_sched_init(struct etnaviv_gpu *gpu)
 	ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
 			     etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
-			     msecs_to_jiffies(500), dev_name(gpu->dev));
+			     msecs_to_jiffies(500), dev_name(gpu->dev), true);
 	if (ret)
 		return ret;
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 3e22a54..ba54c30 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -130,7 +130,14 @@ drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
 	int i;
 
 	for (i = 0; i < entity->num_rq_list; ++i) {
-		num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
+		struct drm_gpu_scheduler *sched = entity->rq_list[i]->sched;
+
+		if (!entity->rq_list[i]->sched->ready) {
+			DRM_WARN("sched%s is not ready, skipping", sched->name);
+			continue;
+		}
+
+		num_jobs = atomic_read(&sched->num_jobs);
 		if (num_jobs < min_jobs) {
 			min_jobs = num_jobs;
 			rq = entity->rq_list[i];
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 63b997d..772adec 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -420,6 +420,9 @@ int drm_sched_job_init(struct drm_sched_job *job,
 	struct drm_gpu_scheduler *sched;
 
 	drm_sched_entity_select_rq(entity);
+	if (!entity->rq)
+		return -ENOENT;
+
 	sched = entity->rq->sched;
 
 	job->sched = sched;
@@ -598,6 +601,7 @@ static int drm_sched_main(void *param)
  * @hang_limit: number of times to allow a job to hang before dropping it
  * @timeout: timeout value in jiffies for the scheduler
  * @name: name used for debugging
+ * @ready: marks if the underlying HW is ready to work
  *
  * Return 0 on success, otherwise error code.
  */
@@ -606,7 +610,8 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 		   unsigned hw_submission,
 		   unsigned hang_limit,
 		   long timeout,
-		   const char *name)
+		   const char *name,
+		   bool ready)
 {
 	int i;
 	sched->ops = ops;
@@ -633,6 +638,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 		return PTR_ERR(sched->thread);
 	}
 
+	sched->ready = ready;
 	return 0;
 }
 EXPORT_SYMBOL(drm_sched_init);
@@ -648,5 +654,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
 {
 	if (sched->thread)
 		kthread_stop(sched->thread);
+
+	sched->ready = false;
 }
 EXPORT_SYMBOL(drm_sched_fini);
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
index 80b641f..7cedb5f 100644
--- a/drivers/gpu/drm/v3d/v3d_sched.c
+++ b/drivers/gpu/drm/v3d/v3d_sched.c
@@ -212,7 +212,7 @@ v3d_sched_init(struct v3d_dev *v3d)
 			     &v3d_sched_ops,
 			     hw_jobs_limit, job_hang_limit,
 			     msecs_to_jiffies(hang_limit_ms),
-			     "v3d_bin");
+			     "v3d_bin", true);
 	if (ret) {
 		dev_err(v3d->dev, "Failed to create bin scheduler: %d.", ret);
 		return ret;
@@ -222,7 +222,7 @@ v3d_sched_init(struct v3d_dev *v3d)
 			     &v3d_sched_ops,
 			     hw_jobs_limit, job_hang_limit,
 			     msecs_to_jiffies(hang_limit_ms),
-			     "v3d_render");
+			     "v3d_render", true);
 	if (ret) {
 		dev_err(v3d->dev, "Failed to create render scheduler: %d.",
 			ret);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 0684dcd..037caea 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -264,6 +264,7 @@ struct drm_sched_backend_ops {
  * @hang_limit: once the hangs by a job crosses this limit then it is marked
  *              guilty and it will be considered for scheduling further.
  * @num_jobs: the number of jobs in queue in the scheduler
+ * @ready: marks if the underlying HW is ready to work
  *
  * One scheduler is implemented for each hardware ring.
  */
@@ -283,12 +284,14 @@ struct drm_gpu_scheduler {
 	spinlock_t			job_list_lock;
 	int				hang_limit;
 	atomic_t			num_jobs;
+	bool				ready;
 };
 
 int drm_sched_init(struct drm_gpu_scheduler *sched,
 		   const struct drm_sched_backend_ops *ops,
 		   uint32_t hw_submission, unsigned hang_limit, long timeout,
-		   const char *name);
+		   const char *name,
+		   bool ready);
 void drm_sched_fini(struct drm_gpu_scheduler *sched);
 int drm_sched_job_init(struct drm_sched_job *job,
 		       struct drm_sched_entity *entity,
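The following is an illustrative sketch only, not part of the patch, of how a
driver could use the extended drm_sched_init() interface above: the scheduler
is created not ready and is marked ready only once its ring has passed a test,
so the load balancing in drm_sched_entity_get_free_sched() will skip it and
drm_sched_job_init() can fail with -ENOENT when no working scheduler remains.
All foo_* names are hypothetical placeholders; only drm_sched_init() and
sched->ready come from this patch.

/* Illustrative sketch, hypothetical foo_* driver; not part of the patch. */
static int foo_ring_sched_init(struct foo_ring *ring, long timeout)
{
	int r;

	/* Create the scheduler as not ready: the HW ring is still untested. */
	r = drm_sched_init(&ring->sched, &foo_sched_ops,
			   foo_hw_submission_limit, foo_job_hang_limit,
			   timeout, ring->name, false);
	if (r)
		return r;

	/*
	 * Mark the scheduler usable only after the ring test has passed, so
	 * drm_sched_entity_get_free_sched() will consider it again.
	 */
	ring->sched.ready = (foo_ring_test(ring) == 0);
	return 0;
}

Submission paths that call drm_sched_job_init() then need to handle the new
-ENOENT return, which signals that none of the entity's run queues has a ready
scheduler.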
From patchwork Mon Oct 22 20:46:54 2018
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10652513
From: Andrey Grodzovsky
Subject: [PATCH v3 2/2] drm/amdgpu: Retire amdgpu_ring.ready flag v3
Date: Mon, 22 Oct 2018 16:46:54 -0400
Message-ID: <1540241214-21077-2-git-send-email-andrey.grodzovsky@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1540241214-21077-1-git-send-email-andrey.grodzovsky@amd.com>
References: <1540241214-21077-1-git-send-email-andrey.grodzovsky@amd.com>
Cc: Alexander.Deucher@amd.com, christian.koenig@amd.com

Start using drm_gpu_scheduler.ready instead of the amdgpu_ring.ready flag.

v3: Add a helper function that runs the ring test and sets the sched.ready
flag accordingly; drop the explicit sched.ready assignments from the
IP-specific files.

(A before/after sketch of this pattern follows the diff below.)

Signed-off-by: Andrey Grodzovsky
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c        |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c            |  6 ++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c           | 18 +++++++-------
 drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c            |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c          | 13 +++++++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h          |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c           |  2 +-
 drivers/gpu/drm/amd/amdgpu/cik_sdma.c             | 12 ++++-----
 drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c             | 16 ++++--------
 drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c             | 16 ++++--------
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c             | 29 +++++++++------------
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c             | 30 +++++++++--------------
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c             |  2 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c            | 12 ++++-----
 drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c            | 12 ++++-----
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c            | 18 ++++++--------
 drivers/gpu/drm/amd/amdgpu/si_dma.c               | 10 +++-----
 drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c             |  9 +++----
 drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c             |  9 +++----
 drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c             | 16 ++++--------
 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c             | 16 ++++--------
 drivers/gpu/drm/amd/amdgpu/vce_v2_0.c             |  6 +----
 drivers/gpu/drm/amd/amdgpu/vce_v3_0.c             |  7 +-----
 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c             |  9 ++-----
 drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c             | 24 ++++++------------
 26 files changed, 118 insertions(+), 183 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
index c31a884..eaa58bb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
@@ -144,7 +144,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
 				KGD_MAX_QUEUES);
 
 		/* remove the KIQ bit as well */
-		if (adev->gfx.kiq.ring.ready)
+		if (adev->gfx.kiq.ring.sched.ready)
 			clear_bit(amdgpu_gfx_queue_to_bit(adev,
 				  adev->gfx.kiq.ring.me - 1,
 				  adev->gfx.kiq.ring.pipe,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c index 42cb4c4..f7819a5 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c @@ -876,7 +876,7 @@ static int invalidate_tlbs(struct kgd_dev *kgd, uint16_t pasid) if (adev->in_gpu_reset) return -EIO; - if (ring->ready) + if (ring->sched.ready) return invalidate_tlbs_with_kiq(adev, pasid); for (vmid = 0; vmid < 16; vmid++) { diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c index b8963b7..fc74f40a 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c @@ -146,7 +146,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs, fence_ctx = 0; } - if (!ring->ready) { + if (!ring->sched.ready) { dev_err(adev->dev, "couldn't schedule ib on ring <%s>\n", ring->name); return -EINVAL; } @@ -351,7 +351,7 @@ int amdgpu_ib_ring_tests(struct amdgpu_device *adev) struct amdgpu_ring *ring = adev->rings[i]; long tmo; - if (!ring || !ring->ready) + if (!ring || !ring->sched.ready) continue; /* skip IB tests for KIQ in general for the below reasons: @@ -375,7 +375,7 @@ int amdgpu_ib_ring_tests(struct amdgpu_device *adev) r = amdgpu_ring_test_ib(ring, tmo); if (r) { - ring->ready = false; + ring->sched.ready = false; if (ring == &adev->gfx.gfx_ring[0]) { /* oh, oh, that's really bad */ diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c index 50ece76..25307a4 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c @@ -336,7 +336,7 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev, case AMDGPU_HW_IP_GFX: type = AMD_IP_BLOCK_TYPE_GFX; for (i = 0; i < adev->gfx.num_gfx_rings; i++) - if (adev->gfx.gfx_ring[i].ready) + if (adev->gfx.gfx_ring[i].sched.ready) ++num_rings; ib_start_alignment = 32; ib_size_alignment = 32; @@ -344,7 +344,7 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev, case AMDGPU_HW_IP_COMPUTE: type = AMD_IP_BLOCK_TYPE_GFX; for (i = 0; i < adev->gfx.num_compute_rings; i++) - if (adev->gfx.compute_ring[i].ready) + if (adev->gfx.compute_ring[i].sched.ready) ++num_rings; ib_start_alignment = 32; ib_size_alignment = 32; @@ -352,7 +352,7 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev, case AMDGPU_HW_IP_DMA: type = AMD_IP_BLOCK_TYPE_SDMA; for (i = 0; i < adev->sdma.num_instances; i++) - if (adev->sdma.instance[i].ring.ready) + if (adev->sdma.instance[i].ring.sched.ready) ++num_rings; ib_start_alignment = 256; ib_size_alignment = 4; @@ -363,7 +363,7 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev, if (adev->uvd.harvest_config & (1 << i)) continue; - if (adev->uvd.inst[i].ring.ready) + if (adev->uvd.inst[i].ring.sched.ready) ++num_rings; } ib_start_alignment = 64; @@ -372,7 +372,7 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev, case AMDGPU_HW_IP_VCE: type = AMD_IP_BLOCK_TYPE_VCE; for (i = 0; i < adev->vce.num_rings; i++) - if (adev->vce.ring[i].ready) + if (adev->vce.ring[i].sched.ready) ++num_rings; ib_start_alignment = 4; ib_size_alignment = 1; @@ -384,7 +384,7 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev, continue; for (j = 0; j < adev->uvd.num_enc_rings; j++) - if (adev->uvd.inst[i].ring_enc[j].ready) + if (adev->uvd.inst[i].ring_enc[j].sched.ready) ++num_rings; } ib_start_alignment = 64; @@ -392,7 +392,7 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev, break; case AMDGPU_HW_IP_VCN_DEC: type = AMD_IP_BLOCK_TYPE_VCN; - if 
(adev->vcn.ring_dec.ready) + if (adev->vcn.ring_dec.sched.ready) ++num_rings; ib_start_alignment = 16; ib_size_alignment = 16; @@ -400,14 +400,14 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev, case AMDGPU_HW_IP_VCN_ENC: type = AMD_IP_BLOCK_TYPE_VCN; for (i = 0; i < adev->vcn.num_enc_rings; i++) - if (adev->vcn.ring_enc[i].ready) + if (adev->vcn.ring_enc[i].sched.ready) ++num_rings; ib_start_alignment = 64; ib_size_alignment = 1; break; case AMDGPU_HW_IP_VCN_JPEG: type = AMD_IP_BLOCK_TYPE_VCN; - if (adev->vcn.ring_jpeg.ready) + if (adev->vcn.ring_jpeg.sched.ready) ++num_rings; ib_start_alignment = 16; ib_size_alignment = 16; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c index 59cc678..7235cd0 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c @@ -2129,7 +2129,7 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev) for (i = 0; i < AMDGPU_MAX_RINGS; i++) { struct amdgpu_ring *ring = adev->rings[i]; - if (ring && ring->ready) + if (ring && ring->sched.ready) amdgpu_fence_wait_empty(ring); } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c index b70e85e..ffdd016 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c @@ -338,7 +338,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring, */ void amdgpu_ring_fini(struct amdgpu_ring *ring) { - ring->ready = false; + ring->sched.ready = false; /* Not to finish a ring which is not initialized */ if (!(ring->adev) || !(ring->adev->rings[ring->idx])) @@ -500,3 +500,14 @@ static void amdgpu_debugfs_ring_fini(struct amdgpu_ring *ring) debugfs_remove(ring->ent); #endif } + +int amdgpu_ring_test_helper(struct amdgpu_ring *ring) +{ + int r; + + r = amdgpu_ring_test_ring(ring); + + ring->sched.ready = !r; + + return r; +} diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h index 4caa301..4cdddbc 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h @@ -189,7 +189,6 @@ struct amdgpu_ring { uint64_t gpu_addr; uint64_t ptr_mask; uint32_t buf_mask; - bool ready; u32 idx; u32 me; u32 pipe; @@ -313,4 +312,6 @@ static inline void amdgpu_ring_write_multiple(struct amdgpu_ring *ring, ring->count_dw -= count_dw; } +int amdgpu_ring_test_helper(struct amdgpu_ring *ring); + #endif diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index 3a68028..d76895c 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -2069,7 +2069,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset, unsigned i; int r; - if (direct_submit && !ring->ready) { + if (direct_submit && !ring->sched.ready) { DRM_ERROR("Trying to move memory with ring turned off.\n"); return -EINVAL; } diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c index 32eb43d..561406a 100644 --- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c +++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c @@ -316,8 +316,8 @@ static void cik_sdma_gfx_stop(struct amdgpu_device *adev) WREG32(mmSDMA0_GFX_RB_CNTL + sdma_offsets[i], rb_cntl); WREG32(mmSDMA0_GFX_IB_CNTL + sdma_offsets[i], 0); } - sdma0->ready = false; - sdma1->ready = false; + sdma0->sched.ready = false; + sdma1->sched.ready = false; } /** @@ -494,18 +494,16 @@ static int cik_sdma_gfx_resume(struct amdgpu_device *adev) /* enable DMA 
IBs */ WREG32(mmSDMA0_GFX_IB_CNTL + sdma_offsets[i], ib_cntl); - ring->ready = true; + ring->sched.ready = true; } cik_sdma_enable(adev, true); for (i = 0; i < adev->sdma.num_instances; i++) { ring = &adev->sdma.instance[i].ring; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) return r; - } if (adev->mman.buffer_funcs_ring == ring) amdgpu_ttm_set_buffer_funcs_status(adev, true); diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c index 622dd70..c8f0381 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c @@ -1950,9 +1950,9 @@ static void gfx_v6_0_cp_gfx_enable(struct amdgpu_device *adev, bool enable) CP_ME_CNTL__CE_HALT_MASK)); WREG32(mmSCRATCH_UMSK, 0); for (i = 0; i < adev->gfx.num_gfx_rings; i++) - adev->gfx.gfx_ring[i].ready = false; + adev->gfx.gfx_ring[i].sched.ready = false; for (i = 0; i < adev->gfx.num_compute_rings; i++) - adev->gfx.compute_ring[i].ready = false; + adev->gfx.compute_ring[i].sched.ready = false; } udelay(50); } @@ -2124,12 +2124,9 @@ static int gfx_v6_0_cp_gfx_resume(struct amdgpu_device *adev) /* start the rings */ gfx_v6_0_cp_gfx_start(adev); - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) return r; - } return 0; } @@ -2227,14 +2224,11 @@ static int gfx_v6_0_cp_compute_resume(struct amdgpu_device *adev) WREG32(mmCP_RB2_CNTL, tmp); WREG32(mmCP_RB2_BASE, ring->gpu_addr >> 8); - adev->gfx.compute_ring[0].ready = false; - adev->gfx.compute_ring[1].ready = false; for (i = 0; i < 2; i++) { - r = amdgpu_ring_test_ring(&adev->gfx.compute_ring[i]); + r = amdgpu_ring_test_helper(&adev->gfx.compute_ring[i]); if (r) return r; - adev->gfx.compute_ring[i].ready = true; } return 0; diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c index 9fadb32..b6617fa 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c @@ -2403,7 +2403,7 @@ static void gfx_v7_0_cp_gfx_enable(struct amdgpu_device *adev, bool enable) } else { WREG32(mmCP_ME_CNTL, (CP_ME_CNTL__ME_HALT_MASK | CP_ME_CNTL__PFP_HALT_MASK | CP_ME_CNTL__CE_HALT_MASK)); for (i = 0; i < adev->gfx.num_gfx_rings; i++) - adev->gfx.gfx_ring[i].ready = false; + adev->gfx.gfx_ring[i].sched.ready = false; } udelay(50); } @@ -2613,12 +2613,9 @@ static int gfx_v7_0_cp_gfx_resume(struct amdgpu_device *adev) /* start the ring */ gfx_v7_0_cp_gfx_start(adev); - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) return r; - } return 0; } @@ -2675,7 +2672,7 @@ static void gfx_v7_0_cp_compute_enable(struct amdgpu_device *adev, bool enable) } else { WREG32(mmCP_MEC_CNTL, (CP_MEC_CNTL__MEC_ME1_HALT_MASK | CP_MEC_CNTL__MEC_ME2_HALT_MASK)); for (i = 0; i < adev->gfx.num_compute_rings; i++) - adev->gfx.compute_ring[i].ready = false; + adev->gfx.compute_ring[i].sched.ready = false; } udelay(50); } @@ -3106,10 +3103,7 @@ static int gfx_v7_0_cp_compute_resume(struct amdgpu_device *adev) for (i = 0; i < adev->gfx.num_compute_rings; i++) { ring = &adev->gfx.compute_ring[i]; - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) - ring->ready = false; + amdgpu_ring_test_helper(ring); } return 0; diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c index 4e6d31f..042c642 100644 --- 
a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c @@ -1629,7 +1629,7 @@ static int gfx_v8_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) return 0; /* bail if the compute ring is not ready */ - if (!ring->ready) + if (!ring->sched.ready) return 0; tmp = RREG32(mmGB_EDC_MODE); @@ -4197,7 +4197,7 @@ static void gfx_v8_0_cp_gfx_enable(struct amdgpu_device *adev, bool enable) tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, PFP_HALT, 1); tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, CE_HALT, 1); for (i = 0; i < adev->gfx.num_gfx_rings; i++) - adev->gfx.gfx_ring[i].ready = false; + adev->gfx.gfx_ring[i].sched.ready = false; } WREG32(mmCP_ME_CNTL, tmp); udelay(50); @@ -4379,10 +4379,8 @@ static int gfx_v8_0_cp_gfx_resume(struct amdgpu_device *adev) /* start the ring */ amdgpu_ring_clear_ring(ring); gfx_v8_0_cp_gfx_start(adev); - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) - ring->ready = false; + ring->sched.ready = true; + r = amdgpu_ring_test_helper(ring); return r; } @@ -4396,8 +4394,8 @@ static void gfx_v8_0_cp_compute_enable(struct amdgpu_device *adev, bool enable) } else { WREG32(mmCP_MEC_CNTL, (CP_MEC_CNTL__MEC_ME1_HALT_MASK | CP_MEC_CNTL__MEC_ME2_HALT_MASK)); for (i = 0; i < adev->gfx.num_compute_rings; i++) - adev->gfx.compute_ring[i].ready = false; - adev->gfx.kiq.ring.ready = false; + adev->gfx.compute_ring[i].sched.ready = false; + adev->gfx.kiq.ring.sched.ready = false; } udelay(50); } @@ -4473,11 +4471,9 @@ static int gfx_v8_0_kiq_kcq_enable(struct amdgpu_device *adev) amdgpu_ring_write(kiq_ring, upper_32_bits(wptr_addr)); } - r = amdgpu_ring_test_ring(kiq_ring); - if (r) { + r = amdgpu_ring_test_helper(kiq_ring); + if (r) DRM_ERROR("KCQ enable failed\n"); - kiq_ring->ready = false; - } return r; } @@ -4781,7 +4777,7 @@ static int gfx_v8_0_kiq_resume(struct amdgpu_device *adev) amdgpu_bo_kunmap(ring->mqd_obj); ring->mqd_ptr = NULL; amdgpu_bo_unreserve(ring->mqd_obj); - ring->ready = true; + ring->sched.ready = true; return 0; } @@ -4818,10 +4814,7 @@ static int gfx_v8_0_kcq_resume(struct amdgpu_device *adev) /* Test KCQs */ for (i = 0; i < adev->gfx.num_compute_rings; i++) { ring = &adev->gfx.compute_ring[i]; - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) - ring->ready = false; + r = amdgpu_ring_test_helper(ring); } done: @@ -4897,7 +4890,7 @@ static int gfx_v8_0_kcq_disable(struct amdgpu_device *adev) amdgpu_ring_write(kiq_ring, 0); amdgpu_ring_write(kiq_ring, 0); } - r = amdgpu_ring_test_ring(kiq_ring); + r = amdgpu_ring_test_helper(kiq_ring); if (r) DRM_ERROR("KCQ disable failed\n"); diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c index 0ce1e14..7c35abb 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c @@ -2537,7 +2537,7 @@ static void gfx_v9_0_cp_gfx_enable(struct amdgpu_device *adev, bool enable) tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, CE_HALT, enable ? 
0 : 1); if (!enable) { for (i = 0; i < adev->gfx.num_gfx_rings; i++) - adev->gfx.gfx_ring[i].ready = false; + adev->gfx.gfx_ring[i].sched.ready = false; } WREG32_SOC15(GC, 0, mmCP_ME_CNTL, tmp); udelay(50); @@ -2727,7 +2727,7 @@ static int gfx_v9_0_cp_gfx_resume(struct amdgpu_device *adev) /* start the ring */ gfx_v9_0_cp_gfx_start(adev); - ring->ready = true; + ring->sched.ready = true; return 0; } @@ -2742,8 +2742,8 @@ static void gfx_v9_0_cp_compute_enable(struct amdgpu_device *adev, bool enable) WREG32_SOC15(GC, 0, mmCP_MEC_CNTL, (CP_MEC_CNTL__MEC_ME1_HALT_MASK | CP_MEC_CNTL__MEC_ME2_HALT_MASK)); for (i = 0; i < adev->gfx.num_compute_rings; i++) - adev->gfx.compute_ring[i].ready = false; - adev->gfx.kiq.ring.ready = false; + adev->gfx.compute_ring[i].sched.ready = false; + adev->gfx.kiq.ring.sched.ready = false; } udelay(50); } @@ -2866,11 +2866,9 @@ static int gfx_v9_0_kiq_kcq_enable(struct amdgpu_device *adev) amdgpu_ring_write(kiq_ring, upper_32_bits(wptr_addr)); } - r = amdgpu_ring_test_ring(kiq_ring); - if (r) { + r = amdgpu_ring_test_helper(kiq_ring); + if (r) DRM_ERROR("KCQ enable failed\n"); - kiq_ring->ready = false; - } return r; } @@ -3249,7 +3247,7 @@ static int gfx_v9_0_kiq_resume(struct amdgpu_device *adev) amdgpu_bo_kunmap(ring->mqd_obj); ring->mqd_ptr = NULL; amdgpu_bo_unreserve(ring->mqd_obj); - ring->ready = true; + ring->sched.ready = true; return 0; } @@ -3314,19 +3312,13 @@ static int gfx_v9_0_cp_resume(struct amdgpu_device *adev) return r; ring = &adev->gfx.gfx_ring[0]; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) return r; - } for (i = 0; i < adev->gfx.num_compute_rings; i++) { ring = &adev->gfx.compute_ring[i]; - - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) - ring->ready = false; + amdgpu_ring_test_helper(ring); } gfx_v9_0_enable_gui_idle_interrupt(adev, true); @@ -3391,7 +3383,7 @@ static int gfx_v9_0_kcq_disable(struct amdgpu_device *adev) amdgpu_ring_write(kiq_ring, 0); amdgpu_ring_write(kiq_ring, 0); } - r = amdgpu_ring_test_ring(kiq_ring); + r = amdgpu_ring_test_helper(kiq_ring); if (r) DRM_ERROR("KCQ disable failed\n"); diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c index f35d7a5..56fd3d4 100644 --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c @@ -381,7 +381,7 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device *adev, struct amdgpu_vmhub *hub = &adev->vmhub[i]; u32 tmp = gmc_v9_0_get_invalidate_req(vmid); - if (adev->gfx.kiq.ring.ready && + if (adev->gfx.kiq.ring.sched.ready && (amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev)) && !adev->in_gpu_reset) { r = amdgpu_kiq_reg_write_reg_wait(adev, hub->vm_inv_eng0_req + eng, diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c index bedbd5f..fa2f6be 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c @@ -349,8 +349,8 @@ static void sdma_v2_4_gfx_stop(struct amdgpu_device *adev) ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_ENABLE, 0); WREG32(mmSDMA0_GFX_IB_CNTL + sdma_offsets[i], ib_cntl); } - sdma0->ready = false; - sdma1->ready = false; + sdma0->sched.ready = false; + sdma1->sched.ready = false; } /** @@ -471,17 +471,15 @@ static int sdma_v2_4_gfx_resume(struct amdgpu_device *adev) /* enable DMA IBs */ WREG32(mmSDMA0_GFX_IB_CNTL + sdma_offsets[i], ib_cntl); - ring->ready = true; + ring->sched.ready = true; } sdma_v2_4_enable(adev, 
true); for (i = 0; i < adev->sdma.num_instances; i++) { ring = &adev->sdma.instance[i].ring; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) return r; - } if (adev->mman.buffer_funcs_ring == ring) amdgpu_ttm_set_buffer_funcs_status(adev, true); diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c index 415968d..942fe36 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c @@ -523,8 +523,8 @@ static void sdma_v3_0_gfx_stop(struct amdgpu_device *adev) ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_ENABLE, 0); WREG32(mmSDMA0_GFX_IB_CNTL + sdma_offsets[i], ib_cntl); } - sdma0->ready = false; - sdma1->ready = false; + sdma0->sched.ready = false; + sdma1->sched.ready = false; } /** @@ -739,7 +739,7 @@ static int sdma_v3_0_gfx_resume(struct amdgpu_device *adev) /* enable DMA IBs */ WREG32(mmSDMA0_GFX_IB_CNTL + sdma_offsets[i], ib_cntl); - ring->ready = true; + ring->sched.ready = true; } /* unhalt the MEs */ @@ -749,11 +749,9 @@ static int sdma_v3_0_gfx_resume(struct amdgpu_device *adev) for (i = 0; i < adev->sdma.num_instances; i++) { ring = &adev->sdma.instance[i].ring; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) return r; - } if (adev->mman.buffer_funcs_ring == ring) amdgpu_ttm_set_buffer_funcs_status(adev, true); diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c index 6ad4fda..5206713 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c @@ -634,8 +634,8 @@ static void sdma_v4_0_gfx_stop(struct amdgpu_device *adev) WREG32_SDMA(i, mmSDMA0_GFX_IB_CNTL, ib_cntl); } - sdma0->ready = false; - sdma1->ready = false; + sdma0->sched.ready = false; + sdma1->sched.ready = false; } /** @@ -675,8 +675,8 @@ static void sdma_v4_0_page_stop(struct amdgpu_device *adev) WREG32_SDMA(i, mmSDMA0_PAGE_IB_CNTL, ib_cntl); } - sdma0->ready = false; - sdma1->ready = false; + sdma0->sched.ready = false; + sdma1->sched.ready = false; } /** @@ -863,7 +863,7 @@ static void sdma_v4_0_gfx_resume(struct amdgpu_device *adev, unsigned int i) /* enable DMA IBs */ WREG32_SDMA(i, mmSDMA0_GFX_IB_CNTL, ib_cntl); - ring->ready = true; + ring->sched.ready = true; } /** @@ -956,7 +956,7 @@ static void sdma_v4_0_page_resume(struct amdgpu_device *adev, unsigned int i) /* enable DMA IBs */ WREG32_SDMA(i, mmSDMA0_PAGE_IB_CNTL, ib_cntl); - ring->ready = true; + ring->sched.ready = true; } static void @@ -1144,11 +1144,9 @@ static int sdma_v4_0_start(struct amdgpu_device *adev) for (i = 0; i < adev->sdma.num_instances; i++) { ring = &adev->sdma.instance[i].ring; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) return r; - } if (adev->mman.buffer_funcs_ring == ring) amdgpu_ttm_set_buffer_funcs_status(adev, true); diff --git a/drivers/gpu/drm/amd/amdgpu/si_dma.c b/drivers/gpu/drm/amd/amdgpu/si_dma.c index d9b27d7..05ce1ca 100644 --- a/drivers/gpu/drm/amd/amdgpu/si_dma.c +++ b/drivers/gpu/drm/amd/amdgpu/si_dma.c @@ -122,7 +122,7 @@ static void si_dma_stop(struct amdgpu_device *adev) if (adev->mman.buffer_funcs_ring == ring) amdgpu_ttm_set_buffer_funcs_status(adev, false); - ring->ready = false; + ring->sched.ready = false; } } @@ -175,13 +175,11 @@ static int si_dma_start(struct amdgpu_device *adev) WREG32(DMA_RB_WPTR + sdma_offsets[i], lower_32_bits(ring->wptr) << 2); 
WREG32(DMA_RB_CNTL + sdma_offsets[i], rb_cntl | DMA_RB_ENABLE); - ring->ready = true; + ring->sched.ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) return r; - } if (adev->mman.buffer_funcs_ring == ring) amdgpu_ttm_set_buffer_funcs_status(adev, true); diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c index 1fc17bf..8cabe98 100644 --- a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c @@ -162,12 +162,9 @@ static int uvd_v4_2_hw_init(void *handle) uvd_v4_2_enable_mgcg(adev, true); amdgpu_asic_set_uvd_clocks(adev, 10000, 10000); - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) goto done; - } r = amdgpu_ring_alloc(ring, 10); if (r) { @@ -218,7 +215,7 @@ static int uvd_v4_2_hw_fini(void *handle) if (RREG32(mmUVD_STATUS) != 0) uvd_v4_2_stop(adev); - ring->ready = false; + ring->sched.ready = false; return 0; } diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c index fde6ad5..56b02ee 100644 --- a/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c @@ -158,12 +158,9 @@ static int uvd_v5_0_hw_init(void *handle) uvd_v5_0_set_clockgating_state(adev, AMD_CG_STATE_UNGATE); uvd_v5_0_enable_mgcg(adev, true); - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) goto done; - } r = amdgpu_ring_alloc(ring, 10); if (r) { @@ -215,7 +212,7 @@ static int uvd_v5_0_hw_fini(void *handle) if (RREG32(mmUVD_STATUS) != 0) uvd_v5_0_stop(adev); - ring->ready = false; + ring->sched.ready = false; return 0; } diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c index 7a5b402..3027607 100644 --- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c @@ -476,12 +476,9 @@ static int uvd_v6_0_hw_init(void *handle) uvd_v6_0_set_clockgating_state(adev, AMD_CG_STATE_UNGATE); uvd_v6_0_enable_mgcg(adev, true); - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) goto done; - } r = amdgpu_ring_alloc(ring, 10); if (r) { @@ -513,12 +510,9 @@ static int uvd_v6_0_hw_init(void *handle) if (uvd_v6_0_enc_support(adev)) { for (i = 0; i < adev->uvd.num_enc_rings; ++i) { ring = &adev->uvd.inst->ring_enc[i]; - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) goto done; - } } } @@ -548,7 +542,7 @@ static int uvd_v6_0_hw_fini(void *handle) if (RREG32(mmUVD_STATUS) != 0) uvd_v6_0_stop(adev); - ring->ready = false; + ring->sched.ready = false; return 0; } diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c index 58b39af..76a7fbe 100644 --- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c @@ -540,12 +540,9 @@ static int uvd_v7_0_hw_init(void *handle) ring = &adev->uvd.inst[j].ring; if (!amdgpu_sriov_vf(adev)) { - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) goto done; - } r = amdgpu_ring_alloc(ring, 10); if (r) { @@ -582,12 +579,9 @@ static int uvd_v7_0_hw_init(void *handle) for (i = 0; i < adev->uvd.num_enc_rings; ++i) { ring = &adev->uvd.inst[j].ring_enc[i]; - 
ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) goto done; - } } } done: @@ -619,7 +613,7 @@ static int uvd_v7_0_hw_fini(void *handle) for (i = 0; i < adev->uvd.num_uvd_inst; ++i) { if (adev->uvd.harvest_config & (1 << i)) continue; - adev->uvd.inst[i].ring.ready = false; + adev->uvd.inst[i].ring.sched.ready = false; } return 0; diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c index ea28828..bed78a7 100644 --- a/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c +++ b/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c @@ -463,15 +463,11 @@ static int vce_v2_0_hw_init(void *handle) amdgpu_asic_set_vce_clocks(adev, 10000, 10000); vce_v2_0_enable_mgcg(adev, true, false); - for (i = 0; i < adev->vce.num_rings; i++) - adev->vce.ring[i].ready = false; for (i = 0; i < adev->vce.num_rings; i++) { - r = amdgpu_ring_test_ring(&adev->vce.ring[i]); + r = amdgpu_ring_test_helper(&adev->vce.ring[i]); if (r) return r; - else - adev->vce.ring[i].ready = true; } DRM_INFO("VCE initialized successfully.\n"); diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c index 6dbd397..2b1a5a7 100644 --- a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c +++ b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c @@ -474,15 +474,10 @@ static int vce_v3_0_hw_init(void *handle) amdgpu_asic_set_vce_clocks(adev, 10000, 10000); - for (i = 0; i < adev->vce.num_rings; i++) - adev->vce.ring[i].ready = false; - for (i = 0; i < adev->vce.num_rings; i++) { - r = amdgpu_ring_test_ring(&adev->vce.ring[i]); + r = amdgpu_ring_test_helper(&adev->vce.ring[i]); if (r) return r; - else - adev->vce.ring[i].ready = true; } DRM_INFO("VCE initialized successfully.\n"); diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c index 1c94718..65b71fc 100644 --- a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c +++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c @@ -519,15 +519,10 @@ static int vce_v4_0_hw_init(void *handle) if (r) return r; - for (i = 0; i < adev->vce.num_rings; i++) - adev->vce.ring[i].ready = false; - for (i = 0; i < adev->vce.num_rings; i++) { - r = amdgpu_ring_test_ring(&adev->vce.ring[i]); + r = amdgpu_ring_test_helper(&adev->vce.ring[i]); if (r) return r; - else - adev->vce.ring[i].ready = true; } DRM_INFO("VCE initialized successfully.\n"); @@ -549,7 +544,7 @@ static int vce_v4_0_hw_fini(void *handle) } for (i = 0; i < adev->vce.num_rings; i++) - adev->vce.ring[i].ready = false; + adev->vce.ring[i].sched.ready = false; return 0; } diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c index eae9092..29628f6 100644 --- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c @@ -176,30 +176,22 @@ static int vcn_v1_0_hw_init(void *handle) struct amdgpu_ring *ring = &adev->vcn.ring_dec; int i, r; - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) goto done; - } for (i = 0; i < adev->vcn.num_enc_rings; ++i) { ring = &adev->vcn.ring_enc[i]; - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + ring->sched.ready = true; + r = amdgpu_ring_test_helper(ring); + if (r) goto done; - } } ring = &adev->vcn.ring_jpeg; - ring->ready = true; - r = amdgpu_ring_test_ring(ring); - if (r) { - ring->ready = false; + r = amdgpu_ring_test_helper(ring); + if (r) goto done; - } done: if (!r) @@ -224,7 +216,7 @@ static int 
vcn_v1_0_hw_fini(void *handle) if (RREG32_SOC15(VCN, 0, mmUVD_STATUS)) vcn_v1_0_stop(adev); - ring->ready = false; + ring->sched.ready = false; return 0; }
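To summarize the repeated transformation above as a hedged sketch, condensed
from the hw_init/resume hunks in this patch (e.g. uvd_v6_0_hw_init() and
vcn_v1_0_hw_init()): the new amdgpu_ring_test_helper() replaces the open-coded
pattern of toggling the ready flag around amdgpu_ring_test_ring(), recording
the test result in ring->sched.ready instead.

	/* Old pattern, removed by this patch: set ready, test, roll back on failure. */
	ring->ready = true;
	r = amdgpu_ring_test_ring(ring);
	if (r) {
		ring->ready = false;
		goto done;
	}

	/* New pattern: the helper runs the ring test and sets ring->sched.ready = !r. */
	r = amdgpu_ring_test_helper(ring);
	if (r)
		goto done;

Because a failed test leaves sched.ready false, the scheduler-side checks added
in patch 1/2 will skip that ring's scheduler when picking a run queue instead
of submitting work to dead hardware.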