From patchwork Thu Dec 6 21:26:13 2018
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10716793
From: Andrey Grodzovsky
To: , , , ,
Subject: [PATCH v2 2/2] drm/sched: Rework HW fence processing.
Date: Thu, 6 Dec 2018 16:26:13 -0500
Message-ID: <1544131573-4799-2-git-send-email-andrey.grodzovsky@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1544131573-4799-1-git-send-email-andrey.grodzovsky@amd.com>
References: <1544131573-4799-1-git-send-email-andrey.grodzovsky@amd.com>
Cc: Monk.Liu@amd.com

Expedite job deletion from the ring mirror list to the HW fence signal
callback, instead of doing it in finish_work. Together with waiting for
all such fences to signal in drm_sched_stop, this guarantees that an
already signaled job is not processed twice. Remove the sched finish
fence callback and just submit finish_work directly from the HW fence
callback.

v2: Fix comments.

Suggested-by: Christian Koenig
Signed-off-by: Andrey Grodzovsky
---
(For illustration, a short userspace sketch of the reworked completion
path is appended after the diff.)

 drivers/gpu/drm/scheduler/sched_fence.c |  4 +++-
 drivers/gpu/drm/scheduler/sched_main.c  | 41 +++++++++++++++++----------------
 include/drm/gpu_scheduler.h             | 10 ++++++--
 3 files changed, 32 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index d8d2dff..e62c239 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -151,7 +151,8 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
 EXPORT_SYMBOL(to_drm_sched_fence);
 
 struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
-					       void *owner)
+					       void *owner,
+					       struct drm_sched_job *s_job)
 {
 	struct drm_sched_fence *fence = NULL;
 	unsigned seq;
@@ -163,6 +164,7 @@ struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
 	fence->owner = owner;
 	fence->sched = entity->rq->sched;
 	spin_lock_init(&fence->lock);
+	fence->s_job = s_job;
 
 	seq = atomic_inc_return(&entity->fence_seq);
 	dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index cdf95e2..5359418 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -284,8 +284,6 @@ static void drm_sched_job_finish(struct work_struct *work)
 	cancel_delayed_work_sync(&sched->work_tdr);
 
 	spin_lock_irqsave(&sched->job_list_lock, flags);
-	/* remove job from ring_mirror_list */
-	list_del_init(&s_job->node);
 	/* queue TDR for next job */
 	drm_sched_start_timeout(sched);
 	spin_unlock_irqrestore(&sched->job_list_lock, flags);
@@ -293,22 +291,11 @@ static void drm_sched_job_finish(struct work_struct *work)
 	sched->ops->free_job(s_job);
 }
 
-static void drm_sched_job_finish_cb(struct dma_fence *f,
-				    struct dma_fence_cb *cb)
-{
-	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
-						 finish_cb);
-	schedule_work(&job->finish_work);
-}
-
 static void drm_sched_job_begin(struct drm_sched_job *s_job)
 {
 	struct drm_gpu_scheduler *sched = s_job->sched;
 	unsigned long flags;
 
-	dma_fence_add_callback(&s_job->s_fence->finished, &s_job->finish_cb,
-			       drm_sched_job_finish_cb);
-
 	spin_lock_irqsave(&sched->job_list_lock, flags);
 	list_add_tail(&s_job->node, &sched->ring_mirror_list);
 	drm_sched_start_timeout(sched);
@@ -363,8 +350,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad,
 			dma_fence_put(s_job->s_fence->parent);
 			s_job->s_fence->parent = NULL;
 			atomic_dec(&sched->hw_rq_count);
-		}
-		else {
+		} else {
 			/* TODO Is it get/put neccessey here ? */
 			dma_fence_get(&s_job->s_fence->finished);
 			list_add(&s_job->finish_node, &wait_list);
@@ -423,7 +409,13 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool unpark_only)
 	if (unpark_only)
 		goto unpark;
 
-	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/*
+	 * Locking the list is not required here as the sched thread is parked
+	 * so no new jobs are being pushed in to HW and in drm_sched_stop we
+	 * flushed all the jobs who were still in mirror list but who already
+	 * signaled and removed them self from the list. Also concurrent
+	 * GPU recovers can't run in parallel.
+	 */
 	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
 		struct drm_sched_fence *s_fence = s_job->s_fence;
 		struct dma_fence *fence = s_job->s_fence->parent;
@@ -441,7 +433,6 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool unpark_only)
 	}
 
 	drm_sched_start_timeout(sched);
-	spin_unlock_irqrestore(&sched->job_list_lock, flags);
 
 unpark:
 	kthread_unpark(sched->thread);
@@ -505,7 +496,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
 	job->sched = sched;
 	job->entity = entity;
 	job->s_priority = entity->rq - sched->sched_rq;
-	job->s_fence = drm_sched_fence_create(entity, owner);
+	job->s_fence = drm_sched_fence_create(entity, owner, job);
 	if (!job->s_fence)
 		return -ENOMEM;
 	job->id = atomic64_inc_return(&sched->job_id_count);
@@ -593,15 +584,25 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
 	struct drm_sched_fence *s_fence =
 		container_of(cb, struct drm_sched_fence, cb);
 	struct drm_gpu_scheduler *sched = s_fence->sched;
+	struct drm_sched_job *s_job = s_fence->s_job;
+	unsigned long flags;
+
+	cancel_delayed_work(&sched->work_tdr);
 
-	dma_fence_get(&s_fence->finished);
 	atomic_dec(&sched->hw_rq_count);
 	atomic_dec(&sched->num_jobs);
+
+	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/* remove job from ring_mirror_list */
+	list_del_init(&s_job->node);
+	spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
 	drm_sched_fence_finished(s_fence);
 
 	trace_drm_sched_process_job(s_fence);
-	dma_fence_put(&s_fence->finished);
 	wake_up_interruptible(&sched->wake_up_worker);
+
+	schedule_work(&s_job->finish_work);
 }
 
 /**
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index c94b592..23855c6 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -115,6 +115,8 @@ struct drm_sched_rq {
 	struct drm_sched_entity *current_entity;
 };
 
+struct drm_sched_job;
+
 /**
  * struct drm_sched_fence - fences corresponding to the scheduling of a job.
  */
@@ -160,6 +162,9 @@ struct drm_sched_fence {
 	 * @owner: job owner for debugging
 	 */
 	void *owner;
+
+	/* Back pointer to owning job */
+	struct drm_sched_job *s_job;
 };
 
 struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
@@ -330,8 +335,9 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
 				   enum drm_sched_priority priority);
 bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
 
-struct drm_sched_fence *drm_sched_fence_create(
-	struct drm_sched_entity *s_entity, void *owner);
+struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *s_entity,
+					       void *owner,
+					       struct drm_sched_job *s_job);
 void drm_sched_fence_scheduled(struct drm_sched_fence *fence);
 void drm_sched_fence_finished(struct drm_sched_fence *fence);
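
For readers tracking the control-flow change rather than the hunks, here
is a minimal userspace sketch of the reworked completion path. This is
illustrative C, not kernel code: the names (struct job, ring_mirror_list,
job_list_lock, queue_finish_work, hw_fence_signaled) are hypothetical
stand-ins for the scheduler primitives the patch touches, and the pthread
mutex stands in for the job_list_lock spinlock.

/*
 * Sketch of the new completion path: the fence-signaled callback itself
 * unlinks the job from the mirror list under the list lock and then
 * queues the deferred free directly, with no intermediate fence callback.
 * Build with: cc -std=c99 -o sketch sketch.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct job {
	int id;
	struct job *next;	/* simplified singly linked mirror list */
};

static pthread_mutex_t job_list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct job *ring_mirror_list;	/* jobs currently on the "HW" */

/* Stand-in for schedule_work(&s_job->finish_work). */
static void queue_finish_work(struct job *j)
{
	/* The real code defers freeing to a workqueue; free inline here. */
	printf("finish_work: freeing job %d\n", j->id);
	free(j);
}

/*
 * Stand-in for drm_sched_process_job(): the job leaves the mirror list
 * *here*, under the lock, the moment its HW fence signals, so a later
 * drm_sched_stop() can never observe an already signaled job and
 * process it a second time.
 */
static void hw_fence_signaled(struct job *j)
{
	struct job **p;

	pthread_mutex_lock(&job_list_lock);
	for (p = &ring_mirror_list; *p; p = &(*p)->next) {
		if (*p == j) {
			*p = j->next;	/* list_del_init(&s_job->node) */
			break;
		}
	}
	pthread_mutex_unlock(&job_list_lock);

	queue_finish_work(j);	/* previously reached via a 2nd callback */
}

int main(void)
{
	/* Submit three jobs (the real code uses list_add_tail()). */
	for (int i = 0; i < 3; i++) {
		struct job *j = calloc(1, sizeof(*j));

		j->id = i;
		pthread_mutex_lock(&job_list_lock);
		j->next = ring_mirror_list;
		ring_mirror_list = j;
		pthread_mutex_unlock(&job_list_lock);
	}

	/* Single-threaded here, so the unlocked head read is safe. */
	while (ring_mirror_list)
		hw_fence_signaled(ring_mirror_list);

	return 0;
}

The design point the sketch tries to make visible: once removal happens
together with fence signaling, there is no window in which a signaled job
is still on the mirror list, which is what lets drm_sched_start() walk
the list without taking job_list_lock after drm_sched_stop() has flushed
the outstanding fences.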