From patchwork Thu Dec 20 19:23:35 2018
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10739443
From: Andrey Grodzovsky
Subject: [PATCH v5 2/2] drm/sched: Rework HW fence processing.
Date: Thu, 20 Dec 2018 14:23:35 -0500
Message-ID: <1545333815-29870-2-git-send-email-andrey.grodzovsky@amd.com>
In-Reply-To: <1545333815-29870-1-git-send-email-andrey.grodzovsky@amd.com>
References: <1545333815-29870-1-git-send-email-andrey.grodzovsky@amd.com>
X-Mailer: git-send-email 2.7.4
X-Original-To: dri-devel@lists.freedesktop.org
List-Id: Direct Rendering Infrastructure - Development
Cc: Monk.Liu@amd.com
Expedite job deletion from the ring mirror list to the HW fence signal
callback instead of from finish_work. Together with waiting for all such
fences to signal in drm_sched_stop, this guarantees that an already
signaled job will not be processed twice. Remove the sched finish fence
callback and just submit finish_work directly from the HW fence callback.

v2: Fix comments.
v3: Attach hw fence cb to sched_job
v5: Rebase

Suggested-by: Christian Koenig
Signed-off-by: Andrey Grodzovsky
---
 drivers/gpu/drm/scheduler/sched_main.c | 57 +++++++++++++++++-----------------
 include/drm/gpu_scheduler.h            |  6 ++--
 2 files changed, 30 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index b5c5bee..5f5b187 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -284,8 +284,6 @@ static void drm_sched_job_finish(struct work_struct *work)
 	cancel_delayed_work_sync(&sched->work_tdr);
 
 	spin_lock_irqsave(&sched->job_list_lock, flags);
-	/* remove job from ring_mirror_list */
-	list_del_init(&s_job->node);
 	/* queue TDR for next job */
 	drm_sched_start_timeout(sched);
 	spin_unlock_irqrestore(&sched->job_list_lock, flags);
@@ -293,22 +291,11 @@ static void drm_sched_job_finish(struct work_struct *work)
 	sched->ops->free_job(s_job);
 }
 
-static void drm_sched_job_finish_cb(struct dma_fence *f,
-				    struct dma_fence_cb *cb)
-{
-	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
-						 finish_cb);
-	schedule_work(&job->finish_work);
-}
-
 static void drm_sched_job_begin(struct drm_sched_job *s_job)
 {
 	struct drm_gpu_scheduler *sched = s_job->sched;
 	unsigned long flags;
 
-	dma_fence_add_callback(&s_job->s_fence->finished, &s_job->finish_cb,
-			       drm_sched_job_finish_cb);
-
 	spin_lock_irqsave(&sched->job_list_lock, flags);
 	list_add_tail(&s_job->node, &sched->ring_mirror_list);
 	drm_sched_start_timeout(sched);
@@ -396,7 +383,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 	list_for_each_entry_reverse(s_job, &sched->ring_mirror_list, node) {
 		if (s_job->s_fence->parent &&
 		    dma_fence_remove_callback(s_job->s_fence->parent,
-					      &s_job->s_fence->cb)) {
+					      &s_job->cb)) {
 			dma_fence_put(s_job->s_fence->parent);
 			s_job->s_fence->parent = NULL;
 			atomic_dec(&sched->hw_rq_count);
@@ -420,7 +407,7 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 
 		if (s_job->s_fence->parent) {
 			r = dma_fence_add_callback(s_job->s_fence->parent,
-						   &s_job->s_fence->cb,
+						   &s_job->cb,
 						   drm_sched_process_job);
 			if (r)
 				DRM_ERROR("fence restore callback failed (%d)\n",
@@ -449,31 +436,34 @@ EXPORT_SYMBOL(drm_sched_stop);
 void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 {
 	struct drm_sched_job *s_job, *tmp;
-	unsigned long flags;
 	int r;
 
 	if (!full_recovery)
 		goto unpark;
 
-	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/*
+	 * Locking the list is not required here as the sched thread is parked
+	 * so no new jobs are being pushed in to HW and in drm_sched_stop we
+	 * flushed all the jobs who were still in mirror list but who already
+	 * signaled and removed them self from the list. Also concurrent
+	 * GPU recovers can't run in parallel.
+	 */
 	list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
-		struct drm_sched_fence *s_fence = s_job->s_fence;
 		struct dma_fence *fence = s_job->s_fence->parent;
 
 		if (fence) {
-			r = dma_fence_add_callback(fence, &s_fence->cb,
+			r = dma_fence_add_callback(fence, &s_job->cb,
						   drm_sched_process_job);
 			if (r == -ENOENT)
-				drm_sched_process_job(fence, &s_fence->cb);
+				drm_sched_process_job(fence, &s_job->cb);
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n",
 					  r);
 		} else
-			drm_sched_process_job(NULL, &s_fence->cb);
+			drm_sched_process_job(NULL, &s_job->cb);
 	}
 
 	drm_sched_start_timeout(sched);
-	spin_unlock_irqrestore(&sched->job_list_lock, flags);
 
 unpark:
 	kthread_unpark(sched->thread);
@@ -622,18 +612,27 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
  */
 static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
 {
-	struct drm_sched_fence *s_fence =
-		container_of(cb, struct drm_sched_fence, cb);
+	struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, cb);
+	struct drm_sched_fence *s_fence = s_job->s_fence;
 	struct drm_gpu_scheduler *sched = s_fence->sched;
+	unsigned long flags;
+
+	cancel_delayed_work(&sched->work_tdr);
 
-	dma_fence_get(&s_fence->finished);
 	atomic_dec(&sched->hw_rq_count);
 	atomic_dec(&sched->num_jobs);
+
+	spin_lock_irqsave(&sched->job_list_lock, flags);
+	/* remove job from ring_mirror_list */
+	list_del_init(&s_job->node);
+	spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
 	drm_sched_fence_finished(s_fence);
 
 	trace_drm_sched_process_job(s_fence);
-	dma_fence_put(&s_fence->finished);
 	wake_up_interruptible(&sched->wake_up_worker);
+
+	schedule_work(&s_job->finish_work);
 }
 
 /**
@@ -696,16 +695,16 @@ static int drm_sched_main(void *param)
 
 		if (fence) {
 			s_fence->parent = dma_fence_get(fence);
-			r = dma_fence_add_callback(fence, &s_fence->cb,
+			r = dma_fence_add_callback(fence, &sched_job->cb,
						   drm_sched_process_job);
 			if (r == -ENOENT)
-				drm_sched_process_job(fence, &s_fence->cb);
+				drm_sched_process_job(fence, &sched_job->cb);
 			else if (r)
 				DRM_ERROR("fence add callback failed (%d)\n",
 					  r);
 			dma_fence_put(fence);
 		} else
-			drm_sched_process_job(NULL, &s_fence->cb);
+			drm_sched_process_job(NULL, &sched_job->cb);
 
 		wake_up(&sched->job_scheduled);
 	}
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 5ab2d97..6621f74 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -138,10 +138,6 @@ struct drm_sched_fence {
 	struct dma_fence		finished;
 
 	/**
-	 * @cb: the callback for the parent fence below.
-	 */
-	struct dma_fence_cb		cb;
-
-	/**
 	 * @parent: the fence returned by &drm_sched_backend_ops.run_job
 	 * when scheduling the job on hardware. We signal the
 	 * &drm_sched_fence.finished fence once parent is signalled.
@@ -182,6 +178,7 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
 *  be scheduled further.
 * @s_priority: the priority of the job.
 * @entity: the entity to which this job belongs.
+ * @cb: the callback for the parent fence in s_fence.
 *
 * A job is created by the driver using drm_sched_job_init(), and
 * should call drm_sched_entity_push_job() once it wants the scheduler
@@ -199,6 +196,7 @@ struct drm_sched_job {
 	atomic_t			karma;
 	enum drm_sched_priority		s_priority;
 	struct drm_sched_entity		*entity;
+	struct dma_fence_cb		cb;
 };
 
 static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,