From patchwork Mon Apr 15 15:11:23 2019
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 10901015
From: Andrey Grodzovsky
Subject: [PATCH v2 1/5] drm/scheduler: rework job destruction
Date: Mon, 15 Apr 2019 11:11:23 -0400
Message-ID: <1555341087-23339-1-git-send-email-andrey.grodzovsky@amd.com>
Cc: Nicholas.Kazlauskas@amd.com, Christian König

From: Christian König

We now destroy finished jobs from the worker thread to make sure that
we never destroy a job currently in timeout processing. By this we
avoid holding the lock around the ring mirror list in drm_sched_stop,
which should solve a deadlock reported by a user.

v2: Remove unused variable.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109692
Signed-off-by: Christian König
Signed-off-by: Andrey Grodzovsky
---
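A note on the driver-facing convention this introduces: drm_sched_stop() now
takes the guilty job and deliberately keeps it alive, so a driver's
timedout_job handler has to free that job itself once it is no longer on the
mirror list. Roughly, a handler ends up looking like the sketch below
(my_gpu, to_my_gpu() and my_reset_hw() are made-up placeholders; the real
call sites are the amdgpu, etnaviv and v3d hunks further down):

static void my_driver_timedout_job(struct drm_sched_job *sched_job)
{
        struct my_gpu *gpu = to_my_gpu(sched_job->sched);

        /* Park the scheduler thread and reap all already finished jobs. */
        drm_sched_stop(&gpu->sched, sched_job);

        if (sched_job)
                drm_sched_increase_karma(sched_job);

        /*
         * drm_sched_stop() does not free the guilty job. If it already
         * signaled it was unlinked from the mirror list, so nobody else
         * will release it - do it here.
         */
        if (list_empty(&sched_job->node))
                sched_job->sched->ops->free_job(sched_job);

        /* Driver specific hardware recovery. */
        my_reset_hw(gpu);

        /* Re-push unfinished jobs, unpark the thread and re-arm the timeout. */
        drm_sched_resubmit_jobs(&gpu->sched);
        drm_sched_start(&gpu->sched, true);
}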
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |  17 ++--
 drivers/gpu/drm/etnaviv/etnaviv_dump.c     |   4 -
 drivers/gpu/drm/etnaviv/etnaviv_sched.c    |   9 +-
 drivers/gpu/drm/scheduler/sched_main.c     | 138 +++++++++++++++++------------
 drivers/gpu/drm/v3d/v3d_sched.c            |   9 +-
 include/drm/gpu_scheduler.h                |   6 +-
 6 files changed, 108 insertions(+), 75 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index d46675b..8c41a9f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3341,15 +3341,23 @@ static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
                if (!ring || !ring->sched.thread)
                        continue;

-               drm_sched_stop(&ring->sched);
+               drm_sched_stop(&ring->sched, &job->base);

                /* after all hw jobs are reset, hw fence is meaningless, so force_completion */
                amdgpu_fence_driver_force_completion(ring);
        }

-       if(job)
+       if(job) {
                drm_sched_increase_karma(&job->base);

+               /*
+                * Guilty job did complete and hence needs to be manually removed
+                * See drm_sched_stop doc.
+                */
+               if (list_empty(&job->base.node))
+                       job->base.sched->ops->free_job(&job->base);
+       }
+
        if (!amdgpu_sriov_vf(adev)) {
@@ -3489,8 +3497,7 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,
        return r;
 }

-static void amdgpu_device_post_asic_reset(struct amdgpu_device *adev,
-                                         struct amdgpu_job *job)
+static void amdgpu_device_post_asic_reset(struct amdgpu_device *adev)
 {
        int i;
@@ -3630,7 +3637,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,

        /* Post ASIC reset for all devs .*/
        list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
-               amdgpu_device_post_asic_reset(tmp_adev, tmp_adev == adev ? job : NULL);
+               amdgpu_device_post_asic_reset(tmp_adev);

                if (r) {
                        /* bad news, how to tell it to userspace ? */
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_dump.c b/drivers/gpu/drm/etnaviv/etnaviv_dump.c
index 3fbb485..8434715 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_dump.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_dump.c
@@ -135,13 +135,11 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
                    mmu_size + gpu->buffer.size;

        /* Add in the active command buffers */
-       spin_lock_irqsave(&gpu->sched.job_list_lock, flags);
        list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) {
                submit = to_etnaviv_submit(s_job);
                file_size += submit->cmdbuf.size;
                n_obj++;
        }
-       spin_unlock_irqrestore(&gpu->sched.job_list_lock, flags);

        /* Add in the active buffer objects */
        list_for_each_entry(vram, &gpu->mmu->mappings, mmu_node) {
@@ -183,14 +181,12 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
                              gpu->buffer.size,
                              etnaviv_cmdbuf_get_va(&gpu->buffer));

-       spin_lock_irqsave(&gpu->sched.job_list_lock, flags);
        list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) {
                submit = to_etnaviv_submit(s_job);
                etnaviv_core_dump_mem(&iter, ETDUMP_BUF_CMD,
                                      submit->cmdbuf.vaddr, submit->cmdbuf.size,
                                      etnaviv_cmdbuf_get_va(&submit->cmdbuf));
        }
-       spin_unlock_irqrestore(&gpu->sched.job_list_lock, flags);

        /* Reserve space for the bomap */
        if (n_bomap_pages) {
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index 67ae266..8795c19 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -109,11 +109,18 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
        }

        /* block scheduler */
-       drm_sched_stop(&gpu->sched);
+       drm_sched_stop(&gpu->sched, sched_job);

        if(sched_job)
                drm_sched_increase_karma(sched_job);

+       /*
+        * Guilty job did complete and hence needs to be manually removed
+        * See drm_sched_stop doc.
+        */
+       if (list_empty(&sched_job->node))
+               sched_job->sched->ops->free_job(sched_job);
+
        /* get the GPU back into the init state */
        etnaviv_core_dump(gpu);
        etnaviv_gpu_recover_hang(gpu);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 19fc601..05a4540 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -265,32 +265,6 @@ void drm_sched_resume_timeout(struct drm_gpu_scheduler *sched,
 }
 EXPORT_SYMBOL(drm_sched_resume_timeout);

-/* job_finish is called after hw fence signaled
- */
-static void drm_sched_job_finish(struct work_struct *work)
-{
-       struct drm_sched_job *s_job = container_of(work, struct drm_sched_job,
-                                                  finish_work);
-       struct drm_gpu_scheduler *sched = s_job->sched;
-       unsigned long flags;
-
-       /*
-        * Canceling the timeout without removing our job from the ring mirror
-        * list is safe, as we will only end up in this worker if our jobs
-        * finished fence has been signaled. So even if some another worker
-        * manages to find this job as the next job in the list, the fence
-        * signaled check below will prevent the timeout to be restarted.
-        */
-       cancel_delayed_work_sync(&sched->work_tdr);
-
-       spin_lock_irqsave(&sched->job_list_lock, flags);
-       /* queue TDR for next job */
-       drm_sched_start_timeout(sched);
-       spin_unlock_irqrestore(&sched->job_list_lock, flags);
-
-       sched->ops->free_job(s_job);
-}
-
 static void drm_sched_job_begin(struct drm_sched_job *s_job)
 {
        struct drm_gpu_scheduler *sched = s_job->sched;
@@ -371,23 +345,26 @@ EXPORT_SYMBOL(drm_sched_increase_karma);
  * @sched: scheduler instance
  * @bad: bad scheduler job
  *
+ * Stop the scheduler and also removes and frees all completed jobs.
+ * Note: bad job will not be freed as it might be used later and so it's
+ * callers responsibility to release it manually if it's not part of the
+ * mirror list any more.
+ *
  */
-void drm_sched_stop(struct drm_gpu_scheduler *sched)
+void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 {
-       struct drm_sched_job *s_job;
+       struct drm_sched_job *s_job, *tmp;
        unsigned long flags;
-       struct dma_fence *last_fence =  NULL;

        kthread_park(sched->thread);

        /*
-        * Verify all the signaled jobs in mirror list are removed from the ring
-        * by waiting for the latest job to enter the list. This should insure that
-        * also all the previous jobs that were in flight also already singaled
-        * and removed from the list.
+        * Iterate the job list from later to earlier one and either deactive
+        * their HW callbacks or remove them from mirror list if they already
+        * signaled.
+        * This iteration is thread safe as sched thread is stopped.
         */
-       spin_lock_irqsave(&sched->job_list_lock, flags);
-       list_for_each_entry_reverse(s_job, &sched->ring_mirror_list, node) {
+       list_for_each_entry_safe_reverse(s_job, tmp, &sched->ring_mirror_list, node) {
                if (s_job->s_fence->parent &&
                    dma_fence_remove_callback(s_job->s_fence->parent,
                                              &s_job->cb)) {
@@ -395,16 +372,30 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched)
                        s_job->s_fence->parent = NULL;
                        atomic_dec(&sched->hw_rq_count);
                } else {
-                       last_fence = dma_fence_get(&s_job->s_fence->finished);
-                       break;
+                       /*
+                        * remove job from ring_mirror_list.
+                        * Locking here is for concurrent resume timeout
+                        */
+                       spin_lock_irqsave(&sched->job_list_lock, flags);
+                       list_del_init(&s_job->node);
+                       spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
+                       /*
+                        * Wait for job's HW fence callback to finish using s_job
+                        * before releasing it.
+                        *
+                        * Job is still alive so fence refcount at least 1
+                        */
+                       dma_fence_wait(&s_job->s_fence->finished, false);
+
+                       /*
+                        * We must keep bad job alive for later use during
+                        * recovery by some of the drivers
+                        */
+                       if (bad != s_job)
+                               sched->ops->free_job(s_job);
                }
        }
-       spin_unlock_irqrestore(&sched->job_list_lock, flags);
-
-       if (last_fence) {
-               dma_fence_wait(last_fence, false);
-               dma_fence_put(last_fence);
-       }
 }
 EXPORT_SYMBOL(drm_sched_stop);
@@ -418,6 +409,7 @@ EXPORT_SYMBOL(drm_sched_stop);
 void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
 {
        struct drm_sched_job *s_job, *tmp;
+       unsigned long flags;
        int r;

        if (!full_recovery)
@@ -425,9 +417,7 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)

        /*
         * Locking the list is not required here as the sched thread is parked
-        * so no new jobs are being pushed in to HW and in drm_sched_stop we
-        * flushed all the jobs who were still in mirror list but who already
-        * signaled and removed them self from the list. Also concurrent
+        * so no new jobs are being inserted or removed. Also concurrent
         * GPU recovers can't run in parallel.
         */
        list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
@@ -445,7 +435,9 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
                        drm_sched_process_job(NULL, &s_job->cb);
        }

+       spin_lock_irqsave(&sched->job_list_lock, flags);
        drm_sched_start_timeout(sched);
+       spin_unlock_irqrestore(&sched->job_list_lock, flags);

 unpark:
        kthread_unpark(sched->thread);
@@ -464,7 +456,6 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
        uint64_t guilty_context;
        bool found_guilty = false;

-       /*TODO DO we need spinlock here ? */
        list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, node) {
                struct drm_sched_fence *s_fence = s_job->s_fence;
@@ -514,7 +505,6 @@ int drm_sched_job_init(struct drm_sched_job *job,
                return -ENOMEM;

        job->id = atomic64_inc_return(&sched->job_id_count);

-       INIT_WORK(&job->finish_work, drm_sched_job_finish);
        INIT_LIST_HEAD(&job->node);

        return 0;
@@ -597,24 +587,53 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
        struct drm_sched_job *s_job = container_of(cb, struct drm_sched_job, cb);
        struct drm_sched_fence *s_fence = s_job->s_fence;
        struct drm_gpu_scheduler *sched = s_fence->sched;
-       unsigned long flags;
-
-       cancel_delayed_work(&sched->work_tdr);

        atomic_dec(&sched->hw_rq_count);
        atomic_dec(&sched->num_jobs);

-       spin_lock_irqsave(&sched->job_list_lock, flags);
-       /* remove job from ring_mirror_list */
-       list_del_init(&s_job->node);
-       spin_unlock_irqrestore(&sched->job_list_lock, flags);
+       trace_drm_sched_process_job(s_fence);

        drm_sched_fence_finished(s_fence);
-
-       trace_drm_sched_process_job(s_fence);
        wake_up_interruptible(&sched->wake_up_worker);
+}
+
+/**
+ * drm_sched_cleanup_jobs - destroy finished jobs
+ *
+ * @sched: scheduler instance
+ *
+ * Remove all finished jobs from the mirror list and destroy them.
+ */
+static void drm_sched_cleanup_jobs(struct drm_gpu_scheduler *sched)
+{
+       unsigned long flags;
+
+       /* Don't destroy jobs while the timeout worker is running */
+       if (!cancel_delayed_work(&sched->work_tdr))
+               return;
+
+
+       while (!list_empty(&sched->ring_mirror_list)) {
+               struct drm_sched_job *job;
+
+               job = list_first_entry(&sched->ring_mirror_list,
+                                      struct drm_sched_job, node);
+               if (!dma_fence_is_signaled(&job->s_fence->finished))
+                       break;
+
+               spin_lock_irqsave(&sched->job_list_lock, flags);
+               /* remove job from ring_mirror_list */
+               list_del_init(&job->node);
+               spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
+               sched->ops->free_job(job);
+       }
+
+       /* queue timeout for next job */
+       spin_lock_irqsave(&sched->job_list_lock, flags);
+       drm_sched_start_timeout(sched);
+       spin_unlock_irqrestore(&sched->job_list_lock, flags);
-
-       schedule_work(&s_job->finish_work);
 }

 /**
@@ -656,9 +675,10 @@ static int drm_sched_main(void *param)
                struct dma_fence *fence;

                wait_event_interruptible(sched->wake_up_worker,
+                                        (drm_sched_cleanup_jobs(sched),
                                         (!drm_sched_blocked(sched) &&
                                          (entity = drm_sched_select_entity(sched))) ||
-                                        kthread_should_stop());
+                                        kthread_should_stop()));

                if (!entity)
                        continue;
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
index ce7c737b..8efb091 100644
--- a/drivers/gpu/drm/v3d/v3d_sched.c
+++ b/drivers/gpu/drm/v3d/v3d_sched.c
@@ -232,11 +232,18 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)

        /* block scheduler */
        for (q = 0; q < V3D_MAX_QUEUES; q++)
-               drm_sched_stop(&v3d->queue[q].sched);
+               drm_sched_stop(&v3d->queue[q].sched, sched_job);

        if(sched_job)
                drm_sched_increase_karma(sched_job);

+       /*
+        * Guilty job did complete and hence needs to be manually removed
+        * See drm_sched_stop doc.
+        */
+       if (list_empty(&sched_job->node))
+               sched_job->sched->ops->free_job(sched_job);
+
        /* get the GPU back into the init state */
        v3d_reset(v3d);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 0daca4d..9ee0f27 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -167,9 +167,6 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
  * @sched: the scheduler instance on which this job is scheduled.
  * @s_fence: contains the fences for the scheduling of job.
  * @finish_cb: the callback for the finished fence.
- * @finish_work: schedules the function @drm_sched_job_finish once the job has
- *               finished to remove the job from the
- *               @drm_gpu_scheduler.ring_mirror_list.
  * @node: used to append this struct to the @drm_gpu_scheduler.ring_mirror_list.
  * @id: a unique id assigned to each job scheduled on the scheduler.
  * @karma: increment on every hang caused by this job. If this exceeds the hang
@@ -188,7 +185,6 @@ struct drm_sched_job {
        struct drm_gpu_scheduler        *sched;
        struct drm_sched_fence          *s_fence;
        struct dma_fence_cb             finish_cb;
-       struct work_struct              finish_work;
        struct list_head                node;
        uint64_t                        id;
        atomic_t                        karma;
@@ -296,7 +292,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
                       void *owner);
 void drm_sched_job_cleanup(struct drm_sched_job *job);
 void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
-void drm_sched_stop(struct drm_gpu_scheduler *sched);
+void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad);
 void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery);
 void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched);
 void drm_sched_increase_karma(struct drm_sched_job *bad);
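
One more remark on the drm_sched_main() hunk above: drm_sched_cleanup_jobs() is
folded into the wait_event_interruptible() condition with the comma operator, so
it runs every time the scheduler thread re-evaluates its wake-up condition, which
is what allows the finish_work worker to go away. Written out long-hand, the new
condition behaves roughly like the helper below (my_sched_wake_condition() is
only an illustration, not part of the patch):

static bool my_sched_wake_condition(struct drm_gpu_scheduler *sched,
                                    struct drm_sched_entity **entity)
{
        /* Reap finished jobs first, from the same thread that handles timeouts. */
        drm_sched_cleanup_jobs(sched);

        /* Then the original condition: runnable work available or thread stopping. */
        return (!drm_sched_blocked(sched) &&
                (*entity = drm_sched_select_entity(sched))) ||
               kthread_should_stop();
}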